
Posts from the ‘SAN’ Category

18 Jan

EMC XtremIO and VMware EFI

After a couple of weeks of troubleshooting by EMC/XtremIO and VMware engineers, the problem was traced to EFI boot handing a 7MB block to the XtremIO array, which filled the queue; the queue would never clear because the array was waiting for more data to complete the exchange (i.e., a deadlock). This seems to happen only with EFI-firmware VMs (tested with Windows 2012 and Windows 2012 R2), and the issue is on the XtremIO end.

The good news is that the problem can be mitigated by changing the Disk.DiskMaxIOSize setting on each ESXi host from the default of 32MB (32768 KB) to 4MB (4096 KB). You can find it in vCenter > Host > Configuration > Advanced Settings (the bottom one) > Disk > Disk.DiskMaxIOSize. The XtremIO team is working on a permanent fix in the meantime, and the workaround can be applied hot with no impact to active operations (aside from a potentially minor increase in host CPU load as ESXi breaks I/Os larger than 4MB into 4MB chunks).
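If you would rather not click through each host, the same change can be made from a shell session on the host with esxcli. A minimal sketch (the setting and values are exactly those described above):

  # Check the current value of Disk.DiskMaxIOSize (reported in KB; 32768 KB = 32MB)
  esxcli system settings advanced list -o /Disk/DiskMaxIOSize

  # Drop it to 4MB (4096 KB); the change takes effect immediately, no reboot required
  esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096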

26 Dec

EMC XtremIO Gen2 and VMware

Synopsis: My organization recently received and deployed one X-Brick of EMC’s initial GA release of the XtremIO Gen2 400GB storage array (10TB raw flash; 7.47TB usable physical). Since this is a brand-new product, virtually no community support or feedback exists yet, so this is a shout-out for other organizations’ and users’ experience in the field.

Breakdown: We are a fully virtualized environment running VMware ESXi 5.5 on modern Dell hardware and Cisco Nexus switches (converged to the host; fiber to the storage), originally hosted on 3PAR storage. After the initial deployment of the XtremIO array in early December (2013), we began migrating VMs, starting with lower-priority, yet still production, guests.

Within 24 hours, we encountered our first issue when one VM (a Windows 2012 guest) became unresponsive and, upon a soft reboot, failed to boot; it hung at the Windows logo. Without going into too much detail, we hit an All Paths Down situation to the XtremIO array, and even after rebooting the host we still could not boot that initial guest. Only when we migrated it (Storage vMotion) back to our 3PAR array could we successfully boot the VM.
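For anyone troubleshooting something similar, the path state is easy to confirm from the host’s shell. A quick sketch (the device identifier is a placeholder):

  # List every storage path and its state (active, standby, dead)
  esxcli storage core path list

  # Show NMP multipathing details for a single device
  esxcli storage nmp device list -d naa.<device-id>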


18 Jul

HP 3PAR: AO Update…Sorta

I wish there were an awesome update that I’ve just been too preoccupied to post, but it’s more of a “well…” After talking with HP/3PAR folks a couple of months back and re-architecting things again, our setup is running pretty well in a tiered config, but the caveats in the prior post remain. Furthermore, there are a few stipulations that I think HP/3PAR should share with customers, or that customers should consider themselves, before buying into the tiered concept.

  1. Critical mass of each media type: Think of it like failover capacity (in my case, vSphere clusters). If I have only three hosts in my cluster, I have to leave at least 33% of the capacity free on each to handle the loss of one host; with five hosts I only have to leave 20% free, and with ten hosts, 10%. Tiered media works the same way, though it feels like a lot of waste unless you have a ton of stale/archive data. Our config included only 24 near-line SATA disks (and the tiered upgrade to our existing array had only 16). While that adds 45TB+ of capacity, realistically those disks can only handle between 1,000 and 2,000 IOPS. Tiering (AO) takes these things into account, but it seems a little underqualified when it comes to virtual environments. Random seeks are the enemy of SATA, and when AO throws tiny chunks of hundreds of VMs onto only two dozen SATA disks (then subtract RAID/parity), things can get bad fast. I’ve found this to be especially true of OS files: Windows leaves quite a few of them alone after boot, so AO moves them down a tier. Now run some maintenance and reboot those boxes: ouch!

19 Apr

HP 3PAR: The AO Caveat

Earlier this year, we posted about a new SAN bidding process and the eventual winner, the HP 3PAR V400. Now that we’ve been live on it for about six weeks, it’s time for a small update on a particular feature that might weigh in on your own decision, if you’re in the market.

Our new V400 was our first foray into the tiered storage market, and we liked what we heard about gaining SSD speed on hot blocks without paying SSD prices for average data. EMC claimed advanced metrics, granular policies, and the ability to optimize as frequently as every 10 minutes. This sounded REALLY good. 3PAR also cited some of those capabilities, sans the frequency, and we assumed the two were about even, granting that the results might be slightly delayed on the V400 (vs. the VMAXe). What we’ve discovered isn’t so symmetric.


21 Feb

SAN Winner: HP 3PAR V400

At the end of the day, it wasn’t the minor technological differences that made the decision for us. Sure, we believed that EMC’s VMAXe was the truly enterprise-class array. The ace, though, was product positioning.

We have two SANs. One is a CLARiiON CX3-40 from EMC, which is legacy and, as the market sometimes calls it, monolithic. It needs to go. The other is a 3PAR T400, which is as flexible as the day we bought it and has plenty of life left in it thanks to its architecture (even though we acquired it in 2008). Thus, when the cards were on the table, only HP had the ability to offer a “free” upgrade to our T400 as well as the new V400.

The upgrade turns our T-series into a multi-tier array with SSD, FC, and NL, and the V-series replaces our aging CLARiiON. EMC tried to compete, but all they could offer was a “deal” less appealing than the original single-array proposition.

Honestly, I felt bad for them, because there was nothing they could do unless they literally took a deep loss (no funny money about it). HP’s solution was the equivalent of two new, good, flexible, low-maintenance SANs. EMC has only just learned how to be flexible and match 3PAR, so their older arrays (one of which was part of their attempted counter-offer) simply didn’t measure up.

It’s going to be another hard sell in 2-3 years when we open the next RFP, because HP/3PAR will now have a monopoly on the floor. Who knows, though? Maybe HP will stumble with their new golden egg, or maybe EMC will figure out how to undercut HP with price while not sacrificing features. For now, the trophy goes to HP. Congrats.

——————————————————

By Chris Gurley, MCSE, CCNA
Last updated: February 21, 2012

6 Jan

SANs: EMC VMAXe and HP 3PAR V400

If you’re in the market for a new enterprise-class storage array, both EMC and HP/3PAR have good options for you. Toward the end of 2011, we began evaluating solutions from these two vendors, with whom we have history and solid relationships. On the EMC side, we’ve grown up through a CX300 in 2006 and into two CX3-40s in 2008. At the end of 2008, we deployed a 3PAR T400 at our production site and brought that CX3-40 back to consolidate it with the one at our HQ. Three years on, our needs call for new tech.

As is the nature of technology, storage has made leaps and bounds since 2008. What once was unique and elevating to 3PAR (wide striping and simplified provisioning from one big pool of disks) has become commonplace in arrays of all classes. We used to liken it to replacing the carpet in a furnished room. It’s a real chore when you have to painstakingly push all the chairs and tables into a corner (or out of the room altogether!) to improve or replace the carpet. With disk abstraction and data-shifting features, though, changes and optimizations can be made without the headaches.


31 Aug

VMworld: Enhancements in vStorage VMFS 5 (VSP2376)

Speaker: Mostafa Khalil (VMware)

Agenda:
– VMFS3 Limitations
– VMFS5 Enhancements
– LVM Changes
– VMFS5 Changes

# Excellent presentation and deep dive on VMFS5 and its benefits in vSphere 5
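If you want to check which VMFS version an existing datastore is running before planning an upgrade, a quick look from the host shell works. A minimal sketch (the datastore name is a placeholder):

  # Report the VMFS version, extents, and capacity of a datastore
  vmkfstools -P -h /vmfs/volumes/<datastore-name>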


30 Aug

VMworld: Storage vMotion Deep Dive (VSP3255)

Speakers: Min Cai and Ali Mashtizadeh (VMware)

Agenda:
– Basics of Storage vMotion
– Use Cases
– History of vMotion
– Architectural Overview in vSphere 5
– Snapshots & Storage vMotion
– Linked Virtual Machines
– Future Roadmap


18 Feb

VCE: Virtual Computing Environment

Are you familiar with VCE? If not, add it to your IT acronym dictionary; it’s something you’ll hear more about in the future if virtualization, shared storage, converged networks, and/or server infrastructure are in your purview. VCE stands for “Virtual Computing Environment” and is a consortium of Cisco, EMC, VMware, and Intel (funny…if you take three of those initials, you get V-C-E). The goal, which they seem to be realizing, is to deliver a “datacenter in a box” (or multiple boxes, if your environment is large), and in a lot of ways, I think they have something going…

The highlights for quick consumption:

  • a VCE Vblock is an encapsulated, manufactured product (SAN, servers, network fully assembled at the VCE factory)
  • a Vblock solution is designed to be sized to your environment based on profiling of 200,000+ virtual environments
  • one of the top VCE marketed advantages is a single support contact and services center for all components (no more finger pointing)
  • because a Vblock follows “recipes” for performance needs and profiles, upgrades also come in (and are required in) fixed increments
  • Cisco UCS blade increments come in “packs” of four (4) blades; EMC disks come in five-disk RAID group “packs”
  • Vblock-0 is good for 300-800 VMs; Vblock-1 is for 800-3000 VMs; Vblock-2 supports 3000-6000 VMs
  • when crossing the VM threshold for a Vblock size, Vblocks can be aggregated

Those are the general facts. So what does all that mean for interested organizations? Is it a good fit for you? Here are some takeaways I drew from the points above as well as the rest of the briefing by our VCE, EMC, and Cisco reps…

4 Jan

Installing ESXi 4.1 with Boot from SAN

We’ve been running ESX since the days of v2.5, but with the news that v4.1 will be the last “fat” version with a Red Hat-based service console, we decided it was time to transition to ESXi. The 30+ step guide below describes our process using an EMC CLARiiON CX3 SAN and Dell hosts with redundant QLogic HBAs (fiber environment); a rough Navisphere CLI equivalent of the storage steps follows the list.

  1. Document network/port mappings in vSphere Client on existing ESX server
  2. Put host into maintenance mode
  3. Shut down host
  4. Remove host from Storage Group in EMC Navisphere
  5. Create dedicated Storage Group per host for the boot LUN in Navisphere
  6. Create the 5GB boot LUN for the host
  7. Add the boot LUN to the host’s Storage Group
  8. Connect to the host console via the Dell Remote Access Card (DRAC)
  9. Attach ESXi media via DRAC virtual media
  10. Power on host (physically or via the DRAC)
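For those who prefer the command line, the Navisphere storage steps (4 through 7 above) can also be done with naviseccli. This is only a rough sketch; the SP address, host name, storage group names, LUN number, and RAID group are placeholders, and the exact syntax should be verified against your FLARE/naviseccli release:

  # Step 4: remove the host from its existing (shared) storage group
  naviseccli -h <SP-address> storagegroup -disconnecthost -host esx01 -gname Shared_SG -o

  # Step 5: create a dedicated storage group for the host's boot LUN
  naviseccli -h <SP-address> storagegroup -create -gname esx01_boot

  # Step 6: bind a 5GB RAID5 LUN (LUN 50 here) in RAID group 0 for the boot volume
  naviseccli -h <SP-address> bind r5 50 -rg 0 -cap 5 -sq gb

  # Step 7: present the boot LUN to the host's storage group as host LUN 0
  naviseccli -h <SP-address> storagegroup -addhlu -gname esx01_boot -hlu 0 -alu 50

  # Not shown in the excerpt above: the host must also be connected to its new group
  # before it can see the boot LUN
  naviseccli -h <SP-address> storagegroup -connecthost -host esx01 -gname esx01_boot -o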