
Posts tagged ‘emc’

18 Jan

EMC XtremIO and VMware EFI

After a couple of weeks of troubleshooting by EMC/XtremIO and VMware engineers, the root cause was determined to be EFI boot handing off a 7MB block to the XtremIO array, which filled the queue and never cleared because the array was waiting for more data to complete the exchange (i.e., a deadlock). This seems to happen only with EFI-firmware VMs (tested with Windows 2012 and Windows 2012 R2), and the issue is on the XtremIO end.

The good news is that the problem can be mitigated by adjusting the Disk.DiskMaxIOSize setting on each ESXi host from the default 32MB (32768) to 4MB (4096). You can find this in vCenter > Host > Configuration > Advanced Settings (bottom one) > Disk > Disk.DiskMaxIOSize. The XtremIO team is working on a permanent fix; in the meantime, the workaround can be implemented hot with no impact to active operations (aside from a potentially minor increase in host CPU load as ESXi breaks >4MB I/Os into 4MB chunks).
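For those who prefer the command line, the same change can be made with esxcli from the ESXi shell. Below is a minimal sketch, not a supported tool: it assumes shell access to the host, a Python interpreter, and that /Disk/DiskMaxIOSize is the advanced-option path on your build (verify with esxcli before applying).

```python
# Sketch: check and lower Disk.DiskMaxIOSize to 4096 KB (4MB) via esxcli.
# Run from the ESXi shell; the change takes effect immediately, no reboot needed.
import subprocess

OPTION = "/Disk/DiskMaxIOSize"   # advanced option path
TARGET_KB = 4096                 # 4MB, per the workaround above

def current_value():
    # 'esxcli system settings advanced list -o <option>' prints the option
    # details, including a line such as '   Int Value: 32768'.
    out = subprocess.check_output(
        ["esxcli", "system", "settings", "advanced", "list", "-o", OPTION],
        universal_newlines=True)
    for line in out.splitlines():
        if line.strip().startswith("Int Value:"):
            return int(line.split(":", 1)[1])
    raise RuntimeError("could not parse esxcli output for {0}".format(OPTION))

if current_value() != TARGET_KB:
    subprocess.check_call(
        ["esxcli", "system", "settings", "advanced", "set",
         "-o", OPTION, "-i", str(TARGET_KB)])

print("Disk.DiskMaxIOSize is now {0} KB".format(current_value()))
```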

26 Dec

EMC XtremIO Gen2 and VMware

Synopsis: My organization recently received and deployed one X-Brick of EMC’s initial GA release of the XtremIO Gen2 400GB storage array (10TB raw flash; 7.47TB usable physical). Since this is a brand-new product, virtually no community support or feedback exists, so this is a shout-out for other organizations’ and users’ experiences in the field.

Breakdown: We are a fully virtualized environment running VMware ESXi 5.5 on modern Dell hardware and Cisco Nexus switches (converged to the host; fiber to the storage), with storage originally on 3PAR. After the initial deployment of the XtremIO array in early December 2013, we began migrating VMs, starting with lower-priority, yet still production, guests.

Within 24 hours, we encountered our first issues when one VM (a Windows 2012 guest) became unresponsive and, upon a soft reboot, failed to boot; it hung at the Windows logo. Without going into too much detail, we hit an All Paths Down situation to the XtremIO array, and even after rebooting the host, we still could not boot that initial guest. Only when we migrated it (Storage vMotion) back to our 3PAR array could we successfully boot the VM.
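For anyone troubleshooting something similar, the quickest way to confirm an All Paths Down condition is to check the path states ESXi reports for the affected devices. The sketch below is illustrative only; it assumes shell access to the host and a Python interpreter, and it simply parses the output of esxcli storage core path list to flag devices with no active paths.

```python
# Sketch: flag devices with no active paths (an All Paths Down symptom).
# Run from the ESXi shell; parses 'esxcli storage core path list'.
import subprocess
from collections import defaultdict

out = subprocess.check_output(
    ["esxcli", "storage", "core", "path", "list"], universal_newlines=True)

# The listing is a series of per-path blocks containing 'Device:' and 'State:' lines.
paths_by_device = defaultdict(list)
device = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Device:"):
        device = line.split(":", 1)[1].strip()
    elif line.startswith("State:") and device:
        paths_by_device[device].append(line.split(":", 1)[1].strip())

for device, states in sorted(paths_by_device.items()):
    if "active" not in states:
        print("WARNING: no active paths to {0} (states: {1})".format(device, states))
```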


5 Nov

EMC Avamar – Epic Fail.

Terrible initial implementation. High-downtime expansion. Unreliable backups. Absentee support. That’s EMC Avamar.

On the tiny upside, deduplication works great…when backups work.

In September 2011, our tragedy began. We’re a 99% VMware-virtualized shop and bought into EMC Avamar on the promise that its VMware readiness and design orientation would make for low-maintenance, high-reliability backups. In our minds, this was a sort of near-warm redundancy, with backup sets that could restore mission-critical systems to another site in under 6 hours. Sales even pitched that we could take backups every four to six hours and thus reduce our RPO. Not to be.

Before continuing, I should qualify all that gloom and woe by saying that we have had a few stretches of uneventful reliability, but only when we avoided changing anything. And during one of those supposedly good stretches, a bug in the core functionality rendered critical backups unusable. But I digress…


21 Feb

SAN Winner: HP 3PAR V400

At the end of the day, it wasn’t the minor technological differences that made the decision for us. Sure, we believed that EMC’s VMAXe was the truly enterprise-class array. The ace, though, was product positioning.

We have two SANs. One is a CLARiiON CX3-40 from EMC, which is legacy and, as the market sometimes calls it, monolithic; it needs to go. The other is a 3PAR T400, which, thanks to its architecture, is as flexible as the day we bought it and has plenty of life left (even though we acquired it in 2008). Thus, when the cards were on the table, only HP had the ability to offer a “free” upgrade to our T400 as well as the new V400.

The upgrade turns our T-series into a multi-tier array with SSD, FC, and NL, and the V-series replaces our aging CLARiiON. EMC tried to compete, but all they could offer was a “deal” less appealing than the original single-array proposition.

Honestly, I felt bad for them, because there was nothing they could do unless they literally took a deep loss (no funny money about it). HP’s solution was the equivalent of two new, good, flexible, low-maintenance SANs. EMC has only just learned how to be flexible and match 3PAR, so their older arrays (one of which was part of their attempted counter-offer) simply didn’t measure up.

It’s going to be another hard sell in 2-3 years when we open the next RFP, because HP/3PAR will now have a monopoly on the floor. Who knows, though? Maybe HP will stumble with their new golden egg, or maybe EMC will figure out how to undercut HP with price while not sacrificing features. For now, the trophy goes to HP. Congrats.

——————————————————

By Chris Gurley, MCSE, CCNA
Last updated: February 21, 2012

6 Jan

SANs: EMC VMAXe and HP 3PAR V400

If you’re in the market for a new enterprise-class storage array, both EMC and HP/3PAR have good options for you. Toward the end of 2011, we began evaluating solutions from these two vendors, with whom we have history and solid relationships. On the EMC side, we grew up through a CX300 in 2006 and into two CX3-40s in 2008. At the end of 2008, we deployed a 3PAR T400 at our production site and brought one CX3-40 back to consolidate it with the one at our HQ. Three years on, our needs call for new tech.

As is the nature of technology, storage has made leaps and bounds since 2008. What once was unique and elevating to 3PAR (wide striping and simplified provisioning from one big pool of disks) has become commonplace in arrays of all classes. We used to liken it to replacing the carpet in a furnished room. It’s a real chore when you have to painstakingly push all the chairs and tables into a corner (or out of the room altogether!) to improve or replace the carpet. With disk abstraction and data-shifting features, though, changes and optimizations can be made without the headaches.


18 Feb

VCE: Virtual Computing Environment

Are you familiar with VCE? If not, add it to your IT acronym dictionary, because it’ll be something you hear more about in the future if virtualization, shared storage, converged networks, and/or server infrastructure are in your purview. VCE stands for “Virtual Computing Environment” and is a consortium of Cisco, EMC, VMware, and Intel (funny…if you take three of those initials, you get V-C-E). The goal, which they seem to be realizing, is to deliver a “datacenter in a box” (or multiple boxes, if your environment is large), and in a lot of ways, I think they have something going…

The highlights for quick consumption:

  • a VCE Vblock is an encapsulated, manufactured product (SAN, servers, network fully assembled at the VCE factory)
  • a Vblock solution is designed to be sized to your environment based on profiling of 200,000+ virtual environments
  • one of VCE’s top marketed advantages is a single support contact and service center for all components (no more finger-pointing)
  • because a Vblock follows “recipes” for performance needs and profiles, upgrades also come in fixed increments
  • Cisco UCS blades are added in “packs” of four (4) blades; EMC disks come in “packs” of five (5) RAID groups
  • Vblock-0 is good for 300-800 VMs; Vblock-1 is for 800-3000 VMs; Vblock-2 supports 3000-6000 VMs
  • when crossing the VM threshold for a Vblock size, Vblocks can be aggregated (a rough sizing sketch follows this list)
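To make the sizing recipe concrete, here is a rough sketch of the selection logic implied by the VM thresholds above. The numbers are the marketing figures from the briefing, not an official VCE sizing tool, and the function name is mine.

```python
# Rough sketch of Vblock selection based on the VM-count thresholds quoted above.
# These are marketing figures, not an official sizing tool.
VBLOCK_MAX_VMS = {
    "Vblock-0": 800,    # quoted as good for 300-800 VMs
    "Vblock-1": 3000,   # 800-3000 VMs
    "Vblock-2": 6000,   # 3000-6000 VMs
}

def pick_vblock(vm_count):
    """Return the smallest single Vblock that covers vm_count,
    or an aggregated count of the largest model if none does."""
    for model, max_vms in sorted(VBLOCK_MAX_VMS.items(), key=lambda kv: kv[1]):
        if vm_count <= max_vms:
            return "1 x {0}".format(model)
    biggest = max(VBLOCK_MAX_VMS.values())
    needed = -(-vm_count // biggest)   # ceiling division
    return "{0} x Vblock-2 (aggregated)".format(needed)

print(pick_vblock(500))    # 1 x Vblock-0
print(pick_vblock(4500))   # 1 x Vblock-2
print(pick_vblock(9000))   # 2 x Vblock-2 (aggregated)
```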

Those are the general facts. So what does all that mean for interested organizations? Is it a good fit for you? Here are some takeaways I drew from the points above as well as the rest of the briefing by our VCE, EMC, and Cisco reps…

4 Jan

Installing ESXi 4.1 with Boot from SAN

We’ve been running ESX since the days of v2.5, but with the news that v4.1 will be the last “fat” version with a Red Hat-based service console, we decided it was time to transition to ESXi. The 30+ step guide below describes our process using an EMC CLARiiON CX3 SAN and Dell hosts with redundant QLogic HBAs (Fibre Channel environment).

  1. Document network/port mappings in the vSphere Client on the existing ESX server (see the sketch after this list for one way to capture them)
  2. Put host into maintenance mode
  3. Shutdown host
  4. Remove host from Storage Group in EMC Navisphere
  5. Create dedicated Storage Group per host for the boot LUN in Navisphere
  6. Create the 5GB boot LUN for the host
  7. Add the boot LUN to the host’s Storage Group
  8. Connect to the host console via the Dell Remote Access Card (DRAC)
  9. Attach ESXi media via DRAC virtual media
  10. Power on host (physically or via the DRAC)
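Step 1 is worth a little automation if more than a couple of hosts are in scope. Here is an illustrative sketch, not part of the original procedure: it assumes shell access to the ESX service console with a Python interpreter available, and it simply saves the output of esxcfg-vswitch -l (the vSwitch, uplink, and port group listing) to a file before the host is wiped.

```python
# Sketch for step 1: snapshot the host's vSwitch/port-group layout before the rebuild.
# Run from the ESX service console; captures the 'esxcfg-vswitch -l' listing to a file.
import datetime
import socket
import subprocess

hostname = socket.gethostname()
stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
outfile = "/tmp/{0}-network-map-{1}.txt".format(hostname, stamp)

# 'esxcfg-vswitch -l' lists each vSwitch, its uplinks (vmnics), and its port groups.
listing = subprocess.check_output(["esxcfg-vswitch", "-l"], universal_newlines=True)

with open(outfile, "w") as f:
    f.write(listing)

print("Saved network/port mapping for {0} to {1}".format(hostname, outfile))
```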

18 Oct

Virtual Center 2.x incorrectly sizes disks during migration

VMs whose disks are (or were) RDMs (Raw Device Mappings) and which have had one or more of those disks grown via LUN migration in EMC Navisphere (or a similar function in another vendor’s SAN tool) may end up with incorrectly sized disks on the target SAN during a storage migration. The cause is that the RDM mapping file on the source SAN is never updated to reflect the size of the grown LUN. VMware Virtual Center uses that mapping file to create the new VMDK files on the target, so if the mapping file does not reflect the proper size, Virtual Center will create a smaller file on the target, possibly resulting in loss of data or program integrity.

Example: server1 originally had a 30GB C:\ prior to a rebuild. When it was rebuilt, the same LUNs were used; however, due to a larger RAM allocation (8GB instead of 4GB), the C:\ drive needed to be expanded, which was accomplished with a LUN migration in EMC Navisphere. The mapping file (the pointer .vmdk) never changed to reflect the new size, so when the storage migration took place, Virtual Center created only a 30GB virtual disk on the target SAN. Windows booted thinking it had a 50GB disk (the expanded size), and the result was that applications (e.g., SQL Server) and possibly other components failed to function after the migration.

The solution is to delete and re-add the RDMs on any VM whose LUNs were grown, before migrating, to ensure that the correct size is used on the target.
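As a sanity check before a migration, you can compare each RDM’s mapping-file capacity against the size of the LUN behind it. The sketch below is purely illustrative: it is written against the modern vSphere Python API (pyVmomi), which did not exist in the Virtual Center 2.x era, and VC_HOST/VC_USER/VC_PASS are placeholders; only the comparison logic is the point.

```python
# Illustrative sketch: flag RDM disks whose mapping-file size no longer matches
# the backing LUN. Uses the modern vSphere API (pyVmomi); VC_HOST etc. are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="VC_HOST", user="VC_USER", pwd="VC_PASS",
                  sslContext=ssl._create_unverified_context())
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if not vm.config or not vm.runtime.host:
            continue
        # LUN uuid -> size in KB, as reported by the host the VM runs on.
        lun_kb = {}
        for lun in vm.runtime.host.config.storageDevice.scsiLun:
            cap = getattr(lun, "capacity", None)   # only ScsiDisk objects have this
            if cap:
                lun_kb[lun.uuid] = cap.block * cap.blockSize // 1024
        for dev in vm.config.hardware.device:
            if (isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
                    dev.backing,
                    vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)):
                actual_kb = lun_kb.get(dev.backing.lunUuid)
                if actual_kb and actual_kb != dev.capacityInKB:
                    print("{0}: {1} mapping file says {2} KB but LUN is {3} KB".format(
                        vm.name, dev.backing.fileName, dev.capacityInKB, actual_kb))
finally:
    Disconnect(si)
```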

Applies to: VMware ESX 3.x, Virtual Center 2.x