
July 18, 2012


HP 3PAR: AO Update…Sorta

I wish there were an awesome update that I’ve just been too preoccupied to post, but it’s more of a “well…” After talking with HP/3PAR folks a couple of months back and re-architecting things again, our setup is running pretty well in a tiered config, but the caveats in the prior post remain. Furthermore, there are a few caveats that I think HP/3PAR should spell out for customers, or that customers should consider for themselves, before buying into the tiered concept.

  1. Critical mass of each media type: Think of it like failover capacity (in my case, vSphere clusters). If I have only two or three hosts in my cluster, I have to leave at least 33% capacity free on each to handle the loss of one host. But if I have five hosts, or even ten, I only have to leave 20% (or, for ten hosts, 10%) free to account for a host loss.

     Tiered media works the same way, though it feels uber-wasteful unless you have a ton of stale/archive data. Our config only included 24 near-line SATA disks (and our tiered upgrade to the existing array only had 16). While that adds 45TB+ of capacity, realistically those disks can only handle between 1,000 and 2,000 IOPS. Tiering (AO) accounts for these things, but it seems a little underqualified when it comes to virtual environments. Random seeks are the enemy of SATA, and when AO throws tiny chunks of hundreds of VMs onto only two dozen SATA disks (then subtract RAID/parity), it can get bad fast. I’ve found this to be especially true of OS files: Windows leaves quite a few of them alone after boot, so AO moves them down. Now run some maintenance and reboot those boxes–ouch! (See the rough math sketched after this list.)

     The nutshell is that media like SATA/NL have a critical-mass quantity (in my opinion) and should be sold in sets of at least 32 disks, or maybe even 64. By scaling out to that (and ignoring the awesomely wasted capacity it gives you), you can safely tier even often-stale-yet-sometimes-hot data (like those OS files) and survive. Of course, at that point/quantity, SAS/FC might be better :).
  2. Disk space usage alerts: This one is more of an annoyance, but if you use AO, especially with SSD/flash, you’ll find that you either have to waste chunks of raw storage or else suffer through alerts when AO moves data that exceeds 85% of a storage type. For FC/SAS data, I like to keep at least 15% free, so that’s not really an issue, and with NL/SATA, we’d crash from latency if we ever filled up those disks with data. SSD/flash, though, is pricey, so I like to use as much as is safely possible, which means we get e-mail alerts daily if I let AO have its way with the storage. So…I’ve cut back my allocations so that AO can only max out SSD at about 80%. That’s some pretty expensively wasted I/O…or a lot of irrelevant alerts.
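For what it’s worth, here is the rough math behind both caveats as a quick Python sketch. The per-spindle IOPS figure, the RAID overhead, and the 5% SSD headroom below the alert threshold are my own ballpark assumptions, not numbers from HP or from our array; plug in whatever your environment actually sees.

```python
# Back-of-the-envelope math for the two caveats above.
# Assumptions (mine): ~75 random IOPS per 7.2k NL/SATA spindle,
# ~25% of spindles lost to RAID/parity, and 5% headroom kept
# below the array's 85% space-alert threshold on SSD.

def free_fraction_per_host(hosts: int) -> float:
    """Fraction of capacity each host must keep free to absorb one host failure."""
    return 1.0 / hosts

def nl_tier_iops(disks: int, iops_per_disk: int = 75, parity_overhead: float = 0.25) -> float:
    """Very rough usable random IOPS for an NL/SATA tier."""
    return disks * iops_per_disk * (1 - parity_overhead)

def ssd_usable_tib(raw_tib: float, alert_threshold: float = 0.85, headroom: float = 0.05) -> float:
    """Capacity you can let AO fill on SSD before tripping the space alert, minus headroom."""
    return raw_tib * (alert_threshold - headroom)

if __name__ == "__main__":
    for hosts in (3, 5, 10):
        print(f"{hosts} hosts -> keep {free_fraction_per_host(hosts):.0%} free per host")
    for disks in (16, 24, 32, 64):
        print(f"{disks} NL disks -> ~{nl_tier_iops(disks):,.0f} usable random IOPS")
    print(f"2.0 TiB raw SSD -> ~{ssd_usable_tib(2.0):.2f} TiB before alerts")
```

Under those assumptions, 24 NL disks land at roughly 1,350 usable IOPS, right in the 1,000-2,000 range above, while 64 disks clear 3,500, which is why I keep coming back to the critical-mass idea.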

All these things said, the array is solid, and with a conservative AO config (less NL than you might want, and less SSD than you wish you could quietly use), you’ll be in great shape and back to easy, mostly hands-off management. I have high hopes for the next version of AO and System Reporter, but for now, they are just that–hopes.

——————————————————

By Chris Gurley, MCSE, CCNA
Last updated: July 18, 2012

2 Comments
  1. Mark Simon
    Jul 19 2012

    Thanks for the update, Chris. Lucky for me, the environment I support has a larger SAN with fewer clients than yours. I will not be putting any of my OS disks in AO, especially after reading your post. We are planning on using AO for data, mostly MS SQL, and possibly application disks. Right now we do not use SSD, only FC and SATA. One of my concerns is that trying out AO in test will not be anything like AO in production.

  2. Chris
    Jul 19 2012

    Totally agree on AO testing differing from production. A couple of other points: 1) focus AO on your peak performance windows (i.e., tell it to run at 5 PM and measure the last 9 hours), and 2) start small by giving only small chunks of SATA to each AO config, then increase it as long as performance stays acceptable. Also, always create your VVs in the FC CPGs and let AO move them down (if you can afford that). You might think that, now that you have SATA space, creating there and letting data move up would be good, but I think you’ll suffer by making those first writes land down there (I could be wrong; it varies by environment, but just a word of caution).

