
April 19, 2012


HP 3PAR: The AO Caveat

Earlier this year, we posted about a new SAN bidding process and the eventual winner, the HP 3PAR V400. Now that we’ve been live on it for about six weeks, it’s time for a small update on a particular feature that might weigh in on your own decision, if you’re in the market.

Our new V400 was our first foray into the tiered storage market, and we liked the promise of SSD speed on hot blocks without paying SSD prices to store average data. EMC claimed advanced metrics, granular policies, and the ability to optimize as frequently as every 10 minutes. That sounded REALLY good. 3PAR cited some of the same capabilities, sans the frequency, so we assumed the two were about even, granting that results might be slightly delayed on the V400 (vs. the VMAXe). What we’ve discovered isn’t so symmetric.

HP 3PAR leverages a feature it calls “Adaptive Optimization” (AO), which moves 128 MB regions of data between storage tiers (0: SSD, 1: FC, 2: NL). Management of this feature was/is incorporated into the 3PAR System Reporter product, which accumulates array performance data on an ongoing basis. While that repository of information is definitely the right foundation to build AO upon, the implementation itself is very elementary.
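To put that granularity in perspective, here’s a quick sketch (mine, purely illustrative) of how many independently placeable 128 MiB regions a volume breaks into:

```python
# Illustrative only: AO places data in 128 MiB regions, so a volume's
# tiering granularity is its size divided into 128 MiB chunks.
REGION_MIB = 128

def regions_in_volume(volume_gib):
    """Number of 128 MiB regions AO can independently place for a volume."""
    return (volume_gib * 1024) // REGION_MIB

# A 2 TiB virtual volume is tiered as 16,384 independent regions.
print(regions_in_volume(2048))  # -> 16384
```

That fine granularity is exactly why the policy logic around it matters so much: each of those thousands of regions is a separate up/down decision.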

AO configuration is based on policies which apply to Common Provisioning Groups (CPGs), which are the containers/metadata holders of Virtual Volumes (VVs), otherwise known as LUNs in competitor storage products.


To briefly explain the single-step configuration of an AO policy: the tiers are CPGs (a CPG is a single type and RAID config of storage, e.g. SSD RAID 5), and the tier sizes are the maximum space the policy is allowed to use in each CPG. For scheduling, the date/day of week/hour determine when the optimization runs, and any movements are based on the amount of performance data (in hours) specified in Measurement Hours, which ranges from 3 to 48 (e.g. run at 17:00 based on the past 9 hours of data). Mode determines how aggressively regions are moved up or down (Performance, Balanced, or Cost), and the last setting is whether the policy is enabled.
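The fields above can be modeled as a simple structure. This is a sketch only; the names and CPG labels are mine, not 3PAR’s actual API:

```python
# Hypothetical model of an AO policy as described in the text.
# Field names and CPG names are illustrative, not 3PAR's.
from dataclasses import dataclass

@dataclass
class AOPolicy:
    name: str
    tier_cpgs: dict         # tier number -> CPG, e.g. {0: "SSD_RAID5", ...}
    tier_max_gib: dict      # tier number -> max space the policy may use
    schedule_hour: int      # hour of day the optimization runs (0-23)
    measurement_hours: int  # how much past data moves are based on (3-48)
    mode: str               # "Performance", "Balanced", or "Cost"
    enabled: bool = True

policy = AOPolicy(
    name="Gold",
    tier_cpgs={0: "SSD_RAID5", 1: "FC_RAID5", 2: "NL_RAID6"},
    tier_max_gib={0: 1200, 1: 10000, 2: 500},
    schedule_hour=17,
    measurement_hours=9,    # run at 17:00 using the past 9 hours of data
    mode="Performance",
)
```

Note that `tier_max_gib` is a ceiling, not a target; that distinction is the crux of the complaints below.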

What we’ve found is that these options fall short of our tiering hopes and tend to de-optimize our storage, leaving things running on the slower side, because AO keeps deciding to move regions down to NL (it seems heavily biased toward NL, even in a “Performance” mode configuration).

Before I go further, I should say that we have no hands-on experience with EMC storage to verify that these limitations don’t apply there as well, but my understanding from our technical review was that more intelligence is built into the VMAXe, etc.

Our main complaints center on the reactive nature of AO. In our environment, cycles of data activity are driven more by day of the week than by a specific hour of the day. In other words, Mondays look like Mondays, Tuesdays like Tuesdays, etc. With AO, we can only base the “optimization” on up to 48 hours of immediately preceding data, so even if we focus on weekday business hours, the nightly movements prepare Tuesday for Monday’s behavior, and so on.
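The window math makes the mismatch obvious. A sketch (dates hypothetical) of what an evening AO run actually looks at:

```python
# Sketch of the reactive-window problem: an AO run on Monday evening with
# a 9-hour measurement window only ever sees Monday's activity, so the
# layout it produces serves Tuesday based on Monday's behavior.
from datetime import datetime, timedelta

def measurement_window(run_time, measurement_hours):
    """The (start, end) data interval an AO run considers."""
    return (run_time - timedelta(hours=measurement_hours), run_time)

run = datetime(2012, 4, 16, 22, 0)  # a Monday evening
start, end = measurement_window(run, 9)
print(start.strftime("%A %H:%M"), "->", end.strftime("%A %H:%M"))
# -> Monday 13:00 -> Monday 22:00
```

Even at the 48-hour maximum, a Monday-night run can reach back no further than Saturday night; there is no way to say “optimize for what Tuesdays usually look like.”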

From what EMC said, their tiering software lets you decide what percentage of each type of storage a given policy uses. So you might have a policy that uses 20% SSD, 70% FC, and 10% NL, and it will move hot/warm/cold data around accordingly. In 3PAR AO, those tier size settings are just “allowable” space; there’s no way to encourage AO to actually use the SSD, for example. It may simply decide the data is cold and move it down to NL, or wherever the coldest allowance is.

3PAR’s answer is to shrink that size setting so a policy can’t use more than ### GiB, but this becomes tedious depending on how many VVs you have in each CPG. We went with a three-policy configuration of “Gold”, “Silver”, and “Bronze”, with greater or lesser amounts of SSD, FC, and NL as you move across the spectrum (e.g. Gold has 1200 GB of SSD, 10000 GB of FC, and 500 GB of NL; Bronze has no SSD, 10000 GB of FC, and 10000 GB of NL; Silver is a balance of the two). We find that even though we’d like Gold to be aggressive and use all of its SSD, it often leaves hundreds of GBs unused.
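Laid out side by side, the three policies look like this. The Gold and Bronze figures come from the text; the Silver numbers are an assumed midpoint, since the post only says it is a balance of the two:

```python
# Per-tier caps (in GiB) for our three AO policies. These are maximum
# allowances only: nothing in AO obliges a policy to fill its SSD cap.
# Gold and Bronze figures are from the post; Silver is an assumed midpoint.
policies = {
    "Gold":   {"SSD": 1200, "FC": 10000, "NL": 500},
    "Silver": {"SSD": 600,  "FC": 10000, "NL": 5000},
    "Bronze": {"SSD": 0,    "FC": 10000, "NL": 10000},
}

# The only lever is the ceiling: shrinking "NL" is the sole way to keep
# a policy from demoting warm data there.
```

This is the heart of the complaint: a cap is a one-sided control. You can forbid NL, but you cannot demand SSD.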

All that said, we are meeting with HP 3PAR folks tomorrow to see about tweaking the policies (and probably creating new ones) to improve the behavior, but some of these issues will remain unsolved (e.g. the scheduling and the reactive nature of the whole approach).

For all this negativity, 3PAR shines with large pools of homogeneous storage (i.e. hundreds of FC disks), and as it stands, I’m not sure we didn’t make a mistake in insisting on a tiered solution rather than a single 300 x 400 GB FC drive configuration. I believed in the power of SSD, which I’m not yet seeing in 3PAR’s setup, but then I’m not sure 3PAR knows how to use SSDs properly. So…consider that when shopping. They really do make a good argument for good ol’ reliable FC disks in large quantities.


By Chris Gurley, MCSE, CCNA
Last updated: May 28, 2012

6 Comments
  1. Vish Mulchand
    Apr 20 2012

    Hi Chris

    I work at HP and am responsible for 3PAR product management. Thank you for sharing your experiences with the product. It would be good to connect live and see if we can further understand your feedback on AO.

    You can reach me at 510-668-9446 or via email. I look forward to hearing back from you.


  2. Chris
    Apr 20 2012

    Hey Vish,

    Thanks for reaching out. We’re actually meeting this morning in person with David Baker and others from HP/3PAR to see about improving the current perception. If we can make some headway on its config/behavior, I’ll definitely follow up here with an update to balance things out.

    My goal is to be fair to the products based on our experiences, so I’ll post about progress as quickly as I will about difficulties. Let me get to the other side of this morning’s meeting, and then we can talk.


  3. Vish Mulchand
    Apr 21 2012

    OK, thanks. I will connect with David and look forward to hearing back from him and/or you if additional discussions are required.

    • Chris
      Apr 21 2012

      Thanks, Vish. Just in case it didn’t come through, I did call & leave you a voicemail yesterday afternoon. On Monday, I’ll drop you an email with the main, concise points on what we hoped would be and hopefully some day will be in AO.

  4. Mark Simon
    Jul 18 2012

    Any update? Curious minds want to know.
    We have 3PAR storage and are about to set up AO for some of our storage (2-tier: FC and SATA).

  5. Chris
    Jul 18 2012

    Hey Mark,

    Thanks for the reminder. Check out today’s post for my latest feedback. Nutshell: be conservative with what you let AO put on SATA, and if possible, add a few more disks for I/O.

