Dell EMC PowerStore

About half a year ago there were rumours that Dell EMC was working on consolidating its storage portfolio. Everyone agreed it was time, since the product line contained many storage systems, all brought to the table by either Dell or EMC. This caused quite some confusion for clients and partners: it was not enough to validate the options against other vendors’ offerings, they had to be weighed against the rest of the Dell EMC house as well, since there were multiple candidates for the same need.

(Everything I blog at this website is my own, personal view. It does not reflect or relate to my employer’s official standpoint!)

Now the time has come and their first device – earlier called midrange.Next – is out, named PowerStore.

I like the design; this was always a strength of Dell. Before I start, let me assume that all the marketing numbers are correct – it is X percent faster and so on. I don’t care much about those, as numbers without proof mean nothing, so I simply accept them all.

So a new storage system is here, and it has one sub-version that operates a little differently from any other storage array – I will come back to this later. Below is everything it can do: it is self-optimizing and intelligent.

In these times it no longer makes sense to design hybrid systems, so it is totally understandable that PowerStore is only available as all-flash. It is always based on 2U enclosures, both the head unit – which holds the two, and always two, controllers – and the expansion enclosures. Both have 25 x 2.5″ bays; in the controller chassis only 21 are available for data drives, while in the expansion units all 25 are at your disposal. The reason is simple: the head unit reserves bays for NVMe NVRAM write-cache drives – two in the smaller models, four in the larger ones, in mirrored pairs. Drive options are below.

PowerStore supports SCM – Optane – which can be used to serve volumes directly. There are also options for NVMe SSDs, but no surprise, these NVMe drives can only be hosted in the controller enclosure. Expansion enclosures use SAS SSDs only and are connected to the controller chassis by 4 x 12Gb/s SAS cables. It is interesting to see that the 15.36TB drives are only available in the head unit – because they are NVMe.

Connectivity

The controller nodes are roughly 1U in size and slide into the head enclosure from the back. There are all kinds of options, like 16 x 16/32Gbit/s Fibre Channel ports, 24 x 10GBase-T iSCSI ports and 24 x 10/25Gbit/s iSCSI ports. In total there can be 24 ports, summing up the ports of both controllers.

PowerStore “sub”types

Two distinct versions:

  • PowerStore T
  • PowerStore X

More on this later…..wait.

Both of the above come in five flavors. The total capacity and connectivity are the same for each: a maximum of 96 drives and 898.56TB raw capacity, which comes from this calculation: (21 x 15.36) + (3 x 25 x 7.68).
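A quick sanity check of that figure, using the drive sizes listed above (raw capacity only – no RAID, spare or metadata overhead):

```python
# Rough raw-capacity check for a maxed-out PowerStore appliance.
# Drive mix as described in the text: 21 x 15.36TB NVMe in the head unit,
# plus 3 expansion shelves with 25 x 7.68TB SAS SSDs each.

head_unit_tb = 21 * 15.36        # NVMe data drives in the base enclosure
expansion_tb = 3 * 25 * 7.68     # three expansion enclosures, 25 SAS SSDs each

total_raw_tb = head_unit_tb + expansion_tb
print(f"Head unit:  {head_unit_tb:.2f} TB")
print(f"Expansions: {expansion_tb:.2f} TB")
print(f"Total raw:  {total_raw_tb:.2f} TB")   # 898.56 TB
```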

The main differences are the number of NVMe NVRAM drives in the four reserved bays at the front, the Xeon processor type, and the amount of RAM in the controllers. So again, this is always a dual-controller, active-active design. Dell EMC does not say, but it is not hard to figure out that these are previous-generation Scalable Xeon – Skylake – based controllers, and the 9000 model has two Xeon Platinum 8176 CPUs in it.

Why does it have this extreme amount of RAM? Because the PowerStore X model runs ESXi and can be used to run virtual machines. As for the point of the same amount of memory in the PowerStore T models, I have no information so far. This is maxed out – you cannot bring compute any closer to storage. While HCI systems bring storage to the compute layer, this goes the other way around – impressive! Furthermore, even though the X models run ESXi, they can still serve block storage to external hosts.

I find it very interesting that the vendor uses no special ASIC or other hardware trick – besides QAT – but does all of this in software. In the PowerStore X models this materializes as two beefy VMs, one on each node, that run the storage services. Expect that the horsepower listed above will be partly unavailable to your own VMs, as the storage VMs must have priority.

As mentioned, PowerStore T models use the same hardware, but on them the storage OS – called PowerStore OS – runs directly, so no ESXi. To gain something in return, you can use both block and file-based storage services on them. This is configured at initial setup and cannot be changed later – this still needs to be confirmed.

At Dell EMC the time has come when the RAID type is hard-coded, and here it is RAID5. Yes, many other vendors moved to RAID6 or something similar to ensure that multiple drive failures do not result in data loss; in the age of large drives this choice is surprising. The RAID set width is also chosen automatically, and here is a glitch (this is from the best practices guide):

If you start with fewer than 10 SSDs, you will end up with 4+1, even if you later upgrade to 96 drives. If you purchase 10 or more at setup, you will get 8+1, which – according to the text above – “provides greater usable capacity from the same number of drives”. I don’t like being reminded that “if you start small, you get a little less even if you go big later”. As I see it, there are kit-based storage systems and there are self-optimizing systems; the latter was on the slide I pasted above. Why doesn’t it re-optimize itself to 8+1 if I grow to 96 drives on day 2?
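To put rough numbers on that width difference, here is a sketch with my own illustrative figures – 96 x 7.68TB drives, ignoring spare capacity, the NVRAM bays and any metadata or data-reduction effects:

```python
# Hypothetical illustration of why RAID stripe width matters for usable capacity.
# Purely illustrative numbers: 96 x 7.68TB drives, no spares or metadata counted.

raw_tb = 96 * 7.68

def usable(raw_tb, data_drives, parity_drives):
    """Usable capacity if every stripe is data_drives + parity_drives wide."""
    return raw_tb * data_drives / (data_drives + parity_drives)

print(f"Raw:          {raw_tb:.2f} TB")
print(f"RAID5 4+1 ->  {usable(raw_tb, 4, 1):.2f} TB usable")   # 80.0% of raw
print(f"RAID5 8+1 ->  {usable(raw_tb, 8, 1):.2f} TB usable")   # ~88.9% of raw
```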

Dell EMC guarantees 4:1 data reduction (DECO), and the system has thin provisioning.
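Taking that guarantee at face value, the effective-capacity math is trivial – again purely illustrative, continuing from the 8+1 figure in the previous sketch:

```python
# Effective capacity if the 4:1 data-reduction guarantee holds.
# Illustrative only: applies the ratio to the 8+1 usable figure from above.

usable_tb = 655.36          # 96 x 7.68TB at 8+1, from the previous sketch
reduction_ratio = 4.0       # the guaranteed dedup + compression ratio

effective_tb = usable_tb * reduction_ratio
print(f"Effective capacity at 4:1: {effective_tb:.2f} TB")   # ~2.6 PB
```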

Scale up/out

It can scale up with 3 additional enclosures, to a maximum of 96 drives per system. Four of these systems can be grouped into a cluster – PowerStore T models only! – so management becomes a single pane of glass, but performance does not aggregate. If you export a volume, it will not be striped across multiple arrays but will land on one of them. We have seen other implementations of this idea where a single volume could span multiple arrays while remaining transparent to the host using it.

Management

CloudIQ is used as the cloud-based analytics platform for the storage. I guess this will be available for all Dell EMC products, if it is not already. PowerStore is compatible with VVols and VMware Cloud Foundation, can be consumed by Kubernetes via CSI, and can be orchestrated by Ansible as well. It also has a vCenter plug-in for ease of use by VMware admins.

Summary

There are no performance figures so far, and while I like to avoid hero numbers – they are always misleading and show results that are rarely achievable with normal usage patterns – I feel I need to know a little more about the IO path and look under the hood.

PowerStore X is unique, and everyone has to find their own use case for it. PowerStore T, on the other hand, is not unique; it is a midrange device with pros and cons. If someone tells me “I need NAS and block”, this is a good choice for them. If someone only wants block, there are many other, possibly better options out there.

Pros:

  • PowerStore X: unique idea – kind of an HCI
  • PowerStore T: can serve both block and file storage
  • active-active controllers – note that in any active-active array the load on either controller should not exceed 50%, otherwise the performance loss will be serious if one of them goes down (a simple sketch of this is after the list). Importantly, there is no way to limit or enforce this.
  • all-inclusive licensing
  • DECO always on
  • PowerStore T supports many services/protocols in Unified mode (NFSv3, NFSv4, NFSv4.1; CIFS (SMB 1), SMB 2, SMB 3.0, SMB 3.02, and SMB 3.1.1; FTP and SFTP)
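To illustrate the 50% point from the active-active bullet above, a trivial sketch with made-up utilization numbers:

```python
# Why 50% per controller matters in an active-active pair (illustrative only).
# If both controllers run above 50%, the surviving one cannot absorb the
# partner's load when a controller fails, and performance suffers.

def failover_load(load_a_pct, load_b_pct):
    """Load the surviving controller is asked to carry after a failover."""
    return load_a_pct + load_b_pct

for a, b in [(40, 45), (50, 50), (65, 70)]:
    total = failover_load(a, b)
    verdict = "fits" if total <= 100 else "overloaded"
    print(f"A={a}% B={b}% -> surviving controller needs {total}% ({verdict})")
```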

Cons:

  • RAID5 – please explain, literally, right now. Yes, RAID6 needs serious controller power to deliver good IOPS/low latency together with effective DECO for usable capacity, but that is not the client’s problem
  • RAID set width is a one-time configuration; even if you grow later, it does not change
  • no 16TB SAS SSD
  • not customer self-installable
  • async replication over IP only
  • no sync replication
  • cluster function – scale-out – is only available on T models; volumes won’t be striped anyway but always live behind a single controller pair
  • only the first appliance – the one that forms the cluster – can run unified services, so if the cluster was formed with a block-only first node, unified services cannot be added later
  • the scale-out solution takes no performance data into account at volume creation, only capacity
  • X models reserve 50% of their CPU/RAM capacity for the storage/control VM.

Before deciding anything I need to learn more. I’d like to know what Dell EMC’s answer and marketing move will be to explain why RAID5 is better than RAID6. Yes, one can argue higher performance, but as a client I don’t care how powerful a controller the system needs to deliver that performance with RAID6. I also find it disappointing that the system is supposedly adaptive/self-optimizing, yet I have to make a choice at setup that lasts forever and may affect usable space and performance down the road. This needs a serious explanation too, as clients will ask about it and I will not lie.

I have trouble defining which systems are its competitors. If it is positioned against HPE Nimble, the latter does some things better:

  • true scale out
  • sync replication with peer persistence
  • predictive analytics – HPE Infosight – cross stack
  • proven track record
  • full performance regardless of how small you start – except the C/H/Q special models
  • triple parity – the all-flash models have Triple+ Parity – so they can lose 3 drives at the same time, plus tolerate an additional in-drive issue on top of that

PowerStore does a couple of things that HPE Nimble does not:

  • Model X runs ESXi
  • SCM as a volume store
  • NAS services – SMB/CIFS/FTP/NFS
  • active-active controllers

If it is positioned against HPE Primera, there is nothing to discuss at its current level of readiness. Is it competing within the Dell EMC family – Unity XT/SC? Better than those, for sure.

I am not judging, but I believe that if you want to win a GT race, you need to be good at cornering and trail braking; it is not enough to be fast on the straights. Know your opponents and deliver services similar to theirs: if you are not first in a market segment, you follow the rules defined by the one who is. If that means sync replication, then it is sync replication – it does not matter whether anyone actually uses it, you need to have it in your system.