Nimble OS 5.2 is here

Let’s start with the summary: I believe this is a milestone release. This is not just a version best described as some makeup here and there, but one that delivers 1388 bug fixes, 39 enhancements to existing features and 27 brand new features! For free! This is NOS 5.2, which has reached the IPR state. Before we start, let’s discuss what the hell “Initial Production Release” means.

After a release comes out of development, it goes through this pipeline: IPR → GAC → GA.

IPR is a codebase that is mature and final enough to hit arrays out in the field. It must be requested from Nimble support, as no array will get it by default. Test systems of HPE Storage partners are the best candidates for this release, so if something comes up, the problem will not affect customer workloads.

Once the code has enough hours on it, it moves to the GAC (General Availability Candidate) state, which broadens deployment to many more systems, but it remains opt-in: just like before, no end customer gets it automatically. GA – General Availability – is the step where the version is released to the public. One very important thing here: a release can be GA, yet certain arrays can have the update blacklisted, because they are overloaded or configured in a way that the update would not complete properly. Some releases are LTSR (Long Term Support Release), but not every version will be an LTSR. A gentle reminder: 99.99999% availability is expected from GA code, but not from IPR or GAC!

New things – hardware

8TB SSD + 14TB HDD

These larger drives are now supported: the former in the all-flash models, the latter in the hybrid arrays. This does not change the total capacity a model can deliver in its fully scaled-up state, but it helps squeeze that capacity into a much smaller datacenter footprint.

Below you can see the maximum raw capacity for all arrays – so once again, these numbers are not changing now. The larger drives are only applicable to the bigger models, starting from the HF20C and AF40. To be fair, I have no idea how an HF20C would benefit from 21 × 14 TB of drives if it can only handle 105 TB.

MODEL   MAX RAW CAPACITY (TB)
HF20    210
HF20C   105
HF20H   211
HF40    504
HF40C   1470
HF60    1260
HF60C   1470
AF20    46
AF20Q   46
AF40    184
AF60    553
AF80    553
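
Just to put numbers on that HF20C remark above, here is a tiny back-of-the-envelope sketch in Python – purely illustrative, not an official sizing tool:

```python
# Back-of-the-envelope only: raw capacity of a fully populated shelf with
# the new 14 TB HDDs versus the HF20C cap from the table above.
drives_per_shelf = 21      # drive count from the example above
hdd_size_tb = 14           # new large HDD
hf20c_max_raw_tb = 105     # model limit from the table

raw_tb = drives_per_shelf * hdd_size_tb
print(f"{drives_per_shelf} x {hdd_size_tb} TB = {raw_tb} TB raw, "
      f"but the HF20C is capped at {hf20c_max_raw_tb} TB")
# 294 TB raw against a 105 TB cap: the bigger drives shrink the footprint,
# they do not raise a model's maximum.
```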

32 Gbit/s HBA

The two-port, 32 Gbit/s FC card is supported in all gen-five models as an online upgrade, and it is backward compatible with 16 and 8 Gbit/s speeds. The xx20 models can take two cards per controller – giving you 4 × 32 Gbit/s of active ports – while all the others can take three, for 6 × 32 Gbit/s of juice.

Storage Class Memory

The top two all-flash models (AF60/AF80) will get it in the form of an Intel Optane based PCIe card. It occupies one slot in each controller – and only one, since a single SCM card is supported per controller. Nimble has always been unique with its patented CASL (Cache Accelerated Sequential Layout). The name tells a lot: since 2008 it has used cache to deliver flash performance on spinning media. Now it uses an SCM cache to deliver SCM speed on “non-SCM” flash media.

The SCM card has 1500 GB of capacity and, as said, one and only one can be added per controller chassis. Nimble has extended CASL for the all-flash models so that it can use SCM for read caching. Read latency on all-flash Nimble arrays was always great – comfortably under 1 ms – but now that can be at least halved. SCM caching can be enabled per volume, so it is not a global cache.

SCM is not mirrored the way NVDIMM (the first landing buffer for writes) is, so in case of controller loss or a planned failover, the standby controller starts with an empty SCM cache and goes through cache warm-up again as the heatmap rebuilds.
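
If the caching idea feels abstract, here is a purely conceptual read-through cache sketch in Python – this is not CASL and not Nimble’s code, it just shows why a freshly failed-over controller serves its first reads from flash until the SCM cache warms up again:

```python
# Conceptual sketch of a read-through cache in front of slower media.
# Not Nimble's CASL implementation - just the general pattern.
class ReadCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = {}                      # block_id -> data held in SCM

    def read(self, block_id, backend):
        if block_id in self.blocks:           # hit: served at SCM latency
            return self.blocks[block_id]
        data = backend.read(block_id)         # miss: fall back to flash
        if len(self.blocks) < self.capacity:  # cache warms as reads arrive
            self.blocks[block_id] = data
        return data

# After a failover, the standby controller effectively starts with an empty
# ReadCache, so every early read is a miss until the heatmap is rebuilt.
```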

New things – software

Sync replication – Peer persistence enhancements

The maximum number of replicated volumes went up from 128 to 512.

Peer persistence requires automatic switchover (ASO) – short break here: a customer once asked me whether PP is just another name for sync replication. Not really; you can have sync replication without peer persistence. The latter means you sync replicate something and, thanks to automatic switchover, the death of an array or site needs no manual intervention – the surviving site/array brings the volume back online by itself. If you don’t need this, you can still do manual activation, but if you use ASO, that is what HPE calls Peer Persistence.

ASO – so PP – requires a witness, as multisite solutions do. Up to now the witness was a package you could deploy on CentOS. Now an OVA is being released, so with VMware you can deploy it easily. Requirements are pretty low: 1 vCPU, 4 GB RAM, 16 GB disk. This is called the witness appliance.
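
To illustrate why the witness matters, here is a purely conceptual sketch – not Nimble’s actual logic or API – of how a third vote prevents split-brain during automatic switchover:

```python
# Conceptual only: the third vote lets a surviving array distinguish
# "my partner is dead" from "I merely lost the replication link".
def should_activate(partner_reachable: bool, witness_reachable: bool) -> bool:
    if partner_reachable:
        return False   # partner is alive: keep replicating, no switchover
    if witness_reachable:
        return True    # partner gone and the witness agrees: bring volumes online
    return False       # fully isolated: stay passive to avoid split-brain

# Example: site A loses the array at site B but still sees the witness,
# so it activates the replicated volumes automatically.
assert should_activate(partner_reachable=False, witness_reachable=True) is True
```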

Fan-out replication

This may be familiar to some from the 3PAR world, and now it is coming to Nimble too. Volumes can be async replicated to two other arrays, and the replication schedules can have different timing and different snapshot protection rules defined for each target.

The GUI has changed a little, since the data protection schedule is no longer defined at the volume collection level but per replication partner.
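
As a rough illustration of what “per replication partner” means, here is a conceptual data model in Python – the partner names and fields are made up, this is not the NimbleOS API:

```python
# Conceptual data model only - hypothetical names, not Nimble's API.
from dataclasses import dataclass

@dataclass
class PartnerSchedule:
    partner: str            # downstream replication partner
    every_minutes: int      # how often changes are shipped
    retain_snapshots: int   # snapshot protection kept on that partner

# One volume collection fanning out to two arrays with independent rules:
fan_out = [
    PartnerSchedule(partner="array-dr-site-1", every_minutes=15, retain_snapshots=96),
    PartnerSchedule(partner="array-archive", every_minutes=240, retain_snapshots=30),
]
```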

Target Driven Zoning – Smart-SAN

Bad news first: only Brocade is supported so far, so if you have Cisco Nexus or MDS, this is not your day. TDZ lets you do zoning automatically, straight from the Nimble UI. This sounds great when you deploy a greenfield solution, but it also matters when new hosts need a volume from the array, or when new hosts are procured and you just want to give them the VMware datastore the other hosts already have.
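
Conceptually, target driven zoning boils down to the array telling the fabric which initiators may talk to its target ports. Here is a purely illustrative sketch – made-up WWPNs, and not the Smart-SAN or Brocade API:

```python
# Purely conceptual: build a zone definition from the array's point of view.
# WWPNs below are made-up examples; no real fabric API is being called.
def build_zone(zone_name: str, target_wwpn: str, initiator_wwpns: list[str]) -> dict:
    return {
        "name": zone_name,
        "members": [target_wwpn, *initiator_wwpns],  # target port + allowed hosts
    }

zone = build_zone(
    "nimble_esx_cluster1",
    target_wwpn="56:c9:ce:90:00:00:00:01",
    initiator_wwpns=["10:00:00:10:9b:aa:bb:01", "10:00:00:10:9b:aa:bb:02"],
)
# The array pushes definitions like this to the (Brocade) fabric itself,
# instead of an admin creating each zone by hand.
```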

Many new things were released around the dHCI offering, but I am saving those for a separate post. To me it looks like Nimble is moving at an even higher pace than before, and I still think this is the best midrange array out there, no doubt.

I recommend following Nick Dyer, who is the best source for Nimble Storage related information, since he is a CTO at HPE.