I covered the power of Infosight based on a Nimble array in the last two articles, but hey, that is in the cloud… so let's descend into the controllers' deep heart and squeeze out the most information we can. First of all, some details that cannot be found on the GUI, or are not that detailed there.
As a start, open up your favourite SSH client and connect to the array. Run the "array --info name_of_your_array" command. The output shows how many additional cards are in the controllers, with how many ports of what type, and besides that it shows all aspects of capacity and consumed space. Down below you can see that my snapshots are compressed 201.7:1.
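To put that ratio in perspective: a compression ratio of r means only 1/r of the logical data is physically stored, so the space saved is 1 - 1/r. A quick sanity check with awk (201.7 is the figure from my array above):

```shell
# Space savings implied by a compression ratio r is 1 - 1/r.
# 201.7:1 is the snapshot compression ratio reported by "array --info" above.
awk 'BEGIN { r = 201.7; printf "%.2f%% saved\n", (1 - 1/r) * 100 }'
# prints "99.50% saved"
```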
If you take your audit requirements seriously, then you already use syslog, but here you can also get detailed information about who did what, when, and through which interface. The command is "auditlog --list". It can be filtered by user, by time period, or by the connection method used.
The above is just the list of events, so let's extract detailed information about a particular login. It is as simple as "auditlog --info eventnumber"; in my example this is displayed: user "admin" has logged in through the API, and the application name was Veeam. This is the backup software I use with this array.
The next command is my favourite: it lists all the physical disks in a group/array, "disk --list". I admit this is shown on the GUI too, but there you need to hover the mouse over the bay, and that is surely not a list, just a "tooltip". This command can help you identify and inventory your serial numbers in your CMDB if needed. The disks' status is reported too.
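For the CMDB use case, a saved capture of that output can be reshaped into CSV. A minimal sketch, assuming a made-up column layout -- the real "disk --list" header varies by NimbleOS version, so check your own output and adjust the field numbers:

```shell
# Sketch: turn a saved "disk --list" capture into CSV for a CMDB import.
# NOTE: this sample layout is a hypothetical illustration, NOT actual
# NimbleOS output -- verify the real columns before relying on it.
cat > /tmp/disk-list.txt <<'EOF'
Slot  Serial          Type  Size(GB)  State
1.1   WD-ABC123       HDD   4000      in use
1.15  S3EVNX0K100001  SSD   960       in use
EOF

# Skip the header row; emit slot,serial,state.
awk 'NR > 1 { printf "%s,%s,%s %s\n", $1, $2, $5, $6 }' /tmp/disk-list.txt
```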
You have all heard about the HPE SSDs that are affected by the PowerOnHours bug. Nimble arrays are not affected! Anyway, I think it might be important to fetch the SMART data from the drives in the array. This is quite simple: "disk --info disk_number" can do that and much more, like the exact firmware version on the disk, or the rebuild state if you have replaced that drive recently.
The next command's output is something that is almost all there on the GUI, but wait. Let's run "fc --list" -- well, I use my array over FC. This gives back the applicable port speeds and the WWNN and WWPN identifiers.
But let's build on this and run a detailed query for a specific port: "fc --info port_name --ctrlr A/B". It clearly states that the port on the array could do 16 Gbit but I am running 8 Gbit only, what the firmware on this "HBA" is, and the fabric WWN, as well as the initiators that are connected to this port.
Now that we are talking about initiators, we need to know the limits and maximums that an array or a group can support. The next command lists these, along with the consumed count for each of those types: "netconfig --list".
If we would like to know a bit more about controller connectivity, "ip --list" and "netconfig --list" are the primary commands. They report which controller is active and when the last network-related configuration change happened on the array. It is quite handy to see the actual port speeds here too.
Performance policies must not be new to anyone here, but how could we get more information about them in list form? Simply by running "perfpolicy --list", which shows their block size, caching and compression details too. If this is still not enough, they can be queried individually with "perfpolicy --info".
The "pool --info pool_name" command is useful when you need to report all kinds of capacity/consumption, volume and dedupe numbers. I have only one array in the pool, so it is not that groundbreaking here, but you can imagine.
If there is at least one additional shelf, the next command is your friend, but I would not discard it just because you don't have any. The output shows fan speeds and temperature specifics: "shelf --list".
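Those temperature readings lend themselves to a quick threshold check. A sketch over a saved capture -- the "sensor NAME temp: NN C" line format here is hypothetical, so match it against your real "shelf --list" output first:

```shell
# Sketch: flag temperature readings above a threshold in a saved capture.
# The line format below is made up for illustration, not real NimbleOS output.
cat > /tmp/shelf.txt <<'EOF'
sensor backplane    temp: 32 C
sensor controller-a temp: 47 C
sensor controller-b temp: 39 C
EOF

# Field 2 is the sensor name, field 4 the reading in Celsius.
awk '/temp:/ && $4 > 45 { print "warm sensor:", $2, "at", $4 "C" }' /tmp/shelf.txt
# prints "warm sensor: controller-a at 47C"
```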
I hope you already have a snapshot schedule, as it is free and the best last-resort protection you can have. Bulk output, especially the new data in each snapshot, can be extracted by running "snap --list --vol volume_name".
Select one and run "snap --info snapshot_name --vol volume_name"; this tells you the compression ratio of that snapshot, and whether it is replicated, online, or exported to any initiator. We will talk about the "Is Managed" part later.
Let's run some performance reports. The GUI does this just fine, but if you want to go deeper, like DiCaprio does in Inception, you are free to do so by running "stats --perf volume_name --iosize --interval 60" to have the typical IO size reported.
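The IO size matters because it links IOPS to throughput: MiB/s = IOPS × IO size. A quick sketch with assumed example numbers (5000 IOPS at a 32 KiB typical IO size -- substitute whatever the command above reports for your volume):

```shell
# Throughput implied by an IOPS figure and a typical IO size.
# 5000 IOPS and 32 KiB are assumed example numbers, not from my array.
awk 'BEGIN { iops = 5000; kib = 32; printf "%.2f MiB/s\n", iops * kib / 1024 }'
# prints "156.25 MiB/s"
```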
If you are keen to know the IO served by all disks -- you can query only one if you want to -- "stats --disk all --hdr 1" is at your service. This shows and updates the data every second. If only one disk matters to you, replace "all" with the disk number.
IOPS is one thing, but that is like describing a car by horsepower only, and not by both horsepower and torque. Latency is also important, and this command can report it for a volume, for example: "stats --perf volume_name --latency --interval 60". Sequential, random, read, write, etc.
Up to this point all commands were queries; they did not modify any configuration. But now let's set the -- IMHO -- single most important thing. Earlier I mentioned the "Is Managed" property in the snapshot section. A snapshot is unmanaged when it was not created by a schedule for replication or protection, but manually or by a 3rd-party application. These unmanaged snapshots are retained forever, so if the 3rd-party app or a human doesn't remove them, they will consume space till infinity.
Let's first check whether we have any unmanaged snapshots that are older than 30 days. Do this by running "group --autoclean_unmanaged_snapshots check --snap_ttl_unit days --snap_ttl 30".
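If you would rather eyeball the candidates yourself before letting the array clean anything up, the same 30-day cut-off can be sketched off-array against a saved snapshot list. The two-column name/date layout here is hypothetical, and GNU date (the -d option) is assumed:

```shell
# Sketch: list snapshots older than 30 days from a saved capture.
# The name/creation-date layout is made up for illustration;
# GNU date (-d) is assumed (Linux management host).
cat > /tmp/snaps.txt <<'EOF'
veeam-job1-2019-01-02  2019-01-02
adhoc-before-upgrade   2019-03-15
EOF

cutoff=$(date -d '30 days ago' +%s)
while read -r name created; do
  if [ "$(date -d "$created" +%s)" -lt "$cutoff" ]; then
    echo "older than 30 days: $name"
  fi
done < /tmp/snaps.txt
```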
If we'd like to set up automatic removal of unmanaged snapshots after 30 days, enter the "group --autoclean_unmanaged_snapshots on --snap_ttl_unit days --snap_ttl 30" command. This is a group-level setting.
The Nimble CLI is quite smart, and if you want to utilize it further, your starting point should be the CLI guide on the Infosight page.