No practical experience yet, but we are in the process of making a quite large purchase (thousands of VMs), and I have done a number of technical deep dives on converged and hyperconverged options of various sorts.

Nutanix and SimpliVity are probably going to substantially outperform a traditional converged architecture. Why? Because the I/O is local to the VMs. The architecture is arranged such that the VM always has a local I/O option, with remote (in many cases rack-aware) duplicates. Mirrors are declustered, roughly analogous to GPFS/GSS or Isilon. The local copy is on the node, but the remote copy could be on any node. If a disk dies, its contents are quickly reconstructed in appropriately sized chunks on many other nodes. Nutanix doesn't have a special card; it's just a commodity node. You save a lot in cost, but you may not be quite as fast. For most workloads, it probably won't matter.
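To make the declustered-mirror idea concrete, here's a toy Python sketch (my own illustration, not either vendor's actual placement code): one copy stays local to the VM, the mirror lands on a hashed remote node, and a dead node's blocks get rebuilt onto many different survivors in parallel rather than onto a single hot spare.

```python
import hashlib
import random

NODES = [f"node{i}" for i in range(8)]  # hypothetical 8-node cluster

def place_replicas(block_id: str, local_node: str) -> list[str]:
    """One copy stays local to the VM; the mirror lands on a
    pseudo-randomly chosen remote node, so mirrors are declustered
    across the whole cluster instead of being paired node-to-node."""
    h = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    remotes = [n for n in NODES if n != local_node]
    return [local_node, remotes[h % len(remotes)]]

def rebuild_targets(failed_node: str,
                    placements: dict[str, list[str]]) -> dict[str, str]:
    """When a node dies, each block it held is re-replicated from its
    surviving copy onto some other node -- the rebuild work is spread
    over many source/target pairs instead of one disk pair."""
    targets = {}
    for block, nodes in placements.items():
        if failed_node in nodes:
            survivor = next(n for n in nodes if n != failed_node)
            candidates = [n for n in NODES
                          if n not in (failed_node, survivor)]
            targets[block] = random.choice(candidates)
    return targets
```

Placing a few hundred blocks from one node and then "failing" it shows the rebuild fan out across the cluster.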

SimpliVity, in particular, includes an NVRAM-backed FPGA ($$$$), which means all writes are accelerated, coalesced, deduped, and finally written out in chunks. The SSD is used as a read cache for hot data. Nutanix also does coalescing, dedup, and aggregation into its underlying distributed object store. Metadata is stored in Cassandra, replicated across nodes. Hot data is mirrored (plus ECC checksums) on SSD, and cold data is distributed to spinning disk using erasure coding. Rebalancing is automatic.
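As a rough illustration of the inline-dedup part (a toy content-addressed block store of my own, not either vendor's actual design): each block is fingerprinted, stored once, and further writes of identical content only bump a refcount.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: each unique block is stored
    once; duplicate writes only increment a reference count.
    An illustration of inline dedup, not a vendor implementation."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}     # sha256 digest -> block bytes
        self.refcount = {}   # sha256 digest -> number of logical writers

    def write(self, data: bytes) -> list[str]:
        """Chunk data into blocks, store only new content, and return
        the fingerprint list (the logical 'map' of this write)."""
        fingerprints = []
        for off in range(0, len(data), self.block_size):
            chunk = data[off:off + self.block_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.blocks:   # new content: store it once
                self.blocks[fp] = chunk
            self.refcount[fp] = self.refcount.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())
```

Writing the same "VM image" twice costs the physical space of one copy; only the fingerprint maps differ per VM.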

If you have a lot of VMs, you get a lot of advantage from this dedup. Pretty much all of the OS data will be deduped (unless you are using something like BitLocker or other OS-level encryption! Use some other form of encryption at rest so you keep dedup and compression.)
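To see why guest-level encryption kills dedup, here's a toy demonstration (the "encryption" is a fake keystream standing in for BitLocker, not real crypto): ten VMs cloned from the same OS image dedupe down to almost nothing, while the same image encrypted with ten per-VM keys shares no blocks at all.

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy per-VM 'encryption' via a hash-chained keystream.
    Stands in for BitLocker-style guest encryption; NOT real crypto."""
    stream = hashlib.sha256(key).digest()
    out = bytearray()
    for i, b in enumerate(data):
        if i % 32 == 0 and i > 0:
            stream = hashlib.sha256(stream).digest()  # advance keystream
        out.append(b ^ stream[i % 32])
    return bytes(out)

def unique_blocks(images: list[bytes], block_size: int = 4096) -> int:
    """Count distinct blocks across a set of images -- i.e. how many
    blocks a dedup store would actually have to keep."""
    seen = set()
    for img in images:
        for off in range(0, len(img), block_size):
            seen.add(hashlib.sha256(img[off:off + block_size]).digest())
    return len(seen)

os_image = bytes(1024) * 16                         # 16 KiB of identical OS bits
plain = [os_image for _ in range(10)]               # 10 VMs, same image
cipher = [toy_encrypt(os_image, bytes([i])) for i in range(10)]  # per-VM keys
```

The plaintext set collapses to a single stored block; the per-key ciphertexts produce a distinct block for every block of every VM.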

All data in a converged architecture (NetApp, Nimble, Infinidat, whatever) has to go through a switch, which reduces bandwidth: either 10GbE for iSCSI, or FC. You will immediately be limited by your interconnect. (You could theoretically use InfiniBand, but most storage nodes don't support it.)
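Some back-of-envelope arithmetic on that bottleneck (illustrative round numbers of my own, not vendor measurements):

```python
# Back-of-envelope only; real throughput depends on protocol overhead,
# queue depth, and the specific hardware. Figures in GB/s.
link_10gbe_GBps = 10 / 8        # one 10 Gb/s link ~ 1.25 GB/s best case
ssd_GBps = 0.5                  # one SATA SSD, ~500 MB/s sequential reads
ssds_per_node = 4               # assumed per-node flash count
nodes = 16                      # assumed cluster size

local_GBps_per_node = ssd_GBps * ssds_per_node    # 2.0 GB/s per node
cluster_local_GBps = local_GBps_per_node * nodes  # 32 GB/s aggregate
# A single node's local flash already exceeds one 10GbE path, and the
# cluster aggregate dwarfs any shared array frontend you'd buy.
```

That gap is the whole argument for keeping the first copy of the I/O local to the VM.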

You can test Nutanix for free. You can't do that with SimpliVity because of the card.

Rebalancing when adding more nodes requires an engagement with SimpliVity support engineering. Nutanix does it more or less automatically.

SimpliVity is architected around clusters of 8 machines, and you can federate up to 4 clusters into one "Federation" before starting the next one. Nutanix supports very large groups of nodes; I've heard there are 1,000-node Nutanix clusters out there, but I have no references yet.

Both have copious performance graphs and data you can view through the portal. Both can support multi-tenancy, with caveats. (In a virtual DR environment, where the customer owns the vCenter + Veeam, you don't really have multi-tenancy.) Nutanix appears to expose more data via SNMP and takes the REST API to the ultimate extreme (purportedly, you can do absolutely anything with it).

For a pure cost play, you'll probably find the converged architecture is significantly less expensive. From an administrative point of view, you'll probably save the most staff time with SimpliVity. People who run it claim to have saved many, many hours, because it's all integrated: you don't have to learn the storage array's peculiarities, do the storage setup separately from the machines, do the machine setup and LOM integration, and so on. It's vastly simplified by integrating it all into one pane of glass.

You can easily scale compute nodes in both SimpliVity and Nutanix, very cheaply. You can scale storage in Nutanix by mixing in a storage-dense node type (multiple node types are available). In SimpliVity you are constrained on storage by the FPGA: every storage node must have one, and they run about $100k+ list. (A performance/cost trade-off.)

With SimpliVity, clusters must all be the same node type. With Nutanix, you can mix and match.

I'll throw in a little plug for Nimble if you are thinking about Veeam for virtual backup: the snapshot mechanisms mesh together nicely! (Also, the performance is good, and the price might be on par with or less than a NetApp's, with as good or better performance.)

If performance and ease of management are the primary concerns, SimpliVity is definitely something to look at. There are some interesting case studies and reference accounts of people going from 10-30 racks of machines down to 3 (for example) and saving large amounts of money on colo space. If you're talking about a small environment, you probably won't see those ancillary consolidation savings from SimpliVity.

Hope that helps. I'm in the trenches now, too. :)


On 12/17/2016 7:18 AM, Joseph Kern wrote:
Is there anyone running Nutanix (or any "hyperconverged" architecture) at a large scale on this list?

I have a few questions:

1. Nutanix performance compared to Dell/HP + NetApp (do I need to over-purchase Nutanix to get similar performance results for the same hardware)?
2. Is there a way to just scale storage (I have a feeling you need to buy more compute as well)?
3. Common pitfalls in implementation or operations and maintenance?
4. Does this current generation of "hyperconverged" architecture seem as immature as I think it is?
5. What type of support and turnaround time does Nutanix offer?


--
Joseph A Kern
joseph.a.k...@gmail.com


_______________________________________________
Discuss mailing list
Discuss@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
  http://lopsa.org/
