> Question - has anyone deployed Thumper/ZFS (on NFS or Gluster/Lustre)
> running in a small-random-files environment?  Any thoughts?  If not,
> what are the common storage alternatives to address the unique
> requirements of random small files?
>   

While I haven't yet done any testing on it myself, in theory the ZFS
Hybrid Storage Pool concept would be the best possible solution.

The Hybrid Storage Pool extends ZFS in two ways:

1) Very fast, write-biased (SLC) SSDs are used for NVRAM-style logging
of synchronous writes.  Given that NFS is entirely synchronous, writing
directly to a fast SSD like this would really be excellent.  Of course,
NVRAM has been one of the reasons NetApp Filers have been so successful,
but 1GB of NVRAM is huge for NetApp... consider that we can now have
mirrored 36GB SSDs to speed up writes.  aka: LogZilla.  (See the example
zpool commands after this list.)

2) Fast, but less expensive and larger, read-biased (MLC) SSDs are used
to extend the in-memory ZFS Adaptive Replacement Cache (ARC).  Normally,
when the ZFS ARC can't grow, it starts (intelligently) evicting things
from DRAM... so the SSD extends the size of the cache by providing a
Level 2 ARC (L2ARC) that keeps data available at a tier faster than HDD
but just slower than DRAM.  aka: CacheZilla
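
Here's a rough sketch of how you'd attach both device types to an
existing pool.  The pool name ("tank") and the device names are made
up for illustration, so substitute your own:

  # Mirrored SLC SSDs as a dedicated intent log device (LogZilla)
  zpool add tank log mirror c4t0d0 c4t1d0

  # MLC SSDs as L2ARC cache devices (CacheZilla); cache devices
  # can't be mirrored, so just list them
  zpool add tank cache c5t0d0 c5t1d0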


So, when you combine ZFS with the power to extend it in these two ways,
you _should_ have the ultimate small, random I/O solution.  ....
should.
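
Once the devices are attached, you can watch the log and cache vdevs
doing their work (again, "tank" is just an example pool name):

  # Per-vdev I/O statistics, refreshed every 5 seconds
  zpool iostat -v tank 5

  # L2ARC hit/miss counters live in the ARC kstats
  kstat -m zfs -n arcstats | grep l2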


benr.
 
