On Aug 04, 2009 09:58 -0300, David Pratt wrote:
> Hi. Many thanks for your responses. Generally, the qualities of Lustre
> appear great for a Storage Repository for virtual machine images in
> XenServer, since you would get a combination of fault tolerance, a
> pretty much infinitely scalable distributed storage pool, speed, and
> the ability to migrate virtual machines across a number of hosts.
> XenServer can use NFS, iSCSI, NetApp, EqualLogic, or Fibre Channel
> storage repositories at this point. It appears there is some
> capability to create a plugin to allow for others. It is possible that
> the only way to get Lustre to work would be with the development of a
> plugin.
You can re-export Lustre via NFS to these clients.

> At this point, to create a minimal Lustre install to play with, how
> many machines will be required?

Depends on how available/robust you need the system to be. A
"functional" system can run on a single node (MDS+OSS+client+NFS
server). A highly available system needs at least 3 nodes (MDS, OSS,
client+NFSd), with the MDS and OSS doing failover for each other.
That said, unless you plan to scale beyond this (i.e. multiple OSS
nodes) you could just use a pair of nodes for an HA NFS configuration,
which is arguably less complex.

> On 3-Aug-09, at 8:01 PM, Klaus Steden wrote:
> > Hi David,
> >
> > I did some experiments last year with Lustre 1.6.x and a Dell iSCSI
> > enclosure. It was a little slow (proof of concept mainly) due to
> > sharing MDT and OST traffic on a single GigE strand, but as long as
> > the operating system presents a valid block device, Lustre works
> > fine.
> >
> > hth
> > Klaus
> >
> > On 7/31/09 11:13 AM, "Cliff White" <[email protected]> etched on
> > stone tablets:
> >
> >> David Pratt wrote:
> >>> Hi. I am exploring possibilities for pooled storage for virtual
> >>> machines. Lustre looks quite interesting for both tolerance and
> >>> speed. I have a couple of basic questions:
> >>>
> >>> 1) Can Lustre present an iSCSI target?
> >>
> >> Lustre doesn't present targets; we use targets, and we should work
> >> fine with iSCSI. We don't have a lot of iSCSI users, due to
> >> performance concerns.
> >>
> >>> 2) I am looking at physical machines with 4 1TB 24x7 drives in
> >>> each. How many machines will I need to cluster to create a
> >>> solution that provides a good level of speed and fault tolerance?
> >>
> >> 'It depends' - what is a 'good level of speed' for your app?
> >>
> >> Lustre IO scales as you add servers. Basically, if the IO is big
> >> enough, the client 'sees' the bandwidth of multiple servers.
> >> So, if you know the bandwidth of 1 server (sgp_dd or other raw IO
> >> tools help here) then your total bandwidth is going to be that
> >> figure, times the number of servers. This assumes whatever network
> >> you have is capable of sinking this bandwidth.
> >>
> >> So, if you know the IO you need, and you know the IO one server
> >> can drive, you just divide the one by the other.
> >>
> >> Fault tolerance at the disk level == RAID.
> >> Fault tolerance at the server level is done with shared storage
> >> failover, using linux-ha or other packages.
> >>
> >> hope this helps,
> >> cliffw
> >>
> >>> Many thanks.
> >>>
> >>> Regards,
> >>> David
> >>>
> >>> _______________________________________________
> >>> Lustre-discuss mailing list
> >>> [email protected]
> >>> http://lists.lustre.org/mailman/listinfo/lustre-discuss

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
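P.S. For anyone wanting to try the single-node "functional" layout
described above (MDS+OSS+client+NFS server on one machine), a minimal
command sketch follows. This assumes Lustre 1.8-era tools; the
filesystem name "testfs", the hostname "node1", and the device names
/dev/sdb and /dev/sdc are placeholders, not recommendations:

```shell
# All on one node ("node1"); run as root. Devices are hypothetical.

# Format one block device as the combined MGS/MDT, another as an OST.
mkfs.lustre --fsname=testfs --mgs --mdt /dev/sdb
mkfs.lustre --fsname=testfs --ost --mgsnode=node1@tcp0 /dev/sdc

# Mounting the targets starts the MDS and OSS services.
mkdir -p /mnt/mdt /mnt/ost0
mount -t lustre /dev/sdb /mnt/mdt
mount -t lustre /dev/sdc /mnt/ost0

# Mount the filesystem as a client on the same node.
mkdir -p /mnt/testfs
mount -t lustre node1@tcp0:/testfs /mnt/testfs

# Re-export the client mount over NFS for (e.g.) XenServer hosts.
echo '/mnt/testfs *(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra
```

For the 3-node HA variant, the targets would instead sit on shared
storage and be formatted with --failnode pointing at the partner
server, with mounts managed by linux-ha rather than done by hand.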
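Cliff's sizing rule (total bandwidth is roughly per-server bandwidth
times the number of servers, so divide the IO you need by the IO one
server can drive) can be sketched with hypothetical numbers — the
350 MB/s per-server figure and 2000 MB/s requirement below are made up
for illustration, not measurements:

```shell
#!/bin/sh
# Hypothetical inputs: sgp_dd (or another raw IO tool) shows one OSS
# can drive ~350 MB/s, and the application needs ~2000 MB/s aggregate.
per_server=350    # MB/s one server can drive
required=2000     # MB/s the application needs

# servers = ceil(required / per_server), via integer arithmetic
servers=$(( (required + per_server - 1) / per_server ))
echo "OSS nodes needed: $servers"
```

Rounding up matters: with these example numbers, 5 servers would only
deliver about 1750 MB/s, so 6 are needed — and, as noted above, this
assumes the network can actually sink the aggregate bandwidth.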
