On Wed, 19 Nov 2014, Simon Thompson (Research Computing - IT Services) wrote:


Yes - and what about the essentially random names that VMs get when spun up from an image?

For example, if I spin up a new VM, how does it join the GPFS cluster so that it can use the NSD protocol?


I *think* this bit should be solvable - assuming you can pre-define the range of names the nodes will have, and pre-populate your GPFS cluster config with those node names. The guest image should then have the full /var/mmfs tree (pulled from another GPFS node), but with the /var/mmfs/gen/mmfsNodeData file removed. When it starts up, it'll figure out "who" it is, regenerate that file, pull the latest cluster config from the primary config server, and start up.
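
To make that concrete, here's a rough first-boot sketch (untested - the /usr/lpp/mmfs/bin path and running this from the guest's init are my assumptions, and the node names would need adding to the cluster config up front, e.g. with mmaddnode):

#!/usr/bin/env python3
"""First-boot helper for a GPFS guest image - a rough sketch of the
approach described above, not a tested recipe.

Assumptions (not from the original post): GPFS binaries live in
/usr/lpp/mmfs/bin, the guest's hostname is one of the node names already
defined in the cluster config, and /var/mmfs was copied into the image
from an existing client node.
"""
import os
import subprocess

MMFS_BIN = "/usr/lpp/mmfs/bin"
NODE_DATA = "/var/mmfs/gen/mmfsNodeData"

def first_boot_join():
    # The node-identity file must not be carried over from the template
    # node - remove it here too in case image prep missed it.
    if os.path.exists(NODE_DATA):
        os.remove(NODE_DATA)

    # On startup the daemon should work out which cluster node this
    # hostname maps to, regenerate mmfsNodeData and pull the current
    # cluster configuration from the primary config server.
    subprocess.run([os.path.join(MMFS_BIN, "mmstartup")], check=True)

if __name__ == "__main__":
    first_boot_join()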



And how about attaching to the network? Neutron uses per-tenant networks, so how would you actually get access to the GPFS cluster?

This bit is where I can see the potential pitfall. OpenStack naturally uses NAT to handle traffic to and from guests, and GPFS expects node-to-node daemon traffic to work in both directions - will GPFS cope with NAT'ted clients in this way?
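
The reason I'm wary: as far as I know the NSD servers need to be able to open TCP connections back to the client on the daemon port (1191 by default), not just the other way round. A crude check one could run from a guest (nsd-server.example.com is a placeholder):

#!/usr/bin/env python3
"""Crude two-way connectivity check for the NAT question above - purely
illustrative. GPFS daemon traffic uses TCP port 1191 by default, and the
NSD servers need to reach the guest on it as well as vice versa.
nsd-server.example.com is a placeholder for one of your NSD servers.
"""
import socket

DAEMON_PORT = 1191
NSD_SERVER = "nsd-server.example.com"  # placeholder

# Outbound leg: this usually works even through NAT.
with socket.create_connection((NSD_SERVER, DAEMON_PORT), timeout=5):
    print(f"outbound to {NSD_SERVER}:{DAEMON_PORT} OK")

# Inbound leg: the part NAT / floating IPs tend to break. Start this in
# the guest (before GPFS itself, otherwise 1191 is already in use), then
# from an NSD server try e.g. "nc <guest-ip> 1191".
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", DAEMON_PORT))
srv.listen(1)
srv.settimeout(30)
try:
    conn, peer = srv.accept()
    print(f"inbound connection from {peer[0]}:{peer[1]} - NAT not in the way")
    conn.close()
except socket.timeout:
    print("no inbound connection within 30s - check NAT / security groups")
finally:
    srv.close()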


Fair point on NFS from Alex - but will you get the same multi-threaded performance from NFS as you would from native GPFS?

Also - could you make each hypervisor an NFS server for its guests, thus doing away with the need for CNFS, and removing the potential for the NFS server threads becoming a bottleneck? For instance - if I have 300 worker nodes and 7 NSD servers, I'd then have 300 NFS servers running, rather than 7. Direct block access to the storage from the hypervisor would also be possible (network configuration permitting), and the NFS traffic would flow only over a "virtual" network within the hypervisor, and so "should" (?) be more efficient.
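
Something like this on each hypervisor, perhaps (the /gpfs mount point and the 192.168.122.0/24 guest subnet are placeholders for whatever the real setup uses):

#!/usr/bin/env python3
"""Sketch of the per-hypervisor NFS export idea above. Generates an
/etc/exports entry restricted to the hypervisor-internal guest network,
so the NFS traffic never leaves the box. /gpfs and 192.168.122.0/24 are
placeholders.
"""
GPFS_MOUNT = "/gpfs"
GUEST_SUBNET = "192.168.122.0/24"   # hypervisor-internal bridge
# depending on the setup an explicit fsid= may also be wanted here
OPTIONS = "rw,sync,no_subtree_check,root_squash"

def exports_line(mount: str, subnet: str, options: str) -> str:
    return f"{mount} {subnet}({options})"

if __name__ == "__main__":
    # Append (or template) this into /etc/exports on each hypervisor,
    # then run "exportfs -ra".
    print(exports_line(GPFS_MOUNT, GUEST_SUBNET, OPTIONS))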





Simon
________________________________________
From: [email protected] [[email protected]] on 
behalf of Sven Oehme [[email protected]]
Sent: 19 November 2014 19:00
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS inside OpenStack guests

Technically there are multiple ways to do this:

1. You can use the NSD protocol for this; you just need adequate network resources (or use PCI pass-through of the network adapter to the guest).
2. You attach the physical disks as virtio block devices.
3. Pass-through of the block HBA (e.g. FC adapter) into the guest.

If you use virtio you need to make sure all caching is disabled entirely, or you will end up with major issues, and I am not sure about official support for this. 1 and 3 are straightforward ...

Sven





On Wed, Nov 19, 2014 at 8:35 AM, Orlando Richards
<[email protected]> wrote:
Hi folks,

Does anyone have experience of running GPFS inside OpenStack guests, to connect to an
existing (traditional, "bare metal") GPFS filesystem-owning cluster?

This is not about using GPFS for OpenStack block/image storage - it's about using GPFS as a "NAS"
service, with OpenStack guest instances acting as GPFS clients.


---
Orlando







--
            --
       Dr Orlando Richards
Research Facilities (ECDF) Systems Leader
       Information Services
   IT Infrastructure Division
       Tel: 0131 650 4994
     skype: orlando.richards

The University of Edinburgh is a charitable body, registered in Scotland, with 
registration number SC005336.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
