On 19/11/2014 14:56, "[email protected]" <[email protected]> wrote:
>> And how about attaching to the network as neutron networking uses per
>> tenant networks, so how would you actually get access to the gpfs
>> cluster?
>
> This bit is where I can see the potential pitfall. OpenStack naturally
> uses NAT to handle traffic to and from guests - will GPFS cope with
> NAT'ted clients in this way?

Not necessarily. I was thinking about this, and you could potentially
create an external shared network bound to your GPFS interface (something
along the lines of the sketch at the bottom of this mail), though there are
possible security questions around exposing a real internal network device
to a VM. I think there is also a Mellanox driver for the VPI Pro cards which
allows you to pass the card through to instances - I can't remember if that
was just acceleration for Ethernet or if it could do IB as well.

> Also - could you make each hypervisor an NFS server for its guests, thus
> doing away with the need for CNFS, and removing the potential for the nfs
> server threads bottlenecking? For instance - if I have 300 worker nodes,
> and 7 NSD servers - I'd then have 300 NFS servers running, rather than 7

Would you then not need 300 server licenses, though?

Simon
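
A rough sketch of the shared-network idea, purely for illustration - it
assumes a flat provider network whose physical_network label (here
"physnet-gpfs") has already been mapped to the GPFS-facing bridge/NIC in
the ML2 config on the compute hosts; the credentials, physnet name and
CIDR are all placeholders:

    # Sketch only: create a shared, "external" flat provider network with
    # python-neutronclient. "physnet-gpfs", the auth details and the CIDR
    # are placeholders and assume the matching ML2 physnet mapping exists
    # on the compute nodes.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Network visible to all tenants, bound to the GPFS-facing interface
    net = neutron.create_network({'network': {
        'name': 'gpfs-net',
        'shared': True,
        'router:external': True,
        'provider:network_type': 'flat',
        'provider:physical_network': 'physnet-gpfs',
    }})

    # Subnet on the same L2 segment as the GPFS cluster (placeholder CIDR);
    # DHCP is left off here so guest addressing can follow the existing
    # GPFS network plan.
    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '10.20.0.0/16',
        'enable_dhcp': False,
    }})

Whether a flat provider network like this is acceptable really comes back
to the security question above about exposing the real GPFS network to
guests.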
