The question is: what difference does it make? As I mentioned, if all your 2 or 3 nodes do is serve NFS, it doesn't matter. Whether the protocol nodes or the NSD services are down, in both cases it means no access to the data, so it makes no sense to separate them here
(unless the load demands it).
I haven't seen nodes reboot specifically because of protocol issues lately;
the fact that everything runs in userspace makes things easier, too.

sven

------------------------------------------
Sven Oehme
Scalable Storage Research
email: oeh...@us.ibm.com
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------



From:   Zachary Giles <zgi...@gmail.com>
To:     gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   03/06/2016 02:31 AM
Subject:        Re: [gpfsug-discuss] Small cluster
Sent by:        gpfsug-discuss-boun...@spectrumscale.org



Sven,
What about the stability of the new protocol nodes vs. the old cNFS? If you
remember, back in the day, cNFS would sometimes hit a problem and reboot
the whole server. Obviously that was problematic if it was one of the
few servers running your cluster. I assume this is different now with the
Protocol Servers?


On Sat, Mar 5, 2016 at 1:40 PM, Marc A Kaplan <makap...@us.ibm.com> wrote:
  Indeed, it seems to just add overhead and expense to split across two
  nodes what one node can do!


  _______________________________________________
  gpfsug-discuss mailing list
  gpfsug-discuss at spectrumscale.org
  http://gpfsug.org/mailman/listinfo/gpfsug-discuss




--
Zach Giles
zgiles@gmail.com


