Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
I have a very different experience with NFS: we are using Gigabit Ethernet and a 64-bit RHEL6 clone with ECC memory as a file server; it has RAID1 ext4 home directories and RAID6 ext4 for synchrotron data. We have had zero performance or reliability problems with this in a computer lab with ~10 workstations, and I have seen 115 MB/s file transfers via NFS at peak times. Just make sure to export using the async option. HTH, Kay

On Wed, 31 Jul 2013 09:21:48 +0900, Francois Berenger beren...@riken.jp wrote: Be careful: running data-intensive jobs over NFS is super slow (at least an order of magnitude slower than writing to a local disk). Not only is the computation slow, but you may be slowing down all the other users of the cluster too... F.
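For reference, the async export Kay describes is set per filesystem in /etc/exports on the server. The paths and client subnet below are examples, not Kay's actual configuration:

```shell
# /etc/exports on the file server (paths and subnet are illustrative).
# 'async' lets the server acknowledge writes before they hit disk,
# trading crash safety for throughput; 'sync' is the safe default.
/home              192.168.1.0/24(rw,async,no_subtree_check)
/data/synchrotron  192.168.1.0/24(rw,async,no_subtree_check)

# Re-read the exports table without restarting the NFS server:
# exportfs -ra
```
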
Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
We have a very similar setup, and I can only second Kay's experience. Best regards, Dirk.

On 31.07.13 13:36, Kay Diederichs wrote: [quoted message trimmed; see Kay's post above]

--
***
Dirk Kostrewa
Gene Center Munich, Department of Biochemistry
Ludwig-Maximilians-Universität München
Feodor-Lynen-Str. 25, D-81377 Munich, Germany
Phone: +49-89-2180-76845 Fax: +49-89-2180-76999
E-mail: kostr...@genzentrum.lmu.de WWW: www.genzentrum.lmu.de
***
Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
We don't have any performance or reliability issues with our cheapskate setup either. Make sure the network is wired with Cat5e or Cat6 cables, especially for runs of 8 m or more. Dmitry

On 2013-07-31, at 7:36 AM, Kay Diederichs wrote: [quoted message trimmed; see Kay's post above]
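A bad cable often shows up as a link that silently negotiates down to 100 Mb/s. A quick way to check what each workstation actually negotiated (interface name is an example; requires ethtool and root):

```shell
# Show negotiated link speed and duplex for one interface:
ethtool eth0 | grep -E 'Speed|Duplex'
# Expect "Speed: 1000Mb/s" and "Duplex: Full"; 100Mb/s or
# half duplex usually points to a faulty cable or switch port.
```
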
Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
At high load levels, async is a dangerous option. It means that once an NFS client has copied its data to the NFS server (i.e. to the server's memory, not its disk), it accepts the acknowledgment and carries on, assuming the data have been committed. With the sync option, the acknowledgment is not sent until the server has received confirmation from the disks that the data are safely committed. In short, with async you are playing Russian roulette with your data if the server dies unexpectedly or the cache fills up in a nasty way.

In practice, neither usually makes much difference. The key thing is how much data you transfer at once, because the NFS overhead of managing a transaction is quite large. In contrast, using noatime is probably what everyone wants; leave the client and server to negotiate the largest possible rsize and wsize (e.g. 1 MB). So, write 1 byte at a time and performance is sludge; write 1 megabyte at a time and you should get line speed (e.g. ~120 MB/s for gigabit Ethernet). Some old CCP4 programs (FFT, I believe) used disk-based Unix sorts, which approximated the first scenario and were absolutely dreadful over NFS. All such things should be directed at local disks or even ramdisks if possible. Hope this helps, Robert

--
Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK
Emails: rob...@strubi.ox.ac.uk and rob...@well.ox.ac.uk Tel: (+44) 1865 287783 Fax: (+44) 1865 287547

Original message: Date: Wed, 31 Jul 2013 12:36:59 +0100, From: Kay Diederichs kay.diederi...@uni-konstanz.de (via CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK), Subject: Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster. [quoted message trimmed; see Kay's post above]
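Robert's point about transfer size is easy to demonstrate with dd, writing the same amount of data byte-by-byte versus in one large block. This sketch writes to a local temp file; over NFS the gap is far larger still, because each small write can become its own round trip:

```shell
# Write 1 MiB one byte at a time (one syscall per byte - very slow):
dd if=/dev/zero of=/tmp/dd_bytewise bs=1 count=1048576 2>/dev/null

# Write the same 1 MiB as a single 1 MiB block (fast):
dd if=/dev/zero of=/tmp/dd_blockwise bs=1M count=1 2>/dev/null

# Both files are identical in size; only the write pattern differed:
ls -l /tmp/dd_bytewise /tmp/dd_blockwise
```

The matching client-side tuning Robert mentions would be mount options along the lines of `rsize=1048576,wsize=1048576,noatime` (values are examples; modern clients and servers negotiate these automatically).
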
Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
I was hoping to avoid this discussion... It is true that if the server dies, you lose your data. But for crystallographic calculations this is true for async as well as for sync: just repeat the calculation if it dies midway. It's as simple as that. This is probably very different for airline bookings, banking transactions and the like.

relatime is the default NFS mount option nowadays (at least on RHEL; just check /proc/mounts). It has been available for ~5 years and has the performance benefits of noatime.

Actually, I cannot remember the last time our file server died from a software problem; it is certainly longer than 5 years ago, maybe 10. By far the biggest hardware problem in our lab is hard disks; I'd estimate one problem (typically Current_Pending_Sector and/or Offline_Uncorrectable in the SMART output) every 1-2 months per 30 disks. Suitable RAIDs go a long way toward keeping the problem in check. Kay

On Wed, 31 Jul 2013 13:41:24 +0100, Robert Esnouf rob...@strubi.ox.ac.uk wrote: [quoted message trimmed; see Robert's post above]
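Both checks Kay mentions are one-liners. The device name in the smartctl line is an example and needs smartmontools plus root:

```shell
# Show the mount options in effect for the root filesystem;
# on recent kernels 'relatime' should appear among them:
awk '$2 == "/" {print $4}' /proc/mounts

# Check a disk's SMART attributes for the failure indicators
# Kay mentions (example device; run as root):
# smartctl -A /dev/sda | grep -E 'Current_Pending_Sector|Offline_Uncorrectable'
```
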
Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
Dear Sergei, the second point is probably easier to do. An alternative to NFS is sshfs. The advantage is that it uses SSH, which is installed by default and configured the same way. If you generate key pairs you can use ssh or sshfs without a password. Check the page below:
http://www.howtoforge.com/mounting-remote-directories-with-sshfs-on-ubuntu-11.10
Typically LDAP is used for centralised authentication, but NIS is probably just as good. The page below covers the client setup:
https://help.ubuntu.com/community/LDAPClientAuthentication
Both of the above are more likely to survive upgrades. Adam
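The passwordless sshfs setup Adam describes boils down to a few commands. The hostname `fileserver`, the user `crystal` and the remote path are placeholders for illustration:

```shell
# Generate a key pair once (no passphrase here, for unattended mounts)
# and install the public key on the server:
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
ssh-copy-id crystal@fileserver

# Mount the remote home directory over SSH, no password prompt:
mkdir -p ~/remote_home
sshfs crystal@fileserver:/home/crystal ~/remote_home

# Unmount when done:
fusermount -u ~/remote_home
```
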
Re: [ccp4bb] Advise on setting up/ maintaining a Ubuntu cluster
Be careful: running data-intensive jobs over NFS is super slow (at least an order of magnitude slower than writing to a local disk). Not only is the computation slow, but you may be slowing down all the other users of the cluster too... F.

On 07/30/2013 11:28 PM, Adam Ralph wrote: [quoted message trimmed; see Adam's post above]