Re: [Beowulf] Frontier Announcement

2019-05-13 Thread Janne Blomqvist
. People over here have steam coming out of their ears when they hear the prices for Tesla GPUs, and the recent NV data center licensing changes. -- Janne Blomqvist, D.Sc. (Tech.), Scientific Computing Specialist Aalto University School of Science, PHYS & NBE +358503841576 || janne.bl

[Beowulf] MLNX_OFED vs. rdma-core vs. MLNX_OFED with rdma-core

2019-09-23 Thread Janne Blomqvist
MLNX_OFED 5.0? And thus MLNX_OFED will become a much "thinner" add-on than currently? Has anyone tested these different configurations, to see if there's any difference in performance and/or functionality?
1. Distro RDMA stack (rdma-core)
2. MLNX_OFED full
3. MLNX_OFED RPMS_UPSTREAM_LIBS
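A minimal comparison sketch (not from the thread), assuming nodes provisioned with each of the three stacks and the usual perftest/libibverbs tools installed; the hostname and HCA device name below are placeholders:

#!/bin/bash
# Which stack is installed? ofed_info only exists with MLNX_OFED;
# otherwise query the distro rdma-core package.
ofed_info -s 2>/dev/null || rpm -q rdma-core

DEV=mlx5_0     # adjust to your HCA, see `ibv_devinfo`
SERVER=node01  # peer node running the matching perftest server side

# RDMA write bandwidth and latency; repeat on each configuration and
# compare the reports.
ib_write_bw  -d "$DEV" --report_gbits "$SERVER"
ib_write_lat -d "$DEV" "$SERVER"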

Re: [Beowulf] MLNX_OFED vs. rdma-core vs. MLNX_OFED with rdma-core

2019-10-02 Thread Janne Blomqvist
On 23/09/2019 11.16, Janne Blomqvist wrote:
> Hello,
>
> scouring the release notes for the latest MLNX_OFED (version
> 4.6-1.0.1.1, and no, still no RHEL 7.7 support), I read a note about an
> upcoming API change at
> https://docs.mellanox.com/display/MLNXOFEDv461000/Change
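For what it's worth, a quick generic check (not from the release notes; paths and package names vary by distro) of which libibverbs a binary actually resolves at run time, and which package owns it:

ldd "$(command -v ib_write_bw)" | grep libibverbs
rpm -qf /usr/lib64/libibverbs.so.1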

Re: [Beowulf] [EXTERNAL] Re: Is Crowd Computing the Next Big Thing?

2019-11-28 Thread Janne Blomqvist
y, so can't double-check) consumed about 170W when idle. -- Janne Blomqvist

Re: [Beowulf] Is Crowd Computing the Next Big Thing?

2019-11-28 Thread Janne Blomqvist
or be thrown in the trash. Of course, if they pay for compute performance (for some suitable definition of performance) rather than just cpu-hours, old phones will be less power efficient. -- Janne Blomqvist

Re: [Beowulf] alpine linux

2021-01-30 Thread Janne Blomqvist
e? AFAIU the libm in musl hasn't received anywhere close to the same level of attention to correctness and performance as the glibc one. Further, depending on how you provision accounts in your cluster, the lack of NSS in musl might be a problem. -- Janne
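As a concrete illustration of the NSS point (my own check, placeholder user name): on a glibc system getent resolves accounts through whatever nsswitch.conf points at (sssd, LDAP, ...), whereas musl has no NSS module support, so only local files (or an nscd-protocol daemon) are consulted:

# Resolves via NSS modules on glibc; on a musl/Alpine image network
# accounts are typically not visible unless copied into /etc/passwd.
getent passwd someclusteruser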

Re: [Beowulf] [beowulf] nfs vs parallel filesystems

2021-09-18 Thread Janne Blomqvist
1 MPI task per node
SEGMENTCOUNT=100
# Offset must be equal to ntasks-per-node
OFFSET=1
# -C/-Q reorder tasks so each rank reads back data written by a different
# node (defeating the client cache); -W/-R verify the data after write/read.
srun IOR -a POSIX -t 1000 -b 1000 -s $SEGMENTCOUNT -C -Q $OFFSET -e -i 5 -d 10 -v -w -r -W -R -g -u -q -o testfile
This should fail due to corruption within minutes if the testfile is on NFS. Not saying a