Re: [Beowulf] If I were specifying a new cluster...

2018-10-11 Thread Jonathan Engwall
You can narrow things down; for instance, you could start with warranty and support. Location is a factor too. You want every kind of stability: reliable power, cold water, and proximity to your suppliers, factoring in airports or interstate highways.

On October 11, 2018, at 3:07 PM, Chris Samuel

Re: [Beowulf] If I were specifying a new cluster...

2018-10-11 Thread Chris Samuel
On 12/10/18 08:50, Scott Atchley wrote:
> Perhaps Power9 or Naples with 8 memory channels? Also, Cavium ThunderX2.
I'm not sure whether Power or ARM (yet) qualify for the general HPC workload that Doug mentions; sadly, a lot of the commercial codes are only available for x86-64 these days. MATLAB
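
A quick way to catch this in practice is to check the machine string a node reports before assuming x86-only commercial binaries will run. A minimal sketch using POSIX uname(2) follows; the "x86_64" string is what Linux reports on Intel/AMD 64-bit, while Power and ARM nodes report "ppc64le" and "aarch64" respectively.

    /* arch_check.c -- sketch: is this node x86-64? */
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) { perror("uname"); return 2; }
        printf("machine: %s\n", u.machine);
        /* exit 0 on x86-64, 1 otherwise (e.g. ppc64le, aarch64) */
        return strcmp(u.machine, "x86_64") == 0 ? 0 : 1;
    }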

Re: [Beowulf] If I were specifying a new cluster...

2018-10-11 Thread Scott Atchley
What do your apps need?
• Lots of memory? Perhaps Power9 or Naples with 8 memory channels? Also, Cavium ThunderX2.
• More memory bandwidth? Same as above (a STREAM-style triad, sketched below, is a quick way to put a number on this).
• Max single-thread performance? Intel or Power9?
• Are your apps GPU-enabled? If not, do you have the budget/time/expertise to do the work?
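
For the memory-bandwidth question, here is a minimal STREAM-style triad sketch. It is not the official STREAM benchmark; the array size and repeat count are arbitrary assumptions, and it is single-threaded, so it gauges one core's sustained bandwidth rather than the whole socket's (real STREAM uses OpenMP to saturate all channels).

    /* triad.c -- minimal STREAM-style triad, illustrative sketch only.
     * Array size and repeat count are arbitrary; single-threaded. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 25)   /* 32M doubles (256 MB) per array: well past cache */
    #define NTIMES 10

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) { perror("malloc"); return 1; }

        for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int k = 0; k < NTIMES; k++)
            for (size_t i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];   /* two reads + one write per element */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double bytes = 3.0 * sizeof(double) * N * (double)NTIMES;
        /* printing a[0] keeps the compiler from discarding the loop */
        printf("triad: %.2f GB/s (check: a[0]=%g)\n", bytes / secs / 1e9, a[0]);

        free(a); free(b); free(c);
        return 0;
    }

Compile with something like "cc -O2 triad.c -o triad" and run it on the candidate node.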

Re: [Beowulf] If I were specifying a new cluster...

2018-10-11 Thread Benson Muite
On 10/11/18 10:08 PM, Douglas Eadline wrote:
> All:
>
> Over the last several months I have been reading about:
>
> 1) Spectre/meltdown
> 2) Intel Fab issues
> 3) Supermicro MB issues
>
> I started thinking, if I were going to specify a
> single rack cluster, what would I use?
>
> I'm assuming a

[Beowulf] If I were specifying a new cluster...

2018-10-11 Thread Douglas Eadline
All:

Over the last several months I have been reading about:

1) Spectre/meltdown
2) Intel Fab issues
3) Supermicro MB issues

I started thinking: if I were going to specify a single-rack cluster, what would I use? I'm assuming a general HPC workload (not deep learning or analytics). I need to
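
On the Spectre/Meltdown point: Linux kernels from roughly 4.15 onward report their mitigation status under /sys/devices/system/cpu/vulnerabilities/. A small C sketch to dump it follows (assumes Linux; on older kernels the directory simply won't exist). Handy when weighing how much node performance the mitigations are costing on candidate hardware.

    /* vulns.c -- sketch: dump the kernel's CPU vulnerability/mitigation status. */
    #include <stdio.h>
    #include <dirent.h>

    int main(void) {
        const char *dir = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dir);
        if (!d) { perror(dir); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.') continue;       /* skip "." and ".." */
            char path[512], line[256];
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            FILE *f = fopen(path, "r");
            if (!f) continue;
            if (fgets(line, sizeof line, f))
                printf("%-24s %s", e->d_name, line); /* sysfs line ends in '\n' */
            fclose(f);
        }
        closedir(d);
        return 0;
    }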