Re: [Beowulf] Frontier Announcement

2019-05-12 Thread Chris Samuel
On Thursday, 9 May 2019 7:51:26 AM PDT Tony Brian Albers wrote:

> Please stop, both of you.

Sorry for not seeing this before; I was away at the Cray User Group and so not 
keeping up with email (I know, what else is new).

I'm disturbed by this thread, distressed at the threats that have been 
mentioned and concerned by the way the thread developed.

I understand from close family experience how receiving threats of violence 
can set a person up for unexpected behaviours later on, when otherwise 
inoffensive triggers cause hurt, offence and anxiety, and how distressing 
that can be to those on the receiving end. 

That said, I expect this to be the end of this branch of this thread.  No 
responses please, either to the list or privately, this is the end of the 
matter.

Thank you,
Chris (list administrator)
-- 
  Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA



___
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf


Re: [Beowulf] [EXTERNAL] Re: Frontier Announcement

2019-05-12 Thread Lux, Jim (337K) via Beowulf
ML has been around forever (way back to perceptrons; I built a 2-neuron, 3-input 
classifier using op-amps in 1974). It was the computational horsepower that 
made empirical experiments with more than 3 layers possible, along with some 
datasets to train and test with.
Without huge datasets of labeled images, for instance, it is hard to try new 
approaches.
Fast computers, sitting on a desk, with no need to ask a “high performance 
computing resource allocation committee” for runtime, let someone say “hey, 
what if I stack up 10 layers of neurons”, grind it overnight, and find “hey, 
it works; I don’t have any clue why, but it’s cool nonetheless”.
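The “what if I stack up 10 layers of neurons” experiment can be sketched in a few lines. A minimal illustration (mine, not anything from the thread; the layer width, count, and initialization are arbitrary) of a forward pass through a stack of fully connected ReLU layers in NumPy, with the layer count as the knob:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers with ReLU."""
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(0.0, h @ W + b)  # affine transform, then ReLU
    return h

rng = np.random.default_rng(0)
n_layers, width = 10, 32  # "what if I stack up 10 layers?"
weights = [rng.normal(0.0, 0.1, (width, width)) for _ in range(n_layers)]
biases = [np.zeros(width) for _ in range(n_layers)]

x = rng.normal(size=width)   # one dummy input vector
out = mlp_forward(x, weights, biases)
print(out.shape)             # (32,)
```

Training such a stack (backpropagation over a large labeled dataset) is exactly the part that became cheap enough to try casually once GPUs arrived.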

Then came some commercial applications (things like sorting images or processing 
video streams) and, all of a sudden, there you go.

This, to me, is the true value of Beowulf: commodity hardware (meaning cheap), 
ganged up in multiples, gives you a “personal” HPC capability.   Something 
a bit more than a fast desktop machine, but one whose use (or non-use) I don’t 
have to justify.  A speedup of 10-50X is the difference between “one try 
overnight” and “one try in a few minutes when I have some spare time in between 
the other 13 things I have to do”.


From: Beowulf on behalf of "beowulf@beowulf.org"
Reply-To: John Hearns 
Date: Thursday, May 9, 2019 at 9:54 AM
To: "beowulf@beowulf.org" 
Subject: [EXTERNAL] Re: [Beowulf] Frontier Announcement

Gerald, that is an excellent history.
One small thing though: "Of course then ML came along".
Which came first, the chicken or the egg? Perhaps the Nvidia ecosystem made the 
ML revolution possible.
You could run ML models on a cheap workstation or a laptop with an Nvidia GPU.
Indeed I am sitting next to my Nvidia Jetson Nano - 90 dollars for a GPU which 
can do deep learning.
Prior to CUDA etc. you could of course do machine learning, but it was mostly 
being done in universities.
I stand to be corrected.





On Thu, 9 May 2019 at 17:40, Gerald Henriksen <ghenr...@gmail.com> wrote:
On Wed, 8 May 2019 14:13:51 -0400, you wrote:

>On Wed, May 8, 2019 at 1:47 PM Jörg Saßmannshausen <sassy-w...@sassy.formativ.net> wrote:
>>
>Once upon a time portability, interoperability, standardization, were
>considered good software and hardware attributes.
>Whatever happened to them?

I suspect in a lot of cases they were more ideals and goals than
actual things.

Just look at the struggles the various BSDs have in getting a lot of
software running, given the inherent Linuxisms that creep in.

In the case of what is relevant to this discussion, CUDA, Nvidia saw
an opportunity (and perhaps also reacted to the threat of not having
their own CPU to counter the integrated GPU market) and invested
heavily into making their GPUs more than simply a 3D graphics device.

As Nvidia built up the libraries and other software to make it easier
for programmers to get the most out of Nvidia hardware, AMD and
Intel ignored the threat until it was too late, and partial attempts
at open standards struggled.

And programmers, given the choice between struggling with OpenCL or
other options and going with CUDA with its tools and libraries, went
for what gave them the best performance and the easiest implementation
(a win/win).

Of course then ML came along and suddenly AMD and Intel couldn't
ignore the market anymore, but they are both struggling from a distant
second place to replicate the CUDA ecosystem...