Re: [Beowulf] HPC workflows

2018-12-09 Thread Douglas O'Flaherty
> On Dec 9, 2018, at 7:26 AM, Gerald Henriksen wrote: > >> On Fri, 7 Dec 2018 16:19:30 +0100, you wrote: >> >> Perhaps for another thread: >> Actually I went to the AWS User Group in the UK on Wednesday. Very >> impressive, and there are the new Lustre filesystems and MPI networking. >> I guess

Re: [Beowulf] HPC workflows

2018-12-09 Thread John Hearns via Beowulf
> but for now just expecting to get something good without an effort is probably premature. Nothing good ever came easy. Who said that? My Mum. And she was a very wise woman. On Sun, 9 Dec 2018 at 21:36, INKozin via Beowulf wrote: > While I agree with many points made so far I want to

Re: [Beowulf] HPC workflows

2018-12-09 Thread INKozin via Beowulf
While I agree with many points made so far I want to add that one aspect which used to separate a typical HPC setup from some IT infrastructure is complexity. And I don't mean technological complexity (because technologically HPC can be fairly complex) but the diversity and the interrelationships

Re: [Beowulf] HPC workflows

2018-12-09 Thread Gerald Henriksen
On Fri, 7 Dec 2018 16:19:30 +0100, you wrote: >Perhaps for another thread: >Actually I went to the AWS User Group in the UK on Wednesday. Very >impressive, and there are the new Lustre filesystems and MPI networking. >I guess the HPC World will see the same philosophy of building your setup >using

Re: [Beowulf] HPC workflows

2018-12-07 Thread Lux, Jim (337K) via Beowulf
Monolithic static binaries - better have a fat pipe to the server to load the container on your target. On 12/7/18, 10:47 AM, "Beowulf on behalf of Jan Wender" wrote: > On 07.12.2018 at 17:34 John Hanks wrote: > In my view containers are little more than incredibly complex
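
A minimal sketch of the static-binary analogy Jim is drawing, with a hypothetical solver.c: statically linked, the binary carries all its libraries with it, much like a container image, and is just as bulky to ship.

    # Hypothetical source file; -static pulls every library into the binary.
    gcc -static -O2 -o solver solver.c -lm
    ldd solver                     # reports "not a dynamic executable"
    scp solver cluster:bin/        # hence the need for a fat pipe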

Re: [Beowulf] HPC workflows

2018-12-07 Thread Jan Wender
> On 07.12.2018 at 17:34 John Hanks wrote: > In my view containers are little more than incredibly complex static binaries Thanks for this! I was wondering if I am the only one thinking it. - Jan

Re: [Beowulf] HPC workflows

2018-12-07 Thread Lux, Jim (337K) via Beowulf
On 12/7/18, 8:46 AM, "Beowulf on behalf of Michael Di Domenico" wrote: On Fri, Dec 7, 2018 at 11:35 AM John Hanks wrote: > > But, putting it in a container wouldn't make my life any easier and would, in fact, just add yet another layer of something to keep up to date.

Re: [Beowulf] HPC workflows

2018-12-07 Thread John Hanks
On Fri, Dec 7, 2018 at 7:20 AM John Hearns via Beowulf wrote: > Good points regarding packages shipped with distributions. > One of my pet peeves (only one? Editor) is being on mailing lists for HPC > software such as OpenMPI and Slurm and seeing many requests along the lines > of > "I

Re: [Beowulf] HPC workflows

2018-12-07 Thread Michael Di Domenico
On Fri, Dec 7, 2018 at 11:35 AM John Hanks wrote: > > But, putting it in a container wouldn't make my life any easier and would, > in fact, just add yet another layer of something to keep up to date. I think the theory behind this is that containers allow the sysadmins to kick the can down the

Re: [Beowulf] HPC workflows

2018-12-07 Thread John Hanks
On Fri, Dec 7, 2018 at 7:04 AM Gerald Henriksen wrote: > On Wed, 5 Dec 2018 09:35:07 -0800, you wrote: > > Now obviously you could do what for example Java does with a jar file, > and simply throw everything into a single rpm/deb and ignore the > packaging guidelines, but then you are back to in

Re: [Beowulf] HPC workflows

2018-12-07 Thread John Hearns via Beowulf
Good points regarding packages shipped with distributions. One of my pet peeves (only one? Editor) is being on mailing lists for HPC software such as OpenMPI and Slurm and seeing many requests along the lines of "I installed PackageX on my cluster" and then finding from the replies that the

Re: [Beowulf] HPC workflows

2018-12-07 Thread Gerald Henriksen
On Wed, 5 Dec 2018 09:35:07 -0800, you wrote: >Certainly the inability of distros to find the person-hours to package >everything plays a role as well, your cause and effect chain there is >pretty accurate. Where I begin to branch is at the idea of software that is >unable to be packaged in an

Re: [Beowulf] HPC workflows

2018-12-05 Thread John Hanks
I think you do a better job explaining the underpinnings of my frustration with it all, but then arrive at a slightly different set of conclusions. I'd be the last to say autotools isn't complex, in fact pretty much all build systems eventually reach an astounding level of complexity. But I'm not

Re: [Beowulf] HPC workflows

2018-12-04 Thread Tony Brian Albers
On Tue, 2018-12-04 at 11:20 -0500, Prentice Bisbal via Beowulf wrote: > On 12/3/18 2:44 PM, Michael Di Domenico wrote: > > On Mon, Dec 3, 2018 at 1:13 PM John Hanks > > wrote: > > >   From the perspective of the software being containerized, I'm > > > even more skeptical. In my world

Re: [Beowulf] HPC workflows

2018-12-04 Thread Gerald Henriksen
On Mon, 3 Dec 2018 10:12:10 -0800, you wrote: > And then I realized that I was seeing >software which was "easier to containerize" and that "easier to >containerize" really meant "written by people who can't figure out >'./configure; make; make install' and who build on a sand-like foundation >of
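
For reference, the build idiom the post invokes, sketched with a hypothetical tarball name and install prefix:

    # The classic autotools flow: unpack, configure, build, install.
    tar xf mytool-1.0.tar.gz && cd mytool-1.0
    ./configure --prefix=$HOME/sw/mytool-1.0
    make
    make install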

Re: [Beowulf] HPC workflows

2018-12-04 Thread Prentice Bisbal via Beowulf
On 12/3/18 2:44 PM, Michael Di Domenico wrote: On Mon, Dec 3, 2018 at 1:13 PM John Hanks wrote: From the perspective of the software being containerized, I'm even more skeptical. In my world (bioinformatics) I install a lot of crappy software. We're talking stuff resulting from "I read

Re: [Beowulf] HPC workflows

2018-12-03 Thread Michael Di Domenico
On Mon, Dec 3, 2018 at 1:13 PM John Hanks wrote: > > From the perspective of the software being containerized, I'm even more > skeptical. In my world (bioinformatics) I install a lot of crappy software. > We're talking stuff resulting from "I read the first three days of 'learn > python in 21

Re: [Beowulf] HPC workflows

2018-12-03 Thread John Hanks
On Fri, Nov 30, 2018 at 9:44 PM John Hearns via Beowulf wrote: > John, your reply makes so many points which could start a whole series of > debates. > I would not deny partaking of the occasional round of trolling. > > Best use of our time now may well be to 'rm -rf SLURM' and figure out >

Re: [Beowulf] HPC workflows

2018-12-02 Thread Gerald Henriksen
On Sat, 1 Dec 2018 06:43:05 +0100, you wrote: >My own thoughts on HPC for a tightly coupled, on premise setup is that we >need a lightweight OS on the nodes, which does the bare minimum. No general >purpose utilities, no GUIS, nothing but network and storage. And container >support. One of the

Re: [Beowulf] HPC Workflows

2018-12-02 Thread Tim Cutts
Ho ho. Yes, there is rarely anything completely new. Old ideas get dusted off, polished up, and packaged slightly differently. At the end of the day, a Dockerfile is just a script to build your environment, but it has the advantage now of doing it in a reasonably standard way, rather than
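
A minimal sketch of that point, assuming a hypothetical pre-built mysolver binary: the Dockerfile below is nothing more than a scripted environment build, executed by the standard docker CLI.

    # Write a tiny Dockerfile inline, then build the image.
    cat > Dockerfile <<'EOF'
    FROM ubuntu:18.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
        openmpi-bin && rm -rf /var/lib/apt/lists/*
    COPY mysolver /usr/local/bin/mysolver
    ENTRYPOINT ["/usr/local/bin/mysolver"]
    EOF
    docker build -t mysolver:latest .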

Re: [Beowulf] HPC Workflows

2018-12-01 Thread John Hanks
For me personally I just assume it's my lack of vision that is the problem. I was submitting VMs as jobs using SGE well over 10 years ago. Job scripts that build the software stack if it's not found? 15 or more years ago. Never occurred to me to call it "cloud" or "containerized", it was just a few stupid
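
A sketch of the "build it if it's not found" job-script pattern described here, using SGE directives and hypothetical paths:

    #!/bin/bash
    #$ -N build-if-missing
    #$ -cwd
    PREFIX=$HOME/sw/mytool-1.0
    # Build the stack only when the installed binary is absent.
    if [ ! -x "$PREFIX/bin/mytool" ]; then
        cd "$TMPDIR"
        tar xf "$HOME/src/mytool-1.0.tar.gz" && cd mytool-1.0
        ./configure --prefix="$PREFIX" && make && make install
    fi
    "$PREFIX/bin/mytool" input.dat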

Re: [Beowulf] HPC Workflows

2018-12-01 Thread m . somers
Yeah, I often think some people are using the letters HPC as in 'high profile computing' nowadays. The diluting effect I mentioned a few posts ago. Actually a LOT of HPC admin folks I know are scientists, scientifically active and tightly coupled to scientists in groups and they were doing DevOps

Re: [Beowulf] HPC workflows

2018-11-29 Thread Jon Forrest
On 11/27/2018 4:51 AM, Michael Di Domenico wrote: this seems a bit too stringent of a statement for me. I don't dismiss or disagree with your premise, but I don't entirely agree that HPC "must" change in order to compete. I agree completely. There is and always will be a need for what I call

Re: [Beowulf] HPC workflows

2018-11-28 Thread Jonathan Engwall
You can probably fork from a central repo.
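
One common shape of that workflow (repo URLs are hypothetical): clone your fork, track the central repo as a second remote, and pull its changes in as needed.

    git clone git@example.org:me/project.git
    cd project
    git remote add upstream git@example.org:central/project.git
    git fetch upstream            # pick up the central repo's history
    git merge upstream/master     # fold it into your fork's branch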

Re: [Beowulf] HPC workflows

2018-11-28 Thread Gerald Henriksen
On Wed, 28 Nov 2018 13:51:05 +0100, you wrote: >Now I am all for connecting diverse and flexible workflows to true HPC systems >and grids that feel different if not experienced >with (otherwise what is the use of a computer if there are no users making use >of it?), but do not make the mistake

Re: [Beowulf] HPC workflows

2018-11-28 Thread Eliot Eshelman
Those interested in providing user-friendly HPC might want to take a look at Open OnDemand. I'm not affiliated with this project, but wanted to make sure it got a plug. I've heard good things so far. http://openondemand.org/ Eliot On 11/26/18 10:26, John Hearns via Beowulf wrote: This may

Re: [Beowulf] HPC workflows

2018-11-28 Thread INKozin via Beowulf
On Wed, 28 Nov 2018 at 11:33, Bogdan Costescu wrote: > On Mon, Nov 26, 2018 at 4:27 PM John Hearns via Beowulf < > beowulf@beowulf.org> wrote: > >> I have come across this question in a few locations. Being specific, I am >> a fan of the Julia language. On the Julia forum a respected developer >>

Re: [Beowulf] HPC workflows

2018-11-28 Thread John Pellman
> > If HPC doesn't make it easy for these users to transfer their workflow > to the cluster, and the cloud providers do, then the users will move > to using the cloud even if it costs them 10%, 20% more because at the > end of the day it is about getting the job done and not about spending > time

Re: [Beowulf] HPC workflows

2018-11-28 Thread mark somers
As a follow up note on workflows, we also have used 'sshfs like constructs' to help non-technical users to compute things on local clusters, the actual CERN grid infrastructure and on (national) super computers. We built some middleware suitable for that many moons ago:
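
For readers unfamiliar with the idea, a minimal sshfs example (host and paths hypothetical): the remote cluster filesystem appears as a local directory, so non-technical users keep their usual tools.

    sshfs user@cluster.example.org:/scratch/user ~/cluster-scratch \
          -o reconnect,follow_symlinks
    ls ~/cluster-scratch              # browse remote results locally
    fusermount -u ~/cluster-scratch   # unmount when finished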

Re: [Beowulf] HPC workflows

2018-11-28 Thread John Hearns via Beowulf
Mark, again I do not have time to give your answer justice today. However, as you are in NL, can you send me some oliebollen please? I am a terrible addict. On Wed, 28 Nov 2018 at 13:52, mark somers wrote: > Well, please be careful in naming things: > >

Re: [Beowulf] HPC workflows

2018-11-28 Thread mark somers
Well, please be careful in naming things: http://cloudscaling.com/blog/cloud-computing/grid-cloud-hpc-whats-the-diff/ (note: the guy only heard about MPI and does not consider SMP-based codes using e.g. OpenMP, but he did understand there are different things being talked about). Now I am all

Re: [Beowulf] HPC workflows

2018-11-28 Thread John Hearns via Beowulf
Bogdan, Igor. Thank you very much for your thoughtful answers. I do not have much time today to do your replies the justice of a proper answer. Regarding the ssh filesystem, the scenario was that I was working for a well known company. We were running CFD simulations on remote academic HPC setups.

Re: [Beowulf] HPC workflows

2018-11-28 Thread Bogdan Costescu
On Mon, Nov 26, 2018 at 4:27 PM John Hearns via Beowulf wrote: > I have come across this question in a few locations. Being specific, I am > a fan of the Julia language. On the Julia forum a respected developer > recently asked what the options were for keeping code developed on a laptop > in

Re: [Beowulf] HPC workflows

2018-11-28 Thread John Hearns via Beowulf
Julia packaging https://docs.julialang.org/en/v1/stdlib/Pkg/index.html On Wed, 28 Nov 2018 at 01:42, Gerald Henriksen wrote: > On Tue, 27 Nov 2018 07:51:06 -0500, you wrote: > > >On Mon, Nov 26, 2018 at 9:50 PM Gerald Henriksen > wrote: > >> On Mon, 26 Nov 2018 16:26:42 +0100, you wrote: >
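
A hedged sketch of the Pkg workflow the linked docs describe: a per-project environment whose Project.toml and Manifest.toml pin the dependency tree, so the same stack can be re-created on laptop or cluster ("Example" is the registry's demo package).

    # Create/activate a project environment in the current directory and
    # record the dependency; re-running instantiate elsewhere reproduces
    # the exact stack from the committed manifests.
    julia --project=. -e 'using Pkg; Pkg.add("Example"); Pkg.instantiate()'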

Re: [Beowulf] HPC workflows

2018-11-28 Thread John Hearns via Beowulf
> * - note the HPC isn't unique in this regard. The Linux distributions > are facing their own version of this, where much of the software is no > longer packagable in the traditional sense as it instead relies on > language specific packaging systems and languages that don't lend > themselves to

Re: [Beowulf] HPC workflows

2018-11-27 Thread Gerald Henriksen
On Tue, 27 Nov 2018 07:51:06 -0500, you wrote: >On Mon, Nov 26, 2018 at 9:50 PM Gerald Henriksen wrote: >> On Mon, 26 Nov 2018 16:26:42 +0100, you wrote: >> If on premise HPC doesn't change to reflect the way the software is >> developed today then the users will in the future prefer cloud HPC.

Re: [Beowulf] HPC workflows

2018-11-27 Thread Michael Di Domenico
On Mon, Nov 26, 2018 at 9:50 PM Gerald Henriksen wrote: > On Mon, 26 Nov 2018 16:26:42 +0100, you wrote: > If on premise HPC doesn't change to reflect the way the software is > developed today then the users will in the future prefer cloud HPC. > > I guess it is a brave new world for on premise

Re: [Beowulf] HPC workflows

2018-11-26 Thread Gerald Henriksen
On Mon, 26 Nov 2018 16:26:42 +0100, you wrote: >This leads me to ask - should we be presenting HPC services as a 'cloud' >service, no matter that it is a non-virtualised on-premise setup? >In which case the way to deploy software would be via downloading from >repos. >I guess this is actually