Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread John Hearns via Beowulf
>If a sys admin position involves shell programming/scripting, knowing the details of a specific programming language or processor is secondary, but thinking like a programmer is a skill not everyone has or can develop. Just last week I wrote a Lua script without knowing a thing about Lua. I

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Benson Muite
On 06/19/2018 09:47 PM, Prentice Bisbal wrote: On 06/13/2018 10:32 PM, Joe Landman wrote: I'm curious about your next gen plans, given Phi's roadmap. On 6/13/18 9:17 PM, Stu Midgley wrote: low level HPC means... lots of things.  BUT we are a huge Xeon Phi shop and need low-level

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Joe Landman
On 6/19/18 2:47 PM, Prentice Bisbal wrote: On 06/13/2018 10:32 PM, Joe Landman wrote: I'm curious about your next gen plans, given Phi's roadmap. On 6/13/18 9:17 PM, Stu Midgley wrote: low level HPC means... lots of things.  BUT we are a huge Xeon Phi shop and need low-level programmers

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Prentice Bisbal
On 06/13/2018 10:32 PM, Joe Landman wrote: I'm curious about your next gen plans, given Phi's roadmap. On 6/13/18 9:17 PM, Stu Midgley wrote: low level HPC means... lots of things.  BUT we are a huge Xeon Phi shop and need low-level programmers ie. avx512, careful cache/memory management
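
For readers outside that niche, the "avx512, careful cache/memory management" style of work mentioned above looks roughly like the hedged sketch below: a hand-vectorised SAXPY-type loop written with AVX-512 intrinsics. The function name and sizes are illustrative, not from the thread, and it assumes an AVX-512F capable CPU (Knights Landing, Skylake-SP or later); build with e.g. g++ -O2 -mavx512f.

    #include <cstddef>
    #include <cstdio>
    #include <immintrin.h>   // AVX-512 intrinsics

    // y[i] = a*x[i] + y[i], 16 floats per 512-bit register, masked tail.
    static void saxpy_avx512(float a, const float* x, float* y, size_t n) {
        const __m512 va = _mm512_set1_ps(a);
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 vx = _mm512_loadu_ps(x + i);
            __m512 vy = _mm512_loadu_ps(y + i);
            _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
        }
        if (i < n) {                                   // remaining < 16 lanes
            __mmask16 m = (__mmask16)((1u << (n - i)) - 1u);
            __m512 vx = _mm512_maskz_loadu_ps(m, x + i);
            __m512 vy = _mm512_maskz_loadu_ps(m, y + i);
            _mm512_mask_storeu_ps(y + i, m, _mm512_fmadd_ps(va, vx, vy));
        }
    }

    int main() {
        float x[20], y[20];
        for (int i = 0; i < 20; ++i) { x[i] = float(i); y[i] = 1.0f; }
        saxpy_avx512(2.0f, x, y, 20);
        std::printf("y[19] = %.1f\n", y[19]);          // expect 39.0
        return 0;
    }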

[Beowulf] GLIBC_3.4.x/GLIBCXX_3.4.x not found on CentOS 7.x?

2018-06-19 Thread Ryan Novosielski
Hi Beowulfers: What do you folks use (besides Singularity or similar) for software that for whatever reason balks because it asks for GLIBC/GLIBCXX 3.4.20 or newer on CentOS 7.x? From what I’ve read, it’s not safe to build it in an alternate location and use LD_LIBRARY_PATH to have
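
Context for readers: CentOS 7's system GCC 4.8 ships a libstdc++ whose symbol versions stop at GLIBCXX_3.4.19, so anything built against GCC 4.9 or newer will ask for GLIBCXX_3.4.20+ and refuse to start. A small, hedged diagnostic like the sketch below (names and output format are illustrative, not from the thread) shows which libstdc++ a binary was compiled against and which glibc it is running on; the symbol versions a given libstdc++.so.6 actually provides can likewise be listed with strings or objdump on the library itself.

    #include <cstdio>
    #include <gnu/libc-version.h>   // glibc-specific: gnu_get_libc_version()

    int main() {
        // __GLIBCXX__ is a date-stamp macro set by the libstdc++ headers used
        // at compile time (e.g. 20150623 for the CentOS 7 system compiler).
        std::printf("libstdc++ __GLIBCXX__ : %d\n", __GLIBCXX__);
    #ifdef _GLIBCXX_RELEASE
        // Only defined by GCC 7 and later; reports the GCC major release.
        std::printf("libstdc++ release     : %d\n", _GLIBCXX_RELEASE);
    #endif
        // Version of the glibc this process is actually running against.
        std::printf("glibc at run time     : %s\n", gnu_get_libc_version());
        return 0;
    }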

Re: [Beowulf] GLIBC_3.4.x/GLIBCXX_3.4.x not found on CentOS 7.x?

2018-06-19 Thread David Mathog
On 19 Jun 2018 19:08:12 Ryan Novosielski wrote: What do you folks use (besides Singularity or similar) for software that for whatever reason balks because it asks for GLIBC/GLIBCXX 3.4.20 or newer on CentOS 7.x? What software would that be? I frequently run into requirements for more

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Prentice Bisbal
Despite the source (just kidding, Bill!) I'm going to have to support this line of questioning. HPC (and general IT) consists of systems with many different layers, and it takes the right personality type with good analytical skills to be able to troubleshoot things effectively. I

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Jonathan Engwall
I think the boundary between a final product and the start of a project separates these two viewpoints. Lately, I have short stacks of O'Reillys scattered about, off libraries, and a second stack of notebooks filled with every command that really did work. And I think it is fun. Jonathan On Jun 19,

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Prentice Bisbal
On 06/19/2018 03:10 PM, Joe Landman wrote: On 6/19/18 2:47 PM, Prentice Bisbal wrote: On 06/13/2018 10:32 PM, Joe Landman wrote: I'm curious about your next gen plans, given Phi's roadmap. On 6/13/18 9:17 PM, Stu Midgley wrote: low level HPC means... lots of things.  BUT we are a huge

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Lux, Jim (337K)
From: Beowulf on behalf of "beowulf@beowulf.org" Reply-To: John Hearns Date: Tuesday, June 19, 2018 at 11:40 AM To: "beowulf@beowulf.org" Subject: Re: [Beowulf] Working for DUG, new thread >If a sys admin position involves shell programming/scripting, knowing the details of a specific

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Lux, Jim (337K)
On 6/19/18, 12:11 PM, "Beowulf on behalf of Joe Landman" wrote: Generally, yes. Optimizing serial code for GPUs doesn't work well. Rewriting for GPUs (e.g. taking into account the GPU data/compute flow architecture) does work well. I've been intrigued recently
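
The triangles-versus-FIR comparison works because each output sample of an FIR filter is an independent dot product over a sliding window of the input, the same embarrassingly parallel shape as per-vertex work. Below is a plain C++ sketch of that structure (taps and sizes are illustrative, not from the thread); on a GPU, each iteration of the outer loop would simply become one thread.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // y[i] = sum_k h[k] * x[i + k], for i = 0 .. x.size() - h.size()
    static std::vector<float> fir(const std::vector<float>& x,
                                  const std::vector<float>& h) {
        std::vector<float> y(x.size() - h.size() + 1, 0.0f);
        for (std::size_t i = 0; i < y.size(); ++i) {   // independent outputs
            float acc = 0.0f;
            for (std::size_t k = 0; k < h.size(); ++k)
                acc += h[k] * x[i + k];
            y[i] = acc;
        }
        return y;
    }

    int main() {
        std::vector<float> x = {1, 2, 3, 4, 5, 6, 7, 8};
        std::vector<float> h = {0.25f, 0.5f, 0.25f};   // 3-tap smoother
        for (float v : fir(x, h)) std::printf("%.2f ", v);
        std::printf("\n");
        return 0;
    }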

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Ryan Novosielski
We bought KNCs a long time ago and keep meaning to get them to a place where they can be used, and just haven’t. Do you mount filesystems from them? We have GPFS storage, primarily, and would have to re-export it via NFS, I suppose, if we want the cards to use that storage. I’ve seen complaints
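
For the GPFS question: the common pattern with KNC cards was to NFS-export the host's GPFS mount to the card over the card's virtual network interface, since the cards typically could not run a GPFS client themselves. A hedged sketch, with paths, hostnames, and options purely illustrative:

    # On the host, e.g. in /etc/exports (fsid= is often required when
    # re-exporting a filesystem such as GPFS over the kernel NFS server):
    /gpfs/projects  mic0(rw,no_root_squash,async,fsid=101)

    # On the card, mount it across the virtual bridge to the host:
    mount -t nfs host-mic0:/gpfs/projects /gpfs/projects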

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Stu Midgley
We initially used them as standalone systems (i.e. rsh a code onto them and run it); today we use them in offload mode (i.e. the host pushes memory+commands onto them and pulls the results off, all via pragmas). Our last KNC systems were 2RU with 8x 7120 Phis... which is a 2.1 kW system. They
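
The offload mode described here, with the host pushing memory and commands to the card via pragmas and pulling the results back, looks roughly like the following minimal sketch using the Intel compiler's offload pragmas for KNC. The array names and the computation are illustrative only, not DUG's code; a non-Intel compiler will simply ignore the pragma and run the loop on the host.

    #include <cstdio>

    int main() {
        const int n = 1024;
        float x[n], y[n];
        for (int i = 0; i < n; ++i) x[i] = float(i);

        // Copy x to coprocessor 0, run the loop there, copy y back.
        #pragma offload target(mic:0) in(x : length(n)) out(y : length(n))
        for (int i = 0; i < n; ++i)
            y[i] = 2.0f * x[i] + 1.0f;

        std::printf("y[%d] = %.1f\n", n - 1, y[n - 1]);   // expect 2047.0
        return 0;
    }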

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread Stu Midgley
We aren't after average HPC programmers... Even good compilers (Intel) are very, very limited in their optimisations. We got factors of 2x and 3x by hand-writing SSSE3 instructions on standard Xeons rather than letting the compiler do its thing... Compiler limitations aren't particular to Phi. On Wed,
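
As a hedged illustration of the sort of hand-written SSSE3 kernel being described (not DUG's actual code): byte-swapping an array of 32-bit words with _mm_shuffle_epi8 instead of trusting the compiler to vectorise the scalar loop. Assumes SSSE3 hardware; build with e.g. g++ -O2 -mssse3.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <tmmintrin.h>   // SSSE3 intrinsics (pshufb and friends)

    static void bswap32_ssse3(uint32_t* data, std::size_t n) {
        // Shuffle control that reverses the four bytes inside each 32-bit lane.
        const __m128i mask = _mm_set_epi8(12, 13, 14, 15,  8,  9, 10, 11,
                                           4,  5,  6,  7,  0,  1,  2,  3);
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {                   // 4 words per vector
            __m128i v = _mm_loadu_si128(reinterpret_cast<__m128i*>(data + i));
            v = _mm_shuffle_epi8(v, mask);             // the SSSE3 pshufb
            _mm_storeu_si128(reinterpret_cast<__m128i*>(data + i), v);
        }
        for (; i < n; ++i)                             // scalar tail
            data[i] = __builtin_bswap32(data[i]);
    }

    int main() {
        uint32_t buf[6] = {0x11223344, 0xAABBCCDD, 0x01020304,
                           0xDEADBEEF, 0xCAFEBABE, 0x0000FFFF};
        bswap32_ssse3(buf, 6);
        for (uint32_t w : buf) std::printf("%08X\n", w);
        return 0;
    }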

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread John Hearns via Beowulf
This thread is going fast! Prentice Bisbal wrote: > I often wonder if that misleading marketing is one of the reasons why the Xeon Phi has already been canned. I know a lot of people who were excited for the Xeon Phi, but I don't know any who ever bought the Xeon Phis once they came out. In

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread John Hearns via Beowulf
Jim Lux wrote: > I've been intrigued recently about using GPUs for signal processing kinds of things... There's not much difference between calculating vertices of triangles and doing FIR filters. Rather than looking at hardware per se, how about learning about the Julia language for this task? I

Re: [Beowulf] Working for DUG, new thread

2018-06-19 Thread John Hearns via Beowulf
I should do my research... The Celeste project is the poster child for Julia: https://www.nextplatform.com/2017/11/28/julia-language-delivers-petascale-hpc-performance/ They use up to 8092 Xeon Phi nodes at NERSC with threads... The per-thread runtime graph is interesting there. Only a small