Re: [Beowulf] [External] position adverts?

2024-02-22 Thread Douglas Eadline
> I've always thought employment opps were fine, but e-mails trying to > sell a product were bad. Yes, that has been the general rule. HPC is such an interesting community: one day you are working for a vendor, then at some point you move to a university or lab, or vice versa; lather, rinse, repeat.

Re: [Beowulf] [External] anyone have modern interconnect metrics?

2024-01-24 Thread Douglas Eadline
--snip-- > Core counts are getting too high to be of use in HPC. High core-count > processors sound great until you realize that all those cores are now > competing for the same memory bandwidth and network bandwidth, neither of > which increases with core-count. > > Last April we were evaluating

Re: [Beowulf] And wearing another hat ...

2023-11-11 Thread Douglas Eadline
or instance as convoluted as DC emissions cap aligning to a > climate policy. > > Joshua > > -- Original Message -- > Received: 01:29 PM CDT, 10/31/2023 > From: "Douglas Eadline"  > To: beowulf@beowulf.org > Subject: [Beowulf] And wearing another hat ... &g

Re: [Beowulf] [External] And wearing another hat ...

2023-11-11 Thread Douglas Eadline
> On 10/31/23 2:28 PM, Douglas Eadline wrote: >> Not on Monday night at the Bash, though. I believe >> there is some T-shirt or wardrobe planned. > Sticking with the Heaven and Hell theme, I expect to see Doug in a > bikini with angels wings, like a Victoria's Secret model

[Beowulf] And wearing another hat ...

2023-10-31 Thread Douglas Eadline
All: Back in July, I stepped into the Managing Editor role at HPCwire. I'm covering for a staff sabbatical, and I will be in place through December, including attending SC23. A few things: 1. As ME, I am interested in what types of topics you would like to see covered on HPCwire (even if you

Re: [Beowulf] Your thoughts on the latest RHEL drama?

2023-06-27 Thread Douglas Eadline
akes while figuring out how to get some small Java package to build with Gradle (don't ask). -- Doug > We're all ears... > > > Bill > > On 6/26/23 3:00 PM, Douglas Eadline wrote: >> >> I'll have more to say later and to me the irony of this situation is >>

Re: [Beowulf] Your thoughts on the latest RHEL drama?

2023-06-26 Thread Douglas Eadline
I'll have more to say later and to me the irony of this situation is Red Hat has become what they were created to prevent*. -- Doug * per conversations with Bob Young back in the day > Beowulfers, > > By now, most of you should have heard about Red Hat's latest to > eliminate any

Re: [Beowulf] [External] beowulf hall of fame

2022-02-25 Thread Douglas Eadline
Here is an oversimplification: Thomas managed the project, Jim provided the funds, and Don was the main engineer who made it happen. https://www.youtube.com/watch?v=P-epcSlAFvI -- Doug > Thanks for sharing. Is anyone else disappointed to not see Don Becker on > that list? I don't mean to rain

Re: [Beowulf] [External] SC21 Beowulf Bash Panels

2021-11-15 Thread Douglas Eadline
Silicon Joe Landman, HPC Veteran > > *Very* interested in this - I've been doing a lot of benchmarking with > HPL in the past year for evaluating new systems for purchase (and with > HPCG to a lesser extent), etc. > > Prentice > > On 11/14/21 3:16 PM, Douglas Eadline wrote: &

[Beowulf] SC21 Beowulf Bash Panels

2021-11-14 Thread Douglas Eadline
Hi Everyone, While there is always a bit of Beowulf snark surrounding the Bash, I wanted to mention that the technical panels are looking to be very interesting. I managed to invite some great people, and my only regret is that I can't attend them all (I'm moderating the RISC-V panel) Composable

[Beowulf] SC21 Beowulf Bash

2021-11-09 Thread Douglas Eadline
Here is the Hybrid Beowulf Bash info https://beowulfbash.com/ -- Doug ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit

[Beowulf] SC21 Beowulf Bash

2021-10-08 Thread Douglas Eadline
There will be a BeowulfBash as part of SC21. It will be a hybrid affair. More details will be forthcoming. All you need to do right now is put an "X" on your calendar for the evening of November 15th. It will be happening at "St Louis (USA) time" after or slightly before the Open SC Gala

Re: [Beowulf] [EXTERNAL] Re: Deskside clusters

2021-09-17 Thread Douglas Eadline
--snip-- > > Where I disagree with you is (3). Whether or not cache size is important > depends on the size of the job. If you're iterating through data-parallel > loops over a large dataset that exceeds cache size, the opportunity to > reread cached data is probably limited or nonexistent. As we

Re: [Beowulf] [EXTERNAL] Re: Deskside clusters

2021-09-14 Thread Douglas Eadline
k done.  I suspect other institutional >> clusters have similar "the 800 pound (363 kg) gorilla has >> requested" scenarios. >> >> >> On 8/24/21, 11:34 AM, "Douglas Eadline" > <mailto:deadl...@eadline.org>> wrote: &

Re: [Beowulf] Deskside clusters

2021-08-24 Thread Douglas Eadline
cle time" for buying hardware > - This has been a topic on this list forever - you have 2 years of > computation to do: do you buy N nodes today at speed X, or do you wait a > year, buy N/2 nodes at speed 4X, and finish your computation at the same > time. > > Fancy desktop PCs with

Re: [Beowulf] List archives

2021-08-24 Thread Douglas Eadline
> Hi Doug, > > Not to derail the discussion, but a quick question you say desk side > cluster is it a single machine that will run a vm cluster? > > Regards, > Jonathan > > -Original Message- > From: Beowulf On Behalf Of Douglas Eadline > Sent: 23 August 2

Re: [Beowulf] List archives

2021-08-23 Thread Douglas Eadline
John, I think that was on twitter. In any case, I'm working with these processors right now. On the new Ryzens, the power usage is actually quite tunable. There are three settings. 1) Package Power Tracking: The PPT threshold is the allowed socket power consumption permitted across the voltage

[Beowulf] Power Cycling Question

2021-07-16 Thread Douglas Eadline
Hi everyone: Reducing power use has become an important topic. One of the questions I always wondered about is why more clusters do not turn off unused nodes. Slurm has hooks to turn nodes off when not in use and turn them on when resources are needed. My understanding is that power cycling
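The Slurm hooks mentioned above live in slurm.conf; a minimal sketch might look like this (the parameter names come from Slurm's power-saving support, but the script paths and timing values here are purely illustrative):

```ini
# Illustrative slurm.conf power-saving fragment -- script paths and
# timing values are hypothetical and site-specific.
SuspendProgram=/usr/local/sbin/node_off.sh   # site script to power idle nodes down
ResumeProgram=/usr/local/sbin/node_on.sh     # site script to power nodes back up (e.g. via IPMI)
SuspendTime=600        # seconds a node must be idle before it is suspended
SuspendTimeout=120     # seconds allowed for a node to finish powering down
ResumeTimeout=600      # seconds allowed for a node to boot and rejoin the cluster
```

The actual on/off mechanics (IPMI, wake-on-LAN, or a cloud API call) are left to the site-provided scripts.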

Re: [Beowulf] AMD and AVX512

2021-06-21 Thread Douglas Eadline
> On Wed, 16 Jun 2021 13:15:40 -0400, you wrote: > >>The answer given, and I'm >>not making this up, is that AMD listens to their users and gives the >>users what they want, and right now they're not hearing any demand for >>AVX512. >> >>Personally, I call BS on that one. I can't imagine anyone

Re: [Beowulf] RIP CentOS 8

2020-12-11 Thread Douglas Eadline
Some thoughts on this issue and the future of HPC. First, in general it is a poor move by CentOS, a community-based distribution that has just killed its community. Nice work. Second, and most importantly, CentOS will not matter to HPC (and maybe other sectors as well). Distributions will become

[Beowulf] the !(Beowulf Bash) is tonight

2020-11-16 Thread Douglas Eadline
All: If you are interested, the virtual Beowulf Bash is tonight at 5:30 EST. The main goal of tonight is to foster discussion around various HPC and cluster topics. Based on the registration so far it looks like there will be some great discussions. You need to sign up to get the Zoom Link:

[Beowulf] Yes, there will be a 2020 !(Beowulf Bash)

2020-11-11 Thread Douglas Eadline
We are going to give a Zoom Beowulf Bash a try: https://beowulfbash.com/ Date: Monday, November 16 Time: 5:30–7:30 pm EST Our goal is to provide a platform for casual community discussion and talk (no selling). We also understand the time zone issue; we tried to make it work east and west of

Re: [Beowulf] ***UNCHECKED*** Re: Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-19 Thread Douglas Eadline
ustermonkey.net/images/stories/beobash/LECCIBG-SC07.jpeg -- Doug > > On Mon, 19 Oct 2020 at 15:28, Douglas Eadline > wrote: > >> --snip-- >> >> > Unfortunately the presumption seems to be that the old is deficient >> > because it is old, and "my

Re: [Beowulf] [EXTERNAL] Re: ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-19 Thread Douglas Eadline
--snip-- >> Well, some of it surely comes from the fact that some of us (even older >> ;) > never wanted to touch Fortran with a 10-foot pole, so having a "modern" > fortran means nothing. I am curious why? It was designed for "FORmula TRANslation" -- Doug

Re: [Beowulf] ***UNCHECKED*** Re: Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-19 Thread Douglas Eadline
--snip-- > Unfortunately the presumption seems to be that the old is deficient > because it is old, and "my generation” didn't invent it (which is > clearly perverse; I see no rush to replace English, French, … which are > all older than any of our programming languages, and which adapt, as

Re: [Beowulf] ***UNCHECKED*** Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-15 Thread Douglas Eadline
in the "beowulf" book i recall reading. > > > https://spinoff.nasa.gov/Spinoff2020/it_1.html > > but it seems this also exists. i can't recall offhand

[Beowulf] ***UNCHECKED*** Re: [EXTERNAL] Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-15 Thread Douglas Eadline
> On Thu, Oct 15, 2020 at 12:10 AM Lux, Jim (US 7140) via Beowulf > wrote: >> >> Well, maybe a Beowulf cluster of yugos… > > not really that far of a stretch, from what i can recall wasn't the > first beowulf cluster a smattering of random desktops layout on the > floor in an office Found

Re: [Beowulf] [EXTERNAL] Re: ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-15 Thread Douglas Eadline
> On Thu, Oct 15, 2020 at 12:10 AM Lux, Jim (US 7140) via Beowulf > wrote: >> >> Well, maybe a Beowulf cluster of yugos… > > not really that far of a stretch, from what i can recall wasn't the > first beowulf cluster a smattering of random desktops layout on the > floor in an office Actually

Re: [Beowulf] ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-14 Thread Douglas Eadline
> cost was one factor that accelerated spark/hadoop, it's not the only > or even the biggest factor. the ML folks didn't start with MPI > because the AI frameworks were bred on workstations and then ported to > non-HPC hardware (aka cloud platforms) where MPI isn't the dominant > paradigm. now

Re: [Beowulf] ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-13 Thread Douglas Eadline
> On Tue, Oct 13, 2020 at 3:54 PM Douglas Eadline > wrote: > >> >> It really depends on what you need to do with Hadoop or Spark. >> IMO many organizations don't have enough data to justify >> standing up a 16-24 node cluster system with a PB of HDFS. >>

[Beowulf] ***UNCHECKED*** Re: Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-13 Thread Douglas Eadline
> On Tue, Oct 13, 2020 at 1:31 PM Douglas Eadline > wrote: > >> >> The reality is almost all Analytics projects require multiple >> tools. For instance, Spark is great, but if you do some >> data munging of CSV files and want to store your results >>

Re: [Beowulf] Spark, Julia, OpenMPI etc. - all in one place

2020-10-13 Thread Douglas Eadline
> On Tue, Oct 13, 2020 at 9:55 AM Douglas Eadline > wrote: > >> >> Spark is a completely separate code base that has its own Map Reduce >> engine. It can work stand-alone, with the YARN scheduler, or with >> other schedulers. It can also take advantage of HDFS

Re: [Beowulf] ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-13 Thread Douglas Eadline
ing (data cleaning, verification, building feature matrix) is where scale comes into play. Running models (unless you are training an ML) usually does not require a huge amount of computing power. -- Doug > > Regards, > Jonathan > > -Original Message- > From: Beowulf

[Beowulf] ***UNCHECKED*** Re: Spark, Julia, OpenMPI etc. - all in one place

2020-10-13 Thread Douglas Eadline
I have noticed a lot of Hadoop/Spark references in the replies. The word "Hadoop" is probably the most misunderstood word in computing today, and many people have a somewhat vague idea what it actually is. Hadoop V1 was a monolithic Map Reduce framework written in Java. (BTW Map Reduce is a SIMD
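The Map Reduce pattern itself is simple enough to sketch in a few lines of plain Python; this toy word count (an illustration of the pattern, not Hadoop's actual API) shows the three phases a framework like Hadoop V1 runs at scale:

```python
# Toy Map Reduce word count: explicit map, shuffle (group-by-key), and
# reduce phases. A framework distributes each phase across many nodes.
from collections import defaultdict
from functools import reduce

def map_phase(lines):
    # map: emit a (word, 1) pair for every word
    return [(w.lower(), 1) for line in lines for w in line.split()]

def shuffle_phase(pairs):
    # shuffle: group values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: sum the counts for each word
    return {k: reduce(lambda a, b: a + b, v) for k, v in groups.items()}

lines = ["hadoop is a map reduce framework", "map reduce is a pattern"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["map"])  # -> 2
```

Because the map tasks share no state, they can run in parallel on separate chunks of the input, which is the whole point of the framework.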

Re: [Beowulf] Does anyone know if Apache Ambari has been abandoned?

2020-09-29 Thread Douglas Eadline
The short answer is probably. The long answer is as follows. Hortonworks (a company spun out of Yahoo for Hadoop development and support) provided a Hadoop "distro" called HDP (kind of like Red Hat). They put a lot of effort (paid programmers) into HDP (and variants) and Ambari as a way to

Re: [Beowulf] Neocortex unreal supercomputer

2020-06-15 Thread Douglas Eadline
> On 13/6/20 10:11 pm, Jonathan Engwall wrote: > >> There is the strange part. How to utilize such a vast cpu? >> Storage should be the back end, unless the use is an api. In this case a >> gargantuan cpu sits in back, or so it seems. > > My guess is that this sits connected to the server, they

Re: [Beowulf] Sad news - RIP Rich Brueckner

2020-05-26 Thread Douglas Eadline
Sad news. A nice tribute here: https://insidehpc.com/2020/05/hats-over-hearts/ If you are interested, "Remembering Rich" a community event next week Tuesday, June 2, from 4-6pm EST, https://www.xandmarketing.com/rememberingrich Stay safe, -- Doug > Hi all, > > I've learned via

Re: [Beowulf] HPC for community college?

2020-03-02 Thread Douglas Eadline
gt; Regards, > Benson > > On Sat, Feb 22, 2020, at 6:42 AM, Douglas Eadline wrote: >> >> That is the idea behind the Limulus systems -- a personal (or group) small >> turn-key cluster that can deliver local HPC performance. >> Users can learn HPC software, administration,

Re: [Beowulf] HPC for community college?

2020-02-21 Thread Douglas Eadline
That is the idea behind the Limulus systems -- a personal (or group) small turn-key cluster that can deliver local HPC performance. Users can learn HPC software, administration, and run production codes on performance hardware. I have been calling these "No Data Center Needed" computing systems

[Beowulf] Here we go again

2019-12-12 Thread Douglas Eadline
Anyone see anything like this with Epyc, i.e. poor AMD performance when using Intel compilers or MKL? https://www.pugetsystems.com/labs/hpc/AMD-Ryzen-3900X-vs-Intel-Xeon-2175W-Python-numpy---MKL-vs-OpenBLAS-1560/ -- Doug ___ Beowulf mailing list,

Re: [Beowulf] [EXTERNAL] Re: Is Crowd Computing the Next Big Thing?

2019-11-30 Thread Douglas Eadline
an a cluster of RPi or > Beagles, which, frankly, is a doddle. > > (I'm omitting the obligatory "what about a Beowulf cluster of X" comment, > since we are, after all, the Beowulf mailing list and it goes without > saying. > > > On 11/30/19, 1:38 PM, "Beowulf o

Re: [Beowulf] Is Crowd Computing the Next Big Thing?

2019-11-30 Thread Douglas Eadline
"Big Thing" as in over-hyped idea: Yes "Big Thing" as in practical use: No -- Doug > Seen the below where a company wants to rent your smartphone as a cloud > computing resource. From a few years ago there was a company making space > heaters that contained servers to compute and heat your

Re: [Beowulf] Beowulf Bash @ SC19

2019-11-26 Thread Douglas Eadline
Thanks Stu! What he said! -- Doug > Hi Everyone > > It was great to attend the Beowulf Bash@SC19. I caught up with many > friends (made some new) and talked geek shit for hours. I did have a > slight hang-over the next day. > > I also spoke to Donald and thanked him for what he created. > >

[Beowulf] Still time to get a Beowulf Bash t-shirt. at SC19

2019-11-20 Thread Douglas Eadline
We wanted to let everyone at #SC19 know that the "nifty t-shirts" are still available at booth #996. If you made a donation and did not pick up your shirt, please stop by; if you want a shirt at the show, visit https://www.gofundme.com/f/beowulf-bash-2019 and then stop by tomorrow. And thanks

[Beowulf] SC19 Beowulf Bash Info

2019-11-17 Thread Douglas Eadline
If you are attending SC19 and are interested in the Beowulf Bash, please note there are three Hyatt hotels near the Convention Center; the Bash is at the GRAND HYATT Hotel (head up Welton Street to 17th when you leave the CC on Monday night), 38th floor, best view in Denver.

Re: [Beowulf] SC19 Beowulf Bash

2019-11-15 Thread Douglas Eadline
Thanks! -- Doug > donated! > > On Tue, Oct 29, 2019 at 10:44 PM Douglas Eadline > wrote: > >> >> >> Once more into the breach. Although a bit bruised from last years event, >> we(1) are intent on making the Bash to happen again this year. >> &g

Re: [Beowulf] SC19 Beowulf Bash

2019-10-30 Thread Douglas Eadline
14123 (+61 4 3741 4123) > Multi-modal Australian ScienceS Imaging and Visualisation Environment > (www.massive.org.au) > Monash University > > > On Wed, 30 Oct 2019 at 01:44, Douglas Eadline > wrote: > >> >> >> Once more into the breach. Although a bit bruise

[Beowulf] SC19 Beowulf Bash

2019-10-29 Thread Douglas Eadline
Once more into the breach. Although a bit bruised from last year's event, we(1) are intent on making the Bash happen again this year. https://beowulfbash.com/ And, thanks again to all those who helped contribute last year. We are also anticipating a similar budget this year and like last

Re: [Beowulf] [EXTERNAL] Re: Build Recommendations - Private Cluster

2019-08-23 Thread Douglas Eadline
. Never knew that! Thanks for that bit of trivia. -- Doug > > On 8/23/19, 7:45 AM, "Beowulf on behalf of Douglas Eadline" > wrote: > > > > > Hi John > > > > No doom and gloom. > > > > It's in a purpose built wor

Re: [Beowulf] Build Recommendations - Private Cluster

2019-08-23 Thread Douglas Eadline
> Hi John > > No doom and gloom. > > It's in a purpose-built workshop/computer room that I have: a 42U rack, cross-draft cooling (which is sufficient), and 32 A power into the PDUs. The equipment is housed in the 42U rack along with a variety of other machines such as a Sun Enterprise 4000 and a 30

Re: [Beowulf] Lustre on google cloud

2019-07-29 Thread Douglas Eadline
> What would be the reason for getting such large data sets back on premise? > Why not leave them in the cloud for example in an S3 bucket on amazon or > google data store. I think this touches on the ownership issue I have seen some people mention (I think Addison Snell or i360). That is, you

Re: [Beowulf] HPE to acquire Cray

2019-05-21 Thread Douglas Eadline
> On Mon, 20 May 2019 18:42:31 -0400, you wrote: > >>I am curious, do you have some evidence for the demise of CentOS >>other than IBM bought RH? > > The important thing to remember about CentOS (and presumably why Red > Hat brought it on board) is that it is not really a RHEL competitor, >

Re: [Beowulf] HPE to acquire Cray

2019-05-21 Thread Douglas Eadline
> On 5/18/19 11:54 AM, Chris Samuel wrote: >> On 17/5/19 12:37 pm, Jonathan Aquilina wrote: >> >>> That is my biggest fear for centos to be fair with the IBM RH >>> acquisition. >> >> I think IBM at least seems to get open source, especially around >> Linux. Now if it was Oracle who had bought

Re: [Beowulf] HPE to acquire Cray

2019-05-20 Thread Douglas Eadline
> I just wanted to point out that Fedora, while having a lot of volunteers > is > primarily driven by Red Hat employees, so I don't think forking it is a > viable option. If you want to get an idea of what forking RHEL/Centos > would > be listen to an Oracle Linux rep who will be more than happy

[Beowulf] Parallel programming book

2019-04-23 Thread Douglas Eadline
Saw this on twitter (open text book) Programming on Parallel Machines; GPU, Multicore, Clusters and More http://heather.cs.ucdavis.edu/parprocbook -- Doug ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your

Re: [Beowulf] Introduction and question

2019-03-21 Thread Douglas Eadline
It should also be pointed out that the early Beowulf community was largely composed of engineers, computer scientists, biologists, chemists, and physicists. All technical backgrounds, of course, but with a common goal: cheaper, better, faster. By definition the Beowulf community (1) has always

Re: [Beowulf] Large amounts of data to store and process

2019-03-15 Thread Douglas Eadline
the early/mid-00s, HPC took GPUs and starting doing >> programming gymnastics to get the vector processors on the GPUs to do >> physics calculations. >> >> -- >> Prentice >> >> On 3/14/19 2:35 PM, Jonathan Aquilina wrote: >>> Th

Re: [Beowulf] Large amounts of data to store and process

2019-03-14 Thread Douglas Eadline
> I don't want to interrupt the flow but I'M feeling cheeky. One word can > solve everything "Fortran". There I said it. Of course, but you forgot "now get off my lawn" -- Doug > > Jeff > > > On Thu, Mar 14, 2019, 17:03 Douglas Eadline wr

Re: [Beowulf] Large amounts of data to store and process

2019-03-14 Thread Douglas Eadline
e cost must justify the means. HPC has traditionally trickled down into other sectors. However, many of the HPC problem types are not traditional computing problems. This situation is changing a bit with things like Hadoop/Spark/TensorFlow -- Doug > > On 14/03/2019, 19:14, "Dougla

Re: [Beowulf] Large amounts of data to store and process

2019-03-14 Thread Douglas Eadline
ther server). Also, more cores usually means lower single core frequency to fit into a given power envelope (die shrinks help with this but based on everything I have read, we are about at the end of the line) It also means lower absolute memory BW per core although more memory channels help a bi

Re: [Beowulf] Large amounts of data to store and process

2019-03-14 Thread Douglas Eadline
Python does not do dynamic compiling like Julia and is by design slower. But, just in case "Python is great" > > > On Wed, Mar 13, 2019 at 5:23 PM Douglas Eadline > wrote: > >> >> I realize it is bad form to reply ones own post and >> I forgot to mention so
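The interpreter overhead behind that claim is easy to see with nothing but the standard library; this sketch compares the same reduction written as an interpreted Python loop and as the C-implemented builtin sum() (Julia's JIT compiles the loop itself away, which is the gap being described):

```python
# Toy comparison: interpreter-dispatched loop vs. the C-implemented builtin.
# Timings vary by machine; the point is the ratio, not the absolute numbers.
import timeit

data = list(range(100_000))

def python_loop(xs):
    total = 0
    for x in xs:        # each iteration goes through the bytecode interpreter
        total += x
    return total

loop_t = timeit.timeit(lambda: python_loop(data), number=20)
builtin_t = timeit.timeit(lambda: sum(data), number=20)   # the loop runs in C

assert python_loop(data) == sum(data)
print(f"interpreted loop: {loop_t:.3f}s, builtin sum: {builtin_t:.3f}s")
```

Tools like NumPy make the same trade: push the loop down into compiled code, rather than compiling the Python itself.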

Re: [Beowulf] Large amounts of data to store and process

2019-03-13 Thread Douglas Eadline
I realize it is bad form to reply to one's own post and I forgot to mention something. Basically, the HW performance parade is getting harder to celebrate. Clock frequencies have been slowly increasing while cores are multiplying rather quickly. Single core performance boosts are mostly coming from

Re: [Beowulf] NVIDIA to acquire Mellanox

2019-03-11 Thread Douglas Eadline
The HPC twitter gang was all over this today Good coverage and analysis at the next platform https://www.nextplatform.com/2019/03/11/connecting-the-dots-on-why-nvidia-is-buying-mellanox/ -- Doug > I'm pretty sure everybody has seen this by now, but just in case: >

Re: [Beowulf] Large amounts of data to store and process

2019-03-11 Thread Douglas Eadline
> Hi All, > Basically I have sat down with my colleague and we have opted to go down the route of Julia with JuliaDB for this project. But here is an interesting thought that I have been pondering if Julia is an up and coming fast language to work with for large amounts of data how will that >

Re: [Beowulf] List returned to service (was Re: Administrivia: Beowulf down this weekend for OS upgrade)

2019-03-10 Thread Douglas Eadline
Thank you Chris! -- Doug > On Saturday, 9 March 2019 7:29:10 PM PST Chris Samuel wrote: > >> The list is working, and the archives are being updated, but there's a >> niggling issue that stops access to the archives via the web that I've >> not >> been able to solve yet. > > Final, final email

Re: [Beowulf] Large amounts of data to store and process

2019-03-04 Thread Douglas Eadline
> I read though that postgres can handle time shift data no problem. I am > just concerned if the clients would want to do complex big data analytics > on the data. At this stage we are just prototyping but things are very up > in the air at this point I am wondering though if sticking with HDFS

Re: [Beowulf] Large amounts of data to store and process

2019-03-04 Thread Douglas Eadline
> Good Morning all, > > I am working on a project that I sadly cant go into much detail but there > will be quite large amounts of data that will be ingested by this system > and would need to be efficiently returned as output to the end user in > around 10 min or so. I am in discussions with

Re: [Beowulf] 2 starting questions on how I should proceed for a correct first micro-cluster (2-nodes) building

2019-03-03 Thread Douglas Eadline
One thing to remember: a cluster is often defined by its environment and goals. 1. HPC-type clusters, as discussed on this list, operate in a particular way (cluster nodes are provisioned in a reproducible way, a job scheduler provides dynamic resources to users, clusters are optimized for

Re: [Beowulf] {Disarmed} Re: A Cooler Cloud: A Clever Conduit Cuts Data Centers? Cooling Needs by 90 Percent

2019-01-28 Thread Douglas Eadline
> > Which law of thermodynamics says there's no such thing as a free lunch? That would be rule 2 The rules: 0. Everyone plays 1. Nobody can win 2. You can't break even 3. Well, you can break even on a really really really cold day 3a. It never gets that cold. -- Doug > > Prentice > >> >>

Re: [Beowulf] If you can help ...

2018-11-19 Thread Douglas Eadline
All: I wanted to thank everyone who helped with the Beowulf Bash Go Fund Me effort. We raised $3070! This amount will really help. Once the financial dust settles, I will report back as to how this amount helped the Bash. Thanks again to all those who helped. It was a great success. Also, for

[Beowulf] If you can help ...

2018-11-09 Thread Douglas Eadline
Everyone: This is a difficult email to write. For years we (Lara Kisielewska, Tim Wilcox, Don Becker, myself, and many others) have organized and staffed the Beowulf Bash each Monday night at SC. The event has always been funded by the various vendors who are part of the Beowulf ecosystem. This

Re: [Beowulf] SC18: Beowulf Video and Other Cool Stuff

2018-11-08 Thread Douglas Eadline
sure it does, but what you mostly see is back and forth (X and Y); movement on the Z axis is hard to see (unless you are printing a toothpick on end). > If it's a 3D printer, won't it go up and down, too? > > Prentice > > On 11/08/2018 02:55 PM, Douglas Eadline wrote: >>

[Beowulf] SC18: Beowulf Video and Other Cool Stuff

2018-11-08 Thread Douglas Eadline
If you are attending SC18, please stop by the SC18 30th anniversary display (in lobby D) There you can hang out in comfy chairs (with chargers) and watch a short Beowulf documentary video, talk to other Beowulfers, and watch a 3D printer go back and forth. Here is the press release:

Re: [Beowulf] More about those underwater data centers

2018-11-08 Thread Douglas Eadline
> Alan Turing proved that gin would be excellent for use in delay lines > https://www.theregister.co.uk/2013/06/28/wilkes_centenary_mercury_memory/ > I think we now need Prentice and Joe to prove that the ideal immersive > coolant medium is beer. Not too far off, there are LED bulbs that are

Re: [Beowulf] SC18

2018-11-06 Thread Douglas Eadline
Joe, If you (or anyone) need a place to land, the 30th Anniversary pavilion in Lobby D will have sofas (with USB chargers) and a Beowulf Exhibit. There is a short Beowulf documentary, some 3D printing, and some other memorabilia. Plus some of the pioneers said they planned to stop by. -- Doug

Re: [Beowulf] Oh.. IBM eats Red Hat

2018-10-30 Thread Douglas Eadline
> On Mon, 29 Oct 2018, Douglas Eadline wrote: > >> Those alleged problems could have been solved for far less than $34B > > The big question these days is -- what the hell DOES IBM make money > from? A good question indeed. Three things they have been pushing lately

Re: [Beowulf] Oh.. IBM eats Red Hat

2018-10-29 Thread Douglas Eadline
In the sage words of Douglas Adams, "Don't Panic" My take here: https://www.clustermonkey.net/Opinions/breathe-easy-the-red-hat-acquisition-by-ibm-was-always-the-goal.html -- Doug > https://www.reuters.com/article/us-red-hat-m-a-ibm/ibm-to-acquire-softw >

Re: [Beowulf] Oh.. IBM eats Red Hat

2018-10-29 Thread Douglas Eadline
> How well has Linux been supporting IBM's POWER processors? I would > imagine pretty well, since the Linux community always seems eager to run > on new hardware. > > Could it be that the Linux community hasn't been quick enough in > accepting IBM's contributions to Linux to support POWER, so IBM

[Beowulf] It is time ...

2018-10-22 Thread Douglas Eadline
https://beowulfbash.com/ And, if your company wants in on the action, let me know off-list. We have room for more sponsors. The beobash provides a great way to get noticed in front of a large HPC crowd. -- Doug -- MailScanner: Clean ___ Beowulf

Re: [Beowulf] If I were specifying a new cluster...

2018-10-15 Thread Douglas Eadline
Thanks for all the replies. I may write this up. One other pointed question: in general, how much have the Spectre/Meltdown issue and the Intel fab issues affected your decisions on Intel vs AMD? Or are your decisions still based on performance/price/other benchmarks? -- Doug > All: > > Over the

[Beowulf] If I were specifying a new cluster...

2018-10-11 Thread Douglas Eadline
All: Over the last several months I have been reading about: 1) Spectre/meltdown 2) Intel Fab issues 3) Supermicro MB issues I started thinking, if I were going to specify a single rack cluster, what would I use? I'm assuming a general HPC workload (not deep learning or analytics) I need to

Re: [Beowulf] Hacked MBs It was only a matter of time

2018-10-05 Thread Douglas Eadline
ohnny English (aka Doug Eadline) must be brought out of retirement > due to using only analogue. > The lovely red Aston Martin belongs to Rowan Atkinson, and is chosen as > it has no digital ignition or ECU. > > > On Fri, 5 Oct 2018 at 13:23, Douglas Eadline wrote: >> >&g

Re: [Beowulf] Hacked MBs It was only a matter of time

2018-10-05 Thread Douglas Eadline
mberg/ > On Thu, 4 Oct 2018 at 20:52, Andrew Latham wrote: >> >> And news directly from Supermicro >> https://www.supermicro.com/newsroom/pressreleases/2018/press181004_Bloomberg.cfm >> >> On Thu, Oct 4, 2018 at 8:48 AM Douglas Eadline >> wrote: >>> >>&g

[Beowulf] Hacked MBs It was only a matter of time

2018-10-04 Thread Douglas Eadline
https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-america-s-top-companies (limited free articles) First question: So who has Supermicro motherboards? Second question: Where else are these devices? Third question: Who else is making/inserting

Re: [Beowulf] Administrivia: list admin travelling

2018-10-02 Thread Douglas Eadline
Too bad you won't be at SC. Darn. And, a thousand thank-yous for tending to the list. Of all the recent shiny things on the inter-webs, this list ranks as one of the most valuable HPC resources since I started reading it back in the 1990s. Although I do miss the occasional RGB (Robert Brown)

[Beowulf] SC18: Needed Beowulf Memories (Update)

2018-09-10 Thread Douglas Eadline
Update: It looks like we are going to be coupled with the SC 30th year display outside the main show entrance. We have about 10x20 feet (3x6m) of space. We are also hoping this space becomes a stop and chat area for Beowulfers (there will be chairs) More to come, working on the Bash as well.

Re: [Beowulf] SC18: Needed Beowulf Memories

2018-08-16 Thread Douglas Eadline
Ha Ha, yes I mean anecdotes And by the way, you can't use the anecdote when I used antidote instead of anecdotes, just to be clear -- Doug > There is no antidote to Beowulfry, but I'm sure you will get anecdotes. > :-) > > Peter > > On Thu, Aug 16, 2018 at 8:42 AM, Dougla

[Beowulf] SC18: Needed Beowulf Memories

2018-08-16 Thread Douglas Eadline
Hello fellow Beowulfers Normally we send emails about SC and Beowulf Bash in a timely fashion a few weeks before November (okay that is a bit generous, but let's continue) This year is the 30th anniversary of SC (BTW "SC" stands for Supercomputing, which is what it was originally called, but

Re: [Beowulf] ServerlessHPC

2018-07-25 Thread Douglas Eadline
Wow, bitcoins! Sign me up -- Doug > All credit goes to Pim Schravendijk for coining a new term on Twitter > today > https://twitter.com/rdwrt > https://twitter.com/rdwrt/status/1021761796498182144?s=03 > > We will all be doing it in six months time.

Re: [Beowulf] New Spectre attacks - no software mitigation - what impact for HPC?

2018-07-17 Thread Douglas Eadline
I saw that as well. I'm always a bit skeptical about some of these theoretical attacks. IMO there should be a "degree of difficulty" (of sorts) assigned to these hardware issues. Then you can decide on a risk strategy. Multicore really introduced a lot of issues. For those that can remember,

Re: [Beowulf] Intel Storm on the Horizon ?

2018-07-03 Thread Douglas Eadline
tart eating >> their lunch for a while. >> >> Just one guy's opinion, and you know what they say about opinions... >> >> >> On Mon, Jul 2, 2018 at 8:30 AM, Douglas Eadline >> wrote: >> >>> >>> These two stories from SemiAccurate are a bi

[Beowulf] Intel Storm on the Horizon ?

2018-07-02 Thread Douglas Eadline
These two stories from SemiAccurate are a bit disconcerting. There is a bit of company dirty laundry that makes the Krzanich firing seem silly, but the money statement is: "Yields of fully working chips rounds to zero. Intel’s 10nm process flat out doesn’t work and SemiAccurate’s sources are

Re: [Beowulf] Working for DUG, new thead

2018-06-13 Thread Douglas Eadline
Add to that "Please identify the situations that justify using a LART?" https://en.wiktionary.org/wiki/LART -- Doug > One of my standard interview questions is to say ok, you start on Monday > and you're placed in charge of a web/db server, tell me what you do your > first week. > > What I

Re: [Beowulf] Fwd: Project Natick

2018-06-07 Thread Douglas Eadline
-snip- > > i'm not sure i see a point in all this anyhow, it's a neat science > experiment, but what's the ROI on sinking a container full of servers > vs just pumping cold seawater from 100ft down > I had the same thought. You could even do a salt water/clear water heat exchange and not have

Re: [Beowulf] Bright Cluster Manager

2018-05-04 Thread Douglas Eadline
Good points. I should have mentioned I was talking more about "generic mainstream HPC" (like you say "cloud") and not the performance cases where running on bare metal is essential. -- Doug > On Thursday, 3 May 2018 11:04:38 PM AEST Douglas Eadline wrote: > >&

Re: [Beowulf] Bright Cluster Manager

2018-05-03 Thread Douglas Eadline
And, I forgot to mention, the other important aspect here is reproducibility. Create/modify a code, put it in a signed container (like Singularity), use it, write the paper. Five years later the machine on which it ran is gone; your new grad student wants to re-run some data. Easy, because it is
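A sketch of the signed-container workflow described above, using the Singularity CLI (build/sign/verify/exec are real Singularity 3.x subcommands; the file names paper.def, paper.sif, and ./analysis are illustrative):

```
# Build an immutable SIF image from a definition file
singularity build paper.sif paper.def

# Cryptographically sign the image with a local PGP key
singularity sign paper.sif

# Years later: verify the signature, then re-run the analysis unchanged
singularity verify paper.sif
singularity exec paper.sif ./analysis
```

Because the SIF image is a single signed file, archiving it alongside the paper is enough to reproduce the run on any later machine with a compatible kernel.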

Re: [Beowulf] Bright Cluster Manager

2018-05-03 Thread Douglas Eadline
Here is where I see it going: 1. Compute nodes with a base minimal generic Linux OS (with PR_SET_NO_NEW_PRIVS in the kernel, added in 3.5) 2. A scheduler (that supports containers) 3. Containers (Singularity mostly) All "provisioning" is moved to the container. There will be edge cases of
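For reference, the PR_SET_NO_NEW_PRIVS flag mentioned in item 1 can be set from user space with prctl(2). A minimal Linux-only sketch via ctypes (the constant value 38 comes from linux/prctl.h; requires kernel 3.5 or later):

```python
import ctypes

# PR_SET_NO_NEW_PRIVS = 38 in linux/prctl.h (kernel >= 3.5). Once set, this
# process and all its children can never gain privileges via setuid/setgid
# binaries or file capabilities -- the property a minimal node OS relies on
# when a scheduler launches user-supplied containers.
PR_SET_NO_NEW_PRIVS = 38

libc = ctypes.CDLL(None, use_errno=True)
ret = libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
if ret != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_NO_NEW_PRIVS) failed")

# Confirm the flag took effect (Linux exposes it in /proc/self/status)
with open("/proc/self/status") as f:
    status = {k: v.strip() for k, v in
              (line.split(":", 1) for line in f if ":" in line)}
print(status.get("NoNewPrivs"))
```

The flag is one-way: it cannot be cleared for the lifetime of the process, which is exactly why it pairs well with unprivileged container launches.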

Re: [Beowulf] nVidia revealed as evil

2018-01-04 Thread Douglas Eadline
My first response was to chuckle. As you know, the entire concept of Beowulf clusters was based largely on using hardware that was not supposed to be used. "You can't use desktop x86 and Ethernet for supercomputing!" (as it was called at the time) We also know Intel decided to fuse off the

[Beowulf] Yes it is that time of year again

2017-10-23 Thread Douglas Eadline
The 2017 Beowulf Bash and Bowling Fete http://beowulfbash.com/ -- Doug

Re: [Beowulf] What is rdma, ofed, verbs, psm etc?

2017-09-21 Thread Douglas Eadline
> What about RoCE? Is this something that is commonly used (I would guess > no since I have not found much)? Are there other protocols that are > worth considering (like "gamma" which doesn't seem to be developed > anymore)? Gamma has not been around for years. There was open-mx
