HPC Sysadmins will have to gain other skills
https://youtu.be/Jf8Sheh4MD4?si=La0KfEF6OGPRKA2-
On Sun, 7 Apr 2024 at 23:07, Scott Atchley
wrote:
> On Sun, Mar 24, 2024 at 2:38 PM Michael DiDomenico
> wrote:
>
>> i'm curious if others think DLC might hit a power limit sooner or later,
>> like
There is a Jobs channel on hpc.social
Just saying
On Fri, Feb 23, 2024, 2:09 PM Michael DiDomenico
wrote:
> Maybe we should come up with some kind of standard/wording/what-have-you to
> post such. I have some open positions as well. might liven the list up a
> little too... :)
>
> On Thu, Feb
https://sealandgov.org/
Move to Sealand. It is a WW2 gun platform in the North Sea, off the English coast.
I believe the servers are down in the legs.
On Mon, 13 Nov 2023, 17:07 Joshua Mora, wrote:
> Some folks trying to legally bypass government restrictions.
>
> Is land on the Moon or Mars on sale for
netloc is the tool you want to use.
Look in the latest hwloc documentation.
On Wed, 20 Sep 2023, 13:55 John Hearns, wrote:
> I did manage to get the graphical netloc utility working once. Part of the
> hwloc/openmpi project.
>
> It produces a very pretty image of the IB topology. I think
Does ibnetdiscover not help you?
On Tue, 19 Sep 2023, 19:03 Michael DiDomenico,
wrote:
> does anyone know if there's a simple command to pull the neighbor of
> an IB port? for instance, this horrible shell command line
>
> # for x in `ibstat | awk -F \' '/^CA/{print $2}'`; do iblinkinfo
I did manage to get the graphical netloc utility working once. Part of the
hwloc/openmpi project.
It produces a very pretty image of the IB topology. I think if you zoom in you
can get neighbours.
A few years since I used it.
On Tue, 19 Sep 2023, 19:03 Michael DiDomenico,
wrote:
> does anyone know
I would look at BeeGFS here
On Thu, 10 Aug 2023, 20:19 leo camilo, wrote:
> Hi everyone,
>
> I was hoping I would seek some sage advice from you guys.
>
> At my department we have built this small prototyping cluster with 5
> compute nodes, 1 name node and 1 file server.
>
> Up until now, the
the mix, doing that to speed things up.
> Again.. I would like to apologize for being quiet for so long. I'll try
> to toss an "ack" in there from my phone if nothing else.
>
>
> ./Andrew Falgout
> KG5GRX
>
>
> On Mon, Jul 31, 2023 at 6:10 AM John Hearns wrote:
A quick ack would be nice.
On Fri, 28 Jul 2023, 06:38 John Hearns, wrote:
> Andrew, the answer is very much yes. I guess you are looking at the
> interface of 'traditional' HPC which uses workload schedulers and
> Kubernetes style clusters which use containers.
> Firstly I woul
Andrew, the answer is very much yes. I guess you are looking at the
interface of 'traditional' HPC which uses workload schedulers and
Kubernetes style clusters which use containers.
Firstly I would ask if you are coming from the point of view of someone who
wants to build a cluster in your home or
All the cool kids are on hpc.social.
I am on the Slack there. I would encourage everyone to come over
On Wed, 26 Jul 2023, 14:39 Michael DiDomenico,
wrote:
> just a mailing list as far as i know. it used to get a lot more
> traffic, but seems to have simmered down quite a bit
>
> On Tue, Jul
Rugged individualist? I like that... Me puts on plaid shirt and goes to
wrestle with some bears...
> Maybe it is time for an HPC Linux distro, this is where
Good move. I would say a lightweight distro that does not do much and is
rebooted every time a job finishes.
Wonder what security types
There is a good discussion on this topic over on the Slack channel at
hpc.social
I would urge anyone on this list to join up there - you will find a home.
hpcsocial.slack.com
On Mon, 26 Jun 2023 at 19:27, Prentice Bisbal via Beowulf <
beowulf@beowulf.org> wrote:
> Beowulfers,
>
> By now, most
That Supermicro board sounds like one of the boards from an ICE cluster,
right?
I know Joe flagged up the BIOS - thinking out loud, is it not possible to
copy the BIOS from another, working, board of the same model?
Regarding SGI workstations when I worked in post production at Framestore
we had
Jörg, I would have a look at the Archer/UK-HPC benchmarks
https://github.com/hpc-uk/archer-benchmarks
They have Castep and CP2K in the applications benchmarks which will be
relevant to you.
Also thank you for looking for advice here!
As someone who has worked for several cluster vendors, please
All good Jim. However to be allowed to benchmark these systems you must
pronounce the CPU as "Milawn"
As I said elsewhere, they are getting pretty far north now. Is the plan to
cross the Alps?
On Tue, 9 Nov 2021 at 09:23, Jim Cownie wrote:
> @Prentice:
> > Certainly looking forward to running
I recently saw a presentation which referenced a framework to test out
Infiniband (or maybe in general MPI) fabrics.
This was a Github repository.
It ran a series of inter-node tests and analysed the results.
It seemed similar in operation to Linktest
As Paul says - start a subnet manager. I guess you are using the distro
supplied IB stack?
Run the following commands:
sminfo
ibdiagnet
These will check out your subnet manager and your fabric.
On Wed, 20 Oct 2021 at 17:21, Paul Edmon via Beowulf
wrote:
> Oh you will also need a IB subnet
The engine hoist is just superb! The right tool for the job.
Thinking about this, old style factories had overhead cranes. At Glasgow
University we had a cyclotron, and I am told one of the professors took a
great joy in driving the crane.
The Tate Modern art gallery has a huge overhead crane,
I once had an RMA case for a failed tape with Spectralogic. To prove it was
destroyed and not re-used I asked the workshop guys to put it through a
bandsaw, then sent off the pictures.
On Wed, 29 Sept 2021 at 16:47, Ellis Wilson wrote:
> On 9/29/21 11:41 AM, Jörg Saßmannshausen wrote:
> > If
Some points well made here. I have seen in the past job scripts passed on
from graduate student to graduate student - the case I am thinking of was
an Abaqus script for 8 core systems, being run on a new 32 core system. Why
WOULD a graduate student question a script given to them - which works.
Over on the Julia discussion list there are often topics on performance or
varying performance - these often turn out to be due to the BLAS libraries
in use, and how they are being used.
I believe that there is a project for a pure-Julia BLAS.
On Mon, 20 Sept 2021 at 18:41, Lux, Jim (US 7140) via
Yes, but which foot? You have enough space for two toes from each foot for
a taste, and you then need some logic to decide which one to use.
On Mon, 20 Sept 2021 at 21:59, Prentice Bisbal via Beowulf <
beowulf@beowulf.org> wrote:
> On 9/20/21 6:35 AM, Jim Cownie wrote:
>
> >> Eadline's Law :
This talk by Keith Manthey is well worth listening to. Vendor neutral as I
recall, so don't worry about a sales message being pushed.
HPC Storage 101 in this series
https://www.dellhpc.org/eventsarchive.html
On Sat, 18 Sept 2021 at 18:21, Lohit Valleru via Beowulf <
beowulf@beowulf.org> wrote:
>
Eadline's Law : Cache is only good the second time.
On Fri, 17 Sep 2021, 21:25 Douglas Eadline, wrote:
> --snip--
> >
> > Where I disagree with you is (3). Whether or not cache size is important
> > depends on the size of the job. If your iterating through data-parallel
> > loops over a large
Lohit, good morning. I work for Dell in the EMEA HPC team. You make some
interesting observations.
Please ping me offline regarding Isilon.
Regarding NFS we have a brand new Ready Architecture which uses Poweredge
servers and ME series storage (*)
It gets some pretty decent performance and I
If anyone works with Dell kit I am happy to discuss thermal profiles and
power capping. But definitely off list.
On Wed, 25 Aug 2021 at 07:16, Tony Brian Albers wrote:
> I have a Precision 5820 in my office. It's only got one CPU(14 physical
> cores), but it's more quiet than my HP SFF desktop
/twitter.com/thedeadline/status/1424833944000909313
>
> On 17 Aug 2021, at 07:16, Chris Samuel wrote:
>
> Hi John,
>
> On Monday, 16 August 2021 12:57:20 AM PDT John Hearns wrote:
>
> The Beowulf list archives seem to end in July 2021.
> I was looking for Doug Eadline's pos
The Beowulf list archives seem to end in July 2021.
I was looking for Doug Eadline's post on limiting AMD power and the results
on performance.
John H
That is a very interesting point! I never thought of that.
Also mobile drives ARM development - yes I know the CPUs in Isambard and
Fugaku will not be seen in your mobile phone but the ecosystem is propped
up by having a diverse market and also the power saving priorities of
mobile will influence
Regarding benchmarking real-world codes on AMD, every year Martyn Guest
presents a comprehensive set of benchmark studies to the UK Computing
Insights Conference.
I suggest a Sunday afternoon with the beverage of your choice is a good
time to settle down and take time to read these or watch the
https://bofhcam.org/co-larters/lart-reference/index.html
On Fri, 26 Mar 2021 at 13:57, Michael Di Domenico
wrote:
> does anyone have a recipe for limiting the damage people can do on
> login nodes on rhel7. i want to limit the allocatable cpu/mem per
> user to some low
Referring to lambda functions, I think I flagged up that AWS now supports
containers up to 10GB in size for the lambda payload
https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
which makes a Julia language lambda possible
https://www.youtube.com/watch?v=6DvpneWRb_w
On
In the seminar the graph of sequencing effort for Sanger/ rest of UK/
worldwide is very impressive.
On Thu, 4 Feb 2021 at 10:21, Tim Cutts wrote:
>
>
> > On 3 Feb 2021, at 18:23, Jörg Saßmannshausen <
> sassy-w...@sassy.formativ.net> wrote:
> >
> > Hi John,
> >
> > interesting stuff and good
https://edition.cnn.com/2021/02/03/europe/tracing-uk-variant-origins-gbr-intl/index.html
Dressed in white lab coats and surgical masks, staff here scurry from
machine to machine -- robots and giant computers that are so heavy, they're
placed on solid steel plates to support their weight.
Heavy
Stupid question from me - does OneAPI handle Xeon Phi?
(a) I should read the manual
(b) it is a discontinued product - why would they put any effort into it
On Thu, 31 Dec 2020 at 05:52, Jonathan Engwall <
engwalljonathanther...@gmail.com> wrote:
> Hello Beowulf,
> Both the Xeon Phi and Tesla
Great vision Doug.
May I also promote EESSI https://www.eessi-hpc.org/
(the European part may magically be transformed into something else soon)
On Fri, 11 Dec 2020 at 18:57, Douglas Eadline wrote:
>
>
> Some thoughts on this issue and future HPC
>
> First, in general it is poor move by
A quick reminder that there are specific Redhat SKUs for cluster head nodes
and a cheaper one for cluster nodes.
The announcement regarding CentOS Stream said that there would be a new
offer.
On Wed, 9 Dec 2020 at 11:59, Peter Kjellström wrote:
> On Tue, 8 Dec 2020 18:13:46 +
> Ryan
Jorg, a big seismic processing company I worked with did indeed use Debian.
The answer though is that industrial customers use commercial software
packages which are licensed and they want support from the software vendors.
If you check the OSes which are supported then you find Redhat and SuSE.
Reviving this topic slightly, these were flagged up on the Julia forum
https://github.com/aws/aws-lambda-runtime-interface-emulator
The Lambda Runtime Interface Emulator is a proxy for Lambda’s Runtime and
Extensions APIs, which allows customers to locally test their Lambda
function packaged as
James, that is cool!
A thought I have had - for HA setups DRBD can be used for the shared files
which the nodes need to keep updated.
Has anyone tried Syncthing for this purpose?
I suppose there is only one way to find out!
On Fri, 27 Nov 2020 at 01:06, James Braid wrote:
> On Wed, 25 Nov 2020,
Jorg, I think I might know where the Lustre storage is!
It is possible to install storage routers, so you could route between
ethernet and infiniband.
It is also worth saying that Mellanox have Metro Infiniband switches -
though I do not think they go as far as the west of London!
Seriously
h can be fun if your skill
> needs to fetch data from some other source (in my case a rather sluggish
> data service in Azure run by my local council), and there’s no clean way to
> handle the event if you hit the 8 second limit, the function just gets
> terminated and Alexa returns a rath
Or to put it simply: "Alexa - sequence my genome"
On Wed, 25 Nov 2020 at 09:45, John Hearns wrote:
> Tim, that is really smart. Over on the Julia discourse forum I have blue
> skyed about using Lambdas to run Julia functions (it is an inherently
> functional language) (*)
Tim, that is really smart. Over on the Julia discourse forum I have blue
skyed about using Lambdas to run Julia functions (it is an inherently
functional language) (*)
Blue skying further, for exascale compute needs can we think of 'Science as
a Service'?
As in your example the scientist thinks
This article might be interesting here:
https://www.dell.com/support/article/en-uk/sln319015/amd-rome-is-it-for-real-architecture-and-initial-hpc-performance?lang=en
And Hello Joshua. Long time no see.
On Sun, 25 Oct 2020 at 23:11, Joshua Mora wrote:
> Reach out AMD,
> they have specific
> Most compilers had extensions from the IV/66 (or 77) – quoted strings,
for instance, instead of Hollerith constants, and free form input. Some
allowed array index origins other than 1
I can now date exactly when the rot set in.
Hollerith constants are good enough for anyone. It's a gosh darned
which is primarily a people problem, not a computer
>> problem. Their existing work and data flows are already parallelized in
>> some sense, and if they need to do it faster, they just add processors or
>> storage as needed.
> I have a "let it mellow a bit" approach to shiny new software.
Software as malt whisky... I like it.
Which reminds me to ask re LECBIG plans?
On Mon, 19 Oct 2020 at 15:28, Douglas Eadline wrote:
> --snip--
>
> > Unfortunately the presumption seems to be that the old is deficient
> > because
replace English, French, … which are all older
> than any of our programming languages, and which adapt, as do our
> programming languages).
>
> On 19 Oct 2020, at 09:48, John Hearns wrote:
>
> Jim you make good points here. I guess my replies are:
>
> Modern Fortran workshops ex
>
> > On 15 Oct 2020, at 12:07, Oddo Da wrote:
> >
> > On Thu, Oct 15, 2020 at 1:11 AM John Hearns wrote:
> > This has been a great discussion. Please keep it going.
> >
> > I am all out of ammo ;). In all seriousness, it is not easy to ask these
> qu
Hello Prentice. I think you need to come over to the Julia Discourse
https://discourse.julialang.org/t/knet-on-powerpc64le-platform/48149
On Thu, 15 Oct 2020 at 22:09, Joe Landman wrote:
> Cool (shiny!)
> On 10/15/20 5:02 PM, Prentice Bisbal via Beowulf wrote:
>
> So while you've all been
This has been a great discussion. Please keep it going.
To the points on technical debt, may I also add re-validation?
Let's say you have a weather model which your institute has been running
for 20 years.
If you decide to start again from fresh with code in a new language you are
going to have
Jorg, I would back up what Matt Wallis says. What benefits would OpenStack
bring you?
Do you need to set up a flexible infrastructure where clusters can be
created on demand for specific projects?
Regarding Infiniband the concept is SR-IOV. This article is worth reading:
The video is here. From 04:00 onwards
https://fosdem.org/2020/schedule/event/magic_castle/
"OK your cluster will be available in about 20 minutes"
On Tue, 30 Jun 2020 at 14:27, INKozin wrote:
> And that's how you deploy an HPC cluster!
>
> On Tue, 30 Jun 2020 at 14:21,
I saw Magic Castle being demonstrated live at FOSDEM this year.
It is more a Terraform/Ansible setup for configuring clusters on demand.
The person demonstrating it called a Google Home assistant with a voice
command and asked it to build and deploy a cluster - which it did!
On Tue, 30 Jun 2020
Will it dream of electric sheep when they turn out the lights and let it
sleep?
https://www.lanl.gov/discover/news-release-archive/2020/June/0608-artificial-brains.php
On Fri, 12 Jun 2020 at 01:16, Jonathan Engwall <
engwalljonathanther...@gmail.com> wrote:
> This machine is planned, or
Thanks Chris. I worked in one place which was setting up Reframe. It
looked to be complicated to get running.
Has this changed?
On Thu, 30 Apr 2020 at 20:09, Chris Samuel wrote:
> On 4/30/20 6:54 am, John Hearns wrote:
>
> > That is a four letter abbreviation...
>
> A
s normally the Intel C Compiler, or
> C/C++ compiler suite (since you invoke the C compiler as “icc”). :-)
>
> On 30 Apr 2020, at 08:37, John Hearns wrote:
>
> Thanks Prentice. I was discussing this only two days ago...
> I used the older version of ICC when working at XMA in the U
Thanks Prentice. I was discussing this only two days ago...
I used the older version of ICC when working at XMA in the UK.
When the version changed I found it a lot more difficult to implement.
I looked two days ago and the project seems to be revived, and incorporated
into oneAPI
Is anyone
Thinking about the applications to be run at a community college, the
concept of a local weather forecast has been running around in my head
lately.
The concept would be to install and run WRF, perhaps overnight, and produce
a weather forecast in the morning.
I suppose this hinges on WRF having a
e been running some of them as a vSphere cluster and
> others as standalone CUDA machines.
>
> So that’s one vote for OpenHPC.
>
> Cheers
>
> Richard
>
> On 21 Aug 2019, at 3:45 pm, John Hearns via Beowulf
> wrote:
>
> Add up the power consumption for each of those
Add up the power consumption for each of those servers. If you plan on
installing this in a domestic house or indeed in a normal office
environment you probably won't have enough amperage in the circuit you
intend to power it from.
Sorry to be all doom and gloom.
Also this setup will make a great
https://www.scientific-computing.com/news/cray-announces-shasta-software
Joe Landman, would you care to tell us more?
The integration of Kubernetes and batch system sounds interesting.
The RadioFreeHPC crew are listening to this thread I think! A very relevant
podcast
https://insidehpc.com/2019/07/podcast-is-cloud-too-expensive-for-hpc/
Re Capital One, here is an article from the Register. I think this is going
off topic.
Terabyte-scale data movement into or out of the cloud is not scary in
2019. You can move data into and out of the cloud at basically the line
rate of your internet connection as long as you take a little care in
selecting and tuning your firewalls and inline security devices. Pushing
1TB/day
Having just spouted on about snaps/flatpak I saw on the roadmap for AWS
Firecracker that snap support is to be included.
Sorry that I am conflating snap and flatpak.
On Tue, 23 Jul 2019 at 07:06, John Hearns wrote:
> Having used Snaps on Ubuntu - which seems to be their preferred met
Having used Snaps on Ubuntu - which seems to be their preferred method of
distributing some applications,
I have a slightly different take on the containerisation angle and would
de-emphasise that.
My take is that snaps/flatpak attack the "my distro ships with gcc version
4.1 but I need gcc
Forgiveness is sought for my ongoing Julia fandom.
We have seen a lot of articles recently on industry websites about
how machine learning workloads are being brought onto traditional HPC
platforms.
This paper on how Julia is bringing them together is I think significant
Igor, if there are any papers published on what you are doing with these
images I would be very interested.
I went to the new London HPC and AI Meetup on Thursday, one talk was by
Odin Vision which was excellent.
Recommend the new Meetup to anyone in the area. Next meeting 21st August.
And a plug
Probably best asking this question over on the GPFS mailing list.
A bit of Googling reminded me of https://www.arcastream.com/ They are
active in the UK Academic community,
not sure about your neck of the woods.
Give them a shout though and ask for Steve Mackie.
Regarding serial ports - if you have IPMI then of course you have a virtual
serial port.
I learned something new about serial ports and IPMI Serial Over LAN
recently.
First of all you have to use the kernel boot options console=tty0
console=ttyS1,115200
This is well known.
In the bad old
Gerald that is an excellent history.
One small thing though: "Of course the ML came along"
What came first - the chicken or the egg? Perhaps the Nvidia ecosystem made
the ML revolution possible.
You could run ML models on a cheap workstation or a laptop with an Nvidia
GPU.
Indeed I am sitting next
Seriously? Wha.. what? Someone needs to get help.
And it wasn't me. I am a member of the People's Front of Julia.
(contrived Python reference intentional)
On Wed, 8 May 2019 at 22:57, Jeffrey Layton wrote:
> I wrote some OpenACC articles for HPC Admin Magazine. A number of
> pro-OpenMP people
I disagree. IT is a cyclical industry.
Back in the bad old days codes were written to run on IBM mainframes. Which
used the EBCDIC character set.
There were Little Endian and Big Endian machines.
VAX machines had a rich set of file IO patterns. I really don't think you
could read data written on an
:18, John Hearns wrote:
> Chris, I have to say this. I have worked for smaller companies, and have
> worked for cluster integrators.
> For big University sized and national labs the procurement exercise will
> end up with a well defined support arrangement.
>
> I have seen,
Chris, I have to say this. I have worked for smaller companies, and have
worked for cluster integrators.
For big University sized and national labs the procurement exercise will
end up with a well defined support arrangement.
I have seen, in one company I worked at, an HPC system arrive which I
https://www.brightcomputing.com/
Bright will certainly give you excellent support.
On Thu, 2 May 2019 at 17:02, John Hearns wrote:
> You ask some damned good questions there.
> I will try to answer them from the point of view of someone who has worked
> as an HPC systems integrator and
You ask some damned good questions there.
I will try to answer them from the point of view of someone who has worked
as an HPC systems integrator and supported HPC systems,
both for systems integrators and within companies.
We will start with HP. Did you buy those systems direct from HP as
On the RHEL 6.9 servers run ibstatus, and also sminfo.
On Wed, 1 May 2019 at 16:23, John Hearns wrote:
> link_layer: Ethernet
>
> E….
>
> On Wed, 1 May 2019 at 16:18, Faraz Hussain wrote:
>
>>
>> Quoting John Hearns :
>>
>> > Wh
link_layer: Ethernet
E….
On Wed, 1 May 2019 at 16:18, Faraz Hussain wrote:
>
> Quoting John Hearns :
>
> > What does ibstatus give you
>
> [hussaif1@lustwzb33 ~]$ ibstatus
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:00
I think I was on the wrong track regarding the subnet manager, sorry.
What does ibstatus give you?
On Wed, 1 May 2019 at 15:31, John Hearns wrote:
> E.. you are not running a subnet manager?
> Do you have an Infiniband switch or are you connecting two servers
> back-to-back?
>
E.. you are not running a subnet manager?
Do you have an Infiniband switch or are you connecting two servers
back-to-back?
Also - have you considered using OpenHPC rather than installing CentOS on
two servers?
When you expand, this manual installation is going to be painful.
On Wed, 1 May
Hi Faraz. Could you make another summary for us?
What hardware and what Infiniband switch do you have?
Run these commands: ibdiagnet and sminfo
You originally had the OpenMPI which was provided by CentOS?
You compiled the OpenMPI from source?
How are you bringing the new OpenMPI version into
Hello Faraz. Please start by running this command: ompi_info
On Tue, 30 Apr 2019 at 15:15, Faraz Hussain wrote:
> I installed RedHat 7.5 on two machines with the following Mellanox cards:
>
> 87:00.0 Network controller: Mellanox Technologies MT27520 Family
> [ConnectX-3 Pro
>
> I followed
Hi Jorg. I will mail you offline.
IBM support for GPFS is excellent - so if they advise a check like that it
is needed.
On Tue, 30 Apr 2019 at 04:53, Chris Samuel wrote:
> On Monday, 29 April 2019 3:47:10 PM PDT Jörg Saßmannshausen wrote:
>
> > thanks for the feedback. I guess it also depends
I matriculated (enrolled) at Glasgow University in 1981 (Scots lads and
lasses start Yoonie at a tender age!).
My Computer Science teacher was Jennifer Haselgrove.
https://en.wikipedia.org/wiki/Jenifer_Haselgrove
Wonderful lady, who of course did not have a degree in Comp Sci - as there
were none
I think this should have a new thread.
I have taken a bit of an interest in quantum computing recently.
There are no real qubit based quantum computers which are ready for work at
the moment. There ARE demonstrators available from IBM etc.
The most advanced machine which is available for work is
, Jonathan Aquilina
wrote:
> I do apologize there but I think what is JIT is JuliaDB side of things.
> Julia has a lot of potential for sure will be interesting to see how it
> develops as the little I have already played with it im really liking it.
>
>
>
> *From: *Beowulf on
Jonathan, a small correction if I may. Julia is not JIT - I asked on the
Julia discourse. A much better description is Ahead of Time compilation.
Not really important, but JIT triggers a certain response with most people.
On Thu, 14 Mar 2019 at 07:31, Jonathan Aquilina
wrote:
> Hi All,
>
>
>
>
be there.
On Sun, 10 Mar 2019 at 10:57, John Hearns wrote:
> Jonathan, damn good question.
> There is a lot of debate at the moment on how 'traditional' HPC can
> co-exist with 'big data' style HPC.
>
> Regarding Julia, I am a big fan of it and it brings a task-level paradig
Jonathan, damn good question.
There is a lot of debate at the moment on how 'traditional' HPC can
co-exist with 'big data' style HPC.
Regarding Julia, I am a big fan of it and it brings a task-level paradigm to
HPC work.
To be honest though, traditional Fortran codes will be with us forever.
Talking about missing values... Joe Landman is sure to school me again
for this one (owwwccchhh)
https://docs.julialang.org/en/v1/manual/missing/index.html
Going back to the hardware, a 250Gbyte data size is not too large to hold
in RAM.
This might be a good use case for Intel Optane persistent
Jonathan, I am going to stick my neck out here. I feel that HDFS was a
'thing of its time' - people are slavishly building clusters with local
SATA drives to follow that recipe.
Current parallel filesystems have adapters which make them behave like HDFS.
You can then install OpenHPC on the same server, as OpenHPC is an 'overlay',
then start building the cluster.
On Sun, 3 Mar 2019 at 09:24, John Hearns wrote:
> I second OpenHPC. It is actively maintained and easy to set up.
>
> Regarding the hardware, have a look at Doug Eadline
I second OpenHPC. It is actively maintained and easy to set up.
Regarding the hardware, have a look at Doug Eadline's Limulus clusters. I
think they would be a good fit.
Doug's site is excellent in general: https://www.clustermonkey.net/
Also some people build Raspberry Pi clusters for learning.
Pah. This is nothing. This is what a systems engineer in a proper immersive
cooling data centre looks like
https://i.ytimg.com/vi/2S2aEcVbO48/maxresdefault.jpg
You've got 60 seconds to change that hard drive, or you run out of oxygen.
On Tue, 5 Feb 2019 at 16:49, Stu Midgley wrote:
> regular
Prentice, the website refers to Open Compute racks. "... technology has
been designed to fit into standard Open Compute racks".
So yep, 19 inch racks are not being targeted here. But OCP is pretty
widespread.
I would really like to find out if they can retrofit these to existing kit.
I suspect
Thinking about it, if they are sucking in air through very narrow slots
then sending it through an expanding chamber it will make a heck of a
noise. I wonder if you could tune each expansion pipe to a particular note,
and construct a mighty pipe organ on your data centre?
Tunes are produced as
Sorry, their videos do have a fan at one end.
In the video though they do say "enables ten times the server density" - as
opposed to what?
I am keeping an open mind though.
Forced Physics guys - hint I work somewhere which has lots of servers.
On Fri, 25 Jan 2019 at 17:15, John Hea