Thu, 27 Sep 2012 11:11:24 +1000 from Christopher Samuel sam...@unimelb.edu.au:
On 27/09/12 03:52, Andrew Holway wrote:
Let the benchmarks begin!!!
Assuming the license agreement allows you to publish them..
:-) For example: Gaussian-09/03/... licenses
On Wed, 26 Sep 2012 23:59:13 +0200, you wrote:
Yes, easily.
Google for what Linus posted there, and what I posted there in code,
back around 2007.
That's where I showed how messed up GCC was: it basically turned a
simple code sample
into something ugly and slow, instead of emitting a CMOV.
On Wed, 26 Sep 2012 10:54:01 +0200, you wrote:
There's nearly nothing there at that link.
Just a handful of SRPMs.
All of the SRPMs for SL are there.
My point about OpenFabrics is: most people build a cluster in order to
have more performance
than a single machine can give. Not seldom that's
I have a modest proposal:
standardize the location of liquid-cooled cold plates in each 19" rack.
nodes would have internal heatpipes from heat sources (presumably CPUs
mostly) to plates along the sides to mate/contact with the rack.
I have an aging machine room with ~50 racks, with the compute
in the spirit of Friday, here's another, even less realistic idea:
let's slide 1U nodes into a rack on their sides, and forget the
silly, fussy, vendor-specific, not-that-cheap rail mechanism entirely.
how often do you actually pull out nodes, and of those few times,
how often do you really need
2012/9/28 Mark Hahn h...@mcmaster.ca:
in the spirit of Friday, here's another, even less realistic idea:
let's slide 1U nodes into a rack on their sides, and forget the
silly, fussy, vendor-specific, not-that-cheap rail mechanism entirely.
That sounds almost as good as submerging your servers
On Fri, Sep 28, 2012 at 03:44:09PM -0400, Douglas Eadline wrote:
Not so crazy.
Years ago I had some L shaped pieces of steel made that allowed
nodes to slide in horizontally as you describe (and for the same
reasons). It provided enough of a lip to support the node
that was attached
-Original Message-
From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On
Behalf Of Douglas Eadline
Sent: Friday, September 28, 2012 12:44 PM
To: Mark Hahn
Cc: Beowulf Mailing List
Subject: Re: [Beowulf] another crazy idea
I have a modest proposal:
That's always a tricky phrase... fortunately you're not talking about novel
food sources here.
standardize the location of liquid-cooled cold plates in each 19" rack.
nodes would have internal heatpipes from heat sources (presumably CPUs
mostly) to plates along the
Sounds expensive, complicated, and challenging.
How about a MUCH simpler proposal: eliminate fans from compute nodes.
Nodes should:
* assume good front to back airflow
Racks would:
* have large fans front AND back that run at relatively low
rpm, and relatively quietly.
* If front or rear door
Jim Lux wrote:
Whatever happened to a Beowulf made of tower cases piled on
brick-and-board bookcases?
Many years ago at a European Supercomputer Conference I saw a vendor
selling a cluster as a set of short tower cases. Slightly more seriously,
in the past a topic of the Beowulf mailing list
Sounds expensive, complicated, and challenging.
I dunno - it seems elegantly modular to me. Vendors are responsible
for getting the heat to the cold plate (via heatpipes, probably; these
days, heatpipes are extremely widespread and well-controlled: every
laptop has them, many GPU cards and