On Wed, 2008-09-24 at 14:45, [EMAIL PROTECTED] wrote:
A point of interest here is that reducing these service-related
interrupts was an important element in improving the HPL efficiency of
Windows HPC Server 2008 (over 2003) from sub 70% levels to closer to
80%.
Did this include
Hello List.
I've been searching the archives and googling for performance numbers of
different mpi implementations but have been unable to find any. Does
anyone have any numbers, links or analogies?
/Linus
___
Beowulf mailing list, Beowulf@beowulf.org
Robert G. Brown wrote:
On Thu, 18 Sep 2008, Mark Kosmowski wrote:
Questions (same RGB asked):
What environmental conditions should such an office have?
1.5ton A/C?
4kW capable wiring?
Beer keg refrigerator?
How many air vent and drip holes on the walls, ceiling and windows?
I'm really
2008/9/24 Vincent Diepeveen [EMAIL PROTECTED]
In HPC there is however one thing I really miss. I'm convinced it exists: a
kind of GPU-type CPU, with a lot of memory controllers
attached, that's doing calculations in double precision. A small team of 5
persons can build it and clock is oh
Very neat link.
This is just a thought, but with the advent (or recent popularity) of
green buildings, could one use the same technique to keep large stores
of computers cool? That is, could one run a series of pipes deep
(100-500ft) into the ground, maybe fill them with antifreeze or
On Tue, 23 Sep 2008, Ellis Wilson wrote:
I guess I don't quite understand why you disagree Prentice. With the
exception that middleware doesn't strive to be a classification per se,
just a solution, it still consists of a style of computing where you
sacrifice absolute high performance because
I am considering adding a small parallel file system (~5-10TB) to my small
cluster (~32 2x dual core Opteron nodes) that is used mostly by a handful of
regular users. Currently the only storage accessible to all nodes is home
directory space which is provided by the Lab's IT department (this is a
Hi folks:
[begin quick (shameless?) plug]
Ohio LinuxFest (http://www.ohiolinux.org/) will be running October
10-11th in Columbus, OH. We are one of the sponsors this year. We were
one of two talking about Beowulfery and HPC last year (had a JackRabbit
there). Will be going (and having
Ellis Wilson wrote:
This is just a thought, but with the advent (or recent popularity) of
green buildings, could one use the same technique to keep large stores
of computers cool? That is, could one run a series of pipes deep
(100-500ft) into the ground, maybe fill them with antifreeze or
Glen Beane wrote:
I am considering adding a small parallel file system (~5-10TB) to my small
cluster (~32 2x dual core Opteron nodes) that is used mostly by a handful of
regular users. Currently the only storage accessible to all nodes is home
directory space which is provided by the Lab's IT
On 9/25/08 10:19 AM, Joe Landman [EMAIL PROTECTED] wrote:
Glen Beane wrote:
I am considering adding a small parallel file system (~5-10TB) to my small
cluster (~32 2x dual core Opteron nodes) that is used mostly by a handful of
regular users. Currently the only storage accessible to all nodes
Glen Beane wrote:
[...]
Hi Glen:
BLAST uses mmap'ed IO. This has some interesting ... interactions
... with parallel file systems.
for what it's worth, we use Paracel BLAST and are also considering
mpiBLAST-pio to take advantage of a parallel file system
Cool.
On Sep 25, 2008, at 10:19 AM, Joe Landman wrote:
We have measured NFSoverRDMA speeds (on SDR IB at that) at 460 MB/s,
on an RDMA adapter reporting 750 MB/s (in a 4x PCIe slot, so ~860 MB/s
max is what we should expect for this). Faster IB hardware should
result in better performance,
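The ~860 MB/s expectation above follows from the PCIe gen-1 link arithmetic; a quick sketch, where the ~14% protocol-overhead figure is my own assumption chosen to illustrate how the quoted ceiling arises, not a number from the original message:

```python
# Back-of-the-envelope bandwidth ceiling for a 4x PCIe (gen 1) slot.
# Each lane runs at 2.5 GT/s with 8b/10b encoding, leaving 250 MB/s
# of payload bandwidth per lane.
lanes = 4
per_lane_mb_s = 250          # 2.5 GT/s * 8/10 encoding
raw = lanes * per_lane_mb_s  # 1000 MB/s of raw payload bandwidth

# Packet headers, flow control, etc. eat a further slice; ~14% is an
# assumed figure that reproduces the ~860 MB/s ceiling quoted above.
protocol_efficiency = 0.86
usable = raw * protocol_efficiency
print(round(usable))  # -> 860
```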
On 25 Sep 2008, at 3:19 pm, Joe Landman wrote:
BLAST uses mmap'ed IO. This has some interesting ...
interactions ... with parallel file systems.
It's not *too* bad on Lustre. We use it in production that way.
Are there other recommendations for fast scratch space (it doesn't
have to
Scott Atchley wrote:
On Sep 25, 2008, at 10:19 AM, Joe Landman wrote:
We have measured NFSoverRDMA speeds (on SDR IB at that) at 460 MB/s,
on an RDMA adapter reporting 750 MB/s (in a 4x PCIe slot, so ~860 MB/s
max is what we should expect for this). Faster IB hardware should
result in
Glen,
I have had great success with the *right* 10GbE nic and NFS. The
important things to consider are:
How much bandwidth will your backend storage provide? 2 x 4Gb FC? I'm
guessing best case is ~600 MB/s but likely less.
What access patterns do the typical apps have?
All nodes read from a
On Thu, 2008-09-25 at 09:40 -0400, Glen Beane wrote:
I am considering adding a small parallel file system (~5-10TB) to my small
cluster (~32 2x dual core Opteron nodes) that is used mostly by a handful of
regular users. Currently the only storage accessible to all nodes is home
directory space
Hello Greg,
On Thursday, 25 September 2008, you wrote:
Glen,
I have had great success with the *right* 10GbE nic and NFS. The important
things to consider are:
I have to say my experience was different.
How much bandwidth will your backend storage provide? 2 x 4Gb FC? I'm guessing
On Thu, Sep 25, 2008 at 10:19:26AM -0400, Joe Landman wrote:
BLAST uses mmap'ed IO. This has some interesting ... interactions ...
with parallel file systems.
The PathScale compilers use mmap on their temporary files. This led to
some interesting bugs being reported... fortunately, we were
Greg Lindahl wrote:
On Thu, Sep 25, 2008 at 10:19:26AM -0400, Joe Landman wrote:
BLAST uses mmap'ed IO. This has some interesting ... interactions ...
with parallel file systems.
The PathScale compilers use mmap on their temporary files. This led to
some interesting bugs being reported...
On Wed, 24 Sep 2008, Donald Becker wrote:
xmlsysd is -- I think -- very nicely hierarchically organized. It
achieves adequate efficiency for many uses a different way -- it is only
called on (client side) demand, so the network isn't cluttered with
unwanted or unneeded casts (uni, multi,
Joe Landman [EMAIL PROTECTED] wrote
Glen Beane wrote:
I am considering adding a small parallel file system (~5-10TB) to my small
cluster (~32 2x dual core Opteron nodes) that is used mostly by a
handful of
regular users. Currently the only storage accessible to all nodes
is home
directory
On Thu, Sep 25, 2008 at 02:53:14PM -0400, Robert G. Brown wrote:
The liveness issue is most definitely a problem with xmlsysd/wulfstat,
because frankly TCP sucks for this specific purpose. I'd love to have
what amounts to ping built into the UI, but it is a restricted socket
command and I
On Wed, 24 Sep 2008, Greg Lindahl wrote:
On Wed, Sep 24, 2008 at 01:35:10PM -0400, Robert G. Brown wrote:
So the tradeoff is really a familiar one. Code/Data efficiency vs
Code/Data readability and robustness.
The depressing part about this is that XML proponents are unusually
blind to the
On Thu, 25 Sep 2008, Ellis Wilson wrote:
Very neat link.
This is just a thought, but with the advent (or recent popularity) of
green buildings, could one use the same technique to keep large stores
of computers cool? That is, could one run a series of pipes deep
(100-500ft) into the ground,
On Thu, Sep 25, 2008 at 03:20:15PM -0400, Robert G. Brown wrote:
The fundamental problem is (as Don said as well) that as far as I know
there ARE NO really good solutions to the problem of the representation,
encapsulation, and transmission of hierarchical data structures in a
portable and
On Thu, 25 Sep 2008, Greg Lindahl wrote:
On Thu, Sep 25, 2008 at 03:20:15PM -0400, Robert G. Brown wrote:
The fundamental problem is (as Don said as well) that as far as I know
there ARE NO really good solutions to the problem of the representation,
encapsulation, and transmission of
On Thu, Sep 25, 2008 at 06:34:50PM -0400, Robert G. Brown wrote:
So yeah, XML isn't a magic bullet. I think half of the anger people
seem to feel for it is because they think it somehow should be.
I'm only upset by XML boosters who are so positive about it. You fall
into this category; when I
On Thu, 2008-09-25 at 15:20 -0400, Robert G. Brown wrote:
...XML...The fundamental problem is (as Don said as well) that as far as I
know
there ARE NO really good solutions to the problem of the representation,
encapsulation, and transmission of hierarchical data structures in a
portable and
On Thu, Sep 25, 2008 at 02:08:23PM -0400, Joe Landman wrote:
It looks like people use mmap files to explicitly avoid seeks,
replacing semantics of file IO with memory access semantics.
Well, it explicitly avoids having to call I/O functions all the time
as you skip around a file. It also
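The seek-avoidance point can be sketched in a few lines of Python: with plain file IO every jump around the file is a seek()/read() pair, while a mapped file is just sliced. The filename and contents here are made up for illustration, standing in for something like a BLAST database:

```python
import mmap
import os
import tempfile

# Write a small test file to stand in for a large sequence database.
path = os.path.join(tempfile.mkdtemp(), "db.bin")
with open(path, "wb") as f:
    f.write(b"ACGT" * 1024)  # 4 KiB of fake sequence data

# Traditional file IO: each jump around the file is an explicit
# seek() plus read() call pair into the kernel.
with open(path, "rb") as f:
    f.seek(2048)
    chunk_io = f.read(4)

# mmap'ed IO: the file is mapped into the address space once, and
# "skipping around" becomes plain slicing -- the kernel pages data
# in on demand, with no per-access read() call.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        chunk_mm = m[2048:2052]

assert chunk_io == chunk_mm == b"ACGT"
```

It is exactly this page-on-demand behavior that gives mmap its "interesting interactions" with parallel file systems: the file system sees scattered page faults rather than a predictable stream of read() calls.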
Tod Hagan wrote:
On Thu, 2008-09-25 at 15:20 -0400, Robert G. Brown wrote:
...XML...The fundamental problem is (as Don said as well) that as far as I know
there ARE NO really good solutions to the problem of the representation,
encapsulation, and transmission of hierarchical data structures in
On Thu, Sep 25, 2008 at 07:11:14PM -0400, Joe Landman wrote:
In YAML's
case, it suffers from the same problem with Python (yeah, I am gonna get
some nasty dirty emails now). Structure by indentation is IMO *evil*.
Hey, just recognize that it's a religious thing, and rest easy. When's
the
On Thu, 25 Sep 2008, Greg Lindahl wrote:
On Thu, Sep 25, 2008 at 06:34:50PM -0400, Robert G. Brown wrote:
So yeah, XML isn't a magic bullet. I think half of the anger people
seem to feel for it is because they think it somehow should be.
I'm only upset by XML boosters who are so positive
On Thu, 25 Sep 2008, Joe Landman wrote:
dirty emails now). Structure by indentation is IMO *evil*. I have heard that
Ah-up.
GvR actually agrees with this, though that is 3rd order hearsay.
JSON is a little more intelligent. Easier to parse.
I'll look at it.
I guess, as a person who
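As a sketch of why JSON is the easier parse for the kind of hierarchical monitoring record being discussed: serialization and parsing are each a single call, with no DTD, schema, or hand-written SAX/DOM walker. The field names below are invented for illustration and are not xmlsysd's actual schema:

```python
import json

# A hypothetical per-node monitoring record -- nested structure of
# the sort one might otherwise ship as XML.
node = {
    "hostname": "node01",
    "load": [0.12, 0.08, 0.03],
    "memory": {"total_kb": 4194304, "free_kb": 1123456},
}

# One call each way; the nesting survives the round trip intact.
wire = json.dumps(node)
parsed = json.loads(wire)

assert parsed == node
assert parsed["memory"]["free_kb"] == 1123456
```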
On Thu, Sep 25, 2008 at 4:53 PM, Tod Hagan [EMAIL PROTECTED] wrote:
On Thu, 2008-09-25 at 15:20 -0400, Robert G. Brown wrote:
...XML...The fundamental problem is (as Don said as well) that as far as
I know
there ARE NO really good solutions to the problem of the representation,