http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=fpid=pf=1
Inside Tsubame - the Nvidia GPU supercomputer
Tokyo Tech University's Tsubame supercomputer attained 29th ranking in the
new Top 500, thanks in part to hundreds of Nvidia Tesla graphics cards.
On Thu, 11 Dec 2008, David Mathog wrote:
install usb-interface /sbin/modprobe uhci_hcd; /bin/true
install ide-controller /sbin/modprobe via82cxxx; /sbin/modprobe ide_generic; /bin/true
I don't know why you need these. On all the distributions that I've
worked with recently such issues are
Florent Calvayrac-Castaing wrote:
[...]
Interesting.
I understand why, when I submitted a joint exploratory project
about GPU computing two years ago with a Japanese
colleague we were ranked first in Japan and last in France ; the
idea seems more popular in Japan if they can fork
- Florent Calvayrac-Castaing florent.calvay...@univ-lemans.fr wrote:
By the way, has anyone on the list any idea on
the prospects of Apple's OpenCL ?
I think we need something that is hardware independent.
If OpenCL can deliver that (and Apple obviously believe
it can otherwise they'd
On Dec 12, 2008, at 8:56 AM, Eugen Leitl wrote:
http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=fpid=pf=1
Inside Tsubame - the Nvidia GPU supercomputer
Tokyo Tech University's Tsubame supercomputer attained 29th ranking in the new Top 500,
Very interesting, but perhaps a bit of overkill. How many TFlops/Watt does that work out to? :-(
Cheers,
-Alan
-Original Message-
From: beowulf-boun...@beowulf.org on behalf of Eugen Leitl
Sent: Fri 12/12/2008 08:56
To: i...@postbiota.org; Beowulf@beowulf.org
Subject: [Beowulf]
Greg Lindahl wrote:
On Fri, Dec 12, 2008 at 11:04:47AM +1100, Chris Samuel wrote:
Hmm, I was thinking that until I read this blog post by
one of the kernel filesystem developers (Val Henson from
Intel) who had some (possibly Apple specific) concerns
about data corruption reliability and
Greg Lindahl wrote:
I was recently surprised to learn that SSD prices are down in the
$2-$3 per GB range. I did a survey of one brand (OCZ) at NexTag
and it was:
256 GB = $700
128 GB = $300
64 GB = $180
32 GB = $70
Also, Micron is saying that they're going to get into the
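A quick check of the arithmetic behind the $2-$3/GB claim (a minimal sketch; the only inputs are the NexTag prices quoted above):

```python
# Per-gigabyte cost for the quoted OCZ SSD prices (GB -> USD).
prices = {256: 700, 128: 300, 64: 180, 32: 70}

for gb, usd in sorted(prices.items(), reverse=True):
    print(f"{gb:3d} GB: ${usd}  ->  ${usd / gb:.2f}/GB")
```

All four sizes land between $2.19/GB and $2.81/GB, consistent with the quoted range.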
Jeff Layton wrote:
Greg Lindahl wrote:
I was recently surprised to learn that SSD prices are down in the
$2-$3 per GB range. I did a survey of one brand (OCZ) at NexTag
and it was:
256 GB = $700
128 GB = $300
64 GB = $180
32 GB = $70
Also, Micron is saying that they're
2008/12/12 Loic Tortay tor...@cc.in2p3.fr
If I'm not mistaken, the Tsubame cluster was initially using
Clearspeed accelerators (in Sun X4600 fat nodes).
Therefore, they probably have appropriate programs that need little
adaptation (or less than many) to work on the GPUs.
I'm no
John Hearns wrote:
2008/12/12 Loic Tortay tor...@cc.in2p3.fr
If I'm not mistaken, the Tsubame cluster was initially using
Clearspeed accelerators (in Sun X4600 fat nodes).
Therefore, they probably have appropriate programs that need little
Geoff Jacobs wrote:
Jeff Layton wrote:
Remember that OCZ does not equal Fusion-IO :) There are many
factors that go into an SSD that determine performance. So the
performance of OCZ is not nearly that of Fusion-IO's product.
For example, I've been tracking some performance testing of a
wide
Peter Jakobi wrote:
On Fri, Dec 12, 2008 at 08:17:19AM -0600, Geoff Jacobs wrote:
Rehi,
Reliability is another question and I posted a quick response to
this list in a different email.
This being my big concern with flash.
related is this topic on SSD / flashes:
what's the
Hello 'wulfers!
Any news on GPGPU stuff from ATI?
Best,
Alcides
___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
I am in New Delhi, India. However, I would prefer to put the cluster together
myself, because
1) I am a good python programmer and like programming and playing with
computers
2) I will be using the cluster for animation (art + computers) and may have
to bend it and tinker with it...therefore it
Thanks, seems like a good website. Actually it is my mother who is the
chemist.
On Thu, Dec 11, 2008 at 3:03 PM, John Hearns hear...@googlemail.com wrote:
2008/12/10 Dr Cool Santa drcoolsa...@gmail.com
Currently in the lab we use Schrodinger and we are looking into NWchem.
We'd be
Greg Lindahl wrote:
I was recently surprised to learn that SSD prices are down in the
$2-$3 per GB range. I did a survey of one brand (OCZ) at NexTag
and it was:
256 GB = $700
128 GB = $300
64 GB = $180
32 GB = $70
Alas, these drives have lousy random write performance. As in 4
On Fri, Dec 12, 2008 at 08:17:19AM -0600, Geoff Jacobs wrote:
Rehi,
Reliability is another question and I posted a quick response to
this list in a different email.
This being my big concern with flash.
related is this topic on SSD / flashes:
what's the life time when changing the same
From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org]
On Behalf Of arjuna
Sent: Thursday, December 11, 2008 2:05 AM
To: beowulf@beowulf.org
Subject: Re: [Beowulf] Newbie Question: Racks versus boxes and good
2008/12/11 arjuna brahmafor...@gmail.com
What is 1u?
Easy question! A "U" is short for a rack unit. Rack-mounted equipment
always comes in multiples of a vertical height unit, which is 1.75 inches. I
gather this is actually an old Russian unit of measurement (you can check on
Wikipedia). So when
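The rack-unit arithmetic is easy to sketch (1U = 1.75 inches, the figure John quotes; the 42U full-height rack is just a common example, not a number from this thread):

```python
# Rack-unit (U) arithmetic: 1U = 1.75 inches of vertical height.
U_INCHES = 1.75
MM_PER_INCH = 25.4

def rack_height_inches(units: int) -> float:
    """Vertical height, in inches, occupied by `units` rack units."""
    return units * U_INCHES

# A common full-height rack is 42U:
print(rack_height_inches(42))                # 73.5 inches
print(rack_height_inches(42) * MM_PER_INCH)  # about 1867 mm (~1.87 m)
```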
Reliability should be fine in laptops, though I'd be less
keen to deploy a rack full of them - they're a lot more
sensitive to electrical noise than traditional HDDs when both
reading and writing, so their reliability in these situations
depends on how good the ADC and DAC converters are in
I was down in our server room with the ICE this afternoon. It's worth
describing how they are put together for the purposes of this thread.
Each rack has four blade chassis in it. These are called Independent Rack
Units in SGI speak.
An IRU has sixteen compute blades, plus the mains PSUs and
When I reboot the nodes in my cluster, the openibd script hangs when
shutting down. If I wait long enough (5-10 minutes, probably closer to
10), it eventually completes, or at least fails so the system can
continue shutting down.
If I do 'service openibd stop' before doing the reboot, the openibd
On Fri, 2008-12-12 at 17:35 +, John Hearns wrote:
At the rear of each IRU there is a bank of big fans. Each IRU couples
up to a 1/4 sized rear rack door, using a foam gasket. Each of these
1/4 sized doors is a swing out heat exchanger.
That's the bit of information I was missing. I'd
Prentice Bisbal wrote:
Any ideas why this script would behave differently during a shutdown?
Hi Prentice
Sounds like a race condition. Do you have an NFS mount over IPoIB?
Joe
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email:
2008/12/12 Huw Lynes lyn...@cardiff.ac.uk
That's the bit of information I was missing. I'd assumed the entire door
swung out as one, losing all cooling when you work on the rack. The
stable-door approach makes more sense.
I still like our APC contained hot-aisle system though.
Horses for
In message from Dr Cool Santa drcoolsa...@gmail.com (Wed, 10 Dec
2008 19:21:43 +0530):
Currently in the lab we use Schrodinger and we are looking into NWchem.
We'd be interested in knowing about software that a chemist could use
that makes use of a parallel supercomputer. And better if it is
From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org]
On Behalf Of John Hearns
2008/12/11 arjuna brahmafor...@gmail.com
What is 1u?
Easy question! A "U" is short for a rack unit. Rack-mounted equipment
always comes in multiples of a
On Thu, Dec 11, 2008 at 10:09:10PM -0800, Michael Huntingdon wrote:
Today it really is not just about cooling a group of 1u systems or a single
hptc cabinet. Today maybe you know you need two densely populated cabinets.
Like most people, I can't use very dense systems, due to the
power/cubic
Joe Landman wrote:
Prentice Bisbal wrote:
Any ideas why this script would behave differently during a shutdown?
Hi Prentice
Sounds like a race condition. Do you have an NFS mount over IPoIB?
Joe
I do have NFS mounts, but *NOT* through IPoIB. At least they *shouldn't* be.
I
Bogdan Costescu wrote:
On Thu, 11 Dec 2008, David Mathog wrote:
install usb-interface /sbin/modprobe uhci_hcd; /bin/true
install ide-controller /sbin/modprobe via82cxxx; /sbin/modprobe ide_generic; /bin/true
I don't know why you need these. On all the distributions that I've
worked
On Thu, 11 Dec 2008, arjuna wrote:
I am in New Delhi, India. However, I would prefer to put the cluster together
myself, because
Ya, that's where I lived for seven years growing up.
1) I am a good python programmer and like programming and playing with
computers
2) I will be using the cluster
23.55 MFlops/W according to Green500 estimates (#488 on their list)
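That also answers Alan's TFlop/Watt question upthread: inverting the Green500 figure gives watts per sustained TFlops (a sketch using only the 23.55 MFlops/W number above; no other Tsubame figures are assumed):

```python
# Invert the Green500 efficiency figure into watts per sustained TFlops.
mflops_per_watt = 23.55

flops_per_watt = mflops_per_watt * 1e6
watts_per_tflops = 1e12 / flops_per_watt
print(f"~{watts_per_tflops / 1e3:.1f} kW per sustained TFlops")  # ~42.5 kW
```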
2008/12/12 Vincent Diepeveen d...@xs4all.nl
On Dec 12, 2008, at 8:56 AM, Eugen Leitl wrote:
http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=fpid=pf=1
Inside Tsubame - the
On Thu, Dec 11, 2008 at 12:01:23AM +0100, Vincent Diepeveen wrote:
What is most interesting from a supercomputer viewpoint is the comments I
got from some scientists when speaking about climate calculations.
At a presentation at SARA on 11 September 2008, with some bigwigs there
Hi,
I need to add a fileserver to my new ROCKS 5.1 cluster, as
the /export partition of the first disk on the frontend
might not be enough.
First question: is there any documentation
on how Rocks does this?
Second: is there anyone out there with experience
with a Dell MD3000(i) and Rocks? I will probably
buy
Hello,
my first reply missed the list by mistake so I will repeat a few points that
I mentioned there.
What is 1u?
What is a blade system?
Compute clusters are often built from rack-server hardware, meaning boxes
different from desktop boxes, with chipsets that have features not necessary
for
John Hearns wrote:
2008/12/12 Huw Lynes lyn...@cardiff.ac.uk
That's the bit of information I was missing. I'd assumed the
entire door
swung out as one, losing all cooling when you work on the rack. The
stable-door approach makes more sense.