Taking this thread off on another tangent here, though: using biofuels might
be good for now, but it is actually creating lots of problems. The end-all
solution would be to use hydrogen as the fuel source. Put water in the car,
it gets broken down through electrolysis, and the water that comes out as exhaust is
On Thu, 26 Jun 2008, Jon Aquilina wrote:
Taking this thread off on another tangent here, though: using biofuels might
be good for now, but it is actually creating lots of problems. The end-all
solution would be to use hydrogen as the fuel source. Put water in the car,
it gets broken down through
At 01:23 25.06.2008, Chris Samuel wrote:
IMHO, the MPI library should virtualize these resources
and relieve the end-user/application programmer
of the burden.
IMHO the resource manager (Torque, SGE, LSF, etc.) should
be setting up cpusets for the jobs based on what the
scheduler has told it to use
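For illustration, pinning a process to a core on Linux looks roughly like this; a minimal sketch, assuming per-process sched_setaffinity() rather than the job-wide cpusets the batch systems above actually create, and with core 0 as an arbitrary choice:

    /* Hedged sketch: pin the calling process to CPU core 0 on Linux.
     * Batch systems achieve the same placement effect with cpusets
     * applied to every task of a job. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                /* core 0: arbitrary example */

        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to core 0\n");
        return 0;
    }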
At 18:34 25.06.2008, Mikhail Kuzminsky wrote:
Let me assume the following situation: I have an
OpenMP-parallelized application whose number of
threads equals the number of CPU cores per server.
And let me assume that this application does not
use too much virtual memory, so
all the
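For illustration, the setup described (one OpenMP thread per CPU core) looks roughly like this; a minimal sketch, assuming the core count reported by omp_get_num_procs() is what you want to match:

    /* Minimal OpenMP sketch: run as many threads as the machine reports cores.
     * Mirrors the "threads equal to cores per server" setup quoted above. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        omp_set_num_threads(omp_get_num_procs());   /* one thread per core */

        #pragma omp parallel
        {
            printf("thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }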
Peter St. John wrote:
Alcides,
I think a short answer is: get a switch, plug all the boxes (through
the Ethernet ports on the motherboards) into the switch, install Ubuntu,
and use OpenMP.
Longer answers will be forthcoming, but I bet they will start with
questions about the specific
[EMAIL PROTECTED] wrote:
I respectfully request that you take conversations about washing machines and
other non-Beowulf-related topics off to some other mailing list. I have
plenty of email to delete without having the load increased by irrelevant
discussions on this one.
Many thanks,
Well, I was really thinking that OpenMP might be the easiest, the fastest from
setup to application running for someone new (which would include me; this
is hypothetical for me, but I think for myself I'll use MPI, not OpenMP,
because I want to micromanage the message passing), but I'm no expert by any
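For anyone new to this, the "micromanaged" message passing being referred to is just explicit MPI point-to-point calls; a minimal sketch (the value and the tag are arbitrary, not anything from the thread):

    /* Minimal MPI sketch: rank 0 sends one integer to rank 1.
     * Run with at least two ranks, e.g. mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* destination rank, tag, and communicator are all explicit */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }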
In message from Håkon Bugge [EMAIL PROTECTED] (Thu, 26 Jun 2008
11:16:17 +0200):
Numastat statistics before the Gaussian-03 run (OpenMP, 8 threads, 8
cores;
requires 512 MB of shared memory plus something more, so it may fit
in the memory of any node - I have 8 GB per node, 6- GB free in node0 and
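For illustration, the per-node placement that numastat reports can also be requested explicitly; a hedged sketch using libnuma (the node number and the 512 MB size merely echo the figures quoted above, and this is not part of the Gaussian-03 run itself):

    /* Hedged sketch: place a 512 MB allocation on NUMA node 0 with libnuma.
     * Link with -lnuma; pages are charged to the node once they are touched. */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        size_t size = 512UL * 1024 * 1024;   /* 512 MB, as in the quoted run */
        void *buf;

        if (numa_available() < 0) {
            fprintf(stderr, "NUMA not available on this system\n");
            return 1;
        }

        buf = numa_alloc_onnode(size, 0);    /* ask for node 0 */
        if (buf == NULL) {
            perror("numa_alloc_onnode");
            return 1;
        }

        /* ... touch and use buf; numastat would then show it on node 0 ... */
        numa_free(buf, size);
        return 0;
    }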
On Thu, Jun 26, 2008 at 03:30:17AM -0400, Robert G. Brown wrote:
Stored sunlight is in finite supply in fossil fuels. Plants grow slowly
and require more than JUST sunlight to produce energy, so a fair bit of
energy yield has to be turned right back into fertilizer, fuel for
tractors or
Some of us are latency bound and can't handle the extra bandwidth, but some
of us are compute intensive and don't mind :-)
Peter
On 6/26/08, Prentice Bisbal [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
I respectfully request that you take conversations about washing machines
and other
Geoff,
Oops! I totally misunderstood it. So it's strictly shared-memory, and
requires something like MPI for crossing nodes. Gotcha. Big mistake, thanks.
Peter
On 6/26/08, Geoff Jacobs [EMAIL PROTECTED] wrote:
Peter St. John wrote:
Well I was really thinking that OpenMP might be easiest, the
Peter St. John wrote:
Well, I was really thinking that OpenMP might be the easiest, the fastest
from setup to application running for someone new (which would include
me; this is hypothetical for me, but I think for myself I'll use MPI, not
OpenMP, because I want to micromanage the message passing)
Peter St. John wrote:
Geoff,
Oops! I totally misunderstood it. So it's strictly shared-memory, and
requires something like MPI for crossing nodes. Gotcha. Big mistake, thanks.
Peter
Shared memory only, yes. Many, many people skip OpenMP completely and go
pure MPI. From a coding standpoint
I recall discussion of the hybrid approach, which I think most of the list
doesn't much like, but which interested me on account of my application. I
hadn't realized that going hybrid was required to use OpenMP across multiple
nodes. So yeah, I'll just go with MPI for starters. When I start :-)
Peter
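For reference, the hybrid approach mentioned above is just MPI between nodes with OpenMP threads inside each rank; a minimal, hedged sketch (the FUNNELED threading level is an illustrative choice, not something prescribed in the thread):

    /* Hedged sketch of hybrid MPI+OpenMP: typically one MPI rank per node,
     * OpenMP threads within it. Compile with an MPI wrapper plus -fopenmp. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* FUNNELED: only the master thread makes MPI calls in this sketch */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }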
http://www.clustervision.com/pr_top500_uk.php
Thanks
Andrew Holway
ClusterVision
andrew holway wrote:
http://www.clustervision.com/pr_top500_uk.php
cool ... congratulations to ClusterVision!
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: [EMAIL PROTECTED]
web : http://www.scalableinformatics.com
http://jackrabbit.scalableinformatics.com
I would suggest a good email client that can handle threads well, such
as Gmail. These devils will never learn. Domestic appliances are
indeed deeply ingrained into their souls.
On Thu, Jun 26, 2008 at 3:03 PM, Prentice Bisbal [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
I respectfully
Realize that, here in the Colonies, we've entire subpopulations whose
love for domestic appliances is so engrained, they enshrine them in
front of their houses...
No, I'm not kidding, and can take you on a tour of East Texas as proof...
gerry
andrew holway wrote:
I would suggest a good
I know where this is going, and right away I'm going to trump you with
this picture of a trailer-park mansion:
http://bp1.blogger.com/_CCeVPrmu0G8/R8g0BnShRiI/AeU/M_ZJvm985yA/s1600-h/redneckmansion2.BMP
I'm very sorry, Gregg; I might suggest an [OT] filter on your mail :)
Ta
Andy
On
...and we're back off-topic!
--
Prentice
Gerry Creager wrote:
Realize that, here in the Colonies, we've entire subpopulations whose
love for domestic appliances is so engrained, they enshrine them in
front of their houses...
No, I'm not kidding, and can take you on a tour of East Texas as
I didn't start it, but I've participated in off-topic discussions here
for years.
gc
Prentice Bisbal wrote:
...and we're back off-topic!
--
Prentice
Gerry Creager wrote:
Realize that, here in the Colonies, we've entire subpopulations whose
love for domestic appliances is so engrained, they
Prentice Bisbal wrote:
...and we're back off-topic!
Last I heard, they make everything bigger in Texas ... Just look at TACC
(back on topic ... yes!)
Not only is my cluster bigger than your cluster, but my cluster is
WAAAY bigger than your cluster.
--
Prentice
Gerry Creager
For those who don't read HPCwire and are interested in GPU computing,
this is an interesting read:
http://www.hpcwire.com/topic/processors/GPGPUs_Make_Headway_in_Bioscience.html
Cheers,
Bernard
Yeah cool, http://en.wikipedia.org/wiki/TACC shows over 500 TeraFLOPS peak.
So when will we get... I don't know what it's called after tera. PetaFLOPS?
When do we get a PetaFLOPS?
Peter (1.00 PeterFLOPS)
On 6/26/08, Joe Landman [EMAIL PROTECTED] wrote:
Prentice Bisbal wrote:
...and we're back
On Thu, Jun 26, 2008 at 09:14:50PM +0100, andrew holway wrote:
I know where this is going, and right away I'm going to trump you with
this picture of a trailer-park mansion:
http://bp1.blogger.com/_CCeVPrmu0G8/R8g0BnShRiI/AeU/M_ZJvm985yA/s1600-h/redneckmansion2.BMP
OK, so the container
Patrick Geoffray wrote:
There are cases where adaptive routing will show a benefit, and this is why
we see the IB vendors add adaptive routing support as well. But in general,
the average effective bandwidth is much, much higher than the 40% you claim.
Have a look at the slides
On Thursday 26 June 2008 14:27:02 Peter St. John wrote:
Yeah cool, http://en.wikipedia.org/wiki/TACC shows over 500 TeraFLOPS
peak. So when will we get... I don't know what it's called after tera.
PetaFLOPS? When do we get a PetaFLOPS?
Mmmmh, last week? :)
On Thu, 26 Jun 2008, Joe Landman wrote:
Not only is my cluster bigger than your cluster, but my cluster is
WAAAY bigger than your cluster.
TACC's PDU field is bigger than my cluster.
-- Matt
It's not what I know that counts.
It's what I can remember in time to use.