Re: [Beowulf] Re: hobbyists still OT

2008-06-26 Thread Jon Aquilina
Taking this thread off on another tangent here, though. Using biofuels might be good for now but is actually creating lots of problems. The end-all solution would be to use hydrogen as the fuel source: put water in the car, it gets broken down through electrolysis, and the water which is the exhaust is

Re: [Beowulf] Re: hobbyists still OT

2008-06-26 Thread Robert G. Brown
On Thu, 26 Jun 2008, Jon Aquilina wrote: Taking this thread off on another tangent here, though. Using biofuels might be good for now but is actually creating lots of problems. The end-all solution would be to use hydrogen as the fuel source: put water in the car, it gets broken down through
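
For context on the thermodynamics at issue (a standard textbook balance, not taken from the thread): electrolysis and combustion are exact inverses, so a water-to-hydrogen-to-water cycle returns at best the energy put into it. Hydrogen made this way is an energy carrier, not an energy source.

    \begin{align*}
      2\,\mathrm{H_2O} + \text{energy} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2} && \text{(electrolysis)}\\
      2\,\mathrm{H_2} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{H_2O} + \text{energy} && \text{(combustion)}
    \end{align*}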

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-26 Thread Håkon Bugge
At 01:23 25.06.2008, Chris Samuel wrote: IMHO, the MPI should virtualize these resources and relieve the end-user/application programmer from the burden. IMHO the resource manager (Torque, SGE, LSF, etc) should be setting up cpusets for the jobs based on what the scheduler has told it to use
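
For readers following the numactl/taskset subthread: whichever layer does the pinning (the user, the MPI, or the resource manager), on Linux it ultimately comes down to sched_setaffinity(2). A minimal sketch in C, assuming Linux and glibc; the choice of core 0 is purely illustrative:

    /* Pin the calling process to CPU core 0, as "taskset -c 0" would. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                /* allow core 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %d now pinned to core 0\n", (int)getpid());
        return 0;
    }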

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-26 Thread Håkon Bugge
At 18:34 25.06.2008, Mikhail Kuzminsky wrote: Let me assume now the following situation. I have an OpenMP-parallelized application which has a number of processes equal to the number of CPU cores per server. And let me assume that this application does not use too much virtual memory, so all the
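
One way such an application keeps its memory local without numactl at all is first-touch placement: on Linux, a page is allocated on the NUMA node of the thread that first writes to it. A hedged sketch, assuming an OpenMP-capable compiler (e.g. gcc -fopenmp); the array size is illustrative:

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1L << 24)   /* 16M doubles = 128 MB, illustrative */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        if (!a) return 1;

        /* First touch in parallel: each thread faults in its own slice,
         * so the pages land on that thread's local NUMA node. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = 0.0;

        /* Later passes with the same static schedule hit local memory. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] += 1.0;

        printf("%d threads, a[0] = %.1f\n", omp_get_max_threads(), a[0]);
        free(a);
        return 0;
    }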

Re: [Beowulf] A simple cluster

2008-06-26 Thread Geoff Jacobs
Peter St. John wrote: Alcides, I think a short answer is: get a switch, plug all the boxes (through the Ethernet ports on the motherboards) into the switch, install Ubuntu, and use OpenMP. Longer answers will be forthcoming, but I bet they will start with questions about the specific
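
(As the rest of this thread establishes, it is MPI, not OpenMP, that crosses nodes.) For anyone wiring up such a cluster, a minimal MPI smoke test; a sketch assuming some MPI implementation (e.g. Open MPI) is installed, with a hypothetical hostfile named "hosts". Compile with "mpicc hello.c -o hello" and run with "mpirun -np 4 -hostfile hosts ./hello":

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?          */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us?    */
        MPI_Get_processor_name(name, &len);     /* which box am I on? */

        printf("rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }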

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Prentice Bisbal
[EMAIL PROTECTED] wrote: I respectfully request that you take conversations about washing machines and other non-Beowulf-related topics off to some other mailing list. I have plenty of email to delete without having the load increased by irrelevant discussions on this one. Many thanks,

Re: [Beowulf] A simple cluster

2008-06-26 Thread Peter St. John
Well, I was really thinking that OpenMP might be easiest, the fastest from setup to application running for someone new (which would include me; this is hypothetical for me, but I think for myself I'll use MPI, not OpenMP, because I want to micromanage the message passing), but I'm no expert by any

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-26 Thread Mikhail Kuzminsky
In message from Håkon Bugge [EMAIL PROTECTED] (Thu, 26 Jun 2008 11:16:17 +0200): Numastat statistics before a Gaussian-03 run (OpenMP, 8 threads, 8 cores; requires 512 MB of shared memory plus something more, so it may fit in the memory of any node - I have 8 GB per node, 6- GB free in node0 and
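
For experiments like this one, placement can also be forced explicitly through libnuma, the library underneath numactl. A sketch, assuming linking with -lnuma; the node number and the 512 MB size (echoing the post) are illustrative:

    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this kernel\n");
            return 1;
        }

        size_t sz = 512UL << 20;               /* 512 MB */
        void *buf = numa_alloc_onnode(sz, 0);  /* place it on node0 */
        if (!buf) return 1;

        printf("allocated %zu bytes on node0 (nodes 0..%d present)\n",
               sz, numa_max_node());
        numa_free(buf, sz);
        return 0;
    }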

Re: [Beowulf] Re: hobbyists still OT

2008-06-26 Thread Karen Shaeffer
On Thu, Jun 26, 2008 at 03:30:17AM -0400, Robert G. Brown wrote: Stored sunlight is in finite supply in fossil fuels. Plants grow slowly and require more than JUST sunlight to produce energy, so a fair bit of the energy yield has to be turned right back into fertilizer, fuel for tractors or

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Peter St. John
Some of us are latency-bound and can't handle the extra bandwidth, but some of us are compute-intensive and don't mind :-) Peter On 6/26/08, Prentice Bisbal [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: I respectfully request that you take conversations about washing machines and other

Re: [Beowulf] A simple cluster

2008-06-26 Thread Peter St. John
Geoff, Oops! I totally misunderstood it. So it's strictly shared-memory, and requires something like MPI for crossing nodes. Gotcha. Big mistake, thanks. Peter On 6/26/08, Geoff Jacobs [EMAIL PROTECTED] wrote: Peter St. John wrote: Well I was really thinking that OpenMP might be easiest, the

Re: [Beowulf] A simple cluster

2008-06-26 Thread Geoff Jacobs
Peter St. John wrote: Well, I was really thinking that OpenMP might be easiest, the fastest from setup to application running for someone new (which would include me; this is hypothetical for me, but I think for myself I'll use MPI, not OpenMP, because I want to micromanage the message passing)

Re: [Beowulf] A simple cluster

2008-06-26 Thread Geoff Jacobs
Peter St. John wrote: Geoff, Oops! I totally misunderstood it. So it's strictly shared-memory, and requires something like MPI for crossing nodes. Gotcha. Big mistake, thanks. Peter Shared memory only, yes. Many, many people skip OpenMP completely and go pure MPI. From a coding standpoint
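
To make the shared-memory point concrete: an OpenMP construct like the reduction below spreads work across the cores of a single node and no further; reaching a second node takes MPI. A sketch, assuming gcc -fopenmp; the harmonic sum is just placeholder work:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const long n = 100000000;
        double sum = 0.0;

        /* Threads share "sum" via the reduction clause; all of them
         * live inside one node's address space. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 1; i <= n; i++)
            sum += 1.0 / (double)i;

        printf("H(%ld) ~= %.6f using %d threads on one node\n",
               n, sum, omp_get_max_threads());
        return 0;
    }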

Re: [Beowulf] A simple cluster

2008-06-26 Thread Peter St. John
I recall discussion of the hybrid approach, which I think most of the list doesn't much like, but it interested me on account of my application. I hadn't realized, though, that the hybrid approach was required to use OpenMP on multi-node architectures. So yeah, I'll just go with MPI for starters. When I start :-) Peter
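
For the record, the hybrid style under discussion looks roughly like this: MPI between nodes, OpenMP within each node. A hedged sketch, assuming "mpicc -fopenmp"; MPI_THREAD_FUNNELED suffices because only the master thread calls MPI:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Node-local work: OpenMP threads share this rank's memory. */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0;

        /* Cross-node work: one MPI reduction over the per-rank results. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f\n", global);

        MPI_Finalize();
        return 0;
    }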

[Beowulf] A press release

2008-06-26 Thread andrew holway
http://www.clustervision.com/pr_top500_uk.php Thanks Andrew Holway ClusterVision

Re: [Beowulf] A press release

2008-06-26 Thread Joe Landman
andrew holway wrote: http://www.clustervision.com/pr_top500_uk.php cool ... congratulations to ClusterVision! -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics LLC, email: [EMAIL PROTECTED] web: http://www.scalableinformatics.com http://jackrabbit.scalableinformatics.com

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread andrew holway
I would suggest a good email client that can handle threads well, such as Gmail. These devils will never learn. Domestic appliances are indeed deeply ingrained into their souls. On Thu, Jun 26, 2008 at 3:03 PM, Prentice Bisbal [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: I respectfully

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Gerry Creager
Realize that, here in the Colonies, we've entire subpopulations whose love for domestic appliances is so engrained, they enshrine them in front of their houses... No, I'm not kidding, and can take you on a tour of East Texas as proof... gerry andrew holway wrote: I would suggest a good

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread andrew holway
I know where this is going, and right away I'm going to trump you with this picture of a trailer-park mansion: http://bp1.blogger.com/_CCeVPrmu0G8/R8g0BnShRiI/AeU/M_ZJvm985yA/s1600-h/redneckmansion2.BMP I'm very sorry, Gregg, I might suggest an [OT] filter on your mail :) Ta Andy On

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Prentice Bisbal
...and we're back off-topic! -- Prentice Gerry Creager wrote: Realize that, here in the Colonies, we've entire subpopulations whose love for domestic appliances is so engrained, they enshrine them in front of their houses... No, I'm not kidding, and can take you on a tour of East Texas as

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Gerry Creager
I didn't start it, but I've participated in off-topic discussions here for years. gc Prentice Bisbal wrote: ...and we're back off-topic! -- Prentice Gerry Creager wrote: Realize that, here in the Colonies, we've entire subpopulations whose love for domestic appliances is so engrained, they

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Joe Landman
Prentice Bisbal wrote: ...and we're back off-topic! Last I heard, they make everything bigger in Texas ... Just look at TACC (back on topic ... yess!) Not only is my cluster bigger than your cluster, my cluster is WAY bigger than your cluster. -- Prentice Gerry Creager

[Beowulf] HPCwire: GPGPUs Make Headway in Bioscience

2008-06-26 Thread Bernard Li
For those who don't read HPCwire and are interested in GPU computing, this is an interesting read: http://www.hpcwire.com/topic/processors/GPGPUs_Make_Headway_in_Bioscience.html Cheers, Bernard

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Peter St. John
Yeah, cool, http://en.wikipedia.org/wiki/TACC shows over 500 TeraFLOPS peak. So when will we get... I don't know what it's called after tera. PetaFLOPS? When do we get a PetaFLOPS? Peter (1.00 PeterFLOPS) On 6/26/08, Joe Landman [EMAIL PROTECTED] wrote: Prentice Bisbal wrote: ...and we're back

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Lombard, David N
On Thu, Jun 26, 2008 at 09:14:50PM +0100, andrew holway wrote: I know where this is going and right away I'm going to trump you with this picture of a trailor park mansion:- http://bp1.blogger.com/_CCeVPrmu0G8/R8g0BnShRiI/AeU/M_ZJvm985yA/s1600-h/redneckmansion2.BMP OK, so the container

RE: [Beowulf] Infiniband modular switches

2008-06-26 Thread Gilad Shainer
Patrick Geoffray wrote: There are cases where adaptive routing will show a benefit, and this is why we see the IB vendors add adaptive routing support as well. But in general, the average effective bandwidth is much, much higher than the 40% you claim. Have a look at the slides

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Kilian CAVALOTTI
On Thursday 26 June 2008 14:27:02 Peter St. John wrote: Yeah cool, http://en.wikipedia.org/wiki/TACC shows over 500 TeraFLOPS peak. So when will we get ...I don't know what it's called after tera. PetaFLOPS? when do we get a PetaFLOPS? Mmmmh, last week? :)

Re: [Beowulf] Re: hobbyists

2008-06-26 Thread Matt Lawrence
On Thu, 26 Jun 2008, Joe Landman wrote: Not only is my cluster bigger than your cluster, my cluster is WAY bigger than your cluster. TACC's PDU field is bigger than my cluster. -- Matt It's not what I know that counts. It's what I can remember in time to use.