Re: [Beowulf] HDTV video file sizes

2007-05-31 Thread John Hearns
re-CFC) are big Lustre users, from what I can gather. -- John Hearns Senior HPC Engineer Streamline Computing, The Innovation Centre, Warwick Technology Park, Gallows Hill, Warwick CV34 6UW Office: 01926 623130 Mobil

Re: [Beowulf] tftp permission denied

2007-06-03 Thread John Hearns
Craig Tierney wrote: fahad saeed wrote: Now the problem is that when I boot my slave node and command it to boot from the network (using Intel Boot Agent 1.1.07) I get this error: PXE-T00 permission denied PXE-E36 error received from tftp server # tftp localhost # get "blah" I'll
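
For reference, the local test being suggested is a minimal sketch along these lines (pxelinux.0 is an assumed boot-file name; use whatever file the boot agent actually requests):

  tftp localhost
  tftp> get pxelinux.0
  tftp> quit
  # on "Permission denied", check the tftpd root directory and file modes:
  ls -l /tftpboot/pxelinux.0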

Re: [Beowulf] Cluster Diagram of 500 PC

2007-07-08 Thread John Hearns
suggest you contact a range of vendors directly. John Hearns

Re: [Beowulf] power usage, Intel 5160 vs. AMD 2216

2007-07-13 Thread John Hearns
Joe Landman wrote: a very interesting one. I wonder how many people have scrubbing turned on in their cluster, and how many use mcelog to monitor the ECC rate. We do on clusters we ship/build. I specifically run tests to flush out the memory errors. Sadly, memtest86 only gets the "ob
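
For anyone wanting to check their own nodes, a minimal sketch, assuming the kernel EDAC drivers are loaded and mcelog is installed (sysfs paths may differ by kernel version):

  # corrected-error counts per memory controller, via EDAC
  grep . /sys/devices/system/edac/mc/mc*/ce_count
  # drain and decode any pending machine-check events (run as root)
  mcelog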

Re: [Beowulf] Cluster Diagram of 500 PC

2007-07-13 Thread John Hearns
ency. All connected with gigabit ethernet only. http://www.streamline-computing.com/index.php?wcId=76&xwcId=72 -- John Hearns Senior HPC Engineer Streamline Computing, The Innovation Centre, Warwick Technology Park, Gallows Hill, Warwick CV34 6UW Office: 01926

Re: [Beowulf] power usage, Intel 5160 vs. AMD 2216

2007-07-14 Thread John Hearns
Mark Hahn wrote: (you could cobble one together cheaper.. a high quality charger is probably $50-100, good quality battery is around $20, a high quality inverter is $200 or so, and then you'd need some sort of transfer switch. wouldn't it be nice to find a PSU which could simply take a 12V inp

Re: [Beowulf] Nvidia Tesla GPU clusters?

2007-07-18 Thread John Hearns
Robert G. Brown wrote: On Tue, 17 Jul 2007, Jim Lux wrote: http://www.nvidia.com/object/tesla_computing_solutions.html Holy Transputer, Batman! Somehow I feel like I've passed this way before... As before, it looks very cool and promises amazing performance. But in spite of it basically be

Re: [Beowulf] Sidebar: Vista Rant

2007-07-20 Thread John Hearns
n. Then if the PC were used to view medical images by a clinician there would be a degradation of image quality. I'm sure I remember reading such an article. -- John Hearns Senior HPC Engineer Streamline Computing, The Innovation Centre, Warwick Technology Park,

Re: [Beowulf] MPI2007 out - strange pop2 results?

2007-07-20 Thread John Hearns
Gilad Shainer wrote: Hi Kevin, I believe that your company has been using this list for pure marketing wars for a long time, so don't be surprised when someone responds. Quite a lot of companies post to this list. People from Microsoft, Intel, AMD, QLogic/PathScale, Myricom, Scalable Informatics

Re: [Beowulf] Nvidia Tesla GPU clusters?

2007-07-21 Thread John Hearns
Greg Lindahl wrote: http://www.nvidia.com/object/tesla_computing_solutions.html If anyone wants to play with this, I just bought a low-end NVidia 8600GT for only $140, so it's not expensive to dip your toe in the water. It's 1/8 as many cores as the top of the line. Greg Lindahl, you are a ba

Re: [Beowulf] BIOS

2007-08-13 Thread John Hearns
Beat Rubischon wrote: It's probably the safest way to organize some students, give them a keyboard, a monitor and a memory stick containing the flash files... Having been involved in this exercise several times, i.e. updating and subsequently resetting BIOS settings on large clusters, I agree w

Re: [Beowulf] BIOS

2007-08-14 Thread John Hearns
On Mon, 2007-08-13 at 12:45 -0400, Mark Hahn wrote: > > left afterwards - and I'm speaking as someone who has tried capturing > > /dev/nvram settings and pushing them out to the updated nodes, which > > doesn't > > necessarily work. > > how does it fail? I'm guessing the issue is that there ar
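
The capture-and-push approach being described is roughly this, a sketch assuming the nvram module is available and identical BIOS versions on source and target nodes:

  # on a node with known-good settings
  modprobe nvram
  dd if=/dev/nvram of=/tmp/cmos.img
  # on a freshly flashed node -- this is the step that doesn't
  # necessarily work, since many settings live outside this small CMOS area
  dd if=/tmp/cmos.img of=/dev/nvram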

Re: [Beowulf] Open source prime number application

2007-08-15 Thread John Hearns
On Tue, 2007-08-14 at 20:23 +0930, Tim Simon wrote: > Hi > > > I recently built a small cluster/beowulf, out of old pentium II and > III's, installed MPI, and generally felt happy. However, I don't have > any applications to run on it. > > I have learnt some C++, but I don't really want to know ho

Re: [Beowulf] Big storage

2007-08-28 Thread John Hearns
Andrew Piskorski wrote: I believe Garth's whole point is that your assumption above is often NOT true. He also seemed to imply that this is a function of the interaction between the block-level RAID implementation and the file system, as his Panasas file system reputedly fixes this scary, "one s

Re: [Beowulf] Big storage

2007-09-14 Thread John Hearns
On Fri, 2007-09-14 at 08:05 -0500, Bruce Allen wrote: > > UGU isn't what it used to be (but neither am I). I'm having trouble > > finding a man page for fsprobe; can you specify a flavor of Unix? > > http://fuji.web.cern.ch/fuji/fsprobe/ > I just

Re: [Beowulf] Problems with a JS21 - Ah, the networking...

2007-09-29 Thread John Hearns
On Fri, 2007-09-28 at 17:43 -0300, Ivan Paganini wrote: > Hello everybody, > > I am beginning to take care of an IBM's JS21. The cluster consists of > The myrinet connection was working right, but sometimes a user program > just got stuck - one of the processes was sleeping, and all others > were

Re: [Beowulf] best linux distribution

2007-10-08 Thread John Hearns
Barnet Wagman wrote: Does anyone use CentOS on Beowulf nodes? Of course CentOS is really just Red Hat, but many people prefer it for use on servers. We have several sites using Scientific Linux, which is along the same lines as CentOS.

Re: [Beowulf] best linux distribution

2007-10-08 Thread John Hearns
Mark Hahn wrote: up-to-date. from a quick glance at the SL-5.0 readme, the number of customizations is quite small, so I do wonder what the point is. (_not_ meant as a criticism!). SL exists to populate the huge data centres at CERN and Fermilab, and as a consequence many, many HEP groups ha

Re: [Beowulf] Parallel Development Tools

2007-10-17 Thread John Hearns
Peter St. John wrote: **real** programmers somehow get large numbers of thralls to hoist huge boulders into precise positions. s/boulders/19 inch racks/

Re: [Beowulf] Reliable Job Queueing and Notification

2007-10-18 Thread John Hearns
Reuti wrote: Hi, Am 16.10.2007 um 16:08 schrieb Sean Ward: I've started work on a web service which contains several potentially long running processing steps (molecular dynamics), which are perfect to farm out to the fairly large (90 node) Beowulf I have access to. The primary issue is tran

Re: [Beowulf] impressions of Super Micro IPMI management cards?

2007-10-22 Thread John Hearns
On Mon, 2007-10-22 at 09:31 -0400, Chris Dagdigian wrote: > > My needs are pretty minimal -- remote power control, BIOS access and > the ability to trigger a PXE boot off the network. Anything else is > just supplemental. > > Does anyone have any experience/impressions of the "Supermicro >
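
For remote power control and one-shot PXE booting over IPMI, a minimal sketch with ipmitool (the hostname node01-ipmi and the ADMIN credentials are placeholders; use -I lan instead of lanplus for IPMI 1.5 cards):

  ipmitool -I lanplus -H node01-ipmi -U ADMIN -P secret power status
  ipmitool -I lanplus -H node01-ipmi -U ADMIN -P secret chassis bootdev pxe
  ipmitool -I lanplus -H node01-ipmi -U ADMIN -P secret power cycle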

Re: [Beowulf] small-footprint MS WIn "MinWin"

2007-10-22 Thread John Hearns
On Mon, 2007-10-22 at 12:51 -0500, Brian D. Ropers-Huilman wrote: > > Interesting comment for this particular list. While I'm all in favor > of MS "seeing the light," so to speak, your comment on "... everything > else as a VM guest on top of a device-independent layer. At least I > hope we are g

Re: [Beowulf] Old versions of Linux

2007-10-28 Thread John Hearns
On Sun, 2007-10-28 at 22:45 +0530, Sandip Dev wrote: > I am totally new to clustering. It seems most clustering software like > OpenMosix needs a 2.4 kernel to work. Sandip, I have supported Mosix installations in the past; however, it is now unsupported. > Where can I get an older 2.4 kernel based Li

Re: [Beowulf] Old versions of Linux

2007-10-28 Thread John Hearns
On Mon, 2007-10-29 at 06:54 +0530, Sandip Dev wrote: > Thanks everyone. I got it. I will be using MPICH for my cluster. And > my distro would be Ubuntu Gutsy Gibbon. It already has support for > x86-64 architecture and SMP. So now I have to install MPICH. Hope > everything goes well. Sandip, sor

Re: [Beowulf] Building a new cluster - seeking some advice

2007-12-23 Thread John Hearns
On Sat, 2007-12-22 at 10:20 -0500, Robert G. Brown wrote: > > I personally hope they do it, although I do think that we're about to go > through yet another paradigm shift. With flash coming down to around > $10/GB or less wholesale in sizes up to 16 GB, I think we'll start > seeing pure flash-b

Re: [Beowulf] Building a new cluster - seeking some advice

2007-12-23 Thread John Hearns
On Sat, 2007-12-22 at 13:01 -0500, Mark Hahn wrote: > PATA/SATA-interface flash is accelerating, I think. Intel just introduced > a building block for that, and other vendors have had somewhat obscure > products out for a long time. a flash-based "PATA-fob" seems reasonably > secure to me for t

Re: [Beowulf] High Performance SSH/SCP

2008-02-15 Thread John Hearns
On Fri, 2008-02-15 at 16:38 -0500, Mark Kosmowski wrote: > > I don't think there's anything difficult about setting up rsh, > ssh or > kerberos for anyone who know how to read a manual. A newbie > shouldn't be > setting up a cluster in the first pla

Re: [Beowulf] need for an advice on nfs and diskless clients

2008-02-20 Thread John Hearns
On Wed, 2008-02-20 at 11:26 +0100, Maxime Kinet wrote: > Hi, > > > I'm setting up a cluster, made of diskless workstations, booting > through NFS. For that purpose I created a copy of the node's > filesystem in a /tftpboot directory. That is, I have a full filesystem > for each single > node : /t
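
A sketch of the /etc/exports side of that layout, assuming one tree per node (node names are placeholders); a common alternative is a single read-only root shared by all nodes plus small per-node writable areas:

  /tftpboot/node01   node01(rw,no_root_squash,sync)
  /tftpboot/node02   node02(rw,no_root_squash,sync)
  # shared read-only root variant:
  # /tftpboot/root   *.cluster.local(ro,no_root_squash)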

Re: [Beowulf] Opinions of Hyper-threading?

2008-02-28 Thread John Hearns
On Wed, 2008-02-27 at 23:30 -0800, Bill Broadley wrote: > > I don't see any particular reason why memory bandwidth can go through a full > doubling in the near future if there was a market for it, last I checked > nvidia was doing pretty well ;-) > > [1] Sorry to use marketing bandwidth, I've

Re: [Beowulf] Live Implementation for Clusters

2008-03-12 Thread John Hearns
for me. I'm doing a talk on Saturday, and was looking at the Cluster Knoppix site, as I'll probably be asked about how to go about getting a taster of cluster building. http://clusterknoppix.sw.be/ I was a bit surprised how out of date this is. Best of British with the project, and as I s

Re: [Beowulf] Live Implementation for Clusters

2008-03-12 Thread John Hearns
using our gig ethernet connected clusters right now for serious computational chemistry work all over the UK. John Hearns Senior HPC Engineer Streamline Computing

Re: [Beowulf] Configuring mpich in a Pentium Dual Core

2008-04-04 Thread John Hearns
Cally K wrote: Dear beowulf users, I have an Intel Pentium Dual Core machine and I would like to know how to configure mpich on it. I used the SMP option; is there any example program I can run, besides the hello world, that would tell me I am using both processors?

Re: [Beowulf] Big storage

2008-04-09 Thread John Hearns
On Wed, 2008-04-09 at 01:01 -0500, Bruce Allen wrote: > Since stock Solaris cannot boot from ZFS, I'm a bit reluctant to throw > away drives and storage space to host the OS separately on each X4500. I went to a talk on ZFS recently (very impressive). AFAIK OpenSolaris will boot from ZFS, but

Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-05-05 Thread John Hearns
On Fri, 2008-05-02 at 14:05 +0100, Ricardo Reis wrote: > Does anyone know if/when there will be double floating point on those > little toys from nvidia? > > Ricardo, I think CUDA is a great concept, and am starting to work with it at home. I recently went to a talk by David Kirk, as part o

[Beowulf] FreeIPA

2008-05-08 Thread John Hearns
And sorry, that's not free India Pale Ale. This was discussed on my local LUG list today. http://freeipa.org/page/About "FreeIPA is an integrated security information management solution combining Linux (Fedora), Fedora Directory Server, MIT Kerberos, NTP, DNS. It consists of a web interface and

Re: [Beowulf] Do these SGE features exist in Torque?

2008-05-09 Thread John Hearns
On Fri, 2008-05-09 at 14:26 -0400, Prentice Bisbal wrote: > 1. Interactive shells managed by the queuing system > 2. Counting licenses in use (done using a contributed shell script in SGE) > 3. Separation of roles between submit hosts, execution hosts, and > administration hosts > 4. Certificate-based
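
On point 1, a minimal sketch of the command lines involved (queue names omitted; exact behaviour depends on site configuration):

  qrsh       # SGE: interactive shell scheduled onto an execution host
  qlogin     # SGE: interactive login session
  qsub -I    # Torque: the rough equivalent interactive job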

Re: [Beowulf] Re: Purdue Supercomputer

2008-05-12 Thread John Hearns
On Sun, 2008-05-11 at 19:13 -0700, Greg Lindahl wrote: > Last I saw someone doing this, IPMI sharing an ethernet port with the > host led to all kinds of weird ARP problems. Whereas a dedicated port > is much easier to configure. My favorite vendors all offer a > dedicated port... Also port nu

Re: [Beowulf] Re: Purdue Supercomputer

2008-05-12 Thread John Hearns
On Sun, 2008-05-11 at 16:01 -0400, Perry E. Metzger wrote: > Who do you favor for console servers these days? Ditto for > addressable/switchable PDUs? > I hope Joe doesn't mind me answering a question directed at him, but for us if you spec a separate console server it would be a cyclades Alterpa

Re: [Beowulf] TOE on Linux?

2008-05-20 Thread John Hearns
On Mon, 2008-05-19 at 18:42 -0400, Mark Hahn wrote: > > > > 1. Is having 10 GbE and Inifiniband in the same cluster overkill, or at > > least unorthodox? This cluster will be used by a variety of users > > I would say so - if you've got IB, why add another interface? > I'm not suggesting getting

Re: [Beowulf] Re: ECC support on motherboards?

2008-05-20 Thread John Hearns
On Tue, 2008-05-20 at 22:16 -0400, Douglas Eadline wrote: > I am in the Eee PC club as well. I got one last fall > the day they went on sale. This was just before SC07. Me too. I paid over the odds for it in Tottenham Court Road (there was a time earlier in the year when the 4gig models were in sh

Re: [Beowulf] OFED/IB for FC8

2008-06-04 Thread John Hearns
e you need, or might not install exactly the way you want it. And you'll always have the version supplied by the distribution, and won't be able to update if you hit a bug or need a new feature. Remember, we're in the era of open source. That's why you chose to us

Re: [Beowulf] A couple of interesting comments

2008-06-06 Thread John Hearns
On Fri, 2008-06-06 at 10:39 -0500, Gerry Creager wrote: > 1. We specified "No OS" in the purchase so that we could install CentOS > as our base. We got a set of systems with a stub OS, and an EULA for > the diagnostics embedded on the disk. After clicking thru the EULA, it > tells us we have

Re: [Beowulf] A couple of interesting comments

2008-06-06 Thread John Hearns
On Fri, 2008-06-06 at 10:39 -0500, Gerry Creager wrote: > Also, I'm now told that "almost every customer" ordered their cluster > configuration service at several kilobucks per rack. Since the team I'm > working with has some degree of experience in configuring and installing > hardware and

Re: [Beowulf] NVIDIA GPUs, CUDA, MD5, and "hobbyists"

2008-06-19 Thread John Hearns
On Wed, 2008-06-18 at 16:31 -0700, Jon Forrest wrote: > Kilian CAVALOTTI wrote: > I'm glad you mentioned this. I've read through much of the information > on their web site and I still don't understand the usage model for > CUDA. By that I mean, on a desktop machine, are you supposed to have > 2 g

Re: [Beowulf] SuperMicro and lm_sensors

2008-06-19 Thread John Hearns
On Wed, 2008-06-18 at 15:11 -0700, Greg Lindahl wrote: > Speaking of lm_sensors, does anyone have configs for recent SuperMicro > mobos? My SuperMicro support contact doesn't have any idea, and running > sensors-detect leaves me with lots of readings which are Can't help you on the lm_sensors front
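
The IPMI route instead of lm_sensors is roughly this, a sketch assuming ipmitool and the OpenIPMI kernel drivers are available:

  modprobe ipmi_si ipmi_devintf
  ipmitool sensor list   # temperatures, fan speeds, voltages read from the BMC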

Re: [Beowulf] NVIDIA GPUs, CUDA, MD5, and "hobbyists"

2008-06-19 Thread John Hearns
On Thu, 2008-06-19 at 12:52 -0400, Peter St. John wrote: > I dug up this pdf from Nvidia: > http://www.nvidia.com/docs/IO/43395/tesla_product_overview_dec.pdf > Since I can't imagine coding a graphics card while it serves my X :-) > I suppose one might put the PCIE card in a box with a cheap SVGA

Re: [Beowulf] Re: "hobbyists"

2008-06-19 Thread John Hearns
On Thu, 2008-06-19 at 13:51 -0400, Robert G. Brown wrote: > > > why worry about ICBMs when DHL/FedEx will deliver it to your selected > > doorstep? > > Well, even a small bomb would be pretty heavy. Kind of at the boundary > of what FedEx will deliver ;-) Recall that the first British nuclear

Re: [Beowulf] NVIDIA GPUs, CUDA, MD5, and "hobbyists"

2008-06-20 Thread John Hearns
On Thu, 2008-06-19 at 17:16 -0700, Kilian CAVALOTTI wrote: > I don't even know how you choose (or even if you can choose) on which > GPU you want your code to be executed. It has to be handled by the > driver on the host machine somehow. There are functions for discovering the properties of the

Re: [Beowulf] security for small, personal clusters

2008-06-20 Thread John Hearns
On Fri, 2008-06-20 at 12:30 -0400, Mark Kosmowski wrote: > What kind of security is recommended for the owner of a small personal > cluster? Where should the owner of a small, personal cluster go to > learn about security? Doing searches tends to give a few "head in the > sand" sites but predomin
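
As a starting point only, a minimal sketch of the usual first steps on the head node (the interface name eth0 is a placeholder, and this is no substitute for reading up properly):

  # in /etc/ssh/sshd_config: keys only, no root logins
  PermitRootLogin no
  PasswordAuthentication no
  # firewall: allow ssh in, drop everything else unsolicited
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -i eth0 -j DROP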

RE: [Beowulf] Re: "hobbyists"

2008-06-25 Thread John Hearns
On Wed, 2008-06-25 at 16:50 +0200, Geoff Galitz wrote: > > I've never really bought the argument that biofuels are causing a food > shortage considering that there is still so much unused farmland in the US > and farming practices here in the EU. I must admit this is out of my field so > I have no r

Re: [Beowulf] Re: "hobbyists" still OT

2008-06-25 Thread John Hearns
On Thu, 2008-06-26 at 13:43 +1000, Chris Samuel wrote: > It is probably worth pointing out that, as a recent > New Scientist article mentioned, a major part of the > rise in grain prices is due to the rising demand for meat > from around the world. > > This is, of course, a very inefficient convers

Re: Commodity supercomputing, was: Re: NDAs Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-06-30 Thread John Hearns
On Mon, 2008-06-30 at 20:20 +0200, Toon Moene wrote: > > For about a year now, it's been clear to me that weather forecasting > (i.e., running a more or less sophisticated atmospheric model to provide > weather predictions) is going to be "mainstream" in the sense that every > business that need

RE: [Beowulf] Re: OT: LTO Ultrium (3) throughput?

2008-07-05 Thread John Hearns
On Fri, 2008-07-04 at 09:10 +0200, Geoff Galitz wrote: > Backing up to tape allows you to go back to a specific point in > history. Particularly useful if you need to recover a file that has > become corrupted or you need to rollback to a specific stage and you > are unaware of that fact for a few

Re: [Beowulf] Roadrunner picture

2008-07-16 Thread John Hearns
On Thu, 2008-07-17 at 04:42 +1000, Andrew Robbie (GMail) wrote: > On Tue, Jul 15, 2008 at 12:35 AM, Josip Loncaric <[EMAIL PROTECTED]> wrote: > > > > Another good link: > > > > http://www.lanl.gov/orgs/hpc/roadrunner/rrtechnicalseminars2008.shtml > > As I was reading the slides, one question leap

Re: [Beowulf] Roadrunner picture

2008-07-16 Thread John Hearns
On Wed, 2008-07-16 at 23:29 +0100, John Hearns wrote: > > To answer your question more directly, Panasas is a storage cluster to > complement your compute cluster. Each storage blade is connected into a > shelf (chassis) with an internal ethernet network. Each shelf is then > co

Re: [Beowulf] Re: Religious wars

2008-07-22 Thread John Hearns
On Tue, 2008-07-22 at 16:19 -0400, Bob Drzyzgula wrote: > But I don't understand... if resources aren't an issue (and > certainly they haven't been for at least a decade, since > BIOSs started supporting El Torito) and systems programmers > are *not* more likely to be vi users than emacs users, >

Re: [Beowulf] Re: Religious wars

2008-07-24 Thread John Hearns
On Wed, 2008-07-23 at 22:54 -0400, Bob Drzyzgula wrote: > On Wed, Jul 23, 2008 at 09:06:03PM -0400, Perry E. Metzger wrote: > > > > "Robert G. Brown" <[EMAIL PROTECTED]> writes: > > > Note that Bob and I started out on systems with far less than 100 MB > > > of DISK and perhaps a MB of system memo

Re: [Beowulf] Building new cluster - estimate

2008-07-28 Thread John Hearns
job. Can people share their recent experiences and recommend reliable > > vendors to deal with? Our standard build would be an APC rack, IPMI in all compute nodes plus two networked APC PDUs. John Hearns

Re: [Beowulf] Building new cluster - estimate

2008-07-29 Thread John Hearns
On Mon, 2008-07-28 at 23:18 -0400, Ivan Oleynik wrote: > > Space is not tight. The computer room is quite spacious but air > conditioning is rudimentary, with no windows or water lines to dump the > heat. It looks like a big problem; therefore, consider putting the > system somewhere else on campus, althoug

Re: [Beowulf] Building new cluster - estimate

2008-07-30 Thread John Hearns
On Tue, 2008-07-29 at 16:11 -0400, Joe Landman wrote: > Ivan Oleynik wrote: > > vendors have at least list prices available on their websites. > > > > > > I saw only one vendor siliconmechanics.com > > that has an online integrator. Others require direct contact of

Re: [Beowulf] Building new cluster - estimate

2008-07-30 Thread John Hearns
On Tue, 2008-07-29 at 18:28 -0400, Mark Hahn wrote: > > > > afaik, their efficiency is maybe 10% better than more routine hardware. > doesn't really change the big picture. and high-eff PSU's are available > in pretty much any form-factor. choosing lower-power processors (and perhaps > avoiding

Re: [Beowulf] Re: Linux cluster authenticating against multiple Active Directory domains

2008-08-01 Thread John Hearns
On Fri, 2008-08-01 at 15:37 +1000, Chris Samuel wrote: > We'd prefer to steer clear of Kerberos, it introduces > arbitrary job limitations through ticket lives that > are not tolerable for HPC work. > Kerberos is heavily used at CERN. They have a solution for that issue - the job can ask for an e
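
The CERN-style workaround is roughly this, a sketch assuming a renewable-ticket policy on the KDC and the krenew utility from the kstart package (the realm and job names are placeholders):

  kinit -r 7d user@EXAMPLE.ORG    # ticket renewable for up to a week
  krenew -K 60 ./long_job         # renew hourly while the job runs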

Re: [Beowulf] Re: Building new cluster - estimate (Ivan Oleynik)

2008-08-01 Thread John Hearns
On Fri, 2008-08-01 at 12:12 -0400, Mark Hahn wrote: > I'm not sure about the "most" part - HP's don't, and it looks like supermicro > offers options both ways. all the recent tyan boards I've looked at had > dedicated IPMI/OPMA onboard. all HP machines have dedicated ports. > > but to me this

Re: [Beowulf] Can one Infiniband net support MPI and a parallel file system?

2008-08-06 Thread John Hearns
On Tue, 2008-08-05 at 17:25 -0400, Gus Correa wrote: > Hello Beowulf fans > > Is anybody using Infiniband to provide both > MPI connection and parallel file system services on a Beowulf cluster? > > I thought to have a storage node that would > serve a parallel file system to the beowulf nodes ov

Re: [Beowulf] Building new cluster - estimate

2008-08-07 Thread John Hearns
On Thu, 2008-08-07 at 10:00 -0500, Jon Aquilina wrote: > my 2 cents about SSDs, and I bet a lot of you would agree: they are not > worth the money yet for the amount of storage space that you are > getting. I saw a 1 TB HDD at Fry's Electronics yesterday for 200 > dollars. Why go for something that

Re: [Beowulf] 10gig CX4 switches

2008-09-16 Thread John Hearns
Have a look at the Quadrics switches. The smaller TG201 switch would fit the bill nicely for you - you can get it in two variants, one with 24x copper ports and one with 12x copper ports and 12x empty ports for GBICs. They're pretty cost effective, John Hearns

Re: [Beowulf] MS Cray

2008-09-16 Thread John Hearns
the neighbours being woken up, though I'm quite happy to share a machine room with 200 of the things. John Hearns

Re: [Beowulf] MS Cray

2008-09-17 Thread John Hearns
2008/9/17 Lux, James P <[EMAIL PROTECTED]> > > . When mainframes first entered the halls of academe, I'm sure the same > sort of discussions arose. Heck, it's why computers like the PDP-8 were > invented. > > Jim > Just let me correct you there. Surely PDP-8s were calculators or Data Processing

Re: [Beowulf] ethernet bonding performance comparison "802.3ad" vs Adaptive Load Balancing

2008-09-18 Thread John Hearns
2008/9/18 Eric Thibodeau <[EMAIL PROTECTED]> > Wooo...I think I'll explore this then with my new machines, link > aggregation on the head node will come in handy for diskless nodes ;) > > These days, would you not be better speccing a 10Gbps ethernet card, and a switch with a couple of 10Gbps por
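
For comparison, the 802.3ad aggregation under discussion looks roughly like this on a 2.6 kernel, a sketch assuming eth0/eth1 and a switch configured for LACP:

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=802.3ad miimon=100
  # bring up the aggregate and enslave the links
  ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1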

Re: [Beowulf] Re: MS Cray

2008-09-18 Thread John Hearns
2008/9/18 Lux, James P <[EMAIL PROTECTED]> > > > How about as an "executive toy" for the guy in the corner office running > financial models? (I am a Master of the Universe, and I must have my special > data entirely under my control.) > Cough. I think a lot of them just left the building, carryin

Re: [Beowulf] What services do you run on your cluster nodes?

2008-09-23 Thread John Hearns
2008/9/23 Robert G. Brown <[EMAIL PROTECTED]> > > This meant that there could be hundreds or even thousands of machines > that saw every packet produced by every other machine on the LAN, > possibly after a few ethernet bridge hops. This made conditions ripe > for what used to be called a "packet

Re: Flow Viz... Re: Fwd: Re: [Beowulf] Earthquakes and raised floors...

2006-01-20 Thread John Hearns
On Fri, 2006-01-13 at 16:30 +, John Hearns wrote: > On Mon, 2006-01-09 at 09:56 -0800, Jim Lux wrote: > > > > > The traditional approach is using some form of smoke stream to visualize > > the air flow. Historically, one would use a bit of TiCl4, which combin

Re: [Beowulf] using two separate networks for different data streams

2006-01-26 Thread John Hearns
On Thu, 2006-01-26 at 18:56 +0000, Ricardo Reis wrote: > Hi > > I've looked around, in the list and google and didn't find anything > elucidating enough on this so maybe someone could enlighten me or point me > where to look. > We ship quite a few clusters configured like this. One gigabit swit

Re: [Beowulf] Beowulf, Gentoo and Navier Stokes solvers

2006-02-07 Thread John Hearns
On Mon, 2006-02-06 at 14:53 -0800, Jim Lux wrote: > > I'll bet that the folks on the list, if > they don't know, they do know someone who does, and will get you started in > the right direction. That sounds familiar! You're right about the application areas, need to find out what is being mode

Re: [Beowulf] Apologies for the spam/virus yesterday

2006-02-09 Thread John Hearns
On Thu, 2006-02-09 at 07:09 +, Andrew M.A. Cater wrote: > On Wed, Feb 08, 2006 at 10:01:06PM +, Martin Wheeler wrote: > > On Wed, 8 Feb 2006, Donald Becker wrote: > > > > >The bottom line is that we are considering a message board format to > > >replace the mailing list. I would vote ver

Re: [Beowulf] What did I neglect to add? Specing hardware and software and support for a 16 node beowulf

2006-02-16 Thread John Hearns
On Wed, 2006-02-15 at 17:36 -0800, Dan Stromberg wrote: > I've spent the afternoon brainstorming about things to ask vendors as we > evaluate their responses to our RFP and refine what we spec. > > Did I leave out anything that might come back and bite me later? > > http://dcs.nac.uci.edu/~stro

Re: [Beowulf] 'dual' Quad solution from Tyan

2006-02-28 Thread John Hearns
On Tue, 2006-02-28 at 08:42 -0500, Douglas Eadline wrote: > In the past, there were similar discussions about quad Pentium pro systems. > Memory bandwidth was an issue, but IIRC, the cost for a quad system was > much higher than buying four singles. Hmmm. It might be interesting to cluster togethe

Re: [Beowulf] "dual" Quad solution from Tyan

2006-03-01 Thread John Hearns
On Mon, 2006-02-27 at 20:49 +, Ricardo Reis wrote: > > The professor head of the lab wants me to know "if that is so good why > isn't everyone buying one?" Ricardo, the answer is that people are buying them. I'm answering as we are a European company, and this is a UK website. Speak to you

Re: [Beowulf] A bit OT - scientific workstations - recommendations

2006-03-03 Thread John Hearns
> - 24/7/365 next day on site support Please, please can I have Christmas Day off? (Having said that, being Scottish I should say that people used to work Christmas Day in the shipyards etc. But we get two days off at Hogmanay. No-one in Scotland is compos mentis till the 3rd)

Re: [Beowulf] nodes reserved but not used?

2006-03-08 Thread John Hearns
On Wed, 2006-03-08 at 11:54 +0000, Ru-Zhen Li wrote: > Dear All, > > I am having a problem with reserved nodes not being used. > > I used > > /home/hep/lrz/mpich2-install/bin/mpiexec -machinefile shortlist -n > 16 ./DLPOLY.Y A very stupid reply, but what happens if you run a simple 'hello wor
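
That check is simply the same command line with a trivial payload (mpdtrace applies only if the MPICH2 mpd process manager is in use):

  /home/hep/lrz/mpich2-install/bin/mpiexec -machinefile shortlist -n 16 hostname
  mpdtrace   # confirm the mpd ring actually spans the reserved nodes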

[Beowulf] Scyld Beowulf support for SGE

2006-03-09 Thread John Hearns
Duly relayed on behalf of Ron Chen: Out of the box, SGE does not run on Beowulf clusters. A fix was floating around the dev list but it was not in the official tree. I decided to get it fixed and I just checked in a fix for SGE 6.0u8 and maintrunk. See issue 1936 for details: http://gridengin

Re: [Beowulf] A bit OT - scientific workstations - recommendations

2006-03-10 Thread John Hearns
On Fri, 2006-03-10 at 14:47 -0500, Douglas Eadline wrote: > > I have heard stories about some of the first vacuum tube computers where a > full-time technician walked around inside the computer > and replaced blown out tubes - between every program run. I tend to > think this has a certain myth

Re: [Beowulf] Cluster newbie, power recommendations

2006-03-21 Thread John Hearns
On Sun, 2006-03-19 at 19:09 -0600, Eric Geater at Home wrote: > Howdy, everyone! > > Maybe this is a question better suited for hardware heads, but I've become > Beowulf curious, and am interested in learning a hardware question. Personally, I would just use 16 power outlets and 16 PSUs. Yes, it i

Re: [Beowulf] [EMAIL PROTECTED]: 1U server with 4 SATA ports and a 32-bit PCI slot]

2006-03-21 Thread John Hearns
On Tue, 2006-03-21 at 08:41 +0100, Eugen Leitl wrote: > * 1U chassis > * 1 dual-core amd64 > * 4 SATA drives > * 1 32-bit PCI slot (preferably 2) It is a bit of a kludge, but you could get the single-socket short Supermicro chassis and use an external SATA connector, and put the drives outside the

Re: [Beowulf] Re: Cluster newbie, power recommendations

2006-03-21 Thread John Hearns
On Tue, 2006-03-21 at 13:06 -0500, Joe Landman wrote: > > Not sure of the performance impact of this, but you could look at OpenVZ > or Xen as well (when it is ready). Xen has very little impact on performance. I saw some very good figures at a recent presentation at FOSDEM. I guess the bigges

Re: [Beowulf] Remote Console

2006-03-22 Thread John Hearns
On Wed, 2006-03-22 at 09:00 -0500, Luis Alejandro del Castillo Riley wrote: > Hi fellows, I have 5 PCs; one is my master computer and the others are > the slave machines. I am trying to build a cluster and am looking > for a program that can do a remote console > or desktop from the master PC to the oth

Re: [Beowulf] Static Compilation versus Dynamic Compilation

2006-03-23 Thread John Hearns
On Thu, 2006-03-23 at 15:51 -0300, Ivan Silvestre Paganini Marin wrote: > Hello everybody at beowulf list. I am compiling my application using PGI > compilers for Fortran 95, and some libraries, like fftw, lapack and > scalapack (from PGI or not, like ACML). I am curious to see if a static > compil
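
A sketch of the comparison, assuming PGI's -Bstatic/-Bdynamic link flags; the library list is a placeholder:

  pgf95 -o app_static   app.f90 -Bstatic   -llapack -lblas
  pgf95 -o app_dynamic  app.f90 -Bdynamic  -llapack -lblas
  ldd app_static app_dynamic   # confirm what each binary pulls in at run time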

Re: [Beowulf] Determining NFS usage by user on a cluster

2006-04-21 Thread John Hearns
On Thu, 2006-04-20 at 08:54 +0100, Guy Coates wrote: > Konstantin Kudin wrote: > > Hi all, > > > > Is there any good solution to find out which user is loading the NFS > > the most in a cluster configuration? > > > > iftop is quite a handy tool; it displays traffic on a per-host basis, so > if
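
A sketch of narrowing iftop to NFS traffic on the server (the interface name is a placeholder, and mapping hosts back to users is still a manual step):

  iftop -i eth0 -f "port 2049"   # eth0 assumed; 2049 is the NFS port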

Re: [Beowulf] /. [HyperTransport 3.0 Ratified]

2006-04-25 Thread John Hearns
On Tue, 2006-04-25 at 08:42 +0200, Eugen Leitl wrote: > Link: http://slashdot.org/article.pl?sid=06/04/24/203238 > Posted by: ScuttleMonkey, on 2006-04-24 21:17:00 > >Hack Jandy writes "The HyperTransport consortium just released the >[1]3.0 specification of HyperTransport. The new specifi

Re: [Beowulf] Opteron cooling specifications?

2006-04-30 Thread John Hearns
On Sat, 2006-04-29 at 18:08 -0700, David Kewley wrote: > What does it mean to say that 32 is "standard"? Why shouldn't 40 be > standard, other than perhaps 32 is more typically done? > Surely related to the maximum amperage you supply to the rack? With dual cores and now quad socket systems cur

Re: [Beowulf] 512 nodes Myrinet cluster Challanges

2006-05-05 Thread John Hearns
On Fri, 2006-05-05 at 10:23 +0200, Alan Louis Scheinine wrote: > Since you'all are talking about IPMI, I have a question. > The newer Tyan boards have a plug-in IPMI 2.0 that uses > one of the two Gigabit Ethernet channels for the Ethernet > connection to IPMI. If I use channel bonding (trunking)

Re: [Beowulf] Bonding Ethernet cards / [was] 512 nodes Myrinet cluster Challenges

2006-05-11 Thread John Hearns
On Tue, 2006-05-09 at 03:38 +0100, Krugger wrote: > Would it not be better to split the networks physically? I mean one > network (+ routers) for the NFS (basically for remote IO) and the > other network (+ routers) for the interprocess communications. This is a very common configuration for our c

Re: [Beowulf] What does anybody know about DRC ?

2006-05-23 Thread John Hearns
On Tue, 2006-05-23 at 08:58 -0400, leo wrote: > Has anybody had an experience or comment on the DRC CO-processor > Module? > > See http://www.drccomputer.com/ It's very interesting, and if anyone on this side of the pond is interested in trying one, or porting code to it we would like to help. > I

Re: [Beowulf] What can a HS student do with a small Beowulf?

2006-05-24 Thread John Hearns
On Sat, 2006-05-20 at 23:24 -0400, sNAAPS eLYK wrote: > Hello, I'm Kyle Spaans, finishing my last year of high school in > Northern Ontario, Canada, and I'm a budding Linux user. PS. If you want something fun to do on your cluster, implement this: http://www.kerrighed.org/ There's a live CD vers

Re: [Beowulf] What can a HS student do with a small Beowulf?

2006-05-24 Thread John Hearns
On Tue, 2006-05-23 at 15:52 -0400, Todd Patton wrote: > Is it the norm for the list that the cluster manager, > administrator, builder, programmer, and user roles are played by the > same person(s)? Should a high school student that really wants to pursue > a career in computational clusters fol

Re: [Beowulf] OpenAFS+OpenLDAP+Kerberos on Master Node

2006-05-31 Thread John Hearns
On Tue, 2006-05-30 at 11:46 -0500, Juan Camilo Hernandez wrote: > Could somebody help me with a guide to set up an > OpenAFS+OpenLDAP+Kerberos system on my master node? I can't help directly I'm afraid. For the AFS part, it might be worth looking at Scientific Linux, which has good AFS supp

Re: [Beowulf] FreeBSD 6.1 and single system image

2006-07-14 Thread John Hearns
Glen Gardner wrote: I've recently installed FreeBSD 6.1 on a 12 node diskless cluster. I've got a couple of observations to share for those who might be interested... The remarkable thing I notice about the FreeBSD setup is that with all nodes booting the same image (including the head node),

Re: [Beowulf] layer 3 switches

2006-09-22 Thread John Hearns
ay layer 3 functionality is not needed. (*) Well, not really one switch. One switch for admin and Panasas storage traffic, and two for dedicated MPI traffic. -- John Hearns Senior HPC Engineer Streamline Computing, The Innovation Centre, Warwick Technology Park, Ga

Re: [Beowulf] layer 3 switches

2006-09-22 Thread John Hearns
Warren Turkal wrote: Does a layer 3 switch make sense on a Beowulf cluster running on GigE? If so, does anyone have any recommendations? Tech specs for the 5510, and no I don't have shares. http://tinyurl.com/9d8as You can stack up to eight of them. -- John Hearns Senio

Re: [Beowulf] Ineternet cluster

2006-10-01 Thread John Hearns
Maxence Dunnewind wrote: OK, I would do a "packaging farm" because I know some people who package some big apps, and the building time is about 20 hours :/ So, do you think there really is no solution for parallel work over the Internet? If you install Sun Gridengine on a set of machines, the
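
A sketch of the submission side under SGE, with the build script name as a placeholder:

  qsub -cwd -j y build_package.sh   # one job per package build
  qstat -f                          # watch SGE spread them across free nodes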
