This thread has gone horribly off topic. There's more noise about noise
than there is about the original question.
Prentice Bisbal
Linux Software Support Specialist/System Administrator
School of Natural Sciences
Institute for Advanced Study
ized rsh is
essentially invisible, therefore
rsh + kerberos = difficulty
which is a first-order relationship.
If anything, setting up rsh is the most difficult one. Why? Since rsh is
so insecure, the distro producers/vendors have created many hurdles you
must hop to get it working (correct file a
advanced kung-fu best
left to the black belts. Letting a neophyte build and run an HPC cluster
is some kind of oxymoron.
Yes, I know that professors usually tell some green graduate student to
go build a cluster for the dept, but that's a completely different topic
outside the scope of this list...
than the Cray purchase.
>
> ==rob
>
When will SGI become RIP? That company has had one foot in the grave for
10 years now!
--
Prentice Bisbal
Linux Software Support Specialist/System Administrator
School of Natural Sciences
Institute for Advanced Study
Princeton, NJ
architecture textbook. You might even
say it's the "gold standard." I'm pretty sure it discusses NUMA somewhere
between its covers.
http://www.amazon.com/Computer-Architecture-Quantitative-Approach-Kaufmann/dp/1558605967
Prentice Bisbal
Linux Software Support Specialist/System Administrator
Jim Lux wrote:
> Quoting "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>, on Tue
> 25 Mar 2008 04:21:54 AM PDT:
>
>> Dear All,
>>
>> One of my friends is looking for a person
>> who can take up a postdoc for one year in Canada
> ( 1500 Canadian dollars per month, including tax )
>> This may be able to
Geoff Jacobs wrote:
> Ricardo Reis wrote:
>> Hi all
>>
>> I beg to take advantage of your experience although the topic isn't
>> completely a cluster thing. I got some money to buy a new machine, at least
>> 8Gb and I'm thinking between a 2 x dual core or a 1 x quad (or even 2x
>> quads). It must
Joshua Baker-LePain wrote:
> On Fri, 28 Mar 2008 at 12:26pm, Mark Hahn wrote
>
>>> Also, AFAIK, neither project supports the professional series of
>>> cards (Quadro/FireGL).
>>
>> I'm pretty sure I normally just install the unified NV driver for
>> quadros.
>
> Sure, but that driver doesn't do
Greg Lindahl wrote:
> On Fri, Mar 28, 2008 at 01:40:57PM -0400, Mark Hahn wrote:
>
>> I haven't had problems with either nv or nvidia drivers.
>
> nvidia is somewhat annoying to update to new versions or new kernels.
>
This is easily fixed if you use HP's rpm of the drivers, which includes
th
Greg Lindahl wrote:
> On Fri, Mar 28, 2008 at 06:41:20PM -0400, Joshua Baker-LePain wrote:
>
>> Depending on your distribution, there are various folks packaging up the
>> drivers in more convenient forms than nvidia's installer. The livna RPMs
>> for Fedora, e.g., work very well.
>
> Alas, I
Gerry Creager wrote:
> We're building up a new high-throughput cluster. Anticipate delivery with
> nice shiny new racks. We're anticipating a hot-aisle/cold-aisle
> concept. We also hope to go toward chilled-water cool-doors eventually,
> and as we expand the installation. The racks we're getting
Can anyone give me a quick comparison of OpenMPI vs. MPICH? I've always
used MPICH (I never liked lam), and just recently heard about OpenMPI.
Anyone here using it?
--
Prentice Bisbal
Linux Software Support Specialist/System Administrator
School of Natural Sciences
Institute for Advanced Study
Prentice Bisbal wrote:
> Can anyone give me a quick comparison of OpenMPI vs. MPICH? I've always
> used MPICH (I never liked lam), and just recently heard about OpenMPI.
> Anyone here using it?
>
Thanks to all of those who replied. I appreciate th
At a previous job, I installed SGE for our cluster. At my current job
Torque is the queuing system of choice. I'm very familiar with SGE, but
only have a cursory knowledge of Torque (installed it for evaluation,
and that's it). We're about to purchase a new cluster. I'd have to make
a good argument
John Hearns wrote:
> On Fri, 2008-05-09 at 14:26 -0400, Prentice Bisbal wrote:
>
>> 1. Interactive shells managed by queuing system
>> 2. Counting licenses in use (done using a contributed shell script in SGE)
>> 3. Separation of roles between submit hosts, execution host
Reuti wrote:
> Hi,
>
> Am 09.05.2008 um 20:26 schrieb Prentice Bisbal:
>
>> At a previous job, I installed SGE for our cluster. At my current job
>> Torque is the queuing system of choice. I'm very familiar with SGE, but
>> only have a cursory knowledge of
Jim Lux wrote:
> Actually, you can order cables already pre-numbered and labelled. Why
> burn expensive cluster assembler time when you can pay someone
> (potentially offshore) to do it cheaper.
Because that would hurt the US economy, and the labels would probably be
made out of lead. ;-)
--
Prentice
Does anyone know of any network cards/drivers that support TOE (TCP
Offload Engine) for Linux? A hardware vendor just told me that Linux
does not support the TOE features of *any* network card.
Given Linux's strong presence in HPC and the value of having TOE in a
cluster, I find that hard to believe.
Prentice Bisbal wrote:
> Does anyone know of any network cards/drivers that support TOE (TCP
> Offload Engine) for Linux? A hardware vendor just told me that Linux
> does not support the TOE features of *any* network card.
>
> Given Linux's strong presence in HPC and the valu
Carsten Aulbert wrote:
> I don't know if TOE is equal to TSO, if not, please stop reading here.
TSO != TOE. I'm learning that right now as I read this informative page:
http://lwn.net/Articles/148697/
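For anyone who wants to check what their own NIC advertises: TSO shows up as an ordinary offload flag in the kernel's view of the driver, whereas TOE (a full TCP stack on the NIC) is exactly what mainline Linux declines to support. A quick way to look, where "eth0" is only a placeholder interface name:

```shell
# TSO lives inside the normal Linux network stack, so the driver simply
# advertises it as an offload feature the kernel can toggle.
# "eth0" is a placeholder; substitute your real interface.
ethtool -k eth0 2>/dev/null | grep -i 'segmentation-offload' \
  || echo "ethtool or interface not available here"
```

If the interface exists, you'll see lines like `tcp-segmentation-offload: on`; there is no analogous flag for TOE because the stack-bypass model was rejected upstream.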
--
Prentice
___
Beowulf mailing list, Beowulf@beowulf.org
John Hearns wrote:
> On Mon, 2008-05-19 at 18:42 -0400, Mark Hahn wrote:
>
>>> 1. Is having 10 GbE and Infiniband in the same cluster overkill, or at
>>> least unorthodox? This cluster will be used by a variety of users
>>>
>> I would say so - if you've got IB, why add another interface
This is slightly off topic, since it's not a Beowulf-specific
problem, but it is HPC-related:
I have several fat servers with 4 cores and 32 GB of RAM, for jobs that
aren't very parallel and need large amounts of RAM. They are not
clustered in any way. At the moment, users ssh into these systems
Perry E. Metzger wrote:
> "Lombard, David N" <[EMAIL PROTECTED]> writes:
>> On Mon, Jun 09, 2008 at 11:41:29AM -0400, Prentice Bisbal wrote:
>>> I would like to impose some CPU and memory limits on users that are hard
>>> limits that can't be c
Chris Samuel wrote:
>
> Unfortunately the kernel implementation of mmap() doesn't check
> the maximum memory size (RLIMIT_RSS) or maximum data size (RLIMIT_DATA)
> limits which were being set, but only the maximum virtual RAM size
> (RLIMIT_AS) - this is documented in the setrlimit(2) man page.
>
Bernard Li wrote:
> Hi all:
>
> I am sure most people have seen the following picture for Roadrunner
> circulating the Net:
>
> http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html?iref=newssearch
>
> However, they don't look like blades to me, more like 2U IBM x series
> servers.
Mark Hahn wrote:
>>> Unfortunately the kernel implementation of mmap() doesn't check
>>> the maximum memory size (RLIMIT_RSS) or maximum data size (RLIMIT_DATA)
>>> limits which were being set, but only the maximum virtual RAM size
>>> (RLIMIT_AS) - this is documented in the setrlimit(2) man page.
Lombard, David N wrote:
> On Mon, Jun 09, 2008 at 11:41:29AM -0400, Prentice Bisbal wrote:
>> I would like to impose some CPU and memory limits on users that are hard
>> limits that can't be changed/overridden by the users. What is the best
>> way to do this? All I know i
Vincent Diepeveen wrote:
>
> That has to change in order to get GPU calculations more into mainstream.
>
> When I calculate on paper, for some applications a GPU can potentially be a
> factor of 4-8 faster than a standard 2.4 GHz quad-core is right now.
>
> Getting that performance out of the GPU is mo
Perry E. Metzger wrote:
> Prentice Bisbal <[EMAIL PROTECTED]> writes:
>> Completely untrue. One of my colleagues, who does a lot of work with GPU
>> processors for astrophysics calculations, was able to increase the
>> performance of the MD5 algorithm by ~100x
commercially:
http://www.engadget.com/2007/10/24/elcomsoft-turns-your-pc-into-a-password-cracking-supercomputer/
NVIDIA has promised us some new GPUs through their Professor Partner
program. I'm sure once we get our hands on them, we'll do more
coding/benchmarking. Not sure if they
the need for parallelizing MD5.
There is value, however, if your goal is to recover (discover?) an
MD5-hashed password through a brute-force attack. Last time I checked,
MD5 password hashes are the default for most Linux distros.
3. To show that more than just "hobbyists" are investigating GPUs.
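To illustrate why this workload suits GPUs so well: every candidate hashes independently, so the search is embarrassingly parallel. A minimal CPU-side sketch of the brute-force core (plain unsalted MD5 for simplicity; real Linux MD5 crypt passwords are salted and iterated, so this is only the naive idea, and `crack_md5` is a hypothetical helper name):

```python
import hashlib
import string
from itertools import product

def crack_md5(target_hex, charset=string.ascii_lowercase, max_len=4):
    """Try every candidate string up to max_len until one hashes to
    target_hex. Each candidate is independent of the others, which is
    exactly what maps well onto thousands of GPU threads."""
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

print(crack_md5(hashlib.md5(b"abc").hexdigest()))  # → abc
```

A GPU port just shards the candidate space across threads; there is no shared state to synchronize, so the ~100x speedup figure is plausible.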
--
Lawrence Stewart wrote:
> And don't get me started about the ways in which Linux is ill suited to
> HPC. . . . well actually that
> would be a pretty good debate for this forum.
> -Larry
>
But, Larry, SiCortex products are based on Linux. It says so in nice big
letters right there on your compan
Vincent Diepeveen wrote:
> That said, it has improved a lot; now all we need is a better compiler
> for Linux. For my chess program, GCC generates an
> executable that gets 22% fewer positions per second than Visual C++
> 2005 does.
>
> Thanks,
> Vincent
>
GCC is a free compiler, and Visual C
Bill Broadley wrote:
> Vincent Diepeveen wrote:
>> intel c++ obviously is close to visual studio. Within 0.5% to 1.5%
>> range (depending upon flags
>
> I believe Microsoft licensed the intel optimization technology, so the
> similarity is hardly surprising.
>
>> and hidden flags that you managed
> Vincent Diepeveen wrote:
>>
>> Some third-world country managers are begging to address this issue: "My
>> nation's people die,
>> as your biofuel raises our food prices; the poor are so poor here,
>> they use that stuff as food
>> and cannot afford it now".
All this discussion of politics is completely off-topic.
[EMAIL PROTECTED] wrote:
> I respectfully request that you take conversations about washing machines and
> other non_beowulf related topics off to some other mailing list. I have
> plenty of email to delete without having the load increased by irrelevant
> discussions on this one.
>
> Many tha
can take you on a tour of East Texas as proof...
>
> gerry
>
> andrew holway wrote:
>> I would suggest a good email client that can handle threads well such
>> as gmail. These devils will never learn. Domestic appliances are
>> indeed deeply ingrained into their souls.
Lombard, David N wrote:
> On Thu, Jun 26, 2008 at 09:14:50PM +0100, andrew holway wrote:
>> I know where this is going and right away I'm going to trump you with
>> this picture of a trailer park mansion:-
>> http://bp1.blogger.com/_CCeVPrmu0G8/R8g0BnShRiI/AeU/M_ZJvm985yA/s1600-h/redneckman
Mark Hahn wrote:
>>> We have our own stack which we stick on top of the customers favourite
>>> red hat clone. Usually Scientific Linux.
>>
>> does it necessarily have to be a redhat clone. can it also be a debian
>> based
>> clone?
>
> but why? is there some concrete advantage to using Debian?
>
Mark Hahn wrote:
>>>> does it necessarily have to be a redhat clone. can it also be a
>>>> debian-based clone?
>>>
>>> but why? is there some concrete advantage to using Debian?
>>> I've never understood why Debian users tend to be very True Believer,
>>> or what it is that hooks them.
>>
>>
Mark Hahn wrote:
>> Hmmm for me, it's all about the kernel. That's 90+% of the battle.
>> Some distros use good kernels, some do not. I won't mention who I
>> think is in the latter category.
>
> I was hoping for some discussion of concrete issues. for instance,
> I have the impression debi
Perry E. Metzger wrote:
> "Jon Aquilina" <[EMAIL PROTECTED]> writes:
>> my idea is more for my thesis.
>
> If you're trying to do 3d animation on the cheap and you want
> something that's already cluster capable, I'd try Blender. It is open
> source and it has already made some reasonable lengt
Henning Fehrmann wrote:
> On Wed, Jul 02, 2008 at 09:19:50AM +0100, Tim Cutts wrote:
>> On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote:
>>
>>> OK, we have 1342 nodes which act as servers as well as clients. Every
>>> node exports a single local directory and all other nodes can mount this.
>>>
>>
Tim Cutts wrote:
>
> On 2 Jul 2008, at 6:06 am, Mark Hahn wrote:
>
>>>> I was hoping for some discussion of concrete issues. for instance,
>>>> I have the impression debian uses something other than sysvinit -
>>>> does that work out well?
>>> Debian uses standard sysvinit-style scripts in
Jon Aquilina wrote:
> like you said in regard to maya, money is a factor for me. if i do
> decide to set up a rendering cluster, my problem is going to be finding
> someone who can make a small video in blender for me so i can render it.
Blender should come with a few small scene files you can render
Mark Kosmowski wrote:
> I think I have come to a compromise that can keep me in business.
> Until I have a better understanding of the software and am ready for
> production runs, I'll stick to a small system that can be run on one
> node and leave the other two powered down. I've also applied fo
> You don't need to go this far. Just set up the hostfile to use the same
> host name several times. Just make sure you don't start swapping :)
>
> Jeff
>
Unless the problem is configuring interhost communications correctly.
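For reference, Jeff's suggestion above amounts to a machinefile like the following (the hostname is a placeholder), which MPICH-style launchers accept:

```
# machinefile: repeat a host once per MPI rank you want placed on it
node01
node01
node01
node01
```

Launched with something like `mpiexec -f machinefile -n 4 ./a.out`, all four ranks land on node01, so any interhost communication problem is taken out of the picture — which is exactly why it can't help you debug one.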
--
Prentice
Joe Landman wrote:
>
>
> Prentice Bisbal wrote:
>
>> Here's another reason to use tarballs: I have /usr/local shared to all
>
> eeek!! something named local is shared???
Nothing wrong with that. "local" doesn't necessarily mean local to the
physical machine.
Tim Cutts wrote:
>
> On 3 Jul 2008, at 2:38 pm, Prentice Bisbal wrote:
>
>> Here's another reason to use tarballs: I have /usr/local shared to all
>> my systems with NFS.
>
> Heh. Your view of local is different from mine. On my systems
> /usr/local is
Joshua Baker-LePain wrote:
> On Thu, 3 Jul 2008 at 9:34am, Tim Cutts wrote
>> On 2 Jul 2008, at 4:22 pm, Prentice Bisbal wrote:
>
>>> B. Red Hat has done such a good job of spreading FUD about the other
>>> Linux distros, management has a cow if you tell them you
Jeffrey B. Layton wrote:
> Prentice Bisbal wrote:
>>> You don't need to go this far. Just set up the hostfile to use the same
>>> host name several times. Just make sure you don't start swapping :)
>>>
>>> Jeff
>>>
>>>
>>
Tim Cutts wrote:
>
> On 3 Jul 2008, at 5:09 pm, Joshua Baker-LePain wrote:
>
>> On Thu, 3 Jul 2008 at 9:34am, Tim Cutts wrote
>>> On 2 Jul 2008, at 4:22 pm, Prentice Bisbal wrote:
>>
>>>> B. Red Hat has done such a good job of spreading FUD about the
Steffen Grunewald wrote:
>
> Which isn't true. Don't you remember MCC Interim Linux, back in the old
> days of 0.95[abc] kernels? It didn't consist of tens of floppies (yet),
> but it *was* a distro.
>
Actually, no, I don't remember MCC Interim Linux. It was before my time.
My experience with L
[EMAIL PROTECTED] wrote:
>> -Original Message-
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Prentice
>> Bisbal
>> Sent: 08 July 2008 15:09
>> Cc: beowulf@beowulf.org
>> Subject: Re: [Beowulf] A press release
>>
>> Stef
Douglas Eadline wrote:
> A blast from the past. I have a copy of the Yggdrasil "Linux Bible".
> A phone book of Linux How-To's and other docs from around 1995.
> Quite useful before Google became the help desk.
>
> --
> Doug
>
Translation: Doug is a pack rat.
--
Prentice
Karen Shaeffer wrote:
>
> Hi,
>
> OK, here is your linux history buff quiz. We all know Patrick V. was
> the technical spirit of slackware. Who was the original sales and
> marketing wizard for slackware? (smiles ;)
>
> Thanks,
> Karen
Quiz #2: Spell Patrick V.'s last name for Karen.
--
Prentice
Jim Lux wrote:
> At 09:04 AM 7/9/2008, Karen Shaeffer wrote:
>> On Wed, Jul 09, 2008 at 09:58:21AM -0400, Prentice Bisbal wrote:
>> > Karen Shaeffer wrote:
>> > >
>> > > Hi,
>> > >
>> > > OK, here is your linux history buff quiz. W
Lloyd Brown wrote:
>
> - Add the path (/usr/local/lib) either to /etc/ld.so.conf file, or to a
> file in /etc/ld.so.conf.d/, then run "ldconfig" to update the path
> cache, etc. This is the recommended system-wide way of doing things.
>
>
What happens when you have two different library paths
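A non-root way to explore that question: `ldconfig -p` dumps the cache that ldconfig built from all the registered paths, so grepping for a soname shows which directory's copy the loader will actually resolve ("libz" is just an example library name):

```shell
# Query the loader cache that ldconfig maintains; grepping a soname shows
# which directory's copy of the library will be used at run time.
# "libz" is an example name; substitute the library you care about.
ldconfig -p 2>/dev/null | grep 'libz.so' || echo "ldconfig not available here"
```

If two registered directories each provide the same soname, this is the quickest way to see which one won, rather than reasoning about ld.so.conf ordering from first principles.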
Oops. I sent this to the wrong list. Sorry!
Prentice Bisbal wrote:
> Lloyd Brown wrote:
>> - Add the path (/usr/local/lib) either to /etc/ld.so.conf file, or to a
>> file in /etc/ld.so.conf.d/, then run "ldconfig" to update the path
>> cache, etc. This is the
John Hearns wrote:
> On Fri, 2008-08-01 at 15:37 +1000, Chris Samuel wrote:
>
>> We'd prefer to steer clear of Kerberos, it introduces
>> arbitrary job limitations through ticket lives that
>> are not tolerable for HPC work.
>>
> Kerberos is heavily used at CERN. They have a solution for that issu
Perry E. Metzger wrote:
> Prentice Bisbal <[EMAIL PROTECTED]> writes:
>> John Hearns wrote:
>>> On Fri, 2008-08-01 at 15:37 +1000, Chris Samuel wrote:
>>>
>>>> We'd prefer to steer clear of Kerberos, it introduces
>>>> arbitrary job li
Alan Louis Scheinine wrote:
>> I don't believe it. It sounds way too simple!
>
> Perhaps the tricky part begins with the seemingly innocent
> phrase "Standard documentation can tell you how to do
> it -- just read the manuals."
Are you saying RTFM?
I've read the O'Reilly book on Kerberos several
sort of strange myth has been going by so long on this that
> people refuse to believe that the ticket refresh is a single easy
> command?
Maybe you're not reading the questions correctly. In my original
question, I asked how to do this using the queuing
system
I will soon be setting up my first cluster that uses Infiniband. I've
searched the net, but I haven't found any good tutorials or articles on
configuring Infiniband networking on Linux and/or how the Infiniband
networking protocol (if that is the correct term) works.
Can someone point me to a good
Anand Vaidya wrote:
>
> On Thu, Aug 28, 2008 at 12:56 AM, Nifty niftyompi Mitch
> <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>
> On Wed, Aug 27, 2008 at 10:42:23AM -0400, Prentice Bisbal wrote:
> >
> > I will soon be setting up my f
Since an Infiniband fabric needs a subnet manager, should the master
node have an IB HCA and be connected to the IB network in order to run
the subnet manager?
My logic behind this is that the master node will be full
enterprise-level hardware (redundant everything), and should never go
down or
Greg Lindahl wrote:
> Spoken like a computer scientist.
>
> But this topic has slid off topic for this newsgroup. Don't you have
> work to do? Or do you type as fast as rgb?
>
> -- greg
I agree. This is akin to a religious war which no one will win.
--
Prentice
This discussion is still completely off-topic. This is a list about
computing issues relating to beowulf clusters, not software engineering
at large, sociology or psychology.
--
Prentice
Prentice Bisbal wrote:
> Since an Infiniband fabric needs a subnet manager, should the master
> node have an IB HCA and be connected to the IB network in order to run
> the subnet manager?
>
> My logic behind this is that the master node will be full
> enterprise-level hardwar
Nifty niftyompi Mitch wrote:
>> I've gotten a lot of response to my IB questions that I posed to the
>> list. Thanks for all your help. All of my questions have been answered.
>> It turns out, as some of you pointed out, that my switch will have a
>> built-in subnet manager, so I won't need to run
Tom Elken wrote:
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Pierce
>
>
> ... It is a cost effective solution, and Dell clusters keep
> popping up at US Universities as well.
>
> Tom
>
> The same is true at UK Universities.
>
> -Tom
Joe Landman wrote:
> Prentice Bisbal wrote:
>
> [...]
>
>> Getting back to hardware, I've always been impressed with the robustness
>> of HP Proliant hardware
>
> Of course, the dirty little (not so) secret of tier 1 systems is that
> they are all built
Joe Landman wrote:
> Prentice Bisbal wrote:
>
>> I'm sure even in the computer world a similar rule applies. $ = cheap
>> components, $$= better components, etc.
>
> A Xeon is a Xeon is a Xeon.
>
> Some RAM DIMM builders use ... ah ... less than spectacular .
Jeff Johnson wrote:
>
>> A Xeon is a Xeon is a Xeon.
>>
> This is a very true statement.
>
> Unfortunately for many, the commonality ends where the processor and
> socket meet. There is a great deal of deviation in motherboard designs.
> Some are much better than others and it is not always ba
Robert G. Brown wrote:
> On Mon, 8 Sep 2008, Greg Lindahl wrote:
>
>> On Mon, Sep 08, 2008 at 02:58:36PM -0400, Prentice Bisbal wrote:
>>
>>> I think these trends have more to do with the cheap cost of Dell
>>> Hardware and Dell's sales force and marke
Gerry Creager wrote:
> Andrew Holway wrote:
>>
>>> After finally diagnosing the problem, the phone support then scheduled a
>>> technician to come out with a new PERC card and motherboard to replace
>>> one or both of them. At that point, they could have skipped the on-site
>>> technician and le
Gus Correa wrote:
> Dear Beowulf and COTS fans
>
> For those of you who haven't read the news today:
>
> http://www.theregister.co.uk/2008/09/16/cray_baby_super/
>
> IGIDH (I guess it doesn't help.)
>
> Gus Correa
>
Quote from article:
"It's also attempting to lure scientists and researchers
John Hearns wrote:
>
>
> 2008/9/16 Prentice Bisbal <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
>
> That will work great until the newbie scientists find that airflow into
> a computer tucked in "behind their desk where no one can see it" is
The more services you run on your cluster nodes (gmond, sendmail, etc.),
the less performance is available for number crunching; but the fewer you
run, the harder administration becomes. For example, if you turn off
postfix/sendmail, you'll no longer get automated e-mails from your
system to alert y
Gerry Creager wrote:
> Eric Thibodeau wrote:
>> Prentice Bisbal wrote:
>>> The more services you run on your cluster node (gmond, sendmail, etc.)
>>> the less performance is available for number crunching, but at the same
>>> time, administration difficulty
Interesting opinion on Hadoop:
“The Google guys have to be just laughing in their beer right now
because they invented MapReduce a decade ago to serve the data storage
needs of the Google crawl of the Internet… and moved all of that to Big
Table,” Stonebraker says. “Why did they do that? Becau
This is a good solution for this problem. Unfortunately, my Dell reps
tell me that they are discontinuing this product. I'm not sure if it's
even still available. Apparently it didn't sell very well.
Prentice
On 06/04/2014 08:34 AM, Gavin W. Burris wrote:
Hi, Raphael.
We have been using the
Raphael,
Not many vendors make servers that support that many GPUs in a single
chassis. I know for a fact Dell doesn't. I think the HP Proliant servers
are your best option.
The processor shouldn't be too important in terms of compatibility with the
GPU. If all the work is going to be done on the
On 06/25/2014 03:08 PM, Kilian Cavalotti wrote:
On Wed, Jun 25, 2014 at 10:29 AM, Andrew M.A. Cater
wrote:
RHEL doesn't cut it for these people: they know that they want later
GCC / different commercial compilers / hand written assembly - a later
kernel with a smarter scheduler ...
SCL reall
On 06/25/2014 05:51 PM, Jonathan Aquilina wrote:
You guys mention Perl, and I learned an interesting hackish way to get the
latest version of Perl on one's system.
Have you Perl guys used perlbrew before? I set up a perlbrew setup for a
centos vm template at the Data center I used to work at. I h
On 06/25/2014 06:07 PM, Joe Landman wrote:
On 06/25/2014 05:51 PM, Jonathan Aquilina wrote:
You guys mention Perl, and I learned an interesting hackish way to get the
latest version of Perl on one's system.
Have you Perl guys used perlbrew before? I set up a perlbrew setup for a
Yes, but ran
C applications to push my new/shiny/academic file system. Holy
smokes what I would have given for a flexible environment then.
3.a continues below:
On 06/25/2014 04:50 PM, Prentice Bisbal wrote:
On 06/25/2014 03:08 PM, Kilian Cavalotti wrote:
On Wed, Jun 25, 2014 at 10:29 AM, Andrew M.A. Cater
wr
On 06/26/2014 08:59 AM, Gavin W. Burris wrote:
On Wed 06/25/14 07:14PM -0400, Ellis H. Wilson III wrote:
I ended up doing very crazy root-stealing, chroot-establishing things to get
my science done in my PhD. If you prevent intelligent people from doing
their work, they are going to be your wor
I second Gavin.
A lot of people have been mentioning LXC and Docker as cures to this
problem, and to paraphrase The Princess Bride: you keep using those
words; I don't think they mean what you think they mean. Docker and LXC
are great for isolating running services: apache, DNS, etc. For the m
On 07/01/2014 12:17 AM, Matt Wallis wrote:
On 01/07/14 13:45, Jonathan Aquilina wrote:
This question probably sounds like a stupid one, but what difference
in an
HPC environment and to parallel written code does compiler version make?
Depends on the day of the week, the processor, the code,
On 07/01/2014 12:35 PM, Jonathan Aquilina wrote:
I think my question though is this: can one see negative impacts if the
compiler gets upgraded, regardless of whether it's gcc or intel.
If you're talking about the distro-provided compiler, no. They are
usually tested well by the distro maintainer, and
It appears that page has been taken down:
Sorry...
The page you have requested does not exist or is no longer available.
Prentice
On 07/02/2014 05:11 PM, Douglas Eadline wrote:
China's world-beating supercomputer fails to impress some potential clients
http://www.scmp.com/news/china/article
On 07/02/2014 05:37 PM, James Cuff wrote:
Let this be a lesson to us all.
Repeat after me:
"Top 500 scores mean nothing!"
As a community we have to stop this madness! 100k / day. Sigh.
J.
There are signs of this madness ending, at least in the US. I read a few
studies about how computat
On 07/02/2014 05:21 PM, "C. Bergström" wrote:
On 07/ 3/14 04:17 AM, Jeff Johnson wrote:
If they want to spend a bazillion dollars to run hpl faster than anyone
else who am I to stop them.
If however they want to do real science perhaps they need to architect
something more manageable.
They sho
Beowulfers,
Are any of you monitoring the power draw on your clusters? If so, can
any of you provide me with some statistics on your power draw under
heavy load? Ideally, I'm looking for the power load for a worst-case
scenario, such as running HPL, on a per-rack basis. If you can provide
me
On 07/28/2014 02:13 PM, Mark Hahn wrote:
Are any of you monitoring the power draw on your clusters? If so, can
any of you provide me with some statistics on your power draw under
heavy load?
good question; it's something that deserves more attention and coverage.
ATM, I can only provide one
input to come up with an
average, or ballpark amount. The 5-10 kW one vendor specified seems
way too low for a rack of high-density HPC nodes running at or near
100% utilization.
Jeff White - GNU+Linux Systems Administrator
University of Pittsburgh - CSSD
On 07/28/2014 10:51 AM, Prentice
On 07/28/2014 03:07 PM, Joe Landman wrote:
On 7/28/14, 2:55 PM, Prentice Bisbal wrote:
On 07/28/2014 01:29 PM, Jeff White wrote:
Power draw will vary greatly depending on many factors. Where I am
at we currently have 16 racks of HPC equipment (compute nodes,
storage, network gear, etc