On Wed, Aug 06, 2008 at 09:41:35AM -0400, Perry E. Metzger wrote:
Matt Lawrence [EMAIL PROTECTED] writes:
Could be. Given the long and sordid history of NFS, I prefer to not
use it whenever there are practical alternatives.
NFS is a fine protocol and works very well. However,
Robert Latham [EMAIL PROTECTED] writes:
On Wed, Aug 06, 2008 at 09:41:35AM -0400, Perry E. Metzger wrote:
Matt Lawrence [EMAIL PROTECTED] writes:
Could be. Given the long and sordid history of NFS, I prefer to not
use it whenever there are practical alternatives.
NFS is a fine
My 2 cents about SSDs, and I bet a lot of you would agree: they are not worth
the money yet for the amount of storage space you get. I saw a 1TB HDD at
Fry's Electronics yesterday for 200 dollars. Why go for something where you
get 32GB or 64GB max
On Thu, Aug 7, 2008 at 9:48 AM, Eric
On Thu, 7 Aug 2008, Joe Landman wrote:
Hmmm... I normally recommend avoiding their spec file unless you want to use
only their kernel and do minor tweaks from there.
This said, I really recommend using
make binrpm-pkg
to generate the kernel/modules RPM and SRPM. Then the grub
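A minimal sketch of that flow, assuming a vanilla kernel tree on an RPM-based
node image; the kernel version, job count and RPM output path below are
illustrative placeholders, not taken from the posts:
  cd linux-2.6.26                          # unpacked mainline source, example version
  cp /boot/config-$(uname -r) .config      # seed from the running distro config
  make oldconfig
  make -j8 binrpm-pkg                      # builds the binary kernel+modules RPM via scripts/package
  rpm -ivh /usr/src/redhat/RPMS/x86_64/kernel-2.6.26-1.x86_64.rpm   # output path/name vary by rpm setup
  # then add or update the grub entry for the new kernel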
Matt Lawrence [EMAIL PROTECTED] writes:
Could be. Given the long and sordid history of NFS, I prefer to not
use it whenever there are practical alternatives.
NFS is a fine protocol and works very well. However, traditionally the
Linux implementation of NFS has been of less than perfect
Robert Kubrick wrote:
Or use solid-state data disks? Does anybody here have experience with
SSD disk in HPC?
Not on OUR budget! ;-)
On Aug 5, 2008, at 8:16 PM, Gerry Creager wrote:
Chris Samuel wrote:
- Gerry Creager [EMAIL PROTECTED] wrote:
Chris Samuel wrote:
b) We can use XFS
In message from Gerry Creager [EMAIL PROTECTED] (Wed, 06 Aug
2008 09:59:59 -0500):
Robert Kubrick wrote:
Or use solid-state data disks? Does anybody here have experience
with
SSD disk in HPC?
Not on OUR budget! ;-)
The proposal was for the journal part only ;-)
SSD/flash disks (for
Or use solid-state data disks? Does anybody here have experience with
SSD disk in HPC?
On Aug 5, 2008, at 8:16 PM, Gerry Creager wrote:
Chris Samuel wrote:
- Gerry Creager [EMAIL PROTECTED] wrote:
Chris Samuel wrote:
b) We can use XFS for scratch space rather than being
tied to the
Robert G. Brown [EMAIL PROTECTED] writes:
Once upon a time, running NFS in a LAN that wasn't controlled at the
port level was basically openly inviting anyone that could plug into a
wired port to have open access to all exported files, and I'm not sure
that has fundamentally changed as to
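For illustration only, a hedged sketch of the usual partial mitigations; the
subnet and export path are placeholders, and none of this protects against a
client on an open port presenting arbitrary UIDs under AUTH_SYS:
  # /etc/exports: restrict the export to the cluster subnet and squash root
  /export/home  192.168.1.0/24(rw,sync,root_squash)
  # re-export and verify
  exportfs -ra && exportfs -v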
On Wed, 6 Aug 2008, Perry E. Metzger wrote:
Matt Lawrence [EMAIL PROTECTED] writes:
Could be. Given the long and sordid history of NFS, I prefer to not
use it whenever there are practical alternatives.
NFS is a fine protocol and works very well. However, traditionally the
Linux
- Robert G. Brown [EMAIL PROTECTED] wrote:
And even on Linux machines, NFS has been, well, functional
is a good way to describe it.
It actually seems to work pretty well these days, our
general config is:
1) No automounter
2) Hard mounts (so jobs just hang if they lose contact)
3) NFS
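A minimal sketch of such a static hard mount in /etc/fstab; the server name,
export path and option values are placeholders rather than the posters'
actual settings:
  # no automounter: plain fstab entry, mounted at boot
  fileserver:/export/home  /home  nfs  hard,intr,tcp,rsize=32768,wsize=32768  0 0
  # 'hard' (the default) makes the client retry forever, so jobs block rather than see I/O errors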
Bogdan Costescu wrote:
On Tue, 29 Jul 2008, Chris Samuel wrote:
1) Use a mainline kernel, we've found benefit of that
over stock CentOS kernels.
Care to comment on this statement ?
I do ;) Simply download a kernel from kernel.org and build the kernel
yourself and set:
CONFIG_HZ_100=y
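A hedged sketch of that build with an example version; CONFIG_HZ_100 is a
real Kconfig option, but which HZ value helps on a given cluster is the
posters' experience, not a general rule:
  # after unpacking a kernel.org tree and seeding .config from the running kernel,
  # the relevant lines in .config end up as:
  #   CONFIG_HZ_100=y
  #   CONFIG_HZ=100
  make oldconfig
  make -j8 && make modules_install && make install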
Matt Lawrence wrote:
On Wed, 6 Aug 2008, Robert G. Brown wrote:
On Wed, 6 Aug 2008, Perry E. Metzger wrote:
And even on Linux machines, NFS has been, well, functional is a good
way to describe it. For its primary original purpose, which is serving
home directories or remote mount e.g.
Chris Samuel wrote:
- Robert G. Brown [EMAIL PROTECTED] wrote:
And even on Linux machines, NFS has been, well, functional
is a good way to describe it.
It actually seems to work pretty well these days, our
general config is:
1) No automounter
2) Hard mounts (so jobs just hang if they
In message from Joshua Baker-LePain [EMAIL PROTECTED] (Tue, 5 Aug 2008
14:10:33 -0400 (EDT)):
On Tue, 5 Aug 2008 at 8:34pm, Mikhail Kuzminsky wrote
xfs has a rich set of utilities, but AFAIK no defragmentation tools
(I don't know what the state will be after xfsdump/xfsrestore). But which
modern linux
- Matt Lawrence [EMAIL PROTECTED] wrote:
I have never had any problems with ext3.
I suspect you're not doing a lot of disk I/O, we
found NFS servers using ext3 as a back end would
crumble under the weight of lots of writes as ext3
is single threaded through the journal daemon.
That means
- Gerry Creager [EMAIL PROTECTED] wrote:
Chris Samuel wrote:
b) We can use XFS for scratch space rather than being
tied to the RHEL One True Filesystem (ext3) which
(in our experience) can't handle large amounts of disk
I/O.
Mirrors our experience, too.
I should point out that
Chris Samuel [EMAIL PROTECTED] writes:
- Matt Lawrence [EMAIL PROTECTED] wrote:
I have never had any problems with ext3.
I suspect you're not doing a lot of disk I/O, we
found NFS servers using ext3 as a back end would
crumble under the weight of lots of writes as ext3
is single
Chris Samuel wrote:
- Gerry Creager [EMAIL PROTECTED] wrote:
Chris Samuel wrote:
b) We can use XFS for scratch space rather than being
tied to the RHEL One True Filesystem (ext3) which
(in our experience) can't handle large amounts of disk
I/O.
Mirrors our experience, too.
I should
On Wed, 6 Aug 2008, Chris Samuel wrote:
Those who want to run the stock CentOS kernel might
like to know that the plus repository includes an
RPM for the XFS kernel module for the mainline kernel.
It works well as long as you remember to install the xfs-progs package. I
spent five minutes
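A hedged sketch of that route on CentOS 5; kmod-xfs and xfsprogs were the
usual package names in the centosplus repository at the time, but check your
mirror, as the exact names are not confirmed by the post:
  yum --enablerepo=centosplus install kmod-xfs xfsprogs   # kernel module plus the userspace tools
  modprobe xfs
  mkfs.xfs /dev/sdb1 && mount /dev/sdb1 /scratch          # example device and mount point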
On Wed, 6 Aug 2008, Chris Samuel wrote:
I suspect you're not doing a lot of disk I/O, we
found NFS servers using ext3 as a back end would
crumble under the weight of lots of writes as ext3
is single threaded through the journal daemon.
That means that you end up with all your NFS daemons
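A small diagnostic sketch, not from the posts, of how one might observe the
behaviour being described on an ext3-backed NFS server; the knobs shown are
the standard RHEL/CentOS ones:
  ps -eo state,comm | egrep 'kjournald|nfsd'   # under heavy writes, expect many nfsd threads in state D
  grep RPCNFSDCOUNT /etc/sysconfig/nfs         # configured number of nfsd threads
  echo 64 > /proc/fs/nfsd/threads              # raise the thread count on a running server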
As a note: I was pointed to a recent lockup (double lock acquisition)
in XFS with NFS. I don't think I have seen this one in the wild myself.
Right now I am fighting an NFS over RDMA crash in 2.6.26 which seems
to have been cured in 2.6.26.1. 2.6.26.2 is almost out, so I will test with
that as
Chris Samuel wrote:
- Bogdan Costescu [EMAIL PROTECTED] wrote:
On Tue, 29 Jul 2008, Chris Samuel wrote:
1) Use a mainline kernel, we've found benefit of that
over stock CentOS kernels.
Care to comment on this statement ?
a) We found that we got better performance out of
the mainline
On Mon, 4 Aug 2008, Joe Landman wrote:
I haven't seen or heard anyone claim xfs 'routinely locks up their system'.
I won't comment on your friend's sharpness. I will point out that several
very large data stores/large cluster sites use xfs. By definition, no large
data store can be built
Bill Broadley wrote:
In general I'd say that the new kernels do much better on modern
hardware than the ugly situation of downloading a random RPM, or waiting
for official support. Seems like quite a few companies (ati, 3ware,
areca, intel, amd, and many others I'm sure) are trying hard to
stephen mulcahy wrote:
Bill Broadley wrote:
In general I'd say that the new kernels do much better on modern
hardware than the ugly situation of downloading a random RPM, or
waiting for official support. Seems like quite a few companies (ati,
3ware, areca, intel, amd, and many others I'm
On Tue, 2008-07-29 at 16:11 -0400, Joe Landman wrote:
Ivan Oleynik wrote:
vendors have at least list prices available on their websites.
I saw only one vendor, siliconmechanics.com, that has an online
integrator. Others require direct contact with a
On Tue, 2008-07-29 at 18:28 -0400, Mark Hahn wrote:
afaik, their efficiency is maybe 10% better than more routine hardware.
doesn't really change the big picture. and high-eff PSU's are available
in pretty much any form-factor. choosing lower-power processors (and perhaps
avoiding
- Bogdan Costescu [EMAIL PROTECTED] wrote:
On Tue, 29 Jul 2008, Chris Samuel wrote:
1) Use a mainline kernel, we've found benefit of that
over stock CentOS kernels.
Care to comment on this statement ?
a) We found that we got better performance out of
the mainline kernels than the
On Mon, 2008-07-28 at 23:18 -0400, Ivan Oleynik wrote:
Space is not tight. The computer room is quite spacious but the air
conditioning is rudimentary; there are no windows or water lines to dump the
heat. It looks like a big problem, therefore we are considering putting the
system somewhere else on campus, although
vendors have at least list prices available on their websites.
I saw only one vendor, siliconmechanics.com, that has an online integrator.
Others require direct contact with a salesperson.
the price of the cluster should be dominated by the price of a node,
and many sites offer web-configuration of
I checked the pricing: IPMI is an extra $100/node, or $4000/40 nodes = 2 extra
IMO, your compute nodes will wind up $3k; spending a couple percent
on manageability is just smart. you're the one who asked for advice...
Mark Hahn wrote:
I checked the pricing: IPMI is an extra $100/node, or $4000/40 nodes = 2 extra
IMO, your compute nodes will wind up $3k; spending a couple percent on
manageability is just smart. you're the one who asked for advice...
Buy the IPMI daughter cards. It's money well spent.
--
Gerry
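For a sense of what the IPMI card buys you day to day, a hedged sketch with
ipmitool; the BMC address and credentials are placeholders:
  ipmitool -I lanplus -H 10.1.0.101 -U admin -P secret chassis power status   # remote power state
  ipmitool -I lanplus -H 10.1.0.101 -U admin -P secret chassis power cycle    # hard reset a wedged node
  ipmitool -I lanplus -H 10.1.0.101 -U admin -P secret sol activate           # serial-over-LAN console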
On Tue, 29 Jul 2008, Chris Samuel wrote:
1) Use a mainline kernel, we've found benefit of that
over stock CentOS kernels.
Care to comment on this statement ?
--
Bogdan Costescu
IWR, University of Heidelberg, INF 368, D-69120 Heidelberg, Germany
Phone: +49 6221 54 8869/8240, Fax: +49 6221 54
Bogdan Costescu wrote:
On Tue, 29 Jul 2008, Chris Samuel wrote:
1) Use a mainline kernel, we've found benefit of that
over stock CentOS kernels.
Care to comment on this statement ?
2.6.18 (RHEL-5.2) is currently almost 2 years old. One improvement since then
that I use heavily is ECC
Bill,
Thank you for your comments.
Yes, both the Opteron 2356 and Xeon E5440 are comparable in pricing (~$700),
but there is a 0.5 GHz difference!
Er, so, aren't you more concerned with performance than clockspeeds? I've
seen little if any correlation.
Yes, I care about performance, but our
vendors have at least list prices available on their websites.
I saw only one vendor, siliconmechanics.com, that has an online integrator.
Others require direct contact with a salesperson.
thermal management? servers need cold air in front and unobstructed
exhaust. that means open or mesh
John,
Thanks for your comments.
2. reasonably fast interconnect (IB SDR 10Gb/s would suffice for our
computational needs, running LAMMPS molecular dynamics and VASP DFT codes)
3. 48U rack (preferably with good thermal management)
thermal management? servers need cold air in front
Ivan Oleynik wrote:
vendors have at least list prices available on their websites.
I saw only one vendor, siliconmechanics.com, that has an online integrator.
Others require direct contact with a salesperson.
This isn't usually a problem if you have good specs that
On Mon, 28 Jul 2008, Ivan Oleynik wrote:
Bill,
Thank you for your comments.
Yes, both the Opteron 2356 and Xeon E5440 are comparable in pricing (~$700),
but there is a 0.5 GHz difference!
Er, so, aren't you more concerned with performance than clockspeeds? I've
seen little if any correlation.
Space is not tight. The computer room is quite spacious but the air
conditioning is rudimentary; there are no windows or water lines to dump the
heat. It looks like a big
if space is not a big deal, why are you even thinking about rack-mount?
I'd recommend looking at the Intel Twin motherboard systems for this
On Sun, Jul 27, 2008 at 09:58:38PM -0400, Ivan Oleynik wrote:
Joshua,
Thanks for your response.
I may be wrong but Barcelona at 2.3GHz is being offered at the same
price as
Harpertown at 2.8GHz.
Yes, both opteron 2356 and Xeon E5440
On Mon, Jul 28, 2008 at 09:16:11AM +0100, John Hearns wrote:
On Mon, 2008-07-28 at 01:52 -0400, Mark Hahn wrote:
2. reasonably fast interconnect (IB SDR 10Gb/s would suffice for our
computational needs, running LAMMPS molecular dynamics and VASP DFT codes)
3. 48U rack (preferably with
John,
That's not so good. You're going to have to get the BTU rating of the
existing air conditioning, and consider getting more unit(s) installed -
if you have an external wall the facilities people can surely drill
through it for the.
Give serious consideration to putting expensive and
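Rough arithmetic for sizing that air conditioning; the per-node draw is an
assumption for illustration, not a figure from the thread:
  40 nodes x ~300 W/node = ~12 kW of heat
  12 kW x 3.412 BTU/hr per W = ~41,000 BTU/hr, i.e. roughly 3.4 tons of cooling, before switches and storage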
wouldn't a 5100-based board allow you to avoid the premium of fbdimms?
Maybe I am wrong, but I saw only FB-DIMM options and assumed that we need
to wait for Nehalem for DDR3?
5100+ddr2 is perfectly viable. fbdimms are, after all, just a wrapper/extender
that introduces more latency
I checked the pricing: IPMI is an extra $100/node, or $4000/40 nodes = 2 extra
IMO, your compute nodes will wind up $3k; spending a couple percent on
manageability is just smart. you're the one who asked for advice...
Agreed, it looks like there is a consensus regarding IPMI; I will follow this
vendors have at least list prices available on their websites.
I saw only one vendor, siliconmechanics.com, that has an online integrator.
Others require direct contact with a salesperson.
This isn't usually a problem if you have good specs that they can work
Mark,
if space is not a big deal, why are you even thinking about rack-mount?
40 nodes is too many. Even if the room is spacious, we do not want to mess
around with boxes as we did in the past.
nothing special about Intel twins afaik - AMD twins are comparable.
but it seems even sillier to go with
Yes, I care about performance, but our previous experience running our
MPI codes on the TACC machines Ranger (Barcelona 2.0 GHz) and Lonestar (Xeon
5100 2.66 GHz) is not in favor of AMD. They have recently upgraded Ranger to
2.3 GHz; I am going to run tests and report the results.
I
5100+ddr2 is perfectly viable. fbdimms are, after all, just a wrapper/extender
that introduces more latency with the claim of higher capacity (they contain
ddr2 or ddr3 inside the memory-buffer interface.)
Some info on specific motherboards with the Intel 5400 chipset that support
DDR2 would be very
considering building/purchasing a 40-node cluster. Before contacting
vendors I would like to get some understanding of how much it would cost. The
vendors have at least list prices available on their websites.
1. newest Intel quad-core CPUs (Opteron quad-core CPUs are out of the question
due to
On Mon, 2008-07-28 at 01:52 -0400, Mark Hahn wrote:
2. reasonably fast interconnect (IB SDR 10Gb/s would suffice for our
computational needs, running LAMMPS molecular dynamics and VASP DFT codes)
3. 48U rack (preferably with good thermal management)
thermal management? servers need cold
, if
available.
Thanks,
Ivan
-- Original Message --
Received: Sat, 26 Jul 2008 01:01:58 PM PDT
From: Ivan Oleynik [EMAIL PROTECTED]
To: beowulf@beowulf.org
Subject: [Beowulf] Building new cluster - estimate
I am in the process of upgrading the computational facilities of my lab and
considering
Matt,
Thanks for your advice.
I suggest that you need a minimum of 16GB/node (2GB/core) and possibly
32GB/node (4GB/core).
8 GB/node is enough for the types of applications we are going to run on this
cluster. Additional memory sticks can be added later if necessary.
You will want to set up
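For reference, the arithmetic behind those numbers, assuming dual quad-core
nodes (8 cores/node) as discussed elsewhere in the thread:
  2 GB/core x 8 cores = 16 GB/node
  4 GB/core x 8 cores = 32 GB/node
  8 GB/node / 8 cores = 1 GB/core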
I am in the process of upgrading the computational facilities of my lab and
considering building/purchasing a 40-node cluster. Before contacting
vendors I would like to get some understanding of how much it would cost. The
major considerations/requirements:
1. newest Intel quad-core
On Wed, 23 Jul 2008, Ivan Oleynik wrote:
In principle, we have some experience in building and managing clusters, but
with a 40-node system it would make sense to get a good cluster integrator to
do the job. Can people share their recent experiences and recommend reliable
vendors to deal with?