On Thursday 14 January 2010, John R Pierce wrote:
Karanbir Singh wrote:
My main issue with that kit is that the Linux drivers are very basic,
lack most management capabilities and fail often with obscure issues.
We certainly don't see a high frequency of obscure-cciss-issues. But since no
On Thursday 14 January 2010, Pasi Kärkkäinen wrote:
On Thu, Jan 14, 2010 at 08:14:52PM +0000, Karanbir Singh wrote:
...
Maybe it's just bad luck here :)
I remember a story about two similar HP ProLiants.. same model number,
ordered the same day, same hardware configuration etc..
The other
2010/1/15 Peter Kjellstrom c...@nsc.liu.se:
IMO the most likely reason for one server working and not another one would be
HP shipping (or bounce-your-servers-around-the-globe as I like to call it)...
Sadly that problem does not seem unique to HP.
Ben
On Fri, Jan 15, 2010 at 11:02:51AM +0100, Peter Kjellstrom wrote:
On Thursday 14 January 2010, Pasi Kärkkäinen wrote:
On Thu, Jan 14, 2010 at 08:14:52PM +0000, Karanbir Singh wrote:
...
Maybe it's just bad luck here :)
I remember a story about two similar HP ProLiants.. same model
On 01/12/2010 10:43 AM, John Doe wrote:
On the other hand, here, we have around 30 HP servers.
Some DL360/380/180 G5/G6 with CentOS 4/5 and,
in 2 years, only 3 drives failed... That's it; no other problems...
Drives are hardly the issue - most of them are going to be Seagate anyway.
My main
On Thu, Jan 14, 2010 at 08:07:43PM +0000, Karanbir Singh wrote:
On 01/12/2010 10:43 AM, John Doe wrote:
On the other hand, here, we have around 30 HP servers.
Some DL360/380/180 G5/G6 with CentOS 4/5 and,
in 2 years, only 3 drives failed... That's it; no other problems...
Drives are
On 01/12/2010 03:51 PM, nate wrote:
I've used HP/cciss on a couple hundred systems over the past 7 years,
can only recall 2 issues, both around a drive failing where the controller
didn't force the drive offline, and there was no way to force it
offline using the command line tool, so had to go on
Karanbir Singh wrote:
My main issue with that kit is that the Linux drivers are very basic,
lack most management capabilities and fail often with obscure issues.
And, as Peter pointed out already, they are not really exposing a proper
SCSI interface, but are modeled around a really old ATA
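For what it's worth, the stock cciss driver of that era did expose a little state through procfs, even without the vendor tools. A minimal sketch in Python (assuming a /proc/driver/cciss/cciss<N> node as on CentOS 4/5 kernels; the field layout varies by driver version, so it just dumps the raw text):

#!/usr/bin/env python
# Dump whatever state the cciss driver reports via procfs.
import glob
import sys

def dump_cciss_state():
    nodes = glob.glob("/proc/driver/cciss/cciss*")
    if not nodes:
        sys.exit("no cciss controllers found (or a driver without procfs)")
    for node in sorted(nodes):
        print("== %s ==" % node)
        f = open(node)
        try:
            # Raw driver-reported state: firmware, logical drives, etc.
            print(f.read())
        finally:
            f.close()

if __name__ == "__main__":
    dump_cciss_state()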
On Thu, Jan 14, 2010 at 08:14:52PM +0000, Karanbir Singh wrote:
On 01/12/2010 03:51 PM, nate wrote:
I've used HP/cciss on a couple hundred systems over the past 7 years,
can only recall 2 issues, both around a drive failing where the controller
didn't force the drive offline, and there was no
On Wednesday 13 January 2010, Pasi Kärkkäinen wrote:
On Wed, Jan 13, 2010 at 01:05:39AM +0100, Peter Kjellstrom wrote:
On Tuesday 12 January 2010, Les Mikesell wrote:
On 1/12/2010 10:39 AM, Peter Kjellstrom wrote:
...
...that said, it's not much worse than the competition, storage
On Wed, Jan 13, 2010 at 11:43:35AM +0100, Peter Kjellstrom wrote:
On Wednesday 13 January 2010, Pasi Kärkkäinen wrote:
On Wed, Jan 13, 2010 at 01:05:39AM +0100, Peter Kjellstrom wrote:
On Tuesday 12 January 2010, Les Mikesell wrote:
On 1/12/2010 10:39 AM, Peter Kjellstrom wrote:
Christopher Chan wrote:
Funny you should mention software RAID1... I've seen two instances of that
getting silently out-of-sync and royally screwing things up beyond all
repair.
Maybe this thread has gone on long enough now?
Not yet :)
Please tell more about your hardware and software.
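For the record, 2.6-era md can be made to catch this: sysfs exposes a scrub trigger and a mismatch counter. A minimal sketch, assuming /dev/md0 as a placeholder array and root privileges:

#!/usr/bin/env python
# Scrub an md RAID1 array and report silent mirror mismatches.
import os
import time

MD = "/sys/block/md0/md"   # placeholder array

def read(name):
    f = open(os.path.join(MD, name))
    try:
        return f.read().strip()
    finally:
        f.close()

def scrub_and_report():
    f = open(os.path.join(MD, "sync_action"), "w")
    f.write("check\n")      # kick off a read-and-compare of both mirrors
    f.close()
    while read("sync_action") != "idle":
        time.sleep(10)      # poll until the scrub completes
    mismatches = int(read("mismatch_cnt"))
    # Non-zero means the mirrors disagree somewhere -- exactly the
    # silently-out-of-sync condition described above.
    print("mismatch_cnt = %d" % mismatches)

if __name__ == "__main__":
    scrub_and_report()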
Peter Kjellstrom wrote:
Please tell more about your hardware and software. What distro? What
kernel? What disk controller? What disks?
Both of my data-points are several years old so most of the details are lost
in the fog-of-lost-memories...
Both were on desktop class hardware with
On the machine where I had the problem I had to run memtest86 for more than
a day to finally catch it. Then after replacing the RAM and fsck'ing the
volume, I still had mysterious problems about once a month until I realized
that the disks are accessed alternately and the fsck pass
On Tuesday 12 January 2010, Christopher Chan wrote:
Keith Keller wrote:
On Tue, Jan 12, 2010 at 08:07:17AM +0800, Christopher Chan wrote:
I see that the Areca driver has finally made it into the mainline Linux
kernel. But I wonder how things have improved from this particular case.
On 12/01/10 00:02, Christopher Chan wrote:
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this spanking new HP DL370 G6 and running
CentOS 5.4 on it...
I've got a couple of DL380's at one
Karanbir Singh wrote:
On 12/01/10 00:02, Christopher Chan wrote:
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this spanking new HP DL370 G6 and running
CentOS 5.4 on it...
On Tuesday 12 January 2010, John R Pierce wrote:
Karanbir Singh wrote:
On 12/01/10 00:02, Christopher Chan wrote:
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this spanking new HP DL370
From: Karanbir Singh mail-li...@karan.org
On 12/01/10 00:02, Christopher Chan wrote:
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this spanking new HP DL370 G6 and running
CentOS 5.4 on
Karanbir Singh wrote:
On 12/01/10 00:02, Christopher Chan wrote:
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this spanking new HP DL370 G6 and running
CentOS 5.4 on it...
I've got a
Hi
Apologies, I have not been following the thread here, so am just
wondering if you have an MSA, EVA, XP or LeftHand SAN, or if this is just
storage that sits on the server with a Samba share? Also, what is the link
in between, FC or Ethernet?
Regards
Per Qvindesland
At Tuesday, 12-01-2010 on 11:57 Chan Chung
2010/1/12 Chan Chung Hang Christopher christopher.c...@bradbury.edu.hk:
Eeek! That thing will be hosting the school's VLE. Looks like I better
memorize the after hours password for HP support.
I have had lots[1] of problems lately with DIMMs becoming defective in
six-month-old G5 HPs. Could
Which is why I specifically said 'performance wise' as respects 3ware. I
don't remember anything bad about 3ware stability wise or monitoring wise.
Is that supposed to be a joke? 3ware has certainly had their fair share of
stability problems (drive time-outs, bbu-problems, inconsistent
Benjamin Donnachie wrote:
2010/1/12 Chan Chung Hang Christopher christopher.c...@bradbury.edu.hk:
Eeek! That thing will be hosting the school's VLE. Looks like I better
memorize the after hours password for HP support.
I have had lots[1] of problems lately with DIMMs becoming defective in
On 12.01.2010 09:01, Peter Kjellstrom wrote:
Is that supposed to be a joke? 3ware has certainly had their fair share of
stability problems (drive time-outs, bbu-problems, inconsistent
behaviour, ...) and monitoring wise they suck (imho). Do you like tw_cli?
Enjoying the fact that show
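To be fair, tw_cli can at least be scripted around. A minimal monitoring sketch, assuming tw_cli is on the PATH and that unit rows in `tw_cli /c0 show` carry the status in the third column (the column layout has varied between releases, so treat that as an assumption):

#!/usr/bin/env python
# Flag non-OK 3ware units by scraping tw_cli output.
import subprocess
import sys

def check_units(controller="/c0"):
    proc = subprocess.Popen(["tw_cli", controller, "show"],
                            stdout=subprocess.PIPE)
    out = proc.communicate()[0].decode("ascii", "replace")
    bad = []
    for line in out.splitlines():
        fields = line.split()
        # Unit rows look like: "u0  RAID-5  OK  ..." (assumed layout);
        # port rows start with "p" and the header with "Unit".
        if len(fields) >= 3 and fields[0].startswith("u"):
            unit, status = fields[0], fields[2]
            if status != "OK":
                bad.append((unit, status))
    return bad

if __name__ == "__main__":
    problems = check_units()
    if problems:
        print("DEGRADED: %r" % (problems,))
        sys.exit(1)
    print("all units OK")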
Per Qvindesland wrote:
Hi
Apologies, I have not been following the thread here, so am just
wondering if you have an MSA, EVA, XP or LeftHand SAN, or if this is just
storage that sits on the server with a Samba share? Also, what is the link
in between, FC or Ethernet?
If you are asking me, then there is no
2010/1/12 Chan Chung Hang Christopher christopher.c...@bradbury.edu.hk:
Boy, a Tyan or Supermicro solution is looking better by the minute for
the new server I plan to get the school for its library server and other
uses. If only Supermicro had a local distributor...I have not had a good
look
On Tue, Jan 12, 2010 at 09:41:19AM +, Karanbir Singh wrote:
On 12/01/10 00:02, Christopher Chan wrote:
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this spanking new HP DL370 G6 and
Pasi Kärkkäinen wrote:
And I've been running DL380 and DL360 G3/G4 servers for years without
problems.. with CentOS and Xen.. using cciss local storage. :)
I've used HP/cciss on a couple hundred systems over the past 7 years,
can only recall 2 issues, both around a drive failing where the controller
On 12/01/10 12:22, Rainer Duffner wrote:
Which is probably the reason why the ZFS-folks are trying to move as
much intelligence out of the HBA into the OS.
not something that is really working - given that I've seen stock CentOS
with a few HBAs easily outperform raid-z - with better
On Tuesday 12 January 2010, Chan Chung Hang Christopher wrote:
Which is why I specifically said 'performance wise' as respects 3ware. I
don't remember anything bad about 3ware stability wise or monitoring
wise.
Is that supposed to be a joke? 3ware has certainly had their fair share
of
On 1/12/2010 10:39 AM, Peter Kjellstrom wrote:
Which is why I specifically said 'performance wise' as respects 3ware. I
don't remember anything bad about 3ware stability wise or monitoring
wise.
Is that supposed to be a joke? 3ware has certainly had their fair share
of stability problems
On Mon, 2010-01-11 at 16:16 -0500, Tom Georgoulias wrote:
CentOS 5.4 x86_64 works fine on the x4540s, I've installed it myself and
didn't have to do anything special to see and use all of the disks.
In my testing, the IO was faster and the storage easier to administer
with when using
On 01/12/2010 12:20 PM, JohnS wrote:
On Mon, 2010-01-11 at 16:16 -0500, Tom Georgoulias wrote:
CentOS 5.4 x86_64 works fine on the x4540s, I've installed it myself and
didn't have to do anything special to see and use all of the disks.
In my testing, the IO was faster and the storage easier
On Tue, Jan 12, 2010 at 09:01:42AM +0100, Peter Kjellstrom wrote:
Is that supposed to be a joke? 3ware has certainly had their fair share of
stability problems (drive time-outs, bbu-problems, inconsistent
behaviour, ...) and monitoring wise they suck (imho). Do you like tw_cli?
I don't
...that said, it's not much worse than the competition, storage simply
sucks ;-(
So you are saying people dole out huge amounts of money for rubbish?
That the software raid people were and have always been right?
Depends what the software raid people were saying. :)
Hardware Software RAID
On Tuesday 12 January 2010, Les Mikesell wrote:
On 1/12/2010 10:39 AM, Peter Kjellstrom wrote:
...
...that said, it's not much worse than the competition, storage simply
sucks ;-(
So you are saying people dole out huge amounts of money for rubbish?
That the software raid people were and
On 1/12/2010 6:05 PM, Peter Kjellstrom wrote:
...
...that said, it's not much worse than the competition, storage simply
sucks ;-(
So you are saying people dole out huge amounts of money for rubbish?
That the software raid people were and have always been right?
Nope, storage sucks, that
On Wed, Jan 13, 2010 at 01:05:39AM +0100, Peter Kjellstrom wrote:
On Tuesday 12 January 2010, Les Mikesell wrote:
On 1/12/2010 10:39 AM, Peter Kjellstrom wrote:
...
...that said, it's not much worse than the competition, storage simply
sucks ;-(
So you are saying people dole out
Pasi Kärkkäinen wrote:
On Wed, Jan 13, 2010 at 01:05:39AM +0100, Peter Kjellstrom wrote:
On Tuesday 12 January 2010, Les Mikesell wrote:
On 1/12/2010 10:39 AM, Peter Kjellstrom wrote:
...
...that said, it's not much worse than the competition, storage simply
sucks ;-(
So you are saying
On 01/08/2010 05:28 PM, R-Elists wrote:
what is wrong or what problems are you referring to with cciss please ?
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
On Fri, Jan 08, 2010 at 12:33:39PM +0100, Rainer Duffner wrote:
Karanbir Singh wrote:
On 01/08/2010 01:58 AM, Christopher Chan wrote:
the thumpers make for decent backup or vtl type roles, not so much for
online high density storage.
I wonder how much that would change
On Monday 11 January 2010, Karanbir Singh wrote:
On 01/08/2010 05:28 PM, R-Elists wrote:
what is wrong or what problems are you referring to with cciss please ?
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
I would
On Mon, Jan 11, 2010 at 03:00:41PM +0200, Pasi Kärkkäinen wrote:
On Fri, Jan 08, 2010 at 12:33:39PM +0100, Rainer Duffner wrote:
Karanbir Singh wrote:
On 01/08/2010 01:58 AM, Christopher Chan wrote:
the thumpers make for decent backup or vtl type roles, not so much for
online
On 11.01.2010 15:26, Pasi Kärkkäinen wrote:
It seems X4500 (not available anymore) had Marvell SATA controllers, that
are not supported with RHEL5.
X4540 uses LSI SATA controllers, that are supported.
Indeed:
http://www.sun.com/servers/x64/x4540/os.jsp
5.3+ is needed.
Of course,
Pasi Kärkkäinen wrote:
It seems X4500 (not available anymore) had Marvell SATA controllers, that
are not supported with RHEL5.
And those Marvell controllers caused major grief for Sun, especially
when Solaris added support for NCQ somewhere in there. Under heavy IO
workloads, the
On Wed, Jan 6, 2010 at 10:35 PM, Boris Epstein borepst...@gmail.com wrote:
some storage servers to run under Linux - most likely CentOS. The storage
volume would be in the range specified: 8-15 TB. Any recommendations as far
as hardware?
I'm kind of partial to Areca raid controllers, you can
On 1/11/2010 11:38 AM, John R Pierce wrote:
Pasi Kärkkäinen wrote:
It seems X4500 (not available anymore) had Marvell SATA controllers, that
are not supported with RHEL5.
And those Marvell controllers caused major grief for Sun, especially
when Solaris added support for NCQ somewhere in
On 1/11/2010 1:33 PM, Les Mikesell wrote:
On 1/11/2010 11:38 AM, John R Pierce wrote:
Pasi Kärkkäinen wrote:
It seems X4500 (not available anymore) had Marvell SATA controllers, that
are not supported with RHEL5.
And those marvell controllers caused major grief for Sun,
On 01/11/2010 09:42 AM, Rainer Duffner wrote:
On 11.01.2010 15:26, Pasi Kärkkäinen wrote:
X4540 uses LSI SATA controllers, that are supported.
Indeed:
http://www.sun.com/servers/x64/x4540/os.jsp
5.3+ is needed.
Of course, for a true Solaris-admin, this would be a big waste.
;-)
Karanbir Singh wrote:
On 01/08/2010 05:28 PM, R-Elists wrote:
what is wrong or what problems are you referring to with cciss please ?
problems mostly centered around management and performance issues. the
world is littered with stories of cciss fail
Really? Man, I have been given this
Bent Terp wrote:
On Wed, Jan 6, 2010 at 10:35 PM, Boris Epstein borepst...@gmail.com wrote:
some storage servers to run under Linux - most likely CentOS. The storage
volume would be in the range specified: 8-15 TB. Any recommendations as far
as hardware?
I'm kind of partial to Areca raid
On Tue, Jan 12, 2010 at 08:07:17AM +0800, Christopher Chan wrote:
I see that the Areca driver has finally made it into the mainline Linux
kernel. But I wonder how things have improved from this particular case.
http://notemagnet.blogspot.com/2008/08/linux-disk-failures-areca-is-not-so.html
Keith Keller wrote:
On Tue, Jan 12, 2010 at 08:07:17AM +0800, Christopher Chan wrote:
I see that the Areca driver has finally made it into the mainline Linux
kernel. But I wonder how things have improved from this particular case.
On Sat, 2010-01-09 at 07:14 -0800, nate wrote:
JohnS wrote:
Interesting link for info there. I found [1] and at the bottom of the
page there is like tidbits of info in PDFs of the different models. Any
idea where I could get more info than that, like data sheets and case
studies.
JohnS wrote:
Interesting link for info there. I found [1] and at the bottom of the
page there is like tidbits of info in PDFs of the different models. Any
idea where I could get more info than that, like data sheets and case
studies.
Not online at least, note the Confidential stuff at the
On Fri, Jan 08, 2010 at 12:49:30PM +0800, Christopher Chan wrote:
Warren Young wrote:
On 1/7/2010 6:01 PM, Christopher Chan wrote:
...
zfs on *solaris *bsd is getting off topic,
if you need to fight, please take that somewhere else.
Thanks,
Tru
Christopher Chan wrote:
cause when I did - the x45xx's/zfs were between 18 and 20% slower on disk
i/o alone compared with a supermicro box with dual areca 1220/xfs.
the thumpers make for decent backup or vtl type roles, not so much for
online high density storage.
Speaking of
On 01/08/2010 01:58 AM, Christopher Chan wrote:
the thumpers make for decent backup or vtl type roles, not so much for
online high density storage.
I wonder how much that would change with a bbu NVRAM card for an
external journal for ext4 and the disks on md. Unless one cannot add a
bbu NVRAM
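A minimal sketch of that layout, with hypothetical device names (/dev/nvram0 standing in for the BBU NVRAM card) and the standard e2fsprogs external-journal flags; note that the journal device and the filesystem must agree on block size:

#!/usr/bin/env python
# md across the data disks, ext4 journal on battery-backed NVRAM.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    if subprocess.call(cmd) != 0:
        raise SystemExit("failed: %s" % cmd[0])

def build(nvram="/dev/nvram0", disks=None, md="/dev/md0"):
    # Hypothetical devices; substitute your real NVRAM card and disks.
    disks = disks or ["/dev/sd%s" % c for c in "bcde"]
    # Put the data disks under md (RAID10 here, purely illustrative).
    run(["mdadm", "--create", md, "--level=10",
         "--raid-devices=%d" % len(disks)] + disks)
    # Format the NVRAM device as a dedicated ext4 external journal.
    run(["mke2fs", "-O", "journal_dev", nvram])
    # Build ext4 on the array with its journal on the NVRAM card.
    run(["mkfs.ext4", "-J", "device=%s" % nvram, md])

if __name__ == "__main__":
    build()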
Karanbir Singh wrote:
On 01/08/2010 01:58 AM, Christopher Chan wrote:
the thumpers make for decent backup or vtl type roles, not so much for
online high density storage.
I wonder how much that would change with a bbu NVRAM card for an
external journal for ext4 and the disks on
Quoting Rainer Duffner rai...@ultra-secure.de:
Maximum 3.5" hot-swap drive density: 36x (24 front + 12 rear) HDD bays
http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400.cfm
Did anybody else think "WTF?" when you saw that picture?
I have seen crazy stuff, but that one is pretty
Warren Young wrote:
On 1/6/2010 2:35 PM, Boris Epstein wrote:
we are trying to set
up some storage servers to run under Linux
snip
Serious system administrators are not Linux fans I don't think. I tend
snip
Dunno why you say that. Lessee, both Google and maybe Amazon run Linux;
meanwhile,
I suggest you get a second-hand Sun X4500 if you're feeling cheap,
http://www.sun.com/servers/x64/x4500/specs.xml. 48x 500G will do you
nicely with some MD RAID.
Or you can go for the newer X4540 if you're feeling flush.
Regards, Iolaire
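Rough numbers for what 48 x 500G yields under a few illustrative md layouts (the groupings below are assumptions for the arithmetic, not anything from Sun's spec sheet):

#!/usr/bin/env python
# Back-of-envelope usable capacity for 48 x 500 GB under md.
DISKS, SIZE_GB = 48, 500

def usable_tb(groups, disks_per_group, parity_per_group):
    assert groups * disks_per_group <= DISKS
    data_disks = groups * (disks_per_group - parity_per_group)
    return data_disks * SIZE_GB / 1000.0   # decimal TB, as drive vendors count

layouts = {
    "4 x RAID6 of 12 (2 parity each)": usable_tb(4, 12, 2),   # 20.0 TB
    "6 x RAID5 of 8 (1 parity each)":  usable_tb(6, 8, 1),    # 21.0 TB
    "24 x RAID1 pairs":                usable_tb(24, 2, 1),   # 12.0 TB
}
for name in sorted(layouts):
    print("%-35s %5.1f TB usable" % (name, layouts[name]))
# All three clear the 8 TB floor; the parity layouts clear 15 TB too.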
On 06/01/2010 22:35, Boris Epstein wrote:
Hello
m.r...@5-cent.us wrote:
Dunno why you say that. Lessee, both Google and maybe Amazon run Linux;
meanwhile, AT&T, where I worked for a couple of years, Trustwave, a root
CA that I worked for earlier this year, and here at the US NIH, we run
Linux.
Since this is a storage thread.. back in 2004
On 1/8/2010 10:09 AM, nate wrote:
m.r...@5-cent.us wrote:
Dunno why you say that. Lessee, both Google and maybe Amazon run Linux;
meanwhile, AT&T, where I worked for a couple of years, Trustwave, a root
CA that I worked for earlier this year, and here at the US NIH, we run
Linux.
Since this
Karanbir Singh wrote:
snip
Good question, they are after all (the Sun 45xx's) just
Opteron boxes with a mostly standard build. Finding a
CentOS-compatible one (drivers pre-included, and not crap like cciss)
would not be too hard.
Who wants to offer up a machine to test on :)
On Fri, Jan 08, 2010 at 11:06:10AM -0600, Les Mikesell wrote:
On 1/8/2010 10:09 AM, nate wrote:
m.r...@5-cent.us wrote:
Dunno why you say that. Lessee, both Google and maybe Amazon run Linux;
meanwhile, AT&T, where I worked for a couple of years, Trustwave, a root
CA that I worked for
Ray Van Dolson wrote:
Out of curiosity, any idea what a full cabinet of one of these runs?
Over $1M pretty easily, probably close to or more than $2M.
nate
On 1/8/2010 11:41 AM, nate wrote:
Ray Van Dolson wrote:
Out of curiosity, any idea what a full cabinet of one of these runs?
Over $1M pretty easily, probably close to or more than $2M.
I think you are confusing it with something else. Somewhere I saw that
these list around $400k for 80TB - but
On Thu, Jan 7, 2010 at 11:25 AM, Boris Epstein borepst...@gmail.com wrote:
On Thu, Jan 7, 2010 at 11:09 AM, Matty matt...@gmail.com wrote:
On Thu, Jan 7, 2010 at 8:08 AM, Chan Chung Hang Christopher
christopher.c...@bradbury.edu.hk wrote:
John Doe wrote:
From: Boris Epstein
Les Mikesell wrote:
On 1/8/2010 11:41 AM, nate wrote:
Ray Van Dolson wrote:
Out of curiosity, any idea what a full cabinet of one of these runs?
Over $1M pretty easily, probably close to or more than $2M.
I think you are confusing it with something else. Somewhere I saw that
these list around
On Fri, 2010-01-08 at 14:36 -0800, nate wrote:
Les Mikesell wrote:
On 1/8/2010 11:41 AM, nate wrote:
Ray Van Dolson wrote:
Out of curiosity, any idea what a full cabinet of one of these runs?
Over $1M pretty easily, probably close to or more than $2M.
I think you are confusing it with
JohnS wrote:
Just asking, are the fiber ports BiDirectional or Directional, or can they
support a Bond that is BiDirectional of 4GB/s, or can they be trunked
into 16GB/s? Bidirectional. I need about 24 GB/s bandwidth sustained,
yes per second. Also, what type of sparse file I/O do you get. I
JohnS wrote:
Just asking, are the fiber ports BiDirectional or Directional, or can they
support a Bond that is BiDirectional of 4GB/s, or can they be trunked
into 16GB/s? Bidirectional. I need about 24 GB/s bandwidth sustained,
yes per second. Also, what type of sparse file I/O do you get. I see
On Fri, 2010-01-08 at 15:23 -0800, nate wrote:
JohnS wrote:
Just asking, are the fiber ports BiDirectional or Directional, or can they
support a Bond that is BiDirectional of 4GB/s, or can they be trunked
into 16GB/s? Bidirectional. I need about 24 GB/s bandwidth sustained,
yes per
JohnS wrote:
Currently using the older model of this one [1] @ 4GB/s on the fiber.
You sound pretty confused, there's no way in hell a Fujitsu DX440
is going to sustain 4 gigabytes/second, maybe 4 Gigabits/second
(~500MB/s)
That's with BiDirectional, both links at 4 GB/s. We're looking for
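The unit check behind nate's correction, for the record: 4 Gbit/s is on the order of 500 MB/s before encoding overhead, so a sustained 24 GB/s is a very different beast:

#!/usr/bin/env python
# Gigabits vs gigabytes, the crux of the exchange above.
def gbit_to_MBps(gbit):
    # 8 bits per byte; FC's 8b/10b encoding eats a further ~20% in practice.
    return gbit * 1000 / 8.0

link = gbit_to_MBps(4)                      # 500.0 -- nate's "~500MB/s"
print("4 Gbit/s ~= %.0f MB/s" % link)
target_MBps = 24 * 1000                     # the stated 24 GB/s, in MB/s
print("4 Gbit links for 24 GB/s: %.0f" % (target_MBps / link))   # 48 links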
On Fri, 2010-01-08 at 15:43 -0800, John R Pierce wrote:
JohnS wrote:
Just asking, are the fiber ports BiDirectional or Directional, or can they
support a Bond that is BiDirectional of 4GB/s, or can they be trunked
into 16GB/s? Bidirectional. I need about 24 GB/s bandwidth sustained,
yes
On Fri, 2010-01-08 at 16:08 -0800, nate wrote:
JohnS wrote:
Currently using the older model of this one [1] @ 4GB/s on the fiber.
You sound pretty confused, there's no way in hell a Fujitsu DX440
is going to sustain 4 gigabytes/second, maybe 4 Gigabits/second
(~500MB/s)
G Bits per
JohnS wrote:
On Fri, 2010-01-08 at 16:08 -0800, nate wrote:
JohnS wrote:
Currently using the older model of this one [1] @ 4GB/s on the fiber.
You sound pretty confused, there's no way in hell a Fujitsu DX440
is going to sustain 4 gigabytes/second, maybe 4 Gigabits/second
(~500MB/s)
G
Your ROI of 5 minutes doesn't make any sense to me.
OK, job submission and completion is what I am getting at.
ROI generally refers to the time an expense takes to pay off. Like,
if buying $X worth of capital equipment will generate savings or
additional income of $x over Y
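With deliberately made-up numbers, the payback definition being used here looks like:

#!/usr/bin/env python
# Payback period: up-front cost divided by the savings it generates.
cost = 400000           # hypothetical capital outlay, $
monthly_saving = 20000  # hypothetical savings generated per month, $
print("payback: %.0f months" % (float(cost) / monthly_saving))  # 20 months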
On Fri, 2010-01-08 at 17:53 -0800, John R Pierce wrote:
Your ROI of 5 minutes doesn't make any sense to me.
OK, job submission and completion is what I am getting at.
ROI generally refers to the time an expense takes to pay off. Like,
if buying $X worth of capital
On Fri, 2010-01-08 at 17:09 -0800, nate wrote:
Using 15K RPM drives I can tell you that a 3PAR T400 (very well
versed in their products; fast, easy to use) can do 25.6 Gbits/second
(3.2 gigabytes/second) sustained throughput. 640 drives, 48GB data
cache.
If you were starting out at such a
From: Boris Epstein borepst...@gmail.com
This is not directly related to CentOS but still: we are trying to set up some
storage servers to run under Linux - most likely CentOS. The storage volume
would be in the range specified: 8-15 TB. Any recommendations as far as
hardware?
Depends on your
On Thu, Jan 7, 2010 at 11:34 AM, John Doe jd...@yahoo.com wrote:
From: Boris Epstein borepst...@gmail.com
This is not directly related to CentOS but still: we are trying to set up
some storage servers to run under Linux - most likely CentOS. The storage
volume would be in the range specified:
John Doe wrote:
From: Boris Epstein borepst...@gmail.com
This is not directly related to CentOS but still: we are trying to set up
some storage servers to run under Linux - most likely CentOS. The storage
volume would be in the range specified: 8-15 TB. Any recommendations as far
as
Quoting Chan Chung Hang Christopher christopher.c...@bradbury.edu.hk:
John Doe wrote:
From: Boris Epstein borepst...@gmail.com
This is not directly related to CentOS but still: we are trying to
set up some storage servers to run under Linux - most likely
CentOS. The storage volume
On 01/07/2010 03:28 AM, earl ramirez wrote:
You can have a look at this, I don't know what your budget is like
http://www.drobo.com/Products/drobopro/index.php
I have a Drobo and it worked off the bat with a few Linux distros
I've had 2 Drobos at work - and I can assure you that it is
2010/1/7 Karanbir Singh mail-li...@karan.org:
I've had 2 Drobos at work - and I can assure you that it is essentially
a wasted device.
I agree with this. We had a Drobo on loan for a while, I found it
sluggish and detested the way it over-reports its free space.
Couldn't wait to hand it
On Thu, Jan 7, 2010 at 8:08 AM, Chan Chung Hang Christopher
christopher.c...@bradbury.edu.hk wrote:
John Doe wrote:
From: Boris Epstein borepst...@gmail.com
This is not directly related to CentOS but still: we are trying to set up
some storage servers to run under Linux - most likely CentOS.
On Thu, Jan 7, 2010 at 11:09 AM, Matty matt...@gmail.com wrote:
On Thu, Jan 7, 2010 at 8:08 AM, Chan Chung Hang Christopher
christopher.c...@bradbury.edu.hk wrote:
John Doe wrote:
From: Boris Epstein borepst...@gmail.com
This is not directly related to CentOS but still: we are trying to
Yes, the Sun Fire Xs are costly...
Here, 35k euros for 48 x 1TB for example, or 22k for 48 x 500GB...
Our 12TB HP is around 6k. So 12k for almost the same as the 22k
But if you use 1TB disks on the Sun, you end up using half the Us (and save
some power) in your bay; which might be nice if you are
On 1/6/2010 2:35 PM, Boris Epstein wrote:
we are trying to set
up some storage servers to run under Linux
You should also consider FreeBSD 8.0, which has the newest version of
ZFS up and running stably on it. I use Linux for most server tasks, but
for big storage, Linux just doesn't have
John Doe wrote:
Yes, the Sun Fire Xs are costly...
Here, 35k euros for 48 x 1TB for example, or 22k for 48 x 500GB...
Our 12TB HP is around 6k. So 12k for almost the same as the 22k
But if you use 1TB disks on the Sun, you end up using half the Us (and save
some power) in your bay; which
Christopher Chan wrote:
Yes, the Sun Fire X4540 uses software raid but not necessarily zfs...if
you install another operating system that is not Solaris or OpenSolaris,
it won't be zfs.
the thing to note on the Thumper (X4540), each of those 48 SATA drives
has its own channel to the
Warren Young wrote:
On 1/6/2010 2:35 PM, Boris Epstein wrote:
we are trying to set
up some storage servers to run under Linux
You should also consider FreeBSD 8.0, which has the newest version of
ZFS up and running stably on it. I use Linux for most server tasks, but
for big storage,
On 01/08/2010 01:01 AM, Christopher Chan wrote:
That puts you right on the edge of workability with 32-bit hardware.
ext3's limit on 32-bit is 8 TB, and you can push it to 16 TB by
switching to XFS or JFS. Best to use 64-bit hardware if you can.
Probably XFS if you want data guarantees on
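Roughly where those figures come from: 2^32 block numbers times 4 KiB blocks gives a 16 TiB addressing ceiling, with ext3's 8 TB being the more conservative qualified limit of the era:

#!/usr/bin/env python
# Filesystem size ceiling = block-number width x block size.
BLOCK = 4096                           # bytes; the usual 4 KiB block size
max_fs = (2 ** 32) * BLOCK             # 32-bit block numbers * block size
print("2^32 x 4 KiB  = %d TiB" % (max_fs // 2 ** 40))       # 16 TiB
print("ext3-era cap  = %d TiB" % (max_fs // 2 ** 40 // 2))  # 8 TiB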
On 01/08/2010 12:53 AM, John R Pierce wrote:
Christopher Chan wrote:
Yes, the Sun Fire X4540 uses software raid but not necessarily zfs...if
you install another operating system that is not Solaris or OpenSolaris,
it won't be zfs.
the thing to note on the Thumper (X4540), each of those 48
Karanbir Singh wrote:
On 01/08/2010 12:53 AM, John R Pierce wrote:
Christopher Chan wrote:
Yes, the Sun Fire X4540 uses software raid but not necessarily zfs...if
you install another operating system that is not Solaris or OpenSolaris,
it won't be zfs.
the thing to note on the Thumper
cause when I did - the x45xx's/zfs were between 18 and 20% slower on disk
i/o alone compared with a supermicro box with dual areca 1220/xfs.
the thumpers make for decent backup or vtl type roles, not so much for
online high density storage.
Speaking of thumpers and Supermicro, it looks
On 1/7/2010 6:01 PM, Christopher Chan wrote:
I'm not recommending OpenSolaris on purpose.
Serious system administrators are not Linux fans I don't think.
I think I must have been sent back in time, say to 1997 or so, because I
can't possibly be reading this in 2010. I base this on the fact