On Mon, Jul 16, 2007 at 12:41:15PM +1000, David Chinner wrote:
On Fri, Jul 13, 2007 at 03:36:46PM -0400, Justin Piszcz wrote:
On Fri, 13 Jul 2007, Jon Collette wrote:
Wouldn't Raid 6 be slower than Raid 5 because of the extra fault tolerance?
On Monday 16 July 2007 14:22:25 David Chinner wrote:
You can see from the ext3 graph that it comes to a screeching halt
every 5s (probably when pdflush runs) and at all other times the
seek rate is 10,000 seeks/s. That's pretty bad for a brand new,
empty filesystem and the only way it is
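For anyone wanting to reproduce this kind of measurement, a minimal sketch using blktrace follows; the device name, run length, and the use of seekwatcher for the graph are assumptions on my part, not details taken from the original post.

  # Capture 60s of block-layer events while the write workload runs
  # (device name is illustrative -- point it at your array):
  blktrace -d /dev/md0 -w 60 -o ext3test

  # Summarise the trace; per-second event counts show the request
  # rate, and the sector offsets show how seeky the pattern is:
  blkparse -i ext3test | less

  # Optionally render a seek graph (requires seekwatcher):
  seekwatcher -t ext3test -o ext3-seeks.png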
On Mon, 16 Jul 2007 at 12:41pm, David Chinner wrote
If you've got any sort of serious disk array, ext3 is not the filesystem
to use
I do so wish that RedHat shared this view...
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
David Chinner wrote:
On Mon, Jul 16, 2007 at 12:41:15PM +1000, David Chinner wrote:
On Fri, Jul 13, 2007 at 03:36:46PM -0400, Justin Piszcz wrote:
...
If you've got any sort of serious disk array, ext3 is not the filesystem
to use
To show what the difference is, I used blktrace and
On Mon, Jul 16, 2007 at 08:40:00PM +0300, Al Boldi wrote:
XFS surely rocks, but it's missing one critical component: data=ordered.
And that's one component that's just too critical to overlook for an
enterprise environment that is built on data integrity over performance.
So that's the
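For readers following the data=ordered point: ext3's journalling mode is chosen per mount, which is the behaviour being contrasted with XFS here. A minimal sketch, with illustrative device and mount point:

  # data=ordered (the ext3 default) flushes file data before the
  # metadata transaction that references it is committed:
  mount -t ext3 -o data=ordered /dev/sdb1 /mnt/test

  # The other modes trade integrity against speed:
  #   data=journal   -- journal data and metadata (safest, slowest)
  #   data=writeback -- journal metadata only; files touched right
  #                     before a crash may contain stale blocks
  mount -t ext3 -o data=writeback /dev/sdb1 /mnt/test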
On Mon, Jul 16, 2007 at 11:43:24AM -0400, Joshua Baker-LePain wrote:
On Mon, 16 Jul 2007 at 12:41pm, David Chinner wrote
If you've got any sort of serious disk array, ext3 is not the filesystem
to use
I do so wish that RedHat shared this view...
So they support XFS in Fedora, but not
Matthew Wilcox wrote:
On Mon, Jul 16, 2007 at 08:40:00PM +0300, Al Boldi wrote:
XFS surely rocks, but it's missing one critical component: data=ordered.
And that's one component that's just too critical to overlook for an
enterprise environment that is built on data integrity over
On Mon, 2007-07-16 at 11:48 -0600, Matthew Wilcox wrote:
Wow, thanks for bringing an advocacy thread onto linux-fsdevel. Just what
we wanted. Do you have any insight into how to get data=ordered semantics
into the VFS layer? Because to me, that sounds like pure nonsense.
First off, I have no idea
On Mon, Jul 16, 2007 at 10:50:34AM -0500, Eric Sandeen wrote:
David Chinner wrote:
On Mon, Jul 16, 2007 at 12:41:15PM +1000, David Chinner wrote:
On Fri, Jul 13, 2007 at 03:36:46PM -0400, Justin Piszcz wrote:
...
If you've got any sort of serious disk array, ext3 is not the filesystem
to
On Fri, Jul 13, 2007 at 03:36:46PM -0400, Justin Piszcz wrote:
On Fri, 13 Jul 2007, Jon Collette wrote:
Wouldn't Raid 6 be slower than Raid 5 because of the extra fault tolerance?
http://www.enterprisenetworksandservers.com/monthly/art.php?1754 - 20% drop according to this article
His
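The slowdown has a mechanical explanation: a small read-modify-write on RAID 5 costs four I/Os (read data and parity, write data and parity), while RAID 6 must also read and rewrite its second parity block, for six. A back-of-the-envelope sketch, using an assumed per-disk IOPS figure rather than anything measured in this thread:

  # Assumed figure: ~150 IOPS per 7200RPM disk.
  disks=24; iops_per_disk=150
  total=$((disks * iops_per_disk))
  echo "RAID5 small-write IOPS ~ $((total / 4))"   # 4 I/Os per update
  echo "RAID6 small-write IOPS ~ $((total / 6))"   # 6 I/Os per update

Whether that shows up as the article's 20% depends on the workload: full-stripe sequential writes avoid the read-modify-write cycle entirely, so they suffer far less than small random writes.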
On Fri, 13 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
To give you an example, I get 464MB/s write and 627MB/s with a 10-disk Raptor software RAID 5.
Is that with the 9650?
Andrew
Sorry, no; it's with software RAID 5 and the 965 chipset + three SATA PCI-e
On Fri, 13 Jul 2007, Justin Piszcz wrote:
You are using HW RAID then? Those numbers seem pretty awful for that
setup. I'm including linux-raid@; even though it appears you're running
HW RAID, this is rather peculiar.
No, it has been discussed numerous times on this list.
SW raid is faster
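For anyone wanting to try the software-RAID side of the comparison, a minimal creation sketch follows; device names and chunk size are illustrative, and the command destroys any data on the listed disks.

  # Build a 10-disk md RAID 5 (illustrative devices and chunk size):
  mdadm --create /dev/md0 --level=5 --raid-devices=10 \
        --chunk=256 /dev/sd[b-k]1

  # Let the initial resync finish before benchmarking:
  cat /proc/mdstat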
--- Justin Piszcz [EMAIL PROTECTED] wrote:
On Fri, 13 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
To give you an example, I get 464MB/s write and 627MB/s with a 10-disk Raptor software RAID 5.
Is that with the 9650?
Andrew
Sorry no,
On Sat, 14 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
On Fri, 13 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
To give you an example, I get 464MB/s write and 627MB/s with a 10-disk Raptor software RAID 5.
Is that with the
--- Mikael Abrahamsson [EMAIL PROTECTED] wrote:
Take your 3ware HW-RAID, do a dd (read or write) to the device and see it
being very quick (because it can fit all the data into its cache as it
either reads or writes), then put a filesystem on it and do writes there,
especially journaled
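One way to see the cache effect described above is to compare a buffered dd against an O_DIRECT one; a sketch, with illustrative paths and sizes:

  # Buffered: the page cache and controller cache absorb the burst,
  # so short runs report unrealistically high throughput.
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096

  # O_DIRECT bypasses the page cache, and conv=fdatasync makes dd
  # wait for the data to reach the device before reporting a rate:
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 \
     oflag=direct conv=fdatasync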
--- Justin Piszcz [EMAIL PROTECTED] wrote:
03:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
$19.99 2 port SYBA cards (Silicon Image 3132s)
http://www.directron.com/sdsa2pex2ir.html
Cool, thanks.
What are your bonnie++ rewrite numbers?
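For reference, a bonnie++ invocation of the sort that produces the rewrite column; directory, size, and user are illustrative, and the -s size should be at least twice RAM so the page cache can't hide the disks:

  # -d test directory, -s total file size in MB, -n 0 skips the
  # small-file tests, -u user to run as when invoked by root:
  bonnie++ -d /mnt/test -s 16384 -n 0 -u nobody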
On Sat, 14 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
03:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
$19.99 2 port SYBA cards (Silicon Image 3132s)
http://www.directron.com/sdsa2pex2ir.html
Cool, thanks.
Wouldn't Raid 6 be slower than Raid 5 because of the extra fault tolerance?
http://www.enterprisenetworksandservers.com/monthly/art.php?1754 - 20% drop according to this article
His 500GB WD drives are 7200RPM compared to the Raptors' 10K, so his
numbers will be slower.
Justin, what file
On Fri, 13 Jul 2007, Joshua Baker-LePain wrote:
My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD
drives. The controller is set up as a RAID6 w/ a hot spare. OS is CentOS 5
x86_64. It's all running on a couple of Xeon 5130s on a Supermicro X7DBE
motherboard w/ 4GB of
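On a controller like that, the 3ware CLI can show how the unit is actually configured, which matters when comparing numbers; a sketch with illustrative controller and unit IDs:

  # List units and drives on controller 0:
  tw_cli /c0 show

  # Show one unit's details, including stripe size and cache policy:
  tw_cli /c0/u0 show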
On Fri, 13 Jul 2007 at 2:35pm, Justin Piszcz wrote
On Fri, 13 Jul 2007, Joshua Baker-LePain wrote:
My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD
drives. The controller is set up as a RAID6 w/ a hot spare. OS is CentOS
5 x86_64. It's all running on a couple of Xeon
Joshua Baker-LePain wrote:
[...]
Yep, hardware RAID -- I need the hot swappability (which, AFAIK, is
still an issue with md).
Just out of curiosity - what do you mean by swappability?
For many years we've been using Linux software RAID, and we've had no
problems with the swappability of the component drives (in
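For what it's worth, the md-side workflow for swapping a component drive looks like the sketch below, assuming hot-swap capable hardware; device names are illustrative.

  # Mark the member faulty and remove it from the array:
  mdadm /dev/md0 --fail /dev/sdc1
  mdadm /dev/md0 --remove /dev/sdc1

  # Physically swap the drive, recreate the partition, then re-add:
  mdadm /dev/md0 --add /dev/sdc1

  # Watch the rebuild:
  cat /proc/mdstat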