On Fri, 12 Jan 2007, Justin Piszcz wrote:
On Fri, 12 Jan 2007, Michael Tokarev wrote:
Justin Piszcz wrote:
Using 4 raptor 150s:
Without the tweaks, I get 111MB/s write and 87MB/s read.
With the tweaks, 195MB/s write and 211MB/s read.
Using kernel 2.6.19.1
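For anyone skimming, a consolidated sketch of the tweaks discussed in this thread (the md device name, the member disk name, and the max_sectors_kb value are assumptions; the stripe cache and 256 MB read-ahead figures come from the messages below):
# md stripe cache size; memory used is this many pages per member disk (default 256):
echo 8192 > /sys/block/md3/md/stripe_cache_size
# array read-ahead; --setra counts 512-byte sectors, so 524288 = 256 MB:
blockdev --setra 524288 /dev/md3
# per-member request size cap, one echo per member disk (value assumed):
echo 128 > /sys/block/sda/queue/max_sectors_kb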
On Fri, 12 Jan 2007, Al Boldi wrote:
Justin Piszcz wrote:
RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
This should be 1:14, not 1:06 (the 1:06 was with a similarly sized file, but
not the same one). The 1:14 is with the same file used for the other
benchmarks, and to get it I used 256 MB of read-ahead.
out
10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s
Awful performance with your numbers/drop_caches settings!
What were your tests designed to show?
Justin.
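For reference, the drop_caches interface being argued over is the standard kernel one; the usual between-runs invocation is:
sync
echo 3 > /proc/sys/vm/drop_caches   # 3 = drop page cache plus dentries and inodes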
On Fri, 12 Jan 2007, Justin Piszcz wrote:
On Fri, 12 Jan 2007, Al Boldi wrote:
Justin Piszcz wrote:
RAID 5
On Sat, 13 Jan 2007, Al Boldi wrote:
Justin Piszcz wrote:
Btw, max_sectors_kb did improve my performance a little bit, but
stripe_cache + read_ahead were the main optimizations; they made everything
go faster by roughly 1.5x. I have individual bonnie++ benchmarks of
[only] the max_sector_kb
With 4 Raptor 150s XFS (default XFS options):
# Stripe tests:
echo 8192 > /sys/block/md3/md/stripe_cache_size
# DD TESTS [WRITE]
DEFAULT:
$ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 96.6988 seconds, 111 MB/s
=393216 blocks=0, rtextents=0
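The snippet above only shows the write side; the matching read test would be something like this sketch (filename taken from the write command, options assumed to mirror it):
$ dd if=10gb.no.optimizations.out of=/dev/null bs=1M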
On Fri, 12 Jan 2007, David Chinner wrote:
On Thu, Jan 11, 2007 at 06:05:36PM -0500, Justin Piszcz wrote:
With 4 Raptor 150s XFS (default XFS options):
I need more context for this to be meaningful in any way.
What type of md config are you using here? RAID0, 1
Using 4 raptor 150s:
Without the tweaks, I get 111MB/s write and 87MB/s read.
With the tweaks, 195MB/s write and 211MB/s read.
Using kernel 2.6.19.1.
Without the tweaks and with the tweaks:
# Stripe tests:
echo 8192 > /sys/block/md3/md/stripe_cache_size
# DD TESTS [WRITE]
DEFAULT: (512K)
$ dd
On Sat, 21 Oct 2006, Dan wrote:
I have been using an older 64-bit system, socket 754, for a while now. It has
the old 33 MHz PCI bus. I have two low-cost (no HW RAID) PCI SATA I cards,
each with 4 ports, to give me an eight-disk RAID 6. I also have a Gig NIC on
the PCI bus. I have Gig
See if it's mounted etc. first; if not: mdadm -S /dev/md0 (stop it)
Then try again.
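Spelled out, the suggested sequence is roughly this (a sketch; only the device name comes from the mail):
mount | grep md0    # check whether the array is mounted
umount /dev/md0     # if it is, unmount it first
mdadm -S /dev/md0   # stop the array, then retry the create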
On Sat, 14 Oct 2006, Ray Greene wrote:
I am having problems creating a RAID 5 array using 3x400GB SATA drives
on a Dell SC430 running Xandros 4.
I created this once with Webmin and it worked OK but then I
fdisk -l
then you have to assemble the array
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdb1 # i think, man mdadm
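After assembling, the standard sanity checks are:
cat /proc/mdstat          # the array should be listed as active
mdadm --detail /dev/md0   # member states and sync status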
On Tue, 12 Sep 2006, Dexter Filmore wrote:
When running Knoppix on my file server, I can't mount /dev/md0 simply because
it isn't there.
Am I guessing right that I need to recreate
Strange, what Knoppix are you using? I recall using it to fix an XFS bug
with 4.x and 5.x.
On Tue, 12 Sep 2006, Dexter Filmore wrote:
On Tuesday, 12 September 2006 16:08, Justin Piszcz wrote:
/dev/MAKEDEV /dev/md0
also make sure the SW raid modules etc are loaded if necessary.
Won't
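A sketch of that module check (module names depend on the kernel build; these are the common ones for 2.6):
lsmod | grep raid   # see what is already loaded
modprobe md-mod     # core md driver, if modular
modprobe raid1      # plus the personality the array uses (raid0/raid1/raid5)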
On Wed, 6 Sep 2006, Sandra L. McGrew wrote:
I have two hard drives installed in this Dell GX110 Optiplex computer. I
believe that they are configured in RAID5, but am not certain. Is there a
graphical method of determining how many drives are being used and how they
are configured?
I'm
Second, trying checks on a fast (2.2 GHz AMD64) machine, I'm surprised
at how slow it is:
The PCI bus is only capable of 133 MB/s max. Unless you have dedicated
SATA ports, each on its own PCI-e bus, you will not get speeds in excess
of 133 MB/s. As for 200 MB/s+, I have read reports of someone using
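For reference, that ceiling is just bus arithmetic: conventional PCI is 32 bits wide at 33 MHz, so 4 bytes x 33,000,000 transfers/s is roughly 133 MB/s, shared by every device on the bus, the Gig NIC included.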
On Tue, 22 Aug 2006, Steve Cousins wrote:
Hi,
I have a set of eleven 500 GB drives. Currently each has two 250 GB partitions
(/dev/sd?1 and /dev/sd?2). I have two RAID6 arrays set up, each with 10
drives and then I wanted the 11th drive to be a hot-spare. When I originally
created the array
Adding XFS mailing list to this e-mail to show that the grow for xfs
worked.
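The grow itself is a one-liner; a minimal sketch, assuming the array is mounted at /mnt/array (the mount point is not given in the mails):
xfs_growfs /mnt/array   # XFS grows online, while mounted
df -h /mnt/array        # confirm the new capacity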
On Thu, 17 Aug 2006, 舒星 wrote:
I've only tried growing a RAID5, which was the only RAID type I remember
being supported (for growing) in the kernel; I am not sure if it's possible to
I know this, but how do you grow your
On Thu, 17 Aug 2006, 舒星 wrote:
Hello all:
I installed mdadm 2.5.2 and compiled the 2.6.17.6 kernel. When I run the
command to grow a RAID5 array, it doesn't work. How can I make the RAID5
grow? Thanks for your help.
I've only tried growing a RAID5, which was the only RAID type I remember
being supported (for growing) in the kernel; I am not sure if it's possible to
grow other types of RAID arrays.
On Thu, 17 Aug 2006, 舒星 wrote:
Dear sir:
I tried to run a reshape command with mdadm (version 2.5.2). In
On Sat, 12 Aug 2006, Chuck Ebbert wrote:
Doing this on a raid1 array:
echo check > /sys/block/md0/md/sync_action
On 2.6.16.27:
Activity lights on both mirrors show activity for a while,
then the array status prints on the console.
On 2.6.18-rc4 + the below patch:
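For anyone reproducing this, the standard way to watch a check and read its result (stock md interfaces):
cat /proc/mdstat                     # shows progress while the check runs
cat /sys/block/md0/md/mismatch_cnt   # nonzero afterwards means the mirrors differ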
Neil,
It worked, echoing the 600 to the stripe width in /sys. However, how come
/dev/md3 says it is 0 MB when I type fdisk -l?
Is this normal?
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md3: 0 MB, 0 bytes
2 heads, 4 sectors/track, 0 cylinders
Units = cylinders of 8 *
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes. Needed 512
Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28
So the RAID5 reshape only works if
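The usual remedy (a hedged sketch; -28 is ENOSPC, the 512 comes from the kernel message above, and the device name and target disk count are taken from the surrounding mails):
# the reshape needs at least 512 stripes cached; the default is only 256
echo 1024 > /sys/block/md3/md/stripe_cache_size
mdadm --grow /dev/md3 --raid-devices=8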
On Tue, 11 Jul 2006, Jan Engelhardt wrote:
md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1]
hda1[0]
2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
[>....................]  reshape =  0.2% (1099280/390708736) finish=1031.7min
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
Device Size
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jun 30 09:17:12 2006
On Sat, 8 Jul 2006, Reuben Farrelly wrote:
I'm just in the process of upgrading the RAID-1 disks in my server, and have
started to experiment with the RAID-1 --grow command. The first phase of the
change went well: I added the new disks to the old arrays and then increased
the size of the
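For context, the disk-upgrade dance being described usually runs like this (a sketch with assumed device names, not Reuben's exact commands):
mdadm /dev/md0 --add /dev/sdc1                       # add a new, larger disk; let it resync
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1   # then retire an old one
mdadm --grow /dev/md0 --size=max                     # expand into the larger disks
after which the filesystem on top still has to be grown separately.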
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
On Fri, 7 Jul 2006, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1
p34:~# mdadm -D
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
Jul 7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes. Needed 512
Jul 7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28
So the RAID5 reshape only works if
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
Hey! You're awake :)
Yes, and thinking about breakfast (it's 8:30am here).
I am going to try it with just 64 KB to prove to myself that it works, but
then I will re-create the RAID5 like I had
On Sat, 8 Jul 2006, Neil Brown wrote:
On Friday July 7, [EMAIL PROTECTED] wrote:
I guess one has to wait until the reshape is complete before growing the
filesystem..?
Yes. The extra space isn't available until the reshape has completed
(if it was available earlier, the reshape wouldn't
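With a recent enough mdadm, one way to serialize the two steps (a sketch; device and mount point assumed):
mdadm --wait /dev/md3   # returns once the reshape has completed
xfs_growfs /mnt/array   # only now is the extra space there to claim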
On Mon, 3 Jul 2006, Jeff Garzik wrote:
Justin Piszcz wrote:
In the source:
enum {
        uli_5289        = 0,
        uli_5287        = 1,
        uli_5281        = 2,
        uli_max_ports   = 4,
        /* PCI configuration registers
In the source:
enum {
        uli_5289        = 0,
        uli_5287        = 1,
        uli_5281        = 2,
        uli_max_ports   = 4,
        /* PCI configuration registers */
        ULI5287_BASE    = 0x90, /* sata0 phy SCR registers */
On Wed, 28 Jun 2006, [EMAIL PROTECTED] wrote:
Mike Dresser wrote:
On Fri, 23 Jun 2006, Molle Bestefich wrote:
Christian Pernegger wrote:
Anything specific wrong with the Maxtors?
I'd watch out regarding the Western Digital disks, apparently they
have a bad habit of turning themselves
On Wed, 28 Jun 2006, Brad Campbell wrote:
Guy wrote:
Hello group,
I am upgrading my disks from old 18 GB SCSI disks to 300 GB SATA
disks. I need a good SATA controller. My system is old and has PCI v2.1.
I need a 4-port card, or two 2-port cards. My system has multiple PCI buses,
On Wed, 28 Jun 2006, Christian Pernegger wrote:
My current 15-drive RAID-6 server is built around a KT600 board with an AMD
Sempron processor and 4 SATA150TX4 cards. It does the job, but it's not the
fastest thing around
(takes about 10 hours to do a check of the array or about 15 to do a
Anyone have an ETA on this? I heard soon, but was wondering how soon?
kernel-version-2.6.x
kernel-version-2.6.x/arcmsr
kernel-version-2.6.x/arcmsr/arcmsr.c
kernel-version-2.6.x/arcmsr/arcmsr.h
kernel-version-2.6.x/arcmsr/Makefile
kernel-version-2.6.x/readme.txt
The driver is quite small and
On Sun, 25 Jun 2006, Bill Davidsen wrote:
Justin Piszcz wrote:
On Sat, 24 Jun 2006, Neil Brown wrote:
On Friday June 23, [EMAIL PROTECTED] wrote:
The problem is that there is no cost-effective backup available.
One-liner questions:
- How does Google make backups?
No, Google
I set a disk faulty and then rebuilt it; afterwards, I got horrible
performance. I was using 2.6.16.20 during the tests.
The FS I use is XFS.
# xfs_info /dev/md3
meta-data=/dev/root              isize=256    agcount=16, agsize=1097941 blks
         =                       sectsz=512   attr=0
mkfs -t xfs -f -d su=128k,sw=14 /dev/md9
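For context, the general alignment rule behind those flags (not spelled out in the mail): su should equal the md chunk size and sw the number of data-bearing disks, so su=128k,sw=14 fits, for example, a 16-disk RAID6 with a 128 KB chunk. For a 4-disk RAID5 with the default 64k chunk (3 data disks), the equivalent would be (device name assumed):
mkfs -t xfs -f -d su=64k,sw=3 /dev/md3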
Gordon, what speed do you get on your RAID, read and write?
When I made my XFS/RAID-5, I accepted the defaults for the XFS filesystem
but used a 512 KB stripe. I get 80-90 MB/s reads and ~39 MB/s writes.
On 5 x 400GB ATA/100 Seagates (on a regular