I recently upgraded my file server, yet I'm still unsatisfied with the write
speed.
Machine now is an Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
The four RAID disks are attached to the board's onboard SATA controller
(Sil3114, attached via PCI).
Kernel is 2.6.21.1, custom on Slackware
Bernd Schubert wrote:
Try to increase the read-ahead size of your lvm devices:
blockdev --setra 8192 /dev/raid10/space
or at least increase it to the same value as that of your RAID device
(check with blockdev --getra /dev/mdX).
This did the trick, although I am still lagging behind the raw md device
by about 3 -
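The tuning exchanged above can be sketched as the following commands (the device paths /dev/md0 and /dev/raid10/space come from this thread; substitute your own md and LVM nodes, and note that blockdev reports read-ahead in 512-byte sectors):

```shell
# Read-ahead values are in 512-byte sectors.
# Compare the raw md device with the LV stacked on top of it:
blockdev --getra /dev/md0
blockdev --getra /dev/raid10/space

# Raise the LV's read-ahead to 8192 sectors (4 MiB) so it matches or
# exceeds the md device, as suggested above:
blockdev --setra 8192 /dev/raid10/space
```

The setting does not survive a reboot, so it would normally go into a boot script.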
Hi,
RAID levels 0 and 4 do not seem to like the -b internal. Is this
intentional? Runs 2.6.20.2 on i586.
(BTW, do you already have a PAGE_SIZE=8K fix?)
14:47 ichi:/dev # mdadm -C /dev/md0 -l 4 -e 1.0 -b internal -n 2 /dev/ram[01]
mdadm: RUN_ARRAY failed: Input/output error
mdadm: stopped
On Mon, 11 Jun 2007, Justin Piszcz wrote:
10gb read test:
dd if=/dev/md0 bs=1M count=10240 of=/dev/null
What is the result?
71.7 MB/s - but that's reading to /dev/null. *Writing* real data, however,
looks quite different.
I've read that LVM can incur a 30-50% slowdown.
Even then, the 8-10MB/s I get would be a little low.
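A write test analogous to the read test above can be sketched like this (the target path is a placeholder; point it at a scratch file on the filesystem under test, and raise count for a result that outruns the page cache):

```shell
# conv=fdatasync makes dd flush the data to disk before reporting its
# rate, so the figure reflects real write throughput rather than
# page-cache speed. /tmp is only a placeholder target here.
TARGET=/tmp/ddtest
dd if=/dev/zero of="$TARGET" bs=1M count=128 conv=fdatasync
rm -f "$TARGET"
```

Without conv=fdatasync (or a trailing sync), dd on a file this small mostly measures RAM.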
--
On 11 Jun 2007, Justin Piszcz told this:
You can do a read test.
10gb read test:
dd if=/dev/md0 bs=1M count=10240 of=/dev/null
What is the result?
I've read that LVM can incur a 30-50% slowdown.
FWIW I see a much smaller penalty than that.
loki:~# lvs -o +devices
LV VG
On Tuesday June 12, [EMAIL PROTECTED] wrote:
Can anyone please advise which commands we should use to get the array
back to at least a read only state?
mdadm --assemble /dev/md0 /dev/sd[abcd]2
and let mdadm figure it out. It is good at that.
If the above doesn't work, add --force, but be
Replacing (n & (n-1)) in the context of power-of-2 checks
with is_power_of_2()
Signed-off-by: vignesh babu [EMAIL PROTECTED]
---
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index ef124b7..3e1817a 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -19,6 +19,7 @@