On Mon, Feb 04, 2008 at 07:38:40PM +0300, Michael Tokarev wrote:
Eric Sandeen wrote:
[]
http://oss.sgi.com/projects/xfs/faq.html#nulls
and note that recent fixes have been made in this area (also noted in
the faq)
Also - the above all assumes that when a drive says it's written/flushed
data,
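A hedged aside: one way to probe that assumption (taking /dev/sda as the member disk, adjust to taste) is to query the drive's volatile write cache with hdparm, and switch it off if barriers cannot be relied on:

  hdparm -W /dev/sda    # show the current write-caching setting
  hdparm -W0 /dev/sda   # disable the volatile write cache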
Janek Kozicki wrote:
writing on raid10 is supposed to be half the speed of reading. That's
because it must write to both mirrors.
I am not 100% certain about the following rules, but afaik any raid
configuration has a theoretical[1] maximum read speed of the combined speed of
all disks in
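As a crude sketch of measuring that in practice (assuming the array is /dev/md0), sequential read throughput can be checked with dd, bypassing the page cache:

  dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct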
On Sat, Feb 02, 2008 at 08:41:31PM +0100, Keld Jørn Simonsen wrote:
Make each of the disks bootable by lilo:
lilo -b /dev/sda /etc/lilo.conf1
lilo -b /dev/sdb /etc/lilo.conf2
There should be no need for that.
to achieve the above effect with lilo you use
raid-extra-boot=mbr-only
in
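For reference, a minimal lilo.conf sketch (the boot device name is illustrative):

  boot=/dev/md0
  raid-extra-boot=mbr-only

With raid-extra-boot=mbr-only, lilo writes its first-stage loader into the MBR of every disk in the array rather than into the RAID partition itself, so any surviving disk can still boot on its own.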
On Tuesday 05 February 2008 21:12:32, Neil Brown wrote:
% mdadm --zero-superblock /dev/sdb1
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
That's weird.
Why can't it open it?
Hell if I know. First time I see such a thing.
Maybe you aren't running as root (The '%' prompt is
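A quick sanity check, for what it's worth: confirm you really are root, and retry with elevated privileges if not:

  id -un                                  # should print 'root'
  sudo mdadm --zero-superblock /dev/sdb1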
On Tuesday 05 February 2008 12:43:31, Moshe Yudkowsky wrote:
1. Where does this info on the array reside?! I have deleted /etc/mdadm/mdadm.conf
and the /dev/md devices, and yet it comes seemingly out of nowhere.
/boot has a copy of mdadm.conf so that / and other drives can be started
and then
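If the stale copy lives in the initramfs under /boot then (on a Debian-style system, which is an assumption here) regenerating it after editing /etc/mdadm/mdadm.conf keeps the two in sync:

  update-initramfs -u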
Marcin Krol wrote:
On Tuesday 05 February 2008 21:12:32, Neil Brown wrote:
% mdadm --zero-superblock /dev/sdb1
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
That's weird.
Why can't it open it?
Hell if I know. First time I see such a thing.
Maybe you aren't running as root (The
On Wednesday February 6, [EMAIL PROTECTED] wrote:
Maybe the kernel has been told to forget about the partitions of
/dev/sdb.
But fdisk/cfdisk has no problem whatsoever finding the partitions.
It is looking at the partition table on disk. Not at the kernel's
idea of partitions, which
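One hedged way to bring the kernel's view back in line, assuming nothing still holds /dev/sdb open, is to ask it to re-read the on-disk partition table:

  blockdev --rereadpt /dev/sdb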
On Wednesday 06 February 2008 11:11:51, Peter Rabbitson wrote:
lsof /dev/sdf1 gives ZERO results.
What does this say:
dmsetup table
% dmsetup table
vg-home: 0 61440 linear 9:2 384
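In that output, 9:2 is a major:minor pair, and major 9 is the md driver, so the vg-home mapping sits on /dev/md2 and device-mapper is what keeps the array open. A sketch of releasing it, assuming the volume group is named vg and nothing in it is mounted:

  vgchange -an vg          # deactivate the LVs, closing the underlying md device
  dmsetup remove vg-home   # or tear down just this one mapping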
Regards,
Marcin Krol
Marcin Krol wrote:
Hello everyone,
I have had a problem with a RAID array (udev messed up the disk names; I had
RAID on whole disks, without RAID partitions)
Do you mean that you originally used /dev/sdb for the RAID array? And now you
are using /dev/sdb1?
Given the system seems confused I
Wednesday 06 February 2008 12:22:00:
I have had a problem with a RAID array (udev messed up the disk names; I had
RAID on whole disks, without RAID partitions)
Do you mean that you originally used /dev/sdb for the RAID array? And now you
are using /dev/sdb1?
That's reconfigured now, it
Wednesday 06 February 2008 11:43:12:
On Wednesday February 6, [EMAIL PROTECTED] wrote:
Maybe the kernel has been told to forget about the partitions of
/dev/sdb.
But fdisk/cfdisk has no problem whatsoever finding the partitions.
It is looking at the partition table on disk.
Hi All,
I was wondering if someone might be willing to confirm what the current
state of my RAID array is, given the following sequence of events (sorry
it's pretty long)
I had a clean, running /dev/md0 using 5 disks in RAID 5 (sda1, sdb1,
sdc1, sdd1, hdd1). It had been clean like that for
Keld Jørn Simonsen wrote:
Make each of the disks bootable by grub
(to be described)
It would probably be good to show how to use grub shell's install
command. It's the most flexible way and gives the most (or rather total)
control. I could write some examples.
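As a rough sketch of what such examples might look like (legacy grub shell, a two-disk RAID1 with /boot on the first partition of each disk; device names are illustrative), map each disk to (hd0) in turn so that the MBR written to it points at its own copy of /boot. setup is the convenience wrapper around the lower-level install command:

  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)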
I read through the document, and I've signed up for a Wiki account so I
can edit it.
One of the things I wanted to do was correct the title. I see that there
are *three* different Wiki pages about how to build a system that boots
from RAID. None of them are complete yet.
So, what is the
- Message from [EMAIL PROTECTED] -
Date: Wed, 6 Feb 2008 12:58:55 -
From: Steve Fairbairn [EMAIL PROTECTED]
Reply-To: Steve Fairbairn [EMAIL PROTECTED]
Subject: Disk failure during grow, what is the current state.
To: linux-raid@vger.kernel.org
As you can see,
I'm having a nightmare with emails today. I can't get a single one
right first time. Apologies to Alex for sending it directly to him and
not to the list on first attempt.
Steve
-Original Message-
From: Steve Fairbairn [mailto:[EMAIL PROTECTED]
Sent: 06 February 2008 15:02
To:
On Wed, Feb 06, 2008 at 08:24:37AM -0600, Moshe Yudkowsky wrote:
I read through the document, and I've signed up for a Wiki account so I
can edit it.
One of the things I wanted to do was correct the title. I see that there
are *three* different Wiki pages about how to build a system that
On Wed, Feb 06, 2008 at 10:05:58AM +0100, Luca Berra wrote:
On Sat, Feb 02, 2008 at 08:41:31PM +0100, Keld Jørn Simonsen wrote:
Make each of the disks bootable by lilo:
lilo -b /dev/sda /etc/lilo.conf1
lilo -b /dev/sdb /etc/lilo.conf2
There should be no need for that.
to achieve the
-Original Message-
From: Steve Fairbairn [mailto:[EMAIL PROTECTED]
Sent: 06 February 2008 15:02
To: 'Nagilum'
Subject: RE: Disk failure during grow, what is the current state.
Array Size : 1953535744 (1863.04 GiB 2000.42 GB)
Used Dev Size : 488383936 (465.76 GiB
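A quick sanity check on those figures: a 5-disk RAID5 has four data-bearing members, and 4 x 488383936 KiB = 1953535744 KiB, which matches the reported Array Size exactly, so the numbers are self-consistent for the original 5-disk layout.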
Neil Brown wrote:
On Sunday February 3, [EMAIL PROTECTED] wrote:
Hi,
Maybe I'll buy three HDDs to put a raid10 on them. And get the total
capacity of 1.5 of a disc. 'man 4 md' indicates that this is possible
and should work.
I'm wondering - how a single disc failure is handled in such
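For what it's worth, a sketch of creating such an array (illustrative device names; with the default near=2 layout, two copies of every block are spread across the three disks, hence 1.5 disks of usable capacity):

  mdadm --create /dev/md0 --level=10 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1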
Keld Jørn Simonsen wrote:
I understand that lilo and grub can only boot partitions that look like
a normal single-drive partition. And then I understand that a plain
raid10 has a layout which is equivalent to raid1. Can such a raid10
partition be used with grub or lilo for booting?
And would
On Feb 6, 2008 12:43 PM, Bill Davidsen [EMAIL PROTECTED] wrote:
Can you create a raid10 with one drive missing and add it later? I
know, I should try it when I get a machine free... but I'm being lazy today.
Yes you can. With 3 drives, however, performance will be awful (at
least with layout
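A sketch of the degraded create, using the literal word 'missing' in place of the absent device (device names illustrative):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 missing
  # later, once the fourth drive is available:
  mdadm --add /dev/md0 /dev/sdd1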
Keld Jørn Simonsen wrote:
Hi
I am looking at revising our howto. I see a number of places where a
chunk size of 32 kiB is recommended, and even recommendations on
maybe using sizes of 4 kiB.
Depending on the raid level, a write smaller than the chunk size causes
the chunk to be read,
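For concreteness, the chunk size is fixed at creation time; a sketch with illustrative devices and a 256 kiB chunk:

  mdadm --create /dev/md0 --level=5 --chunk=256 --raid-devices=4 \
        /dev/sd[abcd]1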
Hello, Neil.
.
Possible you have bad memory, or a bad CPU, or you are overclocking
the CPU, or it is getting hot, or something.
It seems to me that all my problems started after I updated mdadm.
This server had worked normally (though not with soft-RAID) for more than 2-3 years.
In message [EMAIL PROTECTED] you wrote:
I actually think the kernel should operate with block sizes
like this and not with 4 kiB blocks. It is the readahead and the elevator
algorithms that save us from randomly reading 4 kb a time.
Exactly, and nothing saves us from a R-A-RW (read-alter-rewrite) cycle if the
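As a hedged aside, the readahead being credited here is tunable per block device; blockdev reports and sets it in 512-byte sectors:

  blockdev --getra /dev/md0       # current readahead, in 512-byte sectors
  blockdev --setra 4096 /dev/md0  # raise it to 2 MiB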
Jon Nelson wrote:
On Feb 6, 2008 12:43 PM, Bill Davidsen [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
Can you create a raid10 with one drive missing and add it later? I
know, I should try it when I get a machine free... but I'm being
lazy today.
Yes you can. With 3 drives,
Wolfgang Denk wrote:
In message [EMAIL PROTECTED] you wrote:
I actually think the kernel should operate with block sizes
like this and not with 4 kiB blocks. It is the readahead and the elevator
algorithms that save us from randomly reading 4 kb a time.
Exactly, and nothing saves
Bill Davidsen said: (by the date of Wed, 06 Feb 2008 13:16:14 -0500)
Janek Kozicki wrote:
Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500
(EST))
writing on raid10 is supposed to be half the speed of reading. That's
because it must write to both mirrors.
Andreas-Sokov said: (by the date of Wed, 6 Feb 2008 22:15:05 +0300)
Hello, Neil.
.
Possible you have bad memory, or a bad CPU, or you are overclocking
the CPU, or it is getting hot, or something.
It seems to me that all my problems started after I updated
On Wed, Feb 06, 2008 at 01:52:11PM -0500, Bill Davidsen wrote:
Keld Jørn Simonsen wrote:
I understand that lilo and grub can only boot partitions that look like
a normal single-drive partition. And then I understand that a plain
raid10 has a layout which is equivalent to raid1. Can such a
On Wed, Feb 06, 2008 at 09:25:36PM +0100, Wolfgang Denk wrote:
In message [EMAIL PROTECTED] you wrote:
I actually think the kernel should operate with block sizes
like this and not with 4 kiB blocks. It is the readahead and the elevator
algorithms that save us from randomly reading 4
On Wednesday February 6, [EMAIL PROTECTED] wrote:
4. Would it be possible to later '--grow' the array to use 4 discs in
raid10 ? Even with far=2 ?
No.
Well if by later you mean in five years, then maybe. But the
code doesn't currently exist.
That's a
On Wednesday February 6, [EMAIL PROTECTED] wrote:
% cat /proc/partitions
major minor  #blocks   name
   8     0  390711384  sda
   8     1  390708801  sda1
   8    16  390711384  sdb
   8    17  390708801  sdb1
   8    32  390711384  sdc
   8    33  390708801  sdc1
   8    48  390710327  sdd
On Thu, Feb 07, 2008 at 01:31:16AM +0100, Keld Jørn Simonsen wrote:
Anyway, why does a SATA-II drive not deliver something like 300 MB/s?
Wait, are you talking about a *single* drive?
In that case, it seems you are confusing the interface speed (300MB/s)
with the mechanical read speed (80MB/s).
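The difference is easy to demonstrate with hdparm, which times cached reads (exercising the interface and memory) against raw device reads (what the platters actually sustain); assuming /dev/sda:

  hdparm -tT /dev/sda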
On Wednesday February 6, [EMAIL PROTECTED] wrote:
We implemented the option to select kernel page sizes of 4, 16, 64
and 256 kB for some PowerPC systems (440SPe, to be precise). A nice
graphics of the effect can be found here:
On Wednesday February 6, [EMAIL PROTECTED] wrote:
Keld Jørn Simonsen wrote:
Hi
I am looking at revising our howto. I see a number of places where a
chunk size of 32 kiB is recommended, and even recommendations on
maybe using sizes of 4 kiB.
Depending on the raid level, a write
On Thursday February 7, [EMAIL PROTECTED] wrote:
Anyway, why does a SATA-II drive not deliver something like 300 MB/s?
Are you serious?
A high-end 15000RPM enterprise-grade drive such as the Seagate
Cheetah® 15K.6 only delivers 164MB/sec.
The SATA Bus might be able to deliver