Hello, Neil.
You wrote, on 5 February 2008, 01:48:33:
On Monday February 4, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED]:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2]
On Tuesday February 5, [EMAIL PROTECTED] wrote:
Feb 5 11:56:12 raid01 kernel: BUG: unable to handle kernel paging request at
virtual address 001cd901
This looks like some sort of memory corruption.
Feb 5 11:56:12 raid01 kernel: EIP is at md_do_sync+0x629/0xa32
This tells us what code is involved.
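The EIP line itself carries useful information: `md_do_sync+0x629/0xa32` means the faulting instruction is 0x629 bytes into a function whose total size is 0xa32 bytes. With the matching `vmlinux` for this exact kernel build, `gdb vmlinux` followed by `list *(md_do_sync+0x629)` would map it back to a source line. The offsets decode as:

```shell
# Decode the offset/size pair from the oops (hex -> decimal).
printf 'fault at byte %d of %d in md_do_sync\n' $((0x629)) $((0xa32))
```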
Hello everyone,
I had a problem with a RAID array on a Debian Etch server with 6 disks
(udev messed up the disk names; I had the RAID on whole disks only,
without RAID partitions), so I decided to rearrange this.
I deleted the disks from the (2 RAID-5) arrays, deleted the md* devices
from /dev, created
1. Where does this info on the array reside?! I have deleted
/etc/mdadm/mdadm.conf and the /dev/md devices, and yet it comes
seemingly out of nowhere.
/boot has a copy of mdadm.conf so that / and other drives can be started
and then mounted. update-initramfs will update /boot's copy of mdadm.conf.
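On Debian the boot-time copy lives inside the initramfs under /boot, so editing /etc/mdadm/mdadm.conf alone is not enough. A minimal sketch of keeping the two in sync (commands need root and a real system, so they are shown as comments; the UUID below is invented for illustration):

```shell
# After changing the array layout, regenerate the config and refresh
# the initramfs copy (run as root; sketch only):
#   mdadm --detail --scan > /etc/mdadm/mdadm.conf
#   update-initramfs -u
# The scan output consists of ARRAY lines of roughly this shape
# (UUID made up here for illustration):
printf 'ARRAY /dev/md1 level=raid5 num-devices=4 UUID=%s\n' \
    'a1b2c3d4:00000000:00000000:00000000'
```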
--
Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)
2. How can I delete that damn array so it doesn't hang my server up in a loop?
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
I'm not using mdadm.conf at all. Everything is stored in the
superblock of the device. So if you
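Since the metadata lives in the component superblocks, wiping those is what actually removes the array. A sketch on a scratch file rather than the thread's /dev/sdb1 — note that zeroing the start of the device only clears metadata formats stored near the beginning (v1.1/v1.2); a v0.90 superblock sits near the end of the device, which is why `mdadm --zero-superblock` is the safer tool:

```shell
# Demonstrated on a scratch file instead of a real component device.
img=/tmp/component.img
dd if=/dev/urandom of="$img" bs=1M count=32 2>/dev/null   # stand-in data
dd if=/dev/zero    of="$img" bs=1M count=10 conv=notrunc 2>/dev/null
# The first 10 MiB are now all zeroes; the tail of the device
# (where a v0.90 superblock would live) is untouched.
head -c 10485760 "$img" | tr -d '\0' | wc -c
```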
Michael Tokarev wrote:
note that with some workloads, write caching in the drive actually makes
write speed worse, not better - namely, in the case of massive writes.
With write barriers enabled, I did a quick test of a large copy from one
backup filesystem to another. I'm
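A rough sketch of that kind of test, run here on scratch files instead of real backup filesystems (toggling the drive's own write cache needs hardware access, e.g. `hdparm -W0` / `hdparm -W1`, and is left as a comment):

```shell
# Toggle the drive cache on real hardware with:
#   hdparm -W0 /dev/sdX   (cache off)   hdparm -W1 /dev/sdX   (cache on)
# Then time a copy plus a flush; the `sync` is where cache- and
# barrier-related differences would show up.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=8 2>/dev/null
time { cp "$src" "$dst"; sync; }
```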
Janek Kozicki wrote:
Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)
2. How can I delete that damn array so it doesn't hang my server up in a loop?
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
This works provided the superblocks are at the beginning of the component
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 16:52:18 +0300)
Janek Kozicki wrote:
I'm not using mdadm.conf at all.
That's wrong, as you need at least something to identify the array
components.
I was afraid of that ;-) So, is that a correct way to automatically generate a
On Thu, Jan 31, 2008 at 02:55:07AM +0100, Keld Jørn Simonsen wrote:
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote:
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07
+0100)
All the raid10's will have double time for writing, and raid5 and raid6 will
Hi
I am looking at revising our howto. I see a number of places where a
chunk size of 32 kiB is recommended, and even recommendations on
maybe using sizes of 4 kiB.
My own take on that is that this really hurts performance.
Normal disks have a rotation speed of between 5400 (laptop) and 7200
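The rotation-speed argument can be made concrete: at 7200 rpm one revolution takes 60000/7200 ≈ 8.33 ms, so the average rotational latency is about 4.17 ms, and a chunk so small that it transfers in far less time than that wastes most of each access on latency rather than data. A quick check of the numbers:

```shell
# Average rotational latency = time for half a revolution.
awk 'BEGIN {
  for (rpm = 5400; rpm <= 7200; rpm += 1800)
    printf "%d rpm: revolution %.2f ms, avg latency %.2f ms\n",
           rpm, 60000 / rpm, 60000 / rpm / 2
}'
```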
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 18:34:47 +0300)
...
So.. probably this is the way your arrays are being assembled, since you
do have HOMEHOST in your mdadm.conf... Looks like it should work, after
all... ;) And in this case there's no need to specify additional
On Tuesday February 5, [EMAIL PROTECTED] wrote:
% mdadm --zero-superblock /dev/sdb1
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
That's weird.
Why can't it open it?
Maybe you aren't running as root (The '%' prompt is suspicious).
Maybe the kernel has been told to forget about the
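Both of Neil's hypotheses can be checked before retrying; a small sketch (the device name is the one from the thread, and the /proc/mdstat check is commented out since it needs the actual machine):

```shell
# 1. Are we root? mdadm needs write access to the device node.
[ "$(id -u)" -eq 0 ] || echo "not root - rerun with sudo"
# 2. Is the kernel still holding sdb1 in an active array?
#    (commented out: needs the real machine)
#   grep -w sdb1 /proc/mdstat && echo "sdb1 is still in an active array"
```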
Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500 (EST))
I remember testing with bonnie++ and raid10 was about half the speed
(200-265 MiB/s) as RAID5 (400-420 MiB/s) for sequential output,
writing on raid10 is supposed to be half the speed of reading. That's
because it
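The halving follows from the layout: in raid10 with two copies (the n2 layout), every written block must go to two drives, while reads can be spread over all spindles. A back-of-the-envelope check against the bonnie++ figures above, assuming a hypothetical 70 MB/s per drive across six drives:

```shell
# Two copies per write means write bandwidth ~ read bandwidth / 2.
awk 'BEGIN { per = 70; drives = 6; copies = 2
  printf "read ~ %d MB/s, write ~ %d MB/s\n",
         per * drives, per * drives / copies }'
```

The result is in the same ballpark as the measured 400-420 MiB/s reads and 200-265 MiB/s writes.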
On Tue, Feb 05, 2008 at 05:28:27PM -0500, Justin Piszcz wrote:
Could you give some figures?
I remember testing with bonnie++ and raid10 was about half the speed
(200-265 MiB/s) as RAID5 (400-420 MiB/s) for sequential output, but input
was closer to RAID5 speeds/did not seem affected
Michael Tokarev wrote:
Unfortunately a UPS does not *really* help here, because unless it has a
control program which properly shuts the system down on loss of input
power, and the battery really has the capacity to power the system while
it's shutting down (anyone tested this?
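Michael's requirement boils down to a small control loop. The `ups_on_battery` function below is a made-up stand-in for the status query that real daemons such as apcupsd or NUT provide:

```shell
# Hypothetical stand-in: a real setup would query apcupsd or NUT here.
ups_on_battery() { [ "${FAKE_ON_BATTERY:-0}" = "1" ]; }

if ups_on_battery; then
  # A real control program would run `shutdown -h now` here, early
  # enough that the battery outlasts the shutdown itself.
  echo "on battery: initiating shutdown"
fi
```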