On Dec 6, 2007 1:06 AM, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 5 Dec 2007, Jon Nelson wrote:
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O (as measured by dstat)
be really horrible.
Any
On Thu, 6 Dec 2007, David Rees wrote:
On Dec 6, 2007 1:06 AM, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 5 Dec 2007, Jon Nelson wrote:
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O (as measured by dstat)
On 12/6/07, David Rees [EMAIL PROTECTED] wrote:
On Dec 6, 2007 1:06 AM, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 5 Dec 2007, Jon Nelson wrote:
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O (as
On Dec 5 2007 19:29, Nix wrote:
On Dec 1 2007 06:19, Justin Piszcz wrote:
RAID1, 0.90.03 superblocks (in order to be compatible with LILO; if
you use 1.x superblocks with LILO you can't boot)
Says who? (Don't use LILO ;-)
Well, your kernels must be on a 0.90-superblocked RAID-0 or RAID-1
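For illustration, a minimal sketch of creating such a /boot mirror with 0.90
metadata (the device names here are assumptions, not taken from the mail):

  # Create a two-disk RAID-1 with old-format 0.90 superblocks.  The 0.90
  # superblock sits at the end of each member, so the filesystem starts at
  # offset 0 and an old boot loader such as LILO can read the member as if
  # it were a plain partition.
  mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
        /dev/sda1 /dev/sdb1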
Thank you.
I want to make sure I understand.
1- Does it matter which permutation of drives I use for xfs_repair (as
long as it tells me "Structure needs cleaning")? When it comes to
Linux I consider myself at an intermediate level, but I am a beginner when
it comes to RAID and filesystem
Justin Piszcz wrote:
root 2206 1 4 Dec02 ? 00:10:37 dd if=/dev/zero of=1.out bs=1M
root 2207 1 4 Dec02 ? 00:10:38 dd if=/dev/zero of=2.out bs=1M
root 2208 1 4 Dec02 ? 00:10:35 dd if=/dev/zero of=3.out bs=1M
root 2209 1 4 Dec02 ?
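For reference, a sketch of how writer processes like those above might be
started for this kind of stress test (file names and count are assumptions):

  # Start several background sequential writers, one per output file,
  # roughly matching the dd processes in the ps listing.
  for i in 1 2 3; do
      dd if=/dev/zero of=$i.out bs=1M &
  done
  wait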
I came across a situation where external MD bitmaps
aren't usable on any standard Linux distribution
unless special (non-trivial) actions are taken.
First is a small buglet in mdadm, or two.
It's not possible to specify --bitmap= on the assemble
command line - the option seems to be ignored. But
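For context, the kind of invocation being described is roughly the following
(paths and device names are assumptions):

  # Assemble an array whose write-intent bitmap lives in an external file.
  # The report above is that the --bitmap= option appears to be ignored here.
  mdadm --assemble /dev/md0 --bitmap=/var/lib/mdadm/md0.bitmap \
        /dev/sda1 /dev/sdb1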
[Cc'd to xfs list as it contains something related]
Dragos wrote:
Thank you.
I want to make sure I understand.
[Some background for the XFS list. The talk is about a broken Linux software
RAID (the reason for the breakage isn't relevant anymore). The OP seems to
have lost the order of drives in his
Michael Tokarev wrote:
It's sad that xfs refuses to mount when the structure needs
cleaning - the best way here is to actually mount it
and see what it looks like, instead of trying repair
tools. Is there some option to force-mount it still
(in read-only mode, knowing it may oops the kernel etc)?
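For what it's worth, the closest thing to such a force-mount is probably a
read-only mount that skips log recovery; a sketch (device and mount point are
assumptions), with no guarantee XFS will accept it if the metadata is badly
damaged:

  # Read-only mount, skipping log replay.  This can let you look around a
  # damaged filesystem, but it may still fail (or oops) on bad metadata.
  mount -t xfs -o ro,norecovery /dev/md0 /mnt/rescue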
On 15:31, Bill Davidsen wrote:
Thiemo posted metacode which appears correct to me,
It assumes that _exactly_ one disk has bad data, which is hard to verify
in practice. But yes, it's probably the best one can do if both P and
Q happen to be incorrect. IMHO mdadm shouldn't do this automatically
Hello,
here is the second version of the patch. With this version, the sync_thread
is also woken up when /sys/block/*/md/sync_force_parallel is set.
Though, I still don't understand why md_wakeup_thread() is not working.
Signed-off-by: Bernd Schubert [EMAIL PROTECTED]
Index:
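For illustration, the sysfs attribute mentioned above would presumably be used
like this (the array name and the value written are assumptions):

  # Ask md to let this array's resync run in parallel with resyncs on other
  # arrays, waking the sync thread so the setting takes effect immediately.
  echo 1 > /sys/block/md0/md/sync_force_parallel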
Hi,
I have a problem with my RAID array under Linux after upgrading to larger
drives. I have a machine with Windows and Linux dual-boot which had a pair
of 160GB drives in a RAID-1 mirror with 3 partitions: partition 1 = Windows
boot partition (FAT32), partition 2 = Linux /boot (ext3), partition 3
On Thu, Dec 06, 2007 at 07:39:28PM +0300, Michael Tokarev wrote:
What to do is to give xfs_repair a try for each permutation,
but again without letting it actually fix anything.
Just run it in read-only mode and see which combination
of drives gives fewer errors, or no fatal errors (there
may
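In concrete terms, the check being suggested looks something like this sketch
(the device name is an assumption; how the candidate array is brought up for
each drive order is deliberately left out):

  # With the array assembled read-only in one candidate drive order, check
  # the filesystem without modifying anything:
  xfs_repair -n /dev/md0
  # -n is no-modify mode: it only reports what would be fixed.  Compare the
  # output for each permutation and keep the order that produces the fewest
  # (or no fatal) errors before attempting a real repair.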
On Sat, 1 Dec 2007 06:26:08 -0500 (EST)
Justin Piszcz [EMAIL PROTECTED] wrote:
I am putting a new machine together and I have dual raptor raid 1 for the
root, which works just fine under all stress tests.
Then I have the WD 750 GiB drive (not RE2, desktop ones for ~150-160 on
sale now
On Thu, 6 Dec 2007, Andrew Morton wrote:
On Sat, 1 Dec 2007 06:26:08 -0500 (EST)
Justin Piszcz [EMAIL PROTECTED] wrote:
I am putting a new machine together and I have dual raptor raid 1 for the
root, which works just fine under all stress tests.
Then I have the WD 750 GiB drive (not RE2,
On Thu, 6 Dec 2007 17:38:08 -0500 (EST)
Justin Piszcz [EMAIL PROTECTED] wrote:
On Thu, 6 Dec 2007, Andrew Morton wrote:
On Sat, 1 Dec 2007 06:26:08 -0500 (EST)
Justin Piszcz [EMAIL PROTECTED] wrote:
I am putting a new machine together and I have dual raptor raid 1 for the
root,
On Thursday December 6, [EMAIL PROTECTED] wrote:
Hello,
here is the second version of the patch. With this version, the sync_thread
is also woken up when /sys/block/*/md/sync_force_parallel is set.
Though, I still don't understand why md_wakeup_thread() is not working.
Could give a little
From: H. Peter Anvin [EMAIL PROTECTED]
Make both mktables.c and its output CodingStyle compliant. Update the
copyright notice.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/mktables.c | 43
The following 3 patches for md provide some code tidy-up and a small
functionality improvement.
They do not need to go into 2.6.24 but are definitely appropriate for 2.6.25-rc1.
(Patches made against 2.6.24-rc3-mm2)
Thanks,
NeilBrown
[PATCH 001 of 3] md: raid6: Fix mktables.c
[PATCH 002 of 3] md: raid6:
I think you would have more luck posting this to
[EMAIL PROTECTED] - I think that is where support for device mapper
happens.
NeilBrown
On Thursday December 6, [EMAIL PROTECTED] wrote:
Hi,
I have a problem with my RAID array under Linux after upgrading to larger
drives. I have a machine
From: H. Peter Anvin [EMAIL PROTECTED]
Date: Fri, 26 Oct 2007 11:22:42 -0700
Clean up the coding style in raid6test/test.c. Break it apart into
subfunctions to make the code more readable.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat
Currently an md array with a write-intent bitmap does not update
that bitmap to reflect a successful partial resync. Rather, the entire
bitmap is updated when the resync completes.
This is because there is no guarantee that resync requests will
complete in order, and tracking each request
On 6 Dec 2007, Jan Engelhardt verbalised:
On Dec 5 2007 19:29, Nix wrote:
On Dec 1 2007 06:19, Justin Piszcz wrote:
RAID1, 0.90.03 superblocks (in order to be compatible with LILO; if
you use 1.x superblocks with LILO you can't boot)
Says who? (Don't use LILO ;-)
Well, your kernels must