Justin Piszcz wrote:
> Good to know/have it confirmed by someone else, the alignment does not
> matter with Linux/SW RAID.
Alignment matters when one partitions a Linux/SW RAID array.
If the partitions inside it are not aligned on a stripe
boundary, esp. in the worst case when the filesystem block...
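As an illustration only (assuming the array created later in this thread:
raid5, 64KiB chunk, 7 active disks, hence 6 data disks), one full stripe is
6 * 64KiB = 384KiB = 768 512-byte sectors, so a partition inside the array
could be started on a full-stripe boundary, e.g.:

  # hypothetical sketch: 6 data disks * 64KiB chunk = 768-sector stripe
  parted /dev/md2 unit s mkpart primary 768 100%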
On Sat, 29 Dec 2007, dean gaudet wrote:
> On Sat, 29 Dec 2007, Justin Piszcz wrote:
>
> > Curious btw what kind of filesystem size/raid type (5, but defaults I
> > assume,
> > nothing special right? (right-symmetric vs. left-symmetric, etc?)/cache
> > size/chunk size(s) are you using/testing with?
On Sat, 29 Dec 2007, Justin Piszcz wrote:
> Curious btw what kind of filesystem size/raid type (5, but defaults I assume,
> nothing special right? (right-symmetric vs. left-symmetric, etc?)/cache
> size/chunk size(s) are you using/testing with?
mdadm --create --level=5 --chunk=64 -n7 -x1 /dev/md2
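To read back what an existing array is actually using (chunk size and
parity layout, which defaults to left-symmetric), e.g.:

  mdadm --detail /dev/md2
  cat /proc/mdstat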
On Dec 29, 2007 1:58 PM, dean gaudet <[EMAIL PROTECTED]> wrote:
> On Sat, 29 Dec 2007, Dan Williams wrote:
>
> > On Dec 29, 2007 9:48 AM, dean gaudet <[EMAIL PROTECTED]> wrote:
> > > hmm bummer, i'm doing another test (rsync 3.5M inodes from another box) on
> > > the same 64k chunk array and had raised the stripe_cache_size to 1024..
On Sat, 29 Dec 2007, dean gaudet wrote:
> On Sat, 29 Dec 2007, Dan Williams wrote:
> > On Dec 29, 2007 9:48 AM, dean gaudet <[EMAIL PROTECTED]> wrote:
> > > hmm bummer, i'm doing another test (rsync 3.5M inodes from another box) on
> > > the same 64k chunk array and had raised the stripe_cache_size to 1024..
Document the amount of memory used by the stripe cache and the fact that
it's tied down and unavailable for other purposes (right?). thanks to Dan
Williams for the formula.
-dean
Signed-off-by: dean gaudet <[EMAIL PROTECTED]>
Index: linux/Documentation/md.txt
===================================================================
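For reference, the formula being documented is, as I understand it:

  memory_consumed = system_page_size * nr_disks * stripe_cache_size

e.g. with 4KiB pages, a 7-disk array and stripe_cache_size=1024:

  4096 * 7 * 1024 = 29360128 bytes, i.e. 28 MiB pinned by the cache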
On Sat, 29 Dec 2007, Dan Williams wrote:
> On Dec 29, 2007 9:48 AM, dean gaudet <[EMAIL PROTECTED]> wrote:
> > hmm bummer, i'm doing another test (rsync 3.5M inodes from another box) on
> > the same 64k chunk array and had raised the stripe_cache_size to 1024...
> > and got a hang. this time i grabbed stripe_cache_active before bumping
> > the size again -- it was only 905 active.
On Dec 29, 2007 9:48 AM, dean gaudet <[EMAIL PROTECTED]> wrote:
> hmm bummer, i'm doing another test (rsync 3.5M inodes from another box) on
> the same 64k chunk array and had raised the stripe_cache_size to 1024...
> and got a hang. this time i grabbed stripe_cache_active before bumping
> the size again -- it was only 905 active.
On Sat, 29 Dec 2007, dean gaudet wrote:
> On Tue, 25 Dec 2007, Bill Davidsen wrote:
> > The issue I'm thinking about is hardware sector size, which on modern drives
> > may be larger than 512b and therefore entail a read-alter-rewrite (RAR) cycle
> > when writing a 512b block.
> i'm not sure any shipping SATA disks have larger than 512B sectors
On Tue, 25 Dec 2007, Bill Davidsen wrote:
> The issue I'm thinking about is hardware sector size, which on modern drives
> may be larger than 512b and therefore entail a read-alter-rewrite (RAR) cycle
> when writing a 512b block.
i'm not sure any shipping SATA disks have larger than 512B sectors
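If one wants to check what a given drive reports (these tools are just an
illustration and may postdate the drives discussed here):

  hdparm -I /dev/sda | grep -i 'Sector size'
  blockdev --getpbsz /dev/sda    # physical sector size, recent util-linux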
hmm bummer, i'm doing another test (rsync 3.5M inodes from another box) on
the same 64k chunk array and had raised the stripe_cache_size to 1024...
and got a hang. this time i grabbed stripe_cache_active before bumping
the size again -- it was only 905 active. as i recall the bug we were
debugging...
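For anyone following along, the knobs referred to above are the raid5/6
sysfs files, e.g. for md2:

  cat /sys/block/md2/md/stripe_cache_active      # stripes currently in use
  echo 1024 > /sys/block/md2/md/stripe_cache_size    # cache entries (pages per disk)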
In case someone else happens upon this I have found that mdadm >=
v2.6.2 cannot add a disk to a degraded raid1 array created with mdadm
< 2.6.2.
I bisected the problem down to mdadm git commit
2fb749d1b7588985b1834e43de4ec5685d0b8d26 which appears to make an
incompatible change to the super block...
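One way to compare the metadata written by the two mdadm versions is
--examine on a member device (/dev/sda1 below is a placeholder):

  mdadm --examine /dev/sda1 | grep -i version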