One more... (sorry about the dribs-and-drabs approach)
NeilBrown
### Comments for Changeset
raid5_mergeable_bvec tries to ensure that raid5 never sees a read
request that does not fit within just one chunk. However as we
must always accept a single-page read, that is not always possible.
So wh
Another md patch suitable for 2.6.20.
Thanks,
NeilBrown
### Comments for Changeset
If a GFP_KERNEL allocation is attempted in md while the mddev_lock is
held, it is possible for a deadlock to eventuate.
This happens if the array was marked 'clean', and the memalloc triggers
a write-out to the m
On Friday January 5, [EMAIL PROTECTED] wrote:
> In the stop function of any raid level driver, such as faulty,
> multipath, and raidX,
> it would be better to set mddev->private to NULL before freeing the conf structure.
We set it to NULL immediately after the free, and it really doesn't
make any d
On Wednesday January 24, [EMAIL PROTECTED] wrote:
>
> Okay-- thanks for the explanation and I will await a future patch..
>
This would be that patch. It doesn't seem to break anything, but I
haven't reproduced the bug yet (I think I need to reduce the amount of
memory I have available) so I have
Justin Piszcz wrote:
On Mon, 22 Jan 2007, Andrew Morton wrote:
On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz <[EMAIL PROTECTED]>
wrote:
Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
Running with PREEMP
On Thu, 25 Jan 2007, Pavel Machek wrote:
> Hi!
>
> > > Is it highmem-related? Can you try it with mem=256M?
> >
> > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot
> > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I
> > use an onboard graphics
Justin Piszcz wrote:
On Mon, 22 Jan 2007, Andrew Morton wrote:
After the oom-killing, please see if you can free up the ZONE_NORMAL memory
via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can
work out what happened to the missing couple-of-hundred MB from
ZONE_NORMAL.
Ru
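Andrew's suggestion above can be carried out along these lines (a sketch only;
writing to drop_caches requires root, and it only releases clean pagecache,
dentries and inodes, so run sync first):

```shell
# Compare free memory before and after dropping the caches, to see
# whether the "missing" ZONE_NORMAL memory comes back.
grep -E 'MemFree|Cached' /proc/meminfo
sync                                  # flush dirty pages so they become droppable
echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
grep -E 'MemFree|Cached' /proc/meminfo
```

If the memory does not reappear after this, it is pinned by something other
than reclaimable cache, which is the interesting case for the OOM report.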
Hi!
> > Is it highmem-related? Can you try it with mem=256M?
>
> Bad idea, the kernel crashes & burns when I use mem=256, I had to boot
> 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I
> use an onboard graphics controller that has 128MB of RAM allocated to it
> and I
On Thu, 25 Jan 2007, Neil Brown wrote:
> On Wednesday January 24, [EMAIL PROTECTED] wrote:
> > Here you go Neil:
> >
> > p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size
> > p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size
> > p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size
>
On Wednesday January 24, [EMAIL PROTECTED] wrote:
> Here you go Neil:
>
> p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 4096 > /sys/block/md3/md/stripe_cache_size
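For reference, the memory cost of those stripe_cache_size values can be
estimated as entries x page size x member disks, on the assumption that md
keeps one page per device per cached stripe (4 KiB pages and a 4-disk array
assumed below; adjust for your setup):

```shell
# Sketch: estimate memory consumed by the raid5 stripe cache.
ENTRIES=4096        # value written to stripe_cache_size
PAGE=4096           # bytes per page (4 KiB)
DISKS=4             # member devices in the array
BYTES=$((ENTRIES * PAGE * DISKS))
echo "$((BYTES / 1024 / 1024)) MiB"   # 4096 entries on 4 disks -> 64 MiB
```

So stepping 512 -> 4096 as above grows the cache's footprint roughly
eightfold, which matters when ZONE_NORMAL is already tight.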
On Mon, 22 Jan 2007, Andrew Morton wrote:
> > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz <[EMAIL PROTECTED]>
> > wrote:
> > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke
> > the OOM killer and kill all of my processes?
>
> What's that? Software raid
And FYI yes I used mem=256M just as you said, not mem=256.
Justin.
On Wed, 24 Jan 2007, Justin Piszcz wrote:
> > Is it highmem-related? Can you try it with mem=256M?
>
> Bad idea, the kernel crashes & burns when I use mem=256, I had to boot
> 2.6.20-rc5-6 single to get back into my machine, ve
> Is it highmem-related? Can you try it with mem=256M?
Bad idea, the kernel crashes & burns when I use mem=256, I had to boot
2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I
use an onboard graphics controller that has 128MB of RAM allocated to it
and I believe the ICH8
On Mon, 22 Jan 2007, Chuck Ebbert wrote:
> Justin Piszcz wrote:
> > My .config is attached, please let me know if any other information is
> > needed and please CC (lkml) as I am not on the list, thanks!
> >
> > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to
> > the R
On Wednesday January 24, [EMAIL PROTECTED] wrote:
> I have for a long time been wondering if it is possible to convert a 2
> disk Raid-1 array to a 3 disk Raid-5 array using mdadm?
>
> Tonight I stumbled upon this article:
>
> http://www.n8gray.org/blog/2006/09/05/stupid-raid-tricks-with-evms-an
I have for a long time been wondering if it is possible to convert a 2
disk Raid-1 array to a 3 disk Raid-5 array using mdadm?
Tonight I stumbled upon this article:
http://www.n8gray.org/blog/2006/09/05/stupid-raid-tricks-with-evms-and-mdadm/
Is this safe / ok to do? Any comments from you mdad
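The article's trick amounts to roughly the following (a sketch only, with
hypothetical device names: /dev/sdb1 and /dev/sdc1 are the raid1 halves,
/dev/sdd1 is the new third disk; the array runs with no redundancy partway
through, so a backup first is essential, and --grow needs a kernel with
raid5 reshape support):

```shell
# 1. Break the mirror and build a degraded 2-disk raid5 from one half.
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm --create /dev/md1 --level=5 --raid-devices=2 /dev/sdc1 missing

# 2. Copy the filesystem from /dev/md0 to /dev/md1, then retire the raid1.
mdadm --stop /dev/md0

# 3. Restore redundancy, add the third disk as a spare, and reshape.
mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdd1
mdadm --grow /dev/md1 --raid-devices=3
```

This is not a tested recipe, just an illustration of the shape of the
procedure the linked article describes.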
On 23 Jan 2007, Neil Brown said:
> On Tuesday January 23, [EMAIL PROTECTED] wrote:
>>
>> My question is then : what prevents the upper layer to open the array
>> read-write, submit a write and make the md code BUG_ON() ?
>
> The theory is that when you tell an md array to become read-only, it
> t
This email lists some known regressions in 2.6.20-rc5 compared to 2.6.19
that are not yet fixed in Linus' tree.
If you find your name in the Cc header, you are either submitter of one
of the bugs, maintainer of an affected subsystem or driver, a patch
of yours caused a breakage, or I'm considering
Hi all,
My kernel is 2.6.18, and I use mdadm 2.5.3 to manage my SW-RAID.
I always run "sync" and "umount" before resetting, but Linux always asks
me to run "fsck" when I mount again. Why?
THX
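One likely cause is that the array itself is never cleanly stopped before the
reset, so the superblock still says "dirty" at the next boot. A shutdown
sequence along these lines avoids that (a sketch; /dev/md0 and /mnt/raid are
placeholders for your actual array and mount point):

```shell
sync                      # flush dirty buffers to disk
umount /mnt/raid          # cleanly unmount the filesystem
mdadm --stop /dev/md0     # mark the array clean and release it
cat /proc/mdstat          # verify the array is gone before resetting
```

If fsck is still demanded after a clean stop, the problem is in the
filesystem layer rather than md.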
On 20.01.2007 09:56:21, Benjamin Schieder wrote:
> md3 : active raid5 hda6[0] hdf6[3] hdb6[2] hdc6[1]
> 12000192 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
I just noticed something.
Before I grew the md3 array, it did NOT assemble with
# mdadm -As --auto=yes --symlink=yes
I