On Thu, Aug 17, 2006 at 08:28:07AM +0200, Peter T. Breuer wrote:
> 1) if the network disk device has decided to shut down wholesale
> (temporarily) because of lack of contact over the net, then
> retries and writes are _bound_ to fail for a while, so there
> is no point in sending them now.
On Wed, Aug 16, 2006 at 06:06:24AM -0400, andy liebman wrote:
> There is absolutely NO PROBLEM making images of single disks and
> restoring them to new disks (thus, creating clones). And it is very
> fast. For an OS drive with about 4 GB of data, it only takes about 5
> minutes to make the image.
On Wed, Aug 16, 2006 at 09:38:54AM +0200, Luca Berra wrote:
> The only risk is if you ever move one disk from one machine to another.
> To work around this you can change the uuid by recreating the array with
> mdadm,
No need to re-create; according to the man page, --update=uuid should be
enough.
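For reference, the invocation looks roughly like this (a sketch; the array name /dev/md0 and the member devices are placeholders, and without an explicit --uuid= mdadm picks a random new UUID):

```shell
# Stop the array first; --update= only takes effect during assembly.
mdadm --stop /dev/md0

# Reassemble, rewriting the superblocks with a fresh random UUID.
mdadm --assemble /dev/md0 --update=uuid /dev/sda1 /dev/sdb1

# Verify the new UUID.
mdadm --detail /dev/md0 | grep UUID
```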
Jason Lunz wrote:
I just had a disk die in a 2.6.16 (debian kernel) raid1 server, and it's
triggered an oops in raid1.
We just saw this problem as well, on SUSE 2.6.16.21-0.8. However, it
looks like the problem code still exists in at least 2.6.18-rc1, and I
haven't seen any patches recently
Neil introduced read-checking in 2.6.16. In earlier versions, mirror
copies were simply overwritten rather than compared.
I'm running 2.6.17-rc4:
# echo "check" > /sys/block/md0/md/sync_action
# dmesg
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maxi
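Once the check is running, its progress and result can be watched from userspace (a sketch; md0 as in the example above):

```shell
# Progress of the running check appears in /proc/mdstat.
cat /proc/mdstat

# Current sync state: "check" while running, "idle" when done.
cat /sys/block/md0/md/sync_action

# After completion, mismatch_cnt holds the count of sectors
# that differed between the mirror copies.
cat /sys/block/md0/md/mismatch_cnt
```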
Adding XFS mailing list to this e-mail to show that the grow for xfs
worked.
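For the record, the usual sequence is to reshape the md array first and then grow the filesystem on top of it (a sketch; device names, the device count, and the mount point are placeholders):

```shell
# Add a new member disk as a spare.
mdadm /dev/md0 --add /dev/sdc1

# Reshape the RAID5 from 3 to 4 active devices.
mdadm --grow /dev/md0 --raid-devices=4

# Once the reshape finishes, grow XFS online;
# xfs_growfs operates on the mounted filesystem.
xfs_growfs /mnt/data
```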
On Thu, 17 Aug 2006, 舒星 wrote:
I've only tried growing a RAID5, which was the only RAID level that I
remember being supported (for growing) in the kernel. I am not sure if
it's possible to
I know this, but how do you grow your RAID
"Also sprach ptb:"
> 4) what the network device driver wants to do is be able to identify
>the difference between primary requests and retries, and delay
>retries (or repeat them internally) with some reasonable backoff
>scheme to give them more chance of working in the face of a
>