Re: WebMin LVM module development

2000-12-06 Thread Luca Berra

On Wed, Dec 06, 2000 at 05:43:55PM +0800, Michael Boman wrote:
 
> - I have a problem using MD as a PV of my VG, any hints what it can be?
> <output>
> mike:/mnt# vgextend vg_test /dev/md0
> vgextend -- INFO: maximum logical volume size is 255.99 Gigabyte
> vgextend -- ERROR: no physical volumes usable to extend volume group
> "vg_test"
> </output>

It's a known problem:
LVM 0.8final does not work,
LVM 0.8.1 does work,
LVM 0.9 does not work again.

Tomorrow is a holiday in Milan, so I will try to give it a look.

Regards,
L.

-- 
Luca Berra -- [EMAIL PROTECTED]
Communication Media & Services S.r.l.



Re: [OOPS] raidsetfaulty -> raidhotremove -> raidhotadd

2000-12-06 Thread Neil Brown

On Wednesday December 6, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> >
> > Could you try this patch and see how it goes?
>
> Same result!
>

Ok... must be something else... I tried again to reproduce it, and
this time I succeeded.
The problem happens when you try to access the last 128k of a raid1
array that has been reconstructed since the last reboot.

The reconstruction creates a sliding three-pane window which is 3*128k
wide.

The leading pane ("pending") may have some outstanding I/O requests,
but no new requests will be added.
The middle pane ("ready") has no outstanding I/O requests, and gets no
new I/O requests, but does get new rebuild requests.
The trailing pane ("active") has outstanding rebuild requests, but no
new I/O requests will be added.

This window slides forward through the address space keeping IO and
reconstruction quite separate.
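
To make the bookkeeping concrete: the window is really just four
increasing sector boundaries, which is how the raid1_conf_t fields in
the patch below keep track of it.  A minimal, self-contained sketch
(struct window and classify() are invented names for illustration,
not kernel code; only the start_* boundaries mirror the real fields):

/* Illustrative sketch only: the start_* boundaries mirror the
 * raid1_conf_t fields used in the patch below, with
 * start_active <= start_ready <= start_pending <= start_future.
 */
enum pane { PANE_DONE, PANE_ACTIVE, PANE_READY, PANE_PENDING, PANE_FUTURE };

struct window {
        unsigned long start_active;     /* trailing pane: outstanding rebuild requests */
        unsigned long start_ready;      /* middle pane: receives new rebuild requests */
        unsigned long start_pending;    /* leading pane: may still hold older I/O */
        unsigned long start_future;     /* beyond the window: ordinary I/O */
};

static enum pane classify(const struct window *w, unsigned long sector)
{
        if (sector < w->start_active)
                return PANE_DONE;       /* already rebuilt, behind the window */
        if (sector < w->start_ready)
                return PANE_ACTIVE;
        if (sector < w->start_pending)
                return PANE_READY;
        if (sector < w->start_future)
                return PANE_PENDING;
        return PANE_FUTURE;             /* the rebuild has not reached this yet */
}

Each boundary only moves forward, so a request can tell at a glance
whether it falls behind, inside, or ahead of the window.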

However, the reconstruction process finishes with "ready" still
covering the tail end of the address space.  "active" has fallen off
the end, and "pending" has collapsed down to an empty pane, but "ready"
is still there.

When rebuilding after an unclean shutdown, this gets cleaned up
properly, but when rebuilding onto a spare, it doesn't.
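
In other words, the cleanup has to drain anything still counted
against the leftover panes and then collapse all four boundaries.  On
the simplified structure sketched above, the end state looks roughly
like this (again only illustrative; the real close_sync() in the patch
also waits for cnt_pending to drain and resets conf->phase, all under
conf->segment_lock):

/* Sketch of the end state the cleanup must reach, using the
 * simplified struct from the earlier sketch; not kernel code.
 */
static void collapse_window(struct window *w)
{
        /* no pane covers any sector any more, so I/O to the tail of
         * the array is treated like I/O anywhere else */
        w->start_active = 0;
        w->start_ready = 0;
        w->start_pending = 0;
        w->start_future = 0;
}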

The attached patch, which can also be found at:

 http://www.cse.unsw.edu.au/~neilb/patches/linux/2.4.0-test12-pre6/patch-G-raid1rebuild

fixed the problem.  It should apply to any recent 2.4.0-test kernel.

Please try it and confirm that it works.

Thanks,

NeilBrown


--- ./drivers/md/raid1.c	2000/12/06 22:34:27	1.3
+++ ./drivers/md/raid1.c	2000/12/06 22:37:04	1.4
@@ -798,6 +798,32 @@
         }
 }
 
+static void close_sync(raid1_conf_t *conf)
+{
+        mddev_t *mddev = conf->mddev;
+        /* If reconstruction was interrupted, we need to close the "active" and "pending"
+         * holes.
+         * we know that there are no active rebuild requests, so cnt_active == cnt_ready == 0
+         */
+        /* this is really needed when recovery stops too... */
+        spin_lock_irq(&conf->segment_lock);
+        conf->start_active = conf->start_pending;
+        conf->start_ready = conf->start_pending;
+        wait_event_lock_irq(conf->wait_ready, !conf->cnt_pending, conf->segment_lock);
+        conf->start_active = conf->start_ready = conf->start_pending = conf->start_future;
+        conf->start_future = mddev->sb->size+1;
+        conf->cnt_pending = conf->cnt_future;
+        conf->cnt_future = 0;
+        conf->phase = conf->phase ^1;
+        wait_event_lock_irq(conf->wait_ready, !conf->cnt_pending, conf->segment_lock);
+        conf->start_active = conf->start_ready = conf->start_pending = conf->start_future = 0;
+        conf->phase = 0;
+        conf->cnt_future = conf->cnt_done;
+        conf->cnt_done = 0;
+        spin_unlock_irq(&conf->segment_lock);
+        wake_up(&conf->wait_done);
+}
+
 static int raid1_diskop(mddev_t *mddev, mdp_disk_t **d, int state)
 {
         int err = 0;
@@ -910,6 +936,7 @@
          * Deactivate a spare disk:
          */
         case DISKOP_SPARE_INACTIVE:
+                close_sync(conf);
                 sdisk = conf->mirrors + spare_disk;
                 sdisk->operational = 0;
                 sdisk->write_only = 0;
@@ -922,7 +949,7 @@
          * property)
          */
         case DISKOP_SPARE_ACTIVE:
-
+                close_sync(conf);
                 sdisk = conf->mirrors + spare_disk;
                 fdisk = conf->mirrors + failed_disk;
 
@@ -1213,27 +1240,7 @@
                 conf->resync_mirrors = 0;
         }
 
-        /* If reconstruction was interrupted, we need to close the "active" and "pending"
-         * holes.
-         * we know that there are no active rebuild requests, so cnt_active == cnt_ready == 0
-         */
-        /* this is really needed when recovery stops too... */
-        spin_lock_irq(&conf->segment_lock);
-        conf->start_active = conf->start_pending;
-        conf->start_ready = conf->start_pending;
-        wait_event_lock_irq(conf->wait_ready, !conf->cnt_pending, conf->segment_lock);
-        conf->start_active = conf->start_ready = conf->start_pending = conf->start_future;
-        conf->start_future = mddev->sb->size+1;
-        conf->cnt_pending = conf->cnt_future;
-        conf->cnt_future = 0;
-        conf->phase = conf->phase ^1;
-        wait_event_lock_irq(conf->wait_ready, !conf->cnt_pending, conf->segment_lock);
-        conf->start_active = conf->start_ready = conf->start_pending = conf->start_future = 0;
-        conf->phase = 0;
-        conf->cnt_future = conf->cnt_done;
-        conf->cnt_done = 0;
-        spin_unlock_irq(&conf->segment_lock);
-        wake_up(&conf->wait_done);
+        close_sync(conf);
 
         up(&mddev->recovery_sem);
         raid1_shrink_buffers(conf);