Re: [linux-lvm] Re: 2.6.11ac5 oops while reconstructing md array and moving volumegroup with pvmove
On Sat, Apr 02, 2005 at 09:09:37AM +0300, Antti Salmela wrote:
> % mdadm --create -l 1 -n 2 /dev/md2 /dev/hde /dev/hdg
> % pvcreate /dev/md2
> % vgextend vg1 /dev/md2
> % pvmove /dev/hdf /dev/md2

A few similar reports are still appearing, possibly still related to the md
bio_clone changes that fixed some bugs for md but created new ones for dm...

Would be good if you could re-test with a current 2.6.12-

I'll look into it later this week if nobody beats me to it - please!

Alasdair
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
Re: 2.6.11ac5 oops while reconstructing md array and moving volumegroup with pvmove
On Sat, Apr 02, 2005 at 09:37:07AM +1000, Neil Brown wrote:
> On Friday April 1, [EMAIL PROTECTED] wrote:
> > I had created a new raid1 array and started moving a volume group to it
> > at the same time while it was reconstructing. Got this oops after
> > several hours,
>
> The subject says 'md array' but the Oops seems to say 'dm raid1 array'.

Subject is correct. Events went roughly like this:

% mdadm --create -l 1 -n 2 /dev/md2 /dev/hde /dev/hdg
% pvcreate /dev/md2
% vgextend vg1 /dev/md2
% pvmove /dev/hdf /dev/md2

All this while logical volumes on vg1 with ext3 filesystems were in light
use. I think pvmove uses the dm mirror target to move the underlying
physical extents; wouldn't that look like dm raid1?

Got some new oopses while trying to finish the move.

c028a7f6
Modules linked in: esp6 ah6 nfsd exportfs via686a snd_ymfpci snd_ac97_codec snd_pcm_oss snd_mixer_oss snd_pcm snd_opl3_lib snd_timer snd_hwdep snd_page_alloc snd_mpu401_uart snd_rawmidi snd_seq_device snd soundcore uhci_hcd i2c_dev w83781d i2c_sensor i2c_viapro i2c_core
CPU:    0
EIP:    0060:[core_in_sync+6/32]    Not tainted VLI
EFLAGS: 00010246   (2.6.11ac5)
EIP is at core_in_sync+0x6/0x20
eax: f8a3e000   ebx: c0407260   ecx: 0001       edx: a000
esi: a000       edi: 0001       ebp: f5ece7c0   esp: f7c04ef4
ds: 007b   es: 007b   ss: 0068
Process kmirrord/0 (pid: 329, threadinfo=f7c04000 task=c1b670a0)
Stack: c028aecb c9a2e9e0 f7c04f38 f5ece7cc c028b80e f5ece7c0 c0407328 c049ad84
       c028b973 f5ece7c0 c028b9b7 0293
Call Trace:
 [rh_state+59/80] rh_state+0x3b/0x50
 [do_writes+126/384] do_writes+0x7e/0x180
 [do_mirror+99/112] do_mirror+0x63/0x70
 [do_work+55/112] do_work+0x37/0x70
 [worker_thread+353/512] worker_thread+0x161/0x200
 [activate_task+90/112] activate_task+0x5a/0x70
 [do_work+0/112] do_work+0x0/0x70
 [default_wake_function+0/16] default_wake_function+0x0/0x10
 [__wake_up_common+55/96] __wake_up_common+0x37/0x60
 [default_wake_function+0/16] default_wake_function+0x0/0x10
 [worker_thread+0/512] worker_thread+0x0/0x200
 [kthread+148/160] kthread+0x94/0xa0
 [kthread+0/160] kthread+0x0/0xa0
 [kernel_thread_helper+5/24] kernel_thread_helper+0x5/0x18
Code: 27 00 00 00 00 8b 40 04 8b 40 18 0f a3 10 19 d2 31 c0 85 d2 0f 95 c0 c3 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 40 04 8b 40 1c <0f> a3 10 19 d2 31 c0 85 d2 0f 95 c0 c3 8d b6 00 00 00 00 8d bc

<1>Unable to handle kernel paging request at virtual address f8a3f560
c028a7f6
Modules linked in: esp6 ah6 nfsd exportfs via686a snd_ymfpci snd_ac97_codec snd_pcm_oss snd_mixer_oss snd_pcm snd_opl3_lib snd_timer snd_hwdep snd_page_alloc snd_mpu401_uart snd_rawmidi snd_seq_device snd soundcore uhci_hcd i2c_dev w83781d i2c_sensor i2c_viapro i2c_core
CPU:    0
EIP:    0060:[core_in_sync+6/32]    Not tainted VLI
EFLAGS: 00010246   (2.6.11ac5)
EIP is at core_in_sync+0x6/0x20
eax: f8a3e000   ebx: c0407260   ecx:            edx: ab04
esi: cb2df820   edi: f5ece7c0   ebp:            esp: cb056b70
ds: 007b   es: 007b   ss: 0068
Process ssh (pid: 16046, threadinfo=cb056000 task=c1b670a0)
Stack: c028bf7c f7c05f60 f117559c cb2df820 f882a098 c027ffb0 02ac1130 cb056bdc
       cb2df860 cb2df860 c0280367 0001 0008 c1b670a0 c01266e0 f117559c
       00036ed0 f882a098 cb2df860 cb056bdc cb2df860 cb056c3c
Call Trace:
 [mirror_map+76/192] mirror_map+0x4c/0xc0
 [__map_bio+64/256] __map_bio+0x40/0x100
 [__clone_and_map+551/576] __clone_and_map+0x227/0x240
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [__split_bio+142/256] __split_bio+0x8e/0x100
 [dm_request+100/144] dm_request+0x64/0x90
 [generic_make_request+326/480] generic_make_request+0x146/0x1e0
 [__find_get_block+66/160] __find_get_block+0x42/0xa0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [bio_clone+160/176] bio_clone+0xa0/0xb0
 [__map_bio+64/256] __map_bio+0x40/0x100
 [__clone_and_map+551/576] __clone_and_map+0x227/0x240
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [__split_bio+142/256] __split_bio+0x8e/0x100
 [dm_request+100/144] dm_request+0x64/0x90
 [generic_make_request+326/480] generic_make_request+0x146/0x1e0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [bh_lru_install+122/176] bh_lru_install+0x7a/0xb0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [mempool_alloc+99/256] mempool_alloc+0x63/0x100
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [submit_bio+88/224] submit_bio+0x58/0xe0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [ext3_getblk+131/576] ext3_getblk+0x83/0x240
 [bio_alloc+200/400] bio_alloc+0xc8/0x190
 [submit_bh+193/272] submit_bh+0xc1/0x110
 [ll_rw_block+91/128] ll_rw_block+0x5b/
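The point about pvmove using the dm mirror target can be checked directly: while a move is in flight, the device-mapper table for the temporary pvmove device should show a "mirror" target, which is why the traces above land in dm-raid1 functions like mirror_map() even though /dev/md2 itself is an md array. A minimal sketch of that check; the device name and sample table line below are illustrative assumptions, not output captured from the machine in this report:

```shell
# Hypothetical "dmsetup table" output line for a pvmove temporary device
# (assumed format: name, start sector, length, target type, target args):
sample_table='vg1-pvmove0: 0 409600 mirror core 1 1024 2 254:3 0 254:4 0'

# Field 4 is the dm target type; for an active pvmove it reads "mirror",
# matching the dm-raid1 code (mirror_map, core_in_sync) in the oops.
target=$(echo "$sample_table" | awk '{print $4}')
echo "$target"
```

On a live system the equivalent would be running `dmsetup table` as root while pvmove is active and looking for the pvmove device's target type.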
Re: 2.6.11ac5 oops while reconstructing md array and moving volumegroup with pvmove
On Friday April 1, [EMAIL PROTECTED] wrote:
> I had created a new raid1 array and started moving a volume group to it at
> the same time while it was reconstructing. Got this oops after several
> hours,

The subject says 'md array' but the Oops seems to say 'dm raid1 array'.

Could you please clarify exactly what the configuration is.

Thanks,
NeilBrown
2.6.11ac5 oops while reconstructing md array and moving volumegroup with pvmove
I had created a new raid1 array and started moving a volume group to it at
the same time while it was reconstructing. Got this oops after several
hours, apparently while cron jobs were running. 1 GB of memory, HIGHMEM
configured.

Unable to handle kernel paging request at virtual address f88d63a0
 printing eip:
c028a7f6
*pde = 018eb067
*pte =
Oops: [#1]
Modules linked in: esp6 ah6 nfsd exportfs via686a snd_ymfpci snd_ac97_codec snd_pcm_oss snd_mixer_oss snd_pcm snd_opl3_lib snd_timer snd_hwdep snd_page_alloc snd_mpu401_uart snd_rawmidi snd_seq_device snd soundcore uhci_hcd i2c_dev w83781d i2c_sensor i2c_viapro i2c_core
CPU:    0
EIP:    0060:[core_in_sync+6/32]    Not tainted VLI
EFLAGS: 00010246   (2.6.11ac5)
EIP is at core_in_sync+0x6/0x20
eax: f88d3000   ebx: c0407260   ecx:            edx: 00019d00
esi: f21d7aa0   edi: f6c78720   ebp:            esp: ea22eb74
ds: 007b   es: 007b   ss: 0068
Process find (pid: 14250, threadinfo=ea22e000 task=ebdf50e0)
Stack: c028bf7c f7c05f60 e87945fc f21d7aa0 f882c06c c027ffb0 06740010 ea22ebe0
       ebed0520 ebed0520 c0280367 0001 0008 ebdf50e0 c01266e0 e87945fc
       0263bff0 f882c06c ebed0520 ea22ebe0 ebed0520 ea22ec40
Call Trace:
 [mirror_map+76/192] mirror_map+0x4c/0xc0
 [__map_bio+64/256] __map_bio+0x40/0x100
 [__clone_and_map+551/576] __clone_and_map+0x227/0x240
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [__split_bio+142/256] __split_bio+0x8e/0x100
 [dm_request+100/144] dm_request+0x64/0x90
 [generic_make_request+326/480] generic_make_request+0x146/0x1e0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [bio_clone+160/176] bio_clone+0xa0/0xb0
 [__map_bio+64/256] __map_bio+0x40/0x100
 [__clone_and_map+551/576] __clone_and_map+0x227/0x240
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [__split_bio+142/256] __split_bio+0x8e/0x100
 [dm_request+100/144] dm_request+0x64/0x90
 [ext3_get_block_handle+123/736] ext3_get_block_handle+0x7b/0x2e0
 [dm_request+100/144] dm_request+0x64/0x90
 [generic_make_request+326/480] generic_make_request+0x146/0x1e0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [mempool_alloc+99/256] mempool_alloc+0x63/0x100
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [submit_bio+88/224] submit_bio+0x58/0xe0
 [autoremove_wake_function+0/80] autoremove_wake_function+0x0/0x50
 [__find_get_block+66/160] __find_get_block+0x42/0xa0
 [bio_alloc+200/400] bio_alloc+0xc8/0x190
 [submit_bh+193/272] submit_bh+0xc1/0x110
 [__ext3_get_inode_loc+309/528] __ext3_get_inode_loc+0x135/0x210
 [schedule+691/1184] schedule+0x2b3/0x4a0
 [generic_unplug_device+6/16] generic_unplug_device+0x6/0x10
 [ext3_read_inode+53/896] ext3_read_inode+0x35/0x380
 [ext3_alloc_inode+15/80] ext3_alloc_inode+0xf/0x50
 [alloc_inode+22/336] alloc_inode+0x16/0x150
 [ext3_lookup+132/176] ext3_lookup+0x84/0xb0
 [real_lookup+174/208] real_lookup+0xae/0xd0
 [do_lookup+126/144] do_lookup+0x7e/0x90
 [link_path_walk+1561/2880] link_path_walk+0x619/0xb40
 [path_lookup+109/272] path_lookup+0x6d/0x110
 [__user_walk+47/96] __user_walk+0x2f/0x60
 [vfs_lstat+26/80] vfs_lstat+0x1a/0x50
 [sys_lstat64+18/48] sys_lstat64+0x12/0x30
 [shmem_populate+173/336] shmem_populate+0xad/0x150
 [do_IRQ+69/96] do_IRQ+0x45/0x60
 [sysenter_past_esp+82/117] sysenter_past_esp+0x52/0x75
Code: 27 00 00 00 00 8b 40 04 8b 40 18 0f a3 10 19 d2 31 c0 85 d2 0f 95 c0 c3 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 40 04 8b 40 1c <0f> a3 10 19 d2 31 c0 85 d2 0f 95 c0 c3 8d b6 00 00 00 00 8d bc

--
Antti Salmela
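Even without a matching System.map or a ksymoops run, an oops like the one above can be triaged from its "EIP is at" line; every trace in this thread points at core_in_sync(), which appears to be in the dm mirror dirty-log code. A minimal text-processing sketch; the variable simply holds a line copied from the report:

```shell
# Pull the faulting symbol out of the decoded EIP line of the oops.
eip_line='EIP is at core_in_sync+0x6/0x20'
sym=${eip_line#EIP is at }   # strip the prefix, leaving symbol+offset/size
func=${sym%%+*}              # keep only the function name
echo "$func"
```

On a real crash the same line can be grepped out of dmesg or the serial-console log before the symbol tables are available.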