Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-06-05 Thread Tsutomu Itoh
Hi liubo,

(2011/06/01 19:44), Tsutomu Itoh wrote:
> Hi, liubo,
> 
> (2011/06/01 18:42), liubo wrote:
>> On 06/01/2011 04:12 PM, liubo wrote:
>>> On 06/01/2011 03:44 PM, liubo wrote:
>>>> On 05/31/2011 08:27 AM, Tsutomu Itoh wrote:
>>>>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>>>>
>>>>> /test5 is as follows:
>>>>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>>>>> #
>>>>> # btrfs fi sh /dev/sdc3
>>>>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>>>>>  Total devices 5 FS bytes used 7.87MB
>>>>>  devid1 size 10.00GB used 2.02GB path /dev/sdc3
>>>>>  devid2 size 15.01GB used 3.00GB path /dev/sdc5
>>>>>  devid3 size 15.01GB used 3.00GB path /dev/sdc6
>>>>>  devid4 size 20.01GB used 2.01GB path /dev/sdc7
>>>>>  devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>>>>
>>>>> Btrfs v0.19-50-ge6bd18d
>>>>> # btrfs fi df /test5
>>>>> Data, RAID0: total=10.00GB, used=3.52MB
>>>>> Data: total=8.00MB, used=1.60MB
>>>>> System, RAID1: total=8.00MB, used=4.00KB
>>>>> System: total=4.00MB, used=0.00
>>>>> Metadata, RAID1: total=1.00GB, used=216.00KB
>>>>> Metadata: total=8.00MB, used=0.00
>>>>
>>>>
>>>> Hi, Itoh san,
>>>>
>>>> I've come up with a patch aiming to fix this bug.
>>>> The problem is that the inode allocator stores one inode cache per root,
>>>> which is at least not good for the relocation tree, because we only
>>>> allocate new inode numbers from the fs tree or a file tree (subvol/snapshot).
>>>>
>>>> I've tested it with your run.sh and it works well on my box, so you can
>>>> try this:
>
>>
>> I've tested the following patch for about 1.5 hours, and nothing happened.
>> Would you please test this patch as well?
> 
> Thank you for your investigation.
> 
> I will also test again, but I cannot test until next week because I
> will go to LinuxCon tomorrow and the day after.
> 

I also tested.

The problem did not occur even though I ran the test script for about
two hours.


> Thanks,
> Tsutomu
> 
> 
>>
>> thanks,
>>
>> From: Liu Bo
>>
>> [PATCH] Btrfs: fix save ino cache bug
>>
>> We just get new inode number from fs root or subvol/snap root,
>> so we'd like to save fs/subvol/snap root's inode cache into disk.
>>
>> Signed-off-by: Liu Bo
>> ---
>>   fs/btrfs/inode-map.c |  6 ++++++
>>   1 files changed, 6 insertions(+), 0 deletions(-)
>>
>> diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
>> index 0009705..8c0c25b 100644
>> --- a/fs/btrfs/inode-map.c
>> +++ b/fs/btrfs/inode-map.c
>> @@ -372,6 +372,12 @@ int btrfs_save_ino_cache(struct btrfs_root *root,
>>  int prealloc;
>>  bool retry = false;
>>
>> +	/* only fs tree and subvol/snap needs ino cache */
>> +	if (root->root_key.objectid != BTRFS_FS_TREE_OBJECTID &&
>> +	    (root->root_key.objectid < BTRFS_FIRST_FREE_OBJECTID ||
>> +	     root->root_key.objectid > BTRFS_LAST_FREE_OBJECTID))
>> +		return 0;
>> +
>>  path = btrfs_alloc_path();
>>  if (!path)
>>  return -ENOMEM;
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-06-01 Thread Tsutomu Itoh
Hi, liubo,

(2011/06/01 18:42), liubo wrote:
> On 06/01/2011 04:12 PM, liubo wrote:
>> On 06/01/2011 03:44 PM, liubo wrote:
>>> On 05/31/2011 08:27 AM, Tsutomu Itoh wrote:
>>>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>>>
>>>> /test5 is as follows:
>>>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>>>> #
>>>> # btrfs fi sh /dev/sdc3
>>>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>>>>  Total devices 5 FS bytes used 7.87MB
>>>>  devid1 size 10.00GB used 2.02GB path /dev/sdc3
>>>>  devid2 size 15.01GB used 3.00GB path /dev/sdc5
>>>>  devid3 size 15.01GB used 3.00GB path /dev/sdc6
>>>>  devid4 size 20.01GB used 2.01GB path /dev/sdc7
>>>>  devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>>>
>>>> Btrfs v0.19-50-ge6bd18d
>>>> # btrfs fi df /test5
>>>> Data, RAID0: total=10.00GB, used=3.52MB
>>>> Data: total=8.00MB, used=1.60MB
>>>> System, RAID1: total=8.00MB, used=4.00KB
>>>> System: total=4.00MB, used=0.00
>>>> Metadata, RAID1: total=1.00GB, used=216.00KB
>>>> Metadata: total=8.00MB, used=0.00
>>>>
>>>
>>> Hi, Itoh san,
>>>
>>> I've come up with a patch aiming to fix this bug.
>>> The problem is that the inode allocator stores one inode cache per root,
>>> which is at least not good for the relocation tree, because we only
>>> allocate new inode numbers from the fs tree or a file tree (subvol/snapshot).
>>>
>>> I've tested it with your run.sh and it works well on my box, so you can
>>> try this:

> 
> I've tested the following patch for about 1.5 hours, and nothing happened.
> Would you please test this patch as well?

Thank you for your investigation.

I will also test again, but I cannot test until next week because I
will go to LinuxCon tomorrow and the day after.

Thanks,
Tsutomu


> 
> thanks,
> 
> From: Liu Bo
> 
> [PATCH] Btrfs: fix save ino cache bug
> 
> We just get new inode number from fs root or subvol/snap root,
> so we'd like to save fs/subvol/snap root's inode cache into disk.
> 
> Signed-off-by: Liu Bo
> ---
>   fs/btrfs/inode-map.c |  6 ++++++
>   1 files changed, 6 insertions(+), 0 deletions(-)
> 
> diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
> index 0009705..8c0c25b 100644
> --- a/fs/btrfs/inode-map.c
> +++ b/fs/btrfs/inode-map.c
> @@ -372,6 +372,12 @@ int btrfs_save_ino_cache(struct btrfs_root *root,
>   int prealloc;
>   bool retry = false;
> 
> +	/* only fs tree and subvol/snap needs ino cache */
> +	if (root->root_key.objectid != BTRFS_FS_TREE_OBJECTID &&
> +	    (root->root_key.objectid < BTRFS_FIRST_FREE_OBJECTID ||
> +	     root->root_key.objectid > BTRFS_LAST_FREE_OBJECTID))
> +		return 0;
> +
>   path = btrfs_alloc_path();
>   if (!path)
>   return -ENOMEM;



Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-06-01 Thread liubo
On 06/01/2011 04:12 PM, liubo wrote:
> On 06/01/2011 03:44 PM, liubo wrote:
>> On 05/31/2011 08:27 AM, Tsutomu Itoh wrote:
>>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>>
>>> /test5 is as follows:
>>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>>> #
>>> # btrfs fi sh /dev/sdc3
>>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>>> Total devices 5 FS bytes used 7.87MB
>>> devid1 size 10.00GB used 2.02GB path /dev/sdc3
>>> devid2 size 15.01GB used 3.00GB path /dev/sdc5
>>> devid3 size 15.01GB used 3.00GB path /dev/sdc6
>>> devid4 size 20.01GB used 2.01GB path /dev/sdc7
>>> devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>>
>>> Btrfs v0.19-50-ge6bd18d
>>> # btrfs fi df /test5
>>> Data, RAID0: total=10.00GB, used=3.52MB
>>> Data: total=8.00MB, used=1.60MB
>>> System, RAID1: total=8.00MB, used=4.00KB
>>> System: total=4.00MB, used=0.00
>>> Metadata, RAID1: total=1.00GB, used=216.00KB
>>> Metadata: total=8.00MB, used=0.00
>>>
>>
>> Hi, Itoh san,
>>
>> I've come up with a patch aiming to fix this bug.
>> The problem is that the inode allocator stores one inode cache per root,
>> which is at least not good for the relocation tree, because we only
>> allocate new inode numbers from the fs tree or a file tree (subvol/snapshot).
>>
>> I've tested it with your run.sh and it works well on my box, so you can
>> try this:
>>

I've tested the following patch for about 1.5 hours, and nothing happened.
Would you please test this patch as well?

thanks,

From: Liu Bo 

[PATCH] Btrfs: fix save ino cache bug

We just get new inode number from fs root or subvol/snap root,
so we'd like to save fs/subvol/snap root's inode cache into disk.

Signed-off-by: Liu Bo 
---
 fs/btrfs/inode-map.c |  6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
index 0009705..8c0c25b 100644
--- a/fs/btrfs/inode-map.c
+++ b/fs/btrfs/inode-map.c
@@ -372,6 +372,12 @@ int btrfs_save_ino_cache(struct btrfs_root *root,
int prealloc;
bool retry = false;
 
+   /* only fs tree and subvol/snap needs ino cache */
+   if (root->root_key.objectid != BTRFS_FS_TREE_OBJECTID &&
+   (root->root_key.objectid < BTRFS_FIRST_FREE_OBJECTID ||
+root->root_key.objectid > BTRFS_LAST_FREE_OBJECTID))
+   return 0;
+
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;
-- 
1.6.5.2



Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-06-01 Thread liubo
On 06/01/2011 03:44 PM, liubo wrote:
> On 05/31/2011 08:27 AM, Tsutomu Itoh wrote:
>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>
>> /test5 is as follows:
>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>> #
>> # btrfs fi sh /dev/sdc3
>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>> Total devices 5 FS bytes used 7.87MB
>> devid1 size 10.00GB used 2.02GB path /dev/sdc3
>> devid2 size 15.01GB used 3.00GB path /dev/sdc5
>> devid3 size 15.01GB used 3.00GB path /dev/sdc6
>> devid4 size 20.01GB used 2.01GB path /dev/sdc7
>> devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>
>> Btrfs v0.19-50-ge6bd18d
>> # btrfs fi df /test5
>> Data, RAID0: total=10.00GB, used=3.52MB
>> Data: total=8.00MB, used=1.60MB
>> System, RAID1: total=8.00MB, used=4.00KB
>> System: total=4.00MB, used=0.00
>> Metadata, RAID1: total=1.00GB, used=216.00KB
>> Metadata: total=8.00MB, used=0.00
>>
> 
> Hi, Itoh san, 
> 
> I've come up with a patch aiming to fix this bug.
> The problem is that the inode allocator stores one inode cache per root,
> which is at least not good for the relocation tree, because we only
> allocate new inode numbers from the fs tree or a file tree (subvol/snapshot).
> 
> I've tested with your run.sh and it works well on my box, so you can try this:
> 

Sorry, I mixed up BTRFS_FIRST_FREE_OBJECTID and BTRFS_LAST_FREE_OBJECTID;
please ignore this version.

> ===
> based on 3.0, commit d6c0cb379c5198487e4ac124728cbb2346d63b1f
> ===
> diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
> index 0009705..ebc2a7b 100644
> --- a/fs/btrfs/inode-map.c
> +++ b/fs/btrfs/inode-map.c
> @@ -372,6 +372,10 @@ int btrfs_save_ino_cache(struct btrfs_root *root,
>   int prealloc;
>   bool retry = false;
>  
> + if (root->root_key.objectid != BTRFS_FS_TREE_OBJECTID &&
> + root->root_key.objectid < BTRFS_FIRST_FREE_OBJECTID)
> + return 0;
> +
>   path = btrfs_alloc_path();
>   if (!path)
>   return -ENOMEM;
> 
> 
> 
> thanks,
> liubo
> 



Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-06-01 Thread liubo
On 05/31/2011 08:27 AM, Tsutomu Itoh wrote:
> The panic occurred when 'btrfs fi bal /test5' was executed.
> 
> /test5 is as follows:
> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
> #
> # btrfs fi sh /dev/sdc3
> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
> Total devices 5 FS bytes used 7.87MB
> devid1 size 10.00GB used 2.02GB path /dev/sdc3
> devid2 size 15.01GB used 3.00GB path /dev/sdc5
> devid3 size 15.01GB used 3.00GB path /dev/sdc6
> devid4 size 20.01GB used 2.01GB path /dev/sdc7
> devid5 size 10.00GB used 2.01GB path /dev/sdc8
> 
> Btrfs v0.19-50-ge6bd18d
> # btrfs fi df /test5
> Data, RAID0: total=10.00GB, used=3.52MB
> Data: total=8.00MB, used=1.60MB
> System, RAID1: total=8.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, RAID1: total=1.00GB, used=216.00KB
> Metadata: total=8.00MB, used=0.00
> 

Hi, Itoh san, 

I've come up with a patch aiming to fix this bug.
The problem is that the inode allocator stores one inode cache per root,
which is at least not good for the relocation tree, because we only
allocate new inode numbers from the fs tree or a file tree (subvol/snapshot).

I've tested with your run.sh and it works well on my box, so you can try this:

===
based on 3.0, commit d6c0cb379c5198487e4ac124728cbb2346d63b1f
===
diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
index 0009705..ebc2a7b 100644
--- a/fs/btrfs/inode-map.c
+++ b/fs/btrfs/inode-map.c
@@ -372,6 +372,10 @@ int btrfs_save_ino_cache(struct btrfs_root *root,
int prealloc;
bool retry = false;
 
+   if (root->root_key.objectid != BTRFS_FS_TREE_OBJECTID &&
+   root->root_key.objectid < BTRFS_FIRST_FREE_OBJECTID)
+   return 0;
+
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;



thanks,
liubo

> ---
> Tsutomu
> 
> 
> 
> <6>device fsid 25424ba6b248ec38-64dc2480b05ec68c devid 5 transid 4 /dev/sdc8
> <6>device fsid 25424ba6b248ec38-64dc2480b05ec68c devid 1 transid 7 /dev/sdc3
> <6>btrfs: enabling disk space caching
> <6>btrfs: use lzo compression
> <6>device fsid 69423c117ae771dd-c275f966f982cf84 devid 1 transid 7 /dev/sdd4
> <6>btrfs: disk space caching is enabled
> <6>btrfs: relocating block group 1103101952 flags 9
> <6>btrfs: found 318 extents
> <0>[ cut here ]
> <2>kernel BUG at fs/btrfs/relocation.c:4285!
> <0>invalid opcode:  [#1] SMP
> <4>CPU 1
> <4>Modules linked in: btrfs autofs4 sunrpc 8021q garp stp llc cpufreq_ondemand acpi_cpufreq freq_table mperf ipv6 zlib_deflate libcrc32c ext3 jbd dm_mirror dm_region_hash dm_log dm_mod kvm uinput ppdev parport_pc parport sg pcspkr i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support tg3 shpchp i3000_edac edac_core ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom megaraid_sas pata_acpi ata_generic ata_piix floppy [last unloaded: btrfs]
> <4>Pid: 6173, comm: btrfs Not tainted 3.0.0-rc1btrfs-test #1 FUJITSU-SV PRIMERGY/D2399
> <4>RIP: 0010:[]  [] btrfs_reloc_cow_block+0x22c/0x270 [btrfs]
> <4>RSP: 0018:8801514236a8  EFLAGS: 00010246
> <4>RAX: 8801930dc000 RBX: 8801936f5800 RCX: 880163241d60
> <4>RDX: 88016325dd18 RSI: 8801931a3000 RDI: 8801632fb3e0
> <4>RBP: 880151423708 R08: 880151423784 R09: 0100
> <4>R10:  R11: 880163224d58 R12: 8801931a3000
> <4>R13: 88016325dd18 R14: 8801632fb3e0 R15: 
> <4>FS:  7f41577ce740() GS:88019fd0() knlGS:
> <4>CS:  0010 DS:  ES:  CR0: 8005003b
> <4>CR2: 010afb80 CR3: 00015142e000 CR4: 06e0
> <4>DR0:  DR1:  DR2: 
> <4>DR3:  DR6: 0ff0 DR7: 0400
> <4>Process btrfs (pid: 6173, threadinfo 880151422000, task 880151997580)
> <0>Stack:
> <4> 88016325dd18 8801632fb3e0 880151423708 a042b2ed
> <4>  0001 880151423708 8801931a3000
> <4> 880163241d60 88016325dd18 8801632fb3e0 
> <0>Call Trace:
> <4> [] ? update_ref_for_cow+0x22d/0x330 [btrfs]
> <4> [] __btrfs_cow_block+0x451/0x5e0 [btrfs]
> <4> [] btrfs_cow_block+0x10b/0x250 [btrfs]
> <4> [] btrfs_search_slot+0x557/0x870 [btrfs]
> <4> [] ? generic_bin_search+0x1f2/0x210 [btrfs]
> <4> [] btrfs_lookup_inode+0x2f/0xa0 [btrfs]
> <4> [] btrfs_update_inode+0xc2/0x140 [btrfs]
> <4> [] btrfs_save_ino_cache+0x7c/0x200 [btrfs]
> <4> [] commit_fs_roots+0xad/0x180 [btrfs]
> <4> [] btrfs_commit_transaction+0x385/0x7d0 [btrfs]
> <4> [] ? wake_up_bit+0x40/0x40
> <4> [] prepare_to_relocate+0xdf/0xf0 [btrfs]
> <4> [] relocate_block_group+0x41/0x600 [btrfs]
> <4> [] ? mutex_lock+0x1e/0x50
> <4> [] ? btrfs_clean_old_snapshots+0xa9/0x150 [btrfs]
> <4> [] btrfs_relocate_block_group+0x1b3/0x2e0 [btrf

Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-05-30 Thread Tsutomu Itoh
(2011/05/31 15:13), liubo wrote:
> On 05/31/2011 12:31 PM, Tsutomu Itoh wrote:
>> (2011/05/31 10:13), Chris Mason wrote:
>>> Excerpts from Tsutomu Itoh's message of 2011-05-30 20:27:51 -0400:
>>>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>>>
>>>> /test5 is as follows:
>>>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>>>> #
>>>> # btrfs fi sh /dev/sdc3
>>>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>>>> Total devices 5 FS bytes used 7.87MB
>>>> devid1 size 10.00GB used 2.02GB path /dev/sdc3
>>>> devid2 size 15.01GB used 3.00GB path /dev/sdc5
>>>> devid3 size 15.01GB used 3.00GB path /dev/sdc6
>>>> devid4 size 20.01GB used 2.01GB path /dev/sdc7
>>>> devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>>>
>>>> Btrfs v0.19-50-ge6bd18d
>>>> # btrfs fi df /test5
>>>> Data, RAID0: total=10.00GB, used=3.52MB
>>>> Data: total=8.00MB, used=1.60MB
>>>> System, RAID1: total=8.00MB, used=4.00KB
>>>> System: total=4.00MB, used=0.00
>>>> Metadata, RAID1: total=1.00GB, used=216.00KB
>>>> Metadata: total=8.00MB, used=0.00
>>> The oops is happening as we write the inode cache during a commit during
>>> the balance.  I did run a number of balances on the inode cache code; do
>>> you have a test script that sets up the filesystem to recreate this?
>>
>> Yes, I have.
>> In my test, the panic occurs about once in every ten runs.
>>
>> I attached the test script to this mail (though it is a rough script
>> cobbled together...).
>>
> 
> I'm getting it to run, hope we can get something valuable. ;)

I ran it again; a write error occurred, though the panic did not.

See below:
=
...

+ sleep 30
+ ./fsync3.sh
Tue May 31 13:50:57 JST 2011

+ btrfs fi bal /test5
+ wait
write error: Inappropriate ioctl for device
cmp: EOF on /test5/_de100.t

...
...

$ ls -l /test5/_de100*
-rw-r--r-- 1 root root 30 May 31 13:56 /test5/_de100.f
-rw-r--r-- 1 root root  607789056 May 31 13:56 /test5/_de100.t

Did the write error occur while writing to /test5/_de100.t, or is the
error number a mistake? (Shouldn't it be ENOSPC?)
(operation: copy from /test5/_de100.f to /test5/_de100.t)
=

Also, in my environment, the script attached to this mail seems to
trigger the panic easily.

> 
> thanks,
> liubo
> 
>> Thanks,
>> Tsutomu
>>
>>
>>> -chris
>>>


RT2.tar.gz
Description: application/gzip


Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-05-30 Thread liubo
On 05/31/2011 12:31 PM, Tsutomu Itoh wrote:
> (2011/05/31 10:13), Chris Mason wrote:
>> Excerpts from Tsutomu Itoh's message of 2011-05-30 20:27:51 -0400:
>>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>>
>>> /test5 is as follows:
>>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>>> #
>>> # btrfs fi sh /dev/sdc3
>>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>>> Total devices 5 FS bytes used 7.87MB
>>> devid1 size 10.00GB used 2.02GB path /dev/sdc3
>>> devid2 size 15.01GB used 3.00GB path /dev/sdc5
>>> devid3 size 15.01GB used 3.00GB path /dev/sdc6
>>> devid4 size 20.01GB used 2.01GB path /dev/sdc7
>>> devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>>
>>> Btrfs v0.19-50-ge6bd18d
>>> # btrfs fi df /test5
>>> Data, RAID0: total=10.00GB, used=3.52MB
>>> Data: total=8.00MB, used=1.60MB
>>> System, RAID1: total=8.00MB, used=4.00KB
>>> System: total=4.00MB, used=0.00
>>> Metadata, RAID1: total=1.00GB, used=216.00KB
>>> Metadata: total=8.00MB, used=0.00
>> The oops is happening as we write the inode cache during a commit during
>> the balance.  I did run a number of balances on the inode cache code; do
>> you have a test script that sets up the filesystem to recreate this?
> 
> Yes, I have.
> In my test, the panic occurs about once in every ten runs.
> 
> I attached the test script to this mail (though it is a rough script
> cobbled together...).
> 

I'm getting it to run, hope we can get something valuable. ;)

thanks,
liubo

> Thanks,
> Tsutomu
> 
> 
>> -chris
>>



Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-05-30 Thread Tsutomu Itoh
(2011/05/31 10:13), Chris Mason wrote:
> Excerpts from Tsutomu Itoh's message of 2011-05-30 20:27:51 -0400:
>> The panic occurred when 'btrfs fi bal /test5' was executed.
>>
>> /test5 is as follows:
>> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
>> #
>> # btrfs fi sh /dev/sdc3
>> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
>> Total devices 5 FS bytes used 7.87MB
>> devid1 size 10.00GB used 2.02GB path /dev/sdc3
>> devid2 size 15.01GB used 3.00GB path /dev/sdc5
>> devid3 size 15.01GB used 3.00GB path /dev/sdc6
>> devid4 size 20.01GB used 2.01GB path /dev/sdc7
>> devid5 size 10.00GB used 2.01GB path /dev/sdc8
>>
>> Btrfs v0.19-50-ge6bd18d
>> # btrfs fi df /test5
>> Data, RAID0: total=10.00GB, used=3.52MB
>> Data: total=8.00MB, used=1.60MB
>> System, RAID1: total=8.00MB, used=4.00KB
>> System: total=4.00MB, used=0.00
>> Metadata, RAID1: total=1.00GB, used=216.00KB
>> Metadata: total=8.00MB, used=0.00
> 
> The oops is happening as we write the inode cache during a commit during
> the balance.  I did run a number of balances on the inode cache code; do
> you have a test script that sets up the filesystem to recreate this?

Yes, I have.
In my test, the panic occurs about once in every ten runs.

I attached the test script to this mail (though it is a rough script
cobbled together...).

Thanks,
Tsutomu


> 
> -chris
> 


RT.tar.gz
Description: application/gzip


Re: [3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-05-30 Thread Chris Mason
Excerpts from Tsutomu Itoh's message of 2011-05-30 20:27:51 -0400:
> The panic occurred when 'btrfs fi bal /test5' was executed.
> 
> /test5 is as follows:
> # mount -o space_cache,compress=lzo /dev/sdc3 /test5
> #
> # btrfs fi sh /dev/sdc3
> Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
> Total devices 5 FS bytes used 7.87MB
> devid1 size 10.00GB used 2.02GB path /dev/sdc3
> devid2 size 15.01GB used 3.00GB path /dev/sdc5
> devid3 size 15.01GB used 3.00GB path /dev/sdc6
> devid4 size 20.01GB used 2.01GB path /dev/sdc7
> devid5 size 10.00GB used 2.01GB path /dev/sdc8
> 
> Btrfs v0.19-50-ge6bd18d
> # btrfs fi df /test5
> Data, RAID0: total=10.00GB, used=3.52MB
> Data: total=8.00MB, used=1.60MB
> System, RAID1: total=8.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, RAID1: total=1.00GB, used=216.00KB
> Metadata: total=8.00MB, used=0.00

The oops is happening as we write the inode cache during a commit during the
balance.  I did run a number of balances on the inode cache code; do you
have a test script that sets up the filesystem to recreate this?

-chris


[3.0-rc1] kernel BUG at fs/btrfs/relocation.c:4285!

2011-05-30 Thread Tsutomu Itoh
The panic occurred when 'btrfs fi bal /test5' was executed.

/test5 is as follows:
# mount -o space_cache,compress=lzo /dev/sdc3 /test5
#
# btrfs fi sh /dev/sdc3
Label: none  uuid: 38ec48b2-a64b-4225-8cc6-5eb08024dc64
Total devices 5 FS bytes used 7.87MB
devid1 size 10.00GB used 2.02GB path /dev/sdc3
devid2 size 15.01GB used 3.00GB path /dev/sdc5
devid3 size 15.01GB used 3.00GB path /dev/sdc6
devid4 size 20.01GB used 2.01GB path /dev/sdc7
devid5 size 10.00GB used 2.01GB path /dev/sdc8

Btrfs v0.19-50-ge6bd18d
# btrfs fi df /test5
Data, RAID0: total=10.00GB, used=3.52MB
Data: total=8.00MB, used=1.60MB
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=216.00KB
Metadata: total=8.00MB, used=0.00

---
Tsutomu



<6>device fsid 25424ba6b248ec38-64dc2480b05ec68c devid 5 transid 4 /dev/sdc8
<6>device fsid 25424ba6b248ec38-64dc2480b05ec68c devid 1 transid 7 /dev/sdc3
<6>btrfs: enabling disk space caching
<6>btrfs: use lzo compression
<6>device fsid 69423c117ae771dd-c275f966f982cf84 devid 1 transid 7 /dev/sdd4
<6>btrfs: disk space caching is enabled
<6>btrfs: relocating block group 1103101952 flags 9
<6>btrfs: found 318 extents
<0>[ cut here ]
<2>kernel BUG at fs/btrfs/relocation.c:4285!
<0>invalid opcode:  [#1] SMP
<4>CPU 1
<4>Modules linked in: btrfs autofs4 sunrpc 8021q garp stp llc cpufreq_ondemand acpi_cpufreq freq_table mperf ipv6 zlib_deflate libcrc32c ext3 jbd dm_mirror dm_region_hash dm_log dm_mod kvm uinput ppdev parport_pc parport sg pcspkr i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support tg3 shpchp i3000_edac edac_core ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom megaraid_sas pata_acpi ata_generic ata_piix floppy [last unloaded: btrfs]
<4>Pid: 6173, comm: btrfs Not tainted 3.0.0-rc1btrfs-test #1 FUJITSU-SV PRIMERGY/D2399
<4>RIP: 0010:[]  [] btrfs_reloc_cow_block+0x22c/0x270 [btrfs]
<4>RSP: 0018:8801514236a8  EFLAGS: 00010246
<4>RAX: 8801930dc000 RBX: 8801936f5800 RCX: 880163241d60
<4>RDX: 88016325dd18 RSI: 8801931a3000 RDI: 8801632fb3e0
<4>RBP: 880151423708 R08: 880151423784 R09: 0100
<4>R10:  R11: 880163224d58 R12: 8801931a3000
<4>R13: 88016325dd18 R14: 8801632fb3e0 R15: 
<4>FS:  7f41577ce740() GS:88019fd0() knlGS:
<4>CS:  0010 DS:  ES:  CR0: 8005003b
<4>CR2: 010afb80 CR3: 00015142e000 CR4: 06e0
<4>DR0:  DR1:  DR2: 
<4>DR3:  DR6: 0ff0 DR7: 0400
<4>Process btrfs (pid: 6173, threadinfo 880151422000, task 880151997580)
<0>Stack:
<4> 88016325dd18 8801632fb3e0 880151423708 a042b2ed
<4>  0001 880151423708 8801931a3000
<4> 880163241d60 88016325dd18 8801632fb3e0 
<0>Call Trace:
<4> [] ? update_ref_for_cow+0x22d/0x330 [btrfs]
<4> [] __btrfs_cow_block+0x451/0x5e0 [btrfs]
<4> [] btrfs_cow_block+0x10b/0x250 [btrfs]
<4> [] btrfs_search_slot+0x557/0x870 [btrfs]
<4> [] ? generic_bin_search+0x1f2/0x210 [btrfs]
<4> [] btrfs_lookup_inode+0x2f/0xa0 [btrfs]
<4> [] btrfs_update_inode+0xc2/0x140 [btrfs]
<4> [] btrfs_save_ino_cache+0x7c/0x200 [btrfs]
<4> [] commit_fs_roots+0xad/0x180 [btrfs]
<4> [] btrfs_commit_transaction+0x385/0x7d0 [btrfs]
<4> [] ? wake_up_bit+0x40/0x40
<4> [] prepare_to_relocate+0xdf/0xf0 [btrfs]
<4> [] relocate_block_group+0x41/0x600 [btrfs]
<4> [] ? mutex_lock+0x1e/0x50
<4> [] ? btrfs_clean_old_snapshots+0xa9/0x150 [btrfs]
<4> [] btrfs_relocate_block_group+0x1b3/0x2e0 [btrfs]
<4> [] ? btrfs_tree_unlock+0x50/0x50 [btrfs]
<4> [] btrfs_relocate_chunk+0x8b/0x680 [btrfs]
<4> [] ? btrfs_set_path_blocking+0x3d/0x50 [btrfs]
<4> [] ? read_extent_buffer+0xd8/0x1d0 [btrfs]
<4> [] ? btrfs_previous_item+0xb1/0x150 [btrfs]
<4> [] ? read_extent_buffer+0xd8/0x1d0 [btrfs]
<4> [] btrfs_balance+0x20a/0x2a0 [btrfs]
<4> [] btrfs_ioctl+0x54c/0xcb0 [btrfs]
<4> [] ? handle_mm_fault+0x15b/0x270
<4> [] ? do_page_fault+0x1e8/0x470
<4> [] do_vfs_ioctl+0x9a/0x540
<4> [] sys_ioctl+0xa1/0xb0
<4> [] system_call_fastpath+0x16/0x1b
<0>Code: 8b 76 10 e8 b7 35 da e0 4c 8b 45 b0 41 80 48 71 20 48 8b 4d b8 8b 45 
c0 e9 52 ff ff ff 48 83 be 0f 01 00 00 f7 0f 85 22 fe ff ff <0f> 0b eb fe 49 3b 
50 20 0f 84 02 ff ff ff 0f 0b 0f 1f 40 00 eb
<1>RIP  [] btrfs_reloc_cow_block+0x22c/0x270 [btrfs]
<4> RSP 
