find_mount_root() issue

2014-08-31 Thread Remco Hosman - Yerf IT.nl
issue:
On my system I have 2 entries for /, one with the type 'rootfs' and a 2nd one 
with the type 'btrfs'. find_mount_root() uses the first one and reports a failure.

My change:
if (longest_matchlen < len) {
into:
if (longest_matchlen <= len) {

I have not tested this, but my understanding is that it will use the last 
longest match instead of the first.

I have no idea whether this rootfs entry is normal, nor whether it is always 
listed before the 'proper' one.

These are the 2 entries in my mount list:
rootfs / rootfs rw 0 0
/dev/sda2 / btrfs rw,noatime,ssd,noacl,space_cache 0 0


Hope this helps,
Remco
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: hitting BUG_ON on troublesome FS

2014-02-04 Thread Remco Hosman - Yerf-it.com
to reply to my own thread:

I managed to empty the filesystem, but it still hits the same BUG_ON when I 
try to balance the last few blocks. A `btrfs bal start -dusage=0 /mountpoint` 
finishes, but leaves a few blocks allocated. When I skip the usage=0, it 
crashes again.

What can I do to provide useful information?

Remco


On 03 Feb 2014, at 21:51, Remco Hosman - Yerf-it.com re...@yerf-it.com wrote:

 First, a bit of history of the filesystem:
 used to be 6 disks, now 5, partially raid1 / raid10; been migrating back and 
 forth a few times.
 At some point, a balance would not complete and would end with 164 
 ENOSPC errors, while there was plenty of unallocated space on each disk.
 
 I scanned for extents larger than 1 GiB and found a few, so ran a recursive 
 balance of the entire FS.
 
 I decided to empty the filesystem and format it.
 
 I pulled most files off it, some via btrfs send/receive, some via rsync, but 1 
 subvol wouldn't send. I don't remember the exact error, but it was that an 
 extent could not be found on 1 of the disks.
 
 With only a few hundred GiB of data left, I decided to balance some remaining 
 empty space before doing a `btrfs dev del`, so I'd have another disk to store 
 more data on.
 But I'm hitting a snag: I hit a BUG_ON when doing a `btrfs bal start -dusage=2 
 /mountpoint`:
 
 [ 3327.678329] btrfs: found 198 extents
 [ 3328.117274] btrfs: relocating block group 84473084968960 flags 17
 [ 3329.278521] btrfs: found 103 extents
 [ 3331.907931] btrfs: found 103 extents
 [ 3332.386172] btrfs: relocating block group 84466642518016 flags 17
 [ .536595] btrfs: found 86 extents
 [ 3335.982967] btrfs: found 86 extents
 [ 3336.599555] btrfs (4746) used greatest stack depth: 2744 bytes left
 [ 3379.073464] btrfs: relocating block group 89878368419840 flags 17
 [ 3381.608948] btrfs: found 499 extents
 [ 3383.884696] [ cut here ]
 [ 3383.884720] kernel BUG at fs/btrfs/relocation.c:3405!
 [ 3383.884731] invalid opcode:  [#1] SMP 
 [ 3383.884742] Modules linked in:
 [ 3383.884753] CPU: 0 PID: 5663 Comm: btrfs Not tainted 3.13.0 #1
 [ 3383.884763] Hardware name: System manufacturer System Product Name/E45M1-I 
 DELUXE, BIOS 0405 08/08/2012
 [ 3383.884778] task: 8802360eae80 ti: 88010dcaa000 task.ti: 
 88010dcaa000
 [ 3383.884790] RIP: 0010:[812f0bd5]  [812f0bd5] 
 __add_tree_block+0x1c5/0x1e0
 [ 3383.884811] RSP: 0018:88010dcaba38  EFLAGS: 00010202
 [ 3383.884821] RAX: 0001 RBX: 880039f18000 RCX: 
 
 [ 3383.884832] RDX:  RSI:  RDI: 
 
 [ 3383.884843] RBP: 88010dcaba90 R08: 88010dcab9f4 R09: 
 88010dcab930
 [ 3383.884854] R10:  R11: 047f R12: 
 1000
 [ 3383.884865] R13: 88023489c630 R14:  R15: 
 528d112e4000
 [ 3383.884876] FS:  7f8e27e74880() GS:88023ec0() 
 knlGS:
 [ 3383.884888] CS:  0010 DS:  ES:  CR0: 8005003b
 [ 3383.884897] CR2: 7f60d89f35a8 CR3: 0001b5ada000 CR4: 
 07f0
 [ 3383.884907] Stack:
 [ 3383.884941]  88010dcabb28 4000812bde34 00a8528d112e 
 0010
 [ 3383.885012]  1000 1000 0f3a 
 8802348d6990
 [ 3383.885082]  88001cbf5a00 880039f18000 00b8 
 88010dcabb00
 [ 3383.885153] Call Trace:
 [ 3383.885192]  [812f1a54] add_data_references+0x244/0x2e0
 [ 3383.885232]  [812f2a2b] relocate_block_group+0x56b/0x640
 [ 3383.885272]  [812f2ca2] btrfs_relocate_block_group+0x1a2/0x2f0
 [ 3383.885313]  [812cbcca] btrfs_relocate_chunk.isra.27+0x6a/0x740
 [ 3383.885355]  [81281a31] ? btrfs_set_path_blocking+0x31/0x70
 [ 3383.885432]  [81286816] ? btrfs_search_slot+0x386/0x960
 [ 3383.885473]  [812c6f07] ? free_extent_buffer+0x47/0xa0
 [ 3383.885513]  [812ceedb] btrfs_balance+0x90b/0xea0
 [ 3383.885553]  [812d5ec2] btrfs_ioctl_balance+0x162/0x520
 [ 3383.885592]  [812d9aed] btrfs_ioctl+0xcbd/0x25c0
 [ 3383.885632]  [818c094c] ? __do_page_fault+0x1dc/0x520
 [ 3383.885673]  [81136868] do_vfs_ioctl+0x2c8/0x490
 [ 3383.885712]  [81136ab1] SyS_ioctl+0x81/0xa0
 [ 3383.885752]  [818c4f5b] tracesys+0xdd/0xe2
 [ 3383.885787] Code: ff 48 8b 4d a8 48 8d 75 b6 4c 89 ea 48 89 df e8 42 e7 ff 
 ff 4c 89 ef 89 45 a8 e8 c7 0f f9 ff 8b 45 a8 e9 69 ff ff ff 85 c0 74 d6 0f 
 0b 66 0f 1f 84 00 00 00 00 00 b8 f4 ff ff ff e9 50 ff ff ff 
 [ 3383.886001] RIP  [812f0bd5] __add_tree_block+0x1c5/0x1e0
 [ 3383.886042]  RSP 88010dcaba38
 [ 3383.886359] ---[ end trace 075209044ce10da3 ]---
 Anything I can do to resolve / debug the issue?
 
 Remco

Re: hitting BUG_ON on troublesome FS

2014-02-04 Thread Remco Hosman - Yerf-it.com
How can I tell?

Label: data  uuid: a8626d67-4684-4b23-99b3-8d5fa8e7fd69
Total devices 5 FS bytes used 820.00KiB
devid2 size 1.82TiB used 1.00GiB path /dev/sdb2
devid3 size 1.82TiB used 1.00GiB path /dev/sdf2
devid5 size 2.73TiB used 3.00GiB path /dev/sdd2
devid10 size 2.73TiB used 2.03GiB path /dev/sde2
devid11 size 3.64TiB used 1.03GiB path /dev/sdc1

Data, RAID10: total=2.00GiB, used=768.00KiB
Data, RAID1: total=1.00GiB, used=12.00KiB
System, RAID1: total=32.00MiB, used=4.00KiB
Metadata, RAID1: total=1.00GiB, used=36.00KiB

I made an image with `btrfs-image`; with -c 9 the file size is 7k, so it's 
easy enough to mail if it would be of any use.

Remco

On 04 Feb 2014, at 22:48, Josef Bacik jba...@fb.com wrote:

 
 On 02/03/2014 03:51 PM, Remco Hosman - Yerf-it.com wrote:
 First, a bit of history of the filesystem:
 used to be 6 disks, now 5, partially raid1 / raid10; been migrating back and 
 forth a few times.
 At some point, a balance would not complete and would end with 164 
 ENOSPC errors, while there was plenty of unallocated space on each disk.
 
 I scanned for extents larger than 1 GiB and found a few, so ran a recursive 
 balance of the entire FS.
 
 I decided to empty the filesystem and format it.
 
 I pulled most files off it, some via btrfs send/receive, some via rsync, but 1 
 subvol wouldn't send. I don't remember the exact error, but it was that an 
 extent could not be found on 1 of the disks.
 
 With only a few hundred GiB of data left, I decided to balance some remaining 
 empty space before doing a `btrfs dev del`, so I'd have another disk to store 
 more data on.
 But I'm hitting a snag: I hit a BUG_ON when doing a `btrfs bal start 
 -dusage=2 /mountpoint`:
 
 [ 3327.678329] btrfs: found 198 extents
 [ 3328.117274] btrfs: relocating block group 84473084968960 flags 17
 [ 3329.278521] btrfs: found 103 extents
 [ 3331.907931] btrfs: found 103 extents
 [ 3332.386172] btrfs: relocating block group 84466642518016 flags 17
 [ .536595] btrfs: found 86 extents
 [ 3335.982967] btrfs: found 86 extents
 [ 3336.599555] btrfs (4746) used greatest stack depth: 2744 bytes left
 [ 3379.073464] btrfs: relocating block group 89878368419840 flags 17
 [ 3381.608948] btrfs: found 499 extents
 [ 3383.884696] [ cut here ]
 [ 3383.884720] kernel BUG at fs/btrfs/relocation.c:3405!
 [ 3383.884731] invalid opcode:  [#1] SMP
 [ 3383.884742] Modules linked in:
 [ 3383.884753] CPU: 0 PID: 5663 Comm: btrfs Not tainted 3.13.0 #1
 [ 3383.884763] Hardware name: System manufacturer System Product 
 Name/E45M1-I DELUXE, BIOS 0405 08/08/2012
 [ 3383.884778] task: 8802360eae80 ti: 88010dcaa000 task.ti: 
 88010dcaa000
 [ 3383.884790] RIP: 0010:[812f0bd5]  [812f0bd5] 
 __add_tree_block+0x1c5/0x1e0
 [ 3383.884811] RSP: 0018:88010dcaba38  EFLAGS: 00010202
 [ 3383.884821] RAX: 0001 RBX: 880039f18000 RCX: 
 
 [ 3383.884832] RDX:  RSI:  RDI: 
 
 [ 3383.884843] RBP: 88010dcaba90 R08: 88010dcab9f4 R09: 
 88010dcab930
 [ 3383.884854] R10:  R11: 047f R12: 
 1000
 [ 3383.884865] R13: 88023489c630 R14:  R15: 
 528d112e4000
 [ 3383.884876] FS:  7f8e27e74880() GS:88023ec0() 
 knlGS:
 [ 3383.884888] CS:  0010 DS:  ES:  CR0: 8005003b
 [ 3383.884897] CR2: 7f60d89f35a8 CR3: 0001b5ada000 CR4: 
 07f0
 [ 3383.884907] Stack:
 [ 3383.884941]  88010dcabb28 4000812bde34 00a8528d112e 
 0010
 [ 3383.885012]  1000 1000 0f3a 
 8802348d6990
 [ 3383.885082]  88001cbf5a00 880039f18000 00b8 
 88010dcabb00
 [ 3383.885153] Call Trace:
 [ 3383.885192]  [812f1a54] add_data_references+0x244/0x2e0
 [ 3383.885232]  [812f2a2b] relocate_block_group+0x56b/0x640
 [ 3383.885272]  [812f2ca2] btrfs_relocate_block_group+0x1a2/0x2f0
 [ 3383.885313]  [812cbcca] btrfs_relocate_chunk.isra.27+0x6a/0x740
 [ 3383.885355]  [81281a31] ? btrfs_set_path_blocking+0x31/0x70
 [ 3383.885432]  [81286816] ? btrfs_search_slot+0x386/0x960
 [ 3383.885473]  [812c6f07] ? free_extent_buffer+0x47/0xa0
 [ 3383.885513]  [812ceedb] btrfs_balance+0x90b/0xea0
 [ 3383.885553]  [812d5ec2] btrfs_ioctl_balance+0x162/0x520
 [ 3383.885592]  [812d9aed] btrfs_ioctl+0xcbd/0x25c0
 [ 3383.885632]  [818c094c] ? __do_page_fault+0x1dc/0x520
 [ 3383.885673]  [81136868] do_vfs_ioctl+0x2c8/0x490
 [ 3383.885712]  [81136ab1] SyS_ioctl+0x81/0xa0
 [ 3383.885752]  [818c4f5b] tracesys+0xdd/0xe2
 [ 3383.885787] Code: ff 48 8b 4d a8 48 8d 75 b6 4c 89 ea 48 89 df e8 42 e7 
 ff ff 4c 89 ef 89 45 a8 e8 c7 0f f9 ff 8b 45 a8 e9 69 ff ff ff 85 c0 74 d6 
 0f 0b 66 0f 1f 84 00

hitting BUG_ON on troublesome FS

2014-02-03 Thread Remco Hosman - Yerf-it.com
First, a bit of history of the filesystem:
used to be 6 disks, now 5, partially raid1 / raid10; been migrating back and 
forth a few times.
At some point, a balance would not complete and would end with 164 ENOSPC 
errors, while there was plenty of unallocated space on each disk.

I scanned for extents larger than 1 GiB and found a few, so ran a recursive 
balance of the entire FS.

I decided to empty the filesystem and format it.

I pulled most files off it, some via btrfs send/receive, some via rsync, but 1 
subvol wouldn't send. I don't remember the exact error, but it was that an 
extent could not be found on 1 of the disks.

With only a few hundred GiB of data left, I decided to balance some remaining 
empty space before doing a `btrfs dev del`, so I'd have another disk to store 
more data on.
But I'm hitting a snag: I hit a BUG_ON when doing a `btrfs bal start -dusage=2 
/mountpoint`:

[ 3327.678329] btrfs: found 198 extents
[ 3328.117274] btrfs: relocating block group 84473084968960 flags 17
[ 3329.278521] btrfs: found 103 extents
[ 3331.907931] btrfs: found 103 extents
[ 3332.386172] btrfs: relocating block group 84466642518016 flags 17
[ .536595] btrfs: found 86 extents
[ 3335.982967] btrfs: found 86 extents
[ 3336.599555] btrfs (4746) used greatest stack depth: 2744 bytes left
[ 3379.073464] btrfs: relocating block group 89878368419840 flags 17
[ 3381.608948] btrfs: found 499 extents
[ 3383.884696] [ cut here ]
[ 3383.884720] kernel BUG at fs/btrfs/relocation.c:3405!
[ 3383.884731] invalid opcode:  [#1] SMP 
[ 3383.884742] Modules linked in:
[ 3383.884753] CPU: 0 PID: 5663 Comm: btrfs Not tainted 3.13.0 #1
[ 3383.884763] Hardware name: System manufacturer System Product Name/E45M1-I 
DELUXE, BIOS 0405 08/08/2012
[ 3383.884778] task: 8802360eae80 ti: 88010dcaa000 task.ti: 
88010dcaa000
[ 3383.884790] RIP: 0010:[812f0bd5]  [812f0bd5] 
__add_tree_block+0x1c5/0x1e0
[ 3383.884811] RSP: 0018:88010dcaba38  EFLAGS: 00010202
[ 3383.884821] RAX: 0001 RBX: 880039f18000 RCX: 
[ 3383.884832] RDX:  RSI:  RDI: 
[ 3383.884843] RBP: 88010dcaba90 R08: 88010dcab9f4 R09: 88010dcab930
[ 3383.884854] R10:  R11: 047f R12: 1000
[ 3383.884865] R13: 88023489c630 R14:  R15: 528d112e4000
[ 3383.884876] FS:  7f8e27e74880() GS:88023ec0() 
knlGS:
[ 3383.884888] CS:  0010 DS:  ES:  CR0: 8005003b
[ 3383.884897] CR2: 7f60d89f35a8 CR3: 0001b5ada000 CR4: 07f0
[ 3383.884907] Stack:
[ 3383.884941]  88010dcabb28 4000812bde34 00a8528d112e 
0010
[ 3383.885012]  1000 1000 0f3a 
8802348d6990
[ 3383.885082]  88001cbf5a00 880039f18000 00b8 
88010dcabb00
[ 3383.885153] Call Trace:
[ 3383.885192]  [812f1a54] add_data_references+0x244/0x2e0
[ 3383.885232]  [812f2a2b] relocate_block_group+0x56b/0x640
[ 3383.885272]  [812f2ca2] btrfs_relocate_block_group+0x1a2/0x2f0
[ 3383.885313]  [812cbcca] btrfs_relocate_chunk.isra.27+0x6a/0x740
[ 3383.885355]  [81281a31] ? btrfs_set_path_blocking+0x31/0x70
[ 3383.885432]  [81286816] ? btrfs_search_slot+0x386/0x960
[ 3383.885473]  [812c6f07] ? free_extent_buffer+0x47/0xa0
[ 3383.885513]  [812ceedb] btrfs_balance+0x90b/0xea0
[ 3383.885553]  [812d5ec2] btrfs_ioctl_balance+0x162/0x520
[ 3383.885592]  [812d9aed] btrfs_ioctl+0xcbd/0x25c0
[ 3383.885632]  [818c094c] ? __do_page_fault+0x1dc/0x520
[ 3383.885673]  [81136868] do_vfs_ioctl+0x2c8/0x490
[ 3383.885712]  [81136ab1] SyS_ioctl+0x81/0xa0
[ 3383.885752]  [818c4f5b] tracesys+0xdd/0xe2
[ 3383.885787] Code: ff 48 8b 4d a8 48 8d 75 b6 4c 89 ea 48 89 df e8 42 e7 ff 
ff 4c 89 ef 89 45 a8 e8 c7 0f f9 ff 8b 45 a8 e9 69 ff ff ff 85 c0 74 d6 0f 0b 
66 0f 1f 84 00 00 00 00 00 b8 f4 ff ff ff e9 50 ff ff ff 
[ 3383.886001] RIP  [812f0bd5] __add_tree_block+0x1c5/0x1e0
[ 3383.886042]  RSP 88010dcaba38
[ 3383.886359] ---[ end trace 075209044ce10da3 ]---
Anything I can do to resolve / debug the issue?

Remco


Re: ENOSPC during balance

2014-01-17 Thread Remco Hosman - Yerf IT


I did a `btrfs bal start -dconvert=raid10,soft /data`, which converted the 
whole filesystem back to raid10; it completed without errors.
Then I did a `btrfs bal start -dconvert=raid1 /data`, which completed with 184 
ENOSPC errors, exactly the same number as before the convert back to raid10.



On 13 Jan 2014, at 18:43, David Sterba dste...@suse.cz wrote:

 On Sun, Jan 12, 2014 at 03:49:12PM +0100, Remco Hosman - Yerf IT wrote:
 I am trying to convert my array from raid10 to raid1, and it's partially
 completed, but at the moment I am getting a '[59366.459092] btrfs: 185
 enospc errors during balance' when I try to balance anything more with
 `btrfs bal start -dconvert=raid1,soft /mountpoint`.
 
 I have already scanned for files with extents over 1 GiB, and there
 is at least 100 GiB unallocated on each of the disks, and scrub reports
 no errors at all.
 Kernel is 3.13-rc7 and tools are latest from git.
 
 By unallocated you mean from 'btrfs fi df' output total - used = 100G
 or that the sum of all occupied space is 100G less than the device size?
 So if there's some space left to allocate new 1G-chunks for balance.
 
 How many disks does the fs contain?
 

filesystem is 6 disks of varying sizes.

 Anything else i can try ?
 
 Run
 $ btrfs balance start -dusage=0,profiles=raid10\|raid1 /mnt
 
 if there are some chunks preallocated from previous balance runs, this
 will clean them. The -musage=0 filter could also get some space.
 
 I've experienced similar problems with conversion from raid1 to raid10,
 where it's probably worse regarding the 1G-chunks, because the raid-0
 level needs the chunk on each disk, while raid1 is fine with just 2.
 
 I had done the -dusage=0 cleanup step every time the 'enospc during
 balance' was hit, and it finished in the end. Not perfect, an automatic
 and more intelligent chunk reclaim is among the project ideas though.
 
 If nothing from above helps, please post the output of 'fi df' and 'fi
 show' commands.
 

I did a `btrfs bal start -dconvert=raid10,soft /data`, and it completed without 
errors.
Then a `btrfs bal start -dconvert=raid1 /data`, which resulted in 184 ENOSPC 
errors.

currently, the filesystem looks like this:

Data, RAID10: total=431.57GiB, used=430.22GiB
Data, RAID1: total=5.18TiB, used=5.18TiB
System, RAID10: total=96.00MiB, used=804.00KiB
Metadata, RAID10: total=12.38GiB, used=9.34GiB

Label: data  uuid: a8626d67-4684-4b23-99b3-8d5fa8e7fd69
Total devices 6 FS bytes used 5.61TiB
devid1 size 1.82TiB used 1.27TiB path /dev/sdg2
devid2 size 1.82TiB used 1.27TiB path /dev/sdb2
devid3 size 1.82TiB used 1.27TiB path /dev/sdf2
devid5 size 2.73TiB used 2.17TiB path /dev/sdd2
devid10 size 2.73TiB used 2.18TiB path /dev/sde2
devid11 size 3.64TiB used 3.08TiB path /dev/sdc1

when i do a `btrfs bal start -dconvert=raid1,soft /data` :
dmesg: [560325.834835] btrfs: 184 enospc errors during balance

Data, RAID10: total=428.57GiB, used=428.57GiB
Data, RAID1: total=5.72TiB, used=5.18TiB
System, RAID10: total=96.00MiB, used=880.00KiB
Metadata, RAID10: total=12.38GiB, used=9.34GiB

Label: data  uuid: a8626d67-4684-4b23-99b3-8d5fa8e7fd69
Total devices 6 FS bytes used 5.61TiB
devid1 size 1.82TiB used 1.45TiB path /dev/sdg2
devid2 size 1.82TiB used 1.44TiB path /dev/sdb2
devid3 size 1.82TiB used 1.44TiB path /dev/sdf2
devid5 size 2.73TiB used 2.35TiB path /dev/sdd2
devid10 size 2.73TiB used 2.36TiB path /dev/sde2
devid11 size 3.64TiB used 3.26TiB path /dev/sdc1

So it looks like it did allocate all the space it needed, but it still failed.

kernel is currently 3.13-rc7

Remco


 
 david





ENOSPC during balance

2014-01-12 Thread Remco Hosman - Yerf IT
Hi,

I am trying to convert my array from raid10 to raid1, and it's partially 
completed, but at the moment I am getting a '[59366.459092] btrfs: 185 enospc 
errors during balance' when I try to balance anything more with `btrfs bal 
start -dconvert=raid1,soft /mountpoint`.

I have already scanned for files with extents over 1 GiB, and there is at 
least 100 GiB unallocated on each of the disks, and scrub reports no errors at 
all.
Kernel is 3.13-rc7 and tools are latest from git.

Anything else I can try?

Thanks,
Remco


Re: [PATCH] Btrfs: disallow 'btrfs {balance,replace} cancel' on ro mounts

2013-10-11 Thread Remco Hosman - Yerf-IT

On 11-10-2013 11:23, Stefan Behrens wrote:

On Fri, 11 Oct 2013 09:13:24 +0800, Wang Shilong wrote:

On 10/11/2013 01:40 AM, Ilya Dryomov wrote:

I have a question in my mind.

Can we reach a state where there is an operation in progress while the filesystem
is read-only? If we do cancel operations on a ro filesystem, we should
get "No operations in progress".

Well, it's arguable what ro means. No write to the devices at all? Or
replay log on mount (which means to write to the filesystem), but no
write access in addition to that? Or allow filesystem internal things to
be modified and written to disk, like it was done when the balance or
replace control items were modified on disk when the cancelling was
requested?

In any case, don't make it more complicated than necessary IMO, and
return EROFS if someone calls cancel on a ro filesystem regardless of
the state of the operation. It only adds errors to try to distinguish
such things and is of no benefit for anybody IMHO.


just my 2 cents:

I once had to recover an ext3 filesystem from a device that would crash 
if you wrote anything beyond 2T. The problem was that there was an entry 
in the journal telling the OS to do just that, so the device would 
crash every time you mount the filesystem, even in RO mode.


The manual speaks about a 'skip journal replay' option, but it was never 
implemented.

kernel source was something like:
case SKIP_JOURNAL_REPLAY:
	return error;


The result: there was no way to mount the filesystem, even in RO mode. I 
ended up dd'ing the whole thing to another device and then mounting the 
resulting image. It took a very long time!


I would expect an RO mount never to write anything to a filesystem, not 
even to replay a journal (or there should be a separate option for that).
It's possible that the device is not writable at all, if it's a snapshot 
or an RO iscsi device of some kind.


Remco


For both balance and replace, cancelling involves changing the on-disk
state and committing a transaction, which is not a good thing to do on
read-only filesystems.

Cc: Stefan Behrens sbehr...@giantdisaster.de
Signed-off-by: Ilya Dryomov idryo...@gmail.com
---
   fs/btrfs/dev-replace.c |    3 +++
   fs/btrfs/volumes.c     |    3 +++
   2 files changed, 6 insertions(+)

diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
index 9efb94e..98df261 100644
--- a/fs/btrfs/dev-replace.c
+++ b/fs/btrfs/dev-replace.c
@@ -650,6 +650,9 @@ static u64 __btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
 	u64 result;
 	int ret;
 
+	if (fs_info->sb->s_flags & MS_RDONLY)
+		return -EROFS;
+
 	mutex_lock(&dev_replace->lock_finishing_cancel_unmount);
 	btrfs_dev_replace_lock(dev_replace);
 	switch (dev_replace->replace_state) {
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a306db9..2630f38 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -3424,6 +3424,9 @@ int btrfs_pause_balance(struct btrfs_fs_info *fs_info)
 
 int btrfs_cancel_balance(struct btrfs_fs_info *fs_info)
 {
+	if (fs_info->sb->s_flags & MS_RDONLY)
+		return -EROFS;
+
 	mutex_lock(&fs_info->balance_mutex);
 	if (!fs_info->balance_ctl) {
 		mutex_unlock(&fs_info->balance_mutex);



Re: `btrfs receive` almost coming to a halt

2013-05-10 Thread Remco Hosman - Yerf IT

On May 10, 2013, at 9:27 AM, Arne Jansen sensi...@gmx.net wrote:

 On 09.05.2013 17:14, Remco Hosman - Yerf IT wrote:
 kernel: 3.9.0
 btrfs-progs: pulled from git this morning
 
 Trying to receive a 5 GiB send file. The first bit is fast, doing 10 - 
 50 MB/sec.
 Then it slows down; CPU usage is 50% (dual-core machine).
 When I do a strace, it looks like this, repeating over and over, about 1 
 piece each second:
 --
 read(3, "q\0\0\0\20\0008\352\327o", 10) = 10
 read(3, 
 "\22\0\10\0\0\0$~\0\0\0\0\30\0\10\0\0\0\2\0\0\0\0\0\17\0\24\0DB2/...", 113) 
 = 113
 open("/media/snaps/yerf-2013-05-02-03:15:01/DB2/DB2-flat.vmdk", 
 O_RDONLY|O_NOATIME) = 6
 ioctl(5, 0x4020940d, 0x7fffc6d41c60)    = 0
 close(6)                                = 0
 read(3, "q\0\0\0\20\0\242\357\263", 10) = 10
 read(3, 
 "\22\0\10\0\0\0~\0\0\0\0\30\0\10\0\0\0\2\0\0\0\0\0\17\0\24\0DB2/...", 113) 
 = 113
 open("/media/snaps/yerf-2013-05-02-03:15:01/DB2/DB2-flat.vmdk", 
 O_RDONLY|O_NOATIME) = 6
 ioctl(5, 0x4020940d, 0x7fffc6d41c60)    = 0
 close(6)                                = 0
 --
 
 
 Is this the receive side?
 Where does the data come from, a local file or via network?
 

Yes, this is the receiving side. The data comes from a local file.

Sometimes it does hit a 'good' portion; then I get a strace like this:
read(3, "(\300\0\0\17\0N0\346\307", 10) = 10
read(3, "\17\0\24\0DB2/DB2-flat.vmdk.ok\22\0\10\0\0@\25\325...", 49192) = 49192
pwrite(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
49152, 3574939648) = 49152


Managed to find what ioctl 0x4020940d is in the meantime: 
BTRFS_IOC_CLONE_RANGE, with 32 bytes of parameters (4x int64). I have not 
managed to get the parameter values yet; I have no idea how to work gdb.

But I guess they are coming from the 113 bytes it is reading.

Remco

 -Arne
 
 It pauses for a second after the ioctl(5, 0x4020940d).
 It has been running like that for 3 hours now.
 The file it's working on is large (80 GiB) and filefrag reports 648862 extents.
 The filesystem is mounted with rw,relatime,compress-force=lzo,space_cache.
 
 Anything I can do to see what the problem is?
 
 Remco
 



Re: Btrfs balance invalid argument error

2013-05-10 Thread Remco Hosman - Yerf IT
On May 10, 2013, at 10:21 PM, Hugo Mills h...@carfax.org.uk wrote:

 On Fri, May 10, 2013 at 10:07:56PM +0200, Marcus Lövgren wrote:
 Hi list,
 
 I am using kernel 3.9.0, btrfs-progs 0.20-rc1-253-g7854c8b.
 
 I have a three disk array of level single:
 
 # btrfs fi sh
 Label: none  uuid: 2e905f8f-e525-4114-afa6-cce48f77b629
Total devices 3 FS bytes used 3.80TB
devid1 size 2.73TB used 2.25TB path /dev/sdd
devid2 size 2.73TB used 1.55TB path /dev/sdc
devid3 size 2.73TB used 0.00 path /dev/sdb
 
 Btrfs v0.20-rc1-253-g7854c8b
 
 # btrfs fi df /mnt/data
 Data: total=3.79TB, used=3.79TB
 System: total=4.00MB, used=420.00KB
 Metadata: total=6.01GB, used=4.87GB
 
 
 When running
 # btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/data
 
 I get
 
 ERROR: error during balancing '/mnt/data' - Invalid argument
 There may be more info in syslog - try dmesg | tail
 
 dmesg | tail says:
 
 btrfs: unable to start balance with target data profile 128
 
 Isn't it possible to convert raid level to raid5?
 
   Yes, it should be possible. It looks like the kernel's got a
 problem with it, which is odd because 3.9 should know about RAID-5.
 

Wasn't there some issue where the kernel or tools wanted 4 disks when 
converting to raid5?

Remco

   Hugo.
 
 -- 
 === Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- I think that everything darkling says is actually a joke. ---
 It's just that we haven't worked out most of them yet.  



`btrfs receive` almost coming to a halt

2013-05-09 Thread Remco Hosman - Yerf IT
kernel: 3.9.0
btrfs-progs: pulled from git this morning

Trying to receive a 5 GiB send file. The first bit is fast, doing 10 - 50 MB/sec.
Then it slows down; CPU usage is 50% (dual-core machine).
When I do a strace, it looks like this, repeating over and over, about 1 piece 
each second:
--
read(3, "q\0\0\0\20\0008\352\327o", 10) = 10
read(3, "\22\0\10\0\0\0$~\0\0\0\0\30\0\10\0\0\0\2\0\0\0\0\0\17\0\24\0DB2/...", 
113) = 113
open("/media/snaps/yerf-2013-05-02-03:15:01/DB2/DB2-flat.vmdk", 
O_RDONLY|O_NOATIME) = 6
ioctl(5, 0x4020940d, 0x7fffc6d41c60)    = 0
close(6)                                = 0
read(3, "q\0\0\0\20\0\242\357\263", 10) = 10
read(3, "\22\0\10\0\0\0~\0\0\0\0\30\0\10\0\0\0\2\0\0\0\0\0\17\0\24\0DB2/...", 
113) = 113
open("/media/snaps/yerf-2013-05-02-03:15:01/DB2/DB2-flat.vmdk", 
O_RDONLY|O_NOATIME) = 6
ioctl(5, 0x4020940d, 0x7fffc6d41c60)    = 0
close(6)                                = 0
--

It pauses for a second after the ioctl(5, 0x4020940d).
It has been running like that for 3 hours now.
The file it's working on is large (80 GiB) and filefrag reports 648862 extents.
The filesystem is mounted with rw,relatime,compress-force=lzo,space_cache.

Anything I can do to see what the problem is?

Remco


Re: converting to raid5

2013-03-19 Thread Remco Hosman - Yerf-IT

On 15-3-2013 13:47, David Sterba wrote:

On Mon, Mar 11, 2013 at 09:15:44PM +0100, Remco Hosman wrote:

First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to 
convert the mostly empty chunks.
This resulted in a lot of allocated space (tens of GiB), with only a few 
hundred MiB used.

Matches my expectation, converting to the new profile needs to allocate
full 1G chunks, but the usage=1 filter allows to fill them partially.

After this step, several ~empty raid1 chunks should disappear.

It did not only happen when I added the usage=1, but also without it.

I did `btrfs bal start -dusage=75` to clean things up.

then i ran `btrfs bal start -dconvert=raid5,soft`.
I noticed how the difference between total and used for raid5 kept growing.

Do you remember if this was temporary or if the difference was
unexpectedly big after the whole operation finished?
It did not finish; the filesystem did not have that much free space, so I 
canceled it (even before it ran out of space) and ran `btrfs bal start 
-dusage=1` to clean up the unused space.

My guess is that its taking 1 raid1 chunk (2x1 gig disk space, 1 gig
data), and moving it to 1 raid5 chunk (4gig disk space, 3gig data),
leaving all chunks 33% used.

Why 3G of data in raid5 case? I assume you talk about the actually used
data and this should be the same as in raid1 case, but spread over 3x
1GB chunks and leaving them 33% utilized, that makes sense, but is not
clear from your description.
I assumed that with raid5, btrfs allocates 1 GiB on each disk and uses 1 
for parity, giving 3 GiB of data in 4 GiB of disk space.

This is what 3 calls of `btrfs file df /` looks like a few minutes after each 
other, with the balance still running:
Data, RAID1: total=807.00GB, used=805.70GB
Data, RAID5: total=543.00GB, used=192.81GB
--
Data, RAID1: total=800.00GB, used=798.70GB
Data, RAID5: total=564.00GB, used=199.30GB
--
Data, RAID1: total=795.00GB, used=793.70GB
Data, RAID5: total=579.00GB, used=204.81GB

raid1 numbers going down, raid5 going up, all ok.

david




Re: lvm volume like support

2013-02-25 Thread Remco Hosman - Yerf-IT
Can't this be done with a regular file and a loopback device?

Remco

On 26 Feb 2013, at 06:35, Suman C schakr...@gmail.com wrote:

 Yes, zvol like feature where a btrfs subvolume like construct can be
 made available as a LUN/block device. This device can then be used by
 any application that wants a raw block device. iscsi is another
 obvious usecase. Having thin provisioning support would make it pretty
 awesome.
 
 Suman
 
 On Mon, Feb 25, 2013 at 5:46 PM, Fajar A. Nugraha l...@fajar.net wrote:
 On Tue, Feb 26, 2013 at 11:59 AM, Mike Fleetwood
 mike.fleetw...@googlemail.com wrote:
 On 25 February 2013 23:35, Suman C schakr...@gmail.com wrote:
 Hi,
 
 I think it would be great if there is a lvm volume or zfs zvol type
 support in btrfs.
 
 
 Btrfs already has capabilities to add and remove block devices on the
 fly.  Data can be stripped or mirrored or both.  Raid 5/6 is in
 testing at the moment.
 https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
 https://btrfs.wiki.kernel.org/index.php/UseCases#RAID
 
 Which specific features do you think btrfs is lacking?
 
 
 I think he's talking about zvol-like feature.
 
 In zfs, instead of creating a
 filesystem-that-is-accessible-as-a-directory, you can create a zvol
 which behaves just like any other standard block device (e.g. you can
 use it as swap, or create ext4 filesystem on top of it). But it would
 also have most of the benefits that a normal zfs filesystem has, like:
 - thin provisioning (sparse allocation, snapshot  clone)
 - compression
 - integrity check (via checksum)
 
 Typical use cases would be:
 - swap in a pure-zfs system
 - virtualization (xen, kvm, etc)
 - NAS which exports the block device using iscsi/AoE
 
 AFAIK no such feature exist in btrfs yet.
 
 --
 Fajar


Re: lvm volume like support

2013-02-25 Thread Remco Hosman - Yerf IT
It would be really cool if a TRIM to the loopback device would do a 'hole punch'
on the file.

Remco


On Feb 26, 2013, at 7:25 AM, Suman C schakr...@gmail.com wrote:

 Thanks for the sparse file idea, I am actually using that solution
 already. I am not sure if its the best way, however.
 
 Suman
 
 On Mon, Feb 25, 2013 at 9:57 PM, Roman Mamedov r...@romanrm.ru wrote:
 On Mon, 25 Feb 2013 21:35:08 -0800
 Suman C schakr...@gmail.com wrote:
 
 Yes, zvol like feature where a btrfs subvolume like construct can be
 made available as a LUN/block device. This device can then be used by
 any application that wants a raw block device. iscsi is another
 obvious usecase. Having thin provisioning support would make it pretty
 awesome.
 
 I think what you are missing is that btrfs is a filesystem, not a block
 device management mechanism.
 
 For your use case you can simply create a subvolume and then make a sparse file
 inside of it.
 
  btrfs sub create foobar
  dd if=/dev/zero of=foobar/100GB.img bs=1 count=1 seek=100G
 
 If you need this to be a block device, use 'losetup' to make foobar/100GB.img
 appear as one (/dev/loopX). But iSCSI/AoE/NBD can export files as well as
 block devices, so this is not even necessary.
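 A minimal sketch of the sparse-file idea above (file names are illustrative;
 the losetup step needs root, creating the sparse file does not):
 
 # Create a sparse ~1 GiB backing file: seek past 1G and write a single byte,
 # so the apparent size is 1 GiB + 1 byte but almost nothing is allocated.
 dd if=/dev/zero of=disk.img bs=1 count=1 seek=1G 2>/dev/null
 
 ls -lh disk.img   # apparent size: ~1.0G
 du -h disk.img    # allocated size: a few KiB at most
 
 # As root, expose the file as a block device (prints e.g. /dev/loop0):
 # losetup --find --show disk.img
 
 As Roman notes, the same file can also be exported directly over
 iSCSI/AoE/NBD without the loop device.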
 
 --
 With respect,
 Roman



Re: add, remove; how about merge?

2012-12-21 Thread Remco Hosman - Yerf IT
I think that's possible with btrfs send/receive

Remco

On Dec 21, 2012, at 7:16 PM, Gene Czarcinski g...@czarc.net wrote:

 I am new at this btrfs stuff!
 
 As I understand it, you can add a physical disk or partition and have data 
 spread into the new space.  You can also move data off a disk or partition 
 and then remove it.
 
 Has any thought been given to being able to merge two existing (separate) btrfs
 volumes into a single volume?
 
 Alternatively, how about a tool for transferring a subvolume from one pool to
 another?  I tried using fsarchiver but that did not work out too well.
 
 Gene



Sparsify / hole punching tool

2012-11-18 Thread Remco Hosman - Yerf IT
I wrote a little tool you can use to scan a file and punch holes to make it
sparse.

Confirmed to work on 3.7.0-rc6
Feel free to use it in any way you like, and of course improve it. 

http://pastebin.com/8SjEsBLD

Remco Hosman


Re: Sparsify / hole punching tool

2012-11-18 Thread Remco Hosman - Yerf IT
as requested. like this? 


---begin
#define _GNU_SOURCE /* See feature_test_macros(7) */
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <linux/falloc.h>
#include <unistd.h>

#define BLOCKSIZE 4096

int main(int argc, char** argv) {
    ssize_t rsize;
    int fh, result;
    char buff[BLOCKSIZE], ebuff[BLOCKSIZE];
    off_t curpos, pstart, psize, freed;
    int have_hole;

    if (argc < 2) {
        printf("%s [file...]\n", argv[0]);
        return 0;
    }

    memset(ebuff, 0, BLOCKSIZE); // prepare a block of 0's to compare against

    for (int i = 1; i < argc; i++) {
        char* file = argv[i];
        curpos = 0;
        pstart = 0;
        psize = 0;
        have_hole = 0; // track a pending hole explicitly; offset 0 is a valid hole start
        freed = 0;

        fh = open(file, O_RDWR | O_NOATIME); // O_NOATIME is a flag, not the mode argument
        if (fh == -1) {
            perror("open()");
            return -1;
        }

        printf("sparsifying %s ", file);
        fflush(stdout);
        while ((rsize = read(fh, buff, BLOCKSIZE)) > 0) {
            result = memcmp(buff, ebuff, rsize);
            if (result == 0) { // block is empty
                if (!have_hole) { // previous block was not empty?
                    have_hole = 1;
                    pstart = curpos;
                    psize = rsize; // save for later punching
                } else {
                    psize += rsize; // previous block was empty too, extend the range
                }
                freed += rsize;
            } else if (have_hole) { // block is not empty and we have a pending range to punch
                result = fallocate(fh, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                                   pstart, psize);
                if (result == -1) {
                    perror("fallocate()");
                    return -1;
                }
                have_hole = 0;
                psize = 0;
            }
            curpos += rsize;
        }

        if (have_hole) { // still a trailing range to punch?
            result = fallocate(fh, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                               pstart, psize);
            if (result == -1) {
                perror("fallocate()");
                return -1;
            }
        }

        if (rsize == -1) {
            perror("read()");
            return -1;
        }
        printf("done. freed %lld bytes\n", (long long)freed);
        close(fh);
    }
    return 0;
}
---end
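
For comparison, the same punching the tool does per run of zero blocks can be
reproduced from the shell with util-linux's fallocate(1) (file name is
illustrative; the filesystem must support hole punching, e.g. btrfs, ext4, XFS):

# Write 1 MiB of explicit zeros, fully allocated on disk:
dd if=/dev/zero of=test.img bs=4096 count=256 2>/dev/null

# Punch the whole range; --keep-size matches FALLOC_FL_KEEP_SIZE above:
fallocate --punch-hole --keep-size --offset 0 --length 1M test.img

ls -lh test.img   # apparent size stays 1.0M
du -h test.img    # allocated size drops to (near) zero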


Remco Hosman


On Nov 18, 2012, at 10:19 PM, Hugo Mills h...@carfax.org.uk wrote:

 On Sun, Nov 18, 2012 at 10:04:29PM +0100, Remco Hosman - Yerf IT wrote:
 I wrote a little tool you can use to scan a file and punch holes to make it
 sparse.
 
 Confirmed to work on 3.7.0-rc6
 Feel free to use it in any way you like, and of course improve it. 
 
 http://pastebin.com/8SjEsBLD
 
   For archival purposes, it's probably better to put the whole thing
 inline in the text of your mail. This also makes it far easier to make
 comments on it, should anyone feel moved to do so.
 
   Hugo.
 
 -- 
 === Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
 --- In theory, theory and practice are the same. In --- 
  practice,  they're different.  
