Re: btrfs RAID 10 truncates files over 2G to 4096 bytes.

2016-07-05 Thread Henk Slager
On Wed, Jul 6, 2016 at 2:32 AM, Tomasz Kusmierz wrote: > > On 6 Jul 2016, at 00:30, Henk Slager wrote: > > On Mon, Jul 4, 2016 at 11:28 PM, Tomasz Kusmierz wrote: > > I did consider that, but: > - some files were NOT accessed by anything with 100% certainty (well if > there is a rootkit on my

Re: btrfs RAID 10 truncates files over 2G to 4096 bytes.

2016-07-05 Thread Tomasz Kusmierz
On 6 Jul 2016, at 00:30, Henk Slager wrote: > > On Mon, Jul 4, 2016 at 11:28 PM, Tomasz Kusmierz wrote: >> I did consider that, but: >> - some files were NOT accessed by anything with 100% certainty (well if >> there is a rootkit on my s

Re: btrfs defrag questions

2016-07-05 Thread Henk Slager
On Tue, Jul 5, 2016 at 1:15 AM, Dmitry Katsubo wrote: > On 2016-07-01 22:46, Henk Slager wrote: >> (email ends up in gmail spamfolder) >> On Fri, Jul 1, 2016 at 10:14 PM, Dmitry Katsubo wrote: >>> Hello everyone, >>> >>> Question #1: >>> >>> While doing defrag I got the following message: >>> >>>

Re: btrfs RAID 10 truncates files over 2G to 4096 bytes.

2016-07-05 Thread Henk Slager
On Mon, Jul 4, 2016 at 11:28 PM, Tomasz Kusmierz wrote: > I did consider that, but: > - some files were NOT accessed by anything with 100% certainty (well if there > is a rootkit on my system or something of that sort then maybe yes) > - the only application that could access those files is tote

Re: Unable to mount degraded RAID5

2016-07-05 Thread Chris Murphy
On Tue, Jul 5, 2016 at 12:40 PM, Tomáš Hrdina wrote: > I don't know if it would be a good idea, but my disk, which disconnected, > is connected again. Maybe it could help in getting the data to the right > state, so the other two disks could be mounted alone. But I don't know if it > would stay connected for

Re: Adventures in btrfs raid5 disk recovery

2016-07-05 Thread Chris Murphy
Related: http://www.spinics.net/lists/raid/msg52880.html Looks like there is some traction toward figuring out what to do about this, whether it's a udev rule or something that happens in the kernel itself. Pretty much the only hardware setups unaffected by this are those with enterprise or NAS drives.
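Such a rule typically aims at raising the kernel's SCSI command timer so a desktop drive's long internal error recovery can finish before the kernel resets the device; drives with configurable error recovery (SCT ERC), i.e. the enterprise/NAS models mentioned above, don't need it. The rule below is only an illustration of that idea; the device match and the 180-second value are assumptions, not anything settled in the linked thread.

    # illustrative only: lengthen the SCSI command timer for whole SATA disks
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="180"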

Re: 64-btrfs.rules and degraded boot

2016-07-05 Thread Chris Murphy
I started a systemd-devel@ thread since that's where most udev stuff gets talked about. https://lists.freedesktop.org/archives/systemd-devel/2016-July/037031.html -- Chris Murphy

Re: 64-btrfs.rules and degraded boot

2016-07-05 Thread Chris Murphy
On Tue, Jul 5, 2016 at 1:27 PM, Kai Krakow wrote: > On Tue, 5 Jul 2016 12:53:02 -0600, > Chris Murphy wrote: > >> For some reason I thought it was possible to do degraded Btrfs boots >> by removing root=UUID= in favor of a remaining good block device, e.g. >> root=/dev/vda2, and then adding degr

Re: 64-btrfs.rules and degraded boot

2016-07-05 Thread Kai Krakow
On Tue, 5 Jul 2016 12:53:02 -0600, Chris Murphy wrote: > For some reason I thought it was possible to do degraded Btrfs boots > by removing root=UUID= in favor of a remaining good block device, e.g. > root=/dev/vda2, and then adding degraded to rootflags. But this > doesn't work on either CentOS

[PATCH v2] Btrfs: fix read_node_slot to return errors

2016-07-05 Thread Liu Bo
We use read_node_slot() to read a btree node, and it has two cases: a) the slot is out of range, which means 'no such entry'; b) we fail to read the block, due to a checksum failure, corrupted content, or a missing uptodate flag. But we're returning NULL in both cases; this makes it return -ENOENT in case a
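Not the patch itself, just a minimal sketch of the error-returning pattern described above, using the kernel's usual ERR_PTR()/IS_ERR() convention; read_child_block() here is a hypothetical stand-in for the real block-reading path:

    #include <linux/err.h>
    #include <linux/errno.h>

    struct btrfs_root;
    struct extent_buffer;

    /* hypothetical helper standing in for the real read path */
    static struct extent_buffer *read_child_block(struct btrfs_root *root, int slot)
    {
            return NULL;            /* pretend the read/verify failed */
    }

    static struct extent_buffer *read_node_slot_sketch(struct btrfs_root *root,
                                                       int nritems, int slot)
    {
            struct extent_buffer *eb;

            if (slot < 0 || slot >= nritems)
                    return ERR_PTR(-ENOENT);        /* case a: no such entry */

            eb = read_child_block(root, slot);
            if (!eb)
                    return ERR_PTR(-EIO);           /* case b: read/verify failure */

            return eb;
    }

Callers can then tell the two failures apart with IS_ERR() and PTR_ERR() instead of treating every NULL as 'no such entry'.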

64-btrfs.rules and degraded boot

2016-07-05 Thread Chris Murphy
For some reason I thought it was possible to do degraded Btrfs boots by removing root=UUID= in favor of a remaining good block device, e.g. root=/dev/vda2, and then adding degraded to rootflags. But this doesn't work on either CentOS 7.2 or Fedora Rawhide. What happens is systemd waits for vda2 (or
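For reference, the combination being described amounts to a kernel command line along these lines (the device name is just the example above; whether the initramfs actually honors it is exactly the question of this thread):

    root=/dev/vda2 rootflags=degraded ro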

Re: Unable to mount degraded RAID5

2016-07-05 Thread Tomáš Hrdina
I don't know if it would be a good idea, but my disk, which disconnected, is connected again. Maybe it could help in getting the data to the right state, so the other two disks could be mounted alone. But I don't know if it would stay connected for some work. Or if it would make things even worse. Thank you T

Re: [PATCH] Btrfs: fix read_node_slot to return errors

2016-07-05 Thread Liu Bo
On Mon, Jul 04, 2016 at 07:08:18PM +0200, David Sterba wrote: > On Tue, Jun 28, 2016 at 06:55:48PM -0700, Liu Bo wrote: > > @@ -5238,6 +5256,10 @@ static void tree_move_down(struct btrfs_root *root, > > path->slots[*level]); > > path->slots[*level - 1] = 0; >

Re: Bad hard drive - checksum verify failure forces readonly mount

2016-07-05 Thread Vasco Almeida
Bug reported https://bugzilla.kernel.org/show_bug.cgi?id=121491 Thank you for helping.

Re: Unable to mount degraded RAID5

2016-07-05 Thread Chris Murphy
On Mon, Jul 4, 2016 at 9:48 PM, Andrei Borzenkov wrote: > On 04.07.2016 23:43, Chris Murphy wrote: >> >> Have you done a scrub on this file system and do you know if anything >> was fixed or if it always found no problem? >> > > scrub on degraded RAID5 cannot fix anything by definition, Right. In th

Re: [PATCH] btrfs: Fix slab accounting flags

2016-07-05 Thread Nikolay Borisov
After some days of inactivity a gentle ping is in order. On 06/23/2016 09:17 PM, Nikolay Borisov wrote: > BTRFS is using a variety of slab caches to satisfy internal needs. > Those slab caches are always allocated with the SLAB_RECLAIM_ACCOUNT flag, > meaning allocations from the caches are going to be
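For context, a minimal sketch of where that flag enters the picture; the cache name and object type below are made up for illustration and are not btrfs's actual caches. The flag is passed once at cache-creation time, so reclaim accounting is decided per cache rather than per allocation:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct demo_item {              /* illustrative object only */
            u64 bytenr;
            u64 num_bytes;
    };

    static struct kmem_cache *demo_cache;

    static int __init demo_cache_init(void)
    {
            /*
             * SLAB_RECLAIM_ACCOUNT makes allocations from this cache count as
             * reclaimable memory; it is only appropriate when the objects can
             * really be freed back under memory pressure.
             */
            demo_cache = kmem_cache_create("demo_cache",
                                           sizeof(struct demo_item), 0,
                                           SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD,
                                           NULL);
            return demo_cache ? 0 : -ENOMEM;
    }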

Re: [Bug-tar] stat() on btrfs reports the st_blocks with delay (data loss in archivers)

2016-07-05 Thread Joerg Schilling
Andreas Dilger wrote: > I think in addition to fixing btrfs (because it needs to work with existing > tar/rsync/etc. tools) it makes sense to *also* fix the heuristics of tar > to handle this situation more robustly. One option is: if st_blocks == 0, then > tar should also check if st_mtime is les
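As a userspace illustration of the kind of check being discussed (the extra st_mtime condition is cut off in the preview, so it is left out here), the basic st_blocks-versus-st_size test that archivers rely on looks roughly like this:

    #include <stdbool.h>
    #include <sys/stat.h>

    /* st_blocks is counted in 512-byte units, independent of the fs block size */
    static bool looks_sparse_or_unallocated(const struct stat *st)
    {
            if (st->st_size == 0)
                    return false;
            /*
             * On btrfs with delayed allocation, a freshly written file can
             * briefly report st_blocks == 0 even though it has data, which is
             * why an archiver needs a second check before treating the file
             * as one big hole.
             */
            return (long long)st->st_blocks * 512 < st->st_size;
    }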

[PATCH 0/3] Qgroup fixes for dirty hack routines

2016-07-05 Thread Qu Wenruo
This patchset introduces 2 fixes for data extent owner hacks. One can be triggered by balance, the other can be triggered by log replay after power loss. The root causes are all similar: the EXTENT_DATA owner is changed by dirty hacks, from swapping tree blocks containing EXTENT_DATA to manually updating ext

[PATCH v2 2/3] btrfs: relocation: Fix leaking qgroups numbers on data extents

2016-07-05 Thread Qu Wenruo
When balancing data extents, qgroup will leak all its numbers for the balanced data extents. The root cause is that balancing does a non-standard and almost insane tree block swap hack. The problem happens in the following steps: (Use 4M as the original data extent size, and 257 as the src root objectid)

[PATCH 1/3] btrfs: qgroup: Refactor btrfs_qgroup_insert_dirty_extent()

2016-07-05 Thread Qu Wenruo
Refactor the btrfs_qgroup_insert_dirty_extent() function into two functions: 1. _btrfs_qgroup_insert_dirty_extent() Almost the same as the original code. For delayed_ref usage, where the delayed refs are already locked. Change the return value type to int, since the caller never needs the pointer, but only n
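A rough sketch of the split being described; the parameter lists below are guessed for illustration and are not taken from the patch:

    /* Variant for callers that already hold the delayed-refs lock; returns
     * int because, as noted above, no caller needs the record pointer back. */
    int _btrfs_qgroup_insert_dirty_extent(struct btrfs_delayed_ref_root *delayed_refs,
                                          struct btrfs_qgroup_extent_record *record);

    /* Convenience wrapper for callers outside the delayed-refs path (such as
     * the balance and log-replay fixes later in this series): it allocates a
     * record, takes the lock, and calls the variant above. */
    int btrfs_qgroup_insert_dirty_extent(struct btrfs_trans_handle *trans,
                                         struct btrfs_fs_info *fs_info,
                                         u64 bytenr, u64 num_bytes,
                                         gfp_t gfp_flag);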

[PATCH 3/3] btrfs: qgroup: Fix qgroup incorrectness caused by log replay

2016-07-05 Thread Qu Wenruo
When doing log replay at mount time (after power loss), qgroup will leak the numbers of replayed data extents. The cause is almost the same as for balance. So fix it by manually informing qgroup about owner-changed extents. The bug can be detected by the btrfs/119 test case. Signed-off-by: Qu Wenruo --- fs/b

Re: Unable to mount degraded RAID5

2016-07-05 Thread Tomáš Hrdina
sudo btrfs-debug-tree -d /dev/sdc
btrfs-progs v4.6.1
warning, device 3 is missing
checksum verify failed on 12678831570944 found 3DC57E3E wanted 771D2379
checksum verify failed on 12678831570944 found 3DC57E3E wanted 771D2379
bytenr mismatch, want=12678831570944, have=10160133442474442752
Couldn't