Hi,
I've now shut down all fuzzer nodes since they only cost money and
there is no progress on most of the aforementioned bugs.
Best regards
Lukas
-- Forwarded message --
From: Lukas Lueg
Date: 2016-09-26 11:39 GMT+02:00
Subject: Re: State of the fuzzer
On Tue, Oct 11, 2016 at 02:48:09PM +0200, David Sterba wrote:
> Hi,
>
> looks like a lot of random bitflips.
>
> On Mon, Oct 10, 2016 at 11:50:14PM +0200, a...@aron.ws wrote:
> > item 109 has a few strange chars in its name (and it's truncated):
> > 1-x86_64.pkg.tar.xz 0x62 0x14 0x0a 0x0a
> >
> -Original Message-
> From: ch...@colorremedies.com [mailto:ch...@colorremedies.com] On
> Behalf Of Chris Murphy
> Sent: Monday, October 10, 2016 11:23 PM
> To: Jason D. Michaelson
> Cc: Chris Murphy; Btrfs BTRFS
> Subject: Re: raid6 file system in a bad state
>
> What do you get for
>
On Fri, Sep 23, 2016 at 02:05:04PM -0700, Liu Bo wrote:
> While updating the btree, we try to push items between sibling
> nodes/leaves in order to keep the height as low as possible.
> But we don't memset the original locations to zero when
> pushing items, so we can end up leaving stale content
>
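A minimal userspace sketch of the failure mode described above (the
struct and helper below are illustrative, not the kernel's actual code):

#include <string.h>

/* Hypothetical fixed-size item standing in for a btree leaf entry. */
struct item { unsigned long key; char data[32]; };

/*
 * Push the first 'count' items of 'src' onto the end of 'dst'.
 * Without the final memset, the vacated region of 'src' keeps its
 * old bytes -- the "stale content" the patch is about.
 */
static void push_items(struct item *dst, int dst_nr,
		       struct item *src, int src_nr, int count)
{
	memcpy(dst + dst_nr, src, count * sizeof(*src));
	memmove(src, src + count, (src_nr - count) * sizeof(*src));
	/* the conceptual fix: zero the now-unused tail */
	memset(src + (src_nr - count), 0, count * sizeof(*src));
}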
On Tue, Oct 11, 2016 at 11:20:41AM -0400, Chris Mason wrote:
>
>
> On 10/11/2016 11:19 AM, Dave Jones wrote:
> > On Tue, Oct 11, 2016 at 04:11:39PM +0100, Al Viro wrote:
> > > On Tue, Oct 11, 2016 at 10:45:08AM -0400, Dave Jones wrote:
> > > > This is from Linus' current tree, with Al's
On Tue, Oct 11, 2016 at 12:47 AM, Wang Xiaoguang
wrote:
> If we use the mount option "-o max_inline=sectorsize", say 4096, then indeed,
> even for a fresh fs where the nodesize is, say, 16k, we cannot make the first
> 4k of data completely inline. I found this condition causing the issue:
>
On Tue, Oct 11, 2016 at 9:52 AM, Jason D. Michaelson
wrote:
>> btrfs rescue super-recover -v
>
> root@castor:~/logs# btrfs rescue super-recover -v /dev/sda
> All Devices:
> Device: id = 2, name = /dev/sdh
> Device: id = 3, name = /dev/sdd
>
On Tue, Oct 11, 2016 at 10:45:08AM -0400, Dave Jones wrote:
> This is from Linus' current tree, with Al's iovec fixups on top.
Those iovec fixups are in the current tree... TBH, I don't see anything
in splice-related stuff that could come anywhere near that (short of
some general memory
On 10/11/2016 11:19 AM, Dave Jones wrote:
On Tue, Oct 11, 2016 at 04:11:39PM +0100, Al Viro wrote:
> On Tue, Oct 11, 2016 at 10:45:08AM -0400, Dave Jones wrote:
> > This is from Linus' current tree, with Al's iovec fixups on top.
>
> Those iovec fixups are in the current tree...
ah yeah,
On Tue, Oct 11, 2016 at 04:11:39PM +0100, Al Viro wrote:
> On Tue, Oct 11, 2016 at 10:45:08AM -0400, Dave Jones wrote:
> > This is from Linus' current tree, with Al's iovec fixups on top.
>
> Those iovec fixups are in the current tree...
ah yeah, git quietly dropped my local copy when I
Hello,
I have to build a RAID 6 with the following 3 requirements:
• Use different kinds of disks with different sizes.
• When a disk fails and there's enough space, the RAID should be able
to reconstruct itself out of the degraded state. Meaning, if I have e.g. a
RAID with 8
This is from Linus' current tree, with Al's iovec fixups on top.
[ cut here ]
WARNING: CPU: 1 PID: 3673 at lib/list_debug.c:33 __list_add+0x89/0xb0
list_add corruption. prev->next should be next (e8806648), but was
c967fcd8. (prev=880503878b80).
CPU: 1
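For context, that warning comes from the kernel's list debugging; a
simplified userspace analogue of the check in lib/list_debug.c
(paraphrased, not a verbatim copy) looks like this:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/*
 * Before linking 'entry' between 'prev' and 'next', verify that the
 * neighbours still point at each other.  A failure means the list was
 * modified concurrently or the memory was corrupted -- exactly what
 * the trace above reports.
 */
static int checked_list_add(struct list_head *entry,
			    struct list_head *prev,
			    struct list_head *next)
{
	if (prev->next != next) {
		fprintf(stderr, "list_add corruption. prev->next should be "
			"next (%p), but was %p. (prev=%p).\n",
			(void *)next, (void *)prev->next, (void *)prev);
		return -1;
	}
	next->prev = entry;
	entry->next = next;
	entry->prev = prev;
	prev->next = entry;
	return 0;
}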
Hi Linus,
My for-linus-4.9 has our merge window pull:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus-4.9
This is later than normal because I was tracking down a use-after-free
during btrfs/101 in xfstests. I had hoped to fix up the offending
patch, but wasn't
Hi,
looks like a lot of random bitflips.
On Mon, Oct 10, 2016 at 11:50:14PM +0200, a...@aron.ws wrote:
> item 109 has a few strange chars in its name (and it's truncated):
> 1-x86_64.pkg.tar.xz 0x62 0x14 0x0a 0x0a
>
> item 105 key (261 DIR_ITEM 54556048) itemoff 11723 itemsize 72
>
re-adding btrfs
On Tue, Oct 11, 2016 at 1:00 PM, Jason D. Michaelson
wrote:
>
>
>> -Original Message-
>> From: ch...@colorremedies.com [mailto:ch...@colorremedies.com] On
>> Behalf Of Chris Murphy
>> Sent: Tuesday, October 11, 2016 12:41 PM
>> To: Jason D.
On Tue, Oct 11, 2016 at 10:18:51AM +0800, Qu Wenruo wrote:
> >> -/* Caller should ensure sizeof(*ret) >= 29 "NODATASUM|NODATACOW|READONLY" */
> >> +#define copy_one_inode_flag(flags, name, empty, dst) ({	\
> >> +	if (flags & BTRFS_INODE_##name) {
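The truncated macro above relies on preprocessor token pasting; a
self-contained sketch of the same pattern (the flag values and buffer
handling here are assumptions for illustration, not the actual
btrfs-progs patch):

#include <stdio.h>
#include <string.h>

/* Illustrative flag bits; the real values live in the btrfs headers. */
#define BTRFS_INODE_NODATASUM	(1U << 0)
#define BTRFS_INODE_NODATACOW	(1U << 1)
#define BTRFS_INODE_READONLY	(1U << 2)

/*
 * The ##name paste selects the flag constant and #name stringifies it
 * for output; 'empty' tracks whether a '|' separator is needed.
 * (GNU statement expression, as in the original patch.)
 */
#define copy_one_inode_flag(flags, name, empty, dst) ({	\
	if ((flags) & BTRFS_INODE_##name) {		\
		if (!(empty))				\
			strcat((dst), "|");		\
		strcat((dst), #name);			\
		(empty) = 0;				\
	}						\
})

int main(void)
{
	char buf[64] = "";
	int empty = 1;
	unsigned flags = BTRFS_INODE_NODATASUM | BTRFS_INODE_READONLY;

	copy_one_inode_flag(flags, NODATASUM, empty, buf);
	copy_one_inode_flag(flags, NODATACOW, empty, buf);
	copy_one_inode_flag(flags, READONLY, empty, buf);
	printf("%s\n", buf);	/* prints NODATASUM|READONLY */
	return 0;
}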
I think you just described all the benefits of btrfs in that type of
configuration. Unfortunately, after btrfs RAID 5 & 6 was marked as
OK, it got marked as "it will eat your data" (and there are a ton of
people in random places popping up with RAID 5 & 6 setups that just
killed their data).
On 11
On Tue, Oct 11, 2016 at 8:14 AM, Philip Louis Moetteli
wrote:
>
> Hello,
>
>
> I have to build a RAID 6 with the following 3 requirements:
You should under no circumstances use RAID5/6 for anything other than
test and throw-away data.
It has several known issues that
On 2016-10-11 11:14, Philip Louis Moetteli wrote:
Hello,
I have to build a RAID 6 with the following 3 requirements:
• Use different kinds of disks with different sizes.
• When a disk fails and there's enough space, the RAID should be able
to reconstruct itself out of the
On Tue, Oct 11, 2016 at 03:14:30PM +, Philip Louis Moetteli wrote:
> Hello,
>
>
> I have to build a RAID 6 with the following 3 requirements:
>
> • Use different kinds of disks with different sizes.
> • When a disk fails and there's enough space, the RAID should be able
> to
>
>
> Bad superblocks can't be a good thing and would only cause confusion.
> I'd think that a known bad superblock would be ignored at mount time
> and even by btrfs-find-root, or maybe even replaced like any other kind
> of known bad metadata where good copies are available.
>
>
On Tue, Oct 11, 2016 at 11:54:09AM -0400, Chris Mason wrote:
>
>
> On 10/11/2016 10:45 AM, Dave Jones wrote:
> > This is from Linus' current tree, with Al's iovec fixups on top.
> >
> > [ cut here ]
> > WARNING: CPU: 1 PID: 3673 at lib/list_debug.c:33
On 10/11/2016 10:45 AM, Dave Jones wrote:
> This is from Linus' current tree, with Al's iovec fixups on top.
>
> [ cut here ]
> WARNING: CPU: 1 PID: 3673 at lib/list_debug.c:33 __list_add+0x89/0xb0
> list_add corruption. prev->next should be next (e8806648), but
On Tue, Oct 11, 2016 at 10:10 AM, Jason D. Michaelson
wrote:
> superblock: bytenr=65536, device=/dev/sda
> -
> generation 161562
> root 5752616386560
> superblock: bytenr=65536,
https://btrfs.wiki.kernel.org/index.php/Status
Scrub + RAID56 Unstable will verify but not repair
This doesn't seem quite accurate. It does repair the vast majority of
the time. On scrub, though, there's maybe a 1 in 3 or 1 in 4 chance that a bad
data strip results in a.) a fixed-up data strip from parity
At 10/12/2016 07:58 AM, Chris Murphy wrote:
https://btrfs.wiki.kernel.org/index.php/Status
Scrub + RAID56 Unstable will verify but not repair
This doesn't seem quite accurate. It does repair the vast majority of
the time. On scrub, though, there's maybe a 1 in 3 or 1 in 4 chance that a bad
data strip
Ignoring the RAID56 bugs for a moment, if you have mismatched drives,
BtrFS RAID1 is a pretty good way of utilising available space and
having redundancy.
My home array is BtrFS with a cobbled-together collection of disks
ranging from 500GB to 3TB (and 5 of them, so it's not an even number).
I
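For mismatched drives, a rough usable-space estimate for btrfs RAID1 is
min(total/2, total - largest), since every chunk needs two copies on
distinct devices; this is a heuristic, and the disk sizes below are
hypothetical, just to show the arithmetic:

#include <stdio.h>

int main(void)
{
	/* hypothetical sizes in GB, loosely matching "500GB to 3TB" */
	long disks[] = { 500, 1000, 2000, 3000, 3000 };
	long total = 0, largest = 0;

	for (unsigned i = 0; i < sizeof(disks) / sizeof(disks[0]); i++) {
		total += disks[i];
		if (disks[i] > largest)
			largest = disks[i];
	}
	/* two copies per chunk, never both on the same device */
	long usable = (total / 2 < total - largest) ? total / 2
						    : total - largest;
	printf("total=%ldGB usable~=%ldGB\n", total, usable);
	return 0;
}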
On Wed, Oct 12, 2016 at 09:32:17AM +0800, Qu Wenruo wrote:
> >But consider the identical scenario with md or LVM raid5, or any
> >conventional hardware raid5. A scrub check simply reports a mismatch.
> >It's unknown whether data or parity is bad, so the bad data strip is
> >propagated upward to
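A toy illustration of that ambiguity, assuming two data strips and XOR
parity as in RAID5 (not btrfs code, just the arithmetic):

#include <stdio.h>

int main(void)
{
	unsigned char d0 = 0xAA, d1 = 0x55;
	unsigned char parity = d0 ^ d1;	/* RAID5-style XOR parity */

	d0 ^= 0x01;			/* silently corrupt one data strip */

	/* a scrub check only sees that the equation no longer holds... */
	if ((d0 ^ d1) != parity)
		printf("mismatch detected\n");

	/*
	 * ...but from (d0, d1, parity) alone it cannot tell whether d0,
	 * d1, or the parity is the bad one.  btrfs csums on data blocks
	 * are what let it pick the right strip to rebuild from parity.
	 */
	return 0;
}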
rsync -S causes a large number of small writes separated by small seeks
to form sparse holes in files that contain runs of zero bytes. Rarely,
this can lead btrfs to write a file with a compressed inline extent
followed by other data, like this:
Filesystem type is: 9123683e
File size of
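A hedged sketch of the write pattern being described: a small write, a
seek over a zero run, then more data. On btrfs with compression enabled
this is the kind of sequence that can, rarely, produce the
inline-extent-followed-by-data layout (the filename and sizes are
illustrative, and this is not a guaranteed reproducer):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	char buf[512] = { 'a' };	/* mostly zeros: compresses well */
	int fd = open("testfile", O_CREAT | O_TRUNC | O_WRONLY, 0644);

	if (fd < 0)
		return 1;
	write(fd, buf, sizeof(buf));	/* small head, candidate for inline */
	lseek(fd, 1 << 20, SEEK_SET);	/* skip a run of zeros: sparse hole */
	write(fd, buf, sizeof(buf));	/* data after the hole */
	close(fd);
	return 0;
}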
hi,
Stefan often reports enospc errors on his servers when btrfs compression
is enabled. He has now applied these 2 patches, and no enospc error has
occurred for more than 6 days, so it seems they are useful :)
These 2 patches are also somewhat big, so please check them, thanks.
Regards,
hi,
On 10/11/2016 11:49 PM, Chris Murphy wrote:
On Tue, Oct 11, 2016 at 12:47 AM, Wang Xiaoguang
wrote:
If we use the mount option "-o max_inline=sectorsize", say 4096, then indeed,
even for a fresh fs where the nodesize is, say, 16k, we cannot make the first
4k of data completely
On Mon, Oct 10, 2016 at 08:07:53AM -0400, Austin S. Hemmelgarn wrote:
> On 2016-10-09 19:12, Charles Zeitler wrote:
> >Is there any advantage to using NAS drives
> >under RAID levels, as opposed to regular
> >'desktop' drives for BTRFS?
[...]
> So, as for what you should use in a RAID array,
At 10/12/2016 12:37 PM, Zygo Blaxell wrote:
On Wed, Oct 12, 2016 at 09:32:17AM +0800, Qu Wenruo wrote:
But consider the identical scenario with md or LVM raid5, or any
conventional hardware raid5. A scrub check simply reports a mismatch.
It's unknown whether data or parity is bad, so the bad
If we use the mount option "-o max_inline=sectorsize", say 4096, then indeed,
even for a fresh fs where the nodesize is, say, 16k, we cannot make the first
4k of data completely inline. I found this condition causing the issue:
!compressed_size && (actual_end & (root->sectorsize - 1)) == 0
If it returns true, we'll
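A quick userspace check of why the first 4k trips that condition when the
sectorsize is 4096 (this mirrors the quoted test, not the kernel function
itself):

#include <stdio.h>

int main(void)
{
	unsigned long sectorsize = 4096;
	unsigned long actual_end = 4096;	/* end of the first 4k write */
	unsigned long compressed_size = 0;	/* data is not compressed */

	/* the quoted condition: true means the data is NOT inlined */
	if (!compressed_size && (actual_end & (sectorsize - 1)) == 0)
		printf("not inlined: 4096 & 4095 == 0 (sector-aligned end)\n");
	return 0;
}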
Hi Filipe:
Why was the continue statement replaced with a break statement?
Because we released the path earlier, it cannot be used any further;
we need to jump out, and then go to "again".
Supplement:
We found an fsync deadlock, i.e. 32021->32020->32028->14431->14436->32021,
where the numbers are pids.
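The continue-versus-break question follows a common btrfs pattern; a
self-contained sketch of it (the names are stand-ins, not the actual
patch):

#include <stdio.h>

/* stand-ins for struct btrfs_path and its helpers */
struct path { int valid; };
static void release_path(struct path *p) { p->valid = 0; }
static int  search_slot(struct path *p)  { p->valid = 1; return 0; }

/*
 * Once the path has been released, the leaf/slot it referenced are
 * invalid, so the loop must not 'continue' with them -- it has to
 * break out and re-search from 'again'.
 */
static int walk(struct path *p, int nritems)
{
	int restarted = 0;
again:
	if (search_slot(p))
		return -1;
	for (int slot = 0; slot < nritems; slot++) {
		if (slot == 1 && !restarted) {
			/* pretend processing this slot released the path */
			release_path(p);
			restarted = 1;
			goto again;	/* not 'continue': p is stale */
		}
	}
	return 0;
}

int main(void)
{
	struct path p = { 0 };
	printf("walk -> %d\n", walk(&p, 3));
	return 0;
}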
Hi Filipe:
Because btrfs_calc_trunc_metadata_size reserves leafsize + nodesize *
(8 - 1):
assuming leafsize is the same as nodesize, we reserve 8 nodesizes in total.
When splitting a leaf, we need 2 paths, so if the extent tree level is
smaller than 4, it's OK,
because the worst case is (leafsize + nodesize * 3) * 2, which is
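Worked through with an assumed nodesize = leafsize = 16k for illustration:
the reservation is 16k + 16k * 7 = 128k (8 nodes), while two worst-case
paths cost (16k + 16k * 3) * 2 = 128k, so the reservation just covers the
leaf split as long as the tree is at most 4 levels deep.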
From: Robbie Ko
During tree log recovery, a space_cache rebuild or a dirty cache may save
the cache. When an extent is then replayed with its disk_bytenr and
disk_num_bytes, that range may already have been used for the free space
inode, which will lead to -EINVAL.
BTRFS: error in