ng since I never ran a kernel newer than 4.8...
Regards,
Tobias
2016-11-01 6:24 GMT+01:00 Qu Wenruo <quwen...@cn.fujitsu.com>:
> At 11/01/2016 12:46 PM, Tobias Holst wrote:
Hi
I can't mount my boot partition anymore. When I try it by entering
"mount /dev/sdi1 /mnt/boot/" I get:
> mount: wrong fs type, bad option, bad superblock on /dev/sdi1,
> missing codepage or helper program, or other error
>
> In some cases useful info is found in syslog - try
> dmesg | tail or so.
Hi
I am getting some "parent transid verify failed"-errors. Is there any
way to find out what's affected? Are these errors in metadata, data or
both - and if they are errors in the data: How can I find out which
files are affected?
Regards,
Tobias
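Transid failures on tree blocks are metadata errors and do not map cleanly to a single file, but data corruption shows up as csum failures that name an inode. A rough sketch of extracting that inode and resolving it to a path, using a log line quoted in a later mail in this thread (the mount point /mnt is an assumption):

```shell
# Sample data-csum failure line (taken from a later mail in this thread).
line='[ 176.349943] BTRFS info (device dm-4): csum failed ino 1287707 extent 21274957705216 csum 2830458701 wanted 426660650 mirror 2'

# Pull the inode number out of the message.
ino=$(printf '%s\n' "$line" | sed -n 's/.*csum failed ino \([0-9]*\).*/\1/p')
echo "$ino"

# On the mounted filesystem the inode can then be resolved to a filename:
#   btrfs inspect-internal inode-resolve "$ino" /mnt
# or, without btrfs-progs:
#   find /mnt -inum "$ino"
```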
Ah, thanks for the information!
Happy testing :)
2015-11-03 19:34 GMT+01:00 Chris Mason <c...@fb.com>:
> On Tue, Nov 03, 2015 at 07:13:37PM +0100, Tobias Holst wrote:
Hi
Anything new on this topic?
I think it would be a great thing and should be merged as soon as it
is stable. :)
Regards,
Tobias
2015-10-02 13:47 GMT+02:00 Austin S Hemmelgarn :
> On 2015-09-29 23:50, Omar Sandoval wrote:
>>
>> Hi,
>>
>> Here's one more reroll of the
ted to happen ("some devices missing"). ;)
Regards,
Tobias
2015-10-18 16:14 GMT+02:00 Philip Seeger <p0h0i0l0...@gmail.com>:
> Hi Tobias
>
> On 07/20/2015 06:20 PM, Tobias Holst wrote:
Hi
My btrfs-RAID6 seems to be broken again :(
When reading from it I get several of these:
[ 176.349943] BTRFS info (device dm-4): csum failed ino 1287707
extent 21274957705216 csum 2830458701 wanted 426660650 mirror 2
then followed by a free_raid_bio-crash:
[ 176.349961] [ cut
Original Message
Subject: Re: Uncorrectable errors on RAID6
From: Tobias Holst to...@tobby.eu
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2015-05-29 10:00
Thanks, Qu, sad news... :-(
No, I also didn't defrag with older kernels. Maybe I did it a while
ago with 3.19.x, but there was a scrub
Hi
Just a question to understand my logs. It doesn't matter where these
errors come from, I just want to understand them. What is the
difference between these two message types?
BTRFS: dm-4 checksum verify failed on 6318462353408 wanted 25D94CD6 found
8BA427D4 level 1
vs.
BTRFS warning (device
Original Message
Subject: Re: Uncorrectable errors on RAID6
From: Tobias Holst to...@tobby.eu
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2015-05-28 21:13
Ah, it's already done. You can find the error log over here:
https://paste.ee/p
+02:00 Tobias Holst to...@tobby.eu:
Hi Qu,
no, I didn't run a replace. But I ran a defrag with -clzo on all
files while there was slight I/O on the devices. Don't know if
this could cause corruption, too?
Later on I deleted a r/o-snapshot which should free a big amount of
storage
2015-05-28 4:49 GMT+02:00 Qu Wenruo quwen...@cn.fujitsu.com:
Original Message
Subject: Uncorrectable errors on RAID6
From: Tobias Holst to...@tobby.eu
To: linux-btrfs@vger.kernel.org linux-btrfs@vger.kernel.org
Date: 2015-05-28 10:18
Hi
I am doing a scrub on my 6-drive btrfs RAID6. Last time it found zero
errors, but now I am getting this in my log:
[ 6610.888020] BTRFS: checksum error at logical 478232346624 on dev
/dev/dm-2, sector 231373760: metadata leaf (level 0) in tree 2
[ 6610.888025] BTRFS: checksum error at logical
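The logical address in such a line can be handed to btrfs-progs to see what it belongs to. A hedged sketch (the mount point /mnt is an assumption; since tree 2 is the extent tree, a metadata leaf like this one will not resolve to a regular file):

```shell
# Checksum-error line from the scrub above.
line='[ 6610.888020] BTRFS: checksum error at logical 478232346624 on dev /dev/dm-2, sector 231373760: metadata leaf (level 0) in tree 2'

# Extract the logical byte number.
logical=$(printf '%s\n' "$line" | sed -n 's/.*checksum error at logical \([0-9]*\).*/\1/p')
echo "$logical"

# For data blocks, btrfs-progs can map a logical address back to files:
#   btrfs inspect-internal logical-resolve "$logical" /mnt
# Here the block is a metadata leaf in tree 2 (the extent tree), so it
# belongs to the filesystem's internal bookkeeping rather than one file.
```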
@oracle.com:
On Fri, Feb 13, 2015 at 10:54:22PM +0100, Tobias Holst wrote:
It's me again. I just found out why my system crashed during the backup.
I don't know what it means, but maybe it helps you?
The warning means the checksum has somehow become inconsistent with the file extents,
but there are no clear clues
2015-02-13 9:06 GMT+01:00 Liu Bo bo.li@oracle.com:
On Fri, Feb 13, 2015 at 12:22:16AM +0100, Tobias Holst wrote:
Hi
I don't remember the exact mkfs.btrfs options anymore but
ls /sys/fs/btrfs/[UUID]/features/
shows the following output:
big_metadata compress_lzo extended_iref
] extent_readpages+0x15e/0x1a0 [btrfs]
[c04eb400] ? btrfs_submit_direct+0x1b0/0x1b0 [btrfs]
[c04e771f] btrfs_readpages+0x1f/0x30 [btrfs]
[c04dc969] ? btrfs_congested_fn+0x49/0xb0 [btrfs]
Regards,
Tobias
2015-02-13 19:26 GMT+01:00 Tobias Holst to...@tobby.eu:
2015-02-13 9:06 GMT+01:00
system, if it's not repairable.
Regards,
Tobias
2015-02-12 10:16 GMT+01:00 Liu Bo bo.li@oracle.com:
On Wed, Feb 11, 2015 at 03:46:33PM +0100, Tobias Holst wrote:
Hmm, it looks like it is getting worse... Here are some parts of my
syslog, including two crashed btrfs-threads:
So I am
dm-5): Skipping commit of aborted transaction.
BTRFS: error (device dm-5) in cleanup_transaction:1670: errno=-5 IO failure
Any thoughts? Would it help to unplug the dm-5 device, which seems to
be causing these errors, and then balance the array?
Regards,
Tobias
2015-02-09 23:45 GMT+01:00 Tobias
2015-02-10 8:17 GMT+01:00 Kai Krakow hurikha...@gmail.com:
Tobias Holst to...@tobby.eu schrieb:
and btrfs scrub status /[device] gives me the following output:
scrub status for [UUID]
scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008 seconds
total bytes scrubbed: 113.04GiB
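As a quick sanity check on that status output, the scrub rate it implies can be computed directly (113.04 GiB scrubbed in 2008 seconds):

```shell
# Throughput implied by the scrub status above.
awk 'BEGIN { printf "%.1f MiB/s\n", 113.04 * 1024 / 2008 }'
```

That works out to roughly 57.6 MiB/s across the array.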
Hi
I am just looking at the features enabled on my btrfs volume.
ls /sys/fs/btrfs/[UUID]/features/
shows the following output:
big_metadata compress_lzo extended_iref mixed_backref raid56
So big_metadata means I am not using skinny-metadata,
compress_lzo means I am using compression.
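The sysfs names can be checked mechanically against the features one expects. A small sketch using the feature list quoted above (the interpretation matches the mail: skinny_metadata would appear as its own entry if it were enabled):

```shell
# Feature list as printed by ls /sys/fs/btrfs/[UUID]/features/ above.
features="big_metadata compress_lzo extended_iref mixed_backref raid56"

# skinny_metadata shows up as its own sysfs entry when enabled, so its
# absence here supports the conclusion drawn in the mail.
case " $features " in
  *" skinny_metadata "*) echo "skinny metadata: yes" ;;
  *)                     echo "skinny metadata: no" ;;
esac
```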
Hi
I'm having some trouble with my six-drive btrfs raid6 (each drive
encrypted with LUKS). At first: Yes, I do have backups, but it may
take at least days, maybe weeks or even some months to restore
everything from the (offsite) backups. So it is not essential to
recover the data, but would be
Hi.
There is a known bug when you re-plug a missing HDD of a btrfs RAID
without wiping the device first. In the worst case this results in a
totally corrupted filesystem, as it sometimes did during my tests of
the raid6 implementation. With raid1 it may just go back in time to
the point when you
Thank you for your reply.
I'll answer in-line.
2014-11-02 5:49 GMT+01:00 Robert White rwh...@pobox.com:
On 10/31/2014 10:34 AM, Tobias Holst wrote:
I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
and inserted one of the two HDDs of my btrfs-RAID1 to it. I can't add
PM, Tobias Holst to...@tobby.eu wrote:
Addition:
I found some posts here about a general file system corruption in 3.17
and 3.17.1 - is this the cause?
Additionally I am using ro-snapshots - maybe this is the cause, too?
Anyway: Can I fix that or do I have to reinstall? Haven't touched
Hi
I was using a btrfs RAID1 with two disks under Ubuntu 14.04, kernel
3.13 and btrfs-tools 3.14.1 for weeks without issues.
Now I updated to kernel 3.17.1 and btrfs-tools 3.17. After a reboot
everything looked fine and I started some tests. While running
duperemover (just scanning, not doing
).
Regards
Tobias
2014-10-31 1:29 GMT+01:00 Tobias Holst to...@tobby.eu:
If it is unknown which of these options were used at btrfs creation
time, is it possible to check the state of these options afterwards on
a mounted or unmounted filesystem?
2014-09-23 15:38 GMT+02:00 Austin S Hemmelgarn ahferro...@gmail.com:
Well, running 'mkfs.btrfs -O list-all' with
Hi
Is there anything new on this topic? I am using Ubuntu 14.04.1 and
experiencing the same problem.
- 6 HDDs
- LUKS on every HDD
- btrfs RAID6 over these 6 crypt devices
No LVM, no nodatacow files.
Mount-options: defaults,compress-force=lzo,space_cache
With the original 3.13-kernel
I think after the balance it was a fine, non-degraded RAID again... As
far as I remember.
Tobby
2014-03-20 1:46 GMT+01:00 Marc MERLIN m...@merlins.org:
On Thu, Mar 20, 2014 at 01:44:20AM +0100, Tobias Holst wrote:
I tried the RAID6 implementation of btrfs and it looks like I had the
same
2014-03-09 18:36 GMT+01:00 Austin S Hemmelgarn ahferro...@gmail.com:
On 03/09/2014 04:17 AM, Swâmi Petaramesh wrote:
Le dimanche 9 mars 2014 08:48:20 KC a écrit :
I am experiencing massive performance degradation on my BTRFS
root partition on SSD.
BTW, is BTRFS still an SSD-killer? It had