On Mon, Dec 3, 2018, at 4:31 AM, Stefan Malte Schumacher wrote:
> I have noticed an unusual number of CRC errors in downloaded RARs,
> beginning about a week ago. But let's start with the preliminaries. I
> am using Debian Stretch.
> Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
On 2018/12/3 5:31 PM, Stefan Malte Schumacher wrote:
> Hello,
>
> I have noticed an unusual number of CRC errors in downloaded RARs,
> beginning about a week ago. But let's start with the preliminaries. I
> am using Debian Stretch.
> Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
Hello,
I have noticed an unusual number of CRC errors in downloaded RARs,
beginning about a week ago. But let's start with the preliminaries. I
am using Debian Stretch.
Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
(2018-08-21) x86_64 GNU/Linux
BTRFS-Tools btrfs-progs 4.7.3-1
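A minimal sketch of how this kind of corruption is usually narrowed down (not from the original post; the mount point /mnt/data is illustrative):
sudo btrfs device stats /mnt/data      # per-device write/read/flush/corruption/generation counters
sudo btrfs scrub start -Bd /mnt/data   # verify all data against its checksums, per-device report
sudo dmesg | grep -i 'csum failed'     # btrfs-detected checksum errors, if any
If none of these show errors, the filesystem is returning the data it was given and the corruption happened before the bytes reached the disk.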
On 22.10.18 at 22:02, Gervais, Francois wrote:
> Hi,
>
> I think I lost power on my btrfs disk and it looks like it is now in an
> unfunctional state.
>
> Any idea how I could debug that issue?
>
> Here is what I have:
>
> kernel 4.4.0-119-generic
> btrfs-progs v4.4
>
>
>
> sudo btrfs
On 2018/10/23 4:02 AM, Gervais, Francois wrote:
> Hi,
>
> I think I lost power on my btrfs disk and it looks like it is now in an
> unfunctional state.
What does the word "unfunctional" mean here?
Unable to mount, or something else?
>
> Any idea how I could debug that issue?
>
> Here is what I have:
Hi,
I think I lost power on my btrfs disk and it looks like it is now in an
unfunctional state.
Any idea how I could debug that issue?
Here is what I have:
kernel 4.4.0-119-generic
btrfs-progs v4.4
sudo btrfs check /dev/sdd
Checking filesystem on /dev/sdd
UUID:
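For reference, the usual read-only first steps on a 4.4-era system look roughly like this (device name and mount point illustrative, not from the post):
sudo dmesg | tail -n 50                    # look for the actual mount/transid error first
sudo btrfs check /dev/sdd                  # read-only by default; do not pass --repair yet
sudo mount -o ro,recovery /dev/sdd /mnt    # "recovery" is the pre-4.6 name of usebackuproot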
Hi,
My filesystem appears to have become corrupted and btrfs check appears
to get stuck in an infinite loop trying to repair it.
The issue initially manifested itself as a BUG (RIP:
btrfs_set_item_key_safe+0x132/0x190) on v4.14.8 - see attached
dmesg.txt. I do not know whether this is the cause
On Sun, 24 May 2015 01:02:21 AM Jan Voet wrote:
Doing a 'btrfs balance cancel' immediately after the array was mounted
seems to have done the trick. A subsequent 'btrfs check' didn't show any
errors at all and all the data seems to be there. :-)
I now add rootflags=skip_balance to the kernel command line.
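A minimal sketch of that workaround for a non-root filesystem (device and mount point illustrative):
sudo mount -o skip_balance /dev/sdb /mnt/array   # mount without resuming the interrupted balance
sudo btrfs balance cancel /mnt/array             # then drop the pending balance for good
For a root filesystem the same effect comes from booting with rootflags=skip_balance.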
Jan Voet jan.voet at gmail.com writes:
Duncan 1i5t5.duncan at cox.net writes:
FWIW, btrfs raid5 (and raid6, together called raid56 mode) is still
extremely new, only normal runtime implemented as originally introduced,
with complete repair from a device failure only completely
Chris Murphy posted on Fri, 22 May 2015 13:15:09 -0600 as excerpted:
On Thu, May 21, 2015 at 10:43 PM, Duncan 1i5t5.dun...@cox.net wrote:
For in-production use, therefore, btrfs raid56 mode, while now at least
in theory complete, is really too immature at this point to recommend.
At some
On Thu, May 21, 2015 at 10:43 PM, Duncan 1i5t5.dun...@cox.net wrote:
For in-production use, therefore, btrfs raid56 mode, while now at least
in theory complete, is really too immature at this point to recommend.
At some point perhaps a developer will have time to state the expected
stability
Duncan 1i5t5.duncan at cox.net writes:
FWIW, btrfs raid5 (and raid6, together called raid56 mode) is still
extremely new, only normal runtime implemented as originally introduced,
with complete repair from a device failure only completely implemented in
kernel 3.19, and while in theory
Hi,
I recently upgraded a quite old home NAS system (Celeron M based) to Ubuntu
14.04 with an upgraded linux kernel (3.19.8) and BTRFS tools v3.17. This
system has 5 brand new 6TB drives (HGST) with all drives directly handled by
BTRFS, both data and metadata in RAID5.
After loading up the
Jan Voet posted on Thu, 21 May 2015 21:43:36 + as excerpted:
I recently upgraded a quite old home NAS system (Celeron M based) to
Ubuntu 14.04 with an upgraded linux kernel (3.19.8) and BTRFS tools
v3.17.
This system has 5 brand new 6TB drives (HGST) with all drives directly
handled by
Hello,
I am having trouble with my btrfs setup. An unwanted reset probably
caused the corruption. I can mount the filesystem, but cannot perform
scrub as this ends with GPF.
uname -a
Linux sysresccd 3.14.24-alt441-amd64 #2 SMP Sun Nov 16 08:27:16 UTC
2014 x86_64 AMD Phenom(tm) II X4 965
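For reference, scrub is normally driven like this (mount point illustrative); the reporter's GPF happens on the first of these commands:
sudo btrfs scrub start /mnt     # runs in the background
sudo btrfs scrub status /mnt    # progress and error counts
sudo btrfs scrub cancel /mnt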
Zygo Blaxell posted on Mon, 03 Nov 2014 23:31:45 -0500 as excerpted:
On Mon, Nov 03, 2014 at 10:11:18AM -0700, Chris Murphy wrote:
On Nov 2, 2014, at 8:43 PM, Zygo Blaxell zblax...@furryterror.org
wrote:
btrfs seems to assume the data is correct on both disks (the
generation numbers and
On Nov 3, 2014, at 9:31 PM, Zygo Blaxell zblax...@furryterror.org wrote:
On Mon, Nov 03, 2014 at 10:11:18AM -0700, Chris Murphy wrote:
On Nov 2, 2014, at 8:43 PM, Zygo Blaxell zblax...@furryterror.org wrote:
btrfs seems to assume the data is correct on both disks (the generation
numbers
Chris Murphy posted on Tue, 04 Nov 2014 11:28:39 -0700 as excerpted:
It needs to be more than a sequential number. If one of the disks
disappears we need to record this fact on the surviving disks, and also
cope with _both_ disks claiming to be the surviving one.
I agree this is also a
On 11/04/2014 10:28 AM, Chris Murphy wrote:
On Nov 3, 2014, at 9:31 PM, Zygo Blaxell zblax...@furryterror.org wrote:
Now we have two disks with equal generation numbers. Generations 6..9
on sda are not the same as generations 6..9 on sdb, so if we mix the
two disks' metadata we get bad
On Tue, Nov 04, 2014 at 11:28:39AM -0700, Chris Murphy wrote:
On Nov 3, 2014, at 9:31 PM, Zygo Blaxell zblax...@furryterror.org wrote:
It needs to be more than a sequential number. If one of the disks
disappears we need to record this fact on the surviving disks, and also
cope with _both_
On Nov 2, 2014, at 8:43 PM, Zygo Blaxell zblax...@furryterror.org wrote:
On Sun, Nov 02, 2014 at 02:57:22PM -0700, Chris Murphy wrote:
For example if I have a two device Btrfs raid1 for both data and
metadata, and one device is removed and I mount -o degraded,rw one
of them and make some
On Mon, Nov 03, 2014 at 10:11:18AM -0700, Chris Murphy wrote:
On Nov 2, 2014, at 8:43 PM, Zygo Blaxell zblax...@furryterror.org wrote:
btrfs seems to assume the data is correct on both disks (the generation
numbers and checksums are OK) but gets confused by equally plausible but
different
On Nov 1, 2014, at 10:49 PM, Robert White rwh...@pobox.com wrote:
On 10/31/2014 10:34 AM, Tobias Holst wrote:
I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
and inserted one of the two HDDs of my btrfs-RAID1 to it. I can't add
the second one as there are only two slots
Thank you for your reply.
I'll answer in-line.
2014-11-02 5:49 GMT+01:00 Robert White rwh...@pobox.com:
On 10/31/2014 10:34 AM, Tobias Holst wrote:
I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
and inserted one of the two HDDs of my btrfs-RAID1 to it. I can't add
On Sun, Nov 02, 2014 at 02:57:22PM -0700, Chris Murphy wrote:
On Nov 1, 2014, at 10:49 PM, Robert White rwh...@pobox.com wrote:
On 10/31/2014 10:34 AM, Tobias Holst wrote:
I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
and inserted one of the two HDDs of my
On 11/02/2014 06:55 PM, Tobias Holst wrote:
But I can't do a balance anymore?
root@t-mon:~# btrfs balance start /dev/sda1
ERROR: can't access '/dev/sda1'
Balance takes place on a mounted filesystem, not on a raw block device.
So...
mount -t btrfs /dev/sda1 /some/path/somewhere
btrfs balance start /some/path/somewhere
On 10/31/2014 10:34 AM, Tobias Holst wrote:
I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
and inserted one of the two HDDs of my btrfs-RAID1 to it. I can't add
the second one as there are only two slots in that server.
This is what I got:
tobby@ubuntu: sudo btrfs
I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
and inserted one of the two HDDs of my btrfs-RAID1 to it. I can't add
the second one as there are only two slots in that server.
This is what I got:
tobby@ubuntu: sudo btrfs check /dev/sdb1
warning, device 2 is missing
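A minimal sketch of the usual way to inspect a raid1 member whose partner is absent (mount point illustrative, read-only on purpose):
sudo btrfs device scan
sudo mount -o ro,degraded /dev/sdb1 /mnt
# A writable degraded mount of a two-device raid1 can create single-profile
# chunks, so stay read-only unless the missing disk is being replaced.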
Hi
I was using a btrfs RAID1 with two disks under Ubuntu 14.04, kernel
3.13 and btrfs-tools 3.14.1 for weeks without issues.
Now I updated to kernel 3.17.1 and btrfs-tools 3.17. After a reboot
everything looked fine and I started some tests. While running
duperemove (just scanning, not doing
Addition:
I found some posts here about a general file system corruption in 3.17
and 3.17.1 - is this the cause?
Additionally I am using ro-snapshots - maybe this is the cause, too?
Anyway: Can I fix that or do I have to reinstall? Haven't touched the
filesystem, just did a scrub (found 0
On Thu, Oct 30, 2014 at 9:02 PM, Tobias Holst to...@tobby.eu wrote:
Addition:
I found some posts here about a general file system corruption in 3.17
and 3.17.1 - is this the cause?
Additionally I am using ro-snapshots - maybe this is the cause, too?
Anyway: Can I fix that or do I have to
Summarizing what I've seen on the threads...
First of all many thanks for summarizing the info.
1) The bug seems to be read-only snapshot related. The connection to
send is that send creates read-only snapshots, but people creating
read-only snapshots for other purposes are now reporting
...@prnet.org wrote:
From my own experience and based on what other people are saying, I
think there is a random btrfs filesystem corruption problem in kernel
3.17 at least related to snapshots, therefore I decided to post using
another subject to draw attention from people not concerned about btrfs
send
admin posted on Tue, 14 Oct 2014 13:17:41 +0200 as excerpted:
And if you're affected, be aware that until we have a fix, we don't
know if it'll be possible to remove the affected and currently
undeletable snapshots. If it's not, at some point you'll need to do a
fresh mkfs.btrfs, to get rid
On 10/14/2014 02:35 PM, Duncan wrote:
But at some point, presumably after a fix is in place, since the damaged
snapshots aren't currently always deletable, if the fix only prevents new
damage from occurring and doesn't provide a way to fix the damaged ones,
then mkfs would be the only way to do
Robert White posted on Tue, 14 Oct 2014 15:03:21 -0700 as excerpted:
What happens if btrfs property set is used to (attempt to) promote the
snapshot from read-only to read-write? Can the damaged snapshot then be
subjected to scrub or btrfsck?
e.g.
btrfs property set /path/to/snapshot ro
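The quoted command is cut off in the archive; the full form being suggested is presumably along these lines (path illustrative):
btrfs property set /path/to/snapshot ro false   # flip the snapshot to read-write
btrfs property get /path/to/snapshot ro         # confirm the flag actually changed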
From my own experience and based on what other people are saying, I
think there is a random btrfs filesystem corruption problem in kernel
3.17 at least related to snapshots, therefore I decided to post using
another subject to draw attention from people not concerned about btrfs
send to it. More
On Mon, Oct 13, 2014 at 4:27 PM, David Arendt ad...@prnet.org wrote:
From my own experience and based on what other people are saying, I
think there is a random btrfs filesystem corruption problem in kernel
3.17 at least related to snapshots, therefore I decided to post using
another subject
I think I just found a consistent simple way to trigger the problem
(at least on my system). And, as I guessed before, it seems to be
related just to readonly snapshots:
1) I create a readonly snapshot
2) I do some changes on the source subvolume for the snapshot (I'm not
sure changes are
On Mon, Oct 13, 2014 at 4:48 PM, john terragon jterra...@gmail.com wrote:
I think I just found a consistent simple way to trigger the problem
(at least on my system). And, as I guessed before, it seems to be
related just to readonly snapshots:
1) I create a readonly snapshot
2) I do some
On Mon, Oct 13, 2014 at 4:55 PM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
On Mon, Oct 13, 2014 at 4:48 PM, john terragon jterra...@gmail.com wrote:
After rebooting (or remounting) I consistently get the corruption
with the usual multitude of these in dmesg
parent transid verify
I'm using compress=no so compression doesn't seem to be related, at
least in my case. Just read-only snapshots on 3.17 (although I haven't
tried 3.16).
John
As these two machines are running as servers for different purposes (yes,
I know that btrfs is unstable and any corruption or data loss is at my
own risk, therefore I have good backups), I want to reboot them no more
than necessary.
However I tried to bring my reboot times in relation with
I'm also using no compression.
On 10/13/2014 11:22 PM, john terragon wrote:
I'm using compress=no so compression doesn't seem to be related, at
least in my case. Just read-only snapshots on 3.17 (although I haven't
tried 3.16).
John
David Arendt posted on Mon, 13 Oct 2014 23:25:23 +0200 as excerpted:
I'm also using no compression.
On 10/13/2014 11:22 PM, john terragon wrote:
I'm using compress=no so compression doesn't seem to be related, at
least in my case. Just read-only snapshots on 3.17 (although I haven't
tried
Rich Freeman posted on Mon, 13 Oct 2014 16:42:14 -0400 as excerpted:
On Mon, Oct 13, 2014 at 4:27 PM, David Arendt ad...@prnet.org wrote:
From my own experience and based on what other people are saying, I
think there is a random btrfs filesystem corruption problem in kernel
3.17 at least
On Mon, Oct 13, 2014 at 5:22 PM, john terragon jterra...@gmail.com wrote:
I'm using compress=no so compression doesn't seem to be related, at
least in my case. Just read-only snapshots on 3.17 (although I haven't
tried 3.16).
I was using lzo compression, and hence my comment about turning it
And another worrying thing I didn't notice before. Two snapshots have
dates that do not make sense. root-b3 and root-b4 were created on
Oct 14th (and btw root's modification time was also on Oct 14th).
So why do they show Oct 10th? And root-prov has actually been created
on Oct 10 15:37, as