On Wed, Dec 21, 2016 at 2:09 PM, Chris Murphy <li...@colorremedies.com> wrote:
> What about CONFIG_BTRFS_FS_CHECK_INTEGRITY? And then using check_int
> mount option?
This slows things down, and in that case it might avoid the problem if
it's the result of a race condition.
--
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
the problem on-disk
gracefully. Maybe repair will fix it. Another possibility is to move
to kernel 4.9.0 and see if it still reproduces. Per usual, there's a
ton of bug fixes in each kernel release.
Otherwise I'm out of ideas.
Chris Murphy
an be
> reasonably
> expected to handle, or would I be better off recreating the file system and
> restoring from
> either my saved btrfs send archives or the more reliable backups?
It might be that a usebackuproot,ro mount will happen faster, and you
can update the backups. Then use --repair.
so it's a verbose dry run, and
see if the file listing it spits out is at all useful - if it has any
of the data you're looking for.
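For reference, the dry-run invocation being described would look roughly
like this (device path and destination directory are placeholders):

```
# -D is a dry run: nothing is written; -v lists every file it would recover
btrfs restore -D -v /dev/sdX /mnt/scratch
```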
Chris Murphy
closer to the most recent generation that still has your
data.
Chris Murphy
.
Chris Murphy
_limit 0 bytes_used 16384 flags 0x0(none)
uuid ----
drop key (0 UNKNOWN.0 0) level 0
IF the whole thing is intact like this one, then you can use btrfs
restore -t to point to this tree root and it'll use it even though
it's deleted.
Anyway, i
so far).
Yeah same here, but unlike your case it completes fast for older file
systems with a decent amount of data on it. I'm not sure what the
pattern is here that results in the hang. Unfortunately strace is not
revealing I think - attached anyway.
--
Chris Murphy
root@f25h ~]
ts I'd
> have to re-write. Still, would be nice to get those back.
You might try 'btrfs check' without repairing, using a recent version
of btrfs-progs and see if it finds anything unusual.
Although, are there many snapshots? That would cause the retention of roots.
--
Chris Murphy
* I don't have a file system the same size to try it on; maybe
it's a memory-intensive task, and once the system gets low on RAM while
traversing the file system it slows down a ton.
--
Chris Murphy
ss data that is month old :( and some of it is strangely
> corrupted :(
You're leaving out a lot of information. If you have thousands of
snapshots, and one of those snapshots has the data in it you want,
then you shouldn't need to use btrfs restore. Btrfs restore is a
scraping tool to try to find d
-t using each root bytenr from btrfs-find-root. The more
recent the generation, the better your luck that it hasn't been
overwritten yet; but too recent and your data may not exist in that
root. It really depends how fast you umounted the volume after
deleting everything.
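A sketch of that workflow, with placeholder device and bytenr:

```
btrfs-find-root /dev/sdX      # lists candidate tree roots and their generations
btrfs restore -t <bytenr> -D -v /dev/sdX /mnt/scratch   # dry run against one root
```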
--
Chris Murphy
On Fri, Dec 9, 2016 at 11:16 AM, Darrick J. Wong
<darrick.w...@oracle.com> wrote:
> [adding mark fasheh (duperemove maintainer) to cc]
>
> On Fri, Dec 09, 2016 at 07:29:21AM -0500, Austin S. Hemmelgarn wrote:
>> On 2016-12-08 21:54, Chris Murphy wrote:
>> >On Thu, Dec
On Fri, Dec 9, 2016 at 6:45 AM, Swâmi Petaramesh <sw...@petaramesh.org> wrote:
> Hi Chris, thanks for your answer,
>
> On 12/09/2016 03:58 AM, Chris Murphy wrote:
>> Can you check some bigger files and see if they've become fragmented?
>> I'm seeing 1.4GiB file
ding, it's here.
https://www.spinics.net/lists/linux-btrfs/msg61304.html
But if you're seeing something similar, then it would explain why it's
so slow in your case.
--
Chris Murphy
On Thu, Dec 8, 2016 at 7:26 PM, Darrick J. Wong <darrick.w...@oracle.com> wrote:
> On Thu, Dec 08, 2016 at 05:45:40PM -0700, Chris Murphy wrote:
>> OK something's wrong.
>>
>> Kernel 4.8.12 and duperemove v0.11.beta4. Brand new file system
>> (mkfs.btrfs -dsingl
set
1361051648 (4)
[0xba8400] Dedupe 1 extents (id: 70367565) with target: (1361051648,
131072), "/mnt/test/Fedora-Workstation-Live-x86_64-25_Beta-1.1.iso2"
^C
[chris@f25s duperemove]$
Cancelled this after 10 minutes. It should not take this long to
dedupe two files.
[chris@f25s
Pretty sure it will not dedupe extents that are referenced in a read
only subvolume.
Chris Murphy
# taskset -c 0 btrfs send /mnt/first/subvol.ro/ | btrfs receive /mnt/int/
Attaching top and perf top while this send receive happens. The use of
taskset -c 0 doesn't seem to affect the results.
Chris Murphy
Samples: 51K of event 'cycles:pp', Event count (approx.): 13435020269
Overhead Shared
On Mon, Dec 5, 2016 at 8:46 AM, Chris Mason <c...@fb.com> wrote:
> On 12/04/2016 04:28 PM, Chris Murphy wrote:
>>
>> 4.8.11-300.fc25.x86_64
>>
>> I'm currently doing a btrfs send/receive and I'm seeing a rather large
>> hit for crc32c, bigger than aes-ni (th
On Sun, Dec 4, 2016 at 3:17 PM, Henk Slager <eye...@gmail.com> wrote:
> On Sun, Dec 4, 2016 at 7:30 PM, Chris Murphy <li...@colorremedies.com> wrote:
>> Hi,
>>
>> [chris@f25s ~]$ uname -r
>> 4.8.11-300.fc25.x86_64
>> [chris@f25s ~]$ rpm -q b
-intel
[chris@f25s ~]$ lsmod | grep crc
libcrc32c 16384 1 dm_persistent_data
crct10dif_pclmul 16384 0
crc32_pclmul 16384 0
crc32c_intel 24576 2
--
Chris Murphy
2.38GiB is neither shared nor
exclusive. What would that be? The total not equalling shared +
exclusive is the most common bug I see with fi du.
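The arithmetic behind that complaint is just the identity total =
shared + exclusive; with hypothetical numbers (the real figures aren't
quoted here):

```shell
# Hypothetical fi du figures, in GiB: anything the total doesn't account
# for as shared or exclusive is the residue being complained about.
gap=$(awk 'BEGIN { printf "%.2f", 30.00 - (20.12 + 7.50) }')
echo "${gap}GiB is neither shared nor exclusive"
```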
--
Chris Murphy
problem with the URL to the image included.
Looking at 4.9 there's not many qgroup.c changes, but there's a pile
of other changes, per usual. So even though the problem seems like
it's qgroup related, it might actually be some other problem that then
also triggers qgroup messages.
--
Chris Murphy
29.93GiB /mnt/int/jackson.2015/
That makes zero sense. What's going on here?
Chris Murphy
On Sat, Dec 3, 2016 at 2:46 PM, Marc Joliet <mar...@gmx.de> wrote:
> On Saturday 03 December 2016 13:42:42 Chris Murphy wrote:
>> On Sat, Dec 3, 2016 at 11:40 AM, Marc Joliet <mar...@gmx.de> wrote:
>> > Hello all,
>> >
>> > I'm having some trouble
I properly disable quota
> support?
'btrfs quota disable' is the only command that applies to this and it
requires rw mount; there's no 'noquota' mount option.
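In practice that means (mountpoint hypothetical):

```
mount -o remount,rw /mnt      # quota disable needs a writable mount
btrfs quota disable /mnt
```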
--
Chris Murphy
ith the oldest (which is where I got 4.1.36) and
see if you can find a commit that fixes the problem, and then if it
applies cleanly to the much older kernel you're using.
--
Chris Murphy
rofile
that tolerates more than one device loss is raid6; and be very
explicit that the terminology used by Btrfs's raid10 is manifestly
wrong. That is a fairly egregious violation of common terminology and
of the trust we're supposed to be developing, both in the
usage of common terms, but
d why that one is not upstream or why it
> was reverted. Looks absolutely reasonable to me.
It is upstream and hasn't been reverted.
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/fs/btrfs/volumes.c?id=refs/tags/v4.8.11
line 3650
I would try Duncan's idea of using just one
ched/
>
> use the latest BFQ git here, merge it into v4.8.y:
> https://github.com/linusw/linux-bfq/commits/bfq-v8
>
> This doesn't completely fix the dirty_ration problem, but it is far better
> than CFQ or deadline in my opinion (and experience).
There are several thread
a no space left error.
Try remounting with enospc_debug, and then trigger the problem again,
and post the resulting kernel messages.
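A sketch of that sequence (mountpoint hypothetical):

```
mount -o remount,enospc_debug /mnt
# ...reproduce the no-space error, then capture the extra kernel detail:
dmesg | tail -n 100
```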
--
Chris Murphy
than healthy
doses of skepticism and awareness of all limitations. :-D
--
Chris Murphy
he risk of a massive filesystem corruption; you cannot say it
> absolutely doesn't with the current implementation.
I can't say it absolutely doesn't even with md. Of course it
shouldn't, but users do report corruptions on all of the other fs
lists (ext4, XFS, linux-raid) from time to time that ar
On Tue, Nov 29, 2016 at 4:16 PM, Wilson Meier <wilson.me...@gmail.com> wrote:
>
>
> On 29.11.2016 23:52, Chris Murphy wrote:
>> On Tue, Nov 29, 2016 at 3:34 PM, Wilson Meier <wilson.me...@gmail.com> wrote:
>>> On 29.11.2016 18:54, Austin S. Hemmelgarn wrot
hunk, whereas conventional RAID 10 must lose both
mirrored pairs for data loss to happen.
With very cursory testing what I've found is btrfs-progs establishes
an initial stripe number to device mapping that's different than the
kernel code. The kernel code appears to be pretty consistent so long
as the
g
> allocator will be bug prone.
I tend to agree. I think the non-scalability of Btrfs raid10, which
makes it behave more like raid 0+1, is a higher priority because right
now it's misleading to say the least; and then the longer term goal
for scaleable huge file systems is how Btrfs can s
moving to one of those.
Chris Murphy
ata
integrity. Basically this is Btrfs eating at least one, possibly both,
file systems at the same time without warning, and it's not good
enough to warn in the wiki.
--
Chris Murphy
On Thu, Nov 17, 2016 at 12:20 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
> On 2016-11-17 15:05, Chris Murphy wrote:
>>
>> I think the wiki should be updated to reflect that raid1 and raid10
>> are mostly OK. I think it's grossly misleading to consider
of various notifications of device faultiness I think
make it less than OK also. It's not in the "do not use" category but
it should be in the middle ground status so users can make informed
decisions.
Chris Murphy
d b.) it has debug enabled. If you can
trigger the problem but it's still not revealing then test with
check_int which has less performance impact than check_int_data. It
looks like what you're getting is a metadata inconsistency but I'm not
certain.
--
Chris Murphy
newer, and if it breaks then it's a bug and needs a
good bug report write up so it can get fixed.
In the meantime I would be wary with this file system if it's the only
backup copy. (Actually I feel that way no matter the file system.) I'd
make sure btrfs check with progs 4.7.3 or 4.8.1 come u
t; btree space waste bytes: 386160897
> file data blocks allocated: 1269363683328
> referenced 164438126592
>
> How do I repair this?
Yeah good question. I can't tell from the message whether different
counts is a bad thing, or if it's just a notification, or what. Yet
again b
ded, while the drive is not missing. Unless it's
physically removed or somehow dead, it'll still be seen but can produce
all kinds of mayhem.
Chris Murphy
On Fri, Oct 14, 2016 at 3:38 PM, Chris Murphy <li...@colorremedies.com> wrote:
> On Fri, Oct 14, 2016 at 1:55 PM, Zygo Blaxell
> <ce3g8...@umail.furryterror.org> wrote:
>
>>
>>> And how common is RMW for metadata operations?
>>
>> RMW in metadat
It should be that -e can accept a list of all the subvolumes you want
to send at once. And possibly an -r flag, if it existed, could
automatically populate -e. But the last time I tested -e I just got
errors.
https://bugzilla.kernel.org/show_bug.cgi?id=111221
--
Chris Murphy
This may be relevant and is pretty terrible.
http://www.spinics.net/lists/linux-btrfs/msg59741.html
Chris Murphy
nce of torn or
misdirected writes, and no corruptions, in which case Btrfs checksums
aren't really helpful, you're using it for other reasons (snapshots
and what not).
Really seriously the CoW part of Btrfs being violated by all of this
RMW to me sounds like it reduces the pros of Btrfs.
--
Chris Murphy
operations?
I wonder where all of these damn strange cases come from, where people
can't do anything at all with a normally degraded raid5: one device
failed, and no other failures, but they can't mount due to a bunch of
csum errors.
Chris Murphy
o break this too much so I'm not sure.
https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive
--
Chris Murphy
>> That might go far enough back before the bad sectors were a factor.
>> Normally what you'd want is for it to use one of the backup roots, but
>> it's consistently running into a problem with all of them when using
>> recovery mount option.
>>
>
> Is th
ile smaller than 256K would be stored inline? Ouch. That would
> also imply the compressed extent size limit (currently 128K) has to become
> much larger.
There are patches to set stripe size. Does it make sense to specify a
4KiB stripe size for metadata block groups and 64+KiB for data block
to make are substantially
negated by this bug. I think the bark is worse than the bite. It is
not the bark we'd like Btrfs to have though, for sure.
--
Chris Murphy
readding btrfs
On Tue, Oct 11, 2016 at 1:00 PM, Jason D. Michaelson
<jasondmichael...@gmail.com> wrote:
>
>
>> -Original Message-
>> From: ch...@colorremedies.com [mailto:ch...@colorremedies.com] On
>> Behalf Of Chris Murphy
>> Sent: Tuesday, Octo
M chunk. After mounting and copying some data
over, then umounting, same thing. One system chunk, raid6.
So *IF* there is anything wrong with this single system chunk, then
all bets are off; there's no way to even attempt to fix the problem. That
might explain why it's not getting past the very earliest stage of
a known bad superblock would be ignored at mount time
and even by btrfs-find-root, or maybe even replaced like any other
kind of known bad metadata where good copies are available.
btrfs-show-super -f /dev/sda
btrfs-show-super -f /dev/sdh
Find out what the difference is between good and bad supers.
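For example, capturing both dumps and diffing them (output filenames
arbitrary):

```
btrfs-show-super -f /dev/sda > super-sda.txt
btrfs-show-super -f /dev/sdh > super-sdh.txt
diff -u super-sda.txt super-sdh.txt
```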
--
Chris Murphy
> 2.9.0
Before making any further changes to inline data, does it make sense
to find the source of corruption Zygo has been experiencing? That's in
the "btrfs rare silent data corruption with kernel data leak" thread.
--
Chris Murphy
What do you get for
btrfs-find-root
btrfs rescue super-recover -v
It shouldn't matter which dev you pick, unless it face plants, then try another.
of various backports, from some unknown time frame
without going and looking it up. If that's really 4.4.21, then it's
weirdly named, I don't know why any distro would do that.
In any case I would compare 4.8.1 and 4.4.24 because those two should
work and if not it's a bug that needs to get fix
either BluRay ISOs
> that I can re-rip, stuff I don't care about recovering, or stuff that I can
> always re-mirror if I have to. Given that I'm well versed in C programming,
> I'd much rather devote my time to working with the code to resolve whatever
> problem may be happ
On Sun, Oct 2, 2016 at 2:22 PM, Roman Mamedov <r...@romanrm.net> wrote:
> On Sun, 2 Oct 2016 13:29:56 -0600
> Chris Murphy <li...@colorremedies.com> wrote:
>
>> Well short of a bug, the problem aren't the checksums. The problem is
>> the metadata is wrong, so if
work, try -b --repair.
If that doesn't work then I'd probably use --init-extent-tree which,
while it's a heavy hammer, at least still isn't going to pretend bad
metadata is good which is what --init-csum-tree will end up doing.
But before all of that I'm curious what you get for:
btrfs-debug-tree -b 11185160192 /dev/sda3
btrfs-find-root /dev/sda3
--
Chris Murphy
ce. Then try again.
I'd do only one change at a time, and see at which point it
(hopefully) works. Because none of these things should be needed, but
then if they are, the error message needs to be cleared up. So I'm
kinda setting you up to collect enough information to file a bug.
--
Chris
On Tue, Sep 27, 2016 at 4:57 PM, Zygo Blaxell
<ce3g8...@umail.furryterror.org> wrote:
> On Mon, Sep 26, 2016 at 03:06:39PM -0600, Chris Murphy wrote:
>> On Mon, Sep 26, 2016 at 2:15 PM, Ruben Salzgeber
>> <ruben.salzge...@gmail.com> wrote:
>> > Hi everyone
>&
the macbook, set chattr +C on the
> timemachine folder and copy all back. This should deactivate CoW on
> all subfolders and files.
Yeah, just realize it also disables compression and checksumming.
--
Chris Murphy
On Mon, Sep 26, 2016 at 8:16 PM, Roman Mamedov <r...@romanrm.net> wrote:
> On Mon, 26 Sep 2016 15:06:39 -0600
> Chris Murphy <li...@colorremedies.com> wrote:
>
>> First question is if the directory containing the sparsebundle file
>> has xattr +C set on it?
>
bundle/bands/0: 43 extents found
backup.sparsebundle/bands/1: 1 extent found
backup.sparsebundle/bands/10: 212 extents found
backup.sparsebundle/bands/11: 1 extent found
Maybe sort -k can sort by the number of extents, and then use head to
just show the top 50 offenders.
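A sketch of that pipeline, using fabricated sample lines in place of
real output (in practice you'd feed it the filefrag listing and use
head -n 50):

```shell
# Rank files by extent count, worst first. The sample lines below are
# made up to mimic the filefrag listing quoted above.
top_offenders=$(printf '%s\n' \
    'backup.sparsebundle/bands/0: 43 extents found' \
    'backup.sparsebundle/bands/1: 1 extent found' \
    'backup.sparsebundle/bands/10: 212 extents found' \
    'backup.sparsebundle/bands/11: 1 extent found' \
  | sort -t: -k2 -rn | head -n 2)
echo "$top_offenders"
```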
--
Chris Murphy
--
it's fixed there,
then get it fixed in the kernel.
--
Chris Murphy
? They'd be
checksummed there. Somehow I think the long term approach is that
partial stripe writes, which apparently are overwrites and not CoW,
need to go away. In particular I wonder what the metadata raid56 write
pattern is, if this usually means a lot of full stripe CoW writes, or
if there are man
other case were if it were possible to
isolate what tree limbs are sick, just cut them off and report the
data loss rather than consider the whole fs unusable. That's what we
do with living things.
--
Chris Murphy
On Sun, Sep 18, 2016 at 11:28 AM, Chris Murphy <li...@colorremedies.com> wrote:
> On Sun, Sep 18, 2016 at 2:34 AM, Anand Jain <anand.j...@oracle.com> wrote:
>>
>> (updated the subject, was [1])
>>
>>> IMO the hot-spare feature makes most sense wit
*somewhere* partially
used somewhere else in a cluster fs anyway. I see hot spare as an edge
case need, especially with hard drives. It's not a general purpose
need.
--
Chris Murphy
If that's true, then it's an
explosive number of keys per subvolume potentially. It doesn't depend
on space as much as it depends on fs lifetime.
Otherwise I don't see how this is different than using a single DEK
across all company hard drives. Compromise one, you've compromised
them all.
-
On Sat, Sep 17, 2016 at 8:34 AM, Tim Walberg <twalb...@comcast.net> wrote:
> On 09/15/2016 15:18 -0600, Chris Murphy wrote:
>>> > System, single: total=4.00MiB, used=0.00B
>>> > Metadata, RAID1: total=10.00GiB, used=8.14GiB
>>> > Globa
On Fri, Sep 16, 2016 at 6:08 PM, Chris Murphy <li...@colorremedies.com> wrote:
>
> If -o recovery doesn't work, you'll need to use something newer, you
> could use one of:
>
> Fedora Rawhide nightly with 4.8rc6 kernel and btrfs-progs 4.7.2. This
> is a small netinstall
hide/Fedora-Rawhide-20160914.n.0/compose/Everything/x86_64/iso/Fedora-Everything-netinst-x86_64-Rawhide-20160914.n.0.iso.n.0.iso
Or something more official with published hashes for the image and a
GUI, Fedora 24 workstation has kernel 4.5.5 and btrfs-progs 4.5.2
https://getfedora.org/en/workstati
=y
Actually, even before that maybe if you did a 'btrfs-debug-tree /dev/sdX'
That might explode in the vicinity of the problem. Thing is, btrfs
check doesn't see anything wrong with the metadata, so chances are
debug-tree won't either.
--
Chris Murphy
If you need to get access to the computer sooner than later I suggest
btrfs-image -c9 -t4 -s to make a filename sanitized copy of the
filesystem metadata for them to look at, just in case. They might be
able to figure out the problem just from the stack trace, but better
to have the image before blowing away the file system, just in case
they want it.
--
Chris Murphy
On Thu, Sep 15, 2016 at 3:48 PM, Alexandre Poux <pums...@gmail.com> wrote:
>
> Le 15/09/2016 à 18:54, Chris Murphy a écrit :
>> On Thu, Sep 15, 2016 at 10:30 AM, Alexandre Poux <pums...@gmail.com> wrote:
>>> Thank you very much for your answers
>>>
>&
or lost, it'd take out the whole volume.
btrfs balance start -mconvert=raid1,soft
See what that gets you, and then recheck with btrfs fi df or better
use btrfs fi us.
Unfortunately I don't have an answer for the original question.
--
Chris Murphy
On Thu, Sep 15, 2016 at 2:16 PM, Hugo Mills <h...@carfax.org.uk> wrote:
> On Thu, Sep 15, 2016 at 01:02:43PM -0600, Chris Murphy wrote:
>> On Thu, Sep 15, 2016 at 12:20 PM, Austin S. Hemmelgarn
>> <ahferro...@gmail.com> wrote:
>>
>> > 2. We're developing n
s just much of this is non-obvious to users unfamiliar
with this file system. And even I'm often throwing spaghetti on a
wall.
--
Chris Murphy
On Tue, Sep 13, 2016 at 5:35 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
> On 2016-09-12 16:08, Chris Murphy wrote:
>>
>> - btrfsck status
>> e.g. btrfs-progs 4.7.2 still warns against using --repair, and lists
>> it under dangerous options also
On Thu, Sep 15, 2016 at 10:30 AM, Alexandre Poux <pums...@gmail.com> wrote:
> Thank you very much for your answers
>
> Le 15/09/2016 à 17:38, Chris Murphy a écrit :
>> On Thu, Sep 15, 2016 at 1:44 AM, Alexandre Poux <pums...@gmail.com> wrote:
>>> Is it
sing in order to get rid of the
corruption warnings on any subsequent scrub or balance.
>
> btrfs --version :
> btrfs-progs v4.7.1
You should upgrade to 4.7.2 or downgrade to 4.6.1 before doing btrfs
check. Not urgent so long as you don't actually do a repair with this
version.
--
Chris
regarding herding of
> cats comes to mind with respect to the last group.
Yeah you need the secret decoder ring to sort it out. Forget it, not worth it.
--
Chris Murphy
filenames in those subvolumes discoverable (e.g. btrfs-debug-tree,
btrfs-image) if the subvolume is not opened? And reflink handling
between subvolumes behaves how?
[1] open in the cryptsetup open/luksOpen sense
--
Chris Murphy
m in the
> following days.
Yep, completely reasonable.
--
Chris Murphy
From the fsck...
bad block 160420741120
I can't tell though if that's a bad Btrfs leaf/node where both dup
copies are bad; or if it's a bad sector.
I'd mount it ro, and take a backup of anything you care about before
proceeding further.
smartctl -x might reveal if there are problems the drive
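A conservative sequence for that (device path hypothetical):

```
mount -o ro /dev/sdX /mnt
rsync -a /mnt/ /path/to/backup/      # copy out anything you care about first
smartctl -x /dev/sdX                 # look for drive-level errors
```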
it and 4.7.1 are marked as do not use.
https://btrfs.wiki.kernel.org/index.php/Changelog#btrfs-progs-4.7.2_.28Sep_2016.29
--
Chris Murphy
I just wouldn't use btrfs repair with this version of progs, go back
to v4.6.1 or upgrade to 4.7.2. You could do an offline check (no
repair) and see if that reveals anything useful for developers. But I
can't tell what's going on from the call trace.
--
Chris Murphy
Still happens in rc6.
[ 588.463987] =
[ 588.463988] [ INFO: possible recursive locking detected ]
[ 588.463998] 4.8.0-0.rc6.git0.1.fc25.x86_64+debug #1 Tainted: GW
[ 588.463998] -
[ 588.464000]
On Mon, Sep 12, 2016 at 4:21 PM, Kai Krakow <hurikha...@gmail.com> wrote:
> Am Sun, 21 Aug 2016 02:19:33 + (UTC)
> schrieb Duncan <1i5t5.dun...@cox.net>:
>
>> Chris Murphy posted on Sat, 20 Aug 2016 18:36:21 -0600 as excerpted:
>>
>> > FAT leaves a lot
umn for kernel version, so we can add "feature X is
>> known to be OK on Linux 3.18 and later".. ? Or add those to "notes" field,
>> where applicable?
>
> That was my initial idea, and it may be better than a generic kernel version
> for all features. Even if we fill in 4.7 for any of the features that are
> known to work okay for the table.
>
> For RAID 1 I am willing to say it works stable since kernel 3.14, as this was
> the kernel I used when I switched /home and / to Dual SSD RAID 1 on this
> ThinkPad T520.
Just to cut yourself some slack, you could skip 3.14 because it's EOL
now, and just go from 4.4.
--
Chris Murphy
s, they end up with another problem.
I guess right now the only sure fire options they have:
a. Don't automount any Btrfs. (This is what I've recommended.)
b. pass --uuid as well as device= for each device in the array as
discovered by BTRFS_IOC_FS_INFO and BTRFS_IOC_DEV_INFO: while also not
mounti
on how you count there's at least 4, and more realistically
8 ways, scattered across multiple commands. This excludes btrfs
check's -E, -r, and -s flags. And it ignores sequence in the success
rate. The permutations are just excessive. It's definitely not easy to
know how to fix a Btrfs volume
en there are csum errors in kernel and/or
> initrd files? Suppose the bootloader is grub2.
I'm wondering the same thing. I don't know if GRUB's Btrfs code checks
for csum matches, and on error whether it knows to retry from some
other block group.
--
Chris Murphy
no
Incompat features: extref, skinny-metadata
Number of devices: 1
Devices:
ID  SIZE     PATH
 1  1.87GiB  /dev/sdb
So with -M, it's single by default.
--
Chris Murphy