been able to specify
> rootflags=compress=lzo in my APPEND line, this is also in /etc/fstab
> and it booted... I'm using this as evidence that it must support
> compression on a root btrfs device? Any reason to think otherwise?
I don't understand the question. What is "it" wh
parameter either. This mount option is only
needed to tell the kernel to write new extents with compression.
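For reference, a sketch of where that option typically lives (the UUID is a placeholder; this is a config fragment, not taken from the thread):

```
# /etc/fstab -- mount the root Btrfs volume with lzo compression for new writes
UUID=<fs-uuid>  /  btrfs  compress=lzo  0 0

# bootloader kernel command line (e.g. an extlinux APPEND line or GRUB_CMDLINE_LINUX)
root=UUID=<fs-uuid> rootflags=compress=lzo
```

As noted above, the option only affects newly written extents; existing uncompressed extents stay uncompressed until rewritten.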
--
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
rootflags=subvolid= and specifying the ID of the arch/root
subvolume. But I'm suspicious whether that's a problem. You could try
removing the leading / because the path to the subvolume is really
subvol=arch/root. Hmmm.
That's all I'm thinking of at the moment.
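A sketch of the two ways to select the root subvolume at boot (the ID is a placeholder to look up with `btrfs subvolume list`):

```
# kernel command line, selecting the root subvolume by path (no leading /)
rootflags=subvol=arch/root

# or by numeric ID
rootflags=subvolid=<id>    # <id> from: btrfs subvolume list /mnt
```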
Chris Murphy
--
On Mon, Jan 30, 2017 at 2:20 PM, Chris Murphy wrote:
> What people do with huge databases, which have this same problem,
> they'll take a volume snapshot. This first commits everything in
> flight, freezes the fs so no more changes can happen, then takes a
> snapshot, then unfreezes it.
On Mon, Jan 30, 2017 at 2:07 PM, Michael Born wrote:
>
>
> Am 30.01.2017 um 21:51 schrieb Chris Murphy:
>> On Mon, Jan 30, 2017 at 1:02 PM, Michael Born
>> wrote:
>>> Hi btrfs experts.
>>>
>>> Hereby I apply for the stupidity of the
while dd was
happening, so the image you've taken is inconsistent.
--
Chris Murphy
On Wed, Jan 25, 2017 at 3:58 PM, Omar Sandoval wrote:
> On Wed, Jan 25, 2017 at 03:55:33PM -0700, Chris Murphy wrote:
>> On Tue, Jan 24, 2017 at 9:42 PM, Omar Sandoval wrote:
>> > On Tue, Jan 24, 2017 at 07:53:06PM -0700, Chris Murphy wrote:
>> >> On Tue, Jan 24,
On Tue, Jan 24, 2017 at 9:42 PM, Omar Sandoval wrote:
> On Tue, Jan 24, 2017 at 07:53:06PM -0700, Chris Murphy wrote:
>> On Tue, Jan 24, 2017 at 3:50 PM, Omar Sandoval wrote:
>>
>> > Got this to repro after installing systemd-container. It's happening on
Something different about subvolumes and directories
when it comes to xattrs, and the xattr patch I found in bisect is
exposing the difference, hence things getting tripped up.
--
Chris Murphy
read only and can't write all
of what's flushed, it results in journal data loss. If the rest of the
output is useful I can switch systemd to only use volatile storage to
avoid this problem.
Chris Murphy
On Tue, Jan 24, 2017 at 1:27 PM, Omar Sandoval wrote:
> On Tue, Jan 24, 2017 at 01:24:51PM -0700, Chris Murphy wrote:
>> On Tue, Jan 24, 2017 at 1:10 PM, Omar Sandoval wrote:
>>
>> > Hm, still no luck, maybe it's a Server vs Workstation thing? I'll try
> Could you dump the contents of /{etc,run,usr/lib}/tmpfiles.d somewhere?
This is just ls -lZ for those directories, not sure how else to dump them.
https://drive.google.com/open?id=0B_2Asp8DGjJ9NFJUUUpuV2lxcG8
--
Chris Murphy
enforcing=0. So this could be an SELinux xattr that's
involved.
Chris Murphy
It's a Samsung SATA SSD.
dmesg
https://drive.google.com/open?id=0B_2Asp8DGjJ9cjhSRUJxc1k3NVE
Chris Murphy
p6, and then added p6 to p4. Hence p4 and p6 are the same Btrfs
volume (single profile for metadata and data).
--
Chris Murphy
un to /var, flushing the journal
to disk within about 2 seconds of the first Btrfs error. And systemd
does chattr +C on its logs by default now (and I think it's not user
changeable behavior so I can't test if it's related).
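If you want to check the No_COW flag yourself, a quick way (the path is the journald default; adjust as needed):

```shell
# 'C' in the attribute flags means No_COW (chattr +C) is set on the journal files
lsattr /var/log/journal/*/system.journal
```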
Chris Murphy
On Tue, Jan 24, 2017 at 10:49 AM, Omar Sandoval wrote:
> On Mon, Jan 23, 2017 at 08:51:24PM -0700, Chris Murphy wrote:
>> On Mon, Jan 23, 2017 at 5:05 PM, Omar Sandoval wrote:
>> > Thanks! Hmm, okay, so it's coming from btrfs_update_delayed_inode()...
>>
2-dmesg.log
https://drive.google.com/open?id=0B_2Asp8DGjJ9UnNSRXpualprWHM
--
Chris Murphy
On Mon, Jan 23, 2017 at 3:04 PM, Omar Sandoval wrote:
> On Mon, Jan 23, 2017 at 02:55:21PM -0700, Chris Murphy wrote:
>> On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
>> > I haven't found the commit for that patch, so maybe it's something
>> > with the combi
On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
> I haven't found the commit for that patch, so maybe it's something
> with the combination of that patch and the previous commit.
I think that's provably not the case based on the bisect log, because
I hit the problem with ker
On Mon, Jan 23, 2017 at 2:31 PM, Omar Sandoval wrote:
> On Wed, Jan 18, 2017 at 02:27:13PM -0700, Chris Murphy wrote:
>> On Wed, Jan 11, 2017 at 4:13 PM, Chris Murphy
>> wrote:
>> > Looks like there's some sort of xattr and Btrfs interaction happening
>> >
OK so all of these pass original check, but have problems reported by
lowmem. Separate notes about each inline.
~500MiB each, these three are data volumes, first two are raid1, third
one is single.
https://drive.google.com/open?id=0B_2Asp8DGjJ9Z3UzWnFKT3A0clU
https://drive.google.com/open?id=0B_2A
> Idiot me. I always forget that.
>
Well, in your defense the error message is misleading.
Chris Murphy
V3pNS212Yk0
There are 4 more file systems, 2 are single device, 2 are two device
(raid1) fs's. I'm not sure how to use this indirect btrfs-image
approach with raid1.
Chris Murphy
lowmem mode is still not the default option.
>
> The problem of original mode is, if you're checking a TB-level fs with only 2
> or 4G of RAM, it's quite possible you'll run out of memory and won't be able
> to check the fs at all, which is more severe than annoying.
Fair enough.
situation
on Btrfs which is already incredibly more complicated and unhelpful
than any other file system.
btrfs-progs 4.9.
kernels vary from 4.4 to 4.10-rc4, but one fs is only a month old
having used 4.7 and 4.8 kernels; and another one just 4.10-rc3 as I
said.
--
Chris Murphy
On Thu, Jan 19, 2017 at 12:15 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Chris Murphy posted on Wed, 18 Jan 2017 14:30:28 -0700 as excerpted:
>
>> On Wed, Jan 18, 2017 at 2:07 PM, Jon wrote:
>>> So, I had a raid 1 btrfs system setup on my laptop. Recently I upgraded
t.
>
> I used raid 1 because I figured that if one drive failed I could simply
> use the other. This recovery scenario makes me think this is incorrect.
> Am I misunderstanding btrfs raid? Is there a process to go through for
> mounting single member of a raid pool?
mount -
On Wed, Jan 11, 2017 at 4:13 PM, Chris Murphy wrote:
> Looks like there's some sort of xattr and Btrfs interaction happening
> here; but as it only happens with some subvolumes/snapshots not all
> (but 100% consistent) maybe the kernel version at the time the
> snapshot was
fixed since 4.7.3
although I don't know off hand if this particular bug is fixed. I did
recently do a btrfs-image with btrfs-progs v4.9 with -s and did not
get a segfault.
--
Chris Murphy
member devices for you regardless of which
device you point it to, and checks the whole file system. It's not
necessary to run it on each device.
'btrfs check --mode=lowmem' might find the problem but I don't think
it can fix anything still.
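A sketch of the invocation (device name is illustrative, and the filesystem should be unmounted):

```shell
# lowmem mode walks metadata incrementally instead of caching it all in RAM;
# point it at any one member device and it checks the whole filesystem
btrfs check --mode=lowmem /dev/sdX    # read-only report; this mode does not repair
```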
Chris Murphy
commit: [6c6ef9f26e598fb977f60935e109cd5b266c941a] xattr:
Stop calling {get,set,remove}xattr inode operations
btrfs-image produces a 159M file.
I've updated the bug report
https://bugzilla.kernel.org/show_bug.cgi?id=191761 and also adding
patch author to cc.
Chris Murphy
On Mon, Jan 2, 2017 at 11:50 AM, Chris Murphy wrote:
> Attempt 2. The original post isn't showing up on spinics, just my followup.
>
>
> The Problem: The file system goes read-only soon after startup. It
> only happens with a particular subvolume used for root fs, and only
>
' because the fs resize is implied.
First it's resized/grown with add; and then it's resized/shrink with
remove. For replace there's a consolidation of steps, it's been a
while since I've looked at the code so I can't tell you what steps it
skips, what the stat
to use.
> 4.7, 4.8 or 4.9 series?
>
> What are your experiences and recommendations?
I'm using 4.8.15. With 4.9 and 4.10 I'm seeing a regression where a
volume goes read only for inexplicable reasons. Both btrfs check and
scrub show no problems.
http://www.spinics.net/lists/linux-b
the btrfs du command run on). But in the original home,
this is home/chris/gits which is a subvolume that's not empty.
--
Chris Murphy
On Mon, Jan 2, 2017 at 2:05 PM, Chris Murphy wrote:
> On Mon, Jan 2, 2017 at 2:02 PM, Jeff Mahoney wrote:
>> On 1/2/17 4:55 AM, Andrei Borzenkov wrote:
>>> I try to understand what exactly is trimmed in case of btrfs. Using
>>> installation in QEMU I see that
/
[sudo] password for chris:
/: 48.8 GiB (52393148416 bytes) trimmed
[chris@f25h ~]$
Otherwise, it should have trimmed ~52GiB.
--
Chris Murphy
errno=-2 No such entry
[ 34.789000] BTRFS info (device nvme0n1p4): delayed_refs has NO entry
--
Chris Murphy
On Sat, Dec 31, 2016 at 12:44 PM, Chris Murphy wrote:
> Problem: The file system (root fs) goes read-only soon after login. It
> only happens with a particular subvolume, and only with kernel 4.9.0.
> If I use kernel 4.9.0 on a different subvolume the problem doesn't
> happen
On Wed, Dec 21, 2016 at 2:09 PM, Chris Murphy wrote:
> What about CONFIG_BTRFS_FS_CHECK_INTEGRITY? And then using check_int
> mount option?
This slows things down, and in that case it might avoid the problem if
it's the result of a race condition.
--
Chris Murphy
What about CONFIG_BTRFS_FS_CHECK_INTEGRITY? And then using check_int
mount option?
Chris Murphy
isn't handling the problem on-disk
gracefully. Maybe repair will fix it. Another possibility is to move
to kernel 4.9.0 and see if it still reproduces. Per usual, there's a
ton of bug fixes in each kernel release.
Otherwise I'm out of ideas.
Chris Murphy
> expected to handle, or would I be better off recreating the file system and
> restoring from
> either my saved btrfs send archives or the more reliable backups?
It might be that usebackuproot,ro mount will happen faster, and you
can update the backups. Then use --repair. It's still
with the -D and -v options so it's a verbose dry run, and
see if the file listing it spits out is at all useful - if it has any
of the data you're looking for.
Chris Murphy
can get closer to the most recent generation that still has your
data.
Chris Murphy
has more information than this. I'm used to seeing
dozens or more.
Chris Murphy
439
generation 4 root_dirid 256 bytenr 4358144 level 0 refs 1
lastsnap 0 byte_limit 0 bytes_used 16384 flags 0x0(none)
uuid ----
drop key (0 UNKNOWN.0 0) level 0
IF the whole thing is intact like this one, then you can use btrfs
restore.
tes I tried (so far).
Yeah same here, but unlike your case it completes fast for older file
systems with a decent amount of data on it. I'm not sure what the
pattern is here that results in the hang. Unfortunately strace is not
revealing I think - attached a
just a few rogue scripts I'd
> have to re-write. Still, would be nice to get those back.
You might try 'btrfs check' without repairing, using a recent version
of btrfs-progs and see if it finds anything unusual.
Although, are there many snapshots? That would cause the retention
us to me but,
*shrug* I don't have a file system the same size to try it on, maybe
it's a memory intensive task and once the system gets low on RAM while
traversing the file system it slows down a ton.
--
Chris Murphy
is the only line that physically works with
> btrfs restore /dev/sda /mnt2/temp2/ -x -m -S -s -i -t
That particular
>
>
> as you can see there is a vast "generation hole" between 184530 and
> 184334 ... more 200 generations ... and this results in only being
> able to acces
h root bytenr from btrfs-find-root. The more
recent the generation, the better your luck that it hasn't been
overwritten yet; but too recent and your data may not exist in that
root. It really depends how fast you umounted the volume after
deleting everything.
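Putting those steps together, a rough workflow (device and bytenr are placeholders):

```shell
# 1. list candidate tree roots and their generations
btrfs-find-root /dev/sdX

# 2. dry run against a candidate bytenr: -D lists what would be restored
btrfs restore -t <bytenr> -D -v /dev/sdX /mnt/recovery/

# 3. if the listing shows your data, restore for real onto another filesystem
btrfs restore -t <bytenr> /dev/sdX /mnt/recovery/
```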
--
Chris Murphy
--
On Fri, Dec 9, 2016 at 11:16 AM, Darrick J. Wong
wrote:
> [adding mark fasheh (duperemove maintainer) to cc]
>
> On Fri, Dec 09, 2016 at 07:29:21AM -0500, Austin S. Hemmelgarn wrote:
>> On 2016-12-08 21:54, Chris Murphy wrote:
>> >On Thu, Dec 8, 2016 at 7:26 PM, Dar
On Fri, Dec 9, 2016 at 6:45 AM, Swâmi Petaramesh wrote:
> Hi Chris, thanks for your answer,
>
> On 12/09/2016 03:58 AM, Chris Murphy wrote:
>> Can you check some bigger files and see if they've become fragmented?
>> I'm seeing 1.4GiB files with 2-3 extents reported
I'm finding, it's here.
https://www.spinics.net/lists/linux-btrfs/msg61304.html
But if you're seeing something similar, then it would explain why it's
so slow in your case.
--
Chris Murphy
On Thu, Dec 8, 2016 at 7:26 PM, Darrick J. Wong wrote:
> On Thu, Dec 08, 2016 at 05:45:40PM -0700, Chris Murphy wrote:
>> OK something's wrong.
>>
>> Kernel 4.8.12 and duperemove v0.11.beta4. Brand new file system
>> (mkfs.btrfs -dsingle -msingle, default mount opt
t offset
1361051648 (4)
[0xba8400] Dedupe 1 extents (id: 70367565) with target: (1361051648,
131072), "/mnt/test/Fedora-Workstation-Live-x86_64-25_Beta-1.1.iso2"
^C
[chris@f25s duperemove]$
Cancelled this after 10 minutes. It should not take this long to
dedupe two files.
[chris
Pretty sure it will not dedupe extents that are referenced in a read
only subvolume.
Chris Murphy
# taskset -c 0 btrfs send /mnt/first/subvol.ro/ | btrfs receive /mnt/int/
Attaching top and perf top while this send receive happens. The use of
taskset -c 0 doesn't seem to affect the results.
Chris Murphy
Samples: 51K of event 'cycles:pp', Event count (approx.): 13435020269
On Mon, Dec 5, 2016 at 8:46 AM, Chris Mason wrote:
> On 12/04/2016 04:28 PM, Chris Murphy wrote:
>>
>> 4.8.11-300.fc25.x86_64
>>
>> I'm currently doing a btrfs send/receive and I'm seeing a rather large
>> hit for crc32c, bigger than aes-ni (th
On Sun, Dec 4, 2016 at 3:17 PM, Henk Slager wrote:
> On Sun, Dec 4, 2016 at 7:30 PM, Chris Murphy wrote:
>> Hi,
>>
>> [chris@f25s ~]$ uname -r
>> 4.8.11-300.fc25.x86_64
>> [chris@f25s ~]$ rpm -q btrfs-progs
>> btrfs-progs-4.8.5-1.fc26.x86_64
>>
>
Btrfs loaded, crc32c=crc32c-intel
[chris@f25s ~]$ lsmod | grep crc
libcrc32c 16384 1 dm_persistent_data
crct10dif_pclmul 16384 0
crc32_pclmul 16384 0
crc32c_intel 24576 2
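As an aside on how these checksums behave: Btrfs uses crc32c (Castagnoli), which Python's standard library doesn't expose, but zlib's plain crc32 is enough to sketch the streaming property that lets a checksum be carried across chunked reads. This is an illustrative aside under that substitution, not the kernel's code path.

```python
import zlib

data = b"some extent worth of bytes"

# checksum in one shot
whole = zlib.crc32(data)

# checksum the same bytes in two pieces, chaining the running value;
# CRCs compose this way, which is what makes incremental checksumming cheap
running = zlib.crc32(data[:10])
running = zlib.crc32(data[10:], running)

assert running == whole
print("streaming crc matches one-shot crc")
```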
--
Chris Murphy
2.38GiB is neither shared nor
exclusive. What would that be? The total not equalling shared +
exclusive is the most common bug I see with fi du.
--
Chris Murphy
developers to look at; put it somewhere like Google Drive or wherever
they can grab it. And put the URL in this thread, and/or also file a
bug about this problem with the URL to the image included.
Looking at 4.9 there's not many qgroup.c changes, but there's a pile
of other changes, per usual. So even though the problem seems like
it's qgroup related, it might actually be some other problem that then
also triggers qgroup messages.
--
Chris Murphy
GiB 37.20GiB 29.93GiB /mnt/int/jackson.2015/
That makes zero sense. What's going on here?
Chris Murphy
On Sat, Dec 3, 2016 at 2:46 PM, Marc Joliet wrote:
> On Saturday 03 December 2016 13:42:42 Chris Murphy wrote:
>> On Sat, Dec 3, 2016 at 11:40 AM, Marc Joliet wrote:
>> > Hello all,
>> >
>> > I'm having some trouble with btrfs on a laptop, possibly due
matting: how do I properly disable quota
> support?
'btrfs quota disable' is the only command that applies to this and it
requires rw mount; there's no 'noquota' mount option.
--
Chris Murphy
s with longterm kernel.org
kernels, starting with the oldest (which is where I got 4.1.36) and
see if you can find a commit that fixes the problem, and then if it
applies cleanly to the much older kernel you're using.
--
Chris Murphy
tolerates more than one device loss is raid6; and be very
explicit about the manifestly wrong terminology used by Btrfs's
raid10. That is a fairly egregious violation of common
terminology and of the trust we're supposed to be developing, both in the
usage of common terms, but also in B
reverted. Looks absolutely reasonable to me.
It is upstream and hasn't been reverted.
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/fs/btrfs/volumes.c?id=refs/tags/v4.8.11
line 3650
I would try Duncan's idea of using just one filter and seeing what happen
>
> use the latest BFQ git here, merge it into v4.8.y:
> https://github.com/linusw/linux-bfq/commits/bfq-v8
>
> This doesn't completely fix the dirty_ration problem, but it is far better
> than CFQ or deadline in my opinion (and experience).
There are several threads over the
a no space left error.
Try remounting with enospc_debug, and then trigger the problem again,
and post the resulting kernel messages.
--
Chris Murphy
es multiple disk failures and 2 bad
> PSU's plus about a dozen (not BTRFS related) kernel panics and 4 unexpected
> power loss events. I also have exhaustive monitoring, so I'm replacing bad
> hardware early instead of waiting for it to actually fail.
Possibly nothing aids predictab
> RAID1 member should never put your
> system under the risk of a massive filesystem corruption; you cannot say it
> absolutely doesn't with the current implementation.
I can't say it absolutely doesn't even with md. Of course it
shouldn't, but users do report corruptions on a
On Tue, Nov 29, 2016 at 4:16 PM, Wilson Meier wrote:
>
>
> On 29.11.2016 23:52, Chris Murphy wrote:
>> On Tue, Nov 29, 2016 at 3:34 PM, Wilson Meier wrote:
>>> On 29.11.2016 18:54, Austin S. Hemmelgarn wrote:
>>>> On 2016-11-29 12:20, Florian Lindner wrot
copies of a chunk, whereas conventional RAID 10 must lose both
members of one mirrored pair for data loss to happen.
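That difference can be made concrete with a toy model (my sketch, not Btrfs code): assume a Btrfs-style raid10 places chunk copies across arbitrary device pairs, so with enough chunks any two device losses hit both copies of something, while conventional RAID 10 only loses data when both members of one fixed mirrored pair fail.

```python
from itertools import combinations

def btrfs_raid10_survives(failed):
    # Toy model: chunk mirror pairings vary per chunk, so with many chunks
    # some chunk has both copies on ANY given pair of devices; losing any
    # two devices is therefore assumed fatal.
    return len(failed) < 2

def conventional_raid10_survives(failed, pairs):
    # Conventional raid10: data is lost only if both members of one
    # fixed mirrored pair fail.
    return not any(a in failed and b in failed for a, b in pairs)

devices = range(6)
pairs = [(0, 1), (2, 3), (4, 5)]  # fixed mirror pairs for conventional raid10

two_disk_failures = list(combinations(devices, 2))
btrfs_fatal = sum(not btrfs_raid10_survives(set(f)) for f in two_disk_failures)
conv_fatal = sum(not conventional_raid10_survives(set(f), pairs)
                 for f in two_disk_failures)

print(f"{len(two_disk_failures)} two-disk failure cases")  # 15
print(f"btrfs-style raid10 fatal cases: {btrfs_fatal}")    # 15
print(f"conventional raid10 fatal cases: {conv_fatal}")    # 3
```

So in this 6-device model every one of the 15 two-disk failures is fatal under the btrfs-style assumption, versus only 3 for conventional RAID 10.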
With very cursory testing what I've found is btrfs-progs establishes
an initial stripe number to device mapping that's different than the
kernel code. The kernel code appears to be pretty cons
> allocator will be bug prone.
I tend to agree. I think the non-scalability of Btrfs raid10, which
makes it behave more like raid 0+1, is a higher priority because right
now it's misleading to say the least; and then the longer term goal
for scaleable huge file systems is how Btrfs can shed irr
pdates-testing, I suggest moving to one of those.
Chris Murphy
Btrfs eating at least one, possibly both,
file systems at the same time without warning, and it's not good
enough to warn in the wiki.
--
Chris Murphy
On Thu, Nov 17, 2016 at 12:20 PM, Austin S. Hemmelgarn
wrote:
> On 2016-11-17 15:05, Chris Murphy wrote:
>>
>> I think the wiki should be updated to reflect that raid1 and raid10
>> are mostly OK. I think it's grossly misleading to consider either as
>> green/OK w
The lack of various notifications of device faultiness, I think,
makes it less than OK also. It's not in the "do not use" category but
it should be in the middle ground status so users can make informed
decisions.
Chris Murphy
major changes: a.)
it's a kernel.org kernel and b.) it has debug enabled. If you can
trigger the problem but it's still not revealing then test with
check_int which has less performance impact than check_int_data. It
looks like what you're getting is a metadata inconsistency but I'm not
c
wer than 4.4, so you really ought
to use something newer, and if it breaks then it's a bug and needs a
good bug report write up so it can get fixed.
In the meantime I would be wary with this file system if it's the only
backup copy. (Actually I feel that way no matter the file system.
bytes: 386160897
> file data blocks allocated: 1269363683328
> referenced 164438126592
>
> How do I repair this?
Yeah good question. I can't tell from the message whether different
counts is a bad thing, or if it's just a notification, or what. Yet
again btrfs-progs does not help
effectively degraded, while the drive is not missing. Unless it's
physical removed or somehow dead, it'll still be seen but can produce
all kinds of mayhem.
Chris Murphy
On Fri, Oct 14, 2016 at 3:38 PM, Chris Murphy wrote:
> On Fri, Oct 14, 2016 at 1:55 PM, Zygo Blaxell
> wrote:
>
>>
>>> And how common is RMW for metadata operations?
>>
>> RMW in metadata is the norm. It happens on nearly all commits--the only
>> except
It should be -e can accept a listing of all the subvolumes you want to
send at once. And possibly an -r flag, if it existed, could
automatically populate -e. But the last time I tested -e I just got
errors.
https://bugzilla.kernel.org/show_bug.cgi?id=111221
--
Chris Murphy
This may be relevant and is pretty terrible.
http://www.spinics.net/lists/linux-btrfs/msg59741.html
Chris Murphy
writes, and no corruptions, in which case Btrfs checksums
aren't really helpful, you're using it for other reasons (snapshots
and what not).
Really seriously the CoW part of Btrfs being violated by all of this
RMW to me sounds like it reduces the pros of Btrfs.
--
Chris Murphy
is RMW for
metadata operations?
I wonder where all of these damn strange cases come from, where
people can't do anything at all with a normally degraded raid5: one
device failed and no other failures, but they can't mount due to a
bunch of csum errors.
Chris Murphy
> reference a file that didn't exist, but what if the referenced file is
> there but contains different data? Are there checks for this sort of
> thing, or is it always assumed that the parent subvols are identical and
> if they're not, you're in undefined behavior land?
I h
above, you could see if you can take a
btrfs-image -t4 -c9 -s, and also btrfs-debug-tree and output to a file
somewhere. Maybe then it's a useful donation image for making the
tools better.
>
>
>> That might go far enough back before the bad sectors were a factor.
>> Norma
stored inline? Ouch. That would
> also imply the compressed extent size limit (currently 128K) has to become
> much larger.
There are patches to set stripe size. Does it make sense to specify a
4KiB stripe size for metadata block groups and 64+KiB for data block
groups?
--
Chris Murphy
guarantees Btrfs is purported to make are substantially
negated by this bug. I think the bark is worse than the bite. It is
not the bark we'd like Btrfs to have though, for sure.
--
Chris Murphy
Re-adding the btrfs list.
On Tue, Oct 11, 2016 at 1:00 PM, Jason D. Michaelson
wrote:
>
>
>> -Original Message-
>> From: ch...@colorremedies.com [mailto:ch...@colorremedies.com] On
>> Behalf Of Chris Murphy
>> Sent: Tuesday, October 11, 2016 12:41 PM
>> To: Jas
SYSTEM|RAID6 chunk.
There is no single SYSTEM chunk. After mounting and copying some data
over, then umounting, same thing. One system chunk, raid6.
So *if* there is anything wrong with this single system chunk, all
bets are off, with no way to even attempt to fix the problem. That
might expla
Ideally the bad superblock would be ignored at mount time
and even by btrfs-find-root, or maybe even replaced like any other
kind of known bad metadata where good copies are available.
btrfs-show-super -f /dev/sda
btrfs-show-super -f /dev/sdh
Find out what the difference is between good and bad supers.
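One way to do that comparison directly (same two devices as above):

```shell
# field-by-field diff of the two superblock dumps
diff <(btrfs-show-super -f /dev/sda) <(btrfs-show-super -f /dev/sdh)
```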
--
Chris Murphy
ne data, does it make sense
to find the source of corruption Zygo has been experiencing? That's in
the "btrfs rare silent data corruption with kernel data leak" thread.
--
Chris Murphy
What do you get for
btrfs-find-root
btrfs rescue super-recover -v
It shouldn't matter which dev you pick, unless it face plants, then try another.