/0x650 [btrfs]
> [ 88.836083] BTRFS: Transaction aborted (error -95)
Fits this thread, which suggests you're using btrfs-progs 4.6 or newer.
The call trace breakdown:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg57565.html
The evaluation:
http://www.mail-archive.com/linux-btrfs
mount option rather
than the block device? I'd think this is better no matter what the
file system is, but certainly with Btrfs and possibly ZFS. That makes
the device discovery problem not systemd's problem to hassle with.
Chris Murphy
On Wed, May 3, 2017 at 11:05 AM, Goffredo Baroncelli
chattr > qemu-image; or if you'd have qemu-img
create a new one with -o nocow=on and then use the dd command.
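A minimal sketch of both routes (image names and the 40G size are made up; the +C attribute only takes effect if set while the file is still empty, and this only matters on Btrfs):

```shell
# Route 1: create the destination empty, mark it nocow, then fill it with dd.
touch qemu-image.raw
chattr +C qemu-image.raw
dd if=old-image.raw of=qemu-image.raw bs=1M

# Route 2: let qemu-img set nocow at creation time (qcow2 driver option).
qemu-img create -f qcow2 -o nocow=on new-image.qcow2 40G
```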
--
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Mon, May 1, 2017 at 9:23 PM, Marc MERLIN <m...@merlins.org> wrote:
> Hi Chris,
>
> Thanks for the reply, much appreciated.
>
> On Mon, May 01, 2017 at 07:50:22PM -0600, Chris Murphy wrote:
>> What about btrfs check (no repair), without and then also with --mode=lowmem
uch a case.
Chris Murphy
rrors in the
meantime; maybe a dev will have some idea what's going on from the
call traces. I'm not sure what they mean.
--
Chris Murphy
st
I *think* that's what's going on.
You'd need to use -v or possibly -vv with both the send and receive
commands to get more verbose output and maybe see whether it's the
send or receive side having the problem. I'm gonna guess it's the
receive side.
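For reference, a hedged sketch of what that invocation looks like (paths are placeholders):

```shell
# -v on both the send and receive side; use -vv for even more detail.
btrfs send -v /mnt/src/snapshot | btrfs receive -v /mnt/dst/
```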
--
Chris Murphy
-- ./Documents
--- ./Music
--- ./Pictures
--- ./Videos
c-- ./tmp ##enable compression
--- ./Applications
C-- ./hello.qcow2 ##this is nocow
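One way to produce attributes like the listing above (a sketch; only meaningful on Btrfs, and +C must be set before the file has any data):

```shell
chattr +c ./tmp           # 'c': compress files created under this directory
chattr +C ./hello.qcow2   # 'C': nocow; set while the file is still empty
lsattr -d ./tmp ./hello.qcow2
```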
--
Chris Murphy
On Sat, Apr 29, 2017 at 2:46 AM, Christophe de Dinechin
<dinec...@redhat.com> wrote:
>
>> On 28 Apr 2017, at 22:09, Chris Murphy <li...@colorremedies.com> wrote:
>>
>> On Fri, Apr 28, 2017 at 3:10 AM, Christophe de Dinechin
>> <dinec...@redhat.com>
ing fallocated raw files with chattr +C
applied. And these days I'm just using LVM thin volumes. The journaled
file systems in a guest cause a ton of backing file fragmentation
unless nocow is used on Btrfs. I've seen hundreds of thousands of
extents for a single backing file for a Windows
les on Btrfs for backing. And it's resulting in
problems...
--
Chris Murphy
f what to do about it,
and that may mean short term and long term. I can't speak for systemd
developers but if there's a different way to write to the journals
that'd be better for Btrfs and no worse for ext4 and XFS, it might be
considered.
--
Chris Murphy
d only if in between snapshots the file grows by a little
> less than 8MiB or by substantially more.
Just to be clear, none of my own examples involve journals being
snapshotted. There are no shared extents for any of those files.
--
Chris Murphy
a separate subvolume to prevent COW from ever happening
inadvertently.
The same behavior happens with NTFS in qcow2 files. They quickly end
up with 100,000+ extents unless nocow is set. It's like the worst-case
scenario.
--
Chris Murphy
a 1.27 ratio, to 5.70. Here are the results
after catting that file:
https://da.gd/rE8KT
https://da.gd/PD5qI
--
Chris Murphy
10,
> macOS, Fedora25, Ubuntu 16.04).
I do run VMs quite often with all of my setups, but rarely two
concurrently and never three or more. So, hmmm. And are the VMs
backed by a qemu image on Btrfs? Or LVM?
--
Chris Murphy
ne using gmail and some other agents see it as
spam because of DMARC failure. Basically, rarcoa.com is configured to
tell receiving servers to reject (re)sent emails; they can only be sent
from raroa.com. Anyway, I think this is supposed to be fixed in
mailing list servers; they need to strip these headers
m
in a device failure.
sudo btrfs balance start -mconvert=raid1,soft
--
Chris Murphy
the scene that is actually managing the snapshotting for
use with a new container, so it'd need to become quota aware.
Chris Murphy
so
it's fail safe.
--
Chris Murphy
Just stop referencing the extra copy.
Basically right now it's doing a balance first, then convert. There's
no efficiency option to just convert via a prune only.
--
Chris Murphy
Is the file system created with no-holes?
Chris Murphy
12MB metadata chunk and it
doesn't ever get bigger than this.
Anyway, I think ssd mount option still sounds plausibly useful. What
I'm skeptical of on SSD is defragmenting without compression, and also
nocow.
--
Chris Murphy
n just an in memory correction. But still,
they're different messages for the same problem and the auto healing.
--
Chris Murphy
To inhibit chattr +C on systemd-journald journals:
- manually remove the attribute on /var/log/journal and
/var/log/journal/
- write an empty file:
/etc/tmpfiles.d/journal-nocow.conf
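Spelled out as commands (a sketch; which subdirectories exist under /var/log/journal depends on the system, and the empty file masks the shipped tmpfiles.d rule of the same name):

```shell
# 1) Clear the attribute so it stops being inherited by new journal files:
chattr -C /var/log/journal
# (repeat for the subdirectory under /var/log/journal)

# 2) Mask the tmpfiles.d rule that re-applies +C:
touch /etc/tmpfiles.d/journal-nocow.conf
```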
Chris Murphy
On Mon, Apr 17, 2017 at 1:26 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
> On 2017-04-17 14:34, Chris Murphy wrote:
>> Nope. The first paragraph applies to NVMe machine with ssd mount
>> option. Few fragments.
>>
>> The second paragraph applies to SD C
y 4KiB block sizes that then are
>> fsync'd.
>>
>> It's almost like we need these things to not fsync at all, and just
>> rely on the filesystem commit time...
>
> Essentially yes, but that causes all kinds of other problems.
Drat.
--
Chris Murphy
On Mon, Apr 17, 2017 at 12:07 PM, Liu Bo <bo.li@oracle.com> wrote:
> On Mon, Apr 17, 2017 at 11:36:17AM -0600, Chris Murphy wrote:
>> HI,
>>
>>
>> /dev/nvme0n1p8 on / type btrfs
>> (rw,relatime,seclabel,ssd,space_cache,subvolid=258,subvol=/root)
&
; and the file does
compress also. I thought nocow files were always no-compress, but it
seems there are exceptions.
Chris Murphy
ions that grow the database file from within and at the end.
If this is made cow, the file will absolutely fragment a ton. And
especially if the changes are mostly 4KiB block sizes that then are
fsync'd.
It's almost like we need these things to not fsync at all, and just
rely on the filesystem commit time.
On Sat, Apr 15, 2017 at 6:41 PM, Hans van Kranenburg
<hans.van.kranenb...@mendix.com> wrote:
> On 04/15/2017 10:35 PM, Chris Murphy wrote:
>> I have a performance test based on doing a bunch of system updates
>> (~100 rpm packages), and even with the same mount options,
On Sat, Apr 15, 2017 at 12:50 PM, Adam Borowski <kilob...@angband.pl> wrote:
> On Sat, Apr 15, 2017 at 12:41:14PM -0600, Chris Murphy wrote:
>> On Sat, Apr 15, 2017 at 12:31 PM, Adam Borowski <kilob...@angband.pl> wrote:
>> > On Sat, Apr 15, 2017 at 12:17:
On Sat, Apr 15, 2017 at 12:31 PM, Adam Borowski <kilob...@angband.pl> wrote:
> On Sat, Apr 15, 2017 at 12:17:25PM -0600, Chris Murphy wrote:
>> I don't understand this:
>>
>> /dev/mmcblk0p3 on / type btrfs
>> (rw,noatime,seclabel,compress=zlib,nossd,ssd_spread,sp
options which would
seem to be a contradiction. At least it's confusing. So... now what?
kernel 4.10.8-200.fc25.x86_64
--
Chris Murphy
On Sat, Apr 15, 2017 at 12:14 AM, Andrei Borzenkov <arvidj...@gmail.com> wrote:
> 12.04.2017 20:21, Chris Murphy пишет:
>> btrfs-map-logical is the tool that will convert logical to physical
>> and also give what device it's on; but the device notation is copy 1
>> and c
Can you ro mount it and:
btrfs fi show /mnt
btrfs fi df /mnt
And then update btrfs-progs to something newer, like 4.9.2 or
4.10.2, and do another 'btrfs check' without repair. And then
separately do it again with --mode=lowmem and post both sets of
results?
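The requested sequence, as a sketch (device and mountpoint are placeholders; the checks run against the unmounted device):

```shell
mount -o ro /dev/sdX /mnt
btrfs fi show /mnt
btrfs fi df /mnt
umount /mnt

# With btrfs-progs 4.9.2 / 4.10.2 or newer:
btrfs check /dev/sdX
btrfs check --mode=lowmem /dev/sdX
```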
Chris Murphy
On Fri, Apr 14, 2017 at 10:46 AM, Chris Murphy <li...@colorremedies.com> wrote:
>
> The passive repair works when it's a few bad sectors on the drive. But
> when it's piles of missing data, this is the wrong mode. It needs a
> limited scrub or balance to fix things. Right now yo
fe option you have is read-only degraded until you have the
resources to make an independent copy. The more you change this
volume, the more likely it is irrecoverable and there will be data
loss.
--
Chris Murphy
btrfs-map-logical is the tool that will convert a logical address to a
physical one and also report which device it's on; but the device
notation is copy 1 and copy 2, so you have to infer which device that
is; it's not explicit.
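Invocation sketch (the logical address is a placeholder, e.g. one taken from a scrub or dmesg error):

```shell
# Prints the physical offset for each copy of the given logical address.
btrfs-map-logical -l 123456789 /dev/sdX
```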
Chris Murphy
n the drive. So it depends on what the original poster is trying to
discover.
--
Chris Murphy
doesn't come can be assumed as metadata. I don't really
> care what is inside the metadata, I just want to know their
> offset-lengths in the file system.
btrfs-debug-tree contains this information in human readable form.
There's also btrfs-heatmap
https://github.com/knorrie/btrfs-heatma
and
dropped the data once power was cut to the drive. I'd check if there
are firmware updates for the drive.
Curious about:
btrfs rescue super-recover -v
btrfs check
--
Chris Murphy
ng them one at a time is more
expensive in total time than deleting them in one whack. But I've
never deleted 100 or more at once.
--
Chris Murphy
ties to the underlying storage. So you'd setup ceph
volume "mail" to be backed in order by brick B then A.
Not very well known, but XFS will parallelize across drives in a
linear/concat arrangement; it's quite useful for e.g. busy mail
servers.
--
Chris Murphy
act have better data safety than Btrfs raid10 because it
is possible to lose more than one drive without data loss. You can
only lose drives on one side of the mirroring, however. This is a
conventional raid0+1, so it's not as scalable as raid10 when it comes
to rebuild time.
--
Chris Murphy
l distributed across all of the drives, and that in effect means you
can only lose 1 drive. If you lose a 2nd drive, some amount of
metadata and data will have been lost.
--
Chris Murphy
h should be IEC units, so I'd expect
it to be closer to the value of btrfs fi us than this. But the code
has changed over time, I'm not sure when the last adjustment was made.
--
Chris Murphy
value of 21.45T, plus some extra for metadata which is
also 2x.
--
Chris Murphy
fs
restore.
If you can't mount ro with all drives, or ro,degraded with just one
device missing, you'll need to use btrfs restore which is more
tolerant of missing metadata.
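As a sketch (device name and destination directory are placeholders):

```shell
# First choice: read-only, degraded mount, then copy data off normally.
mount -o ro,degraded /dev/sdX /mnt

# Fallback: btrfs restore copies files out without mounting at all.
btrfs restore -v /dev/sdX /path/to/recovery/
```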
--
Chris Murphy
On Mon, Mar 27, 2017 at 2:06 PM, Hans van Kranenburg
<hans.van.kranenb...@mendix.com> wrote:
> On 03/27/2017 09:53 PM, Roman Mamedov wrote:
>> On Mon, 27 Mar 2017 13:32:47 -0600
>> Chris Murphy <li...@colorremedies.com> wrote:
>>
>>> How about
How about if qgroups are enabled, then non-root user is prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be included in its
parent's quota, rather than the new subvolume having a whole new quota
limit?
Tricky problem.
Chris Murphy
://drive.google.com/open?id=0B7NttHfq6wAJd3ZXM3AxS0dVQlU
Chris Murphy
On Thu, Mar 2, 2017 at 6:48 PM, Chris Murphy <li...@colorremedies.com> wrote:
>
> Again, my data is fine. The problem I'm having is this:
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/Documentation/filesystems/btrfs.txt?id=refs/tags/v4.10.1
On Thu, Mar 2, 2017 at 6:18 PM, Qu Wenruo <quwen...@cn.fujitsu.com> wrote:
>
>
> At 03/03/2017 09:15 AM, Chris Murphy wrote:
>>
>> [1805985.267438] BTRFS info (device dm-6): allowing degraded mounts
>> [1805985.267566] BTRFS info (device dm-6): disk space cach
[1805985.267438] BTRFS info (device dm-6): allowing degraded mounts
[1805985.267566] BTRFS info (device dm-6): disk space caching is enabled
[1805985.267676] BTRFS info (device dm-6): has skinny extents
[1805987.187857] BTRFS warning (device dm-6): missing devices (1)
exceeds the limit (0),
) which might be what's going on, so that there's a
longer recovery time and maybe the drive figures out the problem and
recovers the data.
smartctl -l scterc /dev/sdX
That should report back the SCT ERC status. Don't change it until we
know the configuration.
--
Chris Murphy
twice, it's not OK once.
--
Chris Murphy
t; [ 1260.559180] BTRFS warning (device sdb1): i/o error at logical
> 40569896960 on dev /dev/sda1, sector 81351616, root 309, inode 55135,
> offset 71278592, length 4096, links 1 (path:
> nefelim4ag/.config/skypeforlinux/Cache/data_3)
That suggests the problem is with data, not metadata.
Hi,
This man page contains a list for pretty much every other file system,
with a one-liner description: ext4 and XFS are in there, and even NTFS,
but not Btrfs.
Also, /etc/filesystems doesn't contain Btrfs. Anyone know if either,
or both, ought to contain an entry for Btrfs?
--
Chris Murphy
s
journal files with chattr +C, and I do manual snapshots of rootfs
periodically, and those snapshots have journal files that have +C
still set.
--
Chris Murphy
een able to specify
> rootflags=compress=lzo in my APPEND line, this is also in /etc/fstab
> and it booted... I'm using this as evidence that it must support
> compression on a root btrfs device? Any reason to think otherwise?
I don't understand the question. What is "it" when you
is only
needed to tell the kernel to write new extents with compression.
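For example, an /etc/fstab line of this shape (the UUID and subvolume name are placeholders):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  subvol=root,compress=lzo  0 0
```

rootflags=compress=lzo on the kernel command line covers the initial root mount, before fstab is read.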
--
Chris Murphy
her that's a problem. You could try
removing the leading / because the path to the subvolume is really
subvol=arch/root. Hmmm.
That's all I'm thinking of at the moment.
Chris Murphy
On Mon, Jan 30, 2017 at 2:20 PM, Chris Murphy <li...@colorremedies.com> wrote:
> What people do with huge databases, which have this same problem,
> they'll take a volume snapshot. This first commits everything in
> flight, freezes the fs so no more changes can happen, then tak
On Mon, Jan 30, 2017 at 2:07 PM, Michael Born <michael.b...@aei.mpg.de> wrote:
>
>
> Am 30.01.2017 um 21:51 schrieb Chris Murphy:
>> On Mon, Jan 30, 2017 at 1:02 PM, Michael Born <michael.b...@aei.mpg.de>
>> wrote:
>>> Hi btrfs experts.
>>>
>
g modified while dd was
happening, so the image you've taken is inconsistent.
--
Chris Murphy
On Wed, Jan 25, 2017 at 3:58 PM, Omar Sandoval <osan...@osandov.com> wrote:
> On Wed, Jan 25, 2017 at 03:55:33PM -0700, Chris Murphy wrote:
>> On Tue, Jan 24, 2017 at 9:42 PM, Omar Sandoval <osan...@osandov.com> wrote:
>> > On Tue, Jan 24, 2017 at 07:53:06PM -0700, Ch
On Tue, Jan 24, 2017 at 9:42 PM, Omar Sandoval <osan...@osandov.com> wrote:
> On Tue, Jan 24, 2017 at 07:53:06PM -0700, Chris Murphy wrote:
>> On Tue, Jan 24, 2017 at 3:50 PM, Omar Sandoval <osan...@osandov.com> wrote:
>>
>> > Got this to repro after installi
t means there's something different about subvolumes and directories
when it comes to xattrs, and the xattr patch I found in bisect is
exposing the difference, hence things getting tripped up.
--
Chris Murphy
ly and can't write all
of what's flushed, it results in journal data loss. If the rest of the
output is useful I can switch systemd to only use volatile storage to
avoid this problem.
Chris Murphy
On Tue, Jan 24, 2017 at 1:27 PM, Omar Sandoval <osan...@osandov.com> wrote:
> On Tue, Jan 24, 2017 at 01:24:51PM -0700, Chris Murphy wrote:
>> On Tue, Jan 24, 2017 at 1:10 PM, Omar Sandoval <osan...@osandov.com> wrote:
>>
>> > Hm, still no luck, maybe it's a S
d the WARN_ONCE().
> Could you dump the contents of /{etc,run,usr/lib}/tmpfiles.d somewhere?
This is just ls -lZ for those directories; I'm not sure how else to dump them.
https://drive.google.com/open?id=0B_2Asp8DGjJ9NFJUUUpuV2lxcG8
--
Chris Murphy
. So this could be an SELinux xattr that's
involved.
Chris Murphy
SATA SSD.
dmesg
https://drive.google.com/open?id=0B_2Asp8DGjJ9cjhSRUJxc1k3NVE
Chris Murphy
Windows to
make p6, and then added p6 to p4. Hence p4 and p6 are the same Btrfs
volume (single profile for metadata and data).
--
Chris Murphy
to /var, flushing the journal
to disk within about 2 seconds of the first Btrfs error. And systemd
does chattr +C on its logs by default now (and I think it's not user
changeable behavior so I can't test if it's related).
Chris Murphy
On Tue, Jan 24, 2017 at 10:49 AM, Omar Sandoval <osan...@osandov.com> wrote:
> On Mon, Jan 23, 2017 at 08:51:24PM -0700, Chris Murphy wrote:
>> On Mon, Jan 23, 2017 at 5:05 PM, Omar Sandoval <osan...@osandov.com> wrote:
>> > Thanks! Hmm, okay, so it's coming f
ov-9f223b_2-dmesg.log
https://drive.google.com/open?id=0B_2Asp8DGjJ9UnNSRXpualprWHM
--
Chris Murphy
On Mon, Jan 23, 2017 at 3:04 PM, Omar Sandoval <osan...@osandov.com> wrote:
> On Mon, Jan 23, 2017 at 02:55:21PM -0700, Chris Murphy wrote:
>> On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
>> > I haven't found the commit for that patch, so maybe it's something
&
On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
> I haven't found the commit for that patch, so maybe it's something
> with the combination of that patch and the previous commit.
I think that's provably not the case based on the bisect log, because
I hit the problem with kernel that ha
On Mon, Jan 23, 2017 at 2:31 PM, Omar Sandoval <osan...@osandov.com> wrote:
> On Wed, Jan 18, 2017 at 02:27:13PM -0700, Chris Murphy wrote:
>> On Wed, Jan 11, 2017 at 4:13 PM, Chris Murphy <li...@colorremedies.com>
>> wrote:
>> > Looks like there's some
OK so all of these pass original check, but have problems reported by
lowmem. Separate notes about each inline.
~500MiB each, these three are data volumes, first two are raid1, third
one is single.
https://drive.google.com/open?id=0B_2Asp8DGjJ9Z3UzWnFKT3A0clU
while the file system is mounted)
>
> Idiot me. I always forget that.
>
Well, in your defense the error message is misleading.
Chris Murphy
212Yk0
There are 4 more file systems, 2 are single device, 2 are two device
(raid1) fs's. I'm not sure how to use this indirect btrfs-image
approach with raid1.
Chris Murphy
lowmem mode is still not the default option.
>
> The problem of original mode is, if you're checking a TB-level fs with only 2
> or 4G of RAM, it's quite possible you'll run out of memory and won't be able
> to check the fs at all, which is more severe than annoying.
Fair enough.
--
C
is already incredibly more complicated and unhelpful
than any other file system.
btrfs-progs 4.9.
kernels vary from 4.4 to 4.10-rc4, but one fs is only a month old
having used 4.7 and 4.8 kernels; and another one just 4.10-rc3 as I
said.
--
Chris Murphy
On Thu, Jan 19, 2017 at 12:15 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Chris Murphy posted on Wed, 18 Jan 2017 14:30:28 -0700 as excerpted:
>
>> On Wed, Jan 18, 2017 at 2:07 PM, Jon <jmoro...@hawaii.edu> wrote:
>>> So, I had a raid 1 btrfs system setup
a Btrfs
volume aren't present.
>
> I used raid 1 because I figured that if one drive failed I could simply
> use the other. This recovery scenario makes me think this is incorrect.
> Am I misunderstanding btrfs raid? Is there a process to go through for
> mounting single member of a raid
On Wed, Jan 11, 2017 at 4:13 PM, Chris Murphy <li...@colorremedies.com> wrote:
> Looks like there's some sort of xattr and Btrfs interaction happening
> here; but as it only happens with some subvolumes/snapshots not all
> (but 100% consistent) maybe the kernel version at the time
.5, tons of bugs have been fixed since 4.7.3
although I don't know offhand if this particular bug is fixed. I did
recently do a btrfs-image with btrfs-progs v4.9 with -s and did not
get a segfault.
--
Chris Murphy
evices for you regardless of which
device you point it to, and checks the whole file system. It's not
necessary to run it on each device.
'btrfs check --mode=lowmem' might find the problem, but I don't think
it can fix anything yet.
Chris Murphy
: [6c6ef9f26e598fb977f60935e109cd5b266c941a] xattr:
Stop calling {get,set,remove}xattr inode operations
btrfs-image produces a 159M file.
I've updated the bug report
https://bugzilla.kernel.org/show_bug.cgi?id=191761 and also adding
patch author to cc.
Chris Murphy
On Mon, Jan 2, 2017 at 11:50 AM, Chris Murphy <li...@colorremedies.com> wrote:
> Attempt 2. The original post isn't showing up on spinics, just my followup.
>
>
> The Problem: The file system goes read-only soon after startup. It
> only happens with a particular subvo
ink with
remove. For replace there's a consolidation of steps; it's been a
while since I've looked at the code, so I can't tell you which steps it
skips, what state the devices are in during the replace, or which one
active writes go to.
--
Chris Murphy
rsion is reasonably safest to use.
> 4.7, 4.8 or 4.9 series?
>
> What are your experiences and recommendations?
I'm using 4.8.15. With 4.9 and 4.10 I'm seeing a regression where a
volume goes read only for inexplicable reasons. Both btrfs check and
scrub show no problems.
http://www.spinics.n
his is home/chris/gits which is a subvolume that's not empty.
--
Chris Murphy
On Mon, Jan 2, 2017 at 2:05 PM, Chris Murphy <li...@colorremedies.com> wrote:
> On Mon, Jan 2, 2017 at 2:02 PM, Jeff Mahoney <je...@suse.com> wrote:
>> On 1/2/17 4:55 AM, Andrei Borzenkov wrote:
>>> I try to understand what exactly is trimmed in case of btrfs. Using
fstrim -v /
[sudo] password for chris:
/: 48.8 GiB (52393148416 bytes) trimmed
[chris@f25h ~]$
Otherwise, it should have trimmed ~52GiB.
--
Chris Murphy
No such entry
[ 34.789000] BTRFS info (device nvme0n1p4): delayed_refs has NO entry
--
Chris Murphy
On Sat, Dec 31, 2016 at 12:44 PM, Chris Murphy <li...@colorremedies.com> wrote:
> Problem: The file system (root fs) goes read-only soon after login. It
> only happens with a particular subvolume, and only with kernel 4.9.0.
> If I use kernel 4.9.0 on a different subvolume the