On Fri, Jul 7, 2017 at 10:59 AM, Andrei Borzenkov wrote:
> 07.07.2017 19:42, Chris Murphy пишет:
>> I'm digging through piles of list emails and not really finding an
>> answer to this. Maybe it's Friday and I'm just confused...
>>
>>
>> [root@f26
nd the exit code is the same, and
I just don't understand the purpose of this command. The man page says
"wait" but I don't see any waiting, so at the very least we probably
need a more descriptive man page entry.
Thanks.
--
Chris Murphy
It's more like a bind mount of a directory, as far as what's going on
under the hood. I take it it's possible to delete a directory that is
bind mounted elsewhere? I'm not sure what happens though.
Chris Murphy
ixing this would require some design decisions
>> which I hope to hear in this thread soon.
No idea how to proceed. Anand Jain understands the issues related to
seed devices much better than I do. The multiple device stuff is
always complicated.
--
Chris Murphy
roups, and
metadata block groups use some other non-parity based profile like
raid1.
--
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
rf top' to do something. The
command has been issued, the screen is black, and it appears "hung".
976 monotonic time I'm waiting for a client to authenticate to the
webui management system (cockpit), which ends up failing to
authenticate due to a timeout.
--
Chris Murphy
On Wed, Jun 21, 2017 at 9:13 AM, Marc MERLIN wrote:
> On Tue, Jun 20, 2017 at 08:43:52PM -0700, Marc MERLIN wrote:
>> On Tue, Jun 20, 2017 at 09:31:42PM -0600, Chris Murphy wrote:
>> > On Tue, Jun 20, 2017 at 5:12 PM, Marc MERLIN wrote:
>> >
>> >
nt #2,
and is wrongly reconstructed due to event #1, Btrfs computes crc32c on
the reconstructed data and compares to extent csum, which then fails
and EIO happens.
Btrfs is susceptible to the write hole happening on disk. But it's
still detected and corrupt data isn't propagated upward.
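That detection path can be sketched in a few lines. This is purely illustrative, not Btrfs code: XOR parity stands in for raid5 parity, and zlib.crc32 stands in for the crc32c extent checksum (crc32c isn't in the Python standard library).

```python
import zlib

# Toy stripe: 3 data strips plus XOR parity (raid5-style).
strips = [b"AAAA", b"BBBB", b"CCCC"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*strips))
csum = zlib.crc32(strips[1])        # extent csum stored at write time

# Event #1: parity is silently corrupted on disk.
bad_parity = bytearray(parity)
bad_parity[0] ^= 0xFF

# Event #2: strip 1 is lost and gets wrongly reconstructed from the
# surviving strips plus the corrupt parity.
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(bad_parity, strips[0], strips[2]))

# The reconstruction is wrong, but the csum comparison catches it, so
# the bad data is never handed upward (the read returns EIO instead).
assert zlib.crc32(rebuilt) != csum
```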
On Wed, Jun 21, 2017 at 12:51 AM, Marat Khalili wrote:
> On 21/06/17 06:48, Chris Murphy wrote:
>>
>> Another possibility is to ensure a new write is written to a new *not*
>> full stripe, i.e. dynamic stripe size. So if the modification is a 50K
>> file on a 4 disk raid
rom data strip, and by itself parity
is meaningless. But parity plus n-1 data strips is an encoded form of
the missing data strip, and is therefore an encoded copy of the data.
We kinda have to treat the parity as fractionally important compared
to data; just like each mirror copy has some fractional
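A toy sketch of that point, using plain XOR parity (illustrative Python, not Btrfs code): the parity strip matches none of the data strips on its own, but together with the n-1 surviving strips it decodes the missing one.

```python
# Three data strips and their XOR parity.
data = [b"hello", b"world", b"btrfs"]
parity = bytes(x ^ y ^ z for x, y, z in zip(*data))

# By itself the parity is none of the data strips...
assert parity not in data

# ...but parity plus the two surviving strips is an encoded copy of
# the missing strip, recoverable exactly.
missing = bytes(p ^ x ^ z for p, x, z in zip(parity, data[0], data[2]))
assert missing == data[1]
```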
y eventually run into a similar problem that would
be caught by metadata checksum errors. It'll fail faster with Btrfs
because it's checksumming everything.
Chris Murphy
't
> know that an error has happened*.
Uhh, no. I've done quite a number of tests, and absolutely, if the parity
is corrupt and you therefore get a bad reconstruction, you definitely
get a csum mismatch and EIO. Corrupt data does not propagate upward.
The csums are in the csum tree whi
s, but might give a dev a clue what's going on. I
regularly see the normal mode check find no problems while lowmem mode
finds problems. Lowmem mode is a total rewrite, so it's a different
implementation and can find things normal mode won't.
--
Chris Murphy
there
are fixes still planned for normal mode. If it's going to stick
around, it needs to be able to use swap, and the same goes for lowmem
mode. Just running into a total inability to --repair isn't OK.
--
Chris Murphy
On Sun, Jun 11, 2017 at 4:13 AM, Koen Kooi wrote:
>
>> Op 11 jun. 2017, om 12:05 heeft Koen Kooi het
>> volgende geschreven:
>>
>>>
>>> Op 11 jun. 2017, om 06:20 heeft Chris Murphy het
>>> volgende geschreven:
>>>
>>> On Fri,
On Sun, Jun 11, 2017 at 4:05 AM, Koen Kooi wrote:
>
>> Op 11 jun. 2017, om 06:20 heeft Chris Murphy het
>> volgende geschreven:
>>
>> On Fri, Jun 9, 2017 at 1:57 PM, Hugo Mills wrote:
>>> On Fri, Jun 09, 2017 at 09:12:16PM +0200, Koen Kooi wrote:
>>>
ile a
bug. The fsck should not crash.
What are these showing?
# btrfs insp dump-s -f /dev/
# btrfs rescue super /dev/
# btrfs-find-root /dev/
--
Chris Murphy
lt, most likely because I'm on a pre-release of Fedora 26. But it
looks kinda interesting.
--
Chris Murphy
S and Btrfs. A usability gotcha that
probably is manageable on Android is dealing with inheritance of
encryption when making snapshots, and dealing with reflinks.
https://lwn.net/Articles/677620/
--
Chris Murphy
not snapshot/reflink" xattr on the swapfile
Does this solve any usability concerns?
--
Chris Murphy
ile system, the exact
same errors should have happened.
3. My take on this would have been to use btrfs restore and go after
the file path if I absolutely needed a copy of this file (no backup),
and then copy that back to the file system.
Chris Murphy
.11.3, since it's the most current.
And if that doesn't work then I'd try 4.9.30 or something relatively
recent in that series, just because it's a long-term series and
shouldn't have any major new features causing this if it's a regression.
--
Chris Murphy
On Tue, May 23, 2017 at 3:49 PM, Marc MERLIN wrote:
> On Tue, May 23, 2017 at 03:38:01PM -0600, Chris Murphy wrote:
>> > I've tried an ext4 to btrfs conversion 3 times in the last 3 years, it
>> > never worked properly any of those times, sadly.
>>
>> Since
ried an ext4 to btrfs conversion 3 times in the last 3 years, it
> never worked properly any of those times, sadly.
Since the 4.6 total rewrite? There are also recent bug fixes related
to convert in the changelog; it should be working now, and if there are
problems Qu is probably interested in gettin
On Mon, May 22, 2017 at 5:57 PM, Marc MERLIN wrote:
> On Mon, May 22, 2017 at 05:26:25PM -0600, Chris Murphy wrote:
>> On Mon, May 22, 2017 at 10:31 AM, Marc MERLIN wrote:
>>
>> >
>> > I already have 24GB of RAM in that machine, adding more for the real
>>
anyway).
If you can acquire an SSD, you can give the system a bunch of swap,
and at least then hopefully the check repair can complete. Yes, it'll
be slower than with real RAM, but it's not nearly as bad as you might
expect based on experience with HDD-based swap.
--
Chris Murphy
1 size 931.51GiB used 507.02GiB path /dev/sdc1
>
>
>
> Any clue to help me mounting back the disk without having to restart the
> system would be greatly appreciated!
> Regards,
> - Sylvain.
Seems normal to me. The device is seriously misbehaving. Btrfs gets
confused. And it
sufficient, that even though fixes were applied, those
same sectors will still produce errors until flush is used.
Anyway, I suspect a flawed testing method.
--
Chris Murphy
the thing, the firmware is really complicated now.
I kinda wonder if f2fs could be chopped down to become a modular
allocator for the existing file systems; activate that allocation
method with the "ssd" mount option rather than whatever overly smart thing
it does today that's based on
tart out
with just the systemd debug and see if that reveals anything
*assuming* Btrfs is complaining. If Btrfs doesn't complain about a
device going away then I wouldn't worry about it.
--
Chris Murphy
what's going on.
Chris Murphy
On Thu, May 11, 2017 at 8:56 AM, Marat Khalili wrote:
> Sorry if question sounds unorthodox, Is there some simple way to read (and
> backup) all BTRFS metadata from volume?
btrfs-image
--
Chris Murphy
y a cp
--reflink followed by rm, behind the scenes. I find it easy to just
boot subvolumes rather than booting from the top level (subvolume id
5, a.k.a. id 0), using rootflags=subvol=.
--
Chris Murphy
or a device read error) does not result in
a device write error, then it is corrected. I haven't tried simulating
persistent write failure to see how Btrfs behaves but my recollection
is it just keeps trying and does not eject that device. Nor does it
track bad sectors.
--
Chris Murphy
that's what
Fedora's installer is using when installing live systems, regardless
of file system. Of course you still have to fix up some things
after the fact, like fixing fstab to have the correct volume
UUIDs, installing the bootloader and making a new grub.cfg, and also
rebuild the
week.
>
> Sean, how would you approach the copy of the data back and forth if
> the OS is on it? Would a Send-receive and then back work?
Btrfs send receive. Optionally you can send to a file and then
receive from that file. In that case the file can be on any filesystem.
The one thing not
data on this drive.
> I haven't deleted the Firefox cache yet.
> btrfs scrub did not find any errors
I think you've found a bug in kernel handling; we'll just have to see
what Qu says.
I would go with deleting the Firefox cache first, and then if
necessary reinstall just
einstall Firefox.
If that doesn't help, it's a coin toss whether it's worth trying btrfs
check --repair, or whether you're better off taking a read-only snapshot
and using btrfs send/receive to a new Btrfs filesystem. The send/
receive process is easier to use than rsync -a or cp -
qemu-kvm (Fedora 26 pre-beta guest and host)
systemd-233-3.fc26.x86_64
kernel-4.11.0-0.rc8.git0.1.fc26.x86_64
The guest's installed OS uses ext4, with boot parameters rd.udev.debug
and systemd.log_level=debug so we can see the entirety of Btrfs device
discovery and module loading. Using virsh I can hot p
> [ 88.836083] BTRFS: Transaction aborted (error -95)
Fits this thread, which suggests you're using btrfs-progs 4.6 or newer.
The call trace breakdown:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg57565.html
The evaluation:
http://www.mail-archive.com/linux-btrf
-uuid mount option rather
than the block device? I'd think this is better no matter what the
file system is, but certainly with Btrfs and possibly ZFS. That makes
the device discovery problem not systemd's problem to hassle with.
Chris Murphy
On Wed, May 3, 2017 at 11:05 AM, Goff
such that it inherits +C
(like a new directory). And you can also create a new nocow file, and
then cat the old one into the new one. I haven't tried it but
presumably you can use either 'qemu-img convert' or 'qemu-img dd' to
migrate the data inside a cow qcow2 into a nocow
On Mon, May 1, 2017 at 9:23 PM, Marc MERLIN wrote:
> Hi Chris,
>
> Thanks for the reply, much appreciated.
>
> On Mon, May 01, 2017 at 07:50:22PM -0600, Chris Murphy wrote:
>> What about btrfs check (no repair), without and then also with --mode=lowmem?
>>
>> In th
ght now it's all we have for
such a case.
Chris Murphy
rors in the
meantime, maybe a dev will have some idea what's going on from the
call traces; I'm not sure what they mean.
--
Chris Murphy
's what's going on.
You'd need to use -v or possibly -vv with both the send and receive
commands to get more verbose output and maybe see whether it's the
send or receive side having the problem. I'm gonna guess it's the
receive side.
--
Chris Murphy
- ./Music
--- ./Pictures
--- ./Videos
c-- ./tmp ##enable compression
--- ./Applications
C-- ./hello.qcow2 ##this is nocow
--
Chris Murphy
On Sat, Apr 29, 2017 at 2:46 AM, Christophe de Dinechin
wrote:
>
>> On 28 Apr 2017, at 22:09, Chris Murphy wrote:
>>
>> On Fri, Apr 28, 2017 at 3:10 AM, Christophe de Dinechin
>> wrote:
>>
>>>
>>> QEMU qcow2. Host is BTRFS. Guests are BTRFS, LV
llocated raw files with chattr +C
applied. And these days I'm just using LVM thin volumes. The journaled
file systems in a guest cause a ton of backing file fragmentation
unless nocow is used on Btrfs. I've seen hundreds of thousands of
extents for a single backing file for a Windows guest.
-
g. And it's resulting in
problems...
--
Chris Murphy
about it,
and that may mean short term and long term. I can't speak for systemd
developers but if there's a different way to write to the journals
that'd be better for Btrfs and no worse for ext4 and XFS, it might be
considered.
--
Chris Murphy
etween snapshots the file grows by a little
> less than 8MiB or by substantially more.
Just to be clear, none of my own examples involve journals being
snapshot. There are no shared extents for any of those files.
--
Chris Murphy
nue with +C on journals, and
then a separate subvolume to prevent COW from ever happening
inadvertently.
The same behavior happens with NTFS in qcow2 files. They quickly end
up with 100,000+ extents unless nocow is set. It's like the worst-case
scenario.
--
Chris Murphy
file,
compression goes from a 1.27 ratio to 5.70. Here are the results
after catting that file:
https://da.gd/rE8KT
https://da.gd/PD5qI
--
Chris Murphy
g (Windows XP, Windows 10,
> macOS, Fedora25, Ubuntu 16.04).
I do run VMs quite often with all of my setups, but rarely two
concurrently and never three or more. So, hmmm. And are the VMs
backed by a qemu image on Btrfs? Or LVM?
--
Chris Murphy
com is configured to
tell mail senders to fail to (re)send emails; they can only be sent
from raroa.com. Anyway, I think this is supposed to be fixed in
mailing list servers: they need to strip these headers and insert
their own, rather than leaving them intact only to be rejected later
due to honoring the header's stated policy.
--
Chris Murphy
m, RAID1: total=32.00MiB, used=828.00KiB
> System, single: total=4.00MiB, used=0.00B
> Metadata, RAID1: total=104.00GiB, used=70.64GiB
> GlobalReserve, single: total=512.00MiB, used=28.51MiB
Later, after this problem is solved, you'll want to get rid of that
single system chunk tha
e scene that is actually managing the snapshotting for
use with a new container, so it'd need to become quota aware.
Chris Murphy
e COW so
it's fail safe.
--
Chris Murphy
out the
unneeded extra copy. Just stop referencing the extra copy.
Basically, right now it's doing a balance first, then the convert. There's
no efficiency option to just convert via pruning alone.
--
Chris Murphy
Is the file system created with no-holes?
Chris Murphy
2MB metadata chunk and it
doesn't ever get bigger than this.
Anyway, I think the ssd mount option still sounds plausibly useful. What
I'm skeptical of on SSD is defragmenting without compression, and also
nocow.
--
Chris Murphy
" rather than just an in-memory correction. But still,
they're different messages for the same problem and the auto-healing.
--
Chris Murphy
To inhibit chattr +C on systemd-journald journals:
- manually remove the attribute on /var/log/journal and
/var/log/journal/
- write an empty file:
/etc/tmpfiles.d/journal-nocow.conf
Chris Murphy
On Mon, Apr 17, 2017 at 1:26 PM, Austin S. Hemmelgarn
wrote:
> On 2017-04-17 14:34, Chris Murphy wrote:
>> Nope. The first paragraph applies to NVMe machine with ssd mount
>> option. Few fragments.
>>
>> The second paragraph applies to SD Card machine with ssd_s
y if the changes are mostly 4KiB block sizes that then are
>> fsync'd.
>>
>> It's almost like we need these things to not fsync at all, and just
>> rely on the filesystem commit time...
>
> Essentially yes, but that causes all kinds of other problems.
Drat.
--
Chris Murphy
On Mon, Apr 17, 2017 at 12:07 PM, Liu Bo wrote:
> On Mon, Apr 17, 2017 at 11:36:17AM -0600, Chris Murphy wrote:
>> HI,
>>
>>
>> /dev/nvme0n1p8 on / type btrfs
>> (rw,relatime,seclabel,ssd,space_cache,subvolid=258,subvol=/root)
>>
>> I've got a t
ress option; and the file does
compress also. I thought nocow files were always no-compress, but it
seems there are exceptions.
Chris Murphy
g overwritten, and one or
more sections that grow the database file from within and at the end.
If this is made cow, the file will absolutely fragment a ton,
especially if the changes are mostly 4KiB block sizes that are then
fsync'd.
It's almost like we need these things to not fsy
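Back-of-the-envelope numbers for that worst case (the 8 MiB file size here is an assumption for illustration, not a figure from the thread):

```python
# If an 8 MiB database file is rewritten copy-on-write in 4 KiB
# blocks, each fsync'd overwrite can land in a fresh extent, so the
# file can approach one extent per block.
file_bytes = 8 * 1024 * 1024    # assumed file size for illustration
block_bytes = 4 * 1024          # btrfs block size on x86
worst_case_extents = file_bytes // block_bytes
assert worst_case_extents == 2048
```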
On Sat, Apr 15, 2017 at 6:41 PM, Hans van Kranenburg
wrote:
> On 04/15/2017 10:35 PM, Chris Murphy wrote:
>> I have a performance test based on doing a bunch of system updates
>> (~100 rpm packages), and even with the same mount options, back to
>> back tests are sufficientl
On Sat, Apr 15, 2017 at 12:50 PM, Adam Borowski wrote:
> On Sat, Apr 15, 2017 at 12:41:14PM -0600, Chris Murphy wrote:
>> On Sat, Apr 15, 2017 at 12:31 PM, Adam Borowski wrote:
>> > On Sat, Apr 15, 2017 at 12:17:25PM -0600, Chris Murphy wrote:
>> >> Then late
On Sat, Apr 15, 2017 at 12:31 PM, Adam Borowski wrote:
> On Sat, Apr 15, 2017 at 12:17:25PM -0600, Chris Murphy wrote:
>> I don't understand this:
>>
>> /dev/mmcblk0p3 on / type btrfs
>> (rw,noatime,seclabel,compress=zlib,nossd,ssd_spread,space_cache,comm
pread options, which would
seem to be a contradiction. At least it's confusing. So... now what?
kernel 4.10.8-200.fc25.x86_64
--
Chris Murphy
On Sat, Apr 15, 2017 at 12:14 AM, Andrei Borzenkov wrote:
> 12.04.2017 20:21, Chris Murphy пишет:
>> btrfs-map-logical is the tool that will convert logical to physical
>> and also give what device it's on; but the device notation is copy 1
>> and copy 2, so you have to
Can you mount it read-only and run:
btrfs fi show /mnt
btrfs fi df /mnt
Then update btrfs-progs to something newer, like 4.9.2 or
4.10.2, and do another 'btrfs check' without repair. And then
separately do it again with --mode=lowmem and post both sets of
results?
Chris Mu
On Fri, Apr 14, 2017 at 10:46 AM, Chris Murphy wrote:
>
> The passive repair works when it's a few bad sectors on the drive. But
> when it's piles of missing data, this is the wrong mode. It needs a
> limited scrub or balance to fix things. Right now you have to manuall
oximately 20 more drives. This probably won't be until Jan 2018.
Yeah that can work. Read-only degraded might even survive another
drive failure, so why not? It's only a year. That'll go by fast.
>
> As I see it, the key here to to be able to safely delete copied files
> and to safely reduce the number of devices in the array.
The only safe option you have is read-only degraded until you have the
resources to make an independent copy. The more you change this
volume, the more likely it becomes irrecoverable, with data
loss.
--
Chris Murphy
btrfs-map-logical is the tool that will convert logical to physical
and also tell you what device it's on; but the device notation is copy 1
and copy 2, so you have to infer which device that is; it's not
explicit.
Chris Murphy
ive. So it depends on what the original poster is trying to
discover.
--
Chris Murphy
e assumed as metadata. I don't really
> care what is inside the metadata, I just want to know their
> offset-lengths in the file system.
btrfs-debug-tree contains this information in human-readable form.
There's also btrfs-heatmap
https://github.com/knorrie/btrfs-heatmap
--
Chris
once power was cut to the drive. I'd check if there
are firmware updates for the drive.
Curious about:
btrfs rescue super -v
btrfs check
--
Chris Murphy
e
expensive in total time than deleting them in one whack. But I've
never deleted 100 or more at once.
--
Chris Murphy
underlying storage. So you'd set up the ceph
volume "mail" to be backed in order by brick B then A.
Not very well known but XFS will parallelize across drives in a
linear/concat arrangement, it's quite useful for e.g. busy mail
servers.
--
Chris Murphy
ety than Btrfs raid10 because it
is possible to lose more than one drive without data loss. You can
only lose drives on one side of the mirroring, however. This is a
conventional raid0+1, so it's not as scalable as raid10 when it comes
to rebuild time.
--
Chris Murphy
re
all distributed across all of the drives, and that in effect means you
can only lose 1 drive. If you lose a 2nd drive, some amount of
metadata and data will have been lost.
--
Chris Murphy
d expect
it to be closer to the value of btrfs fi us than this. But the code
has changed over time, I'm not sure when the last adjustment was made.
--
Chris Murphy
2.00
> Global reserve: 512.00MiB (used: 0.00B)
>
> Data,RAID10: Size:10.72TiB, Used:10.71TiB
This is saying you have 10.72T of data. But because it's raid10, it
will take up 2x that much space. This is what's reflected by the
Overall: Used: value of 21.45T, plus s
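A quick sanity check of that accounting (sketch only; the 10.72 TiB figure is from the quoted output, and duplicated metadata/system chunks make up the small remainder to 21.45 TiB):

```python
# raid10 writes every byte of data twice, so raw consumption for the
# data alone is roughly 2x the reported data size.
data_tib = 10.72
raw_data_tib = data_tib * 2
assert round(raw_data_tib, 2) == 21.44
```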
On Tue, Apr 4, 2017 at 10:52 AM, Chris Murphy wrote:
> Mounting -o ro,degraded is probably permitted by the file system, but
> chunks of the file system and certainly your data, will be missing. So
> it's just a matter of time before copying data off will fail.
** Context here i
If you can't mount ro with all drives, or ro,degraded with just one
device missing, you'll need to use btrfs restore which is more
tolerant of missing metadata.
--
Chris Murphy
On Mon, Mar 27, 2017 at 2:06 PM, Hans van Kranenburg
wrote:
> On 03/27/2017 09:53 PM, Roman Mamedov wrote:
>> On Mon, 27 Mar 2017 13:32:47 -0600
>> Chris Murphy wrote:
>>
>>> How about if qgroups are enabled, then non-root user is prevented from
>>> cre
How about if qgroups are enabled, then non-root users are prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be included in its
parent's quota, rather than the new subvolume having a whole new quota
limit?
Tricky problem.
Chris Murphy
https://drive.google.com/open?id=0B7NttHfq6wAJd3ZXM3AxS0dVQlU
Chris Murphy
On Thu, Mar 2, 2017 at 6:48 PM, Chris Murphy wrote:
>
> Again, my data is fine. The problem I'm having is this:
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/Documentation/filesystems/btrfs.txt?id=refs/tags/v4.10.1
>
> Which says in the
On Thu, Mar 2, 2017 at 6:18 PM, Qu Wenruo wrote:
>
>
> At 03/03/2017 09:15 AM, Chris Murphy wrote:
>>
>> [1805985.267438] BTRFS info (device dm-6): allowing degraded mounts
>> [1805985.267566] BTRFS info (device dm-6): disk space caching is enabled
>> [180598
[1805985.267438] BTRFS info (device dm-6): allowing degraded mounts
[1805985.267566] BTRFS info (device dm-6): disk space caching is enabled
[1805985.267676] BTRFS info (device dm-6): has skinny extents
[1805987.187857] BTRFS warning (device dm-6): missing devices (1)
exceeds the limit (0), writeab
lower values,
Unlikely. The OP suggests single HDD to single SSD. Only if there is
redundancy is it appropriate to set SCT ERC to a low value like 70
deciseconds.
If it's a single drive, the thing to do is disable SCT ERC in case
it's enabled (?), which might be what's going on, so
K to allow it
twice. If it's not OK twice, it's not OK once.
--
Chris Murphy
0.559180] BTRFS warning (device sdb1): i/o error at logical
> 40569896960 on dev /dev/sda1, sector 81351616, root 309, inode 55135,
> offset 71278592, length 4096, links 1 (path:
> nefelim4ag/.config/skypeforlinux/Cache/data_3)
That suggests the problem is with data, not metadata. What is t
Hi,
This man page contains a list for pretty much every other file system,
with a one-liner description: ext4 and XFS are in there, and even NTFS,
but not Btrfs.
Also, /etc/filesystems doesn't contain Btrfs. Anyone know if either,
or both, ought to contain an entry for Btrfs?
--
Chris Murphy
attr +C, and I do manual snapshots of rootfs
periodically, and those snapshots have journal files that have +C
still set.
--
Chris Murphy