On 2016-10-17 23:23, Anand Jain wrote:
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior, the filesystem reports that it should
have 6 devices, which is how it knows a device is missing.
Missing - means missing at the time of mount. So how
On 2016-10-17 16:40, Chris Murphy wrote:
Maybe better to use /sys/fs/btrfs/<UUID>/devices to find the devices
to monitor, and then monitor them with blktrace - maybe there's some
coarser granularity available there, I'm not sure. The thing is, as
far as Btrfs alone is concerned, a drive can be "bad" an
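For illustration, a minimal sketch of the sysfs route (the filesystem UUID and the output are placeholders, not from the original mail):
# ls /sys/fs/btrfs/<fs-uuid>/devices
sdb  sdc  sdd
Each entry corresponds to a block device btrfs currently has for that filesystem; comparing this listing against the expected device set (e.g. from 'btrfs filesystem show') would reveal a device that has dropped out.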
On 2016-10-18 11:02, Stefan Malte Schumacher wrote:
Hello
One of the drives which I added to my array two days ago was most
likely already damaged when I bought it - 312 read errors while
scrubbing and lots of SMART errors. I want to take the drive out, go
to my hardware vendor and have it repla
On 2016-10-19 09:06, Anand Jain wrote:
On 10/19/16 19:15, Austin S. Hemmelgarn wrote:
On 2016-10-18 17:36, Anand Jain wrote:
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior, the filesystem reports that it
should
have 6 devices, which
On 2016-10-18 17:36, Anand Jain wrote:
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior, the filesystem reports that it
should
have 6 devices, which is how it knows a device is missing.
Missing - means missing at the time of mount. So h
On 2016-10-20 05:29, Timofey Titovets wrote:
Hi, I use btrfs for NFS VM replica storage and for NFS shared VM storage.
Right now I have a small problem where VM image deletion takes too long
and the NFS client shows a timeout on deletion
(ESXi storage migration, for example).
Kernel: Linux nfs05 4.7.0-0
On 2016-10-20 09:47, Timofey Titovets wrote:
2016-10-20 15:09 GMT+03:00 Austin S. Hemmelgarn :
On 2016-10-20 05:29, Timofey Titovets wrote:
Hi, I use btrfs for NFS VM replica storage and for NFS shared VM storage.
Right now I have a small problem where VM image deletion takes too long
and NFS
On 2016-10-20 11:26, Roman Mamedov wrote:
On Thu, 20 Oct 2016 08:09:14 -0400
"Austin S. Hemmelgarn" wrote:
So, is it possible to return from unlink() early? Or is this a bad idea (and why)?
I may be completely off about this, but I could have sworn that unlink()
returns when enough info
On 2016-10-20 13:33, ronnie sahlberg wrote:
On Thu, Oct 20, 2016 at 7:44 AM, Austin S. Hemmelgarn
wrote:
On 2016-10-20 09:47, Timofey Titovets wrote:
2016-10-20 15:09 GMT+03:00 Austin S. Hemmelgarn :
On 2016-10-20 05:29, Timofey Titovets wrote:
Hi, I use btrfs for NFS VM replica storage
On 2016-10-21 18:13, Peter Becker wrote:
if you have >750 GB free you can simply remove one of the drives.
btrfs device delete /dev/sd[x] /mnt
#power off, replace device
btrfs device add /dev/sd[y] /mnt
Make sure to balance afterwards if you do this; the new disk will be
pretty much unused unti
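As a hedged sketch, the full sequence including the balance (device names are placeholders):
# btrfs device delete /dev/sdx /mnt
(power off, replace the physical disk)
# btrfs device add /dev/sdy /mnt
# btrfs balance start /mnt
The balance redistributes existing chunks across all devices including the new one; without it, only newly written data lands on the added disk.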
On 11/13/2018 10:31 AM, David Sterba wrote:
On Mon, Oct 01, 2018 at 09:31:04PM +0800, Anand Jain wrote:
+ /*
+ * we are going to replace the device path; make sure it's the
+ * same device if the device is mounted
+ */
+ if (device->bdev) {
+ struct b
On 2018-11-15 13:39, Juan Alberto Cirez wrote:
Is BTRFS mature enough to be deployed on a production system to underpin
the storage layer of a 16+ IP-camera-based NVR (or VMS if you prefer)?
For NVR, I'd say no. BTRFS does pretty horribly with append-only
workloads, even if they are WORM style.
On 2018-12-04 00:37, Tomasz Chmielewski wrote:
I'm trying to use btrfs on an external USB drive, without much success.
When the drive is connected for 2-3+ days, the filesystem gets remounted
readonly, with BTRFS saying "IO failure":
[77760.444607] BTRFS error (device sdb1): bad tree block st
On 2018-12-04 08:37, Graham Cobb wrote:
On 04/12/2018 12:38, Austin S. Hemmelgarn wrote:
In short, USB is _crap_ for fixed storage, don't use it like that, even
if you are using filesystems which don't appear to complain.
That's useful advice, thanks.
Do you (or anyone
On 2018-12-05 14:50, Roman Mamedov wrote:
Hello,
To migrate my FS to a different physical disk, I have added a new empty device
to the FS, then ran the remove operation on the original one.
Now my FS has only devid 2:
Label: 'p1' uuid: d886c190-b383-45ba-9272-9f00c6a10c50
Total device
On 2018-12-06 01:11, Robert White wrote:
(1) Automatic and selective wiping of unused and previously used disk
blocks is a good security measure, particularly when there is an
encryption layer beneath the file system.
(2) USB attached devices _never_ support TRIM and they are the most
likely
On 2018-12-06 23:09, Andrei Borzenkov wrote:
06.12.2018 16:04, Austin S. Hemmelgarn wrote:
* On SCSI devices, a discard operation translates to a SCSI UNMAP
command. As pointed out by Ronnie Sahlberg in his reply, this command
is purely advisory, may not result in any actual state change on
On 2018-12-07 01:43, Doni Crosby wrote:
This is qemu-kvm? What's the cache mode being used? It's possible the
usual write guarantees are thwarted by VM caching.
Yes it is a proxmox host running the system so it is a qemu vm, I'm
unsure on the caching situation.
On the note of QEMU and the cache
On 2018-12-13 05:39, Remi Gauvin wrote:
On 2018-12-13 02:29 AM, Adam Borowski wrote:
For btrfs, a block device is a block device, it's not "racist".
You can freely mix and/or replace. If you want to, say, extend a SD
card with NBD to remote spinning rust, it works well -- tested :p
The pos
On 12/19/2018 7:57 PM, Qu Wenruo wrote:
On 2018/12/19 11:41 PM, devz...@web.de wrote:
does compress-force really force compression?
It should.
The only exception is block size.
If the file is smaller than the sector size (4K for x86_64), then no
compression happens, no matter what the mount opti
On 12/23/2018 1:16 AM, Adam Borowski wrote:
On Sun, Dec 23, 2018 at 12:24:02AM +, Paul Jones wrote:
IMHO the more pertinent question is :
If a file has portions which are not easily compressible, does that imply all
future writes are also incompressible? IMO no, so I think what will be prude
On 2019-01-16 13:15, Chris Murphy wrote:
On Wed, Jan 16, 2019 at 7:58 AM Stefan K wrote:
:(
that means when one JBOD fails there is no guarantee that it works fine?
Like in ZFS? Well, that sucks.
Didn't anyone think to program it that way?
The mirroring is a function of the block group,
On 2019-01-29 18:15, Hans van Kranenburg wrote:
Hi,
Thought experiment time...
I have an HP z820 workstation here (with ECC memory, yay!) and 4x250G
10k SAS disks (and some spare disks). It's donated hardware, and I'm
going to use it to replace the current server in the office of a
non-profit o
On 2019-01-30 10:26, Christoph Anton Mitterer wrote:
On Wed, 2019-01-30 at 07:58 -0500, Austin S. Hemmelgarn wrote:
Running dm-integrity without a journal is roughly equivalent to
using
the nobarrier mount option (the journal is used to provide the same
guarantees that barriers do). IOW, don
On 2019-01-31 07:38, Ronald Schaten wrote:
Hello everybody...
This is my first mail to this list, and -- as much as I'd like to be --
I'm not a kernel developer. So please forgive me if this isn't the right
place for questions like this. I'm thankful for any pointer into the
right direction.
T
On 2019-02-04 12:47, Patrik Lundquist wrote:
On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
1. At least with raid1/10, a particular device can only be mounted
rw,degraded one time and from then on it fails, and can only be ro
mounted. There are patches for this but I don't think they've been
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as a kernel parameter and also in the fstab it works as
expected.
That should be the normal behaviour, because a server must be up and running, and
I don't care about a device loss; that's why I use RAID1. The device-loss
problem can
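For illustration, the setup being described (identifiers are placeholders): the fstab entry carries the mount option, and the kernel command line carries it in rootflags for the root filesystem:
UUID=<fs-uuid>  /  btrfs  defaults,degraded  0  0
rootflags=degraded
As noted elsewhere in these threads, older kernels could create single-profile chunks during degraded raid1 writes, which then blocked a second read-write degraded mount, so mounting degraded by default is not risk-free.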
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as a kernel parameter and also in the fstab it
works as expected.
That should be the normal behaviour, because a server must be up and
running, and I don't care
On 2019-02-07 23:51, Andrei Borzenkov wrote:
07.02.2019 22:39, Austin S. Hemmelgarn wrote:
The issue with systemd is that if you pass 'degraded' on most systemd
systems, and devices are missing when the system tries to mount the
volume, systemd won't mount it because it does
that?
Because we currently don't have any code that does it. Part of the
problem is that we're a lot more tolerant of intermittent I/O errors
than LVM and MD are, so we can't reliably tell if a device is truly gone
or not.
On Thursday, February 7, 2019 2:39:34 PM CET A
On 2019-02-08 13:10, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as a kernel parameter and also in the fstab
it works as expected.
That should be the normal
On 2019-02-10 13:34, Chris Murphy wrote:
On Sat, Feb 9, 2019 at 5:13 AM waxhead wrote:
Understood, but that is not quite what I meant - let me rephrase...
If BTRFS still can't mount, why would it blindly accept a previously
non-existing disk to take part in the pool?!
It doesn't do it blindl
On 2019-02-11 22:16, Sébastien Luttringer wrote:
Hello,
The context is a BTRFS filesystem on top of an md device (raid5 on 6 disks).
System is an Arch Linux and the kernel was a vanilla 4.20.2.
# btrfs fi us /home
Overall:
Device size: 27.29TiB
Device allocated:
On 2019-02-15 10:40, Brian B wrote:
It looks like the btrfs code currently uses the total space available on
a disk to determine where it should place the two copies of a file in
RAID1 mode. Wouldn't it make more sense to use the _percentage_ of free
space instead of the number of free bytes?
F
On 2019-02-15 14:50, Zygo Blaxell wrote:
On Fri, Feb 15, 2019 at 11:54:57AM -0500, Austin S. Hemmelgarn wrote:
On 2019-02-15 10:40, Brian B wrote:
It looks like the btrfs code currently uses the total space available on
a disk to determine where it should place the two copies of a file in
On 2019-06-25 06:41, Roman Mamedov wrote:
Hello,
I have a number of VM images in sparse NOCOW files, with:
# du -B M -sc *
...
46030M total
and:
# du -B M -sc --apparent-size *
...
96257M total
But despite there being nothing else on the filesystem and no snapsh
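As a quick illustration of why the two du numbers differ for sparse files (the file name is hypothetical):
# truncate -s 10G sparse.img
# du -B M -s sparse.img
0M      sparse.img
# du -B M -s --apparent-size sparse.img
10240M  sparse.img
Plain du counts allocated blocks, while --apparent-size reports the logical length, so a mostly-hole VM image shows a large gap between the two totals.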
On 2019-07-25 14:37, David Sterba wrote:
On Thu, Jul 18, 2019 at 02:27:49PM +0800, Qu Wenruo wrote:
RAID10 can tolerate as many as half of its disks being missing, as long as
each sub stripe still has a good mirror.
Can you please make a test case for that?
I think the number of devices that ca
On 2019-08-23 13:08, Adam Borowski wrote:
the improved collision
resistance of xxhash64 is not a reason, as if you intend to dedupe you want
a crypto hash so you don't need to verify.
The improved collision resistance is a roughly 10 orders of magnitude
reduction in the chance of a collision.
On 2019-09-01 21:09, Chris Murphy wrote:
I'm still mostly convinced the policy questions and management should
be dealt with by a btrfsd userspace daemon.
Btrfs kernel code itself tolerates quite a lot of read and write
errors, where a userspace service could say: yeah, forget that, we're
moving over
On 2019-09-04 02:23, Jorge Fernandez Monteagudo wrote:
Hi all!
Is it possible to get an encrypted btrfs in a file? Currently I'm doing this to
get an encrypted ISO filesystem in a file:
# genisoimage -R -J -iso-level 4 -o iso.img
# fallocate iso-crypted.img -l $(stat --printf="%s" iso.img)
# crypts
On 2019-09-04 08:46, Jorge Fernandez Monteagudo wrote:
Hi Austin!
What you want here is mkfs.btrfs with the `-r` and `--shrink` options.
So, for your specific example, replace the genisoimage command from your
first example with this and update the file names appropriately:
# mkfs.btrfs -r
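A hedged sketch of the complete command (paths are placeholders; depending on the btrfs-progs version, the image file may need to be created first with truncate or fallocate):
# mkfs.btrfs -r /path/to/source-tree --shrink btrfs.img
Here -r/--rootdir populates the new filesystem from the given directory at mkfs time, and --shrink trims the image down to roughly the space actually used, so no separate copy step is needed.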
On 2019-09-09 07:25, zedlr...@server53.web-hosting.com wrote:
Quoting Qu Wenruo :
1) Full online backup (or copy, whatever you want to call it)
btrfs backup [-f] <src> <dst>
- backs up a btrfs filesystem given by <src> to a partition <dst>
(with all subvolumes).
Why not just btrfs send?
Or you want to keep the w
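For reference, the existing send/receive path the reply points to looks like this (paths are placeholders):
# btrfs subvolume snapshot -r /data /data/.snap-now
# btrfs send /data/.snap-now | btrfs receive /backup
btrfs send requires a read-only snapshot as its source, which is why the snapshot step comes first; incremental backups add -p with a previous snapshot.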
On 2019-09-09 15:26, webmas...@zedlx.com wrote:
This post is a reply to Remi Gauvin's post, but the email got lost so I
can't reply to him directly.
Remi Gauvin wrote on 2019-09-09 17:24 :
On 2019-09-09 11:29 a.m., Graham Cobb wrote:
and does anyone really care about
defrag any more?).
On 2019-09-10 19:32, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
Defrag may break up extents. Defrag may fuse extents. But it shouldn't
ever unshare extents.
Actually, splitting or merging extents will unshare them in a large
majority of cases.
Ok, this po
On 2019-09-11 13:20, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-10 19:32, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
=== I CHALLENGE you and anyone else on this mailing list: ===
- Show me an example where splittin
On 2019-09-11 17:37, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 13:20, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-10 19:32, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
Give
On 2019-09-12 15:18, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 17:37, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 13:20, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09
On 2019-09-12 19:54, Zygo Blaxell wrote:
On Thu, Sep 12, 2019 at 06:57:26PM -0400, General Zed wrote:
Quoting Chris Murphy :
On Thu, Sep 12, 2019 at 3:34 PM General Zed wrote:
Quoting Chris Murphy :
On Thu, Sep 12, 2019 at 1:18 PM wrote:
It is normal and common for defrag operation t
On 2019-09-12 18:57, General Zed wrote:
Quoting Chris Murphy :
On Thu, Sep 12, 2019 at 3:34 PM General Zed
wrote:
Quoting Chris Murphy :
> On Thu, Sep 12, 2019 at 1:18 PM wrote:
>>
>> It is normal and common for defrag operation to use some disk space
>> while it is running. I estimate t
On 2019-09-12 18:21, General Zed wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-12 15:18, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 17:37, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 13
On 2019-09-13 12:54, General Zed wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-12 18:21, General Zed wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-12 15:18, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 17:37, webmas
On 2019-09-25 00:25, Nick Bowler wrote:
On Tue, Sep 24, 2019, 18:34 Chris Murphy, wrote:
On Tue, Sep 24, 2019 at 4:04 PM Nick Bowler wrote:
- Running Linux 5.2.14, I pushed this system to OOM; the oom killer
ran and killed some userspace tasks. At this point many of the
remaining tasks were
On 2019-10-03 13:51, Graham Cobb wrote:
Hi,
I seem to have another case where scrub gets confused when it is
cancelled and restarted many times (or, maybe, it is my error or
something). I will look into it further but, instead of just hacking
away at my script to work out what is going on, I tho
On 2018-02-01 18:46, Edmund Nadolski wrote:
On 02/01/2018 01:12 AM, Anand Jain wrote:
On 02/01/2018 01:26 PM, Edmund Nadolski wrote:
On 1/31/18 7:36 AM, Anand Jain wrote:
On 01/31/2018 09:42 PM, Nikolay Borisov wrote:
So usually this should be functionality handled by the raid/san
con
On 2018-02-12 10:37, Ellis H. Wilson III wrote:
On 02/11/2018 01:24 PM, Hans van Kranenburg wrote:
Why not just use `btrfs fi du <path>` now and then and
update your administration with the results? .. Instead of putting the
burden of keeping track of all administration during every tiny change
all
On 2018-02-12 11:39, Ellis H. Wilson III wrote:
On 02/12/2018 11:02 AM, Austin S. Hemmelgarn wrote:
BTRFS in general works fine at that scale, dependent of course on the
level of concurrent access you need to support. Each tree update
needs to lock a bunch of things in the tree itself, and
On 2018-02-15 10:42, Ellis H. Wilson III wrote:
On 02/14/2018 06:24 PM, Duncan wrote:
Frame-of-reference here: RAID0. Around 70TB raw capacity. No
compression. No quotas enabled. Many (potentially tens to hundreds) of
subvolumes, each with tens of snapshots. No control over size or number
o
On 2018-02-15 11:58, Ellis H. Wilson III wrote:
On 02/15/2018 11:51 AM, Austin S. Hemmelgarn wrote:
There are scaling performance issues with directory listings on BTRFS
for directories with more than a few thousand files, but they're not
well documented (most people don't hit th
On 2018-02-15 11:18, Alex Adriaanse wrote:
We've been using Btrfs in production on AWS EC2 with EBS devices for over 2
years. There is so much I love about Btrfs: CoW snapshots, compression,
subvolumes, flexibility, the tools, etc. However, lack of stability has been a
serious ongoing issue fo
On 2018-02-20 09:59, Ellis H. Wilson III wrote:
On 02/16/2018 07:59 PM, Qu Wenruo wrote:
On 2018-02-16 22:12, Ellis H. Wilson III wrote:
$ sudo btrfs-debug-tree -t chunk /dev/sdb | grep CHUNK_ITEM | wc -l
3454
OK, this explains everything.
There are too many chunks.
This means at mount you
On 2018-02-21 10:56, Hans van Kranenburg wrote:
On 02/21/2018 04:19 PM, Ellis H. Wilson III wrote:
$ sudo btrfs fi df /mnt/btrfs
Data, single: total=3.32TiB, used=3.32TiB
System, DUP: total=8.00MiB, used=384.00KiB
Metadata, DUP: total=16.50GiB, used=15.82GiB
GlobalReserve, single: total=512.00M
On 2018-02-23 06:21, Shyam Prasad N wrote:
Hi,
Can someone explain to me why there is a difference in the number of
blocks reported by the df and du commands below?
=
# df -h /dc
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1 746G 519G 225G 70% /dc
# btrfs fil
On 2018-02-27 08:09, vinayak hegde wrote:
I am using btrfs, but I am seeing du -sh and df -h showing a huge size
difference on SSD.
mount:
/dev/drbd1 on /dc/fileunifier.datacache type btrfs
(rw,noatime,nodiratime,flushoncommit,discard,nospace_cache,recovery,commit=5,subvolid=5,subvol=/)
du -sh /
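When chasing a du/df discrepancy on btrfs, the filesystem-aware tools usually give more meaningful numbers (mount point is a placeholder):
# btrfs filesystem df /mnt
# btrfs filesystem du -s /mnt
The first breaks usage down by data/metadata/system and allocation profile; the second distinguishes exclusive from shared (reflinked or snapshotted) extents, which plain du either double-counts or misses.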
On 2018-02-28 14:09, Duncan wrote:
vinayak hegde posted on Tue, 27 Feb 2018 18:39:51 +0530 as excerpted:
I am using btrfs, but I am seeing du -sh and df -h showing a huge size
difference on SSD.
mount:
/dev/drbd1 on /dc/fileunifier.datacache type btrfs
(rw,noatime,nodiratime,flushoncommit,disc
On 2018-02-28 14:54, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 28 Feb 2018 14:24:40 -0500 as
excerpted:
I believe this effect is what Austin was referencing when he suggested
the defrag, tho defrag won't necessarily /entirely/ clear it up. One
way to be /sure/ it's cleared u
On 2018-03-01 05:18, Andrei Borzenkov wrote:
On Thu, Mar 1, 2018 at 12:26 PM, vinayak hegde wrote:
No, there is no open file which was deleted; I unmounted, mounted
again, and also rebooted.
I think I am hitting the issue below: lots of random writes were
happening and the file is not fully w
On 2018-03-05 10:28, Christoph Hellwig wrote:
On Sat, Mar 03, 2018 at 06:59:26AM +, Duncan wrote:
Indeed. Preallocation with COW doesn't make the sense it does on an
overwrite-in-place filesystem.
It makes a whole lot of sense, it just is a little harder to implement.
There is no reason
On 2018-03-08 05:36, waxhead wrote:
Just out of curiosity, is there any work going on to enable
different "RAID" levels per subvolume?!
Not that I know of, but it would be great to have (I could get rid of
some of the various small isolated volumes I have solely to have a
different storage
On 2018-03-09 11:02, Paul Richards wrote:
Hello there,
I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive.
Before I attempt any recovery I’d like to ask what is the recommended
approach? (The wiki docs suggest consulting here before attempting
recovery[1].)
The system is power
will give you degraded
performance for the longest amount of time.
Thanks again for your notes, they should be on the wiki.. :)
I've been meaning to add it for a while actually, I just haven't gotten
around to it yet.
On Fri, 9 Mar 2018 at 16:43, Austin S. Hemmelgarn <mail
On 2018-03-13 09:07, Valerio Pachera wrote:
Short version:
656G used (df -h)
450G used (du -sh)
10G used by snapshots
196G discrepancy <-
I don't understand what is using 196G.
df -h /mnt/dati/
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vg00-dati  919G  656G   26
On 2018-03-13 15:36, Goffredo Baroncelli wrote:
On 03/12/2018 10:48 PM, Christoph Anton Mitterer wrote:
On Mon, 2018-03-12 at 22:22 +0100, Goffredo Baroncelli wrote:
Unfortunately no, the likelihood might be 100%: there are some
patterns which trigger this problem quite easily. See the link whi
On 2018-03-14 05:20, Nikolay Borisov wrote:
On 13.03.2018 17:06, Anand Jain wrote:
We aren't checking the SB csum when the device is scanned;
instead we do that when mounting the device, and if the
csum fails we fail the mount. What if we check the csum
when the device is scanned? I can't see any re
On 2018-03-14 14:39, Goffredo Baroncelli wrote:
On 03/14/2018 01:02 PM, Austin S. Hemmelgarn wrote:
[...]
In btrfs, a checksum mismatch creates an -EIO error during the reading. In a
conventional filesystem (or a btrfs filesystem w/o datasum) there is no
checksum, so this problem doesn
On 2018-03-21 03:46, Nikolay Borisov wrote:
On 20.03.2018 22:06, Goffredo Baroncelli wrote:
On 03/20/2018 07:45 AM, Misono, Tomohiro wrote:
Deletion of a subvolume by a non-privileged user is completely restricted
by default because we can delete a subvolume even if it is not empty
and may cause
On 2018-03-21 16:02, Christoph Anton Mitterer wrote:
On the note of maintenance specifically:
- Maintenance tools
- How to get the status of the RAID? (Querying kernel logs is IMO
rather a bad way for this)
This includes:
- Is the raid degraded or not?
Check for the 'degraded' f
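Two checks that work from userspace today, as a hedged sketch (the mount point is a placeholder):
# btrfs filesystem show /mnt | grep -i missing
# grep ' /mnt ' /proc/mounts
'btrfs fi show' prints '*** Some devices missing' when the array is incomplete, and the /proc/mounts entry shows whether the filesystem is currently mounted with the degraded option.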
On 2018-03-21 16:38, Goffredo Baroncelli wrote:
On 03/21/2018 12:47 PM, Austin S. Hemmelgarn wrote:
I agree as well, with the addendum that I'd love to see a new ioctl that does
proper permissions checks. While letting rmdir(2) work for an empty subvolume
with the appropriate permis
fail. Returns 2 if an internal error
+occurred.
+
+Copyright (C) 2018 Austin S. Hemmelgarn
+
+This program is free software; you can redistribute it and/or
+modify it under the terms of the GNU General Public
+License v2 as published by the Free Software Foundation.
+
+This program is distribu
On 2018-03-30 12:38, Adam Borowski wrote:
On Fri, Mar 30, 2018 at 10:42:10AM +0100, Pete wrote:
I've just noticed work going on to make rmdir be able to delete
subvolumes. Is there an intent to allow ls -l to display directories as
subvolumes?
That's entirely up to coreutils guys.
Expanding
On 2018-04-02 11:18, Goffredo Baroncelli wrote:
On 04/02/2018 07:45 AM, Zygo Blaxell wrote:
[...]
It is possible to combine writes from a single transaction into full
RMW stripes, but this *does* have an impact on fragmentation in btrfs.
Any partially-filled stripe is effectively read-only and t
On 2018-04-10 09:08, James Courtier-Dutton wrote:
Hi,
I have a disk that in the past had errors on it.
I have fixed up the errors.
btrfs scrub now reports no errors.
How do I reset these counters to zero?
BTRFS info (device sdc2): bdev /dev/sdc2 errs: wr 0, rd 35, flush 0,
corrupt 1, gen 0
Run
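For reference, those per-device counters come from the device stats command, and the -z flag prints and then resets them (mount point is a placeholder):
# btrfs device stats -z /mnt
The values are persisted on the filesystem itself, so they survive reboots until explicitly zeroed this way.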
On 2018-04-15 21:04, Chris Murphy wrote:
I just ran into this:
https://github.com/neilbrown/mdadm/pull/32/commits/af1ddca7d5311dfc9ed60a5eb6497db1296f1bec
This solution is inadequate; can it be made more generic? This isn't
an md-specific problem; it affects Btrfs and LVM as well. And in fact
ra
On 2018-04-16 11:02, Wol's lists wrote:
On 16/04/18 12:43, Austin S. Hemmelgarn wrote:
On 2018-04-15 21:04, Chris Murphy wrote:
I just ran into this:
https://github.com/neilbrown/mdadm/pull/32/commits/af1ddca7d5311dfc9ed60a5eb6497db1296f1bec
This solution is inadequate, can it be made
On 2018-04-16 13:10, Chris Murphy wrote:
Adding linux-usb@ and linux-scsi@
(This email does contain the thread initiating email, but some replies
are on the other lists.)
On Mon, Apr 16, 2018 at 5:43 AM, Austin S. Hemmelgarn
wrote:
On 2018-04-15 21:04, Chris Murphy wrote:
I just ran into
On 2018-04-18 11:10, Brendan Hide wrote:
Hi, all
I'm looking for some advice re compression with NVMe. Compression helps
performance with a minor CPU hit - but is it still worth it with the far
higher throughputs offered by newer PCIe and NVMe-type SSDs?
I've ordered a PCIe-to-M.2 adapter alo
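Since compression is just a mount option, it is easy to benchmark both ways; a hedged sketch with placeholder device names:
# mount -o compress=zstd /dev/nvme0n1p1 /mnt
# mount -o remount,compress=no /mnt
Only data written while the option is active is compressed; existing extents keep whatever compression they were written with.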
On 2018-04-20 10:21, David Sterba wrote:
This patchset adds a new ioctl, similar to TRIM, that provides several
other ways to clear unused space. The changelogs are
incomplete; this is a preview, not for inclusion yet.
+1 for the idea. This will be insanely useful for certain VM setups.
It com
On 2018-04-23 14:25, waxhead wrote:
Howdy!
I am pondering writing a little C program that uses libmicrohttpd and
libbtrfsutil to display some very basic (overview) details about BTRFS.
I was hoping to display the same information that 'btrfs fi sh /mnt' and
'btrfs fi us -T /mnt' do, but somewh
On 2018-04-25 07:02, David Sterba wrote:
On Wed, Apr 25, 2018 at 06:31:20AM +, Duncan wrote:
David Sterba posted on Tue, 24 Apr 2018 13:58:57 +0200 as excerpted:
btrfs-progs version 4.16.1 has been released. This is a bugfix
release.
Changes:
* remove obsolete tools: btrfs-debug-tre
On 2018-04-25 07:13, Gandalf Corvotempesta wrote:
2018-04-23 17:16 GMT+02:00 David Sterba :
Reviewed and updated for 4.16, there's no change regarding the overall
status, though 4.16 has some raid56 fixes.
Thank you!
Any ETA for a stable RAID56? (or, even better, for a stable btrfs
ready for
On 2018-04-25 07:29, Christoph Anton Mitterer wrote:
On Wed, 2018-04-25 at 07:22 -0400, Austin S. Hemmelgarn wrote:
While I can understand Duncan's point here, I'm inclined to agree
with
David
Same from my side... and I run a multi-PiB storage site (though not
with btrfs).
Cosmet
On 2018-05-02 12:55, waxhead wrote:
Goffredo Baroncelli wrote:
Hi
On 05/02/2018 03:47 AM, Duncan wrote:
Gandalf Corvotempesta posted on Tue, 01 May 2018 21:57:59 + as
excerpted:
Hi to all, I've found some patches from Andrea Mazzoleni that add
support for up to 6-parity raid.
Why these are wa
On 2018-05-02 13:25, Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, nothing. In any case the data is checksummed, so it is
impossible to return corrupted data (modulo bugs :-) ).
On 2018-05-02 16:40, Goffredo Baroncelli wrote:
On 05/02/2018 09:29 PM, Austin S. Hemmelgarn wrote:
On 2018-05-02 13:25, Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, nothing
On 2018-05-03 04:11, Andrei Borzenkov wrote:
On Wed, May 2, 2018 at 10:29 PM, Austin S. Hemmelgarn
wrote:
...
Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which gives
you 40TB of usable space). You're storing roughly 20TB of data on it, using
a 16kB block size, and it
On 2017-11-28 13:48, David Sterba wrote:
On Mon, Nov 27, 2017 at 05:41:56PM +0800, Lu Fengqi wrote:
As we all know, under certain circumstances it is more appropriate to
create some subvolumes rather than keep everything in the same
subvolume. As the conditions of demand change, the user may nee
On 2017-11-28 18:49, David Sterba wrote:
On Tue, Nov 28, 2017 at 09:31:57PM +, Nick Terrell wrote:
On Nov 21, 2017, at 8:22 AM, David Sterba wrote:
On Wed, Nov 15, 2017 at 08:09:15PM +, Nick Terrell wrote:
On 11/15/17, 6:41 AM, "David Sterba" wrote:
The branch is now in a state th
On 2017-12-01 12:13, Andrei Borzenkov wrote:
01.12.2017 20:06, Hans van Kranenburg wrote:
Additional tips (forgot to ask for your /proc/mounts before):
* Use the noatime mount option, so that merely accessing files does not
lead to changes in metadata.
Isn't 'lazytime' the default today? It gives
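For context: lazytime is not the default and is not equivalent; it only defers writeback of inode timestamp updates, while noatime stops most access-time updates from being generated at all, which on a CoW filesystem also avoids metadata churn. A hedged fstab example (identifiers are placeholders):
UUID=<fs-uuid>  /export  btrfs  noatime  0  0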
On 2017-12-01 16:50, Matt McKinnon wrote:
Well, it's at zero now...
# btrfs fi df /export/
Data, single: total=30.45TiB, used=30.25TiB
System, DUP: total=32.00MiB, used=3.62MiB
Metadata, DUP: total=66.50GiB, used=65.16GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
GlobalReserve seems to
On 2017-12-04 09:10, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 04 Dec 2017 07:18:11 -0500 as
excerpted:
On 2017-12-01 16:50, Matt McKinnon wrote:
Well, it's at zero now...
# btrfs fi df /export/
Data, single: total=30.45TiB, used=30.25TiB
System, DUP: total=32.00MiB, used=3.
On 2017-12-05 03:43, Qu Wenruo wrote:
On 2017-12-05 16:25, Misono, Tomohiro wrote:
Hello all,
I want to address some issues of subvolume usability for a normal user.
i.e. a user can create subvolumes, but
- Cannot delete their own subvolume (by default)
- Cannot tell subvolumes from dire