On 2016-03-30 20:32, Liu Bo wrote:
On Wed, Mar 30, 2016 at 11:27:55AM -0700, Darrick J. Wong wrote:
Hi all,
Christoph and I have been working on adding reflink and CoW support to
XFS recently. Since the purpose of (mode 0) fallocate is to make sure
that future file writes cannot ENOSPC, I exte
On 2016-03-31 07:18, Austin S. Hemmelgarn wrote:
On 2016-03-30 20:32, Liu Bo wrote:
On Wed, Mar 30, 2016 at 11:27:55AM -0700, Darrick J. Wong wrote:
Hi all,
Christoph and I have been working on adding reflink and CoW support to
XFS recently. Since the purpose of (mode 0) fallocate is to make
On 2016-03-31 11:31, Andreas Dilger wrote:
On Mar 31, 2016, at 1:55 AM, Christoph Hellwig wrote:
On Wed, Mar 30, 2016 at 05:32:42PM -0700, Liu Bo wrote:
Well, btrfs fallocate doesn't allocate space if the extent is a shared one,
because it thinks the space is already allocated. So a later overwrite
ov
On 2016-04-05 00:19, Duncan wrote:
Gareth Pye posted on Tue, 05 Apr 2016 13:44:05 +1000 as excerpted:
On Tue, Apr 5, 2016 at 12:37 PM, Duncan <1i5t5.dun...@cox.net> wrote:
1) It appears btrfs scrub start's -c option only takes a numeric class,
so try -c3 instead of -c idle.
Does it count as a
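For reference, a sketch of that invocation against a hypothetical mountpoint /mnt; per the usage text quoted elsewhere in this archive, -c takes the numeric ioprio class (3 is the idle class) and -n the priority within that class:

$ sudo btrfs scrub start -c 3 /mnt   # idle I/O class
$ sudo btrfs scrub status /mnt       # check progress afterwards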
On 2016-04-02 01:43, Chris Murphy wrote:
On Fri, Apr 1, 2016 at 10:55 PM, Duncan <1i5t5.dun...@cox.net> wrote:
Marc Haber posted on Fri, 01 Apr 2016 15:40:29 +0200 as excerpted:
[4/502]mh@swivel:~$ sudo btrfs fi usage /
Overall:
Device size: 600.00GiB
Device allocate
On 2016-04-05 13:53, Yauhen Kharuzhy wrote:
Hello,
I am trying to understand the btrfs logic for mounting a multi-device filesystem
when the device generations differ. All my questions relate to the
RAID5/6 for system, metadata, and data case.
Kernel can mount FS with different device generations (if
On 2016-04-05 14:36, Yauhen Kharuzhy wrote:
2016-04-05 11:15 GMT-07:00 Austin S. Hemmelgarn :
On 2016-04-05 13:53, Yauhen Kharuzhy wrote:
Hello,
I am trying to understand the btrfs logic for mounting a multi-device filesystem
when the device generations differ. All my questions relate to
RAID5
On 2016-04-05 23:58, Nicholas D Steeves wrote:
On 11 March 2016 at 20:20, Chris Murphy wrote:
On Fri, Mar 11, 2016 at 5:10 PM, Nicholas D Steeves wrote:
P.S. Rather than parity, I mean: instead of distributing into stripes, make a copy!
raid56 is by definition parity-based, so I'd say no; that
On 2016-04-06 19:08, Chris Murphy wrote:
On Wed, Apr 6, 2016 at 9:34 AM, Ank Ular wrote:
From the output of 'dmesg', the section:
[ 20.998071] BTRFS: device label FSgyroA devid 9 transid 625039 /dev/sdm
[ 20.84] BTRFS: device label FSgyroA devid 10 transid 625039 /dev/sdn
[ 21.00412
Sorry about the almost duplicate mail; Thunderbird's 'Send' button
happens to be right below 'Undo' when you open the edit menu...
On 2016-04-07 15:32, Chris Murphy wrote:
On Thu, Apr 7, 2016 at 5:19 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-06 19:08, Chris Murphy wrote:
On Wed, Apr 6, 2016 at 9:34 AM, Ank Ular wrote:
From the output of 'dmesg', the section:
[ 20.998071] BTRFS: device label FSgyr
On 2016-04-08 14:05, Chris Murphy wrote:
On Fri, Apr 8, 2016 at 5:29 AM, Austin S. Hemmelgarn
wrote:
I entirely agree. If the fix doesn't require any kind of decision to be
made other than whether to fix it or not, it should be trivially fixable
with the tools. TBH though, this parti
On 2016-04-08 12:17, Chris Murphy wrote:
On Fri, Apr 8, 2016 at 5:29 AM, Austin S. Hemmelgarn
wrote:
I entirely agree. If the fix doesn't require any kind of decision to be
made other than whether to fix it or not, it should be trivially fixable
with the tools. TBH though, this parti
On 2016-04-08 14:30, Chris Murphy wrote:
On Fri, Apr 8, 2016 at 12:18 PM, Austin S. Hemmelgarn
wrote:
On 2016-04-08 14:05, Chris Murphy wrote:
On Fri, Apr 8, 2016 at 5:29 AM, Austin S. Hemmelgarn
wrote:
I entirely agree. If the fix doesn't require any kind of decision to be
made
On 2016-04-09 03:24, Duncan wrote:
Yauhen Kharuzhy posted on Fri, 08 Apr 2016 22:53:00 +0300 as excerpted:
On Fri, Apr 08, 2016 at 03:23:28PM -0400, Austin S. Hemmelgarn wrote:
On 2016-04-08 12:17, Chris Murphy wrote:
I would personally suggest adding a per-filesystem node in sysfs to
handle
On 2016-04-14 21:55, Chris Murphy wrote:
Hi,
I'm realizing that instead of doing 'btrfs subvolume list -t' and then 'btrfs
subvolume list -tr' and comparing, it would be better if -t just had a
column for whether a subvolume is ro. And maybe it's useful to know if
a subvolume is a snapshot or not (?). I'm not
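Presumably those are the list subcommand's table and read-only flags; a sketch of the comparison being described, with /mnt as a hypothetical mountpoint:

$ sudo btrfs subvolume list -t /mnt    # every subvolume, table layout
$ sudo btrfs subvolume list -tr /mnt   # only subvolumes flagged read-only

Diffing the two outputs is currently the only way to infer the ro flag, which is why a dedicated column would help.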
On 2016-04-15 18:04, Nicholas D Steeves wrote:
Hi,
I happened to notice this when checking free space of my backup and
primary system. I'll use an example of a file that won't have any
private or confidential information. For du -hc
./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache; ls -alh
./var
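A minimal sketch of how to surface that discrepancy, using a hypothetical FILE variable; ls -l reports the logical (apparent) size, while plain du reports blocks actually allocated, which compression or sparse regions can make much smaller:

$ ls -lh "$FILE"                   # apparent size
$ du -h "$FILE"                    # allocated size on disk
$ du -h --apparent-size "$FILE"    # should match ls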
On 2016-04-17 20:55, Chris Murphy wrote:
On Mon, Apr 11, 2016 at 5:32 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-09 03:24, Duncan wrote:
Yauhen Kharuzhy posted on Fri, 08 Apr 2016 22:53:00 +0300 as excerpted:
On Fri, Apr 08, 2016 at 03:23:28PM -0400, Austin S. Hemmelgarn wrote:
I would
On 2016-04-18 01:22, David Alcorn wrote:
Debian's default installer (1) cannot create a BTRFS raid array
during installation, and (2) installs to the default subvol of the
BTRFS target. The default subvol is 5 (BTRFS root) unless (i) prior
to installation a BTRFS file-system was created, (ii) t
On 2016-04-18 11:12, Chris Murphy wrote:
On Mon, Apr 18, 2016 at 6:31 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-18 01:22, David Alcorn wrote:
I erred and shut down my NAS during a balance. Grub lost track of my
root. Root was on RAID 6 array subvolid 257. I can boot a different
root from
On 2016-04-18 11:39, Chris Murphy wrote:
On Mon, Apr 18, 2016 at 9:15 AM, Austin S. Hemmelgarn
wrote:
I don't know about the current state of the Debian installer, but I know
back when I used Debian regularly and used the standard text-based
installer, as long as I didn't for
On 2016-04-18 16:34, Nicholas D Steeves wrote:
On 18 April 2016 at 11:52, Austin S. Hemmelgarn wrote:
On 2016-04-18 11:39, Chris Murphy wrote:
On Mon, Apr 18, 2016 at 9:15 AM, Austin S. Hemmelgarn
wrote:
Like I said in one of my earlier e-mails though, these kinds of limitations
are part of
On 2016-04-20 16:23, Konstantin Svist wrote:
Pretty much all commands print out the usage message when no device is
specified:
[root@host ~]# btrfs scrub start
btrfs scrub start: too few arguments
usage: btrfs scrub start [-BdqrRf] [-c ioprio_class -n ioprio_classdata]
|
...
However, balance does not.
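Until balance gains the same usage output, being explicit sidesteps the question; a sketch with a hypothetical mountpoint, assuming a btrfs-progs recent enough to have --full-balance:

$ sudo btrfs balance start --full-balance /mnt   # rewrite every chunk
$ sudo btrfs balance start -dusage=50 /mnt       # only data chunks at most 50% full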
On 2016-04-21 02:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg output for easier debugging.
So here we go. I did the same test again. Here is a full log of what I
did. It seems to be mean like
On 2016-04-23 01:38, Duncan wrote:
Juan Alberto Cirez posted on Fri, 22 Apr 2016 14:36:44 -0600 as excerpted:
Good morning,
I am new to this list and to btrfs in general. I have a quick question:
Can I add a new device to the pool while the btrfs filesystem balance
command is running on the dri
On 2016-04-25 08:43, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Apr 2016 07:18:10 -0400 as
excerpted:
On 2016-04-23 01:38, Duncan wrote:
And again with snapshotting operations. Making a snapshot is normally
nearly instantaneous, but there's a scaling issue if you have too man
On 2016-04-26 06:50, Juan Alberto Cirez wrote:
Thank you guys so very kindly for all your help and taking the time to
answer my question. I have been reading the wiki and online use cases
and otherwise delving deeper into the btrfs architecture.
I am managing a 520TB storage pool spread across 1
On Tue, Apr 26, 2016 at 5:11 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-26 06:50, Juan Alberto Cirez wrote:
Thank you guys so very kindly for all your help and taking the time to
answer my question. I have been reading the wiki and online use cases
and otherwise delving deeper into
o performance, you may want to compare BTRFS raid10
mode to BTRFS raid1 on top of two LVM RAID0 volumes. I find this tends
to get better overall performance with no difference in data safety,
because BTRFS still has a pretty brain-dead I/O scheduler in the
multi-device code.
On Tue, Apr 26, 2016
ay the existing code repairs them, all while using
measurably less memory as advertised, so you can add:
Tested-by: Austin S. Hemmelgarn
On 2016-04-26 20:58, Chris Murphy wrote:
On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez
wrote:
With GlusterFS as a distributed volume, the files are already spread
among the servers causing file I/O to be spread fairly evenly among
them as well, thus probably providing the benefit one mig
On 2016-04-27 19:19, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 5:22 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-26 20:58, Chris Murphy wrote:
On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez
wrote:
With GlusterFS as a distributed volume, the files are already spread
among the servers
On 2016-04-27 22:55, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 8:51 PM, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 2:18 PM, Juan Alberto Cirez
wrote:
Quick question: Suppose I have n storage pods (physical
servers with n physical HDDs). The end deployment will be
btrfs
On 2016-04-27 16:18, Juan Alberto Cirez wrote:
Quick question: Suppose I have n storage pods (physical
servers with n physical HDDs). The end deployment will be
btrfs at the brick/block level with a distributed file system on top.
Keeping in mind that my overriding goal is to
On 2016-05-01 08:47, Duncan wrote:
Meanwhile, what kernel IO scheduler do you use (deadline, noop,
cfq,... cfq is the normal default)? Do you use either normal
process nice/priority or ionice to control the rsync? What
about cgroups?
CFQ is the default on many systems, unless you are using a ne
On 2016-05-01 19:49, Duncan wrote:
Kai Krakow posted on Sun, 01 May 2016 18:54:18 +0200 as excerpted:
It affects all file systems. The "btrfs fi sync" is used to finish my
rsync backup and ensure everything is written before I'm trying to
unmount it or the system goes back to sleep.
"df" and f
On 2016-05-04 14:07, Chris Murphy wrote:
On Wed, May 4, 2016 at 7:52 AM, Niccolò Belli wrote:
I tried to add rootflags=noatime,compress=lzo,discard,autodefrag to
GRUB_CMDLINE_LINUX in /etc/default/grub as you suggested but my system
didn't manage to boot, probably because grub automatically a
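Two plausible causes, offered only as guesses: grub-mkconfig usually emits its own rootflags=subvol=... entry, and two rootflags= arguments on the kernel command line do not merge; also, generic VFS options such as noatime are normally translated by mount(8) itself and can be rejected when passed straight to the filesystem via rootflags. A sketch of /etc/default/grub keeping only btrfs-specific options:

GRUB_CMDLINE_LINUX="rootflags=compress=lzo,discard,autodefrag"

$ sudo grub-mkconfig -o /boot/grub/grub.cfg   # regenerate after editing

noatime can then live in the fstab entry for /, where mount(8) handles it.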
On 2016-05-04 19:18, Dmitry Katsubo wrote:
Dear btrfs community,
I am interested in spare volumes and the hot auto-replacement feature [1]. I have a
couple of questions:
* Which kernel version this feature will be included?
Probably 4.7. I would not suggest using it in production for at least a
On 2016-05-06 05:08, David Sterba wrote:
On Thu, May 05, 2016 at 07:23:11AM -0400, Austin S. Hemmelgarn wrote:
On 2016-05-04 19:18, Dmitry Katsubo wrote:
Dear btrfs community,
I am interested in spare volumes and hot auto-replacement feature [1]. I have a
couple of questions:
* Which kernel
On 2016-05-06 07:48, Niccolò Belli wrote:
The following are my subvolumes:
$ sudo btrfs subvol list /
[sudo] password di niko: ID 257 gen 1040 top level 5 path @
ID 258 gen 1040 top level 5 path @home
ID 270 gen 889 top level 257 path var/cache/pacman/pkg
ID 271 gen 15 top level 257 path var/abs
On 2016-05-07 12:11, Niccolò Belli wrote:
On 2016-05-07 17:58, Clemens Eisserer wrote:
Hi Niccolo,
btrfs + dmcrypt + compress=lzo + autodefrag = corruption at first boot
Just out of curiosity: couldn't it be a hardware issue? I use almost the
same setup (compress-force=lzo instead of compr
On 2016-05-09 12:29, Zygo Blaxell wrote:
On Mon, May 09, 2016 at 04:53:13PM +0200, Niccolò Belli wrote:
While trying to find a common denominator for my issue I did lots of backups
of /dev/mapper/cryptroot and I restored them into /dev/mapper/cryptroot
dozens of times (triggering a 150GB+ random
On 2016-05-11 14:36, Richard Lochner wrote:
Hello,
I have encountered a data corruption error with BTRFS which may or may
not be of interest to your developers.
The problem is that an unmodified file on a RAID-1 volume that had
been scrubbed successfully is now corrupt. The details follow.
Th
On 2016-05-12 10:35, Niccolò Belli wrote:
On Monday, 9 May 2016 18:29:41 CEST, Zygo Blaxell wrote:
Did you also check the data matches the backup? btrfs check will only
look at the metadata, which is 0.1% of what you've copied. From what
you've written, there should be a lot of errors in the
On 2016-05-12 13:49, Richard A. Lochner wrote:
Austin,
I rebooted the computer and reran the scrub to no avail. The error is
consistent.
The reason I brought this question to the mailing list is because it
seemed like a situation that might be of interest to the developers.
Perhaps, there mig
On 2016-05-13 07:07, Niccolò Belli wrote:
On Thursday, 12 May 2016 17:43:38 CEST, Austin S. Hemmelgarn wrote:
That's probably a good indication of the CPU and the MB being OK, but
not necessarily the RAM. There are two other possible options for
testing the RAM that haven't bee
On 2017-02-07 08:53, Peter Zaitsev wrote:
Hi,
I have tried BTRFS from Ubuntu 16.04 LTS for a write-intensive OLTP MySQL
workload.
It did not go very well, ranging from multi-second stalls where no
transactions are completed to, finally, a kernel OOPS with a "no space left
on device" error message
On 2017-02-07 10:00, Timofey Titovets wrote:
2017-02-07 17:13 GMT+03:00 Peter Zaitsev :
Hi Hugo,
For the use case I'm looking for, I'm interested in having snapshot(s)
open at all times. Imagine, for example, a snapshot being created every
hour and several of these snapshots kept at all times provi
On 2017-02-07 10:20, Timofey Titovets wrote:
I think you have a problem with extent bookkeeping (if I
understand how btrfs manages extents correctly).
To deal with it, try enabling compression, as compression forces
all extents to be split at a size of ~128KiB.
No, it will compress everything
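For anyone wanting to try that suggestion, a sketch of the two usual ways to enable compression, with hypothetical paths (compressed extents are capped at 128KiB, which is where the figure above comes from):

$ sudo mount -o remount,compress=lzo /mnt               # whole filesystem
$ sudo btrfs property set /mnt/dbfile compression lzo   # single file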
On 2017-02-07 14:31, Peter Zaitsev wrote:
Hi Hugo,
As I re-read it closely (and also other comments in the thread), I now
understand there is a difference in how nodatacow works even if snapshots
are in place.
On autodefrag, I wonder: is there some more detailed documentation about how
autodefrag wor
On 2017-02-07 13:59, Peter Zaitsev wrote:
Jeff,
Thank you very much for explanations. Indeed it was not clear in the
documentation - I read it simply as "if you have snapshots enabled
nodatacow makes no difference"
I will rebuild the database in this mode from scratch and see how
performance ch
On 2017-02-07 14:39, Kai Krakow wrote:
On Tue, 7 Feb 2017 10:06:34 -0500, "Austin S. Hemmelgarn" wrote:
4. Try using in-line compression. This can actually significantly
improve performance, especially if you have slow storage devices and
a really nice CPU.
Just a side
On 2017-02-07 14:47, Kai Krakow wrote:
On Mon, 6 Feb 2017 08:19:37 -0500, "Austin S. Hemmelgarn" wrote:
MDRAID uses stripe selection based on latency and other measurements
(like head position). It would be nice if btrfs implemented similar
functionality. This would also be h
On 2017-02-07 15:19, Kai Krakow wrote:
On Tue, 7 Feb 2017 14:50:04 -0500, "Austin S. Hemmelgarn" wrote:
Also, does autodefrag work with nodatacow (i.e. with snapshots), or
are these exclusive?
I'm not sure about this one. I would assume based on the fact that
many other th
On 2017-02-07 15:36, Kai Krakow wrote:
On Tue, 7 Feb 2017 09:13:25 -0500, Peter Zaitsev wrote:
Hi Hugo,
For the use case I'm looking for, I'm interested in having snapshot(s)
open at all times. Imagine, for example, a snapshot being created every
hour and several of these snapshots kept at all
On 2017-02-07 15:54, Kai Krakow wrote:
On Tue, 7 Feb 2017 15:27:34 -0500, "Austin S. Hemmelgarn" wrote:
I'm not sure about this one. I would assume based on the fact that
many other things don't work with nodatacow and that regular defrag
doesn't work on files whic
On 2017-02-07 13:27, David Sterba wrote:
On Fri, Feb 03, 2017 at 08:48:58AM -0500, Austin S. Hemmelgarn wrote:
This adds some extra documentation to the btrfs-receive manpage that
explains some of the security-related aspects of btrfs-receive. The
first part covers the fact that the subvolume
On 2017-02-07 22:21, Hans Deragon wrote:
Greetings,
On 2017-02-02 10:06, Austin S. Hemmelgarn wrote:
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case
On 2017-02-07 17:28, Kai Krakow wrote:
On Thu, 19 Jan 2017 15:02:14 -0500, "Austin S. Hemmelgarn" wrote:
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of view of data saf
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leaves
the database files as if the database had been killed in flight - like shutting
the system down in the midd
On 2017-02-07 20:49, Nicholas D Steeves wrote:
Dear btrfs community,
Please accept my apologies in advance if I missed something in recent
btrfs development; my MUA tells me I'm ~1500 unread messages
out-of-date. :/
I recently read about "mount -t btrfs -o user_subvol_rm_allowed" while
doing re
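A sketch of the option in action, with hypothetical device and paths; it lets an unprivileged user delete subvolumes they can already write to:

$ sudo mount -t btrfs -o user_subvol_rm_allowed /dev/sdX /mnt
$ btrfs subvolume create /mnt/scratch
$ btrfs subvolume delete /mnt/scratch   # works without root thanks to the option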
On 2017-02-08 08:26, Martin Raiber wrote:
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leave
On 2017-02-08 08:46, Tomasz Torcz wrote:
On Wed, Feb 08, 2017 at 07:50:22AM -0500, Austin S. Hemmelgarn wrote:
It is exponentially safer in BTRFS
to run single data single metadata than half raid1 data half raid1 metadata.
Why?
To convert to profiles _designed_ for a single device and
On 2017-02-08 13:38, Libor Klepáč wrote:
Hello,
inspired by the recent discussion on BTRFS vs. databases, I wanted to ask about
the suitability of BTRFS for hosting a Cyrus IMAP server spool. I haven't found
any recent article on this topic.
I'm preparing the migration of our mailserver to Debian Stretch, i.e. k
On 2017-02-08 09:46, Peter Grandi wrote:
My system is or seems to be running out of disk space but I
can't find out how or why. [ ... ]
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3 28G 26G 2.1G 93% /
[ ... ]
So at the chunk level, your fs is already full. And
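To see the chunk-level allocation hiding behind that df output, the btrfs-specific views are more informative; a sketch assuming / is the filesystem in question:

$ sudo btrfs filesystem df /      # per-profile used vs. allocated
$ sudo btrfs filesystem usage /   # adds device-level unallocated space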
On 2017-02-08 16:45, Peter Grandi wrote:
[ ... ]
The issue isn't total size, it's the difference between total
size and the amount of data you want to store on it, and how
well you manage chunk usage. If you're balancing regularly to
compact chunks that are less than 50% full, [ ... ] BTRFS on
1
On 2017-02-09 06:49, Adam Borowski wrote:
On Wed, Feb 08, 2017 at 02:21:13PM -0500, Austin S. Hemmelgarn wrote:
- maybe deduplication (Cyrus does it by hardlinking same-content messages
now) later
Deduplication beyond what Cyrus does is probably not worth it. In most
cases about 10% of an
On 2017-02-08 20:42, Ian Kelling wrote:
I had a file read fail repeatably; in syslog, lines like this:
kernel: BTRFS warning (device dm-5): csum failed ino 2241616 off
51580928 csum 4redacted expected csum 2redacted
I rmed the file.
Another error more recently, 5 instances which look like this:
On 2017-02-09 08:25, Adam Borowski wrote:
On Wed, Feb 08, 2017 at 11:48:04AM +0800, Qu Wenruo wrote:
Just don't believe the vanilla df output for btrfs.
For btrfs, which unlike other filesystems such as ext4/xfs allocates chunks
dynamically and has different metadata/data profiles, we can only get a clear v
On 2017-02-09 22:58, Andrei Borzenkov wrote:
On 07.02.2017 23:47, Austin S. Hemmelgarn wrote:
...
Sadly, freezefs (the generic interface based off of xfs_freeze) only
works for block device snapshots. Filesystem-level snapshots need the
application software to sync all its data and then
On 2017-02-10 09:21, Peter Zaitsev wrote:
Hi,
As I have been reading the btrfs whitepaper, it speaks about autodefrag in very
generic terms - once a random write in a file is detected, the file is put in the
queue to be defragmented. Yet I could not find any specifics about this
process described anywher
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a subvolume, any files in
the subvolume that have the NOCOW attribute will not have that attribute
in the snapshot. Some further testing indicates that th
On 2017-02-14 11:07, Chris Murphy wrote:
On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn
wrote:
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a subvolume, any files in the
subvolume that have
On 2017-02-14 11:46, Austin S. Hemmelgarn wrote:
On 2017-02-14 11:07, Chris Murphy wrote:
On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn
wrote:
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a
On 2017-02-16 15:13, E V wrote:
It would be nice if there was an easy way to tell btrfs to allocate
another metadata chunk. For example, the below fs is full due to
exhausted metadata:
Device size:1013.28GiB
Device allocated: 1013.28GiB
Device unallocated:
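The usual workaround, sketched with a hypothetical mountpoint: compact nearly-empty data chunks so space returns to the unallocated pool, from which a new metadata chunk can then be carved:

$ sudo btrfs balance start -dusage=5 /mnt   # rewrite data chunks at most 5% full

If nothing is reclaimed, raise the threshold (10, 25, 50) and retry.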
On 2017-02-16 15:36, Chris Murphy wrote:
Hi,
This man page contains a list for pretty much every other file system,
with a one-line description: ext4 and XFS are in there, and even NTFS, but
not Btrfs.
Also, /etc/filesystems doesn't contain Btrfs. Anyone know if either,
or both, ought to contain an
On 2017-02-17 03:26, Duncan wrote:
Imran Geriskovan posted on Thu, 16 Feb 2017 13:42:09 +0200 as excerpted:
Oops... I mean 4.9/4.10 experiences
On 2/16/17, Imran Geriskovan wrote:
What are your experiences with btrfs on the 4.10 and 4.11 kernels?
I'm still on 4.8.x. I'd be happy to hear fro
On 2017-02-23 05:51, Christian Theune wrote:
Hi,
not sure whether it’s possible, but we tried space_cache=v2 and obviously after
working fine in staging it broke in production. Or rather: we upgraded from 4.4
to 4.9 and enabled the space_cache. Our production volume is around 50TiB
usable (un
On 2017-02-23 08:19, Christian Theune wrote:
Hi,
just for future reference if someone finds this thread: there is a bit of
output I’m seeing with this crashing kernel (unclear whether related to btrfs
or not):
31 | 02/23/2017 | 09:51:22 | OS Stop/Shutdown #0x4f | Run-time critical stop
| A
On 2017-02-23 19:54, Qu Wenruo wrote:
At 02/23/2017 06:51 PM, Christian Theune wrote:
Hi,
not sure whether it’s possible, but we tried space_cache=v2 and
obviously after working fine in staging it broke in production. Or
rather: we upgraded from 4.4 to 4.9 and enabled the space_cache. Our
pro
On 2017-02-27 14:15, John Marrett wrote:
Liubo correctly identified direct IO as a solution for my test
performance issues; with it in use I achieved 908 read and 305 write,
not quite as fast as ZFS but more than adequate for my needs. I then
applied Peter's recommendation of switching to raid10
On 2017-03-02 12:26, Andrei Borzenkov wrote:
On 02.03.2017 16:41, Duncan wrote:
Chris Murphy posted on Wed, 01 Mar 2017 17:30:37 -0700 as excerpted:
[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[1717713.446453] BTRFS error
On 2017-03-02 19:47, Peter Grandi wrote:
[ ... ] Meanwhile, the problem as I understand it is that at
the first raid1 degraded writable mount, no single-mode chunks
exist, but without the second device, they are created. [
... ]
That does not make any sense, unless there is a fundamental
mista
On 2017-03-03 00:56, Kai Krakow wrote:
On Thu, 2 Mar 2017 11:37:53 +0100, Adam Borowski wrote:
On Wed, Mar 01, 2017 at 05:30:37PM -0700, Chris Murphy wrote:
[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[1717713.446453
On 2017-03-03 15:10, Kai Krakow wrote:
On Fri, 3 Mar 2017 07:19:06 -0500, "Austin S. Hemmelgarn" wrote:
On 2017-03-03 00:56, Kai Krakow wrote:
On Thu, 2 Mar 2017 11:37:53 +0100, Adam Borowski wrote:
On Wed, Mar 01, 2017 at 05:30:37PM -0700, Chris Murphy wrote:
[...]
Wel
On 2017-03-05 14:13, Peter Grandi wrote:
What makes me think that "unmirrored" 'raid1' profile chunks
are "not a thing" is that it is impossible to explicitly remove
a member device from a 'raid1' profile volume:
first one has to 'convert' to 'single', and then the 'remove'
copies back to the rem
r.c | 5 +-
fs/btrfs/volumes.c | 156 -
fs/btrfs/volumes.h | 37 +
6 files changed, 188 insertions(+), 101 deletions(-)
Everything appears to work as advertised here, so for the patchset as a
whole, you can add:
Tested-by: Austin S
On 2017-03-09 04:49, Peter Grandi wrote:
Consider the common case of a 3-member volume with a 'raid1'
target profile: if the sysadm thinks that a drive should be
replaced, the goal is to take it out *without* converting every
chunk to 'single', because with 2-out-of-3 devices half of the
chunks w
On 2017-03-13 07:52, Juan Orti Alcaine wrote:
2017-03-13 12:29 GMT+01:00 Hérikz Nawarro :
Hello everyone,
Today, is it safe to use btrfs for home storage? No raid, just secure
storage for some files, and creating snapshots of it.
In my humble opinion, yes. I'm running a RAID1 btrfs at home for 5
I'm currently working on a plugin for collectd [1] to track per-device
per-filesystem error rates for BTRFS volumes. Overall, this is actually
going quite well (I've got most of the secondary logic like matching
filesystems to watch and parsing the data done already), but I've come
across a r
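For reference, a sketch of the data source such a plugin can poll, assuming /mnt is a mounted btrfs filesystem; each counter is one line and splits cleanly on whitespace (format shown from memory, so treat the exact field names as approximate):

$ sudo btrfs device stats /mnt
[/dev/sda].write_io_errs    0
[/dev/sda].read_io_errs     0
[/dev/sda].flush_io_errs    0
[/dev/sda].corruption_errs  0
[/dev/sda].generation_errs  0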
On 2017-03-17 15:25, John Marrett wrote:
Peter,
Bad news. That means the disk is probably damaged and
further issues may happen.
This system has a long history, I have had a dual drive failure in the
past, I managed to recover from that with ddrescue. I've subsequently
copied the content
On 2017-03-17 15:01, Eric Sandeen wrote:
On 3/17/17 11:25 AM, Austin S. Hemmelgarn wrote:
I'm currently working on a plugin for collectd [1] to track per-device
per-filesystem error rates for BTRFS volumes. Overall, this is actually going
quite well (I've got most of the secon
On 2017-03-23 06:09, Hugo Mills wrote:
On Wed, Mar 22, 2017 at 10:37:23PM -0700, Sean Greenslade wrote:
Hello, all. I'm currently tracking down the source of some strange
behavior in my setup. I recognize that this isn't strictly a btrfs
issue, but I figured I'd start at the bottom of the stack
On 2017-03-25 23:00, J. Hart wrote:
I have a Btrfs filesystem on a backup server. This filesystem has a
directory to hold backups for filesystems from remote machines. In this
directory is a subdirectory for each machine. Under each machine
subdirectory is one directory for each filesystem (ex
On 2017-03-27 07:02, Moritz Sichert wrote:
On 27.03.2017 at 05:46, Qu Wenruo wrote:
At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
On 27.03.2017 03:39, Qu Wenruo wrote:
At 03/26/2017 06:03 AM, Moritz Sichert wrote:
Hi,
I tried to configure qgroups on a btrfs filesystem but was really
surp
On 2017-03-27 09:24, Hugo Mills wrote:
On Mon, Mar 27, 2017 at 03:20:37PM +0200, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:07 PM, Hugo Mills wrote:
On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it
takes about a minute to move 1 GiB of data. At that rate, it would
On 2017-03-27 09:50, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
Something I’d like to verify: does having traffic on the volume have
the potential to delay this indefinitely? I.e. does the system write
to any segments that we’re trying to free, so it
On 2017-03-27 09:54, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:50 PM, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
Something I’d like to verify: does having traffic on the volume have
the potential to delay this indefinitely? I.e. does the
On 2017-03-27 15:32, Chris Murphy wrote:
How about: if qgroups are enabled, then a non-root user is prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be included in its
parent's quota, rather than the new subvolume having a whole new quota
limit?
Tricky proble
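The second question roughly maps onto the existing qgroup hierarchy: lower-level qgroups can be assigned to a higher-level qgroup that carries the limit, though new subvolumes are not pulled into it automatically. A sketch with hypothetical IDs and mountpoint:

$ sudo btrfs qgroup create 1/100 /mnt         # higher-level group
$ sudo btrfs qgroup assign 0/257 1/100 /mnt   # place subvolume 257 under it
$ sudo btrfs qgroup limit 10G 1/100 /mnt      # limit applies to the whole group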