On 2016-11-30 09:04, Roman Mamedov wrote:
On Wed, 30 Nov 2016 07:50:17 -0500
"Austin S. Hemmelgarn" wrote:
*) Read performance is not optimized: all metadata is always read from the
first device unless it has failed, data reads are supposedly balanced between
devices per PID of t
On 2016-11-30 10:49, Wilson Meier wrote:
On 30/11/16 at 15:37, Austin S. Hemmelgarn wrote:
On 2016-11-30 08:12, Wilson Meier wrote:
On 30/11/16 at 11:41, Duncan wrote:
Wilson Meier posted on Wed, 30 Nov 2016 09:35:36 +0100 as excerpted:
On 30/11/16 at 09:06, Martin Steigerwald wrote
On 2016-11-30 12:18, Marc MERLIN wrote:
On Wed, Nov 30, 2016 at 08:46:46AM -0800, Marc MERLIN wrote:
+btrfs mailing list, see below why
Ok, Linus helped me find a workaround for this problem:
https://lkml.org/lkml/2016/11/29/667
namely:
echo 2 > /proc/sys/vm/dirty_ratio
echo 1 > /proc/sys
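For reference, the same knob can also be set through sysctl(8) and persisted via sysctl.d. The second command in the quoted mail is cut off by the archive, so only vm.dirty_ratio is shown here; a sketch, requires root:

```shell
# Runtime equivalent of the first echo above (requires root):
sysctl -w vm.dirty_ratio=2

# Persist across reboots:
echo 'vm.dirty_ratio = 2' > /etc/sysctl.d/99-dirty-ratio.conf
```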
On 2016-11-30 19:48, Chris Murphy wrote:
On Wed, Nov 30, 2016 at 4:57 PM, Eric Wheeler wrote:
On Wed, 30 Nov 2016, Marc MERLIN wrote:
+btrfs mailing list, see below why
On Tue, Nov 29, 2016 at 12:59:44PM -0800, Eric Wheeler wrote:
On Mon, 27 Nov 2016, Coly Li wrote:
Yes, too many work queu
that this switch is
passed and an error occurs reading the stats, the return code will have
bit 0 set (so if there are errors reading counters, and the counters
which were read were non-zero, the return value will be 129).
Signed-off-by: Austin S. Hemmelgarn
---
Tested on multiple filesystems wi
these profiles.
Signed-off-by: Austin S. Hemmelgarn
---
This should work to cover most of the issues brought up on the mailing
list recently regarding this particular aspect of documentation.
Documentation/mkfs.btrfs.asciidoc | 44 ---
1 file changed, 36 inser
On 2016-12-01 15:32, Mike Fleetwood wrote:
On 1 December 2016 at 18:43, Austin S. Hemmelgarn wrote:
Currently, `btrfs device stats` returns non-zero only when there was an
error getting the counter values. This is fine for when it gets run by a
user directly, but is a serious pain when trying
that this switch is passed
and an error occurs reading the stats, the return code will have bit
0 set (so if there are errors reading counters, and the counters which
were read were non-zero, the return value will be 65).
Signed-off-by: Austin S. Hemmelgarn
---
Changes since v1:
* Switched to u
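The exit-status convention described in the patch can be decoded with plain shell arithmetic. This is a sketch based only on the quoted description (bit 0 for read errors, 64 for non-zero counters, 65 for both), not on the btrfs-progs source:

```shell
#!/bin/sh
# Decode the exit status of `btrfs device stats` per the v2 patch text:
#   0          - everything clean
#   bit 0 (1)  - an error occurred while reading the counters
#   64         - at least one counter that was read is non-zero
#   65 = 64|1  - both conditions at once
decode_stats_rc() {
    rc="$1"
    [ "$rc" -eq 0 ] && echo "clean"
    [ $(( rc & 1 )) -ne 0 ] && echo "error reading counters"
    [ $(( rc & 64 )) -ne 0 ] && echo "non-zero error counters"
    return 0
}

decode_stats_rc 65   # prints both the read-error and non-zero-counter lines
```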
On 2016-12-08 10:11, Swâmi Petaramesh wrote:
Hi, Some real world figures about running duperemove deduplication on
BTRFS :
I have an external 2.5", 5400 RPM, 1 TB HD, USB3, on which I store the
BTRFS backups (full rsync) of 5 PCs, using 2 different distros,
typically at the same update level, an
On 2016-12-08 12:20, David Sterba wrote:
On Mon, Dec 05, 2016 at 01:35:20PM -0500, Austin S. Hemmelgarn wrote:
Currently, `btrfs device stats` returns non-zero only when there was an
error getting the counter values. This is fine for when it gets run by a
user directly, but is a serious pain
On 2016-12-08 15:07, Jeff Mahoney wrote:
On 12/8/16 10:42 AM, Austin S. Hemmelgarn wrote:
On 2016-12-08 10:11, Swâmi Petaramesh wrote:
Hi, Some real world figures about running duperemove deduplication on
BTRFS :
I have an external 2.5", 5400 RPM, 1 TB HD, USB3, on which I store the
On 2016-12-08 21:54, Chris Murphy wrote:
On Thu, Dec 8, 2016 at 7:26 PM, Darrick J. Wong wrote:
On Thu, Dec 08, 2016 at 05:45:40PM -0700, Chris Murphy wrote:
OK something's wrong.
Kernel 4.8.12 and duperemove v0.11.beta4. Brand new file system
(mkfs.btrfs -dsingle -msingle, default mount opti
On 2016-12-09 06:02, Ulli Horlacher wrote:
Is file autoversioning possible with btrfs?
I have a VMS background, where the standard filesystem automatically
creates a new version for every file that is written.
The number of versions can be controlled globally, per directory, or per
file.
Wit
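Btrfs has no per-file version counter in the VMS sense; the closest built-in mechanism is read-only snapshots taken on a schedule. A minimal sketch (paths are examples, requires root and a btrfs subvolume):

```shell
# Take a timestamped read-only snapshot of a subvolume; run this from
# cron for coarse-grained "versioning" of everything under /home.
mkdir -p /home/.snapshots
btrfs subvolume snapshot -r /home "/home/.snapshots/$(date +%Y%m%d-%H%M%S)"
```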
On 2016-12-10 20:42, Markus Binsteiner wrote:
Hi Xin,
thanks. I did not enable autosnap, and I'm pretty sure Debian didn't
do it for me either, as I would have seen the subvolumes created by it
at some stage. Good to know about this feature though, will definitely
use it next time around.
BTRFS
On 2016-12-21 21:28, Anand Jain wrote:
A quick design specific question.
The following command converts file-data-extents to the specified
encoder (lzo).
$ btrfs filesystem defrag -v -r -f -clzo dir/
However the lzo property does not persist through the file modify.
As the above operation d
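One way to make the compression setting outlive the defrag is the per-file/per-directory compression property, assuming a btrfs-progs version with property support (the directory name follows the example above):

```shell
btrfs filesystem defrag -v -r -f -clzo dir/   # rewrites existing extents with lzo
btrfs property set dir/ compression lzo       # new writes inherit lzo as well
```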
On 2016-12-22 10:14, Adam Borowski wrote:
On Thu, Dec 22, 2016 at 10:11:35AM +, Duncan wrote:
Given the maturing-but-not-yet-fully-stable-and-mature state of btrfs
today, being no further from a usable current backup than the data you're
willing to lose, at least worst-case, remains an even
On 2016-12-23 03:14, Adam Borowski wrote:
On Thu, Dec 22, 2016 at 01:28:37PM -0500, Austin S. Hemmelgarn wrote:
On 2016-12-22 10:14, Adam Borowski wrote:
On the other, other filesystems:
* suffer from silent data loss every time the disk doesn't notice an error!
Allowing silent data
On 2016-12-22 18:38, Xin Zhou wrote:
Hi,
If the change of disk format between versions is precisely documented,
it is plausible to create a utility to convert the old volume to new ones,
trigger the workflow, upgrade the kernel, and boot up to mount the new
volume.
Currently, the btrfs wiki
On 2016-12-30 15:28, Peter Becker wrote:
Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
I am trying to dedupe them because the first hundred GB of many files are identical.
With a 128KB blocksize and the nofiemap and lookup-extents=no options, this will
take more than a week (only dedup
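An invocation along the lines described above might look like this (paths are examples; the hashfile option lets later runs skip re-hashing unchanged files, which matters at this scale):

```shell
duperemove -dr -b 128k \
    --dedupe-options=nofiemap \
    --lookup-extents=no \
    --hashfile=/var/tmp/dedupe.hash \
    /mnt/volume
```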
On 2017-01-03 09:21, Janos Toth F. wrote:
So, in order to defrag "everything" in the filesystem (which is
possible to / potentially needs defrag) I need to run:
1: a recursive defrag starting from the root subvolume (to pick up all
the files in all the possible subvolumes and directories)
2: a no
On 2017-01-03 13:16, Janos Toth F. wrote:
On Tue, Jan 3, 2017 at 5:01 PM, Austin S. Hemmelgarn
wrote:
I agree on this point. I actually hadn't known that it didn't recurse into
sub-volumes, and that's a pretty significant caveat that should be
documented (and ideally fixed,
Halving the block size will roughly
quadruple the time it takes to make the comparisons.
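A back-of-envelope check of that claim: naive duplicate detection compares blocks pairwise, so the work grows as n*(n-1)/2 in the block count n, and halving the block size doubles n:

```shell
# Pairwise comparisons among n blocks.
pairs() { echo $(( $1 * ($1 - 1) / 2 )); }

pairs 1000   # 499500
pairs 2000   # 1999000, roughly 4x the work for twice the blocks
```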
2017-01-03 20:37 GMT+01:00 Austin S. Hemmelgarn :
On 2017-01-03 14:21, Peter Becker wrote:
All invocations are justified, but not relevant in (offline) backup
and archive scenarios.
For example you have
Wenruo works.
AFAIK, that uses a different code path from the batch deduplication
ioctl. It also doesn't have the context switches and other overhead
from an ioctl involved, because it's done in kernel code.
2017-01-03 21:40 GMT+01:00 Austin S. Hemmelgarn :
On 2017-01-03 15:20, Peter
On 2017-01-04 17:12, Janos Toth F. wrote:
I separated these 9 camera storages into 9 subvolumes (so now I have
10 subvols in total in this filesystem with the "root" subvol). It's
obviously way too early to talk about long term performance but now I
can tell that recursive defrag does NOT descend
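Until defrag learns to cross subvolume boundaries, each subvolume has to be visited explicitly. A sketch, assuming the filesystem root is mounted at /mnt and subvolume paths contain no spaces:

```shell
# Defrag the top level, then every subvolume btrfs reports.
# The subvolume path is the last field of `btrfs subvolume list` output.
btrfs filesystem defrag -r /mnt
btrfs subvolume list /mnt | awk '{ print $NF }' | while read -r sv; do
    btrfs filesystem defrag -r "/mnt/$sv"
done
```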
On 2017-01-09 22:55, Duncan wrote:
This post is triggered by a balance problem due to oversized chunks that
I have currently.
Proposal 1: Ensure maximum chunk sizes are less than 1/8 the size of the
filesystem (down to where they can't be any smaller, at least).
Proposal 2: Drastically reduce d
On 2017-01-10 10:29, Hugo Mills wrote:
On Tue, Jan 10, 2017 at 09:57:52AM -0500, Austin S. Hemmelgarn wrote:
On 2017-01-09 22:55, Duncan wrote:
This post is triggered by a balance problem due to oversized chunks that
I have currently.
Proposal 1: Ensure maximum chunk sizes are less than 1/8
On 2017-01-10 10:47, Hugo Mills wrote:
On Tue, Jan 10, 2017 at 10:42:51AM -0500, Austin S. Hemmelgarn wrote:
Most of the issue in this case is with the size of the initial
chunk. That said, I've got quite a few reasonably sized filesystems
(I think the largest is 200GB) with moderate usage
On 2017-01-10 10:42, Austin S. Hemmelgarn wrote:
Most of the issue in this case is with the size of the initial
chunk. That said, I've got quite a few reasonably sized filesystems
(I think the largest is 200GB) with moderate usage (max 90GB of
data), and none of them are using more tha
On 2017-01-10 16:49, Chris Murphy wrote:
On Tue, Jan 10, 2017 at 2:07 PM, Vinko Magecic
wrote:
Hello,
I set up a raid 1 with two btrfs devices and came across some situations in my
testing that I can't get a straight answer on.
1) When replacing a volume, do I still need to `umount /path` an
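On the replacement question: `btrfs replace` operates on a mounted filesystem, so no unmount should be needed. A sketch with example device names and mountpoint:

```shell
# Replace a failing device in place; the filesystem stays mounted and usable.
btrfs replace start /dev/sdb /dev/sdc /mnt
btrfs replace status /mnt     # monitor progress
```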
On 2017-01-11 15:37, Tomasz Kusmierz wrote:
I would like to use this thread to ask few questions:
If we have 2 devices dying on us and we run RAID6 - this theoretically will
still run (despite our current problems). Now let’s say that we booted up raid6
of 10 disks and 2 of them die but operat
On 2017-01-16 06:10, Christoph Groth wrote:
Hi,
I’ve been using a btrfs RAID1 of two hard disks since early 2012 on my
home server. The machine has been working well overall, but recently
some problems with the file system surfaced. Since I do have backups, I
do not worry about the data, but I
On 2017-01-16 10:42, Christoph Groth wrote:
Austin S. Hemmelgarn wrote:
On 2017-01-16 06:10, Christoph Groth wrote:
root@mim:~# btrfs fi df /
Data, RAID1: total=417.00GiB, used=344.62GiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=40.00MiB, used=68.00KiB
System, single
On 2017-01-16 23:50, Janos Toth F. wrote:
BTRFS uses a 2 level allocation system. At the higher level, you have
chunks. These are just big blocks of space on the disk that get used for
only one type of lower level allocation (Data, Metadata, or System). Data
chunks are normally 1GB, Metadata 2
On 2017-01-17 04:18, Christoph Groth wrote:
Austin S. Hemmelgarn wrote:
There's not really much in the way of great documentation that I know
of. I can however cover the basics here:
(...)
Thanks for this explanation. I'm sure it will be also useful to others.
Glad I could h
On 2017-01-18 09:21, Steven Hum wrote:
Added 2 drives to my RAID10, then ran btrfs balance. The system appears
to have crashed after several hours (I was ssh'd in at the time on my
local network). When I reboot the Arch system, I ran btrfs check and no
errors were reported.
However, attempting
On 2017-01-19 11:39, Alejandro R. Mosteo wrote:
Hello list,
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. The idea is to have some protection against the
typical aging HDD that sta
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. The idea is to have some p
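The two layouts being compared, expressed as mkfs invocations (device names are examples):

```shell
mkfs.btrfs -m dup -d dup /dev/sdb                   # dup: two copies on one device
mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdb2    # raid1 across two partitions of the same disk
```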
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3 of the
memory. I'll leave that running for a day or so, but of course it will not
provide a clear answer...
A small update: while the online memtester is without any errors still
On 2017-01-27 11:47, Hans Deragon wrote:
On 2017-01-24 14:48, Adam Borowski wrote:
On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
If I remove 'ro' from the option, I cannot get the filesystem mounted
because of the following error: BTRFS: missing devices(1) exceeds the
limit(0)
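With a device missing, a writable mount normally needs the degraded option before redundancy can be restored. A sketch (device names, the devid, and the mountpoint are examples):

```shell
# Mount writable despite a missing device, then restore redundancy.
mount -o degraded /dev/sdb /mnt
btrfs replace start <missing-devid> /dev/sdc /mnt
# or, to shrink instead of replace:  btrfs device delete missing /mnt
```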
On 2017-01-28 04:17, Andrei Borzenkov wrote:
On 27.01.2017 at 23:03, Austin S. Hemmelgarn wrote:
On 2017-01-27 11:47, Hans Deragon wrote:
On 2017-01-24 14:48, Adam Borowski wrote:
On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
If I remove 'ro' from the option, I cann
On 2017-01-28 00:00, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 27 Jan 2017 07:58:20 -0500 as
excerpted:
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3
of the memory. I'll leave that running for
On 2017-01-30 23:58, Duncan wrote:
Oliver Freyermuth posted on Sat, 28 Jan 2017 17:46:24 +0100 as excerpted:
Just don't count on restore to save your *** and always treat what it
can often bring to current as a pleasant surprise, and having it fail
won't be a down side, while having it work, if
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as excerpted:
I have been testing btrfs send/receive. I like it.
During those tests I discovered that it is possible to access and modify
(add files, delete files ...) of the new receive snapshot duri
On 2017-02-02 05:52, Graham Cobb wrote:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins and
users use every day are equally workarounds. Setting 007 perms on a dir
that doesn't have anything immediately security vulnerable in it, simply
to k
On 2017-02-01 17:48, Duncan wrote:
Adam Borowski posted on Wed, 01 Feb 2017 12:55:30 +0100 as excerpted:
On Wed, Feb 01, 2017 at 05:23:16AM +, Duncan wrote:
Hans Deragon posted on Tue, 31 Jan 2017 21:51:22 -0500 as excerpted:
But the current scenario makes it difficult for me to put redun
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case fail completely. The fix had no dependencies itself and
I don't see what's bad in mount
On 2017-02-03 04:14, Duncan wrote:
Graham Cobb posted on Thu, 02 Feb 2017 10:52:26 + as excerpted:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins
and users use every day are equally workarounds. Setting 007 perms on
a dir that doesn't
of the send stream.
Signed-off-by: Austin S. Hemmelgarn
---
Inspired by a recent thread on the ML.
This could probably be more thorough, but I felt it was more important
to get it documented as quickly as possible, and this should cover the
basic info that most people will care about
On 2017-02-03 10:44, Graham Cobb wrote:
On 03/02/17 12:44, Austin S. Hemmelgarn wrote:
I can look at making a patch for this, but it may be next week before I
have time (I'm not great at multi-tasking when it comes to software
development, and I'm in the middle of helping to fi
On 2017-02-03 14:17, Graham Cobb wrote:
On 03/02/17 16:01, Austin S. Hemmelgarn wrote:
Ironically, I ended up having time sooner than I thought. The message
doesn't appear to be in any of the archives yet, but the message ID is:
<20170203134858.75210-1-ahferro...@gmail.com>
A
of the send stream.
Signed-off-by: Austin S. Hemmelgarn
Suggested-by: Graham Cobb
---
Changes since v1:
* Updated the description based on suggestions from Graham Cobb.
Inspired by a recent thread on the ML.
This could probably be more thorough, but I felt it was more important
to get it
On 2017-02-05 06:54, Kai Krakow wrote:
On Wed, 1 Feb 2017 17:43:32 +, Graham Cobb wrote:
On 01/02/17 12:28, Austin S. Hemmelgarn wrote:
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as
excerpted:
[...]
I'm just a btrfs-using
On 2017-02-05 23:26, Duncan wrote:
Hans van Kranenburg posted on Sun, 05 Feb 2017 22:55:42 +0100 as
excerpted:
On 02/05/2017 10:42 PM, Alexander Tomokhov wrote:
Is it possible, having two drives to do raid1 for metadata but keep
data on a single drive only?
Nope.
Would be a really nice feat
On 2017-02-04 16:10, Kai Krakow wrote:
On Sat, 04 Feb 2017 20:50:03 +, "Jorg Bornschein" wrote:
February 4, 2017 1:07 AM, "Goldwyn Rodrigues"
wrote:
Yes, please check if disabling quotas makes a difference in
execution time of btrfs balance.
Just FYI: With quotas disabled it took ~20
On 2016-02-26 05:50, Vytautas D wrote:
Hi all,
Are there any known issues upgrading btrfs running ubuntu kernel 3.13
to 3.16 ? System was once converted from ext4 using btrfs-convert (
btrfs-progs 3.17 ).
The commit that worries me is following:
* Btrfs: incompatible format change to remove ho
Added linux-btrfs as this should be documented there as a known issue
until it gets fixed (although I have no idea which side is the issue).
On 2016-02-25 14:22, Stanislav Brabec wrote:
While writing a test suite for util-linux[1], I experienced a strange
behavior of loop device:
When two loo
On 2016-02-26 10:50, Stanislav Brabec wrote:
Austin S. Hemmelgarn wrote:
> Added linux-btrfs as this should be documented there as a known issue
> until it gets fixed (although I have no idea which side is the issue).
This is a very bad behavior, as it makes it impossible to safely use
On 2016-02-26 12:07, Stanislav Brabec wrote:
Austin S. Hemmelgarn wrote:
> On 2016-02-26 10:50, Stanislav Brabec wrote:
That's just it though, from what I can tell based on what I've seen and
what you said above, mount(8) isn't doing things correctly in this case.
If we we
On 2016-02-26 14:12, Stanislav Brabec wrote:
Al Viro wrote:
On Fri, Feb 26, 2016 at 11:39:11AM -0500, Austin S. Hemmelgarn wrote:
That's just it though, from what I can tell based on what I've seen
and what you said above, mount(8) isn't doing things correctly in
this case. I
On 2016-02-26 15:30, Al Viro wrote:
On Fri, Feb 26, 2016 at 03:05:27PM -0500, Austin S. Hemmelgarn wrote:
Where is /mnt/2?
It's kind of interesting, but I can't reproduce _any_ of this
behavior with either ext4 or BTRFS when I manually set up the loop
devices and point mount(8
On 2016-02-26 16:45, Al Viro wrote:
On Fri, Feb 26, 2016 at 10:36:50PM +0100, Stanislav Brabec wrote:
It should definitely report error whenever trying -oloop on top of
anything else than a file. Or at least a warning.
Well, even losetup should report a warning.
Keep in mind that with crypto
On 2016-03-01 11:08, Anand Jain wrote:
This patchset adds btrfs encryption support.
While I think this is a great feature to have, I personally think we're
better off waiting for the ext4/F2FS encryption API's to get pushed up
to the VFS layer in mainline, and then use those for the user facing
On 2016-03-01 11:46, Chris Mason wrote:
On Tue, Mar 01, 2016 at 05:29:52PM +0100, Tomasz Torcz wrote:
On Wed, Mar 02, 2016 at 12:08:09AM +0800, Anand Jain wrote:
This patchset adds btrfs encryption support.
Warning:
The code is in prototype/experimental stage and is not suitable
for the produc
On 2016-03-01 16:44, Duncan wrote:
John Smith posted on Tue, 01 Mar 2016 15:24:04 +0100 as excerpted:
what is the status of btrfs raid5 in kernel 4.4? Thank you
That is a very good question. =:^)
The answer, to the best I can give it, is, btrfs raid56 mode has no known
outstanding bugs spec
On 2016-03-03 14:53, Holger Hoffstätte wrote:
On 03/03/16 19:33, Liu Bo wrote:
On Thu, Mar 03, 2016 at 01:28:29PM +0100, Holger Hoffstätte wrote:
(..)
I've noticed that slow buffered writes create a huge number of
unnecessary 4k sized extents. At first I wrote it off as odd buffering
beha
On 2016-03-01 23:48, Anand Jain wrote:
On 03/02/2016 02:23 AM, Chris Mason wrote:
On Tue, Mar 01, 2016 at 09:59:27AM -0800, Christoph Hellwig wrote:
On Tue, Mar 01, 2016 at 11:46:16AM -0500, Chris Mason wrote:
We'll definitely move in line with the common API over time. Thanks
Anand for st
On 2016-03-07 13:39, Chris Murphy wrote:
On Mon, Mar 7, 2016 at 1:42 AM, Marc Haber wrote:
[1] Does RHEL 6 have btrfs in the first place?
They do, but you need a decoder ring to figure out what's been
backported to have some vague idea of what equivalent kernel.org
kernel it is.
Yeah, in gen
On 2016-03-07 17:55, Tobias Hunger wrote:
Hi,
I have been running systemd-nspawn containers on top of a btrfs
filesystem for a while now.
This works great: Snapshots are a huge help to manage containers!
But today I ran btrfs subvol list . *inside* a container. To my
surprise I got a list of *
On 2016-03-08 16:28, Chris Murphy wrote:
On Tue, Mar 8, 2016 at 12:58 PM, Liu Bo wrote:
On Mon, Mar 07, 2016 at 04:45:09PM -0700, Chris Murphy wrote:
On Mon, Mar 7, 2016 at 3:55 PM, Tobias Hunger wrote:
Hi,
I have been running systemd-nspawn containers on top of a btrfs
filesystem for a whi
On 2016-03-09 21:55, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 09 Mar 2016 07:15:36 -0500 as
excerpted:
On 2016-03-08 16:28, Chris Murphy wrote:
Yes, it's a bit peculiar I can create subvolumes and snapshot them, but
can't 'btrfs sub list/show'
It's an o
On 2016-03-15 09:46, Marc Haber wrote:
On Tue, Mar 15, 2016 at 11:52:30AM +0100, Holger Hoffstätte wrote:
On 03/14/16 21:13, Marc Haber wrote:
Do I need to wait for clear_cache to finish, like until I see disk
usage dropping?
The cache isn't that big, so you won't see a huge drop. Just use th
On 2016-03-15 18:29, Peter Chant wrote:
On 03/15/2016 03:52 PM, Duncan wrote:
Tho even with autodefrag, given the previous relatime and snapshotting,
it could be that the free-space in existing chunks is fragmented, which
over time and continued usage would force higher file fragmentation
despit
or socket:
* btrfs check
* btrfs restore
* btrfs-image
* btrfs-find-root
* btrfs-debug-tree
Signed-off-by: Austin S. Hemmelgarn
---
This has been build and runtime tested on an x86-64 system with glibc.
It has been build tested on x86-64 with uclibc.
It has not been tested on Andro
On 2016-03-18 05:17, Duncan wrote:
Pete posted on Thu, 17 Mar 2016 21:08:23 + as excerpted:
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
This one is available on ssds and spinning rust, and while it never
actually hit failure mode for me on an
On 2016-03-17 05:04, Qu Wenruo wrote:
Austin S. Hemmelgarn wrote on 2016/03/16 11:26 -0400:
Currently, open_ctree_fs_info will open whatever path you pass it and
try to interpret it as a BTRFS filesystem. While this is not
necessarily dangerous (except possibly if done on a character device
On 2016-03-17 20:38, Qu Wenruo wrote:
Austin S. Hemmelgarn wrote on 2016/03/17 07:22 -0400:
On 2016-03-17 05:04, Qu Wenruo wrote:
Austin S. Hemmelgarn wrote on 2016/03/16 11:26 -0400:
Currently, open_ctree_fs_info will open whatever path you pass it and
try to interpret it as a BTRFS
On 2016-03-18 11:17, David Sterba wrote:
On Fri, Mar 18, 2016 at 10:03:42AM -0400, Austin S. Hemmelgarn wrote:
There are other tools that have similarly poor error behavior when
called incorrectly (btrfs rescue immediately comes to mind), but they
don't use open_ctree_fs_info, so this do
On 2016-03-17 04:58, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 16 Mar 2016 11:26:11 -0400 as
excerpted:
Currently, open_ctree_fs_info will open whatever path you pass it and
try to interpret it as a BTRFS filesystem. While this is not
necessarily dangerous (except possibly if done on
On 2016-03-16 02:51, Chris Murphy wrote:
On Tue, Mar 15, 2016 at 10:23 PM, Nazar Mokrynskyi wrote:
Sounds like a really good idea!
I'll try to implement it in my backup tool, but it might take some time to
see real benefit from it (or no benefit:)).
There is a catch. I'm not sure how much te
On 2016-03-18 14:16, Pete wrote:
On 03/18/2016 09:17 AM, Duncan wrote:
So bottom line regarding that smartctl output, yeah, a new device is
probably a very good idea at this point. Those smart attributes indicate
either head slop or spin wobble, and some errors and command timeouts and
retries
On 2016-03-18 11:17, David Sterba wrote:
On Fri, Mar 18, 2016 at 10:03:42AM -0400, Austin S. Hemmelgarn wrote:
This has been both build and runtime tested on an x86-64 system with
glibc. It has been build but not runtime tested with uClibc on x86-64
and ARMv7. It has not been tested on
inode that open(2)
opens, and thus don't need special handling for symlinks.
Signed-off-by: Austin S. Hemmelgarn
---
Changes from v1:
* Updated commit message to use the new name for btrfs-debug-tree
* Added a bit of clarity to the commit message to explain that stat(2)
follow
On 2016-03-21 02:37, Martin Volf wrote:
Hello,
I have just tried the new "btrfs fi du" command from btrfs-progs 4.5
on 4.4.6 linux kernel, and it gave me:
# btrfs fi du /bin
Total Exclusive Set shared Filename
(many lines of output for individual files, probably OK)
...
ERROR: cannot
On 2016-03-21 05:55, Duncan wrote:
Chris Murphy posted on Sun, 20 Mar 2016 21:43:52 -0600 as excerpted:
Hi folks,
So I just ran into this:
https://raid.wiki.kernel.org/index.php/
Recovering_a_failed_software_RAID#Making_the_harddisks_read-
only_using_an_overlay_file
[That's a single link, wr
't be opening them writable).
Signed-off-by: Austin S. Hemmelgarn
---
Build and runtime tested on x86-64 with glibc.
I intend to take the time at some point this week to audit all users of
open_file_or_dir() and similarly change any that don't need to write
to what they're opening, pos
On 2016-03-21 13:29, Tycho Andersen wrote:
On Mon, Mar 21, 2016 at 11:22:06AM -0600, Chris Murphy wrote:
On Mon, Mar 21, 2016 at 9:21 AM, Tycho Andersen
wrote:
Hi all,
I'm seeing some strange behavior when bind mounting files from a btrfs
subvolume. Consider the output below:
root@criu2:/tmp
On 2016-03-21 13:40, David Sterba wrote:
On Mon, Mar 21, 2016 at 08:23:11AM -0400, Austin S. Hemmelgarn wrote:
Currently, btrfs fi du uses open_file_or_dir(), which tries to open
its argument with O_RDWR. Because of POSIX semantics, this fails for
non-root users when the file is read-on
On 2016-03-21 13:13, Chris Murphy wrote:
On Mon, Mar 21, 2016 at 5:22 AM, Austin S. Hemmelgarn
wrote:
On 2016-03-21 05:55, Duncan wrote:
Chris Murphy posted on Sun, 20 Mar 2016 21:43:52 -0600 as excerpted:
Hi folks,
So I just ran into this:
https://raid.wiki.kernel.org/index.php
On 2016-03-22 16:42, Henk Slager wrote:
On Mon, Mar 21, 2016 at 4:43 AM, Chris Murphy wrote:
Hi folks,
So I just ran into this:
https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
This is a device mapper overlay file -
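The wiki recipe boils down to a dm snapshot target backed by a sparse file; a condensed sketch (device name and sizes are examples, requires root):

```shell
# Create a copy-on-write overlay so experiments never touch /dev/sdX.
truncate -s 4G /tmp/overlay              # sparse file to absorb writes
cow=$(losetup -f --show /tmp/overlay)
size=$(blockdev --getsz /dev/sdX)        # origin size in 512-byte sectors
dmsetup create sdx-overlay --table "0 $size snapshot /dev/sdX $cow P 8"
# Work on /dev/mapper/sdx-overlay; tear down with: dmsetup remove sdx-overlay
```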
On 2016-03-23 13:41, Vytautas D wrote:
I can think of few ways to revert changes with btrfs, but I wonder
what are the trade-offs between each method and perhaps there is
already research done on this?
ways to restore a snapshot ( post kernel 3.* ):
a) via set-default
1. btrfs subvolume set-de
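Method (a) in shell form (the subvolume id 256 is a placeholder; read the real one from the list output):

```shell
btrfs subvolume list /mnt             # note the ID of the snapshot to restore
btrfs subvolume set-default 256 /mnt  # that subvolume becomes the mount default
# reboot, or remount, to start using it
```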
o
dup profile to remove this potential data integrity reduction.
Signed-off-by: Austin S. Hemmelgarn
---
A few years back I had sent a patch to do this to the ML, got asked to
rebase it, didn't have the time to do so then, and forgot about it as
the use case this caters to is not one that I ha
On 2016-03-27 20:56, Duncan wrote:
But there's another option you didn't mention, that may be useful,
depending on your exact need and usage of that swap:
Split your swap space in half, say (roughly, you can make one slightly
larger than the other to allow for the EFI on one device) 8 GiB on ea
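Giving both halves the same swap priority makes the kernel stripe pages across them, raid0-style. A sketch with example devices:

```shell
swapon -p 10 /dev/sda3
swapon -p 10 /dev/sdb3
# or persistently in /etc/fstab:
# /dev/sda3  none  swap  sw,pri=10  0 0
# /dev/sdb3  none  swap  sw,pri=10  0 0
```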
On 2016-03-28 10:37, Marc Haber wrote:
Hi,
I have a btrfs which btrfs check --repair doesn't fix:
# btrfs check --repair /dev/mapper/fanbtr
bad metadata [4425377054720, 4425377071104) crossing stripe boundary
bad metadata [4425380134912, 4425380151296) crossing stripe boundary
bad metadata [442
kernel with these patches in a VM on my laptop and tested the
new functionality, and everything appears to work like it's supposed to
without breaking any existing code, so for the patch-set as a whole:
Tested-by: Austin S. Hemmelgarn
--
To unsubscribe from this list: send the line "un
On 2016-03-29 15:24, Yauhen Kharuzhy wrote:
On Tue, Mar 29, 2016 at 10:41:36PM +0800, Anand Jain wrote:
No. No. No please don't do that, it would lead to trouble in handing
slow devices. I purposely didn't do it.
Hmm. Can you explain please? Sometimes admins may want to have
autoreplaceme
On 2016-03-29 16:26, Chris Murphy wrote:
On Tue, Mar 29, 2016 at 1:59 PM, Austin S. Hemmelgarn
wrote:
On 2016-03-29 15:24, Yauhen Kharuzhy wrote:
On Tue, Mar 29, 2016 at 10:41:36PM +0800, Anand Jain wrote:
No. No. No please don't do that, it would lead to trouble in handing
On 2016-03-30 14:27, Darrick J. Wong wrote:
Hi all,
Christoph and I have been working on adding reflink and CoW support to
XFS recently. Since the purpose of (mode 0) fallocate is to make sure
that future file writes cannot ENOSPC, I extended the XFS fallocate
handler to unshare any shared bloc
On 2016-03-31 03:58, Christoph Hellwig wrote:
On Wed, Mar 30, 2016 at 02:58:38PM -0400, Austin S. Hemmelgarn wrote:
Nothing that I can find in the man-pages or API documentation for Linux's
fallocate explicitly says that it will be fast. There are bits that say it
should be efficient, but