On 2016-07-06 14:23, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 12:04 PM, Austin S. Hemmelgarn
wrote:
On 2016-07-06 13:19, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 3:51 AM, Andrei Borzenkov
wrote:
3) can we query btrfs whether it is mountable in degraded mode?
according to documentation
On 2016-07-06 14:45, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 11:18 AM, Austin S. Hemmelgarn
wrote:
On 2016-07-06 12:43, Chris Murphy wrote:
So does it make sense to just set the default to 180? Or is there a
smarter way to do this? I don't know.
Just thinking about this:
1. Peopl
On 2016-07-06 13:19, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 3:51 AM, Andrei Borzenkov wrote:
3) can we query btrfs whether it is mountable in degraded mode?
according to documentation, "btrfs device ready" (which udev builtin
follows) checks "if it has ALL of its devices in cache for mount
On 2016-07-06 12:43, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 5:51 AM, Austin S. Hemmelgarn
wrote:
On 2016-07-05 19:05, Chris Murphy wrote:
Related:
http://www.spinics.net/lists/raid/msg52880.html
Looks like there is some traction to figuring out what to do about
this, whether it's a
On 2016-07-06 12:05, Austin S. Hemmelgarn wrote:
On 2016-07-06 11:22, Joerg Schilling wrote:
"Austin S. Hemmelgarn" wrote:
It should be obvious that a file that offers content also has
allocated blocks.
What you mean then is that POSIX _implies_ that this is the case, but
do
On 2016-07-06 11:22, Joerg Schilling wrote:
"Austin S. Hemmelgarn" wrote:
It should be obvious that a file that offers content also has allocated blocks.
What you mean then is that POSIX _implies_ that this is the case, but
does not say whether or not it is required. There are al
On 2016-07-06 10:53, Joerg Schilling wrote:
Antonio Diaz Diaz wrote:
Joerg Schilling wrote:
POSIX requires st_blocks to be != 0 if the file contains data.
Please, could you provide a reference? I can't find such requirement at
http://pubs.opengroup.org/onlinepubs/9699919799/basede
On 2016-07-06 08:39, Andrei Borzenkov wrote:
Sent from my iPhone
On 6 July 2016, at 15:14, Austin S. Hemmelgarn wrote:
On 2016-07-06 07:55, Andrei Borzenkov wrote:
On Wed, Jul 6, 2016 at 2:45 PM, Austin S. Hemmelgarn
wrote:
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul
On 2016-07-06 07:55, Andrei Borzenkov wrote:
On Wed, Jul 6, 2016 at 2:45 PM, Austin S. Hemmelgarn
wrote:
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul 5, 2016 at 11:10 PM, Chris Murphy
wrote:
I started a systemd-devel@ thread since that's where most udev stuff
gets talked
On 2016-07-05 19:05, Chris Murphy wrote:
Related:
http://www.spinics.net/lists/raid/msg52880.html
Looks like there is some traction to figuring out what to do about
this, whether it's a udev rule or something that happens in the kernel
itself. Pretty much the only hardware setup unaffected by th
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul 5, 2016 at 11:10 PM, Chris Murphy wrote:
I started a systemd-devel@ thread since that's where most udev stuff
gets talked about.
https://lists.freedesktop.org/archives/systemd-devel/2016-July/037031.html
Before discussing how to imple
On 2016-07-05 05:28, Joerg Schilling wrote:
Andreas Dilger wrote:
I think in addition to fixing btrfs (because it needs to work with existing
tar/rsync/etc. tools) it makes sense to *also* fix the heuristics of tar
to handle this situation more robustly. One option is if st_blocks == 0 then
t
On 2016-06-29 14:12, Saint Germain wrote:
On Wed, 29 Jun 2016 11:28:24 -0600, Chris Murphy
wrote:
Already got a backup. I just really want to try to repair it (in
order to test BTRFS).
I don't know that this is a good test because I think the file system
has already been sufficient corrupte
On 2016-06-28 08:14, Steven Haigh wrote:
On 28/06/16 22:05, Austin S. Hemmelgarn wrote:
On 2016-06-27 17:57, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 10:17:04AM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 5:21 AM, Austin S. Hemmelgarn
wrote:
On 2016-06-25 12:44, Chris Murphy wrote
On 2016-06-27 17:57, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 10:17:04AM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 5:21 AM, Austin S. Hemmelgarn
wrote:
On 2016-06-25 12:44, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
wrote:
OK but hold on. During
On 2016-06-27 23:17, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 08:39:21PM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 7:52 PM, Zygo Blaxell
wrote:
On Mon, Jun 27, 2016 at 04:30:23PM -0600, Chris Murphy wrote:
Btrfs does have something of a work around for when things get slow,
and th
On 2016-06-27 13:29, Chris Murphy wrote:
On Sun, Jun 26, 2016 at 10:02 PM, Nick Austin wrote:
On Sun, Jun 26, 2016 at 8:57 PM, Nick Austin wrote:
sudo btrfs fi show /mnt/newdata
Label: '/var/data' uuid: e4a2eb77-956e-447a-875e-4f6595a5d3ec
Total devices 4 FS bytes used 8.07TiB
On 2016-06-25 12:44, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
wrote:
Well, the obvious major advantage that comes to mind for me to checksumming
parity is that it would let us scrub the parity data itself and verify it.
OK but hold on. During scrub, it
On 2016-06-24 13:52, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 11:21 AM, Andrei Borzenkov wrote:
On 24.06.2016 20:06, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 3:52 AM, Andrei Borzenkov wrote:
On Fri, Jun 24, 2016 at 11:50 AM, Hugo Mills wrote:
eta)data and RAID56 parity is not data.
On 2016-06-24 13:43, Steven Haigh wrote:
On 25/06/16 03:40, Austin S. Hemmelgarn wrote:
On 2016-06-24 13:05, Steven Haigh wrote:
On 25/06/16 02:59, ronnie sahlberg wrote:
What I have in mind here is that a file seems to get CREATED when I copy
the file that crashes the system in the target
On 2016-06-24 13:05, Steven Haigh wrote:
On 25/06/16 02:59, ronnie sahlberg wrote:
What I have in mind here is that a file seems to get CREATED when I copy
the file that crashes the system in the target directory. I'm thinking
if I 'cp -an source/ target/' that it will make this somewhat easier (
On 2016-06-24 06:59, Hugo Mills wrote:
On Fri, Jun 24, 2016 at 01:19:30PM +0300, Andrei Borzenkov wrote:
On Fri, Jun 24, 2016 at 1:16 PM, Hugo Mills wrote:
On Fri, Jun 24, 2016 at 12:52:21PM +0300, Andrei Borzenkov wrote:
On Fri, Jun 24, 2016 at 11:50 AM, Hugo Mills wrote:
On Fri, Jun 24, 2
On 2016-06-24 01:20, Chris Murphy wrote:
On Thu, Jun 23, 2016 at 8:07 PM, Zygo Blaxell
wrote:
With simple files changing one character with vi and gedit,
I get completely different logical and physical numbers with each
change, so it's clearly cowing the entire stripe (192KiB in my 3 dev
raid5
On 2016-06-23 13:44, Steven Haigh wrote:
Hi all,
Relative newbie to BTRFS, but long-time Linux user. I pass the full
disks from a Xen Dom0 -> guest DomU and run BTRFS within the DomU.
I've migrated my existing mdadm RAID6 to a BTRFS raid6 layout. I have a
drive that threw a few UNC errors durin
get
progress information.
Because it simply daemonizes prior to calling the balance ioctl, this
doesn't actually need any kernel support.
Signed-off-by: Austin S. Hemmelgarn
---
This works as is, but there are two specific things I would love to
eventually fix but don't have the t
On 2016-06-21 07:33, Hugo Mills wrote:
On Tue, Jun 21, 2016 at 07:24:24AM -0400, Austin S. Hemmelgarn wrote:
On 2016-06-21 04:55, Duncan wrote:
Dmitry Katsubo posted on Mon, 20 Jun 2016 18:33:54 +0200 as excerpted:
Dear btrfs community,
I have added a drive to existing raid1 btrfs volume and
On 2016-06-21 04:55, Duncan wrote:
Dmitry Katsubo posted on Mon, 20 Jun 2016 18:33:54 +0200 as excerpted:
Dear btrfs community,
I have added a drive to existing raid1 btrfs volume and decided to
perform balancing so that data distributes "fairly" among drives. I have
started "btrfs balance star
On 2016-06-10 18:39, Hans van Kranenburg wrote:
On 06/11/2016 12:10 AM, ojab // wrote:
On Fri, Jun 10, 2016 at 9:56 PM, Hans van Kranenburg
wrote:
You can work around it by either adding two disks (like Henk said),
or by
temporarily converting some chunks to single. Just enough to get some
fre
On 2016-06-12 06:35, boli wrote:
It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
These 90 hours seem like a rather long time, given that a rebalance/convert
from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub
takes about 7 hours (4-disk-ra
On 2016-06-10 15:26, Henk Slager wrote:
On Thu, Jun 9, 2016 at 3:54 PM, Brendan Hide wrote:
On 06/09/2016 03:07 PM, Austin S. Hemmelgarn wrote:
On 2016-06-09 08:34, Brendan Hide wrote:
Hey, all
I noticed this odd behaviour while migrating from a 1TB spindle to SSD
(in this case on a
On 2016-06-10 13:22, Adam Borowski wrote:
On Fri, Jun 10, 2016 at 01:12:42PM -0400, Austin S. Hemmelgarn wrote:
On 2016-06-10 12:50, Adam Borowski wrote:
And, as of coreutils 8.25, the default is no reflink, with "never" not being
recognized even as a way to avoid an alias. A
On 2016-06-10 12:50, Adam Borowski wrote:
On Fri, Jun 10, 2016 at 08:54:36AM -0700, Nikolaus Rath wrote:
On Jun 10 2016, "Austin S. Hemmelgarn" wrote:
JFYI, if you've using GNU cp, you can pass '--reflink=never' to avoid
it making reflinks.
I would have exp
On 2016-06-09 23:40, Nikolaus Rath wrote:
On May 11 2016, Nikolaus Rath wrote:
Hello,
I recently ran btrfsck on one of my file systems, and got the following
messages:
checking extents
checking free space cache
checking fs roots
root 5 inode 3149867 errors 400, nbytes wrong
root 5 inode 31502
On 2016-06-09 08:34, Brendan Hide wrote:
Hey, all
I noticed this odd behaviour while migrating from a 1TB spindle to SSD
(in this case on a LUKS-encrypted 200GB partition) - and am curious if
this behaviour I've noted below is expected or known. I figure it is a
bug. Depending on the situation,
On 2016-06-09 02:16, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 03 Jun 2016 10:21:12 -0400 as
excerpted:
As far as BTRFS raid10 mode in general, there are a few things that are
important to remember about it:
1. It stores exactly two copies of everything, any extra disks just add
to the
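The two-copies rule quoted above implies that usable capacity in btrfs raid10 is roughly half the raw total, no matter how many drives are in the pool. A minimal sketch of that arithmetic, with hypothetical drive sizes:

```shell
# btrfs raid10 keeps exactly two copies of everything, so usable space
# is about half the raw total; extra disks only add to that pool.
# Six 1 TB drives are an assumed example, not from the thread.
raw_total_tb=6
usable_tb=$((raw_total_tb / 2))
echo "usable: ${usable_tb} TB"   # prints: usable: 3 TB
```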
On 2016-06-07 09:52, Kai Hendry wrote:
On Tue, 7 Jun 2016, at 07:10 PM, Austin S. Hemmelgarn wrote:
Yes, although you would then need to be certain to run a balance with
-dconvert=raid1 -mconvert=raid1 to clean up anything that got allocated
before the new disk was added.
I don't
On 2016-06-07 00:02, Kai Hendry wrote:
Sorry I unsubscribed from linux-btrfs@vger.kernel.org since the traffic
was a bit too high for me.
Entirely understandable, although for what it's worth it's nowhere near
as busy as some other mailing lists (linux-ker...@vger.kernel.org for
example sees we
On 2016-06-06 01:44, Kai Hendry wrote:
Hi there,
I planned to remove one of my disks, so that I can take it from
Singapore to the UK and then re-establish another remote RAID1 store.
delete is an alias of remove, so I added a new disk (devid 3) and
proceeded to run:
btrfs device delete /dev/sd
On 2016-06-05 22:40, James Johnston wrote:
On 06/06/2016 at 01:47, Chris Murphy wrote:
On Sun, Jun 5, 2016 at 4:45 AM, Mladen Milinkovic wrote:
On 06/03/2016 04:05 PM, Chris Murphy wrote:
Make certain the kernel command timer value is greater than the driver
error recovery timeout. The former
On 2016-06-03 21:48, Chris Murphy wrote:
On Fri, Jun 3, 2016 at 6:48 PM, Nicholas D Steeves wrote:
On 3 June 2016 at 11:33, Austin S. Hemmelgarn wrote:
On 2016-06-03 10:11, Martin wrote:
Make certain the kernel command timer value is greater than the driver
error recovery timeout. The
On 2016-06-03 21:51, Christoph Anton Mitterer wrote:
On Fri, 2016-06-03 at 15:50 -0400, Austin S Hemmelgarn wrote:
There's no point in trying to do higher parity levels if we can't get
regular parity working correctly. Given the current state of things,
it might be better to brea
On 2016-06-05 16:31, Christoph Anton Mitterer wrote:
On Sun, 2016-06-05 at 09:36 -0600, Chris Murphy wrote:
That's ridiculous. It isn't incorrect to refer to only 2 copies as
raid1.
No, not if there are only two devices.
But obviously we're talking about how btrfs does RAID1, in which even
On 2016-06-03 13:38, Christoph Anton Mitterer wrote:
Hey..
Hm... so the overall btrfs state seems to be still pretty worrying,
doesn't it?
- RAID5/6 seems far from being stable or even usable,... not to talk
about higher parity levels, whose earlier posted patches (e.g.
http:/
On 2016-06-03 10:11, Martin wrote:
Make certain the kernel command timer value is greater than the driver
error recovery timeout. The former is found in sysfs, per block
device, the latter can be get and set with smartctl. Wrong
configuration is common (it's actually the default) when using
consu
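The check described in that message can be sketched as a small script. The values below are illustrative assumptions, not readings from real hardware; on a live system the kernel timeout comes from sysfs and the drive's limit from smartctl, as noted in the comments:

```shell
#!/bin/sh
# Sketch of the misconfiguration check described above, using example
# values. On real hardware you would read them instead:
#   cat /sys/block/sda/device/timeout   # kernel command timeout, seconds
#   smartctl -l scterc /dev/sda         # drive SCT ERC limit, deciseconds
kernel_timeout_s=30   # common kernel default, in seconds
erc_limit_ds=700      # hypothetical drive ERC limit, deciseconds (70 s)

erc_limit_s=$((erc_limit_ds / 10))
if [ "$kernel_timeout_s" -le "$erc_limit_s" ]; then
    echo "misconfigured: kernel gives up before the drive stops retrying"
else
    echo "ok: kernel timeout exceeds drive recovery time"
fi
```

With these example numbers the script reports the misconfigured case, since the 30 s kernel timeout is shorter than the 70 s drive recovery limit; consumer drives that lack configurable ERC effectively have a very long recovery time, which is why the thread suggests raising the kernel value.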
On 2016-06-03 09:31, Martin wrote:
In general, avoid Ubuntu LTS versions when dealing with BTRFS, as well as
most enterprise distros, they all tend to back-port patches instead of using
newer kernels, which means it's functionally impossible to provide good
support for them here (because we can't
On 2016-06-03 05:49, Martin wrote:
Hello,
We would like to use urBackup to make laptop backups, and they mention
btrfs as an option.
https://www.urbackup.org/administration_manual.html#x1-8400010.6
So if we go with btrfs and we need 100TB usable space in raid6, and to
have it replicated each n
On 2016-06-02 18:45, Henk Slager wrote:
On Thu, Jun 2, 2016 at 3:55 PM, MegaBrutal wrote:
2016-06-02 0:22 GMT+02:00 Henk Slager:
What is the kernel version used?
Is the fs on a mechanical disk or SSD?
What are the mount options?
How old is the fs?
Linux 4.4.0-22-generic (Ubuntu 16.04).
Mech
On 2016-06-01 14:30, MegaBrutal wrote:
Hi all,
I have a 20 GB file system and df says I have about 2,6 GB free space,
yet I can't do anything on the file system because I get "No space
left on device" errors. I read that balance may help to remedy the
situation, but it actually doesn't.
Some d
On 2016-05-26 18:12, Graham Cobb wrote:
On 19/05/16 02:33, Qu Wenruo wrote:
Graham Cobb wrote on 2016/05/18 14:29 +0100:
A while ago I had a "no space" problem (despite fi df, fi show and fi
usage all agreeing I had over 1TB free). But this email isn't about
that.
As part of fixing that pro
On 2016-05-29 16:45, Ferry Toth wrote:
On Sun, 29 May 2016 12:33:06 -0600, Chris Murphy wrote:
On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte
wrote:
On 05/29/16 19:53, Chris Murphy wrote:
But I'm skeptical of bcache using a hidden area historically for the
bootloader, to put its devic
On 2016-05-27 15:47, Nicholas D Steeves wrote:
On 16 May 2016 at 08:39, Austin S. Hemmelgarn wrote:
On 2016-05-16 08:14, Richard W.M. Jones wrote:
It would be really helpful if the btrfs tools had a machine-readable
output.
With machine-readable output, there'd be a flag which would
c
On 2016-05-25 07:11, David Sterba wrote:
On Wed, May 25, 2016 at 08:33:45AM +0800, Qu Wenruo wrote:
David Sterba wrote on 2016/05/24 11:51 +0200:
On Tue, May 24, 2016 at 08:31:01AM +0800, Qu Wenruo wrote:
This could be made static (with thread local storage) so the state does
not get regener
On 2016-05-25 07:07, Hugo Mills wrote:
On Wed, May 25, 2016 at 04:00:00AM -0700, H. Peter Anvin wrote:
On 05/25/16 02:29, Hugo Mills wrote:
On Wed, May 25, 2016 at 01:58:15AM -0700, H. Peter Anvin wrote:
Hi,
I'm looking at using a btrfs with snapshots to implement a generational
backup capaci
On 2016-05-25 04:58, H. Peter Anvin wrote:
Hi,
I'm looking at using a btrfs with snapshots to implement a generational
backup capacity. However, doing it the naïve way would have the side
effect that for a file that has been partially modified, after
snapshotting the file would be written with
On 2016-05-20 18:26, Henk Slager wrote:
Yes, sorry, I took some shortcut in the discussion and jumped to a
method for avoiding this 0.5-2% slowdown that you mention. (Or a
kernel crashing in bcache code due to corrupt SB on a backing device
or corrupted caching device contents).
I am actually bit
On 2016-05-20 13:02, Ferry Toth wrote:
We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
partitions are in the same pool, which is in btrfs RAID10 format. /boot
is in subvolume @boot.
If you have GRUB installed
On 2016-05-19 19:23, Henk Slager wrote:
On Thu, May 19, 2016 at 8:51 PM, Austin S. Hemmelgarn
wrote:
On 2016-05-19 14:09, Kai Krakow wrote:
On Wed, 18 May 2016 22:44:55 +0000 (UTC),
Ferry Toth wrote:
On Tue, 17 May 2016 20:33:35 +0200, Kai Krakow wrote:
Bcache is actually low
On 2016-05-19 17:01, Kai Krakow wrote:
On Thu, 19 May 2016 14:51:01 -0400,
"Austin S. Hemmelgarn" wrote:
For a point of reference, I've
got a pair of 250GB Crucial MX100's (they cost less than 0.50 USD per
GB when I got them and provide essentially the same power-loss
On 2016-05-19 14:09, Kai Krakow wrote:
On Wed, 18 May 2016 22:44:55 +0000 (UTC),
Ferry Toth wrote:
On Tue, 17 May 2016 20:33:35 +0200, Kai Krakow wrote:
On Tue, 17 May 2016 07:32:11 -0400, "Austin S. Hemmelgarn"
wrote:
On 2016-05-17 02:27, Ferry
On 2016-05-18 07:24, Austin S. Hemmelgarn wrote:
On 2016-05-17 13:30, Josef Bacik wrote:
Our enospc flushing sucks. It is born from a time where we were early
enospc'ing constantly because multiple threads would race in for the same
reservation and randomly starve other ones out. So I ca
for about 16 hours now, nothing is breaking, and a number of the
tests are actually completing marginally faster, so you can add:
Tested-by: Austin S. Hemmelgarn
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...
On 2016-05-17 11:45, Peter Kese wrote:
I've been using btrfs on my main system for a few months. I know btrfs
is a little bit beta, but I thought not using any fancy features like
quotas, snapshotting, raid, etc. would keep me on the safe side.
Then I tried a software upgrade (Ubuntu 15.10 -> 16
On 2016-05-17 08:23, David Sterba wrote:
On Tue, May 17, 2016 at 07:14:12AM -0400, Austin S. Hemmelgarn wrote:
By this example I don't mean that JSON has to be the format -- in fact
it's a terrible format with all sorts of problems -- any format which
is parseable with C libraries wo
On 2016-05-17 07:24, Alex Lyakas wrote:
RFC: This patch not for merging, but only for review and discussion.
When mounting, we consider only the primary superblock on each device.
But when writing the superblocks, we might silently ignore errors
from the primary superblock, if we succeeded to wr
On 2016-05-17 02:27, Ferry Toth wrote:
On Mon, 16 May 2016 01:05:24 +0200, Kai Krakow wrote:
On Sun, 15 May 2016 21:11:11 +0000 (UTC),
Duncan <1i5t5.dun...@cox.net> wrote:
Ferry Toth posted on Sun, 15 May 2016 12:12:09 + as excerpted:
You can go there with only one additional HDD
On 2016-05-16 23:42, Chris Murphy wrote:
On Mon, May 16, 2016 at 5:44 PM, Richard A. Lochner wrote:
Chris,
It has actually happened to me three times that I know of in ~7mos.,
but your point about the "larger footprint" for data corruption is a
good one. No doubt I have silently experienced t
On 2016-05-17 05:33, David Sterba wrote:
On Mon, May 16, 2016 at 01:14:56PM +0100, Richard W.M. Jones wrote:
I don't have time to implement this right now, so I'm just posting
this as a suggestion/request ...
Neither do have I, but agree with the idea and the proposed way. Here
are my notes
ht
On 2016-05-16 08:14, Richard W.M. Jones wrote:
I don't have time to implement this right now, so I'm just posting
this as a suggestion/request ...
It would be really helpful if the btrfs tools had a machine-readable
output.
Libguestfs parses btrfs tools output in a number of places, eg:
https:/
On 2016-05-16 07:34, Andrei Borzenkov wrote:
On 16.05.2016 14:17, Austin S. Hemmelgarn wrote:
On 2016-05-13 17:35, Chris Murphy wrote:
On Fri, May 13, 2016 at 9:28 AM, Nikolaus Rath wrote:
On May 13 2016, Duncan <1i5t5.dun...@cox.net> wrote:
Because btrfs can be multi-device, it needs so
On 2016-05-16 02:20, Qu Wenruo wrote:
Duncan wrote on 2016/05/16 05:59 +0000:
Qu Wenruo posted on Mon, 16 May 2016 10:24:23 +0800 as excerpted:
IIRC clear_cache option is fs level option.
So the first mount with clear_cache, then all subvolume will have
clear_cache.
Question: Does clear_c
On 2016-05-16 02:07, Chris Murphy wrote:
Current hypothesis
"I suspected, and I still suspect that the error occurred upon a
metadata update that corrupted the checksum for the file, probably due
to silent memory corruption. If the checksum was silently corrupted,
it would be simply written to
On 2016-05-15 08:12, Ferry Toth wrote:
Is there anything going on in this area?
We have btrfs in RAID10 using 4 HDD's for many years now with a rotating
scheme of snapshots for easy backup. <10% files (bytes) change between
oldest snapshot and the current state.
However, the filesystem seems to
On 2016-05-13 17:35, Chris Murphy wrote:
On Fri, May 13, 2016 at 9:28 AM, Nikolaus Rath wrote:
On May 13 2016, Duncan <1i5t5.dun...@cox.net> wrote:
Because btrfs can be multi-device, it needs some way to track which
devices belong to each filesystem, and it uses filesystem UUID for this
purpos
On 2016-05-13 12:28, Goffredo Baroncelli wrote:
On 2016-05-11 21:26, Austin S. Hemmelgarn wrote:
(although it can't tell the difference between a corrupted checksum and a
corrupted block of data).
I don't think so. The data checksums are stored in metadata blocks, and as
meta
On 2016-05-12 16:54, Mark Fasheh wrote:
On Wed, May 11, 2016 at 07:36:59PM +0200, David Sterba wrote:
On Tue, May 10, 2016 at 07:52:11PM -0700, Mark Fasheh wrote:
Taking your history with qgroups out of this btw, my opinion does not
change.
With respect to in-memory only dedupe, it is my hones
On 2016-05-13 07:07, Niccolò Belli wrote:
On Thursday 12 May 2016 17:43:38 CEST, Austin S. Hemmelgarn wrote:
That's probably a good indication of the CPU and the MB being OK, but
not necessarily the RAM. There's two other possible options for
testing the RAM that haven't bee
On 2016-05-12 13:49, Richard A. Lochner wrote:
Austin,
I rebooted the computer and reran the scrub to no avail. The error is
consistent.
The reason I brought this question to the mailing list is because it
seemed like a situation that might be of interest to the developers.
Perhaps, there mig
On 2016-05-12 10:35, Niccolò Belli wrote:
On Monday 9 May 2016 18:29:41 CEST, Zygo Blaxell wrote:
Did you also check the data matches the backup? btrfs check will only
look at the metadata, which is 0.1% of what you've copied. From what
you've written, there should be a lot of errors in the
On 2016-05-11 14:36, Richard Lochner wrote:
Hello,
I have encountered a data corruption error with BTRFS which may or may
not be of interest to your developers.
The problem is that an unmodified file on a RAID-1 volume that had
been scrubbed successfully is now corrupt. The details follow.
Th
On 2016-05-09 12:29, Zygo Blaxell wrote:
On Mon, May 09, 2016 at 04:53:13PM +0200, Niccolò Belli wrote:
While trying to find a common denominator for my issue I did lots of backups
of /dev/mapper/cryptroot and I restored them into /dev/mapper/cryptroot
dozens of times (triggering a 150GB+ random
On 2016-05-07 12:11, Niccolò Belli wrote:
On 2016-05-07 17:58, Clemens Eisserer wrote:
Hi Niccolo,
btrfs + dmcrypt + compress=lzo + autodefrag = corruption at first boot
Just to be curious - couldn't it be a hardware issue? I use almost the
same setup (compress-force=lzo instead of compr
On 2016-05-06 07:48, Niccolò Belli wrote:
The following are my subvolumes:
$ sudo btrfs subvol list /
[sudo] password for niko: ID 257 gen 1040 top level 5 path @
ID 258 gen 1040 top level 5 path @home
ID 270 gen 889 top level 257 path var/cache/pacman/pkg
ID 271 gen 15 top level 257 path var/abs
On 2016-05-06 05:08, David Sterba wrote:
On Thu, May 05, 2016 at 07:23:11AM -0400, Austin S. Hemmelgarn wrote:
On 2016-05-04 19:18, Dmitry Katsubo wrote:
Dear btrfs community,
I am interested in spare volumes and hot auto-replacement feature [1]. I have a
couple of questions:
* Which kernel
On 2016-05-04 19:18, Dmitry Katsubo wrote:
Dear btrfs community,
I am interested in spare volumes and hot auto-replacement feature [1]. I have a
couple of questions:
* Which kernel version this feature will be included?
Probably 4.7. I would not suggest using it in production for at least a
On 2016-05-04 14:07, Chris Murphy wrote:
On Wed, May 4, 2016 at 7:52 AM, Niccolò Belli wrote:
I tried to add rootflags=noatime,compress=lzo,discard,autodefrag to
GRUB_CMDLINE_LINUX in /etc/default/grub as you suggested but my system
didn't manage to boot, probably because grub automatically a
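The configuration attempted there looks roughly like the fragment below. The flag list comes from the quoted message; the surrounding file contents and the regeneration step are assumptions about a typical Debian/Ubuntu setup, and, as the poster notes, this particular combination failed to boot for them:

```shell
# /etc/default/grub (fragment) -- rootflags passed to the kernel at boot.
GRUB_CMDLINE_LINUX="rootflags=noatime,compress=lzo,discard,autodefrag"
```

After editing, the grub.cfg has to be regenerated (e.g. with update-grub on Debian/Ubuntu) for the change to take effect.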
On 2016-05-01 19:49, Duncan wrote:
Kai Krakow posted on Sun, 01 May 2016 18:54:18 +0200 as excerpted:
It affects all file systems. The "btrfs fi sync" is used to finish my
rsync backup and ensure everything is written before I'm trying to
unmount it or the system goes back to sleep.
"df" and f
On 2016-05-01 08:47, Duncan wrote:
Meanwhile, what kernel IO scheduler do you use (deadline, noop,
cfq,... cfq is the normal default)? Do you use either normal
process nice/priority or ionice to control the rsync? What
about cgroups?
CFQ is the default on many systems, unless you are using a ne
On 2016-04-27 16:18, Juan Alberto Cirez wrote:
Quick question: Suppose I have n-number of storage pods (physical
servers with n-number of physical hdds). The end deployment will be
btrfs at the brick/block level with a distributed file system on top.
Keeping in mind that my overriding goal is to
On 2016-04-27 22:55, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 8:51 PM, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 2:18 PM, Juan Alberto Cirez
wrote:
Quick question: Suppose I have n-number of storage pods (physical
servers with n-number of physical hdds). The end deployment will be
btrfs
On 2016-04-27 19:19, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 5:22 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-26 20:58, Chris Murphy wrote:
On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez
wrote:
With GlusterFS as a distributed volume, the files are already spread
among the servers
On 2016-04-26 20:58, Chris Murphy wrote:
On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez
wrote:
With GlusterFS as a distributed volume, the files are already spread
among the servers causing file I/O to be spread fairly evenly among
them as well, thus probably providing the benefit one mig
ay the existing code repairs them, all while using
measurably less memory as advertised, so you can add:
Tested-by: Austin S. Hemmelgarn
o performance, you may want to compare BTRFS raid10
mode to BTRFS raid1 on top of two LVM RAID0 volumes. I find this tends
to get better overall performance with no difference in data safety,
because BTRFS still has a pretty brain-dead I/O scheduler in the
multi-device code.
On Tue, Apr 26, 2016
.
On Tue, Apr 26, 2016 at 5:11 AM, Austin S. Hemmelgarn
wrote:
On 2016-04-26 06:50, Juan Alberto Cirez wrote:
Thank you guys so very kindly for all your help and taking the time to
answer my question. I have been reading the wiki and online use cases
and otherwise delving deeper into
On 2016-04-26 06:50, Juan Alberto Cirez wrote:
Thank you guys so very kindly for all your help and taking the time to
answer my question. I have been reading the wiki and online use cases
and otherwise delving deeper into the btrfs architecture.
I am managing a 520TB storage pool spread across 1
On 2016-04-25 08:43, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Apr 2016 07:18:10 -0400 as
excerpted:
On 2016-04-23 01:38, Duncan wrote:
And again with snapshotting operations. Making a snapshot is normally
nearly instantaneous, but there's a scaling issue if you have too man
On 2016-04-23 01:38, Duncan wrote:
Juan Alberto Cirez posted on Fri, 22 Apr 2016 14:36:44 -0600 as excerpted:
Good morning,
I am new to this list and to btrfs in general. I have a quick question:
Can I add a new device to the pool while the btrfs filesystem balance
command is running on the dri
On 2016-04-21 02:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be better to post the dmesg for better debug.
So here we. I did the same test again. Here is a full log of what i
did. It seems to be mean like
On 2016-04-20 16:23, Konstantin Svist wrote:
Pretty much all commands print out the usage message when no device is
specified:
[root@host ~]# btrfs scrub start
btrfs scrub start: too few arguments
usage: btrfs scrub start [-BdqrRf] [-c ioprio_class -n ioprio_classdata]
|
...
However, balance do
On 2016-04-18 16:34, Nicholas D Steeves wrote:
On 18 April 2016 at 11:52, Austin S. Hemmelgarn wrote:
On 2016-04-18 11:39, Chris Murphy wrote:
On Mon, Apr 18, 2016 at 9:15 AM, Austin S. Hemmelgarn
wrote:
Like I said in one of my earlier e-mails though, these kind of limitations
are part of