On 2017-12-05 14:09, Goffredo Baroncelli wrote:
On 12/05/2017 07:46 PM, Graham Cobb wrote:
On 05/12/17 18:01, Goffredo Baroncelli wrote:
On 12/05/2017 04:42 PM, Graham Cobb wrote:
[]
Then no impact to kernel, all complex work is done in user space.
Exactly how hard is it to just check ow
On 2017-12-05 17:08, Goffredo Baroncelli wrote:
On 12/05/2017 09:17 PM, Austin S. Hemmelgarn wrote:
On 2017-12-05 14:09, Goffredo Baroncelli wrote:
On 12/05/2017 07:46 PM, Graham Cobb wrote:
On 05/12/17 18:01, Goffredo Baroncelli wrote:
On 12/05/2017 04:42 PM, Graham Cobb wrote
On 2017-12-05 23:52, Misono, Tomohiro wrote:
On 2017/12/05 21:41, Austin S. Hemmelgarn wrote:
On 2017-12-05 03:43, Qu Wenruo wrote:
On 2017-12-05 16:25, Misono, Tomohiro wrote:
Hello all,
I want to address some issues of subvolume usability for a normal user.
i.e. a user can create
On 2017-12-07 06:55, Duncan wrote:
Misono, Tomohiro posted on Thu, 07 Dec 2017 16:15:47 +0900 as excerpted:
On 2017/12/07 11:56, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 06 Dec 2017 07:39:56 -0500 as
excerpted:
Somewhat OT, but the only operation that's remotely 'i
On 2017-12-07 09:36, Anand Jain wrote:
Add the ability to deregister one or all devices. I have named this sub cmd
as deregister, but I am open to your suggestions.
Being a bit picky here, but from the perspective of a native speaker of
American English, I would say that 'deregister' sounds rather syn
On 2017-12-08 02:57, Anand Jain wrote:
-EXPERIMENTAL-
As of now, when the primary SB fails we won't self-heal and would fail the
mount; this is an experimental patch which asks why not go and read the backup
copy.
I like the concept, and actually think this should be default behavior
on a filesystem that's
On 2017-12-08 07:59, Qu Wenruo wrote:
On 2017-12-08 20:51, Austin S. Hemmelgarn wrote:
On 2017-12-08 02:57, Anand Jain wrote:
-EXPERIMENTAL-
As of now, when the primary SB fails we won't self-heal and would fail the
mount; this is an experimental patch which asks why not go and read the backup
On 2017-12-07 21:17, Duncan wrote:
Anand Jain posted on Fri, 08 Dec 2017 08:51:43 +0800 as excerpted:
On 12/07/2017 10:52 PM, Austin S. Hemmelgarn wrote:
On 2017-12-07 09:36, Anand Jain wrote:
Add the ability to deregister one or all devices. I have named this sub cmd
as deregister, but I am open
On 2017-12-12 11:24, Hugo Mills wrote:
On Tue, Dec 12, 2017 at 04:18:09PM +, Neal Becker wrote:
Is it possible to check while it is mounted?
Certainly not while mounted read-write. While mounted read-only --
I'm not certain. Possibly.
In theory, it is possible, but I think that the saf
On 2017-12-17 08:52, Anand Jain wrote:
In two device configs of RAID1/RAID5 where one device can be missing
in the degraded mount, or in the configs such as four devices RAID6
where two devices can be missing, in these type of configs it can form
two separate set of devices where each of the set
On 2017-12-17 10:48, Peter Grandi wrote:
"Duncan"'s reply is slightly optimistic in parts, so some
further information...
[ ... ]
Basically, at this point btrfs doesn't have "dynamic" device
handling. That is, if a device disappears, it doesn't know
it.
That's just the consequence of what i
On 2017-12-16 14:50, Dark Penguin wrote:
Could someone please point me towards some reading about how btrfs handles
multiple devices? Namely, kicking faulty devices and re-adding them.
I've been using btrfs on single devices for a while, but now I want to
start using it in raid1 mode. I booted into
On 2017-12-18 09:39, Anand Jain wrote:
Now the procedure to assemble the disks would be to continue to mount
the good set first without the device set on which new data can be
ignored, and later run btrfs device scan to bring in the missing device
and complete the RAID group which then shall
On 2017-12-18 14:43, Tomasz Pala wrote:
On Mon, Dec 18, 2017 at 08:06:57 -0500, Austin S. Hemmelgarn wrote:
The fact is, the only cases where this is really an issue is if you've
either got intermittently bad hardware, or are dealing with external
Well, the RAID1+ is all about the fa
On 2017-12-18 17:01, Peter Grandi wrote:
The fact is, the only cases where this is really an issue is
if you've either got intermittently bad hardware, or are
dealing with external
Well, the RAID1+ is all about the failing hardware.
storage devices. For the majority of people who are using
For those who are interested in better monitoring, netdata [1] [2] has
just merged support for low-level usage monitoring of BTRFS volumes. It
provides graphs for both physical on-disk usage by chunk type (both
allocated and used) and per-chunk type graphs of usage as reported by
`btrfs filesy
On 2017-12-19 09:46, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 07:25:49 -0500, Austin S. Hemmelgarn wrote:
Well, the RAID1+ is all about the failing hardware.
About catastrophically failing hardware, not intermittent failure.
It shouldn't matter - as long as disk failing once is kicke
On 2017-12-19 12:56, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 11:35:02 -0500, Austin S. Hemmelgarn wrote:
2. printed on screen when creating/converting "RAID1" profile (by btrfs tools),
I don't agree on this one. It is in no way unreasonable to expect that
someo
On 2017-12-19 15:41, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 12:35:20 -0700, Chris Murphy wrote:
with a read only file system. Another reason is the kernel code and
udev rule for device "readiness" means the volume is not "ready" until
all member devices are present. And while the volume is
On 2017-12-19 16:58, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 15:11:22 -0500, Austin S. Hemmelgarn wrote:
Except the systems running on those ancient kernel versions are not
necessarily using a recent version of btrfs-progs.
Still much easier to update userspace tools than the kernel
On 2017-12-19 18:53, Chris Murphy wrote:
On Tue, Dec 19, 2017 at 1:11 PM, Austin S. Hemmelgarn
wrote:
On 2017-12-19 12:56, Tomasz Pala wrote:
BTRFS lacks all of these - there are major functional changes in current
kernels and it reaches far beyond LTS. All the knowledge YOU have here,
on
On 2017-12-19 17:23, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 15:47:03 -0500, Austin S. Hemmelgarn wrote:
Something like this? I hit such a problem a few months ago; my solution was
accepted upstream:
https://github.com/systemd/systemd/commit/0e8856d25ab71764a279c2377ae593c0f2460d8f
Rationale is in
On 2017-12-20 11:53, Andrei Borzenkov wrote:
19.12.2017 22:47, Chris Murphy wrote:
BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
this distro to research every component used?
As far as I'm aware, only Btrfs single device stuff is "supported".
The multiple device st
On 2017-12-20 15:07, Chris Murphy wrote:
On Wed, Dec 20, 2017 at 1:02 PM, Chris Murphy wrote:
On Wed, Dec 20, 2017 at 9:53 AM, Andrei Borzenkov wrote:
19.12.2017 22:47, Chris Murphy wrote:
BTW, doesn't SuSE use btrfs by default? Would you expect everyone using
this distro to research ever
On 2017-12-21 06:44, Andrei Borzenkov wrote:
On Tue, Dec 19, 2017 at 11:47 PM, Austin S. Hemmelgarn
wrote:
On 2017-12-19 15:41, Tomasz Pala wrote:
On Tue, Dec 19, 2017 at 12:35:20 -0700, Chris Murphy wrote:
with a read only file system. Another reason is the kernel code and
udev rule for
So, for a while now I've been recommending small filtered balances to
people as part of regular maintenance for BTRFS filesystems under the
logic that it does help in some cases and can't really hurt (and if done
right, is really inexpensive in terms of resources). This ended up
integrated par
On 2018-01-08 11:20, ein wrote:
On 01/08/2018 04:55 PM, Austin S. Hemmelgarn wrote:
[...]
And here's the FAQ entry:
Q: Do I need to run a balance regularly?
A: In general usage, no. A full unfiltered balance typically takes a
long time, and will rewrite huge amounts of data unnecess
On 2018-01-08 13:17, Graham Cobb wrote:
On 08/01/18 16:34, Austin S. Hemmelgarn wrote:
Ideally, I think it should be as generic as reasonably possible,
possibly something along the lines of:
A: While not strictly necessary, running regular filtered balances (for
example `btrfs balance start
On 2018-01-08 16:43, Tom Worster wrote:
On 01/08/2018 04:55 PM, Austin S. Hemmelgarn wrote:
On 2018-01-08 11:20, ein wrote:
> On 01/08/2018 04:55 PM, Austin S. Hemmelgarn wrote:
>
> > [...]
> >
> > And here's the FAQ entry:
> >
> > Q: Do I need
On 2018-01-09 03:33, Marat Khalili wrote:
On 08/01/18 19:34, Austin S. Hemmelgarn wrote:
A: While not strictly necessary, running regular filtered balances
(for example `btrfs balance start -dusage=50 -dlimit=2 -musage=50
-mlimit=4`, see `man btrfs-balance` for more info on what the options
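To make the "regular" part concrete, one way to schedule the filtered balance quoted above is a systemd timer. This is only a sketch under assumptions not in the thread: the unit names and the /mnt/data mount point are hypothetical, and weekly is just one plausible cadence.

```ini
# /etc/systemd/system/btrfs-balance.service  (hypothetical unit name)
[Unit]
Description=Filtered btrfs balance on /mnt/data

[Service]
Type=oneshot
# The filtered balance from the FAQ text: only rewrite chunks that are
# at most 50% used, and at most 2 data / 4 metadata chunks per run.
ExecStart=/usr/bin/btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 /mnt/data

# /etc/systemd/system/btrfs-balance.timer  (hypothetical unit name)
[Unit]
Description=Run a filtered btrfs balance weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

The usage/limit filters are what keep each run cheap: only mostly-empty chunks are rewritten, and the limit caps how much work a single run can do.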
On 2018-01-09 23:38, Duncan wrote:
Graham Cobb posted on Mon, 08 Jan 2018 18:17:13 + as excerpted:
On 08/01/18 16:34, Austin S. Hemmelgarn wrote:
Ideally, I think it should be as generic as reasonably possible,
possibly something along the lines of:
A: While not strictly necessary
On 2018-01-10 11:30, Tom Worster wrote:
On 9 Jan 2018, at 22:49, Duncan wrote:
AFAIK, such corruption reports re balance aren't really balance, per se,
at all.
Instead, what I've seen in nearly all cases is a number of filesystem
maintenance commands involving heavy I/O colliding, that is, bei
On 2018-01-10 16:37, waxhead wrote:
Austin S. Hemmelgarn wrote:
So, for a while now I've been recommending small filtered balances to
people as part of regular maintenance for BTRFS filesystems under the
logic that it does help in some cases and can't really hurt (and if done
right,
On 2018-01-10 15:44, Timofey Titovets wrote:
2018-01-10 21:33 GMT+03:00 Tom Worster :
On 10 Jan 2018, at 12:01, Austin S. Hemmelgarn wrote:
On 2018-01-10 11:30, Tom Worster wrote:
Also, for future reference, the term we typically use is ENOSPC, as that's
the symbolic name for the error
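As a quick illustration of what the symbolic name refers to, the errno value behind it can be inspected from Python (a trivial sketch):

```python
import errno
import os

# ENOSPC is the symbolic (POSIX) name for the "No space left on device"
# error that a failed btrfs chunk allocation ultimately surfaces as.
print(errno.ENOSPC)               # 28 on Linux
print(os.strerror(errno.ENOSPC))  # "No space left on device"
```

The btrfs-specific twist discussed in the thread is that this error can appear while `df` still shows free space, because it is chunk allocation, not raw disk space, that ran out.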
On 2018-01-08 10:55, Austin S. Hemmelgarn wrote:
So, for a while now I've been recommending small filtered balances to
people as part of regular maintenance for BTRFS filesystems under the
logic that it does help in some cases and can't really hurt (and if done
right, is really inex
On 2018-01-12 14:26, Tom Worster wrote:
On 12 Jan 2018, at 13:24, Austin S. Hemmelgarn wrote:
OK, I've gotten a lot of good feedback on this, and the general
consensus seems to be:
* If we're going to recommend regular balance, we should explain how
it actually helps things.
*
On 2018-01-13 17:09, Chris Murphy wrote:
On Fri, Jan 12, 2018 at 11:24 AM, Austin S. Hemmelgarn
wrote:
To that end, I propose the following text for the FAQ:
Q: Do I need to run a balance regularly?
A: While not strictly necessary for normal operations, running a filtered
balance regularly
On 2018-01-16 01:45, Chris Murphy wrote:
On Mon, Jan 15, 2018 at 11:23 AM, Tom Worster wrote:
On 13 Jan 2018, at 17:09, Chris Murphy wrote:
On Fri, Jan 12, 2018 at 11:24 AM, Austin S. Hemmelgarn
wrote:
To that end, I propose the following text for the FAQ:
Q: Do I need to run a balance
On 2018-01-22 21:35, Chris Murphy wrote:
On Mon, Jan 22, 2018 at 2:06 PM, Claes Fransson
wrote:
Hi!
I really like the features of BTRFS, especially deduplication,
snapshotting and checksumming. However, when using it on my laptop the
last couple of years, it has become corrupted a lot of times
On 2018-01-23 19:44, Chris Murphy wrote:
On Tue, Jan 23, 2018 at 5:51 AM, Austin S. Hemmelgarn
wrote:
This is extremely important to understand. BTRFS and ZFS are essentially
the only filesystems available on Linux that actually validate things enough
to notice this reliably (ReFS on Windows
On 2018-01-24 18:54, Chris Murphy wrote:
On Wed, Jan 24, 2018 at 5:30 AM, Austin S. Hemmelgarn
wrote:
APFS is really vague on this front, it may be checksumming metadata,
it's not checksumming data and with no option to. Apple proposes their
branded storage devices do not return bogus
On 2018-01-26 09:02, Christophe Yayon wrote:
Hi all,
I don't know if this is the right place to ask. Sorry if it's not...
No, it's just fine to ask here. Questions like this are part of why the
mailing list exists.
Just a little question about "degraded" mount option. Is it a good idea to add
this
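If the idea is to set the option permanently (for example via fstab, which is an assumption here, as is the UUID and mount point), the entry in question would look like this sketch:

```
# /etc/fstab fragment (hypothetical UUID and mount point)
# "degraded" lets the kernel mount the volume even when a member device
# of a RAID1 profile is missing -- convenient for unattended boots, but
# it can hide a failed disk until real damage is done.
UUID=01234567-89ab-cdef-0123-456789abcdef  /data  btrfs  defaults,degraded  0  0
```

Whether making this permanent is wise is exactly what the replies below weigh in on.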
On 2018-01-26 09:47, Christophe Yayon wrote:
Hi Austin,
Thanks for your answer. It was my opinion too as the "degraded" seems to be flagged as
"Mostly OK" on btrfs wiki status page. I am running Archlinux with recent kernel on all
my servers (because of use of btrfs as my main filesystem, i ne
On 2018-01-29 06:24, Adam Borowski wrote:
On Mon, Jan 29, 2018 at 09:54:04AM +0100, Tomasz Pala wrote:
it is a btrfs drawback that it doesn't provide anything else except for this
IOCTL with its logic
How can it provide you with something it doesn't yet have? If you want the
information, call m
On 2018-01-27 17:42, Tomasz Pala wrote:
On Sat, Jan 27, 2018 at 14:26:41 +0100, Adam Borowski wrote:
It's quite obvious who's the culprit: every single remaining rc system
manages to mount degraded btrfs without problems. They just don't try to
outsmart the kernel.
Yes. They are stupid enoug
On 2018-01-29 12:58, Andrei Borzenkov wrote:
29.01.2018 14:24, Adam Borowski wrote:
...
So any event (the user's request) has already happened. An rc system, of
which systemd is one, knows whether we reached the "want root filesystem" or
"want secondary filesystems" stage. Once you're there, y
On 2018-01-29 16:54, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2018-01-29 12:58, Andrei Borzenkov wrote:
29.01.2018 14:24, Adam Borowski wrote:
...
So any event (the user's request) has already happened. An rc
system, of
which systemd is one, knows whether we reached the "
On 2018-01-30 08:46, Tomasz Pala wrote:
On Mon, Jan 29, 2018 at 08:05:42 -0500, Austin S. Hemmelgarn wrote:
Seriously, _THERE IS A RACE CONDITION IN SYSTEMD'S CURRENT HANDLING OF
THIS_. It's functionally no different than prefacing an attempt to send
a signal to a process by check
On 2018-01-30 10:09, Tomasz Pala wrote:
On Mon, Jan 29, 2018 at 08:42:32 -0500, Austin S. Hemmelgarn wrote:
Yes. They are stupid enough to fail miserably with any more complicated
setups, like stacking volume managers, crypto layer, network attached
storage etc.
I think you mean any setup
On 2018-01-30 14:50, Tomasz Pala wrote:
On Tue, Jan 30, 2018 at 08:46:32 -0500, Austin S. Hemmelgarn wrote:
I personally think the degraded mount option is a mistake as this
assumes that a lightly degraded system is not able to work, which is false.
If the system can mount to some working state
On 2018-01-31 09:52, Peter Becker wrote:
This is all clear. My question refers to "use the lower devid disk
containing the stripe"
2018-01-31 10:01 GMT+01:00 Anand Jain :
When a stripe is not present on the read optimized disk it will just
use the lower devid disk containing the stripe (in
On 2019-10-10 17:21, Ulli Horlacher wrote:
On Thu 2019-10-10 (20:47), Kai Krakow wrote:
I run into the problem that "rsync -ax" sees btrfs subvolumes as "other
filesystems" and ignores them.
I worked around it by mounting the btrfs-pool at a special directory:
mount -o subvolid=0 /dev/disk/b
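The reason `rsync -x` skips subvolumes is the st_dev heuristic: rsync compares the device number of each directory with that of its starting point and stops at any boundary, and btrfs subvolumes report distinct anonymous device numbers, so they look like separate filesystems. A minimal sketch of the same check (the function name is mine; on a non-btrfs filesystem the two values simply match):

```python
import os
import tempfile

def crosses_filesystem(parent: str, child: str) -> bool:
    """The heuristic tools like rsync -x use: a differing st_dev means
    the child sits on another (apparent) filesystem -- which is what
    every btrfs subvolume reports."""
    return os.stat(parent).st_dev != os.stat(child).st_dev

with tempfile.TemporaryDirectory() as d:
    sub = os.path.join(d, "subdir")
    os.mkdir(sub)
    # A plain directory shares its parent's device number ...
    print(crosses_filesystem(d, sub))  # False here
    # ... whereas a "btrfs subvolume create" at the same spot would make
    # this return True, so rsync -ax would skip it.
```

Mounting the pool at subvolid=0/5 and rsyncing each subvolume path explicitly, as in the quoted workaround, sidesteps the heuristic entirely.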
On 2019-10-21 06:47, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]
On Sun, Oct 20, 2019 at 12:28, Qu Wenruo wrote:
Question: Can I work with the mounted backup image on the machine that
also contains the original disc? I vaguely recall something about
btrfs really not l
On 2019-10-21 09:02, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]
On Mon, Oct 21, 2019 at 13:47, Austin S. Hemmelgarn wrote:
I've [worked with fs clones] like this dozens of times on single-device volumes
with exactly zero issues.
Thank you, I
On 2019-10-22 06:01, Qu Wenruo wrote:
On 2019/10/22 5:47 PM, Tobias Reinhard wrote:
Hi,
I noticed that if you punch a hole in the middle of a file the available
filesystem space seems not to increase.
Kernel is 5.2.11
To reproduce:
mkfs.btrfs /dev/loop1 -f
btrfs-progs v4.15.1
See http:/
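For anyone wanting to poke at the mechanism in the report, the hole punching itself is the fallocate(2) call with FALLOC_FL_PUNCH_HOLE. A sketch via ctypes (the constants are from linux/falloc.h; the helper name is mine, and the underlying filesystem must support hole punching, e.g. btrfs or ext4):

```python
import ctypes
import ctypes.util
import errno
import os
import tempfile

# From linux/falloc.h
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def punch_hole(fd: int, offset: int, length: int) -> bool:
    """Deallocate a byte range in the middle of a file; the logical
    file size is kept (PUNCH_HOLE must be paired with KEEP_SIZE)."""
    ret = libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         ctypes.c_longlong(offset), ctypes.c_longlong(length))
    if ret != 0:
        err = ctypes.get_errno()
        if err == errno.EOPNOTSUPP:
            return False  # filesystem does not support hole punching
        raise OSError(err, os.strerror(err))
    return True

with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * (1 << 20))  # 1 MiB of data
    f.flush()
    size_before = os.fstat(f.fileno()).st_size
    punch_hole(f.fileno(), 0, 1 << 19)  # hole over the first 512 KiB
    # st_size is unchanged; only the on-disk allocation (st_blocks)
    # should shrink -- which is the accounting the report is about.
    assert os.fstat(f.fileno()).st_size == size_before
```

The question in the thread is then whether btrfs's reported available space reflects the deallocated extents promptly, not whether the hole itself was made.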
On 2019-10-22 18:56, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]
On Mon, Oct 21, 2019 at 15:34, Qu Wenruo wrote:
[...] just fstrim wiped some old tree blocks. But maybe it's some unfortunate
race, that fstrim trimmed some tree blocks still in use.
Forgive me for as
On 2018-07-16 14:29, Goffredo Baroncelli wrote:
On 07/15/2018 04:37 PM, waxhead wrote:
David Sterba wrote:
An interesting question is the naming of the extended profiles. I picked
something that can be easily understood but it's not a final proposal.
Years ago, Hugo proposed a naming scheme tha
On 2018-07-16 16:58, Wolf wrote:
Greetings,
I would like to ask what is a healthy amount of free space to keep on
each device for btrfs to be happy?
This is what my disk array currently looks like
[root@dennas ~]# btrfs fi usage /raid
Overall:
Device size:
On 2018-07-17 13:54, Martin Steigerwald wrote:
Nikolay Borisov - 17.07.18, 10:16:
On 17.07.2018 11:02, Martin Steigerwald wrote:
Nikolay Borisov - 17.07.18, 09:20:
On 16.07.2018 23:58, Wolf wrote:
Greetings,
I would like to ask what is a healthy amount of free space to
keep on each device
On 2018-07-18 04:39, Duncan wrote:
Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
As implemented in BTRFS, raid1 doesn't have striping.
The argument is that because there's only two copies, on multi-device
btrfs raid1 with 4+ devices of equal size so chunk allocations tend to
On 2018-07-18 03:20, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 07/15/2018 04:37 PM, waxhead wrote:
Striping and mirroring/pa
On 2018-07-18 09:07, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
wrote:
If you're doing a training presentation, it may be worth mentioning that
preallocation with fallocate() does not behave the same on BTRFS as it does
on other filesystems. For example
On 2018-07-18 13:04, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 7:30 AM, Austin S. Hemmelgarn
wrote:
I'm not sure. In this particular case, this will fail on BTRFS for any X
larger than just short of one third of the total free space. I would expect
it to fail for any X larger than
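The behavior under discussion is easy to poke at with posix_fallocate(). A sketch (on most filesystems a successful reservation is supposed to guarantee that later writes into the range cannot fail with ENOSPC; on BTRFS a later overwrite still COWs into fresh extents, which is why the guarantee is weaker there):

```python
import os
import tempfile

SIZE = 4 << 20  # 4 MiB

with tempfile.NamedTemporaryFile() as f:
    # Reserve the range up front.
    os.posix_fallocate(f.fileno(), 0, SIZE)
    st = os.fstat(f.fileno())
    print(st.st_size)          # logical size now covers the whole range
    print(st.st_blocks * 512)  # bytes actually allocated on disk
    # On BTRFS, overwriting this preallocated range rewrites it COW-style
    # into new extents, so even "preallocated" writes can hit ENOSPC.
```

This matches the observation above: the failure threshold depends on how much space a COW rewrite of the reserved range would need, not just on the reservation itself.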
On 2018-07-18 13:40, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 11:14 AM, Chris Murphy wrote:
I don't know for sure, but based on the addresses reported before and
after dd for the fallocated tmp file, it looks like Btrfs is not using
the originally fallocated addresses for dd. So maybe it is
On 2018-07-18 17:32, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 12:01 PM, Austin S. Hemmelgarn
wrote:
On 2018-07-18 13:40, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 11:14 AM, Chris Murphy
wrote:
I don't know for sure, but based on the addresses reported before and
after dd fo
On 2018-07-18 15:42, Goffredo Baroncelli wrote:
On 07/18/2018 09:20 AM, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 07/15/2018 0
On 2018-07-19 03:27, Qu Wenruo wrote:
On 2018-07-14 02:46, David Sterba wrote:
Hi,
I have some goodies that go into the RAID56 problem, although not
implementing all the remaining features, it can be useful independently.
This time my hackweek project
https://hackweek.suse.com/17/projects/
On 2018-07-19 13:29, Goffredo Baroncelli wrote:
On 07/19/2018 01:43 PM, Austin S. Hemmelgarn wrote:
On 2018-07-18 15:42, Goffredo Baroncelli wrote:
On 07/18/2018 09:20 AM, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM
On 2018-07-20 01:01, Andrei Borzenkov wrote:
18.07.2018 16:30, Austin S. Hemmelgarn wrote:
On 2018-07-18 09:07, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
wrote:
If you're doing a training presentation, it may be worth mentioning that
preallocation
On 2018-07-20 13:13, Goffredo Baroncelli wrote:
On 07/19/2018 09:10 PM, Austin S. Hemmelgarn wrote:
On 2018-07-19 13:29, Goffredo Baroncelli wrote:
[...]
So until now you are repeating what I told: the only useful raid profile are
- striping
- mirroring
- striping+paring (even limiting the
On 2018-07-20 14:41, Hugo Mills wrote:
On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote:
20.07.2018 20:16, Goffredo Baroncelli пишет:
[snip]
Limiting the number of disks per raid in BTRFS would be quite simple to implement in the
"chunk allocator"
You mean that currently RA
On 2018-07-31 23:45, MegaBrutal wrote:
Hi all,
I know it's a decade-old question, but I'd like to hear your thoughts
of today. By now, I have become a heavy BTRFS user. Almost everywhere I use
BTRFS, except in situations when it is obvious there is no benefit
(e.g. /var/log, /boot). At home, all my d
On 2018-08-02 06:56, Qu Wenruo wrote:
On 2018-08-02 18:45, Andrei Borzenkov wrote:
Sent from iPhone
On Aug 2, 2018, at 10:02, Qu Wenruo wrote:
On 2018-08-01 11:45, MegaBrutal wrote:
Hi all,
I know it's a decade-old question, but I'd like to hear your thoughts
of today. By no
On 2018-08-09 19:35, Qu Wenruo wrote:
On 8/10/18 1:48 AM, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limits on exclusive/shared bytes, further
complicating the problem.
3) Btrf
On 2018-08-09 13:48, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limits on exclusive/shared bytes, further
complicating the problem.
3) Btrfs quota only accounts data/metadata used
On 2018-08-10 14:21, Tomasz Pala wrote:
On Fri, Aug 10, 2018 at 07:39:30 -0400, Austin S. Hemmelgarn wrote:
I.e.: every shared segment should be accounted within quota (at least once).
I think what you mean to say here is that every shared extent should be
accounted to quotas for every
On 2018-08-10 14:07, Chris Murphy wrote:
On Thu, Aug 9, 2018 at 5:35 PM, Qu Wenruo wrote:
On 8/10/18 1:48 AM, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limits on exclusive/shared
On 2018-08-12 03:04, Andrei Borzenkov wrote:
12.08.2018 06:16, Chris Murphy wrote:
On Fri, Aug 10, 2018 at 9:29 PM, Duncan <1i5t5.dun...@cox.net> wrote:
Chris Murphy posted on Fri, 10 Aug 2018 12:07:34 -0600 as excerpted:
But whether data is shared or exclusive seems potentially ephemeral, an
On 2018-08-10 06:07, Cerem Cem ASLAN wrote:
Original question is here: https://superuser.com/questions/1347843
How can we be sure that a readonly snapshot is not corrupted due to a disk failure?
Is the only way to calculate the checksums one after another and store them
for further examination, or does
On 2018-08-17 05:08, Martin Steigerwald wrote:
Hi!
This happened about two weeks ago. I already dealt with it and all is
well.
Linux hung on suspend so I switched off this ThinkPad T520 forcefully.
After that it did not boot the operating system anymore. Intel SSD 320,
latest firmware, which sh
On 2018-08-17 08:28, Martin Steigerwald wrote:
Thanks for your detailed answer.
Austin S. Hemmelgarn - 17.08.18, 13:58:
On 2018-08-17 05:08, Martin Steigerwald wrote:
[…]
I have seen a discussion about the limitation in point 2. That
allowing to add a device and make it into RAID 1 again
On 2018-08-17 08:50, Roman Mamedov wrote:
On Fri, 17 Aug 2018 14:28:25 +0200
Martin Steigerwald wrote:
First off, keep in mind that the SSD firmware doing compression only
really helps with wear-leveling. Doing it in the filesystem will help
not only with that, but will also give you more spa
On 2018-08-19 06:25, Andrei Borzenkov wrote:
Sent from iPhone
On Aug 19, 2018, at 11:37, Martin Steigerwald wrote:
waxhead - 18.08.18, 22:45:
Adam Hunt wrote:
Back in 2014 Ted Tso introduced the lazytime mount option for ext4
and shortly thereafter a more generic VFS implementation
On 2018-08-21 08:06, Adam Borowski wrote:
On Mon, Aug 20, 2018 at 08:16:16AM -0400, Austin S. Hemmelgarn wrote:
Also, slightly OT, but atimes are not where the real benefit is here for
most people. No sane software other than mutt uses atimes (and mutt's use
of them is not sane, but tha
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who wants to avoid the overhead from them can just
use the `noatime` mount option.
It would be great if someone finally fixed this old bug then:
https://bugzilla.kernel.org/show_bug.cgi?id=61601
Until then, it seems practically i
On 2018-08-21 09:43, David Howells wrote:
Qu Wenruo wrote:
But to be more clear, NOSSD shouldn't be a special case.
In fact currently NOSSD only affects whether we will output the message
"enabling ssd optimization", no real effect if I didn't miss anything.
That's not quite true. In:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10:04AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who wants to avoid the overhead from them can just
use the `noatime` mount option.
It would be great if someone
On 2018-08-21 23:57, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 21 Aug 2018 13:01:00 -0400 as
excerpted:
Otherwise, the only option for people who want it set is to patch the
kernel to get noatime as the default (instead of relatime). I would
look at pushing such a patch upstream
On 2018-08-22 09:48, David Sterba wrote:
On Tue, Aug 21, 2018 at 01:01:00PM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10:04AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who
On 2018-08-22 11:01, David Sterba wrote:
On Wed, Aug 22, 2018 at 09:56:59AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-22 09:48, David Sterba wrote:
On Tue, Aug 21, 2018 at 01:01:00PM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10
On 2018-08-23 10:04, Stefan Malte Schumacher wrote:
Hello,
I originally had a RAID with six 4TB drives, which was more than 80
percent full. So now I bought
a 10TB drive, added it to the array and gave the command to remove the
oldest drive in the array.
btrfs device delete /dev/sda /mnt/btrfs-
On 2018-08-27 17:05, Eugene Bright wrote:
Greetings!
BTRFS wiki says there is no per-subvolume compression option [1].
At the same time, the following command allows me to set properties per subvolume:
btrfs property set /volume compression zstd
Corresponding get command shows distinct propertie
On 2018-08-27 18:53, John Petrini wrote:
Hi List,
I'm seeing corruption errors when running btrfs device stats but I'm
not sure what that means exactly. I've just completed a full scrub and
it reported no errors. I'm hoping someone here can enlighten me.
Thanks!
The first thing to understand h
On 2018-08-28 11:27, Noah Massey wrote:
On Tue, Aug 28, 2018 at 10:59 AM Menion wrote:
[sudo] password for menion:
ID   gen     top level  path
--   ---     ---------  ----
257  600627  5          /@
258  600626  5          /@home
296  599489  5          /@apt-snapsho
On 2018-08-28 12:05, Noah Massey wrote:
On Tue, Aug 28, 2018 at 11:47 AM Austin S. Hemmelgarn
wrote:
On 2018-08-28 11:27, Noah Massey wrote:
On Tue, Aug 28, 2018 at 10:59 AM Menion wrote:
[sudo] password for menion:
ID gen top level path
It looks like that cannot be easily disabled, and without the
apt-btrfs-snapshot package scheduling cleanups it's not ever
automatically removed?
> just google it, there is no mention of this behaviour
> On Tue, Aug 28, 2018 at 19:07, Austin S. Hemmelgarn wrote:
On 2018-08-29 08:33, Nikolay Borisov wrote:
On 29.08.2018 15:09, Qu Wenruo wrote:
On 2018/8/29 4:35 PM, Nikolay Borisov wrote:
Here is the userspace tooling support for utilising the new metadata_uuid field,
enabling the change of fsid without having to rewrite every metadata block. This
pat
On 2018-08-29 13:24, Axel Burri wrote:
This patch allows to build distinct binaries for specific btrfs
subcommands, e.g. "btrfs-subvolume-show" which would be identical to
"btrfs subvolume show".
Motivation:
While btrfs-progs offer the all-inclusive "btrfs" command, it gets
pretty cumbersome t
On 2018-08-30 13:13, Axel Burri wrote:
On 29/08/2018 21.02, Austin S. Hemmelgarn wrote:
On 2018-08-29 13:24, Axel Burri wrote:
This patch allows to build distinct binaries for specific btrfs
subcommands, e.g. "btrfs-subvolume-show" which would be identical to
"btrfs
On 2018-09-06 03:23, Nathan Dehnel wrote:
https://lwn.net/Articles/287289/
In 2008, HP released the source code for a filesystem called advfs so
that its features could be incorporated into linux filesystems. Advfs
had a feature where a group of file writes were an atomic transaction.
https://w
201 - 300 of 1429 matches