On 08.09.2018 18:24, Adam Borowski wrote:
> On Thu, Sep 06, 2018 at 06:08:33AM -0400, Austin S. Hemmelgarn wrote:
>> On 2018-09-06 03:23, Nathan Dehnel wrote:
>>> So I guess my question is, does btrfs support atomic writes across
>>> multiple files? Or is anyone interested in such a feature?
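As context for the question: there is no cross-file atomic-write API in btrfs (or the VFS) today. A hedged sketch of the usual userspace approximation — stage new versions and publish each with rename(2), which is atomic per file (readers see old or new content, never a torn write), though not atomic *across* files; on btrfs only a subvolume snapshot gives a consistent multi-file checkpoint. All paths below are illustrative:

```shell
set -e
dir=$(mktemp -d)            # stand-in for the data directory
stage="$dir/.stage"         # scratch area on the same filesystem
mkdir "$stage"

printf 'new-a\n' > "$stage/a"
printf 'new-b\n' > "$stage/b"

# Publish: each mv is an atomic rename(2) within the same filesystem.
# A crash between the two leaves a mix of old/new files — hence the
# need for snapshots (or a real kernel API) for true multi-file atomicity.
mv "$stage/a" "$dir/a"
mv "$stage/b" "$dir/b"

cat "$dir/a" "$dir/b"       # prints new-a then new-b
rm -r "$dir"
```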
[Fri Aug 17 16:21:06 2018] (truncated kernel register dump, R09–R15, elided)
Regards,
Martin Raiber
On 02.08.2018 14:27 Austin S. Hemmelgarn wrote:
> On 2018-08-02 06:56, Qu Wenruo wrote:
>>
>> On 2018-08-02 18:45, Andrei Borzenkov wrote:
>>>
>>> Sent from my iPhone
>>>
On 2 Aug 2018, at 10:02, Qu Wenruo
wrote:
> On 2018-08-01 11:45, MegaBrutal wrote:
> Hi all,
On 10.07.2018 09:04 Pete wrote:
> I've just had the error in the subject which caused the file system to
> go read-only.
>
> Further part of error message:
> WARNING: CPU: 14 PID: 1351 at fs/btrfs/extent-tree.c:3076
> btrfs_run_delayed_refs+0x163/0x190
>
> 'Screenshot' here:
>
On 08.01.2018 19:34 Austin S. Hemmelgarn wrote:
> On 2018-01-08 13:17, Graham Cobb wrote:
>> On 08/01/18 16:34, Austin S. Hemmelgarn wrote:
>>> Ideally, I think it should be as generic as reasonably possible,
>>> possibly something along the lines of:
>>>
>>> A: While not strictly necessary,
with btrfs_should_throttle_delayed_refs. Maybe by
creating a snapshot of a file and then modifying it (some action that
creates delayed refs, is not truncate, which is already throttled, and
does not commit a transaction, which is also throttled).
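The suggested reproducer can be sketched in shell, with a reflink clone standing in for a per-file snapshot (paths are illustrative; `--reflink=auto` falls back to a plain copy on filesystems without reflink support, so the commands run anywhere, while on btrfs the in-place rewrite CoWs a shared extent and queues delayed ref updates):

```shell
set -e
f=$(mktemp)
# A small test file with a few extents.
dd if=/dev/zero of="$f" bs=4096 count=16 status=none
# "Snapshot" the file: a reflink clone shares its extents.
cp --reflink=auto "$f" "$f.clone"
# Modify the clone in place (conv=notrunc avoids truncate, which is
# already throttled) — on btrfs this generates delayed refs for the
# old shared extent and the newly written one.
dd if=/dev/urandom of="$f.clone" bs=4096 count=1 conv=notrunc status=none
rm "$f" "$f.clone"
```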
Regards,
Martin Raiber
On 03.12.2017 16:39 Martin Raiber wrote:
> On 26.11.2017 17:02, Tomasz Chmielewski wrote:
>> On 2017-11-27 00:37, Martin Raiber wrote:
>>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
>>>> Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>>>
On 26.11.2017 17:02, Tomasz Chmielewski wrote:
> On 2017-11-27 00:37, Martin Raiber wrote:
>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
>>> Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>> I guess it is too late now, but I guess the "btrfs fi
fault, and as usual not all the conditions may be
relevant. It could instead be an upper-layer error (Hyper-V storage),
a memory issue, or an application error.
Regards,
Martin Raiber
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message t
On 02.11.2017 16:10 Hans van Kranenburg wrote:
> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>> help to have multiple btrfs-cleaner threads? The block layer underneath
>> would have higher throughput with more simultaneous read/write requests.
Hi,
snapshot cleanup is a little slow in my case (50TB volume). Would it
help to have multiple btrfs-cleaner threads? The block layer underneath
would have higher throughput with more simultaneous read/write requests.
Regards,
Martin Raiber
On 19.10.2017 10:16 Vladimir Panteleev wrote:
> On Tue, 17 Oct 2017 16:21:04 -0700, Duncan wrote:
>> * try the balance on 4.14-rc5+, where the known bug should be fixed
>
> Thanks! However, I'm getting the same error on
> 4.14.0-rc5-g9aa0d2dde6eb. The stack trace is different, though:
>
> Aside
le operations?
>
as far as I can see it only uses the log tree in some cases where the
log tree was already used for the file or the parent directory. The
cases are documented here
https://github.com/torvalds/linux/blob/master/fs/btrfs/tree-log.c#L45 .
So rename isn't much heavier than unlink+create.
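As background for the rename vs. unlink+create comparison above, a minimal sketch of the two userspace patterns (temp paths are illustrative; rename(2) is what makes the second pattern atomic, which is one reason it is preferred even if the log-tree cost were similar):

```shell
set -e
f=$(mktemp)
printf 'old\n' > "$f"

# Unsafe pattern (unlink+create): between the rm and the new write,
# the file does not exist, and a crash can leave it missing or partial.
#   rm "$f"; printf 'new\n' > "$f"

# Safe pattern: write a temporary, then rename it over the target.
printf 'new\n' > "$f.tmp"
mv "$f.tmp" "$f"            # atomic rename(2) within the filesystem

cat "$f"                    # prints new
rm "$f"
```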
…stuff like page files or the shadow storage area from the image
backups, as well and has a mode to store image backups as raw btrfs files.
Linux VMs I'd backup as files either from the hypervisor or from in VM.
If you want to backup big btrfs image files it can do that too, and
faster th
>> …dirty data pages to disk, and then commit transaction.
>> While only calling btrfs_commit_transaction() doesn't trigger dirty
>> page writeback.
>>
>> So there is a difference.
this conversation made me realize why btrfs has sub-optimal metadata
performance. CoW b-tree
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
> On 2017-02-08 07:14, Martin Raiber wrote:
>> Hi,
>>
>> On 08.02.2017 03:11 Peter Zaitsev wrote:
>>> Out of curiosity, I see one problem here:
>>> If you're doing snapshots of the live database, each snap
the properly snapshotted state, so I see the advantages more with
usability and taking care of corner cases automatically.
Regards,
Martin Raiber
On 04.01.2017 00:43 Hans van Kranenburg wrote:
> On 01/04/2017 12:12 AM, Peter Becker wrote:
>> Good hint, this would be an option and i will try this.
>>
>> Regardless of this the curiosity has packed me and I will try to
>> figure out where the problem with the low transfer rate is.
>>
>>
problem.
Perhaps it would be good to somehow show that "global reserve" belongs
to metadata and show in btrfs fi usage/df that metadata is full if
global reserve>=free metadata, so that future users are not as confused
by this situation as I was.
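The suggested accounting can be sketched with a small parser over `btrfs fi df`-style output. The sample lines below are made up (real numbers come from `btrfs fi df <mountpoint>`), and the parse assumes uniform units, whereas real output mixes GiB/MiB/B; "effective free metadata" here means free metadata minus the unused part of the global reserve:

```shell
awk -F'[=,]' '
# With FS [=,]: $3 is the total value, $5 the used value; "+0" takes
# the numeric prefix of strings like "2.00GiB" (illustrative parse only).
/^Metadata/      { mtotal = $3 + 0; mused = $5 + 0 }
/^GlobalReserve/ { gtotal = $3 + 0; gused = $5 + 0 }
END {
    free = (mtotal - mused) - (gtotal - gused)
    if (free <= 0) print "metadata effectively full"
    else           printf "effective free metadata: %.2fGiB\n", free
}' <<'EOF'
Data, single: total=900.00GiB, used=850.00GiB
Metadata, DUP: total=2.00GiB, used=1.60GiB
GlobalReserve, single: total=0.50GiB, used=0.00GiB
EOF
```

With the sample numbers, 0.40GiB of free metadata is smaller than the 0.50GiB reserve, so the filesystem would be reported as effectively full — matching the confusing df situation described above.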
Regards,
Martin Raiber
On 22.11.2016 15:16 Martin Raiber wrote:
> ...
> Interestingly,
> after running "btrfs check --repair" "df" shows 0 free space (Used
> 516456408 Available 0), being inconsistent with the below other btrfs
> free space information.
>
> btrfs fi usage
Hi,
I have a file system that is currently broken because of ENOSPC issues.
It is a single-device file system with no compression and no quotas
enabled, but with some snapshots. It was created, and the initial ENOSPC/
free-space inconsistency occurred, with 4.4.20 and 4.4.30 (both vanilla).
Currently I am on
On 20.07.2016 11:15 Libor Klepáč wrote:
> Hello,
> we use backuppc to backup our hosting machines.
>
> I have recently migrated it to btrfs, so we can use send-receive for offsite
> backups of our backups.
>
> I have several btrfs volumes, each of which hosts an nspawn container, which runs in
> /system
Martin
From ebc5e8721264823a0df92b31e5fb1381f7f5e6f8 Mon Sep 17 00:00:00 2001
From: Martin Raiber <mar...@urbackup.org>
Date: Fri, 25 Sep 2015 13:24:13 +0200
Subject: [PATCH 1/1] btrfs: test for incremental send after file unlink and
then cloning
Creating a snapshot, then removing a file and cloning
Hi,
this looks like http://www.spinics.net/lists/linux-btrfs/msg26774.html
which seems to be fixed, but it occurs on the latest stable kernel for
me (3.16.2):
[3.648344] BTRFS info (device xvda3): metadata ratio 4
[3.648350] BTRFS info (device xvda3): not using ssd allocation scheme