>> [ ... ] these extents are all over the place, they're not
>> contiguous at all. 4K here, 4K there, 4K over there, back to
>> 4K here next to this one, 4K over there...12K over there, 500K
>> unwritten, 4K over there. This seems not so consequential on
>> SSD, [ ... ]
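The layout described above can be inspected directly with filefrag (from e2fsprogs). A minimal sketch, using a throwaway temp file in place of a real journal; paths are examples, and the many-small-extents effect is strongest on a COW filesystem:

```shell
# Provoke a scattered-4K layout with separated single-block writes,
# then list the resulting extents (one line per extent with -v).
f=$(mktemp)
for i in $(seq 1 8); do
  # write one 4K block at block offset i*2, leaving 4K holes between them
  dd if=/dev/zero of="$f" bs=4k count=1 seek=$((i * 2)) conv=notrunc 2>/dev/null
done
command -v filefrag >/dev/null && filefrag -v "$f"
rm -f "$f"
```

On btrfs, `compsize` (if installed) gives a similar per-file extent summary.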
> Indeed there were recent
> [ ... ] Instead, you can use raw files (preferably sparse unless
> there's both nocow and no snapshots). Btrfs does natively everything
> you'd gain from qcow2, and does it better: you can delete the master
> of a cloned image, deduplicate them, deduplicate two unrelated images;
> you can turn
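The raw-file workflow suggested above can be sketched as follows (filenames and sizes are made up; extent sharing via `cp --reflink` needs a filesystem with reflink support, such as btrfs or XFS):

```shell
# A sparse raw image: the apparent size is 20G, nothing is allocated yet.
truncate -s 20G master.img
# Optional nocow -- btrfs only, effective only for data written after the
# flag is set, and it defeats snapshot/reflink sharing:
# chattr +C master.img
# "Clone" it: --reflink=auto shares extents where the filesystem supports
# it (use --reflink=always to fail instead of falling back to a full copy).
cp --reflink=auto master.img clone.img
# The master can now be deleted; the clone keeps its own extent references.
rm master.img
```

For the deduplication case mentioned above, tools such as `duperemove` drive the kernel's dedupe ioctl over existing files.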
Goffredo Baroncelli posted on Fri, 28 Apr 2017 19:05:21 +0200 as
excerpted:
> After some thinking I adopted a different strategy: I use journald as
> collector, then forward all the logs to rsyslogd, which uses a "log
> append" format. Journald never writes to the root filesystem, only in
>
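The arrangement described above (journald as a volatile collector only, rsyslogd doing the durable writes) roughly corresponds to this journald.conf sketch; the option values are an assumption about the poster's setup:

```ini
# /etc/systemd/journald.conf (sketch)
[Journal]
Storage=volatile      # journal lives only in /run, never on disk
ForwardToSyslog=yes   # every record is handed to the local syslog daemon
```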
> Subject: Re: btrfs, journald logs, fragmentation, and fallocate
>
>
> In the past I faced the same problems; I collected some data here
> http://kreijack.blogspot.it/2014/06/btrfs-and-systemd-journal.html.
> Unfortunately the journald files are very bad
Indeed there were recent
On Fri, Apr 28, 2017 at 11:41:00AM -0600, Chris Murphy wrote:
> The same behavior happens with NTFS in qcow2 files. They quickly end
> up with 100,000+ extents unless set nocow. It's like the worst case
> scenario.
You should never use qcow2 on btrfs, especially if snapshots are involved.
They
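If a VM image must live on btrfs anyway, the usual mitigation hinted at above is a nocow directory. A sketch with an assumed path; note that +C only affects files created after the flag is set, and `chattr +C` is rejected on filesystems without the attribute:

```shell
d=/var/lib/libvirt/images            # assumed location for VM images
mkdir -p "$d"
chattr +C "$d" 2>/dev/null || true   # new files inherit nocow on btrfs
truncate -s 40G "$d/guest.img"       # raw sparse image, created *after* +C
lsattr -d "$d" 2>/dev/null           # the 'C' flag shows up on btrfs
```

Raw is used here rather than qcow2, per the advice above; nocow also disables btrfs checksumming and compression for those files.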
On Fri, Apr 28, 2017 at 1:39 PM, Peter Grandi wrote:
> In a particularly demented setup I had to decatastrophize with
> great pain a Zimbra QCOW2 disk image (XFS on NFS on XFS on
> RAID6) containing an ever-growing Maildir email archive that
> ended up with over a
On Fri, Apr 28, 2017 at 11:53 AM, Peter Grandi wrote:
> Well, depends, but probably the single file: it is more likely
> that the 20,000 fragments will actually be contiguous, and that
> there will be less metadata IO than for 40,000 separate journal
> files.
You
On Fri, Apr 28, 2017 at 11:46 AM, Peter Grandi wrote:
> So there are three layers of silliness here:
>
> * Writing large files slowly to a COW filesystem and
> snapshotting it frequently.
> * A filesystem that does delayed allocation instead of
> allocate-ahead,
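"Allocate-ahead" here means reserving the extent before the slow appends arrive, which is what fallocate(2) does (and presumably why the thread subject mentions fallocate). A minimal sketch:

```shell
f=$(mktemp)
fallocate -l 8M "$f"                # reserve 8 MiB up front, in one go
stat -c 'size=%s blocks=%b' "$f"    # space is allocated, not sparse
rm -f "$f"
```

On a COW filesystem the catch is that later overwrites of the preallocated range still go to new extents, so preallocation alone doesn't prevent fragmentation there.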
>> The gotcha though is there's a pile of data in the journal
>> that would never make it to rsyslogd. If you use journalctl
>> -o verbose you can see some of this.
> You can send *all the info* to rsyslogd via imjournal
> http://www.rsyslog.com/doc/v8-stable/configuration/modules/imjournal.html
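Per the imjournal documentation linked above, the rsyslog side is a single input module; a config sketch (the state-file name is an example):

```ini
# /etc/rsyslog.conf fragment: read directly from the journal, including
# fields that never reach the classic syslog socket.
module(load="imjournal" StateFile="imjournal.state")
```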
On 2017-04-28 19:41, Chris Murphy wrote:
> On Fri, Apr 28, 2017 at 11:05 AM, Goffredo Baroncelli wrote:
>
>> In the past I faced the same problems; I collected some data here
>> http://kreijack.blogspot.it/2014/06/btrfs-and-systemd-journal.html.
>> Unfortunately the
> [ ... ] And that makes me wonder whether metadata
> fragmentation is happening as a result. But in any case,
> there's a lot of metadata being written for each journal
> update compared to what's being added to the journal file. [
> ... ]
That's the "wandering trees" problem in COW filesystems,
> Old news is that systemd-journald journals end up pretty
> heavily fragmented on Btrfs due to COW.
This has indeed been discussed before in detail here, but also
here: http://www.sabi.co.uk/blog/15-one.html?150203#150203
> While journald uses chattr +C on journal files now, COW still
>
On Fri, Apr 28, 2017 at 11:05 AM, Goffredo Baroncelli wrote:
> In the past I faced the same problems; I collected some data here
> http://kreijack.blogspot.it/2014/06/btrfs-and-systemd-journal.html.
> Unfortunately the journald files are very bad, because first the data is
On 2017-04-28 18:16, Chris Murphy wrote:
> Old news is that systemd-journald journals end up pretty heavily
> fragmented on Btrfs due to COW. While journald uses chattr +C on
> journal files now, COW still happens if the subvolume the journal is
> in gets snapshotted, e.g. a week-old system.journal