2017-12-11 8:18 GMT+03:00 Dave :
> On Tue, Oct 31, 2017 someone wrote:
>
> > 2. Put $HOME/.cache on a separate BTRFS subvolume that is mounted
> > nocow -- it will NOT be snapshotted

I did exactly this. It serves the purpose of avoiding snapshots.
However, today I saw the following at
https://wiki.archlinux.org/index.php/Btrfs
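A minimal sketch of how such a layout could look in /etc/fstab (the UUID, subvolume name, and mount point below are placeholders, not taken from the thread):

```
# Cache on its own subvolume, excluded from snapshots of the parent volume
UUID=xxxxxxxx  /home/user/.cache  btrfs  subvol=@cache,noatime,nodatacow  0 0
```

Caveat: nodatacow has historically been applied per filesystem rather than per subvolume, so setups that want NOCOW only for the cache usually set `chattr +C` on the directory instead.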
On 2017-11-03 03:26, Kai Krakow wrote:
> On Thu, 2 Nov 2017 22:47:31 -0400, Dave wrote:
> > On Thu, Nov 2, 2017 at 5:16 PM, Kai Krakow
> > wrote:
> >
> > > You may want to try btrfs autodefrag mount option and see if it
> > > improves things (tho, the effect may take days or weeks to apply if
On Fri, 3 Nov 2017 08:58:22 +0300, Marat Khalili wrote:
> On 02/11/17 04:39, Dave wrote:
> > I'm going to make this change now. What would be a good way to
> > implement this so that the change applies to the $HOME/.cache of
> > each user?
> I'd make each user's .cache a symlink
On Thu, 2 Nov 2017 22:59:36 -0400, Dave wrote:
> On Thu, Nov 2, 2017 at 7:07 AM, Austin S. Hemmelgarn
> wrote:
> > On 2017-11-01 21:39, Dave wrote:
> >> I'm going to make this change now. What would be a good way to
> >> implement this so that
On 02/11/17 04:39, Dave wrote:
> I'm going to make this change now. What would be a good way to
> implement this so that the change applies to the $HOME/.cache of each
> user?

I'd make each user's .cache a symlink (should work but if it won't then
bind mount) to a per-user directory in some
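The symlink idea could be scripted roughly like this; the helper name and the /mnt/cache path are invented for illustration, not from the thread:

```shell
# setup_user_cache HOME_DIR CACHE_BASE:
# replace HOME_DIR/.cache with a symlink to a per-user directory
# on a separate (e.g. nocow, unsnapshotted) subvolume.
setup_user_cache() {
    home=$1; base=$2
    user=${home##*/}                   # user name from the home path
    mkdir -p "$base/$user"             # per-user dir on the cache subvolume
    rm -rf "$home/.cache"              # drop the old in-home cache
    ln -sfn "$base/$user" "$home/.cache"
}
# e.g.: for h in /home/*; do setup_user_cache "$h" /mnt/cache; done
```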
On Thu, Nov 2, 2017 at 7:07 AM, Austin S. Hemmelgarn
wrote:
> On 2017-11-01 21:39, Dave wrote:
>> I'm going to make this change now. What would be a good way to
>> implement this so that the change applies to the $HOME/.cache of each
>> user?
>>
>> The simple way would be to
On Thu, Nov 2, 2017 at 5:16 PM, Kai Krakow wrote:
>
> You may want to try btrfs autodefrag mount option and see if it
> improves things (tho, the effect may take days or weeks to apply if you
> didn't enable it right from the creation of the filesystem).
>
> Also,
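For reference, autodefrag is a mount option; a sketch of enabling it (the UUID and mount point are placeholders):

```
# /etc/fstab -- autodefrag watches for small random writes and queues
# the affected files for background defragmentation
UUID=xxxxxxxx  /  btrfs  defaults,autodefrag  0 0
```

It can also be tried on a live system with `mount -o remount,autodefrag /`.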
On Tue, 31 Oct 2017 20:37:27 -0400, Dave wrote:
> > Also, you can declare the '.firefox/default/' directory to be
> > NOCOW, and that "just works".
>
> The cache is in a separate location from the profiles, as I'm sure you
> know. The reason I suggested a separate
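Setting NOCOW on an existing directory is slightly fiddly, because chattr +C only takes effect for files created after the flag is set. A sketch of one way to handle that (the helper is invented for illustration, not from the thread):

```shell
# nocow_dir DIR: recreate DIR with the NOCOW attribute and copy its
# contents back, so the re-created files pick up the flag.
nocow_dir() {
    dir=$1
    mv "$dir" "$dir.old"
    mkdir "$dir"
    chattr +C "$dir" 2>/dev/null || true  # silently a no-op off btrfs
    cp -a "$dir.old/." "$dir/"            # files re-created here inherit NOCOW
    rm -rf "$dir.old"
}
# e.g.: nocow_dir "$HOME/.mozilla/firefox/default"
```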
On 2017-11-02 14:09, Dave wrote:
> On Thu, Nov 2, 2017 at 7:17 AM, Austin S. Hemmelgarn
> wrote:
> >> And the worst performing machine was the one with the most RAM and a
> >> fast NVMe drive and top of the line hardware.
> >
> > Somewhat nonsensically, I'll bet that NVMe is a contributing factor in
> > this particular
On 2017-11-01 20:09, Dave wrote:
On Wed, Nov 1, 2017 at 1:48 PM, Peter Grandi wrote:
When defragmenting individual files on a BTRFS filesystem with
COW, I assume reflinks between that file and all snapshots are
broken. So if there are 30 snapshots on that volume,
On 2017-11-01 21:39, Dave wrote:
On Wed, Nov 1, 2017 at 8:21 AM, Austin S. Hemmelgarn
wrote:
The cache is in a separate location from the profiles, as I'm sure you
know. The reason I suggested a separate BTRFS subvolume for
$HOME/.cache is that this will prevent the
On Wed, Nov 1, 2017 at 8:21 AM, Austin S. Hemmelgarn
wrote:
>> The cache is in a separate location from the profiles, as I'm sure you
>> know. The reason I suggested a separate BTRFS subvolume for
>> $HOME/.cache is that this will prevent the cache files for all
>>
> Another one is to find the most fragmented files first, or all
> files of at least 1M with at least, say, 100 fragments, as in:
> find "$HOME" -xdev -type f -size +1M -print0 | xargs -0 filefrag \
> | perl -n -e 'print "$1\0" if (m/(.*): ([0-9]+) extents/ && $2 > 100)' \
> | xargs -0 btrfs filesystem defragment
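To sanity-check that filter logic without running filefrag on a real filesystem, here is the same ">100 extents" test applied to fabricated filefrag-style output (the paths and extent counts are made up, and awk stands in for the perl one-liner):

```shell
# Keep only files whose reported extent count exceeds 100.
printf '%s\n' \
  '/home/u/big.db: 340 extents found' \
  '/home/u/small.log: 3 extents found' \
| awk -F': | extents' '$2 > 100 {print $1}'
# prints /home/u/big.db
```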
On Wed, Nov 1, 2017 at 1:48 PM, Peter Grandi wrote:
>> When defragmenting individual files on a BTRFS filesystem with
>> COW, I assume reflinks between that file and all snapshots are
>> broken. So if there are 30 snapshots on that volume, that one
>> file will suddenly
On Wed, Nov 1, 2017 at 9:31 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Dave posted on Tue, 31 Oct 2017 17:47:54 -0400 as excerpted:
>
>> 6. Make sure Firefox is running in multi-process mode. (Duncan's
>> instructions, while greatly appreciated and very useful, left me
>> slightly confused about
> When defragmenting individual files on a BTRFS filesystem with
> COW, I assume reflinks between that file and all snapshots are
> broken. So if there are 30 snapshots on that volume, that one
> file will suddenly take up 30 times more space... [ ... ]
Defragmentation works by effectively making
Dave posted on Tue, 31 Oct 2017 17:47:54 -0400 as excerpted:
> 6. Make sure Firefox is running in multi-process mode. (Duncan's
> instructions, while greatly appreciated and very useful, left me
> slightly confused about pulseaudio's compatibility with multi-process
> mode.)
Just to clarify:

[...] This may cause
considerable increase of space usage depending on the broken up
ref-links.

I am running Ubuntu 16.04 with Linux kernel 4.10 and I have several
snapshots.
Therefore, I better should avoid calling "btrfs filesystem defragment -r"?

What is the defragmenting best practice?
Avoid it completely?
On Tue, Oct 31, 2017 at 05:47:54PM -0400, Dave wrote:
> I'm following up on all the suggestions regarding Firefox performance
> on BTRFS.
>
>
>
> 5. Firefox profile sync has not worked well for us in the past, so we
> don't use it.
> 6. Our machines generally have plenty of RAM so we could put
> I'm following up on all the suggestions regarding Firefox performance
> on BTRFS. [ ... ]
I haven't read that yet, so maybe I am missing something, but I
use Firefox with Btrfs all the time and I haven't got issues.
[ ... ]
> 1. BTRFS snapshots have proven to be too useful (and too important
I'm following up on all the suggestions regarding Firefox performance
on BTRFS. I have time to make these changes now, but I am having
trouble figuring out what to do. The constraints are:
1. BTRFS snapshots have proven to be too useful (and too important to
our overall IT approach) to forego.
2.
On Friday, 22 September 2017 at 13:22:52 CEST, Austin S. Hemmelgarn wrote:
> > I'm not sure where Firefox puts its cache, I only use it on very rare
> > occasions. But I think it's going to .cache/mozilla last time looked
> > at it.
>
> I'm pretty sure that is correct.
FWIW, on my system
On 2017-09-21 16:10, Kai Krakow wrote:
On Wed, 20 Sep 2017 07:46:52 -0400, "Austin S. Hemmelgarn" wrote:
Fragmentation: Files with a lot of random writes can become
heavily fragmented (1+ extents) causing excessive multi-second
spikes of CPU load on systems
On Thu, 21 Sep 2017 22:10:13 +0200, Kai Krakow wrote:
> On Wed, 20 Sep 2017 07:46:52 -0400, "Austin S. Hemmelgarn" wrote:
>
> > > Fragmentation: Files with a lot of random writes can become
> > > heavily fragmented (1+ extents) causing
These are great suggestions. I will test several of them (or all of
them) and report back with my results once I have done the testing.
Thank you! This is a fantastic mailing list.
P.S. I'm inclined to stay with Firefox, but I will definitely test
Chromium vs Firefox after making a series of
On Wed, 20 Sep 2017 07:46:52 -0400, "Austin S. Hemmelgarn" wrote:
> > Fragmentation: Files with a lot of random writes can become
> > heavily fragmented (1+ extents) causing excessive multi-second
> > spikes of CPU load on systems with an SSD or large amount of
On September 19, 2017 11:38:13 PM PDT, Dave wrote:
>>On Thu 2017-08-31 (09:05), Ulli Horlacher wrote:
>
>Here's my scenario. Some months ago I built an over-the-top powerful
>desktop computer / workstation and I was looking forward to really
>fantastic performance
Dave posted on Wed, 20 Sep 2017 02:38:13 -0400 as excerpted:
> Here's my scenario. Some months ago I built an over-the-top powerful
> desktop computer / workstation and I was looking forward to really
> fantastic performance improvements over my 6 year old Ubuntu machine. I
> installed Arch Linux
> ...I better should avoid calling "btrfs filesystem defragment -r"?
>
> What is the defragmenting best practice?
> Avoid it completly?

My question is the same as the OP in this thread, so I came here to
read the answers before asking. However, it turns out that I still
need to ask something. Should I ask here or start a new thread? (I'll
assume here, since the t
On 15 September 2017 at 18:08, Kai Krakow wrote:
[..]
> According to Tomasz, your tests should not run at vastly different
> speeds because fragmentation has no impact on performance, quod est
> demonstrandum... I think we will not get to the "erat" part.
No. This is not
On Fri, 15 Sep 2017 16:11:50 +0200, Michał Sokołowski wrote:
> On 09/15/2017 03:07 PM, Tomasz Kłoczko wrote:
> > [...]
> > Case #1
> > 2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs -> qemu cow2
> > storage -> guest BTRFS filesystem
> > SQL table row insertions per
[ ... ]
Case #1
2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs
-> qemu cow2 storage -> guest BTRFS filesystem
SQL table row insertions per second: 1-2
Case #2
2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs
-> qemu raw storage -> guest EXT4 filesystem
On 09/15/2017 03:07 PM, Tomasz Kłoczko wrote:
> [...]
> Case #1
> 2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs -> qemu cow2 storage
> -> guest BTRFS filesystem
> SQL table row insertions per second: 1-2
>
> Case #2
> 2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs -> qemu raw storage ->
>
On 15 September 2017 at 11:54, Michał Sokołowski wrote:
[..]
>> Just please some example which I can try to replay which ill be
>> showing that we have similar results.
>
> Case #1
> 2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs -> qemu cow2 storage
> -> guest BTRFS
On 2017-09-14 22:26, Tomasz Kłoczko wrote:
On 14 September 2017 at 19:53, Austin S. Hemmelgarn
wrote:
[..]
While it's not for BTRFS, a tool called e4rat might be of interest to you
regarding this. It reorganizes files on an ext4 filesystem so that stuff
used by the boot
> Case #1
> 2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs -> qemu cow2 storage
> -> guest BTRFS filesystem
> SQL table row insertions per second: 1-2
"Doctor, if I stab my hand with a fork it hurts a lot: can you
cure that?"
> Case #2
> 2x 7200 rpm HDD -> md raid 1 -> host BTRFS rootfs ->
On 09/14/2017 07:48 PM, Tomasz Kłoczko wrote:
> On 14 September 2017 at 16:24, Kai Krakow wrote:
> [..]
>> > Getting e.g. boot files into read order or at least nearby improves
>> > boot time a lot. Similar for loading applications.
> [...]
> Just please some example which I
On 14 September 2017 at 19:53, Austin S. Hemmelgarn
wrote:
[..]
> While it's not for BTRFS, a tool called e4rat might be of interest to you
> regarding this. It reorganizes files on an ext4 filesystem so that stuff
> used by the boot loader is right at the beginning of the
On Thu, 14 Sep 2017 18:48:54 +0100, Tomasz Kłoczko wrote:
> On 14 September 2017 at 16:24, Kai Krakow
> wrote: [..]
> > Getting e.g. boot files into read order or at least nearby improves
> > boot time a lot. Similar for loading applications.
>
On 2017-09-14 13:48, Tomasz Kłoczko wrote:
On 14 September 2017 at 16:24, Kai Krakow wrote:
[..]
Getting e.g. boot files into read order or at least nearby improves
boot time a lot. Similar for loading applications.
By how much is it possible to improve boot time?
Just
On 14 September 2017 at 16:24, Kai Krakow wrote:
[..]
> Getting e.g. boot files into read order or at least nearby improves
> boot time a lot. Similar for loading applications.
By how much is it possible to improve boot time?
Just please some example which I can try to
On Thu, 14 Sep 2017 17:24:34 +0200, Kai Krakow wrote:
Errors corrected, see below...
> On Thu, 14 Sep 2017 14:31:48 +0100, Tomasz Kłoczko wrote:
>
> > On 14 September 2017 at 12:38, Kai Krakow
> > wrote: [..]
> > >
On Thu, 14 Sep 2017 14:31:48 +0100, Tomasz Kłoczko wrote:
> On 14 September 2017 at 12:38, Kai Krakow
> wrote: [..]
> >
> > I suggest you only ever defragment parts of your main subvolume or
> > rely on autodefrag, and let bees do optimizing the
On 14 September 2017 at 12:38, Kai Krakow wrote:
[..]
>
> I suggest you only ever defragment parts of your main subvolume or rely
> on autodefrag, and let bees do optimizing the snapshots.
>
> Also, I experimented with adding btrfs support to shake, still working
> on better
On 2017-09-14 03:54, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 12 Sep 2017 13:27:00 -0400 as
excerpted:
The tricky part though is that differing workloads are impacted
differently by fragmentation. Using just four generic examples:
* Mostly sequential write focused workloads (like
Austin S. Hemmelgarn posted on Tue, 12 Sep 2017 13:27:00 -0400 as
excerpted:
> The tricky part though is that differing workloads are impacted
> differently by fragmentation. Using just four generic examples:
>
> * Mostly sequential write focused workloads (like security recording
> systems)
> ...I better should avoid calling "btrfs filesystem defragment -r"?
>
> What is the defragmenting best practice?

That really depends on what you're doing.

First, you need to understand that defrag won't break _all_ reflinks,
just the particular instances you point it at. So, if you have
subvolume A