On Wed, 24 May 2017 08:00:33 +0200,
Kai Krakow <hurikha...@gmail.com> wrote:

> On Wed, 24 May 2017 07:34:34 +0200,
> gentoo-u...@c-14.de wrote:
> 
> > On 17-05-23 at 22:16, Ian Zimmerman wrote:  
> > > So what are gentoo users' opinions on this matter of faith?    
> > I use an ext4 partition backed by zram. Gives me ~3x compression on
> > the things I normally have lying around there (plain text files) and
> > ensures that anything I throw there (or programs throw there) gets
> > cleaned up on reboot.
> >   
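For reference, such a zram-backed /tmp only takes a few commands with
util-linux's zramctl; the size and compressor here are just examples
and need the matching kernel support:

  modprobe zram
  zramctl --find --size 4G --algorithm lz4   # sets up e.g. /dev/zram0
  mkfs.ext4 -L tmp /dev/zram0
  mount -o nosuid,nodev,discard /dev/zram0 /tmp
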
> > > I have long been in the camp that thinks tmpfs for /tmp has no
> > > advantages (and may have disadvantages) over a normal filesystem
> > > like ext3, because the files there are normally so small that they
> > > will stay in the page cache 100% of the time.    
> > I've never actually benchmarked this. Most of the things I notice
> > that tend to end up there are temporary build files generated during
> > configure stages or temporary log files used by various programs
> > (clang static analyzer). Even if the entire file stays in the page
> > cache, it'll still generate IO overhead and extra seeks that might
> > slow down the rest of your system (unless your /tmp is on a
> > different hard drive). On spinning rust that causes slowdowns, while
> > on an SSD it eats away at your writes (which you may or may not have
> > to worry about).
> >   
> > > But I see that tmpfs is the default with systemd.  Surely they
> > > have a good reason for this? :)    
> > Or someone decided they liked the idea and made it the default and
> > nobody ever complained (or, if they did, they were told to just
> > change it on their system). 
> > 
> > Either way, it'd be nice if someone actually benchmarked this.  
> 
> While I have no benchmarks and use the systemd default of tmpfs
> for /tmp, I also put /var/tmp/portage on tmpfs, automounted through
> systemd so it is cleaned up when no longer used (by unmounting).
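
Roughly, that looks like the following two units; the size, idle
timeout and uid/gid values are only examples, not my exact setup:

  # /etc/systemd/system/var-tmp-portage.mount
  [Mount]
  What=tmpfs
  Where=/var/tmp/portage
  Type=tmpfs
  # 250:250 is the portage user/group on a typical Gentoo install
  Options=size=30G,mode=775,uid=250,gid=250

  # /etc/systemd/system/var-tmp-portage.automount
  # the idle timeout unmounts (and thus empties) the tmpfs when unused
  [Automount]
  Where=/var/tmp/portage
  TimeoutIdleSec=30min

  [Install]
  WantedBy=local-fs.target

  # activate:
  systemctl daemon-reload
  systemctl enable --now var-tmp-portage.automount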
> 
> What can I say? It works so much faster: building packages is a lot
> faster most of the time, even though you'd expect gcc to use a lot
> of memory.
> 
> Well, why might that be? First, tmpfs is backed by swap space, which
> means you need a swap partition, of course.

To get in line with Rich Freeman: I didn't want to imply that zswap
only works with swap, nor that tmpfs only works with swap. Both work
without. But if you want to put some serious amount of data into
tmpfs, you need swap as a backing device sooner or later.
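
If you have no swap partition to spare, even a swap file will do to
get started; a minimal sketch (the size is just an example, and on
some filesystems you need dd instead of fallocate):

  fallocate -l 32G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  # /etc/fstab
  /swapfile  none  swap  sw  0 0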

> Swap is a lot simpler than
> file systems, so swapping out unused temporary files is fast and is a
> good thing. Also, unused memory sitting around may be swapped out
> early. Why would you want inactive memory resident? So this is also a
> good thing. Portage can use memory much more efficiently this way.
> 
> Applying this reasoning to /tmp should now explain why it works so
> well and why you may want it.
> 
> BTW: I also use zswap, so tmpfs pages pass through a compressed
> write-back cache in RAM before eventually being written out to swap.
> This should generally be much more efficient (performance-wise) than
> putting /tmp on zram.
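
For completeness, zswap is just a couple of kernel parameters; the
compressor and pool size below are examples and need the matching
kernel support:

  # on the kernel command line:
  #   zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20
  # or switched on at runtime:
  echo 1   > /sys/module/zswap/parameters/enabled
  echo lz4 > /sys/module/zswap/parameters/compressor
  echo 20  > /sys/module/zswap/parameters/max_pool_percent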
> 
> I configured tmpfs for portage to use up to 30GB of space, which is
> almost twice the RAM I have. And it works because tmpfs is not
> required to be resident all the time: inactive parts will be swapped
> out. The kernel handles this much like the page cache, with the
> difference that your files aren't backed by your normal file system
> but by swap. And swap has a lot lower IO overhead.
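
You can watch this happening with the usual tools, nothing special
needed:

  df -h /var/tmp/portage   # tmpfs size vs. current usage
  free -h                  # how much ended up in swap
  swapon --show            # per-device swap usage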
> 
> Overall, having less IO overhead (and less head movement for portage
> builds) is a very efficient thing to do. GCC constantly needs all
> sorts of files from your installation (libs for linking, header
> files, etc.), and writes a lot of transient files which are read once
> later and then discarded. There's no point in putting them on a
> non-transient file system.
> 
> I use the following measures to get more performance out of this
> setup (rough commands below):
> 
>   * I have three swap partitions spread across three HDDs
>   * I have a lot of swap space (60 GB) to have space for tmpfs
>   * I have bcache in front of my HDD filesystem
>   * I have a relatively big SSD dedicated to bcache
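
Roughly, with device names being examples only (make-bcache comes from
bcache-tools):

  # attach an SSD partition as cache to an HDD partition, then put the
  # filesystem on the resulting bcache device
  make-bcache -C /dev/sdd1 -B /dev/sda3
  mkfs.ext4 /dev/bcache0

  # /etc/fstab: equal priorities make the kernel stripe across the swaps
  /dev/sda2  none  swap  sw,pri=5  0 0
  /dev/sdb2  none  swap  sw,pri=5  0 0
  /dev/sdc2  none  swap  sw,pri=5  0 0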
> 
> My best recommendation is to separate swap and filesystem devices.
> While I didn't do it that way, I still separate them through bcache
> and thus decouple fs access and swap access although they are on the
> same physical devices. My bcache is big enough that most accesses
> go to the SSD only. I enabled write-back to get that effect for
> writes as well.
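
Switching bcache to write-back is a one-liner via sysfs, assuming the
device shows up as bcache0:

  echo writeback > /sys/block/bcache0/bcache/cache_mode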
> 
> If you cannot physically split swap from fs, a tmpfs setup for
> portage may not be recommended (unless you have a lot of memory, like
> 16GB or above). But YMMV.
> 
> Still, I recommend it for /tmp, especially if your system is on SSD.
> Unix semantics suggest that /tmp is not expected to survive reboots
> anyway (in contrast, /var/tmp is expected to survive reboots), so
> tmpfs is the logical choice for /tmp.
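
With systemd that's simply the shipped tmp.mount unit; without
systemd, a plain fstab line does the same (the size is an example):

  # /etc/fstab
  tmpfs  /tmp  tmpfs  nosuid,nodev,size=4G,mode=1777  0 0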


-- 
Regards,
Kai

Replies to list-only preferred.

