Re: [gentoo-user] tmp on tmpfs

2017-05-31 Thread Mick
On Wednesday 31 May 2017 12:36:37 Kent Fredric wrote:
> On Sun, 28 May 2017 11:07:03 +0100
> 
> Mick  wrote:
> > Did you also have zbud enabled at the time?
> 
> Historical kernel configs say yes:
> 
> xzcat /root/kernels/04.04.26-gentoo/2016-11-30-23-33-29_success.xz | grep -E "Z(SWAP|BUD)"
> CONFIG_ZSWAP=y
> CONFIG_ZBUD=y
> 
> Though I should mention that box has other issues on top of this, which
> zswap could exacerbate and which are only occasionally a problem without
> it.
> 
> But all of these used to trigger kernel panics:
> 
> [1262560.644640] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
> [1262560.644750] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
> [1262560.644860] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[snip ...]

Fair enough.  I didn't have any such problems here, but I noticed desktop 
latency going through the roof when paging started taking place.  I have 
disabled zswap and will see if things improve.

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] tmp on tmpfs

2017-05-30 Thread Kent Fredric
On Sun, 28 May 2017 11:07:03 +0100
Mick  wrote:

> Did you also have zbud enabled at the time?

Historical kernel configs say yes:

xzcat /root/kernels/04.04.26-gentoo/2016-11-30-23-33-29_success.xz  | grep -E 
"Z(SWAP|BUD)"
CONFIG_ZSWAP=y
CONFIG_ZBUD=y

Though I should mention that box has other issues on top of this, which 
zswap could exacerbate and which are only occasionally a problem without it.

But all of these used to trigger kernel panics:

[1262560.644640] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262560.644750] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262560.644860] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262560.644970] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.614082] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.614213] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.614321] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656214] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656329] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656440] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656550] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656660] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.656770] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1262566.670106] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1349283.357400] ksoftirqd/0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1606358.941209] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941565] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941680] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941789] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.941896] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942013] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942120] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942226] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942331] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1606358.942469] irq/30-eth0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
[1607776.644830] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1607776.687657] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1612837.743021] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1658262.328936] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1666011.039154] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1668636.093637] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1669722.355688] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680913.653645] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680919.640022] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680962.743563] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1680962.755535] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1681008.201625] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1681008.513501] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690596.427305] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690596.427499] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690596.435733] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1690851.884134] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691003.944968] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691037.167644] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691037.173233] irq/30-eth0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[1691386.668001] irq/30-eth0: page allocation failure: 

Re: [gentoo-user] tmp on tmpfs

2017-05-28 Thread Mick
On Friday 26 May 2017 10:36:40 Kent Fredric wrote:
> On Wed, 24 May 2017 15:45:45 +0300
> 
> Andrew Savchenko  wrote:
> > - smaller CPU overhead: not every I/O is compressed, e.g. if there is
> > still enough RAM available it is used without compression overhead as
> > usual, but if memory runs short, swapped-out pages are compressed
> > instead of being swapped out to disk;
> 
> I found the opposite problem somehow. The CPU started becoming frequently
> pegged in zswap for no obvious reason, while the underlying IO that zswap
> was doing was only measurable in kB/s, far, far, far below the noise
> thresholds and by no means a strain on even my crappy spinning-rust-based
> swap.
> 
> And to add to that, zswap introduced general protection faults and kernel
> panics.
> 
> So nah, I'm glad I turned that off, it was a huge mistake.

Did you also have zbud enabled at the time?

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] tmp on tmpfs

2017-05-25 Thread Kent Fredric
On Wed, 24 May 2017 15:45:45 +0300
Andrew Savchenko  wrote:

> - smaller CPU overhead: not every I/O is compressed, e.g. if there is
> still enough RAM available it is used without compression overhead as
> usual, but if memory runs short, swapped-out pages are compressed
> instead of being swapped out to disk;

I found the opposite problem somehow. The CPU started becoming frequently
pegged in zswap for no obvious reason, while the underlying IO that zswap
was doing was only measurable in kB/s, far, far, far below the noise
thresholds and by no means a strain on even my crappy spinning-rust-based swap.

And to add to that, zswap introduced general protection faults and kernel 
panics.

So nah, I'm glad I turned that off, it was a huge mistake.


pgplFWP7n2s17.pgp
Description: OpenPGP digital signature


Re: [gentoo-user] tmp on tmpfs

2017-05-24 Thread R0b0t1
On Wed, May 24, 2017 at 12:16 AM, Ian Zimmerman  wrote:
> So what are gentoo users' opinions on this matter of faith?
>

On Wed, May 24, 2017 at 12:34 AM,   wrote:
> Either way, it'd be nice if someone actually benchmarked this.
>

I don't have exhaustive benchmarks, but moving PORTAGE_TMPDIR to a
tmpfs makes builds at least an order of magnitude faster. For general
usage with /tmp you may or may not notice a difference, but the lack of
normal IO overhead can only make it faster.
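For anyone wanting to try this, a minimal sketch of the fstab side (the mount point matches Portage's default PORTAGE_TMPDIR of /var/tmp; the size and options are assumptions to tune for your RAM and largest builds):

```shell
# /etc/fstab -- keep Portage's build area in RAM (size is an assumption;
# large packages such as browsers can need considerably more)
tmpfs   /var/tmp/portage   tmpfs   size=8G,uid=portage,gid=portage,mode=775,noatime   0 0
```

After adding the line, `mount /var/tmp/portage` activates it without a reboot; packages too big for the cap can be given a different build directory per-package via package.env.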



Re: [gentoo-user] tmp on tmpfs

2017-05-24 Thread Andrew Savchenko
Hi,

On Wed, 24 May 2017 05:34:09 -0400 Rich Freeman wrote:
> On Wed, May 24, 2017 at 1:16 AM, Ian Zimmerman  wrote:
> >
> > I have long been in the camp that thinks tmpfs for /tmp has no
> > advantages (and may have disadvantages) over a normal filesystem like
> > ext3, because the files there are normally so small that they will stay
> > in the page cache 100% of the time.
> >
> 
> The file being in the page cache only speeds up reads of the file.  On
> a conventional filesystem the file will still be forced to be
> committed to disk within 30 seconds, or whatever you've set your max
> writeback delay to.  That means guaranteed disk write IO.  If the
> drive is mostly idle it will have no impact on performance, but if the
> disk is fairly busy then it will, especially for spinning disks.  For
> an SSD /tmp would be a source of erase cycles (which also have
> performance implications, but there it is more of a wear issue).  When
> the file is removed that would also generate write IO.
> 
> The flip side is that on most systems /tmp probably doesn't get THAT much IO.
> 
> On Gentoo doing your builds in tmpfs definitely has a large
> performance impact, because there are a lot of files created during
> the build process that are sizable but which don't end up getting
> installed (object files mostly).  Plus you have the extraction of the
> source itself.  For a typical build that is many MB of data being
> extracted and then deleted after maybe a minute, which is a lot of
> useless IO, especially when the actual install is probably creating a
> fairly sizable IO queue on its own.
> 
> To avoid a reply, I'll also note that tmpfs does NOT require swap to
> work.  It does of course require plenty of memory, and as with any
> situation where lots of memory is required swap may be useful, but it
> is not a requirement.
> 
> Others have mentioned zram.  I've used it, but unless something has
> changed one of its limitations is that it can't give up memory.  That
> is less of an issue if you're using swap since it can be swapped out
> if idle.  However, if you're not using swap then you're potentially
> giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
> is full most of the time (which I doubt is typically the case).
 
For similar needs I found zswap the most suitable, it's so much
better than zram:

- smaller CPU overhead: not every I/O is compressed, e.g. if there is
still enough RAM available it is used without compression overhead as
usual, but if memory runs short, swapped-out pages are compressed
instead of being swapped out to disk;

- no size limitation: if the zswap pool is full, data is evicted to
swap; the same happens with incompressible pages;

- pool size and compression type can be adjusted dynamically; I
prefer z3fold.
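For reference, those knobs live in sysfs as zswap module parameters; a sketch of adjusting them at runtime (the specific values here are assumptions, and strictly speaking z3fold is the pool allocator rather than a compression algorithm):

```shell
# Tune zswap at runtime (root required); the same parameters can also be
# set on the kernel command line, e.g. zswap.enabled=1 zswap.zpool=z3fold.
echo 1      > /sys/module/zswap/parameters/enabled
echo z3fold > /sys/module/zswap/parameters/zpool             # pool allocator
echo lz4    > /sys/module/zswap/parameters/compressor        # compression algorithm
echo 20     > /sys/module/zswap/parameters/max_pool_percent  # pool cap, % of RAM
grep -r . /sys/module/zswap/parameters/                      # show current settings
```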

So I have a normal tmpfs on /tmp (and on /var/tmp on hosts with lots of
RAM), and both tmpfs and running daemons/apps can benefit from
compressed memory for rarely used pages while enjoying full RAM
speed for frequently accessed ones.

Best regards,
Andrew Savchenko


pgpr8SXNUHm1t.pgp
Description: PGP signature


Re: [gentoo-user] tmp on tmpfs

2017-05-24 Thread Rich Freeman
On Wed, May 24, 2017 at 5:43 AM,   wrote:
> On 17-05-24 at 05:34, Rich Freeman wrote:
> [..]
>> Others have mentioned zram.  I've used it, but unless something has
>> changed one of its limitations is that it can't give up memory.  That
>> is less of an issue if you're using swap since it can be swapped out
>> if idle.  However, if you're not using swap then you're potentially
>> giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
>> is full most of the time (which I doubt is typically the case).
> Seems to work fine here (with kernels newer than the late 3.x when I started 
> using zram):
>

Thanks.  Useful to know.  Perhaps something changed.  Then again, I
don't think I actually tested it so it could have also been
misinformation somewhere.

-- 
Rich



Re: [gentoo-user] tmp on tmpfs

2017-05-24 Thread gentoo-user
On 17-05-24 at 05:34, Rich Freeman wrote:
[..]
> Others have mentioned zram.  I've used it, but unless something has
> changed one of its limitations is that it can't give up memory.  That
> is less of an issue if you're using swap since it can be swapped out
> if idle.  However, if you're not using swap then you're potentially
> giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
> is full most of the time (which I doubt is typically the case).
Seems to work fine here (with kernels newer than the late 3.x when I started 
using zram):

radiocarbon:~% dd if=/dev/urandom of=/tmp/foo
^C3405370+0 records in
3405370+0 records out
1743549440 bytes (1.7 GB, 1.6 GiB) copied, 10.8268 s, 161 MB/s
dd if=/dev/urandom of=/tmp/foo  8 MiB 10.853 (user: 0.339, kernel: 10.482)
radiocarbon:~% zramctl
NAME   ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4 2G4K   64B4K   8 [SWAP]
/dev/zram1 lz4 3G  1.6G  1.6G  1.6G   8 /tmp
radiocarbon:~% free -m
  totalusedfree  shared  buff/cache   available
Mem:   79203096  61 22847634487
Swap:  2047   02047
radiocarbon:~% rm /tmp/foo
radiocarbon:~% zramctl
NAME   ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4 2G4K   64B4K   8 [SWAP]
/dev/zram1 lz4 3G  3.9M1M  1.3M   8 /tmp
radiocarbon:~% free -m
  totalusedfree  shared  buff/cache   available
Mem:   792014123458 22930496171
Swap:  2047   02047

-- 
Simon Thelen



Re: [gentoo-user] tmp on tmpfs

2017-05-24 Thread Rich Freeman
On Wed, May 24, 2017 at 1:16 AM, Ian Zimmerman  wrote:
>
> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
>

The file being in the page cache only speeds up reads of the file.  On
a conventional filesystem the file will still be forced to be
committed to disk within 30 seconds, or whatever you've set your max
writeback delay to.  That means guaranteed disk write IO.  If the
drive is mostly idle it will have no impact on performance, but if the
disk is fairly busy then it will, especially for spinning disks.  For
an SSD /tmp would be a source of erase cycles (which also have
performance implications, but there it is more of a wear issue).  When
the file is removed that would also generate write IO.
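The 30-second figure mentioned above is the kernel's default dirty-page writeback expiry, visible via sysctl (exact defaults can vary by distro and kernel version):

```shell
# Age at which dirty page-cache data becomes eligible for writeback
# (centiseconds; 3000 = 30 seconds on most default configs)
cat /proc/sys/vm/dirty_expire_centisecs
# How often the flusher threads wake up to look for expired dirty data
cat /proc/sys/vm/dirty_writeback_centisecs
```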

The flip side is that on most systems /tmp probably doesn't get THAT much IO.

On Gentoo doing your builds in tmpfs definitely has a large
performance impact, because there are a lot of files created during
the build process that are sizable but which don't end up getting
installed (object files mostly).  Plus you have the extraction of the
source itself.  For a typical build that is many MB of data being
extracted and then deleted after maybe a minute, which is a lot of
useless IO, especially when the actual install is probably creating a
fairly sizable IO queue on its own.

To avoid a reply, I'll also note that tmpfs does NOT require swap to
work.  It does of course require plenty of memory, and as with any
situation where lots of memory is required swap may be useful, but it
is not a requirement.
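A tmpfs size cap keeps that memory requirement bounded; a sketch (the 2G figure is just an example):

```shell
# /etc/fstab -- /tmp in RAM with a hard ceiling; tmpfs allocates pages
# lazily, so the cap is a limit, not an up-front reservation
tmpfs   /tmp   tmpfs   size=2G,mode=1777,noatime   0 0
```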

Others have mentioned zram.  I've used it, but unless something has
changed one of its limitations is that it can't give up memory.  That
is less of an issue if you're using swap since it can be swapped out
if idle.  However, if you're not using swap then you're potentially
giving up a chunk of RAM to do it, though less RAM than a tmpfs if it
is full most of the time (which I doubt is typically the case).

-- 
Rich



Re: [gentoo-user] tmp on tmpfs

2017-05-24 Thread Andrew Tselischev
On Tue, May 23, 2017 at 10:16:56PM -0700, Ian Zimmerman wrote:
> So what are gentoo users' opinions on this matter of faith?
> 
> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
> 
> But I see that tmpfs is the default with systemd.  Surely they have a
> good reason for this? :)

For most purposes, it avoids thrashing your storage media with useless I/O.
If your purposes are unusual, by all means change it back.



Re: [gentoo-user] tmp on tmpfs

2017-05-23 Thread gentoo-user
On 17-05-23 at 22:16, Ian Zimmerman wrote:
> So what are gentoo users' opinions on this matter of faith?
I use an ext4 partition backed by zram. Gives me ~3x compression on the
things I normally have lying around there (plain text files) and ensures
that anything I throw there (or programs throw there) gets cleaned up on
reboot.
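A sketch of how such an ext4-on-zram /tmp might be assembled at boot (device name, algorithm, and size are assumptions; util-linux's zramctl is assumed available):

```shell
# Create a compressed RAM-backed block device and put an ext4 /tmp on it
# (root required; typically run from a boot-time service or local.d script)
modprobe zram
zramctl /dev/zram0 --algorithm lz4 --size 3G   # compressed backing store
mkfs.ext4 -q /dev/zram0                        # throwaway fs, rebuilt each boot
mount /dev/zram0 /tmp
chmod 1777 /tmp                                # restore /tmp's sticky permissions
```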

> I have long been in the camp that thinks tmpfs for /tmp has no
> advantages (and may have disadvantages) over a normal filesystem like
> ext3, because the files there are normally so small that they will stay
> in the page cache 100% of the time.
I've never actually benchmarked this. Most of the things I notice that
tend to end up there are temporary build files generated during
configure stages or temporary log files used by various programs (clang
static analyzer). Even if the entire file stays in the page cache, it'll
still generate IO overhead and extra seeks that can slow down the rest
of your system (unless your /tmp is on a different hard drive): on
spinning rust that means slowdowns, while on an SSD it'll eat away at
your write endurance (which you may or may not have to worry about).

> But I see that tmpfs is the default with systemd.  Surely they have a
> good reason for this? :)
Or someone decided they liked the idea, made it the default, and
nobody ever complained (or those who did were told to just change it on
their system).

Either way, it'd be nice if someone actually benchmarked this.

-- 
Simon Thelen



[gentoo-user] tmp on tmpfs

2017-05-23 Thread Ian Zimmerman
So what are gentoo users' opinions on this matter of faith?

I have long been in the camp that thinks tmpfs for /tmp has no
advantages (and may have disadvantages) over a normal filesystem like
ext3, because the files there are normally so small that they will stay
in the page cache 100% of the time.

But I see that tmpfs is the default with systemd.  Surely they have a
good reason for this? :)

-- 
Please *no* private Cc: on mailing lists and newsgroups
Personal signed mail: please _encrypt_ and sign
Don't clear-text sign:
http://primate.net/~itz/blog/the-problem-with-gpg-signatures.html