Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread Adrian Chadd
[snip]

It's not rsync itself. It's just triggering some odd behaviour.

I've poked alc; I'll work with him to see if this can be figured out.

Thanks! I'm glad I'm not the only person who has seen this behaviour!


-adrian
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"


Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread Jeffrey Bouquet


On Tue, 15 Mar 2016 09:30:11 -0600, Ian Lepore  wrote:

> On Tue, 2016-03-15 at 07:20 -0700, Jeffrey Bouquet wrote:
> > rsync... see bottom posting
> > 
> > On Tue, 15 Mar 2016 07:43:46 +0100, olli hauer  wrote:
> > 
> > > On 2016-03-14 15:19, Ian Lepore wrote:
> > > > On Sun, 2016-03-13 at 19:08 -0700, Adrian Chadd wrote:
> > > > > On 13 March 2016 at 18:51, Mark Johnston 
> > > > > wrote:
> > > > > > On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
> > > > > > > Hi,
> > > > > > > 
> > > > > > > I can reproduce this by doing a mkimage on a large
> > > > > > > destination
> > > > > > > file
> > > > > > > image. it looks like it causes all the desktop processes to
> > > > > > > get
> > > > > > > paged
> > > > > > > out whilst it's doing so, and then the whole UI freezes
> > > > > > > until it
> > > > > > > catches up.
> > > > > > 
> > > > > > mkimg(1) maps the destination file with MAP_NOSYNC, so if
> > > > > > it's
> > > > > > larger
> > > > > > than RAM, I think it'll basically force the pagedaemon to
> > > > > > write out
> > > > > > the
> > > > > > image as it tries to reclaim pages from the inactive queue.
> > > > > > This
> > > > > > can
> > > > > > cause stalls if the pagedaemon blocks waiting for some I/O to
> > > > > > complete.
> > > > > > The user/alc/PQ_LAUNDRY branch helps alleviate this problem
> > > > > > by
> > > > > > using a
> > > > > > different thread to launder dirty pages. I use mkimg on
> > > > > > various
> > > > > > desktop
> > > > > > machines to build bhyve images and have noticed the problem
> > > > > > you're
> > > > > > describing; PQ_LAUNDRY helps quite a bit in that case. But I
> > > > > > don't
> > > > > > know
> > > > > > why this would be a new problem.
> > > > > > 
> > > > > 
> > > > > That's why I'm confused. I just know that it didn't used to
> > > > > cause the
> > > > > whole UI to hang due to paging.
> > > > > 
> > > > 
> > > > I've been noticing this too.  This machine runs 10-stable and
> > > > this use
> > > > of the swap began happening recently when I updated from 10
> > > > -stable
> > > > around the 10.2 release time to 10-stable right about when the
> > > > 10.3
> > > > code freeze began.
> > > > 
> > > > In my case I have no zfs anything here.  I noticed the problem
> > > > bigtime
> > > > yesterday when rsync was syncing a ufs filesystem of about 500GB
> > > > from
> > > > one disk to another (probably 70-80 GB actually needed copying). 
> > > >  My
> > > > desktop apps were noticeably unresponsive when I activated a
> > > > window that
> > > > had been idle for a while (like it would take a couple seconds
> > > > for the
> > > > app to begin responding).  I could see lots of swap In happening
> > > > in top
> > > > during this unresponsiveness, and noticeable amounts of Out
> > > > activity
> > > > when nothing was happening except the rsync.
> > > > 
> > > > This is amd64, 12GB ram, 16GB swap, a tmpfs had about 400MB in it
> > > > at
> > > > the time.  Prior to the update around the 10.3 freeze, this
> > > > machine
> > > > would never touch the swap no matter what workload I threw at it
> > > > (this
> > > > rsync stuff happens every day, it's the usual workload).
> > > > 
> > > 
> > > I'm not sure if it is the same problem, or port related.
> > > 
> > > On two systems without zfs but with many files e.g. svn servers I
> > > see now
> > > from time to time they are running out of swap.
> > > 
> > >  kernel: swap_pager_getswapspace(9): failed
> > >  kernel: swap_pager_getswapspace(16): failed
> > >  ...
> > > 
> > > It also happened on one system during the nightly periodic tasks
> > > holding
> > > only millions of backup files.
> > > 
> > > $ freebsd-version -ku
> > >   10.2-RELEASE-p9
> > >   10.2-RELEASE-p13
> > > 
> > > 
> > 
> > 
> > Just a point I've brought up elsewhere...
> > I've, if I recall, wrecked several filesystems (although EIDE) using
> > rsync at the normal bus rate, and sometimes
> > thumbdrives with whatever filesystem type on them.
> > 
> > I settled on --bwlimit=1500 as the maximum for unattended rsync
> > usage, and almost every day use --bwlimit=700.
> > 
> > The latter enables several resource-intensive processes ( music,
> > classical music videos, svn, pkg, browsing, etc) to
> > proceed apace concurrently on the desktop (SATA not EIDE) with nary a
> > hang nor slowdown.
> > 
> > If I recall, the usual speed is 1 so that is less than ten
> > percent of the usual speed.
> > 
> > YMMV.
> > 
> > J.
> > 
> > PS as an afterthought, it would be useful if that were more prominent
> > on the man page somewhere or even
> > in the port's pkg-message or pkg-description.  
> > The SATA more robust than EIDE on 

Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread olli hauer
...

>> Just a point I've brought up elsewhere...
>> I've, if I recall, wrecked several filesystems (although EIDE) using
>> rsync at the normal bus rate, and sometimes
>> thumbdrives with whatever filesystem type on them.
>>
>> I settled on --bwlimit=1500 as the maximum for unattended rsync
>> usage, and almost every day use --bwlimit=700.

It also happened on VMs where the host is connected via FC to the
storage, but only on the FreeBSD VMs.


>> The latter enables several resource-intensive processes ( music,
>> classical music videos, svn, pkg, browsing, etc) to
>> proceed apace concurrently on the desktop (SATA not EIDE) with nary a
>> hang nor slowdown.

I don't have any *NIX system with a GUI, only around 110+ headless
systems, and half of them are running FreeBSD.


> I have no real idea what any of that is about, but before it turns into
> some kind of "rsync is bad" mythology, let me just say that I've been
> using rsync to copy gigabytes of backup data every day for years now. 
>  I've never had any kind of problem, especially system responsiveness
> problems, until this increased swapfile activity thing started
> happening on 10-stable in the past few months.
> 
> To reiterate: rsync is not in any way at fault here, and any suggestion
> that the unresponsiveness should be "fixed" by screwing around with
> rsync parms that have worked fine for a decade is just something I
> completely reject.
> 
> I'm sure I'd see the same kind of increased swapping with ANY process
> that read and wrote gigabytes of data in a short period of time.  And
> that's what this thread is about:  What has changed to cause this
> regression that multiple people are seeing where lots of IO now drives
> an unusual amount of swapfile activity on systems that used to NEVER
> write anything to swap?

All those systems have been running for years already; to me it looks
more like a missing *free* (a memory leak).

Looking at the net/rsync history, the last update was in Dec. 2015.
Perhaps it is worth testing net/rsync r359474 for a while to get a
comparison, but Gary reported the issue using `cp', not rsync.

-- 
olli


Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread Gary Jennejohn
On Sun, 13 Mar 2016 18:33:20 -0700
Mark Johnston  wrote:

> On Sat, Mar 12, 2016 at 09:38:35AM +0100, Gary Jennejohn wrote:
> > In the course of the last year or so the behavior of the vm system
> > has changed in regard to how aggressively Inact memory is recycled.
> > 
> > My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
> > from one file system to another one.  
> 
> How exactly are you copying them? How large are the files you're
> copying? Which filesystems are in use?
> 

cp(1) from one UFS filesystem to another for backup.

The files I copied were all movies on the order of 2GB to 4GB.  So
it seems unlikely that cp(1) would try to mmap them.

The aggregate total was on the order of 300GB.

To add further detail - I tend to keep an eye on top while doing
copies like this.  Previously, I would observe Inact getting up to
about 6GB, but it would quickly drop by about 3GB and Free
would increase correspondingly.

Now I see that Inact is pretty much stuck at 6GB and Free only
grows by a few hundred MB at best, which is quickly used up.

In the good old days large file copies would only cause a few
MB to be swapped out, but now it's on the order of 100MB.


-- 
Gary Jennejohn


Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread Ian Lepore
On Tue, 2016-03-15 at 07:20 -0700, Jeffrey Bouquet wrote:
> rsync... see bottom posting
> 
> On Tue, 15 Mar 2016 07:43:46 +0100, olli hauer  wrote:
> 
> > On 2016-03-14 15:19, Ian Lepore wrote:
> > > On Sun, 2016-03-13 at 19:08 -0700, Adrian Chadd wrote:
> > > > On 13 March 2016 at 18:51, Mark Johnston 
> > > > wrote:
> > > > > On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
> > > > > > Hi,
> > > > > > 
> > > > > > I can reproduce this by doing a mkimage on a large
> > > > > > destination
> > > > > > file
> > > > > > image. it looks like it causes all the desktop processes to
> > > > > > get
> > > > > > paged
> > > > > > out whilst it's doing so, and then the whole UI freezes
> > > > > > until it
> > > > > > catches up.
> > > > > 
> > > > > mkimg(1) maps the destination file with MAP_NOSYNC, so if
> > > > > it's
> > > > > larger
> > > > > than RAM, I think it'll basically force the pagedaemon to
> > > > > write out
> > > > > the
> > > > > image as it tries to reclaim pages from the inactive queue.
> > > > > This
> > > > > can
> > > > > cause stalls if the pagedaemon blocks waiting for some I/O to
> > > > > complete.
> > > > > The user/alc/PQ_LAUNDRY branch helps alleviate this problem
> > > > > by
> > > > > using a
> > > > > different thread to launder dirty pages. I use mkimg on
> > > > > various
> > > > > desktop
> > > > > machines to build bhyve images and have noticed the problem
> > > > > you're
> > > > > describing; PQ_LAUNDRY helps quite a bit in that case. But I
> > > > > don't
> > > > > know
> > > > > why this would be a new problem.
> > > > > 
> > > > 
> > > > That's why I'm confused. I just know that it didn't used to
> > > > cause the
> > > > whole UI to hang due to paging.
> > > > 
> > > 
> > > I've been noticing this too.  This machine runs 10-stable and
> > > this use
> > > of the swap began happening recently when I updated from 10
> > > -stable
> > > around the 10.2 release time to 10-stable right about when the
> > > 10.3
> > > code freeze began.
> > > 
> > > In my case I have no zfs anything here.  I noticed the problem
> > > bigtime
> > > yesterday when rsync was syncing a ufs filesystem of about 500GB
> > > from
> > > one disk to another (probably 70-80 GB actually needed copying). 
> > >  My
> > > desktop apps were noticeably unresponsive when I activated a
> > > window that
> > > had been idle for a while (like it would take a couple seconds
> > > for the
> > > app to begin responding).  I could see lots of swap In happening
> > > in top
> > > during this unresponsiveness, and noticeable amounts of Out
> > > activity
> > > when nothing was happening except the rsync.
> > > 
> > > This is amd64, 12GB ram, 16GB swap, a tmpfs had about 400MB in it
> > > at
> > > the time.  Prior to the update around the 10.3 freeze, this
> > > machine
> > > would never touch the swap no matter what workload I threw at it
> > > (this
> > > rsync stuff happens every day, it's the usual workload).
> > > 
> > 
> > I'm not sure if it is the same problem, or port related.
> > 
> > On two systems without zfs but with many files e.g. svn servers I
> > see now
> > from time to time they are running out of swap.
> > 
> >  kernel: swap_pager_getswapspace(9): failed
> >  kernel: swap_pager_getswapspace(16): failed
> >  ...
> > 
> > It also happened on one system during the nightly periodic tasks
> > holding
> > only millions of backup files.
> > 
> > $ freebsd-version -ku
> >   10.2-RELEASE-p9
> >   10.2-RELEASE-p13
> > 
> > 
> 
> 
> Just a point I've brought up elsewhere...
> I've, if I recall, wrecked several filesystems (although EIDE) using
> rsync at the normal bus rate, and sometimes
> thumbdrives with whatever filesystem type on them.
> 
> I settled on --bwlimit=1500 as the maximum for unattended rsync
> usage, and almost every day use --bwlimit=700.
> 
> The latter enables several resource-intensive processes ( music,
> classical music videos, svn, pkg, browsing, etc) to
> proceed apace concurrently on the desktop (SATA not EIDE) with nary a
> hang nor slowdown.
> 
> If I recall, the usual speed is 1 so that is less than ten
> percent of the usual speed.
> 
> YMMV.
> 
> J.
> 
> PS as an afterthought, it would be useful if that were more prominent
> on the man page somewhere or even
> in the port's pkg-message or pkg-description.  
> SATA has been more robust than EIDE on the FreeBSD systems I've come
> across, though I prefer not to dwell on that because I believe it to be
> the fault of EIDE firmware rather than FreeBSD code.
> FWIW.

I have no real idea what any of that is about, but before it turns into
some kind of "rsync is bad" mythology, let me just say that I've been
using rsync to copy gigabytes of backup data every day 

Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread Jeffrey Bouquet
rsync... see bottom posting

On Tue, 15 Mar 2016 07:43:46 +0100, olli hauer  wrote:

> On 2016-03-14 15:19, Ian Lepore wrote:
> > On Sun, 2016-03-13 at 19:08 -0700, Adrian Chadd wrote:
> >> On 13 March 2016 at 18:51, Mark Johnston  wrote:
> >>> On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
>  Hi,
> 
>  I can reproduce this by doing a mkimage on a large destination
>  file
>  image. it looks like it causes all the desktop processes to get
>  paged
>  out whilst it's doing so, and then the whole UI freezes until it
>  catches up.
> >>>
> >>> mkimg(1) maps the destination file with MAP_NOSYNC, so if it's
> >>> larger
> >>> than RAM, I think it'll basically force the pagedaemon to write out
> >>> the
> >>> image as it tries to reclaim pages from the inactive queue. This
> >>> can
> >>> cause stalls if the pagedaemon blocks waiting for some I/O to
> >>> complete.
> >>> The user/alc/PQ_LAUNDRY branch helps alleviate this problem by
> >>> using a
> >>> different thread to launder dirty pages. I use mkimg on various
> >>> desktop
> >>> machines to build bhyve images and have noticed the problem you're
> >>> describing; PQ_LAUNDRY helps quite a bit in that case. But I don't
> >>> know
> >>> why this would be a new problem.
> >>>
> >>
> >> That's why I'm confused. I just know that it didn't used to cause the
> >> whole UI to hang due to paging.
> >>
> > 
> > I've been noticing this too.  This machine runs 10-stable and this use
> > of the swap began happening recently when I updated from 10-stable
> > around the 10.2 release time to 10-stable right about when the 10.3
> > code freeze began.
> > 
> > In my case I have no zfs anything here.  I noticed the problem bigtime
> > yesterday when rsync was syncing a ufs filesystem of about 500GB from
> > one disk to another (probably 70-80 GB actually needed copying).  My
> > desktop apps were noticeably unresponsive when I activated a window that
> > had been idle for a while (like it would take a couple seconds for the
> > app to begin responding).  I could see lots of swap In happening in top
> > during this unresponsiveness, and noticeable amounts of Out activity
> > when nothing was happening except the rsync.
> > 
> > This is amd64, 12GB ram, 16GB swap, a tmpfs had about 400MB in it at
> > the time.  Prior to the update around the 10.3 freeze, this machine
> > would never touch the swap no matter what workload I threw at it (this
> > rsync stuff happens every day, it's the usual workload).
> > 
> 
> I'm not sure if it is the same problem, or port related.
> 
> On two systems without zfs but with many files e.g. svn servers I see now
> from time to time they are running out of swap.
> 
>  kernel: swap_pager_getswapspace(9): failed
>  kernel: swap_pager_getswapspace(16): failed
>  ...
> 
> It also happened on one system during the nightly periodic tasks holding
> only millions of backup files.
> 
> $ freebsd-version -ku
>   10.2-RELEASE-p9
>   10.2-RELEASE-p13
> 
> 


Just a point I've brought up elsewhere...
I've, if I recall, wrecked several filesystems (albeit EIDE) by running
rsync at the full bus rate, and sometimes thumbdrives with whatever
filesystem type was on them.

I settled on --bwlimit=1500 as the maximum for unattended rsync usage,
and almost every day use --bwlimit=700.

The latter lets several resource-intensive processes (music, classical
music videos, svn, pkg, browsing, etc.) proceed apace concurrently on
the desktop (SATA, not EIDE) with nary a hang nor slowdown.

If I recall, the usual speed is 1 so that is less than ten percent
of the usual speed.

YMMV.

J.

PS as an afterthought, it would be useful if that were more prominent
on the man page somewhere, or even in the port's pkg-message or
pkg-description.
SATA has been more robust than EIDE on the FreeBSD systems I've come
across, though I prefer not to dwell on that because I believe it to be
the fault of EIDE firmware rather than FreeBSD code.  FWIW.


Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread Gary Jennejohn
On Sun, 13 Mar 2016 16:41:17 +0100
Fabian Keil  wrote:

> Gary Jennejohn  wrote:
> 
> > In the course of the last year or so the behavior of the vm system
> > has changed in regard to how aggressively Inact memory is recycled.
> > 
> > My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
> > from one file system to another one.
> > 
> > Looking at top I observe that there are about 6GB of Inact memory.
> > This value hardly changes.  Instead of aggressively recycling the
> > Inact memory the vm now seems to prefer to swap.  
> 
> Are you using ZFS?
> 

No, only UFS, so it's not due to pressure caused by ZFS.

> > Last year, I can't remember exactly when, the behavior was totally
> > different.  The vm very aggressively recycled Inact memory and,
> > even when copying 100s of GB of files, the system hardly swapped.
> > 
> > It seems rather strange to me that the vm happily allows gigabytes
> > of Inact memory to be present and prefers swapping to recycling.
> >
> > Are there any sysctl's I can set to get the old behavior back?  
> 
> I don't think so.
> 
> I'm currently using this patch set to work around the issue:
> https://www.fabiankeil.de/sourcecode/electrobsd/vm-limit-inactive-memory-more-aggressively.diff
> 
> Patch 4 adds a couple of sysctls that can be used to let the ZFS
> ARC indirectly put pressure on the inactive memory until a given
> target is reached.
> 

Thanks, I'll take a closer look at it.

-- 
Gary Jennejohn


Re: how to recycle Inact memory more aggressively?

2016-03-15 Thread olli hauer
On 2016-03-14 15:19, Ian Lepore wrote:
> On Sun, 2016-03-13 at 19:08 -0700, Adrian Chadd wrote:
>> On 13 March 2016 at 18:51, Mark Johnston  wrote:
>>> On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
 Hi,

 I can reproduce this by doing a mkimage on a large destination
 file
 image. it looks like it causes all the desktop processes to get
 paged
 out whilst it's doing so, and then the whole UI freezes until it
 catches up.
>>>
>>> mkimg(1) maps the destination file with MAP_NOSYNC, so if it's
>>> larger
>>> than RAM, I think it'll basically force the pagedaemon to write out
>>> the
>>> image as it tries to reclaim pages from the inactive queue. This
>>> can
>>> cause stalls if the pagedaemon blocks waiting for some I/O to
>>> complete.
>>> The user/alc/PQ_LAUNDRY branch helps alleviate this problem by
>>> using a
>>> different thread to launder dirty pages. I use mkimg on various
>>> desktop
>>> machines to build bhyve images and have noticed the problem you're
>>> describing; PQ_LAUNDRY helps quite a bit in that case. But I don't
>>> know
>>> why this would be a new problem.
>>>
>>
>> That's why I'm confused. I just know that it didn't used to cause the
>> whole UI to hang due to paging.
>>
> 
> I've been noticing this too.  This machine runs 10-stable and this use
> of the swap began happening recently when I updated from 10-stable
> around the 10.2 release time to 10-stable right about when the 10.3
> code freeze began.
> 
> In my case I have no zfs anything here.  I noticed the problem bigtime
> yesterday when rsync was syncing a ufs filesystem of about 500GB from
> one disk to another (probably 70-80 GB actually needed copying).  My
> desktop apps were noticeably unresponsive when I activated a window that
> had been idle for a while (like it would take a couple seconds for the
> app to begin responding).  I could see lots of swap In happening in top
> during this unresponsiveness, and noticeable amounts of Out activity
> when nothing was happening except the rsync.
> 
> This is amd64, 12GB ram, 16GB swap, a tmpfs had about 400MB in it at
> the time.  Prior to the update around the 10.3 freeze, this machine
> would never touch the swap no matter what workload I threw at it (this
> rsync stuff happens every day, it's the usual workload).
> 

I'm not sure if it is the same problem, or whether it is port-related.

On two systems without ZFS but with many files (e.g. svn servers) I
now see them running out of swap from time to time.

 kernel: swap_pager_getswapspace(9): failed
 kernel: swap_pager_getswapspace(16): failed
 ...

It also happened on one system, holding only millions of backup files,
during the nightly periodic tasks.

$ freebsd-version -ku
  10.2-RELEASE-p9
  10.2-RELEASE-p13




Re: how to recycle Inact memory more aggressively?

2016-03-14 Thread Ian Lepore
On Sun, 2016-03-13 at 19:08 -0700, Adrian Chadd wrote:
> On 13 March 2016 at 18:51, Mark Johnston  wrote:
> > On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
> > > Hi,
> > > 
> > > I can reproduce this by doing a mkimage on a large destination
> > > file
> > > image. it looks like it causes all the desktop processes to get
> > > paged
> > > out whilst it's doing so, and then the whole UI freezes until it
> > > catches up.
> > 
> > mkimg(1) maps the destination file with MAP_NOSYNC, so if it's
> > larger
> > than RAM, I think it'll basically force the pagedaemon to write out
> > the
> > image as it tries to reclaim pages from the inactive queue. This
> > can
> > cause stalls if the pagedaemon blocks waiting for some I/O to
> > complete.
> > The user/alc/PQ_LAUNDRY branch helps alleviate this problem by
> > using a
> > different thread to launder dirty pages. I use mkimg on various
> > desktop
> > machines to build bhyve images and have noticed the problem you're
> > describing; PQ_LAUNDRY helps quite a bit in that case. But I don't
> > know
> > why this would be a new problem.
> > 
> 
> That's why I'm confused. I just know that it didn't used to cause the
> whole UI to hang due to paging.
> 

I've been noticing this too.  This machine runs 10-stable and this use
of the swap began happening recently when I updated from 10-stable
around the 10.2 release time to 10-stable right about when the 10.3
code freeze began.

In my case I have no zfs anything here.  I noticed the problem bigtime
yesterday when rsync was syncing a ufs filesystem of about 500GB from
one disk to another (probably 70-80 GB actually needed copying).  My
desktop apps were noticeably unresponsive when I activated a window that
had been idle for a while (like it would take a couple seconds for the
app to begin responding).  I could see lots of swap In happening in top
during this unresponsiveness, and noticeable amounts of Out activity
when nothing was happening except the rsync.

This is amd64, 12GB ram, 16GB swap, a tmpfs had about 400MB in it at
the time.  Prior to the update around the 10.3 freeze, this machine
would never touch the swap no matter what workload I threw at it (this
rsync stuff happens every day, it's the usual workload).

-- Ian



Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread Adrian Chadd
On 13 March 2016 at 18:51, Mark Johnston  wrote:
> On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
>> Hi,
>>
>> I can reproduce this by doing a mkimage on a large destination file
>> image. it looks like it causes all the desktop processes to get paged
>> out whilst it's doing so, and then the whole UI freezes until it
>> catches up.
>
> mkimg(1) maps the destination file with MAP_NOSYNC, so if it's larger
> than RAM, I think it'll basically force the pagedaemon to write out the
> image as it tries to reclaim pages from the inactive queue. This can
> cause stalls if the pagedaemon blocks waiting for some I/O to complete.
> The user/alc/PQ_LAUNDRY branch helps alleviate this problem by using a
> different thread to launder dirty pages. I use mkimg on various desktop
> machines to build bhyve images and have noticed the problem you're
> describing; PQ_LAUNDRY helps quite a bit in that case. But I don't know
> why this would be a new problem.
>

That's why I'm confused. I just know that it didn't use to cause the
whole UI to hang due to paging.



-adrian


Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread Mark Johnston
On Sun, Mar 13, 2016 at 06:33:46PM -0700, Adrian Chadd wrote:
> Hi,
> 
> I can reproduce this by doing a mkimage on a large destination file
> image. it looks like it causes all the desktop processes to get paged
> out whilst it's doing so, and then the whole UI freezes until it
> catches up.

mkimg(1) maps the destination file with MAP_NOSYNC, so if it's larger
than RAM, I think it'll basically force the pagedaemon to write out the
image as it tries to reclaim pages from the inactive queue. This can
cause stalls if the pagedaemon blocks waiting for some I/O to complete.
The user/alc/PQ_LAUNDRY branch helps alleviate this problem by using a
different thread to launder dirty pages. I use mkimg on various desktop
machines to build bhyve images and have noticed the problem you're
describing; PQ_LAUNDRY helps quite a bit in that case. But I don't know
why this would be a new problem.

> 
> I'll poke alc and others to see if I can figure out how to trace
> what's going on. eg, are we running out of free pages and instead of
> waiting, deciding we're okay just paging out binaries/libraries so we
> can issue more dirty write io..
> 
> 
> -a


Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread Adrian Chadd
Hi,

I can reproduce this by running mkimg on a large destination file
image. It looks like it causes all the desktop processes to get paged
out whilst it's doing so, and then the whole UI freezes until it
catches up.

I'll poke alc and others to see if I can figure out how to trace
what's going on. E.g., are we running out of free pages and, instead of
waiting, deciding we're okay just paging out binaries/libraries so we
can issue more dirty write I/O?


-a



Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread Mark Johnston
On Sat, Mar 12, 2016 at 09:38:35AM +0100, Gary Jennejohn wrote:
> In the course of the last year or so the behavior of the vm system
> has changed in regard to how aggressively Inact memory is recycled.
> 
> My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
> from one file system to another one.

How exactly are you copying them? How large are the files you're
copying? Which filesystems are in use?

> 
> Looking at top I observe that there are about 6GB of Inact memory.
> This value hardly changes.  Instead of aggressively recycling the
> Inact memory the vm now seems to prefer to swap.

The VM will swap a small number of dirty pages as it encounters them
during inactive queue scans. If the system is swapping more than it used
to, it's presumably because the pagedaemon is encountering more dirty
pages in the inactive queue than it used to. This could simply be the
result of external factors (e.g., the applications you're running are
generating more dirty pages than they were a year ago for some reason).
On the other hand, some of the changes to remove object page cache uses
could cause this: cache pages would be reused in preference to inactive
pages, so if pages that were previously cached are now being enqueued at
the end of the inactive queue, I'd expect to see more swapping than
before. For example, with r281079+r286255, we deactivate pages that
precede a faulted page rather than caching them. I think this would
result in more churn of the inactive queue, which could lead to
increased swap usage. cp(1), for instance, will mmap small source files,
so the above-mentioned changes might be relevant if you're copying many
small files that aren't already resident in memory. But I think you
need to be more specific about your setup.

> 
> Last year, I can't remember exactly when, the behavior was totally
> different.  The vm very aggressively recycled Inact memory and,
> even when copying 100s of GB of files, the system hardly swapped.
> 
> It seems rather strange to me that the vm happily allows gigabytes
> of Inact memory to be present and prefers swapping to recycling.


Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread Fabian Keil
Gary Jennejohn  wrote:

> In the course of the last year or so the behavior of the vm system
> has changed in regard to how aggressively Inact memory is recycled.
> 
> My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
> from one file system to another one.
> 
> Looking at top I observe that there are about 6GB of Inact memory.
> This value hardly changes.  Instead of aggressively recycling the
> Inact memory the vm now seems to prefer to swap.

Are you using ZFS?

> Last year, I can't remember exactly when, the behavior was totally
> different.  The vm very aggressively recycled Inact memory and,
> even when copying 100s of GB of files, the system hardly swapped.
> 
> It seems rather strange to me that the vm happily allows gigabytes
> of Inact memory to be present and prefers swapping to recycling.
>
> Are there any sysctl's I can set to get the old behavior back?

I don't think so.

I'm currently using this patch set to work around the issue:
https://www.fabiankeil.de/sourcecode/electrobsd/vm-limit-inactive-memory-more-aggressively.diff

Patch 4 adds a couple of sysctls that can be used to let the ZFS
ARC indirectly put pressure on the inactive memory until a given
target is reached.

Fabian




Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread Adrian Chadd
Yeah, but his comment is that "I'm doing a large file copy operation;
why is the system paging out binaries versus recycling other file
cache memory?"

I have a feeling this is more due to the last few years of VM work to
improve file serving performance and it hasn't really been
tested/evaluated in desktop style environments where binary execution
latency matters (ie, paging out binaries is a no-no.) Bugs have crept
in and been fixed when people notice. :)

I've noticed the same on my 8 and 16G desktop laptops but I haven't
started digging into it. I was hoping it was going to be a VM bug
versus something more structural in the VM changes.


-a


On 13 March 2016 at 07:55, RW  wrote:
> On Sat, 12 Mar 2016 09:38:35 +0100
> Gary Jennejohn wrote:
>
>> In the course of the last year or so the behavior of the vm system
>> has changed in regard to how aggressively Inact memory is recycled.
>>
>> My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
>> from one file system to another one.
>>
>> Looking at top I observe that there are about 6GB of Inact memory.
>> This value hardly changes.  Instead of aggressively recycling the
>> Inact memory the vm now seems to prefer to swap.
>
> Paging-out is a side-effect of processing inactive memory. As the
> inactive queue is recycled a small number of pages can get copied
> out to swap with the contents remaining in memory. If you turn this
> off, the writes can end up being done while something is waiting,
> rather than in the background.
>
> A small amount of swap in use is normal. If you see a large amount
> then check for memory leaks and unwanted files on tmpfs.
>


Re: how to recycle Inact memory more aggressively?

2016-03-13 Thread RW
On Sat, 12 Mar 2016 09:38:35 +0100
Gary Jennejohn wrote:

> In the course of the last year or so the behavior of the vm system
> has changed in regard to how aggressively Inact memory is recycled.
> 
> My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
> from one file system to another one.
> 
> Looking at top I observe that there are about 6GB of Inact memory.
> This value hardly changes.  Instead of aggressively recycling the
> Inact memory the vm now seems to prefer to swap.

Paging-out is a side-effect of processing inactive memory. As the
inactive queue is recycled a small number of pages can get copied
out to swap with the contents remaining in memory. If you turn this
off, the writes can end up being done while something is waiting,
rather than in the background.

A small amount of swap in use is normal. If you see a large amount
then check for memory leaks and unwanted files on tmpfs.








how to recycle Inact memory more aggressively?

2016-03-12 Thread Gary Jennejohn
In the course of the last year or so the behavior of the vm system
has changed in regard to how aggressively Inact memory is recycled.

My box has 8GB of memory.  At the moment I'm copying 100s of gigabytes
from one file system to another one.

Looking at top I observe that there are about 6GB of Inact memory.
This value hardly changes.  Instead of aggressively recycling the
Inact memory the vm now seems to prefer to swap.

Last year, I can't remember exactly when, the behavior was totally
different.  The vm very aggressively recycled Inact memory and,
even when copying 100s of GB of files, the system hardly swapped.

It seems rather strange to me that the vm happily allows gigabytes
of Inact memory to be present and prefers swapping to recycling.

Are there any sysctl's I can set to get the old behavior back?

-- 
Gary Jennejohn