Re: swapfile query

2017-08-20 Thread Stefan Esser
Am 20.08.17 um 01:39 schrieb Greg 'groggy' Lehey:
>> 3. should total swap be 1x 2x or some other multiple of RAM these days?
> 
> It never needed to be.  The only issue is that if you want crash
> dumps, you once needed a swap partition (and not a swap file) at least
> marginally larger than memory.  With compressed dumps, that
> requirement is relaxed, but I suspect that a 4 GB partition could be
> too small.

Well, no, it (2x RAM) really was needed at one time ... ;-)

The VAX supported paging, but did not use a multi-level page table as
most CPUs do today. There was a linear list of page addresses per
process, and new page allocations could lead to a situation where there
was no free space left in this list. That required a kind of garbage
collection run, which was implemented by swapping out all processes and
starting over with a clean state. To prevent a deadlock (when a new page
had to be allocated to complete the swap-out), this required 2x RAM
configured as swap.

This MMU was used in at least all of the VAX-11/7xx models, the µVAX II
and µVAX III, and thus in many of the machines used to run BSD back in
the '80s ...

And thus, swap of at least 2 times RAM used to be not just a best
practice, but a strict requirement for stable operation of these
machines.

Regards, STefan


Re: swapfile query

2017-08-20 Thread John Baldwin
On Saturday, August 19, 2017 06:08:29 PM tech-lists wrote:
> On 19/08/2017 17:54, Cy Schubert wrote:
> > Then it doesn't matter if you use one or many swapfiles and deleting the 4 
> > GB won't make a difference. Just add the desired swap as required.
> > 
> > With 128 GB RAM you shouldn't be swapping anyway. If your system is
> > swapping, you have more serious problems than the lack of swap.
> 
> The system is a bhyve host. There are 9 guests, two of them are
> freebsd-11-stable, the rest are ubuntu-14.04-LTS. Restarting some (but
> not all) of the guests has the effect of decreasing swap usage. The
> system also runs ZFS. The guests live on the ZFS filesystem.
> 
> The OS & swap on the host are SSD and are not part of the ZFS system.
> 
> What I'm seeing is that the host system won't touch swap for days. I
> guess that when the guests get busier than some as-yet-unknown threshold,
> the host starts using swap. The issue I'm having isn't so much that it
> uses swap, it's that the used swap seemingly is never freed again after
> it has been used, and I don't know exactly how to narrow it down.

Note that once memory is placed in swap, it won't be pulled back in until
some thread or process actually needs it.  If nothing needs the memory it
doesn't hurt to just leave it out on swap.  It might also mean that the
memory freed up during the temporary memory pressure from your guests
stays available, so that the next time you hit memory pressure you won't
have to swap again.
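
If you want to keep an eye on it, the stock tools are enough to show how
much swap is committed and roughly which processes own it; for example
(nothing exotic, just the standard utilities):

# swapinfo
# pstat -s

and in interactive top(1) you can display the per-process swap column with
"w" and sort on it with "o" and "swap".  The per-process figures are
approximations, so they won't necessarily add up to the swapinfo total.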

-- 
John Baldwin


Re: swapfile query

2017-08-20 Thread Cy Schubert
In message <201708210241.v7l2ftcf073...@donotpassgo.dyslexicfish.net>, 
Jamie Landeg-Jones writes:
> > 3. should total swap be 1x 2x or some other multiple of RAM these days?
> 
> According to tuning(7) :
> 
> | SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
> |   The swap partition should typically be approximately 2x the size of
> |   main memory for systems with less than 4GB of RAM, or approximately
> |   equal to the size of main memory if you have more. Keep in mind
> |   future memory expansion when sizing the swap partition.

Generally that's the recommendation today. It used to be 4x RAM on SunOS 
and other "early" UNIX systems in the 90s; as workloads became less 
timesharing-oriented and more OLTP- and DBMS-oriented, the recommendation 
shifted to more RAM with less swap (imagine a database swapping). The 
tuning(7) guideline is a good starting point and generally a sound 
recommendation.
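
(As an aside: if more swap is needed later without repartitioning, an
md-backed swap file works fine; the path, size and md unit below are just
examples:

# truncate -s 4G /usr/swap0
# chmod 0600 /usr/swap0
# echo 'md99 none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab
# swapon -aL   (or just reboot)

The "late" option keeps the swap file from being touched before the
filesystem holding it is mounted.)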

-- 
Cheers,
Cy Schubert 
FreeBSD UNIX:     Web:  http://www.FreeBSD.org

The need of the many outweighs the greed of the few.




Re: swapfile query

2017-08-20 Thread Jamie Landeg-Jones
> 3. should total swap be 1x 2x or some other multiple of RAM these days?

According to tuning(7) :

| SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
|   The swap partition should typically be approximately 2x the size of
|   main memory for systems with less than 4GB of RAM, or approximately
|   equal to the size of main memory if you have more. Keep in mind
|   future memory expansion when sizing the swap partition.

cheers, Jamie



Re: swapfile query

2017-08-20 Thread tech-lists
On 19/08/2017 22:00, Cy Schubert wrote:
> An easy way to find out is to run top, type "w", then "o" and "swap", to 
> see which processes are using swap. You'll notice that the numbers won't 
> add up. I haven't looked at this, but my guess is that there may be a 
> swap leak. You can verify this by replacing the swapfile (add a new one 
> and remove the old).

Thanks for the tip. I need to wait (might be a few weeks) to see when it
starts eating swap again, then I'll do what you suggest. I got the
system from 94% swap in use to 39% by restarting some of the VMs, then I
was able to swapoff/swapon to empty the swapfile.
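
For reference, the drain/re-add cycle was basically just (the device name
here is illustrative -- whatever the swap space is attached as on the host):

# swapoff /dev/md99
# swapon /dev/md99

swapoff(8) has to page everything back into RAM first, so it only succeeds
when there is enough free memory to absorb what is currently swapped out.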

Here's a snapshot of the system in an idle state; sr is mostly above 300:

procs    memory       page                      disks     faults        cpu
r b w    avm    fre   flt  re  pi  po  fr   sr  ad0 da0    in    sy    cs us sy id
0 0 27   189G   27G   103   0   0   0  77  329    0   0    74   801  1663  0  0 100
0 0 27   189G   27G    38   0   0   0   0  337    0   0    93   745  1997  0  0 100
0 0 27   189G   27G   187   0   0   0   0  329    0   0    90   669  1853  0  0 100
1 0 27   189G   27G    38   0   0   0   0  340    0   0    83   774  1816  0  0 100
0 0 27   189G   27G    37   0   0   0   1  370    0   0    77   767  1839  0  0 100
1 0 27   189G   27G    48   0   0   0   0  294    1   0   125  2239  3382  0  0 100
0 0 27   189G   27G    20   0   0   0   0  329    3   0    88   651  1797  0  0 100
^C

yet Mem in top shows 27G free:

last pid: 71790;  load averages:  0.11,  0.09,  0.05    up 121+03:19:29  13:55:49
99 processes:  1 running, 98 sleeping
CPU:  0.0% user,  0.0% nice,  0.1% system,  0.1% interrupt, 99.8% idle
Mem: 769M Active, 11G Inact, 21G Laundry, 64G Wired, 924M Buf, 27G Free
ARC: 60G Total, 8979M MFU, 51G MRU, 16K Anon, 153M Header, 755K Other
 60G Compressed, 62G Uncompressed, 1.04:1 Ratio, 41M Overhead
Swap: 4034M Total, 4034M Free

thanks,
-- 
J.


Re: swapfile query

2017-08-20 Thread tech-lists
On 20/08/2017 08:22, Gary Jennejohn wrote:
> Depends.  I have vm.pageout_update_period=0 in /etc/sysctl.conf
> and scan rate (sr) really does reflect the true scan rate.  On
> my system sr is 0 while the system is idle.
> 
> As an aside, my system (8GB RAM) hardly ever swaps, even under
> heavy memory load.

Mine is:

root@host:~ # sysctl vm.pageout_update_period
vm.pageout_update_period: 600
root@host:~ #

I'll try with a 0 setting
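
Something like this should do it (set it now, and persist it across reboots):

# sysctl vm.pageout_update_period=0
# echo 'vm.pageout_update_period=0' >> /etc/sysctl.conf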

-- 
J.


Re: swapfile query

2017-08-20 Thread Gary Jennejohn
On Sat, 19 Aug 2017 15:38:15 -0700
Cy Schubert  wrote:

> In message <20170819213149.GA34140@raichu>, Mark Johnston writes:
> > On Sat, Aug 19, 2017 at 02:24:19PM -0700, Cy Schubert wrote:  
> > > In message <201708192100.v7jl0vfk003...@slippy.cwsent.com>, Cy Schubert 
> > > writes:
> > >   
> > > > (On my -CURRENT laptop I see a scan rate in the hundreds on a totally
> > > > idle laptop and in the teens on my idle firewall. IMO this doesn't seem
> > > > right, at least not compared to previous releases of FreeBSD or from
> > > > the days when I worked on Solaris. You shouldn't see a scan rate on an
> > > > idle system.)
> > > 
> > > It appears that on an idle system with many pages in use, i.e. a laptop 
> > > running X and not really doing anything else, pages are scanned though 
> > > the 
> > > system is idle. This is likely an artifact of r308474.  
> > 
> > It's an intentional consequence of r254304. The page daemon performs a
> > slow and steady scan of the queue of active pages and will gradually
> > move unreferenced pages to the inactive queue.  
> 
> This is certainly better.
> 
> It's probably a good idea to remove the scan rate from vmstat output, as 
> it's no longer meaningful in the traditional sense. For example, on a 
> traditionally scanning VM system (Solaris or z/OS) the number of pages 
> scanned per second (or the unreferenced interval count -- the inverse of 
> the scan rate) is the first indication that you need to look at your VM 
> subsystem. As of r254304 the scan rate cannot be used as a metric any 
> more, except when one sees it deviate wildly from previous observations. 
> (Not that I'm complaining.)
> 
> See below:
> 
> procs    memory       page                        disks     faults        cpu
> r b w    avm    fre    flt  re  pi  po    fr   sr  ad0 da0   in     sy    cs us sy id
> 0 0 0   3.9G   292M      4   0   0   0   193  125    0   0  434    773   588  0  0 100
> 1 0 0   3.9G   292M     55   0   0   0   181  123   22   0  460   2467  1402  0  1  99
> 0 0 0   3.9G   290M    969   0   0   1   316  124    1   0  490  12571  4004  3  1  95
> 0 0 0   3.9G   289M    261   0   0   0   160  124   21   0  505  20426  7751  2  2  97
> 0 0 0   1.5G   755M   3481   0   1   1 60951   74   18   0  463  19918  6576 13  4  82
> 
> At this point I closed firefox. Pages are freed and scan rate decreases. We 
> now have a new normal.
> 
> 0 0 0   1.5G   752M     10   0   0   0     0   24    1   0  409    595   365  0  0 100
> 0 0 0   1.5G   754M      1   0   0   0   403   23   49   0  478    609  1321  0  1  99
> 0 0 0   1.5G   754M     19   0   0   0   171   24    0   0  402    655   382  0  0 100
> 0 0 0   1.5G   754M      0   0   0   0   170   24    0   0  423    568   463  0  0 100
> 0 0 0   1.5G   754M      0   0   0   0   174   12    0   0  403    627   359  0  0 100
> 0 0 0   1.5G   754M      0   0   0   0   172   35    4   0  425    625   474  0  0 100
> 0 0 0   1.5G   754M      1   0   0   0   170   24    4   0  416    651   398  0  0 100
> 0 0 0   1.5G   754M      0   0   0   0   163   23    1   0  426    655   490  0  0 100
> 0 0 0   1.5G   754M      0   0   0   0   176   23    0   0  429    663   384  0  0 100
> 0 0 0   1.5G   754M      0   0   0   0   163   23    0   0  445    661   482  0  0 100
> 
> Should we consider removing scan rate from vmstat output? It doesn't really 
> mean anything in relation to tuning any more.
> 

Depends.  I have vm.pageout_update_period=0 in /etc/sysctl.conf
and scan rate (sr) really does reflect the true scan rate.  On
my system sr is 0 while the system is idle.

As an aside, my system (8GB RAM) hardly ever swaps, even under
heavy memory load.

Perhaps the output of sr could be somehow scaled based on the setting
of the sysctl?  Just a thought, I haven't looked at the source.

-- 
Gary Jennejohn