Re: PQ_LAUNDRY: unexpected behaviour

2017-01-02 Thread Mark Johnston
On Mon, Jan 02, 2017 at 10:31:50AM -0330, Jonathan Anderson wrote:
> Hi all,
> 
> I'm seeing some unexpected PQ_LAUNDRY behaviour on something fairly close
> to -CURRENT (drm-next-4.7 with an IFC on 26 Dec). Aside from the use of
> not-quite-CURRENT, it's also very possible that I don't understand how the
> laundry queue is supposed to work. Nonetheless, I thought I'd check whether
> there is a tunable I should change, an issue with the laundry queue itself,
> etc.

My suspicion is that this is a memory leak of some sort and unrelated to
PQ_LAUNDRY itself. That is, with the previous policy you would see lots
of swap usage and a large inactive queue instead.
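
As a quick sanity check, you can poll the VM page counters to see where
pages are accumulating; under the old policy, leaked anonymous pages
would pile up in the inactive queue rather than the laundry queue. A
sketch, assuming a kernel new enough to export the PQ_LAUNDRY counter
under vm.stats.vm:

```
# Sample the paging-queue counters once a minute and watch which one grows.
while true; do
    sysctl -n vm.stats.vm.v_laundry_count \
              vm.stats.vm.v_inactive_count \
              vm.stats.vm.v_active_count \
              vm.stats.vm.v_free_count
    sleep 60
done
```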

> 
> After running X overnight (i915 can now run overnight on drm-next-4.7!), I
> end up with a little over half of my system memory in the laundry queue and
> a bunch of swap utilization. Even after closing X and shutting down lots of
> services, I see the following in top:
> 
> ```
> Mem: 977M Active, 31M Inact, 4722M Laundry, 1917M Wired, 165M Free
> ARC: 697M Total, 67M MFU, 278M MRU, 27K Anon, 22M Header, 331M Other
> Swap: 4096M Total, 2037M Used, 2059M Free, 49% Inuse
> 
>   PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
>   911 root          1  52    0 57788K  4308K select  1   0:00   0.00% sshd
>   974 root          1  20    0 43780K     0K wait    2   0:00   0.00%
>  1406 jon           1  20    0 33520K  2748K select  0   0:04   0.00% gpg-agent
>  2038 jon           1  20    0 31280K  5452K ttyin   3   0:18   0.00% zsh
>  1251 jon           1  22    0 31280K  4500K pause   3   0:02   1.46% zsh
>  7102 jon           1  20    0 31280K  3744K ttyin   0   0:00   0.00% zsh
>  1898 jon           1  20    0 31280K  3036K ttyin   1   0:00   0.00% zsh
>  1627 jon           1  21    0 31280K     0K pause   0   0:00   0.00%
> 22989 jon           1  20    0 31152K  6020K ttyin   1   0:01   0.00% zsh
> 22495 jon           1  49    0 31152K  6016K ttyin   0   0:02   0.00% zsh
>  1621 jon           1  20    0 28196K  8816K select  2   0:40   0.00% tmux
>  6214 jon           1  52    0 27008K  2872K ttyin   1   0:00   0.00% zsh
>  6969 jon           1  52    0 27008K  2872K ttyin   3   0:00   0.00% zsh
>  6609 root          1  20    0 20688K  4604K select  1   0:00   0.00% wpa_supplicant
>   914 root          1  20    0 20664K  5232K select  2   0:02   0.00% sendmail
>   917 smmsp         1  20    0 20664K     0K pause   0   0:00   0.00%
> 24206 jon           1  23    0 20168K  3500K CPU0    0   0:00   0.00% top
>   921 root          1  20    0 12616K   608K nanslp  1   0:00   0.00% cron
> ```
> 
> Is there anything I could do (e.g., sysctls, tunables) to figure out
> what's happening? Can I manually force the laundry to be done? `swapoff -a`
> fails due to a lack of memory.

Is that the full list of processes? Does "ipcs -m" show any named shm
segments?
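
Something like the following would show whether a stale segment is
pinning memory, and how to reclaim it (a sketch; the ID and key shown
are made up, and the output is abbreviated):

```
$ ipcs -m
Shared Memory:
T           ID          KEY MODE        OWNER    GROUP
m        65536   0x0052e2c1 --rw------- jon      jon
$ ipcrm -m 65536    # remove the stale segment by its ID
```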

Looking at the DRM code, GEM uses swap objects to back allocations made
by the drivers, so this could be the result of a kernel page leak in the
drm-next branch. If so, you'll need a reboot to recover.
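
If you'd like to gather evidence before rebooting, snapshotting the
kernel allocator statistics and diffing them after some uptime can show
what keeps growing. A sketch; GEM's swap-backed pages won't show up in
the UMA zones directly, but driver-side allocations often grow alongside
them:

```
# Take before/after snapshots of UMA zone and malloc(9) statistics,
# then diff to spot allocation counts that only ever increase.
vmstat -z > /tmp/zones.before; vmstat -m > /tmp/malloc.before
sleep 3600    # let the suspected leak accumulate
vmstat -z > /tmp/zones.after;  vmstat -m > /tmp/malloc.after
diff -u /tmp/zones.before /tmp/zones.after
diff -u /tmp/malloc.before /tmp/malloc.after
```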


PQ_LAUNDRY: unexpected behaviour

2017-01-02 Thread Jonathan Anderson
Hi all,

I'm seeing some unexpected PQ_LAUNDRY behaviour on something fairly close
to -CURRENT (drm-next-4.7 with an IFC on 26 Dec). Aside from the use of
not-quite-CURRENT, it's also very possible that I don't understand how the
laundry queue is supposed to work. Nonetheless, I thought I'd check whether
there is a tunable I should change, an issue with the laundry queue itself,
etc.

After running X overnight (i915 can now run overnight on drm-next-4.7!), I
end up with a little over half of my system memory in the laundry queue and
a bunch of swap utilization. Even after closing X and shutting down lots of
services, I see the following in top:

```
Mem: 977M Active, 31M Inact, 4722M Laundry, 1917M Wired, 165M Free
ARC: 697M Total, 67M MFU, 278M MRU, 27K Anon, 22M Header, 331M Other
Swap: 4096M Total, 2037M Used, 2059M Free, 49% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  911 root          1  52    0 57788K  4308K select  1   0:00   0.00% sshd
  974 root          1  20    0 43780K     0K wait    2   0:00   0.00%
 1406 jon           1  20    0 33520K  2748K select  0   0:04   0.00% gpg-agent
 2038 jon           1  20    0 31280K  5452K ttyin   3   0:18   0.00% zsh
 1251 jon           1  22    0 31280K  4500K pause   3   0:02   1.46% zsh
 7102 jon           1  20    0 31280K  3744K ttyin   0   0:00   0.00% zsh
 1898 jon           1  20    0 31280K  3036K ttyin   1   0:00   0.00% zsh
 1627 jon           1  21    0 31280K     0K pause   0   0:00   0.00%
22989 jon           1  20    0 31152K  6020K ttyin   1   0:01   0.00% zsh
22495 jon           1  49    0 31152K  6016K ttyin   0   0:02   0.00% zsh
 1621 jon           1  20    0 28196K  8816K select  2   0:40   0.00% tmux
 6214 jon           1  52    0 27008K  2872K ttyin   1   0:00   0.00% zsh
 6969 jon           1  52    0 27008K  2872K ttyin   3   0:00   0.00% zsh
 6609 root          1  20    0 20688K  4604K select  1   0:00   0.00% wpa_supplicant
  914 root          1  20    0 20664K  5232K select  2   0:02   0.00% sendmail
  917 smmsp         1  20    0 20664K     0K pause   0   0:00   0.00%
24206 jon           1  23    0 20168K  3500K CPU0    0   0:00   0.00% top
  921 root          1  20    0 12616K   608K nanslp  1   0:00   0.00% cron
```

Is there anything I could do (e.g., sysctls, tunables) to figure out
what's happening? Can I manually force the laundry to be done? `swapoff -a`
fails due to a lack of memory.

Thanks,

Jon
-- 
jonat...@freebsd.org


Re: ACPI Error on HP ProBook 430 G2

2017-01-02 Thread Hans Petter Selasky

On 12/22/16 21:04, Moore, Robert wrote:
> ACPICA version 20161222 happened today, with a fix for the problem below.

+1

When will the fix be merged to -head?

--HPS
