On Wednesday, November 05, 2014 4:52:50 am Andriy Gapon wrote:
On 04/11/2014 14:55, Steven Hartland wrote:
This is likely spikes in uma zones used by ARC.
The VM doesn't ever clean uma zones unless it hits a low memory condition, which
explains why your little script helps.
Check the output of vmstat -z to confirm.
On 05/11/2014 06:15, Marcus Reid wrote:
On Tue, Nov 04, 2014 at 06:13:44PM +, Steven Hartland wrote:
On 04/11/2014 17:22, Allan Jude wrote:
snip...
Justin Gibbs and I were helping George from Voxer look at the same issue
they are having. They had ~169GB in inact, and only ~60GB being used for ARC.
On 04/11/2014 14:55, Steven Hartland wrote:
This is likely spikes in uma zones used by ARC.
The VM doesn't ever clean uma zones unless it hits a low memory condition, which
explains why your little script helps.
Check the output of vmstat -z to confirm.
Steve,
this is nonsense :-) You
On 05/11/2014 09:52, Andriy Gapon wrote:
On 04/11/2014 14:55, Steven Hartland wrote:
This is likely spikes in uma zones used by ARC.
The VM doesn't ever clean uma zones unless it hits a low memory condition, which
explains why your little script helps.
Check the output of vmstat -z to confirm.
Steven Hartland wrote
On 05/11/2014 06:15, Marcus Reid wrote:
On Tue, Nov 04, 2014 at 06:13:44PM +, Steven Hartland wrote:
On 04/11/2014 17:22, Allan Jude wrote:
snip...
Justin Gibbs and I were helping George from Voxer look at the same issue
they are having. They had ~169GB in inact, and only ~60GB being used for ARC.
On 11/4/2014 5:47 AM, Dmitriy Makarov wrote:
Funny thing is that when we manually allocate and release memory, using
simple python script:
...
Current workaround is to periodically invoke this python script by cron.
I wonder if this is related to PR
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513
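The script itself is elided in the quote above, so the following is only a sketch of
the kind of allocate-and-release workaround being described; the amount of memory,
the chunk size and the method are assumptions, not Dmitriy's actual script. The idea
is simply to dirty enough memory to push the system into its low-memory path so that
cached UMA items get reclaimed:

#!/usr/bin/env python
# Hypothetical sketch of the allocate-and-release workaround described above.
# The real script is elided in the quote; GIGS and CHUNK are assumed values,
# adjust them to the machine.
GIGS = 16                      # how much memory to touch
CHUNK = 1024 * 1024            # allocate in 1 MiB pieces

chunks = []
for _ in range(GIGS * 1024):
    # bytearray() zero-fills the buffer, so every page is actually written
    # and must be backed by real memory; that is what creates the pressure
    # that makes the VM ask the UMA zones to drain their caches.
    chunks.append(bytearray(CHUNK))

# Dropping the references releases the memory again once the pressure has
# done its work; run periodically, e.g. from cron as described above.
del chunks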
On 05/11/2014 14:36, James R. Van Artsdalen wrote:
I wonder if this is related to PR
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513
This is against zfs recv and hanging in process state kmem arena
but also has a workaround of allocating lots of memory in userland.
If something hangs (appears to hang) and it's ZFS related, then
https://wiki.freebsd.org/AvgZfsDeadlockDebug
On 05/11/2014 18:32, James R. Van Artsdalen wrote:
On 11/5/2014 6:41 AM, Andriy Gapon wrote:
If something hangs (appears to hang) and it's ZFS related, then
https://wiki.freebsd.org/AvgZfsDeadlockDebug
I don't think the zpool history hang is in ZFS or storage layer code:
it seems to be stalled in kernel malloc(). See PID 12105 (zpool history)
On 11/5/2014 6:41 AM, Andriy Gapon wrote:
If something hangs (appears to hang) and it's ZFS related, then
https://wiki.freebsd.org/AvgZfsDeadlockDebug
I don't think the zpool history hang is in ZFS or storage layer code:
it seems to be stalled in kernel malloc(). See PID 12105 (zpool history)
This is likely spikes in uma zones used by ARC.
The VM doesn't ever clean uma zones unless it hits a low memory
condition, which explains why your little script helps.
Check the output of vmstat -z to confirm.
On 04/11/2014 11:47, Dmitriy Makarov wrote:
Hi Current,
It seems like there is
ITEM              SIZE  LIMIT      USED      FREE        REQ FAIL  SLEEP
UMA Kegs:          384,     0,      210,       10,       216,   0,     0
UMA Zones:        2176,     0,      210,        0,       216,   0,     0
UMA Slabs:          80,     0,  2921231,  1024519, 133906002,   0,     0
UMA
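Not part of the thread, but given output in that format, a small script along these
lines (column meanings assumed from the header above: SIZE, LIMIT, USED, FREE, REQ,
FAIL, SLEEP) can total how much memory is sitting on the zones' free lists rather
than being truly in use:

#!/usr/bin/env python
# Sketch: sum SIZE * FREE for each UMA zone reported by vmstat -z, to see
# how much memory is cached on zone free lists instead of being returned
# to the VM.  Assumes the column layout shown above.
import subprocess

out = subprocess.check_output(["vmstat", "-z"]).decode()
total = 0
for line in out.splitlines()[1:]:              # skip the header line
    if ":" not in line:
        continue
    item, rest = line.split(":", 1)
    fields = [f.strip() for f in rest.split(",")]
    if len(fields) < 5:
        continue
    size, free = int(fields[0]), int(fields[3])
    cached = size * free
    total += cached
    if cached > 100 * 1024 * 1024:             # only report zones > 100 MB
        print("%-24s %8.1f MB free" % (item.strip(), cached / 1048576.0))
print("total cached on UMA free lists: %.1f MB" % (total / 1048576.0))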
On 11/04/2014 08:22, Dmitriy Makarov wrote:
ITEM              SIZE  LIMIT      USED      FREE        REQ FAIL  SLEEP
UMA Kegs:          384,     0,      210,       10,       216,   0,     0
UMA Zones:        2176,     0,      210,        0,       216,   0,     0
UMA Slabs:          80,
On Nov 4, 2014, at 9:22 AM, Allan Jude allanj...@freebsd.org wrote:
On 11/04/2014 08:22, Dmitriy Makarov wrote:
ITEM              SIZE  LIMIT      USED      FREE        REQ FAIL  SLEEP
UMA Kegs:          384,     0,      210,       10,       216,   0,     0
UMA Zones:        2176,
On 04/11/2014 17:22, Allan Jude wrote:
snip...
Justin Gibbs and I were helping George from Voxer look at the same issue
they are having. They had ~169GB in inact, and only ~60GB being used for
ARC.
Are there any further debugging steps we can recommend to him to help
investigate this?
The
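As an aside (not from the thread): the inact-versus-ARC comparison quoted above can
be read straight from stock FreeBSD sysctls. The names below are standard ones and
only an assumption about how those numbers were obtained:

#!/usr/bin/env python
# Sketch: print inactive memory next to the current ARC size, the two
# numbers being compared in this thread.  Sysctl names are stock FreeBSD
# ones, not something the posters specify.
import subprocess

def sysctl(name):
    # sysctl -n prints just the value; strip the newline and parse it.
    return int(subprocess.check_output(["sysctl", "-n", name]).decode().strip())

pagesize = sysctl("hw.pagesize")
inact    = sysctl("vm.stats.vm.v_inactive_count") * pagesize
arc_size = sysctl("kstat.zfs.misc.arcstats.size")

print("inactive: %6.1f GB" % (inact / 2.0 ** 30))
print("ARC size: %6.1f GB" % (arc_size / 2.0 ** 30))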
On 04/11/2014 17:57, Ben Perrault wrote:
snip...
I would also be interested in any additional debugging steps and would be
willing to help test in any way I can - as I've seen the behavior a few times
as well. As recently as Sunday evening, I caught a system running with ~44GB ARC
but ~117GB
On Tue, Nov 04, 2014 at 06:13:44PM +, Steven Hartland wrote:
On 04/11/2014 17:22, Allan Jude wrote:
snip...
Justin Gibbs and I were helping George from Voxer look at the same issue
they are having. They had ~169GB in inact, and only ~60GB being used for
ARC.
Are there any further debugging steps we can recommend to him to help
investigate this?