[Touch-packages] [Bug 1352718] Re: Unknown memory utilization in Ubuntu 14.04 Trusty

2015-03-06  Launchpad Bug Tracker
[Expired for procps (Ubuntu) because there has been no activity for 60
days.]

** Changed in: procps (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to procps in Ubuntu.
https://bugs.launchpad.net/bugs/1352718

Title:
  Unknown memory utilization in Ubuntu 14.04 Trusty

Status in procps package in Ubuntu:
  Expired

Bug description:
  I'm running Ubuntu Trusty 14.04 on a new machine with 8GB of RAM, and
  it seems to be locking up periodically, with nothing in the syslog file.
  I've installed Nagios and have been watching the graphs, and it looks
  like memory usage climbs from 7% to 72% in the span of about 10 minutes.
  Only node processes are running on the server. In top, all processes
  show very normal memory consumption, and even after stopping the node
  processes, memory utilization stays the same.
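
  (For the lockups specifically, a quick check for OOM-killer activity is
  worth doing; a sketch using the standard log paths on Trusty:)

  grep -iE 'out of memory|oom' /var/log/kern.log /var/log/syslog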


  free agrees, claiming I'm using more than 5.7G of memory:

   free -h
                total       used       free     shared    buffers     cached
   Mem:          7.8G       6.5G       1.3G       2.2M       233M       612M
   -/+ buffers/cache:       5.7G       2.1G
   Swap:         2.0G         0B       2.0G
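
  (As a cross-check, the same figures can be read directly from
  /proc/meminfo; a minimal sketch using standard field names, values in kB:)

  grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo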

  
  However, I'm having trouble determining what exactly is eating all of
  that memory. Running top or htop doesn't seem to single anything out,
  and ps_mem.py (https://raw.github.com/pixelb/ps_mem/master/ps_mem.py)
  claims that I'm using much less than the system thinks...

  ...
   Private  +   Shared  =  RAM used  Program

  184.0 KiB +  29.5 KiB = 213.5 KiB  atd
  176.0 KiB +  48.5 KiB = 224.5 KiB  acpid
  164.0 KiB +  99.5 KiB = 263.5 KiB  anvil
  272.0 KiB +  52.0 KiB = 324.0 KiB  upstart-file-bridge
  288.0 KiB +  76.0 KiB = 364.0 KiB  cron
  312.0 KiB +  60.0 KiB = 372.0 KiB  irqbalance
  208.0 KiB + 188.0 KiB = 396.0 KiB  sh (2)
  328.0 KiB +  87.5 KiB = 415.5 KiB  upstart-udev-bridge
  312.0 KiB + 104.5 KiB = 416.5 KiB  log
  424.0 KiB +  53.5 KiB = 477.5 KiB  upstart-socket-bridge
  304.0 KiB + 213.5 KiB = 517.5 KiB  pickup
  336.0 KiB + 213.5 KiB = 549.5 KiB  qmgr
  396.0 KiB + 165.5 KiB = 561.5 KiB  dovecot
  360.0 KiB + 205.5 KiB = 565.5 KiB  master
  528.0 KiB +  52.5 KiB = 580.5 KiB  nrpe
  608.0 KiB + 148.5 KiB = 756.5 KiB  systemd-logind
  764.0 KiB +  61.5 KiB = 825.5 KiB  dbus-daemon
  772.0 KiB + 107.0 KiB = 879.0 KiB  top
  808.0 KiB +  87.5 KiB = 895.5 KiB  systemd-udevd
  940.0 KiB + 147.5 KiB =   1.1 MiB  ntpd
  956.0 KiB + 285.0 KiB =   1.2 MiB  getty (6)
    1.1 MiB + 134.0 KiB =   1.2 MiB  config
    1.6 MiB + 121.5 KiB =   1.7 MiB  init
    2.5 MiB +  22.0 KiB =   2.6 MiB  dhclient
    2.8 MiB + 476.5 KiB =   3.3 MiB  vmtoolsd
    4.2 MiB + 452.5 KiB =   4.6 MiB  whoopsie
    5.1 MiB +  96.5 KiB =   5.2 MiB  rsyslogd
    3.6 MiB +   2.3 MiB =   5.9 MiB  sshd (4)
    6.7 MiB +   1.0 MiB =   7.7 MiB  bash (3)
    8.3 MiB + 277.5 KiB =   8.6 MiB  redis-server (3)
   13.0 MiB +  26.5 KiB =  13.0 MiB  docker
  342.0 MiB +   6.9 MiB = 348.9 MiB  nodejs (8)
                          ---------
                          414.3 MiB
                          =========
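
  (For reference, ps_mem.py needs root to see every process's memory maps;
  a minimal sketch of one way to fetch and run it, using the URL above:)

  wget -q https://raw.github.com/pixelb/ps_mem/master/ps_mem.py
  sudo python ps_mem.py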

  
  This other formula for totaling the memory roughly agrees:

  # ps -e -orss=,args= | sort -b -k1,1n | awk '{total = total + $1} END {print total}'
  503612
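
  (That total is in KiB, so it works out to roughly 492 MiB; the same
  pipeline can print MiB directly:)

  # ps -e -o rss= | awk '{total += $1} END {printf "%.1f MiB\n", total/1024}'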

  If the processes only total 500 MiB, where's the rest of the memory
  going?

  Slabtop output doesn't suggest a huge cache or anything either...

   Active / Total Objects (% used): 672886 / 681837 (98.7%)
   Active / Total Slabs (% used)  : 15441 / 15441 (100.0%)
   Active / Total Caches (% used) : 70 / 101 (69.3%)
   Active / Total Size (% used)   : 179811.23K / 184282.05K (97.6%)
   Minimum / Average / Maximum Object : 0.01K / 0.27K / 8.00K

    OBJS  ACTIVE  USE  OBJ SIZE  SLABS  OBJ/SLAB  CACHE SIZE  NAME
  171318  171318 100%     0.19K   4079        42      32632K  dentry
  127257  127257 100%     0.10K   3263        39      13052K  buffer_head
   75669   75669 100%     0.96K   2293        33      73376K  ext4_inode_cache
   35328   34959  98%     0.06K    552        64       2208K  kmalloc-64
   33354   33354 100%     0.04K    327       102       1308K  ext4_extent_status
   25560   25560 100%     0.11K    710        36       2840K  sysfs_dir_cache
   18944   18944 100%     0.01K     37       512        148K  kmalloc-8
   18848   18848 100%     0.50K    589        32       9424K  kmalloc-512
   17680   17680 100%     0.05K    208        85        832K  shared_policy_node
   17248   17248 100%     0.12K    539        32       2156K  au_dinfo
   15390    9116  59%     0.55K    270        57       8640K  radix_tree_node
   15372   15372 100%     0.09K    366        42       1464K  kmalloc-96
   13398   13398 100%     0.75K    319        42      10208K  au_icntnr
   11424   11424 100%     0.57K    204        56       6528K  inode_cache
   11312
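
  (The overall slab usage can also be read straight from /proc/meminfo as a
  cross-check; these are standard fields:)

  grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo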

[Touch-packages] [Bug 1352718] Re: Unknown memory utilization in Ubuntu 14.04 Trusty

2015-01-05  Bryan Quigley
This might be better suited for AskUbuntu or the Ubuntu forums.

It looks like the buffers/cache is the cause.  Please attach boot logs
(kern.log/syslog) to help us see what is going on.
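
One quick way to confirm whether that "used" memory is just reclaimable
page cache is to drop the caches and re-run free (a sketch, run as root;
this is safe, but the next reads from disk will be slower):

  sync
  echo 3 > /proc/sys/vm/drop_caches
  free -h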

** Changed in: procps (Ubuntu)
   Status: New => Incomplete


[Touch-packages] [Bug 1352718] Re: Unknown memory utilization in Ubuntu 14.04 Trusty

2014-08-05  Santosh Teli
** Package changed: ubuntu => procps (Ubuntu)


[Touch-packages] [Bug 1352718] Re: Unknown memory utilization in Ubuntu 14.04 Trusty

2014-08-05  Brian Murray
** Tags added: trusty
