Re: Where is my memory on 'fresh' 11-STABLE? It should be used by ARC, but it is not used for it anymore.

2018-11-20 Thread Mark Johnston
On Tue, Nov 20, 2018 at 03:42:24PM +0300, Lev Serebryakov wrote:
> 
>  I have a server which is mostly a torrent box. It uses ZFS and is equipped
> with 16GiB of physical memory. It is running 11-STABLE (r339914 now).
> 
>  I've updated it to r339914 from some 11.1-STABLE revision 3 weeks ago.
> 
>  I used to see 13-14GiB of memory in the ZFS ARC, and it was OK.
> Sometimes it "locks" under heavy disk load due to ARC memory pressure,
> but it was bearable, and as ZFS is the main reason this server exists, I
> didn't limit the ARC.
> 
>  But the new revision (r339914) shows very strange behavior: the ARC is no
> larger than 4GiB, but the kernel has 15GiB wired:
> 
> Mem: 22M Active, 656M Inact, 62M Laundry, 15G Wired, 237M Free
> ARC: 4252M Total, 2680M MFU, 907M MRU, 3680K Anon, 15M Header, 634M Other
>  2789M Compressed, 3126M Uncompressed, 1.12:1 Ratio
> 
>  These are typical numbers for the last week: 15G wired, 237M free, but only
> 4252M of ARC!
> 
>  Where is the other 11G of memory?!
> 
> I've checked USED and FREE in "vmstat -z" output and got this:
> 
> $ vmstat -z | tr : , | awk -F , '1{print $2*$4,$2*$5,$1}' | sort -n | tail -20
> 23001088 9171456 MAP ENTRY
> 29680800 8404320 VM OBJECT
> 34417408 10813952 256
> 36377964 2665656 S VFS Cache
> 50377392 53856 sa_cache
> 50593792 622985216 zio_buf_131072
> 68913152 976896 mbuf_cluster
> 73543680 7225344 mbuf_jumbo_page
> 92358552 67848 zfs_znode_cache
> 95731712 51761152 4096
> 126962880 159581760 dmu_buf_impl_t
> 150958080 233920512 mbuf_jumbo_9k
> 165164600 92040 VNODE
> 192701120 30350880 UMA Slabs
> 205520896 291504128 zio_data_buf_1048576
> 222822400 529530880 zio_data_buf_524288
> 259143168 293476864 zio_buf_512
> 352485376 377061376 zio_buf_16384
> 376109552 346474128 dnode_t
> 2943016960 5761941504 abd_chunk
> $
> 
>  And the total USED/FREE numbers look very strange to me:
> 
> $ vmstat -z | tr : , | awk -F , '1{u+=$2*$4; f+=$2*$5} END{print u,f}'
> 5717965420 9328951088
> $
> 
>  So, only ~5.7G is used and 9.3G is free! But why is this memory not
> used by the ARC anymore, and why is it wired and not free?

Could you show the output of "vmstat -s" when in this state?
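For reference, here is one way the numbers being compared in this thread could
be pulled together on a FreeBSD 11 box, as a plain sh sketch (the sysctl names
are standard, but treat the arithmetic as a rough cross-check rather than an
exact accounting):

# Wired memory in bytes (page count * page size)
pagesize=$(sysctl -n hw.pagesize)
wired=$(sysctl -n vm.stats.vm.v_wire_count)
echo "wired bytes: $((wired * pagesize))"

# Current ARC size in bytes
sysctl -n kstat.zfs.misc.arcstats.size

# Totals over all UMA zones, same arithmetic as the one-liner above
vmstat -z | tr : , | awk -F , '{u+=$2*$4; f+=$2*$5} END {print "uma used:", u, "uma free:", f}'

# Full VM statistics, as requested
vmstat -s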


Re: Where is my memory on 'fresh' 11-STABLE? It should be used by ARC, but it is not used for it anymore.

2018-11-20 Thread Eugene M. Zheganin

Hello,

On 20.11.2018 15:42, Lev Serebryakov wrote:

  I have a server which is mostly a torrent box. It uses ZFS and is equipped
with 16GiB of physical memory. It is running 11-STABLE (r339914 now).

  I've updated it to r339914 from some 11.1-STABLE revision 3 weeks ago.

  I used to see 13-14GiB of memory in the ZFS ARC, and it was OK.
Sometimes it "locks" under heavy disk load due to ARC memory pressure,
but it was bearable, and as ZFS is the main reason this server exists, I
didn't limit the ARC.

  But the new revision (r339914) shows very strange behavior: the ARC is no
larger than 4GiB, but the kernel has 15GiB wired:

Mem: 22M Active, 656M Inact, 62M Laundry, 15G Wired, 237M Free
ARC: 4252M Total, 2680M MFU, 907M MRU, 3680K Anon, 15M Header, 634M Other
  2789M Compressed, 3126M Uncompressed, 1.12:1 Ratio

  These are typical numbers for the last week: 15G wired, 237M free, but only
4252M of ARC!

  Where is the other 11G of memory?!

[...]
  And the total USED/FREE numbers look very strange to me:

$ vmstat -z | tr : , | awk -F , '1{u+=$2*$4; f+=$2*$5} END{print u,f}'
5717965420 9328951088
$

  So, only ~5.7G is used and 9.3G is free! But why is this memory not
used by the ARC anymore, and why is it wired and not free?
I'm getting pretty much the same story on a recent 11-STABLE from November
9th. Previous versions didn't raise this many questions about memory usage
(and I run several 11-STABLE machines).


Eugene.



Where is my memory on 'fresh' 11-STABLE? It should be used by ARC, but it is not used for it anymore.

2018-11-20 Thread Lev Serebryakov

 I have a server which is mostly a torrent box. It uses ZFS and is equipped
with 16GiB of physical memory. It is running 11-STABLE (r339914 now).

 I've updated it to r339914 from some 11.1-STABLE revision 3 weeks ago.

 I used to see 13-14GiB of memory in the ZFS ARC, and it was OK.
Sometimes it "locks" under heavy disk load due to ARC memory pressure,
but it was bearable, and as ZFS is the main reason this server exists, I
didn't limit the ARC.

 But the new revision (r339914) shows very strange behavior: the ARC is no
larger than 4GiB, but the kernel has 15GiB wired:

Mem: 22M Active, 656M Inact, 62M Laundry, 15G Wired, 237M Free
ARC: 4252M Total, 2680M MFU, 907M MRU, 3680K Anon, 15M Header, 634M Other
 2789M Compressed, 3126M Uncompressed, 1.12:1 Ratio

 These are typical numbers for the last week: 15G wired, 237M free, but only
4252M of ARC!

 Where is the other 11G of memory?!

I've checked USED and FREE in "vmstat -z" output and got this:

$ vmstat -z | tr : , | awk -F , '1{print $2*$4,$2*$5,$1}' | sort -n | tail -20
23001088 9171456 MAP ENTRY
29680800 8404320 VM OBJECT
34417408 10813952 256
36377964 2665656 S VFS Cache
50377392 53856 sa_cache
50593792 622985216 zio_buf_131072
68913152 976896 mbuf_cluster
73543680 7225344 mbuf_jumbo_page
92358552 67848 zfs_znode_cache
95731712 51761152 4096
126962880 159581760 dmu_buf_impl_t
150958080 233920512 mbuf_jumbo_9k
165164600 92040 VNODE
192701120 30350880 UMA Slabs
205520896 291504128 zio_data_buf_1048576
222822400 529530880 zio_data_buf_524288
259143168 293476864 zio_buf_512
352485376 377061376 zio_buf_16384
376109552 346474128 dnode_t
2943016960 5761941504 abd_chunk
$

 And the total USED/FREE numbers look very strange to me:

$ vmstat -z | tr : , | awk -F , '1{u+=$2*$4; f+=$2*$5} END{print u,f}'
5717965420 9328951088
$

 So, only ~5.7G is used and 9.3G is free! But why is this memory not
used by the ARC anymore, and why is it wired and not free?

-- 
// Lev Serebryakov





Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin

Hello,

On 20.11.2018 16:22, Trond Endrestøl wrote:


I know others have created a daemon that observes the ARC and the
amount of wired and free memory; when these values exceed some
threshold, the daemon allocates a number of gigabytes, writes a
zero to the first byte or word of every page, and then frees the
allocated memory before going back to sleep.

The ARC will release most of its allocations and the kernel will also
release some but not all of its wired memory, and some user pages are
likely to be thrown onto the swap device, turning the user experience
into a mild nightmare while waiting for applications to be paged back
into memory.

ZFS seems to be the common factor in most, if not all, of these cases.

I created my own and not so sophisticated C program that I run every
now and then:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
   const size_t pagesize = (size_t)getpagesize();
   const size_t gigabyte = 1024ULL * 1024ULL * 1024ULL;

   size_t amount, n = 1ULL;
   char *p, *offset;

   if (argc > 1) {
 sscanf(argv[1], "%zu", &n);
   }

   amount = n * gigabyte;

   if (amount > 0ULL) {
 if ( (p = malloc(amount)) != NULL) {
   for (offset = p; offset < p + amount; offset += pagesize) {
 *offset = '\0';
   }

   free(p);
 }
 else {
   fprintf(stderr,
   "%s:%s:%d: unable to allocate %zu gigabyte%s\n",
   argv[0], __FILE__, __LINE__,
   n, (n == 1ULL) ? "" : "s");
   return 2;
 }
   }
   else {
 return 1;
   }

   return 0;
} // main()

// allocate_gigabytes.c

Jeez, thanks a lot, this stuff is working. Now the system has 8 Gigs of 
free memory and stopped swapping.


Well, the next question is addressed to the core team, which I suppose
reads this ML eventually: why don't we have something like this as a
watchdog in the base system? I understand that this solution is
architecturally ugly, but it's worse to have nothing at all, and this one
works. At least I'm about to run it periodically.
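For anyone wanting to script that, here is a rough sketch of a wrapper that
could be run from cron (the binary location, the script name and the 4 GiB
free-memory threshold are made-up examples, not recommendations):

#!/bin/sh
# Run the allocator only when free memory looks low.
pagesize=$(sysctl -n hw.pagesize)
free_pages=$(sysctl -n vm.stats.vm.v_free_count)
free_bytes=$((free_pages * pagesize))
threshold=$((4 * 1024 * 1024 * 1024))     # 4 GiB, arbitrary example

if [ "$free_bytes" -lt "$threshold" ]; then
    /usr/local/bin/allocate_gigabytes 8   # GiB to touch and release
fi

Scheduled from /etc/crontab with something like
*/10 * * * * root /usr/local/sbin/arc_relief.sh  (names hypothetical).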


Trond, thanks again.

Eugene.



Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Trond Endrestøl
On Tue, 20 Nov 2018 15:22+0500, Eugene M. Zheganin wrote:

> Hello,
> 
> On 20.11.2018 15:12, Trond Endrestøl wrote:
> > On freebsd-hackers the other day,
> > https://lists.freebsd.org/pipermail/freebsd-hackers/2018-November/053575.html,
> > it was suggested to set vm.pageout_update_period=0. This sysctl is at
> > 600 initially.
> > 
> > ZFS' ARC needs to be capped, otherwise it will eat most, if not all,
> > of your memory.
> Well, as you can see, the ARC ate only half, and the other half is eaten by the
> kernel. So far I suppose that if I cap the ARC, the kernel will simply
> eat the rest.

I know others have created a daemon that observes the ARC and the
amount of wired and free memory; when these values exceed some
threshold, the daemon allocates a number of gigabytes, writes a
zero to the first byte or word of every page, and then frees the
allocated memory before going back to sleep.

The ARC will release most of its allocations and the kernel will also
release some but not all of its wired memory, and some user pages are
likely to be thrown onto the swap device, turning the user experience
into a mild nightmare while waiting for applications to be paged back
into memory.

ZFS seems to be the common factor in most, if not all, of these cases.

I created my own and not so sophisticated C program that I run every 
now and then:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
  const size_t pagesize = (size_t)getpagesize();
  const size_t gigabyte = 1024ULL * 1024ULL * 1024ULL;

  size_t amount, n = 1ULL;
  char *p, *offset;

  if (argc > 1) {
sscanf(argv[1], "%zu", &n);
  }

  amount = n * gigabyte;

  if (amount > 0ULL) {
if ( (p = malloc(amount)) != NULL) {
  for (offset = p; offset < p + amount; offset += pagesize) {
*offset = '\0';
  }

  free(p);
}
else {
  fprintf(stderr,
  "%s:%s:%d: unable to allocate %zu gigabyte%s\n",
  argv[0], __FILE__, __LINE__,
  n, (n == 1ULL) ? "" : "s");
  return 2;
}
  }
  else {
return 1;
  }

  return 0;
} // main()

// allocate_gigabytes.c
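For anyone who wants to try it, the obvious way to build and run it (only the
base system compiler is assumed):

cc -O2 -o allocate_gigabytes allocate_gigabytes.c
./allocate_gigabytes 8    # touch 8 GiB one page at a time, then free it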

-- 
Trond.


Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin

Hello,

On 20.11.2018 15:12, Trond Endrestøl wrote:

On freebsd-hackers the other day,
https://lists.freebsd.org/pipermail/freebsd-hackers/2018-November/053575.html,
it was suggested to set vm.pageout_update_period=0. This sysctl is at
600 initially.

ZFS' ARC needs to be capped, otherwise it will eat most, if not all,
of your memory.
Well, as you can see, the ARC ate only half, and the other half is eaten by
the kernel. So far I suppose that if I cap the ARC, the kernel will
simply eat the rest.


Eugene.


Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Trond Endrestøl
On Tue, 20 Nov 2018 14:53+0500, Eugene M. Zheganin wrote:

> Hello,
> 
> 
> I have a recent FreeBSD 11-STABLE which is mainly used as an iSCSI target. The
> system has 64G of RAM but is swapping intensively. Yes, about half of the
> memory is used by the ZFS ARC (it isn't capped in loader.conf), and the other half is
> eaten by the kernel, but the kernel only uses about half of that (thus 25% of the
> total amount).
> 
> Could this be tweaked by some sysctl OIDs? (I suppose not, but it's worth asking.)

On freebsd-hackers the other day, 
https://lists.freebsd.org/pipermail/freebsd-hackers/2018-November/053575.html, 
it was suggested to set vm.pageout_update_period=0. This sysctl is at 
600 initially.

ZFS' ARC needs to be capped, otherwise it will eat most, if not all, 
of your memory.
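In concrete terms, trying both suggestions could look like this (a sketch; the
8G ARC cap is only an example value, size it for your workload):

# Disable the periodic active queue scan, as suggested in the linked thread
# (takes effect immediately; the echo makes it survive a reboot):
sysctl vm.pageout_update_period=0
echo 'vm.pageout_update_period=0' >> /etc/sysctl.conf

# Cap the ARC; this is a loader tunable, so it applies at the next boot:
echo 'vfs.zfs.arc_max="8G"' >> /boot/loader.conf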

> top, vmstat 1 snapshots and zfs-stats -a are listed below.
> 
> 
> Thanks.
> 
> 
> [root@san01:nginx/vhost.d]# vmstat 1
> procs      memory       page                      disks     faults       cpu
> r b w   avm   fre   flt  re  pi  po    fr    sr da0 da1    in    sy     cs us sy id
> 0 0 38  23G  609M  1544  68 118  64   895   839   0   0  3644  2678    649  0 13 87
> 0 0 53  23G  601M  1507 185 742 315  1780 33523 651 664 56438   785 476583  0 28 72
> 0 0 53  23G  548M  1727 330 809 380  2377 33256 758 763     5  1273 468545  0 26 73
> 0 0 53  23G  528M  1702 239 660 305  1347 32335 611 631 59962  1025 490365  0 22 78
> 0 0 52  23G  854M  2409 309 693 203 97943 16944 525 515 64309  1570 540533  0 29 71
> 3 0 54  23G  1.1G  2756 639 641 149 124049 19531 542 538 64777 1576 553946  0 35 65
> 0 0 53  23G  982M  1694 236 680 282  2754 35602 597 603 66540  1385 583687  0 28 72
> 0 0 41  23G  867M  1882 223 767 307  1162 34936 682 638 67284   780 568818  0 33 67
> 0 0 39  23G  769M  1542 167 673 336  1187 35123 646 610 65925  1176 551623  0 23 77
> 2 0 41  23G  700M  3602 535 688 327  2192 37109 622 594 65862  4256 518934  0 33 67
> 0 0 54  23G  650M  2957 219 726 464  4838 36464 852 868 65384  4110 558132  1 37 62
> 0 0 54  23G  641M  1576 245 730 344  1139 33681 740 679 67216   970 560379  0 31 69
> 
> 
> [root@san01:nginx/vhost.d]# top
> last pid: 55190;  load averages: 11.32, 12.15, 10.76    up 10+16:05:14  14:38:58
> 101 processes: 1 running, 100 sleeping
> CPU:  0.2% user,  0.0% nice, 28.9% system,  1.6% interrupt, 69.3% idle
> Mem: 85M Active, 1528K Inact, 12K Laundry, 62G Wired, 540M Free
> ARC: 31G Total, 19G MFU, 6935M MRU, 2979M Anon, 556M Header, 1046M Other
>  25G Compressed, 34G Uncompressed, 1.39:1 Ratio
> Swap: 32G Total, 1186M Used, 31G Free, 3% Inuse, 7920K In, 3752K Out
>   PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
> 40132 root 131  520  3152M 75876K uwait  14  36:59   6.10% java
> 55142 root   1  200  7904K  2728K CPU20  20   0:00   0.72% top
> 20026 root   1  200   106M  5676K nanslp 28   1:23   0.60% gstat
> 53642 root   1  200  7904K  2896K select 14   0:03   0.58% top
>   977 zfsreplica 1  200 30300K  3568K kqread 21   4:00   0.42% uwsgi
>   968 zfsreplica 1  200 30300K  2224K swread 11   2:03   0.21% uwsgi
>   973 zfsreplica 1  200 30300K  2264K swread 13  12:26   0.13% uwsgi
> 53000 www1  200 23376K  1372K kqread 24   0:00   0.05% nginx
>  1292 root   1  200  6584K  2040K select 29   0:23   0.04% blacklistd
>   776 zabbix 1  200 12408K  4236K nanslp 26   4:42   0.03% zabbix_agentd
>  1289 root   1  200 67760K  5148K select 13   9:50   0.03% bsnmpd
>   777 zabbix 1  200 12408K  1408K select 25   5:06   0.03% zabbix_agentd
>   785 zfsreplica 1  200 27688K  3960K kqread 28   2:04   0.02% uwsgi
>   975 zfsreplica 1  200 30300K   464K kqread 18   2:33   0.02% uwsgi
>   974 zfsreplica 1  200 30300K   480K kqread 30   3:39   0.02% uwsgi
>   965 zfsreplica 1  200 30300K   464K kqread  4   3:23   0.02% uwsgi
>   976 zfsreplica 1  200 30300K   464K kqread 14   2:59   0.01% uwsgi
>   972 zfsreplica 1  200 30300K   464K kqread 10   2:57   0.01% uwsgi
>   963 zfsreplica 1  200 30300K   460K kqread  3   2:45   0.01% uwsgi
>   971 zfsreplica 1  200 30300K   464K kqread 13   3:16   0.01% uwsgi
> 69644 emz1  200 13148K  4596K select 24   0:05   0.01% sshd
> 18203 vryabov1  200 13148K  4624K select  9   0:02   0.01% sshd
>   636 root   1  200  6412K  1884K select 17   4:10   0.01% syslogd
> 51266 emz1  200 13148K  4576K select  5   0:00   0.01% sshd
>   964 zfsreplica 1  200 30300K   460K kqread 18  11:02   0.01% uwsgi
>   962 zfsreplica 1  200 30300K   460K kqread 28   6:56   0.01% uwsgi
>   969 zfsreplica 1  200 30300K   464K kqread 12   2:07   0.01% uwsgi
>   967 zfsreplica 1  200 30300K   464K kqread 27   5:18   0.01% uwsgi
>   970 zfsreplica 1  200 30300K   464K kqread  0   4:25   0.01% uwsgi
>   966 zfsr

plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin

Hello,


I have a recent FreeBSD 11-STABLE which is mainly used as an iSCSI
target. The system has 64G of RAM but is swapping intensively. Yes,
about half of the memory is used by the ZFS ARC (it isn't capped in
loader.conf), and the other half is eaten by the kernel, but the kernel
only uses about half of that (thus 25% of the total amount).


Could this be tweaked by some sysctl OIDs? (I suppose not, but it's worth asking.)

top, vmstat 1 snapshots and zfs-stats -a are listed below.


Thanks.


[root@san01:nginx/vhost.d]# vmstat 1
procs      memory       page                      disks     faults       cpu
r b w   avm   fre   flt  re  pi  po    fr    sr da0 da1    in    sy     cs us sy id
0 0 38  23G  609M  1544  68 118  64   895   839   0   0  3644  2678    649  0 13 87
0 0 53  23G  601M  1507 185 742 315  1780 33523 651 664 56438   785 476583  0 28 72
0 0 53  23G  548M  1727 330 809 380  2377 33256 758 763     5  1273 468545  0 26 73
0 0 53  23G  528M  1702 239 660 305  1347 32335 611 631 59962  1025 490365  0 22 78
0 0 52  23G  854M  2409 309 693 203 97943 16944 525 515 64309  1570 540533  0 29 71
3 0 54  23G  1.1G  2756 639 641 149 124049 19531 542 538 64777 1576 553946  0 35 65
0 0 53  23G  982M  1694 236 680 282  2754 35602 597 603 66540  1385 583687  0 28 72
0 0 41  23G  867M  1882 223 767 307  1162 34936 682 638 67284   780 568818  0 33 67
0 0 39  23G  769M  1542 167 673 336  1187 35123 646 610 65925  1176 551623  0 23 77
2 0 41  23G  700M  3602 535 688 327  2192 37109 622 594 65862  4256 518934  0 33 67
0 0 54  23G  650M  2957 219 726 464  4838 36464 852 868 65384  4110 558132  1 37 62
0 0 54  23G  641M  1576 245 730 344  1139 33681 740 679 67216   970 560379  0 31 69



[root@san01:nginx/vhost.d]# top
last pid: 55190;  load averages: 11.32, 12.15, 10.76    up 10+16:05:14  14:38:58

101 processes: 1 running, 100 sleeping
CPU:  0.2% user,  0.0% nice, 28.9% system,  1.6% interrupt, 69.3% idle
Mem: 85M Active, 1528K Inact, 12K Laundry, 62G Wired, 540M Free
ARC: 31G Total, 19G MFU, 6935M MRU, 2979M Anon, 556M Header, 1046M Other
 25G Compressed, 34G Uncompressed, 1.39:1 Ratio
Swap: 32G Total, 1186M Used, 31G Free, 3% Inuse, 7920K In, 3752K Out
  PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND

40132 root 131  520  3152M 75876K uwait  14  36:59   6.10% java
55142 root   1  200  7904K  2728K CPU20  20   0:00   0.72% top
20026 root   1  200   106M  5676K nanslp 28   1:23   0.60% gstat
53642 root   1  200  7904K  2896K select 14   0:03   0.58% top
  977 zfsreplica 1  200 30300K  3568K kqread 21   4:00   0.42% uwsgi
  968 zfsreplica 1  200 30300K  2224K swread 11   2:03   0.21% uwsgi
  973 zfsreplica 1  200 30300K  2264K swread 13  12:26   0.13% uwsgi

53000 www1  200 23376K  1372K kqread 24   0:00   0.05% nginx
 1292 root   1  200  6584K  2040K select 29   0:23   0.04% blacklistd
  776 zabbix 1  200 12408K  4236K nanslp 26   4:42   0.03% zabbix_agentd
 1289 root   1  200 67760K  5148K select 13   9:50   0.03% bsnmpd
  777 zabbix 1  200 12408K  1408K select 25   5:06   0.03% zabbix_agentd
  785 zfsreplica 1  200 27688K  3960K kqread 28   2:04   0.02% uwsgi
  975 zfsreplica 1  200 30300K   464K kqread 18   2:33   0.02% uwsgi
  974 zfsreplica 1  200 30300K   480K kqread 30   3:39   0.02% uwsgi
  965 zfsreplica 1  200 30300K   464K kqread  4   3:23   0.02% uwsgi
  976 zfsreplica 1  200 30300K   464K kqread 14   2:59   0.01% uwsgi
  972 zfsreplica 1  200 30300K   464K kqread 10   2:57   0.01% uwsgi
  963 zfsreplica 1  200 30300K   460K kqread  3   2:45   0.01% uwsgi
  971 zfsreplica 1  200 30300K   464K kqread 13   3:16   0.01% uwsgi

69644 emz1  200 13148K  4596K select 24   0:05   0.01% sshd
18203 vryabov1  200 13148K  4624K select  9   0:02   0.01% sshd
  636 root   1  200  6412K  1884K select 17   4:10   0.01% syslogd

51266 emz1  200 13148K  4576K select  5   0:00   0.01% sshd
  964 zfsreplica 1  200 30300K   460K kqread 18  11:02   0.01% uwsgi
  962 zfsreplica 1  200 30300K   460K kqread 28   6:56   0.01% uwsgi
  969 zfsreplica 1  200 30300K   464K kqread 12   2:07   0.01% uwsgi
  967 zfsreplica 1  200 30300K   464K kqread 27   5:18   0.01% uwsgi
  970 zfsreplica 1  200 30300K   464K kqread  0   4:25   0.01% uwsgi
  966 zfsreplica 1  220 30300K   468K kqread 14   4:29   0.01% uwsgi

53001 www1  200 23376K  1256K kqread 10   0:00   0.01% nginx
  791 zfsreplica 1  200 27664K  4244K kqread 17   1:34   0.01% uwsgi

52431 root   1  200 17132K  4492K select 21   0:00   0.01% mc
70013 root   1  200 17132K  4492K select  4   0:03   0.01% mc
  870 root   1  200 12448K 12544K select 19   0:51

Re: Memory error logged in /var/log/messages

2018-11-20 Thread Alfred Bartsch


On 19.11.18 at 14:10, Patrick M. Hausen wrote:
> Hi all,
> 
> one of our production servers, 11.2p3 is logging this every couple of minutes:
> 
> Nov 19 11:48:06 ph002 kernel: MCA: CPU 0 COR (5) OVER MS channel 3 memory error
> Nov 19 11:48:06 ph002 kernel: MCA: Address 0x1f709a48c0
> Nov 19 11:48:06 ph002 kernel: MCA: Misc 0x9001040188c
> Nov 19 11:48:06 ph002 kernel: MCA: Bank 12, Status 0xcc00010c000800c3
> Nov 19 11:48:06 ph002 kernel: MCA: Global Cap 0x07000c16, Status 0x
> Nov 19 11:48:06 ph002 kernel: MCA: Vendor "GenuineIntel", ID 0x406f1, APIC ID 0
> 
> Address and core vary, but it is always bank 12.
> 
> It seems like applications are unaffected; we use, of course, ECC memory.
> 
> Is the OS able to work around these errors and just notify us, or is in-memory
> data already getting corrupted?
> 
> I’m at a bit of a loss identifying which DIMM might be the cause, so I contacted
> Supermicro support. They answered:
> 
>> We can't really answer this; we do not know how various OSes map the memory
>> slots.
>> Our advice is always to look at IPMI, but if that doesn't log any issues,
>> then we're not sure you're looking at a hardware issue.
>>
>> But assuming the OS treats the ranks of a module as a bank, and you use
>> dual-rank memory, then it should logically point at DIMMC2.
> 
> They are right on the IPMI (I told them when opening the case): there’s
> nothing at all in the event log.
> 
> Can they be correct that it might not even be a hardware issue?
> If not, how can I be sure which DIMM is to blame? Spare parts are ready, but
> I’d like to have a rather short maintenance break outside regular business hours.
> 
> I’ll attach a dmesg.boot. HW is an X10DRW-NT mainboard, SYS-1028R-WTNRT server
> platform.
> 
> Thanks for any hints,
> Patrick
> 
> 

Hi Patrick,
we had a similar experience with one of our servers (HP DL380 G7): tons
of MCA errors concerning a single memory bank. This bank number did not
correspond to a specific memory slot (HP numbers them from A to I for
each CPU). iLO and mcelog output were not of any help to me.
We did not notice any data loss, but to get rid of these annoying
messages, I did the following:
After taking the server out of production, I removed pairs of memory
modules until the MCA messages stopped. The last removed pair then
contained the problematic module. Re-adding one of these last modules
left a 50-percent chance of identifying the defective module. After
replacing this module, the server no longer complained about memory
problems.

There should definitely be a more sophisticated method to identify
problematic memory modules. Perhaps there is someone on the list who is
able to shed some light on this kind of error.
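One place to start, short of pulling modules, is to compare the board's DIMM
labels with what the firmware reports; a small sketch using the
sysutils/dmidecode port (mapping those locators onto an MCA bank number is
still chipset-specific guesswork):

# List populated memory slots with their board labels, sizes and serial numbers
dmidecode -t memory | egrep 'Locator|Size|Serial|Bank'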

-- 
Sincerely
Alfred Bartsch
Data-Service GmbH
Beethovenstr. 2A
23617 Stockelsdorf
fon: +49 451 490010 fax: +49 451 4900123
Amtsgericht Lübeck, HRB 318 BS
Geschäftsführer: Wilfried Paepcke, Dr. Andreas Longwitz, Dr. Hans-Martin
Rasch, Dr. Uwe Szyszka