> On 27. Jun 2025, at 07:53, Ben Hutton <b...@benhutton.com.au> wrote:
> 
> Hi Michael,
> The VM has 8G of memory.
> $ netstat -m 
> 1590074/4231/1594305 mbufs in use (current/cache/total) 
> 797974/2592/800566/1800796 mbuf clusters in use (current/cache/total/max) 
> 797974/790 mbuf+clusters out of packet secondary zone in use (current/cache) 
> 644657/1542/646199/1550398 4k (page size) jumbo clusters in use 
> (current/cache/total/max) 
> 0/0/0/74192 9k jumbo clusters in use (current/cache/total/max) 
> 0/0/0/41733 16k jumbo clusters in use (current/cache/total/max) 
> 4572094K/12409K/4584504K bytes allocated to network (current/cache/total) 
> 0/8507/8489 requests for mbufs denied (mbufs/clusters/mbuf+clusters) 
> 0/30432/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters) 
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k) 
> 485354407/0/0 requests for jumbo clusters denied (4k/9k/16k) 
> 2 sendfile syscalls 
> 2 sendfile syscalls completed without I/O request 
> 0 requests for I/O initiated by sendfile 
> 0 pages read by sendfile as part of a request 
> 2 pages were valid at time of a sendfile request 
> 0 pages were valid and substituted to bogus page 
> 0 pages were requested for read ahead by applications 
> 0 pages were read ahead by sendfile 
> 0 times sendfile encountered an already busy page 
> 0 requests for sfbufs denied 
> 0 requests for sfbufs delayed

OK. You can use netstat -x to see the occupancy of the send and
receive buffers for your TCP connections. The output contains the IP
addresses of your current connections, so you might not want to post
it here. But you can remove the IP addresses from it and send it to me...
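
For example, something along these lines should scrub IPv4 addresses
from the output before sharing (an untested sketch; adjust the pattern
if you also have IPv6 connections):

$ netstat -x | sed -E 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/x.x.x.x/g' > netstat-x.txt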

Best regards
Michael
> Kind regards
> Ben
> On 27/06/2025 12:52, Michael Tuexen wrote:
>>> On 27. Jun 2025, at 04:17, Ben Hutton <b...@benhutton.com.au> wrote:
>>> 
>>> Hi,
>>> I'm currently having an issue with a Spring Boot application (with nginx in 
>>> front on the same instance) running on FreeBSD 14.1 in AWS. At present, two 
>>> of our instances have had the application go offline with the following 
>>> appearing in /var/log/messages:
>>> Jun 26 07:57:47 freebsd kernel: [zone: mbuf_jumbo_page] kern.ipc.nmbjumbop 
>>> limit reached 
>>> Jun 26 07:57:47 freebsd kernel: [zone: mbuf_cluster] kern.ipc.nmbclusters 
>>> limit reached 
>>> Jun 26 07:59:34 freebsd kernel: sonewconn: pcb 0xfffff8021bd74000 
>>> (0.0.0.0:443 (proto 6)): Listen queue overflow: 193 already in queue 
>>> awaiting acceptance (104 occurrences), euid 0, rgid 0, jail 0 
>>> Jun 26 08:01:51 freebsd kernel: sonewconn: pcb 0xfffff8021bd74000 
>>> (0.0.0.0:443 (proto 6)): Listen queue overflow: 193 already in queue 
>>> awaiting acceptance (13 occurrences), euid 0, rgid 0, jail 0
>>> 
>>> Each time this has occurred I have increased the nmbjumbop and nmbclusters 
>>> values, the last time by a huge amount to see if we can mitigate the issue. 
>>> Once I adjust the values the application starts responding to requests 
>>> again.
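>>> For reference, I have been raising the limits at runtime roughly like this 
>>> (the values here are just examples, not the ones we actually set), and then 
>>> persisting the new values in /etc/sysctl.conf: 
>>> 
>>> $ sysctl kern.ipc.nmbclusters=2000000 
>>> $ sysctl kern.ipc.nmbjumbop=1600000 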
>>> My question is: is just increasing these limits the correct course of 
>>> action, or should I be investigating something else or adjusting other 
>>> settings accordingly? Also, if this is due to an underlying issue and not 
>>> just network load, how would I get to the root cause? Note that the 
>>> application streams a lot of files in rapid succession, which I suspect is 
>>> what is causing the issue.
>>> 
>> Hi Ben,
>> 
>> How much memory does your VM have? What is the output of
>> netstat -m
>> when the system is in operation?
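>> 
>> Given the sonewconn messages, it might also be worth checking the listen 
>> queue occupancy while the problem is happening, for example with: 
>> 
>> $ netstat -Lan 
>> 
>> (the qlen/incqlen/maxqlen column shows how full each listen queue is).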
>> 
>> Best regards
>> Michael
>> 
>>> Thanks
>>> Ben
>>> 
>>> 
>> 

