My previous post made me realize that the current patch might still not
be optimal in some situations...

If someone has a default socket buffer size which is greater than our
default (currently 128K), then we should probably respect that.

At the moment, if the end user doesn't override the buffer size in their
configuration, then regardless of the OS-defined buffer size we will change
the buffer size to our default, which could mean downsizing the buffer.

Let's say someone has a high-performance network and sized their UDP buffers
to 1MB (as part of tuning the OS); then unless the administrator also puts
1MB in the SNMP buffer configuration, we will downsize the buffer to 128K.

So my suggestion (which combines the previous posting and this posting) is:

If (server)
        DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE * 2;
else
        DEFAULT_BUFFER = SNMP_MAX_PDU_SIZE;

if (valid snmp udp buffer size has been specified in config file) {
  change udp buffer to the specified size
  (this could mean upsize/downsize ... or Super Size for only $.40 more ;-)
} else if (current OS udp buffer < DEFAULT_BUFFER) {
  upsize udp buffer to the DEFAULT_BUFFER size
}

This leaves an OS buffer which is bigger than the DEFAULT_BUFFER untouched.
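For clarity, here is a minimal C sketch of that policy (a hypothetical helper, not the actual patch; `SNMP_MAX_PDU_SIZE` is hardcoded here purely for illustration):

```c
#include <sys/socket.h>
#include <sys/types.h>

#define SNMP_MAX_PDU_SIZE 0x10000   /* 64K, the value discussed below */

/* Hypothetical helper: opt is SO_RCVBUF or SO_SNDBUF.  explicit_size > 0
 * means the user configured a size (honoured as-is, up or down);
 * 0 means unconfigured, in which case we only grow toward the default
 * and never shrink a larger OS buffer. */
static int
apply_buffer_policy(int fd, int opt, int explicit_size, int is_server)
{
    int cur = 0;
    socklen_t len = sizeof(cur);
    int dflt = is_server ? SNMP_MAX_PDU_SIZE * 2 : SNMP_MAX_PDU_SIZE;

    if (explicit_size > 0)          /* config always wins, up or down */
        return setsockopt(fd, SOL_SOCKET, opt, &explicit_size,
                          sizeof(explicit_size));

    if (getsockopt(fd, SOL_SOCKET, opt, &cur, &len) < 0)
        return -1;

    if (cur < dflt)                 /* OS default too small: upsize */
        return setsockopt(fd, SOL_SOCKET, opt, &dflt, sizeof(dflt));

    return 0;                       /* OS buffer already larger: untouched */
}
```

Note that on some systems (e.g. Linux) the kernel may round or cap the value you set, so reading it back can return something other than the requested size.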

The code change required for this is minimal ... and I could code it up if
you think it makes sense.

-- Geert


-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Geert De
Peuter
Sent: Sunday, October 03, 2004 9:23 AM
To: [EMAIL PROTECTED]; 'John Naylon'
Subject: RE: default sock buffer size: what should it be?


The recent addition only allows tweaking of the UDP buffers.

First, a note on why I "forgot" about TCP.
On the TCP side I looked at the code and saw that the return code of the
send is intercepted.  I therefore made the (wrong) assumption that this
return code would be interpreted appropriately and a failed PDU would be
resent instead of freed, as I think it is right now :-(. I haven't looked at
the changes required to fix this at the TCP level either, but if guaranteed
delivery (to the socket layer) is needed for TCP, an internal buffer should
probably be kept, and the sender should maintain its registration in the
select() loop until the packet has been delivered.  (I have to admit I have
probably not spent enough time in that piece of code to make more
assumptions here.) So for TCP buffers I have doubts whether "we" should tune
those at the OS level like we do with UDP (yes, with UDP there is no
alternative).
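Just to illustrate what I mean by keeping an internal buffer: something along these lines, purely a sketch (the helper name and interface are made up, and it assumes a non-blocking fd whose owner re-invokes this from its select() loop while it returns 0):

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Instead of freeing the PDU on a short or failed send, remember how far
 * we got in *off and retry later.  Returns 1 once the whole buffer has
 * been handed to the socket layer, 0 if the caller should keep the fd
 * registered for write and retry, -1 on a hard error. */
static int
send_pending(int fd, const unsigned char *buf, size_t len, size_t *off)
{
    while (*off < len) {
        ssize_t n = send(fd, buf + *off, len - *off, 0);

        if (n > 0) {
            *off += (size_t)n;
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;    /* socket buffer full: try again later */
        return -1;       /* real error: caller decides what to do */
    }
    return 1;            /* fully delivered to the socket layer */
}
```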

This patch to allow UDP buffer tuning was written because it was assumed a
1<<17 (128K) buffer is good enough for everyone.  Sadly it sometimes isn't,
and there was no way to change this hardcoded 1<<17 limit (short of changing
the code).

The new default behaves "exactly" as before.  This means you will
normally end up with a 128K receive and send buffer (yes, normally, because
the tuning parameters in the OS have to allow this size).  I'm in favour of
not changing default behaviour unless there is a reason to, but I have to
agree that this might be a bit excessive for most get/set operations.

As you said, SNMP_MAX_PDU_SIZE is 64K, which means that any application
doing synchronous network calls (as far as I can see, that is every SNMP
client application) will never need more than 64K.  Therefore the default
send and receive buffers should be 64K for client apps using synchronous
network calls.

For server applications I would keep the default SNMP_MAX_PDU_SIZE * 2 but
recommend higher values (if memory is not an issue).

The patch allows everyone to tune their send and receive buffers anyway.  If
people are really short on memory, they can even set the buffers below
SNMP_MAX_PDU_SIZE.

So my recommendation is:
1) use SNMP_MAX_PDU_SIZE for send/receive buffers of client apps
2) use SNMP_MAX_PDU_SIZE * 2 for send/receive buffers of server apps

This would save you some memory on the client apps (while still staying in
the safe range).

On the server side I wouldn't change anything by default.
High-volume trap receivers should be aware of potential buffer overflow
issues and should tune their buffers accordingly to minimize loss
(SNMP_MAX_PDU_SIZE * 16, for example, which is close to the default Sun
suggests for their high-performance networking - HPPI/P) ... I'm sure one
size won't fit all in this area.
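For completeness, this is the kind of configuration I have in mind (the directive names and sizes below are only illustrative -- check the patch for the real ones):

```
# snmpd.conf / snmptrapd.conf -- sizes are examples only
serverRecvBuf  1048576    # high-volume trap receiver: SNMP_MAX_PDU_SIZE * 16
serverSendBuf  131072
clientRecvBuf  65536
clientSendBuf  65536
```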

Cheers,
-- Geert

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Robert
Story (Coders)
Sent: Sunday, October 03, 2004 3:33 AM
To: [EMAIL PROTECTED]; John Naylon
Subject: default sock buffer size: what should it be?


With the recent addition of the ability to set the buffer size for UDP and
TCP sockets in 5.2, it seems like a good time to revisit the question of
what the default value should be. The current behavior is to set them to
128k, which seems rather large. The rationale, from the CVS log message:

  - set SO_SNDBUF and SO_RCVBUF to 128Kb for newly-opened UDP sockets,
    to enable large PDUs to be sent and received.  Some
    implementations default very low (Solaris 2.7 8Kb, Linux 2.4
    64Kb).

The other important bit of information about socket buffer size is that it
affects the number of packets that can be queued while a process is busy.
The original author of the patch to allow increased buffer sizes was
motivated by the fact that his snmptrapd was losing packets due to an
insufficient receive buffer.

The new patch allows one to set independent buffer sizes for client vs
server, and send vs receive. Given that SNMP_MAX_PDU_SIZE is less than 64k,
a default buffer size of 128k seems excessive.


I suggest that:


1) snmpd and snmptrapd both set the default receive buffer size to at least
128k, if not more, to minimize the chances of missing packets.

2) if no buffer size is specified, the default should be to use the size
specified by the OS. I'm guessing that the average PDU is pretty small
(<1k), so the OS defaults are probably very safe.
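For what it's worth, the OS default is easy to check programmatically; a quick sketch (the helper name is made up, and the numbers it reports vary by OS and tuning):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Report what the OS gives a fresh UDP socket by default -- i.e. the
 * values we would fall back to if no buffer size were specified. */
static int
query_udp_defaults(int *rcv, int *snd)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    socklen_t len = sizeof(*rcv);

    if (fd < 0)
        return -1;
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, rcv, &len);
    len = sizeof(*snd);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, snd, &len);
    close(fd);
    return 0;
}
```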


Thoughts, opinions and arguments welcomed.


-- 
Robert Story; NET-SNMP Junkie <http://www.net-snmp.org/>
<irc://irc.freenode.net/#net-snmp>
Archive:
<http://sourceforge.net/mailarchive/forum.php?forum=net-snmp-coders>

You are lost in a twisty maze of little standards, all different. 


-------------------------------------------------------
This SF.net email is sponsored by: IT Product Guide on ITManagersJournal Use
IT products in your business? Tell us what you think of them. Give us Your
Opinions, Get Free ThinkGeek Gift Certificates! Click to find out more
http://productguide.itmanagersjournal.com/guidepromo.tmpl
_______________________________________________
Net-snmp-coders mailing list [EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders


