Re: Default net.ipv6.mld_max_msf = 10 and net.core.optmem_max=10240

2005-12-15 Thread Stig Venaas
On Wed, Dec 14, 2005 at 03:39:50PM -0800, David S. Miller wrote:
 From: Hoerdt Mickael [EMAIL PROTECTED]
 Date: Wed, 14 Dec 2005 23:38:56 +0100
 
  As implemented now, the default memory allocated in net.core.optmem_max
  permits joining up to 320 (S,G) channels per socket (for IPv6, each
  channel costs 32 bytes of net.core.optmem_max). The thing is that
  net.ipv6.mld_max_msf sets a hard limit on it, so assuming that you
  don't change the value of net.core.optmem_max, would it make sense to
  increase net.ipv6.mld_max_msf to, let's say, 256? The rest of the
  memory can still be used for various option setup on the socket.
 
 I think people running programs that need the higher value
 can increase the limit.  This is no different than having
 to tweak tcp_wmem[] or the socket buffering limits via
 sysctl.

Right. I'm happy with the default net.core.optmem_max limit, but
the default of 10 for net.ipv6.mld_max_msf is really too low,
and I can't see any problems with increasing it to 64, as David
Stevens suggests.

Stig
-
To unsubscribe from this list: send the line unsubscribe netdev in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Default net.ipv6.mld_max_msf = 10 and net.core.optmem_max=10240

2005-12-14 Thread Hoerdt Mickael

Hi David and all,

As implemented now, the default memory allocated in net.core.optmem_max
permits joining up to 320 (S,G) channels per socket (for IPv6, each
channel costs 32 bytes of net.core.optmem_max). The thing is that
net.ipv6.mld_max_msf sets a hard limit on it, so assuming that you
don't change the value of net.core.optmem_max, would it make sense to
increase net.ipv6.mld_max_msf to, let's say, 256? The rest of the
memory can still be used for various option setup on the socket.
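The figures in this message can be sanity-checked with a little arithmetic (a sketch; the 32-byte per-source cost and the defaults are the numbers quoted in this thread, and the kernel's exact accounting may differ):

```python
# Back-of-the-envelope check of the numbers quoted in this thread.
OPTMEM_MAX = 10240       # default net.core.optmem_max (bytes)
BYTES_PER_SOURCE = 32    # quoted cost per IPv6 (S,G) filter entry
MLD_MAX_MSF = 10         # default net.ipv6.mld_max_msf

max_by_memory = OPTMEM_MAX // BYTES_PER_SOURCE   # memory alone would allow 320
effective_cap = min(max_by_memory, MLD_MAX_MSF)  # but the sysctl caps it at 10

print(max_by_memory, effective_cap)
```

So the memory budget permits 32x more sources than the sysctl ever allows, which is the mismatch being pointed out.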

Cheers,

Hoerdt Mickaël

David Stevens wrote:

[I'm CC-ing Dave Miller and Yoshifuji Hideaki; you probably ought to
bring this up on [EMAIL PROTECTED].]

Hoerdt,
   I don't object to increasing the default, but how much is a good
question. For an include-mode filter, it'll do a linear search on the
sources for every packet received for that group. If those are large
numbers, then an administrator should decide that's a good use of the
machine, I think.
   The reports are (roughly) an n^2 algorithm in the number of
sources. The per-packet filtering can be improved by using a hash for
source look-ups, but I don't think there's a significant improvement
for report computations (it's n^3 in the obvious way, so n^2 is
already pretty good).
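The per-packet cost Stevens describes can be illustrated with a toy model (plain illustrative Python, not the kernel code, which lives in net/ipv6/mcast.c):

```python
# Toy model of an include-mode source filter check. The kernel scans
# the socket's source list linearly for every packet received for the
# group, so a long list has a per-packet cost; a hash-based lookup
# would make the per-packet check roughly constant time on average.
def passes_filter_linear(src, include_list):
    for s in include_list:          # O(n) scan per packet
        if s == src:
            return True
    return False

def passes_filter_hashed(src, include_set):
    return src in include_set       # O(1) average with a hash set

sources = ["2001:db8::%d" % i for i in range(300)]
assert passes_filter_linear("2001:db8::7", sources)
assert passes_filter_hashed("2001:db8::7", set(sources))
assert not passes_filter_linear("2001:db8::dead", sources)
```

With hundreds of sources per group, that linear scan is exactly why a high default needs the stress testing discussed below.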
   I've done testing with hundreds of sources and no apparent
performance problems (though performance isn't what I was testing). I
don't know what a reasonable limit on reasonable hardware is.
   Like the per-socket group limit, this one is probably too low for
common applications, and also like that, easily evaded. 1024 or 2048
as the default seems high to me, on the assumption that a few apps
doing that would kill performance, but since I haven't tested, I don't
really know.
   I also see it appears not to be enforced in the full-state API (an
oversight, unless I'm missing the check when I look now).

   I don't see any problem with bumping this up to, say, 64,
immediately, which would solve the immediate problem, I guess. But I'm
not the maintainer. :-) I think some stress testing to show how well
this scales for higher numbers would be appropriate before going too
high. If you have numbers (or can get them), that'd help. I wouldn't
mind doing some tests along these lines myself, but I don't expect to
have much uncommitted time available through December.

   +-DLS

Hoerdt Mickael [EMAIL PROTECTED] wrote on 11/30/2005 08:29:51 AM:
Hello David,

It seems to me that the net.ipv6.mld_max_msf and igmp_max_msf default
values are a little bit too low for multi-source multicast
applications. On the M6bone, we are using a piece of software named
dbeacon (http://mars.innerghost.net.ipv4.sixxs.org/matrix/) which
joins a high number of SSM sources (currently up to 30) on the same
socket.

This creates a management problem when users install it: the root
admin must change this value, but dbeacon is run by normal users on
the hosts.

For layered multicast, this can be a problem too. It's easy to imagine
a flow with 256 different layers; the FLUTE application is one
implementation of this layered multicast concept
(http://atm.tut.fi/mad/). Could it be possible to increase this
default value to, let's say, 1024 or 2048? If not possible, could you
tell me why, and then we may consider developing an application layer
instantiating several sockets for joining a high number of SSM
channels per application.

Thank you,

Hoerdt Mickaël

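The application-layer workaround Hoerdt mentions above (spreading joins across several sockets) comes down to simple arithmetic (a sketch; the socket handling itself is omitted):

```python
import math

# If the per-socket cap (net.ipv6.mld_max_msf) stays low, an application
# wanting n SSM channels must spread them over ceil(n / cap) sockets.
def sockets_needed(n_channels, per_socket_cap=10):
    return math.ceil(n_channels / per_socket_cap)

print(sockets_needed(30))        # dbeacon's ~30 sources -> 3 sockets
print(sockets_needed(256, 10))   # a 256-layer flow -> 26 sockets
print(sockets_needed(256, 64))   # with the proposed cap of 64 -> 4
```

Workable, but each extra socket also receives every packet for its groups, so the multi-socket workaround trades the sysctl limit for extra per-process bookkeeping.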


Re: Default net.ipv6.mld_max_msf = 10 and net.core.optmem_max=10240

2005-12-14 Thread David S. Miller
From: Hoerdt Mickael [EMAIL PROTECTED]
Date: Wed, 14 Dec 2005 23:38:56 +0100

 As implemented now, the default memory allocated in net.core.optmem_max
 permits joining up to 320 (S,G) channels per socket (for IPv6, each
 channel costs 32 bytes of net.core.optmem_max). The thing is that
 net.ipv6.mld_max_msf sets a hard limit on it, so assuming that you
 don't change the value of net.core.optmem_max, would it make sense to
 increase net.ipv6.mld_max_msf to, let's say, 256? The rest of the
 memory can still be used for various option setup on the socket.

I think people running programs that need the higher value
can increase the limit.  This is no different than having
to tweak tcp_wmem[] or the socket buffering limits via
sysctl.
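The adjustment Miller refers to is the ordinary sysctl mechanism; on Linux the same knobs are exposed under /proc/sys (a minimal sketch assuming the standard procfs layout; writing requires root, e.g. `sysctl -w net.ipv6.mld_max_msf=64`):

```python
# Map a dotted sysctl name to its procfs path (standard Linux layout).
def sysctl_path(name):
    return "/proc/sys/" + name.replace(".", "/")

def read_sysctl(name):
    # Reading usually works for any user on Linux; writing needs root.
    with open(sysctl_path(name)) as f:
        return f.read().strip()

print(sysctl_path("net.ipv6.mld_max_msf"))
```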