On Tuesday 13 March 2007 10:32, Evgeniy Polyakov wrote:
> On Fri, Mar 02, 2007 at 11:52:47AM +0300, Evgeniy Polyakov
> ([EMAIL PROTECTED]) wrote:
> So, I ask the network developers about a testing environment for socket
> lookup benchmarking. What would be the best test case to determine the
> performance of the lookup algo? Is it enough to replace the algo and the
> locking, create say one million connections and try to run a trivial
> web server (that is what I'm going to test if there is no better
> suggestion, but I only have a single-core Athlon 64 with 1 GB of RAM as
> a test bed and two Core Duo machines as generators; I can probably use
> one of them as a test machine too. They have gigabit adapters and are
> connected over a gigabit switch)?

One million concurrent sockets on your machines will be tricky :)

$ egrep "(filp|dent|^TCP|sock_inode_cache)" /proc/slabinfo |cut -c1-40
TCP                   12     14   1152  
sock_inode_cache     423    430    384  
dentry_cache       36996  47850    132  
filp                4081   4680    192  

That means at least 1860 bytes of LOWMEM per TCP socket on a 32-bit kernel
(1152 + 384 + 132 + 192 from the object sizes above), and about 2512 bytes
on a 64-bit kernel.
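
A quick back-of-the-envelope check for the one-million-socket case, using
only the object sizes above and ignoring skbs, hash tables and application
state:

  32-bit: 1860 bytes * 1e6 sockets ~= 1.8 GB of LOWMEM
  64-bit: 2512 bytes * 1e6 sockets ~= 2.5 GB

so this alone already exceeds the 1 GB of RAM on the proposed Athlon 64
test bed.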

I had a bench program for this but apparently I lost it :(
It was able to open long-lived sockets (one million, given enough memory)
and generated more or less random traffic on all of them. Damned.
The 'server' side had to listen on many (>16) ports because of the 65536
limit of the 16-bit port space.
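
For what it's worth, here is a minimal sketch of what the client side of
such a tool might look like. This is not the lost program: the server
address, port range and connection count are made up for illustration, it
generates no traffic yet, and it assumes the fd limit (ulimit -n) has been
raised far enough.

/* Open many long-lived TCP connections, spread over a range of server
 * ports so that no single (saddr, daddr, dport) tuple runs out of its
 * 64k source ports.  Hypothetical sketch only.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SERVER_IP   "192.168.0.1"	/* hypothetical test server */
#define FIRST_PORT  8000		/* hypothetical port range ... */
#define NUM_PORTS   32			/* ... >16 ports, per the 64k limit */
#define NUM_CONNS   100000		/* scale up as memory allows */

int main(void)
{
	int *fds = malloc(sizeof(int) * NUM_CONNS);
	int i, opened = 0;

	if (!fds)
		return 1;

	for (i = 0; i < NUM_CONNS; i++) {
		struct sockaddr_in sa;
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0)
			break;			/* probably hit the fd limit */

		memset(&sa, 0, sizeof(sa));
		sa.sin_family = AF_INET;
		sa.sin_port = htons(FIRST_PORT + i % NUM_PORTS);
		inet_pton(AF_INET, SERVER_IP, &sa.sin_addr);

		if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
			close(fd);
			break;
		}
		fds[opened++] = fd;
	}

	printf("opened %d long-lived connections\n", opened);
	pause();	/* keep the sockets established for the lookup test */
	return 0;
}

On top of that one would add a loop (or an epoll set) doing small
reads/writes on random fds to produce the kind of random traffic the
original tool generated.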