Hi Sunay,

> The limitations is typically based on system size, CPU and memory
> available etc. We might have some issues with the hash scaling as
> well. What kind of system and RAM do you have? We should try it
> inhouse and see what we get to.

The system that ran into trouble at ~11,500 flows was an x86 box with 1G of 
memory (512M swap). I repeated the test on another x86 with 1G of memory, this 
one configured with 2G of swap. This second system survived the creation of 12K 
flows; on it, the memory requirement for the 12K flows appears to be in the 
area of 500M.


> BTW, you do know that you can create flows on remote subnets as
> well (although we don't allow subnets and IP addresses to mix
> right now). See if you can make work with subnets and reduce the
> number of flows you need while we look at this issue.

Yes, I realize it would be more practical to define flows at the subnet level, 
and the commands I used to test flow creation may indeed suggest that each IP 
in a range would get the same bandwidth ;-).

However, in the actual application these experiments are conducted for, the 
remote_ip's would not fall cleanly within subnet ranges and, more importantly, 
they would each require an individual 'maxbw' parameter setting.
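For reference, the per-remote-IP flow creation I was testing looks roughly like 
the sketch below. The link name, address range, flow names, and bandwidth caps 
are placeholders (not my real test values), and the script only echoes the 
flowadm commands rather than running them:

```shell
#!/bin/sh
# Sketch: one flow per remote IP, each with its own maxbw cap.
# Link name and addresses are made-up examples; "echo" is used so the
# generated commands can be inspected before actually running them.
link=e1000g0
i=1
while [ $i -le 3 ]; do
    # each remote IP gets a separate flow and an individual maxbw property
    echo flowadm add-flow -l $link -a remote_ip=10.0.0.$i -p maxbw=${i}M flow-$i
    i=$((i + 1))
done
```

In the real runs, the loop bound was in the 11K-12K range rather than 3, which 
is where the memory pressure described above showed up.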

A related question: which API would be most suitable for an application that 
would try to maintain a high number of non-permanent flows in a near real-time 
fashion?

BR,
Michael A
-- 
This message posted from opensolaris.org