Venugopal,

I'm sorry if these sound like basic questions.  I really appreciate the 
patience and the help.  Replies in-line.  

On Nov 24, 2009, at 9:29 AM, venugopal iyer wrote:

> 
> 
> Hi, Cesar:
> 
> On Mon, 23 Nov 2009, Cesar Delgado wrote:
> 
>> I'm setting up a server to go to a hosting site where I have a 1Mbps pipe.  
>> From what I read I know I can't set the limit to this as the lowest setting 
>> is ~1.2Mbps and this is something that's getting worked on in Crossbow2.  I 
>> am seeing some strange behavior.
>> 
>> First, I have a question about flowadm's show-usage command.  When I try to 
>> run show-usage with the name of a flow, I get an error.  The flow exists.  
>> What am I doing wrong?
>> 
>> root@myhost:~# flowadm show-usage -f /var/log/net.log http-flow
>> flowadm: invalid flow: '(null)'
> 
> This is a bug, I have submitted
> 
> 6904427 flowadm show-usage doesn't work with a flow name

Thanks for submitting that.  I haven't been able to find a link to the 
bugtracker for Crossbow.  Could you please send me the URL?

> 
> 
>> 
>> 
>> Ok, now for my problem.  I have the following setting:
>> 
>> root@myhost:~# flowadm show-flowprop http-flow
>> FLOW         PROPERTY        VALUE          DEFAULT        POSSIBLE
>> http-flow    maxbw               1.228      --             1228k
>> http-flow    priority        medium         --             medium
>> 
>> I ran a test hitting the webserver and I see this:
> 
> I have the following flow:
> 
> # flowadm show-flow
> FLOW        LINK        IPADDR                   PROTO  LPORT   RPORT   DSFLD
> tcp-flow    <link>      --                       tcp    --      --      --
> 
> 
> # flowadm show-flowprop tcp-flow
> FLOW         PROPERTY        VALUE          DEFAULT        POSSIBLE
> tcp-flow     maxbw               1.228      --             1228K
> tcp-flow     priority        --             --             ?
> 
> When I send TCP traffic to this machine from a peer for about 2 minutes
> (using netperf as the traffic generator), the sender side on the peer
> reports that I am capped at about 1.14 Mbps:
> 
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
> 49152  49152  49152    120.49      1.14
> 
> Now, when I try show-usage during the traffic flow on
> the machine with the above flow in place (receiver), I am seeing:
> 
> # flowadm show-usage -s 11/24/2009 -f /var/tmp/tcpflow
> FLOW         START         END           RBYTES   OBYTES   BANDWIDTH
> tcp-flow     08:51:48      08:52:08      3428658  107802       1.414 Mbps
> tcp-flow     08:52:08      08:52:28      3431198  107802       1.415 Mbps
> tcp-flow     08:52:28      08:52:48      3434614  107888       1.417 Mbps
> tcp-flow     08:52:48      08:53:08      3443298  107802       1.420 Mbps
> tcp-flow     08:53:08      08:53:28      3444324  107802       1.420 Mbps
> tcp-flow     08:53:28      08:53:48      1376806  43576    0.568 Mbps
> 
> ...
> 
> I think the difference you see is likely because of the time at which
> the stats are written to the file (the bandwidth is computed for each
> 20-second period, which might not be exactly in sync with the bandwidth
> enforcement period in the kernel), and could also be due to rounding, etc.
> But if you look at the entire duration, it averages out to about the
> configured limit (in the above example, I think it is about 1.275 Mbps
> for the 2 min duration).
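
That back-of-envelope math checks out for me too: each 20-second row appears to 
be (RBYTES + OBYTES) * 8 / 20 (for the first row, (3428658 + 107802) * 8 / 20 
comes to about 1.414 Mbps), and summing all six rows gives 
(18558898 + 582586) * 8 / 120, roughly 1.28 Mbps over the full 2 minutes, which 
is right around the configured limit.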

The way I'm testing it is by setting up Apache and then pulling down a file with 
`wget`.  The use case for this machine is an Apache-based app that serves large 
files to customers, which is why I think `wget` is more telling of "real" 
performance than netperf.  I'm running the test again, and on the client side I 
am seeing usage over the maxbw limit I have set.  `wget` is reporting about a 
2 Mbps transfer rate, which is much closer to what I was seeing in the 
show-usage statistics.  

[cdelgado@Bluegene tmp]$ wget sol/myfile.dat
--10:01:30--  http://sol/myfile.dat
Resolving sol... 192.168.69.104
Connecting to sol|192.168.69.104|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M)
Saving to: `myfile.dat'

 5% [==>                                                      ] 55,530,974   267K/s  eta 60m 44s
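
(For what it's worth, wget's 267 KB/s works out to roughly 267 * 8 / 1000, i.e. 
a bit over 2 Mbps, which is where the ~2 Mbps figure above comes from and is in 
the same ballpark as the ~2.3 Mbps intervals show-usage was reporting.)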


> 
> BTW, setting a maxbw for a link (dladm) doesn't really impact the
> flow, as the bandwidth limits for the two are independent.

Thank you for this clarification, but I still don't understand how I can be 
seeing a ~2 Mbps transfer if the link and the flow are both capped at 1.2 Mbps.
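
For reference, both caps were set with something along these lines (the exact 
invocations are from memory, so treat them as approximate; in particular, 
local_port=80 is just how I recall scoping the HTTP flow):

# dladm set-linkprop -p maxbw=1228K e1000g0
# flowadm add-flow -l e1000g0 -a transport=tcp,local_port=80 http-flow
# flowadm set-flowprop -p maxbw=1228K http-flow
# acctadm -e extended -f /var/log/net.log net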
 

>> 
>> root@myhost:~# flowadm show-usage -s 11/23/2009,01:32:22 -e 
>> 11/23/2009,01:46:22 -f /var/log/net.log | grep -v "0 Mbps\|^FLOW"
>> http-flow    01:32:22      01:32:42      1512     2571     0.001 Mbps
>> ssh-flow     01:32:42      01:33:02      1818     3578     0.002 Mbps
>> http-flow    01:33:02      01:33:22      66917    3165136      1.292 Mbps
>> ssh-flow     01:33:02      01:33:22      3618     5344     0.003 Mbps
>> http-flow    01:33:22      01:33:42      117947   5713018      2.332 Mbps
>> ssh-flow     01:33:22      01:33:42      4182     3020     0.002 Mbps
>> http-flow    01:33:42      01:34:02      118998   5685520      2.321 Mbps
>> ssh-flow     01:33:42      01:34:02      11616    9924     0.008 Mbps
>> http-flow    01:34:02      01:34:22      117084   5725664      2.337 Mbps
>> http-flow    01:34:22      01:34:42      119130   5725168      2.337 Mbps
>> http-flow    01:34:42      01:35:02      114180   5725168      2.335 Mbps
>> http-flow    01:35:02      01:35:22      109230   5725664      2.333 Mbps
>> http-flow    01:35:22      01:35:42      116160   5725168      2.336 Mbps
>> http-flow    01:35:42      01:36:02      119262   5725168      2.337 Mbps
>> http-flow    01:36:02      01:36:22      119196   5725664      2.337 Mbps
>> http-flow    01:36:22      01:36:42      117216   5725168      2.336 Mbps
>> http-flow    01:36:42      01:37:02      119394   5722636      2.336 Mbps
>> http-flow    01:37:02      01:37:22      119526   5725168      2.337 Mbps
>> http-flow    01:37:22      01:37:42      119460   5725168      2.337 Mbps
>> http-flow    01:37:42      01:38:02      119460   5725664      2.338 Mbps
>> http-flow    01:38:02      01:38:22      119724   5725168      2.337 Mbps
>> http-flow    01:38:22      01:38:42      119724   5725168      2.337 Mbps
>> http-flow    01:38:42      01:39:02      119130   5722636      2.336 Mbps
>> http-flow    01:39:02      01:39:22      118866   5725168      2.337 Mbps
>> http-flow    01:39:22      01:39:42      116490   5725664      2.336 Mbps
>> http-flow    01:39:42      01:40:02      119790   5725168      2.337 Mbps
>> http-flow    01:40:02      01:40:22      117678   5725168      2.337 Mbps
>> http-flow    01:40:22      01:40:42      118668   5725664      2.337 Mbps
>> http-flow    01:40:42      01:41:02      117414   5725168      2.337 Mbps
>> http-flow    01:41:02      01:41:22      119790   5725168      2.337 Mbps
>> http-flow    01:41:22      01:41:42      119813   5720510      2.336 Mbps
>> http-flow    01:41:42      01:42:02      119394   5725664      2.338 Mbps
>> http-flow    01:42:02      01:42:22      119724   5722272      2.336 Mbps
>> http-flow    01:42:22      01:42:42      119526   5725664      2.338 Mbps
>> http-flow    01:42:42      01:43:02      119196   5722140      2.336 Mbps
>> http-flow    01:43:02      01:43:22      119394   5725664      2.338 Mbps
>> http-flow    01:43:22      01:43:42      119658   5725168      2.337 Mbps
>> http-flow    01:43:42      01:44:02      119064   5725168      2.337 Mbps
>> http-flow    01:44:02      01:44:22      113256   5676668      2.315 Mbps
>> ssh-flow     01:44:02      01:44:22      18414    49646    0.027 Mbps
>> http-flow    01:44:22      01:44:42      118206   5725664      2.337 Mbps
>> http-flow    01:44:42      01:45:02      117282   5722140      2.335 Mbps
>> ssh-flow     01:44:42      01:45:02      4698     3544     0.003 Mbps
>> http-flow    01:45:02      01:45:22      118536   5688284      2.322 Mbps
>> ssh-flow     01:45:02      01:45:22      4092     3198     0.002 Mbps
>> http-flow    01:45:22      01:45:42      119130   5725168      2.337 Mbps
>> ssh-flow     01:45:22      01:45:42      1980     1478     0.001 Mbps
>> 
>> That's above the flow's maxbw parameter.  After that I tried changing the 
>> link's maxbw with dladm, which brought the bandwidth down, but still not 
>> to 1.2 Mbps.
>> 
>> root@myhost:~# dladm show-linkprop -p maxbw e1000g0
>> LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
>> e1000g0      maxbw           rw       1.228      --             --
>> 
>> root@myhost:~# flowadm show-usage -s 11/23/2009,01:46:02 -e 
>> 11/23/2009,01:46:22 -f /var/log/net.log | grep -v "0 Mbps\|^FLOW"
>> http-flow    01:46:02      01:46:22      119394   5725168      2.337 Mbps
>> ssh-flow     01:46:02      01:46:22      4680     2980     0.003 Mbps
>> http-flow    01:46:22      01:46:42      94314    4520316      1.845 Mbps
>> 
>> Any ideas, or is there a subtlety that I'm missing and the behavior is 
>> actually correct?
>> 
>> Thanks for the help.
>> 
>> -Cesar
>> 
