See my answers inline, Al:

> On Jul 28, 2016, at 9:09 AM, MORTON, ALFRED C (AL) <[email protected]> wrote:
> 
> Hi Luis,
> thanks for writing this up, some suggestions/comments 
> on test 1) only so far below,
> Al
> 
>> -----Original Message-----
>> From: [email protected]
>> [mailto:[email protected]] On Behalf Of
>> Luis Gomez
>> Sent: Wednesday, July 27, 2016 6:13 PM
>> To: [email protected]; openflowplugin-dev; Jin Gan;
>> Sanjib Mohapatra
>> Subject: [integration-dev] OpenFlow tests for next ODL perf paper
>> 
>> Hi all,
>> 
>> I got the action point from last S3P call to start some discussion
>> around OpenFlow tests we can do for next ODL perf paper (Boron).
>> 
>> I am not sure we will have time for all the below but ideally I was
>> thinking in 4 tests:
>> 
>> 1) REST programming performance:
>> 
>> - Goal: Measure OF programming rate using NB REST interface
>> - Methodology: Use test REST scripts (flows in datastore) to program
>> 100K flows on an OF network of different size: 16-32-64-128 switches,
>> etc...
>> - Test variations: Use single flow/REST request and multiple flows/REST
>> (bulk) request.
>> - Collected data: controller programming time (from first to last flow)
>> from REST script, switch programming time (from first to last flow)
>> polling the OVS, flow confirmation delay (time after T1) polling the
>> operational DS.
> [ACM] 
> 
> I think it's good to delineate each of these time intervals, 
> especially if we are looking for the long wait or see unexpected waiting.
> IMO, the overall time to conduct the REST programming operation is useful,
> as this is what will be relevant in a production environment.

Right, that is the controller programming time.
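Just to illustrate how the switch programming time and flow confirmation delay can be measured, here is a small polling sketch (names and the polling approach are mine, not from the actual test scripts; in practice `read_count` would wrap an `ovs-ofctl dump-flows` call or a GET against the operational datastore):

```python
import time

def poll_until(read_count, target, interval=0.1, timeout=60.0):
    """Poll read_count() (e.g. OVS flow dump or operational DS GET)
    until it reports `target` flows; return elapsed seconds."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout:
        if read_count() >= target:
            return time.monotonic() - t0
        time.sleep(interval)
    raise TimeoutError("flow count never reached target")
```

The same helper can time both intervals: started at T1 against the switch it gives the switch programming time, and against the operational DS it gives the flow confirmation delay.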

> 
> (My minimal knowledge of the NB interface is about to show, mea culpa)
> It seems to me that 100K is a large number, and that if the controller
> started the operation when the first flow requests arrive on NB, the 
> overall programming time (relevant in production) would be reduced.

It is actually the other way around: today the overhead of the REST transaction that writes into the datastore is much higher than the cost of the data the transaction carries. It is also well known that the bottleneck in this test case is the controller programming side; once the flow data is in the datastore, the southbound flow programming rate is orders of magnitude faster than the REST rate.

> 
> If each bulk request must be completely received before any SB action,
> then the way to reduce overall programming time would be to divide the 
> 100K flows into smaller chunks and the controller can start work when one
> arrives.

So, for the reason I explained above, there is a trade-off point around 200 flows/REST: below that number, the sheer amount of REST transactions penalizes performance; above it, the latency of each REST transaction (writing all the flow information into the datastore) penalizes performance.
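A minimal sketch of the chunking side of this, assuming a callable that wraps the actual RESTCONF POST (the flow body and endpoint details are assumptions based on ODL conventions, not the real test scripts):

```python
import json
import time

def make_flow(flow_id, table=0):
    # Minimal flow body; the real tests would add matches/actions.
    return {"id": str(flow_id), "table_id": table, "priority": 2}

def chunked(seq, size):
    """Yield successive chunks of `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def program_flows(post, total=100_000, chunk_size=200):
    """POST flows in bulk chunks; return overall wall-clock seconds.
    `post` is any callable taking a JSON string (e.g. a requests.post
    wrapper aimed at the controller's config datastore)."""
    flows = [make_flow(i) for i in range(total)]
    t0 = time.monotonic()
    for chunk in chunked(flows, chunk_size):
        post(json.dumps({"flow": chunk}))  # one REST transaction per chunk
    return time.monotonic() - t0
```

Sweeping `chunk_size` with this kind of harness is how the ~200 flows/REST trade-off point shows up: total time rises again on either side of it.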

> 
> This would cause some of the time intervals to overlap, and possibly
> change in length from the 100K case.  But if dividing 100K into chunks 
> is more realistic from a production network perspective, we should consider
> it here.

For the first paper we only tested two scenarios: single flow/REST and 200 
flows/REST.


_______________________________________________
openflowplugin-dev mailing list
[email protected]
https://lists.opendaylight.org/mailman/listinfo/openflowplugin-dev
