Hi Abhijit, 

We can definitely leverage any automation your team prepares to test all the 
combinations below, but just to be clear, the ODL perf paper will only cover 
the out-of-the-box Boron release.

BR/Luis 

> On Jul 27, 2016, at 4:23 PM, Abhijit Kumbhare <[email protected]> wrote:
> 
> Hi Luis,
> 
> I was discussing the tests with our OpenFlow team (Manohar/Muthu/Shuva/Vinayak) 
> today, and one thing that would be very interesting is running bulk-o-matic 
> with small/medium/large configs (same as your test 2) in the following 
> combinations:
> 
> 1. Lithium design + the old FRM
> 2. Lithium design + the new FRM => are there improvements?
> 3. He design => bulk-o-matic may not have been used for the He/Li
> 
> They were also planning to discuss with Sanjib during their daytime today 
> whether this has already been done.
> 
> Thanks,
> Abhijit
> 
> On Wed, Jul 27, 2016 at 3:13 PM, Luis Gomez <[email protected]> wrote:
> Hi all,
> 
> I got the action point from the last S3P call to start some discussion around 
> the OpenFlow tests we can do for the next ODL perf paper (Boron).
> 
> I am not sure we will have time for all of the below, but ideally I was 
> thinking of 4 tests:
> 
> 1) REST programming performance:
> 
> - Goal: Measure OF programming rate using NB REST interface
> - Methodology: Use test REST scripts (flows in datastore) to program 100K 
> flows on OF networks of different sizes: 16-32-64-128 switches, etc. (see the 
> sketch after this list)
> - Test variations: Use a single flow per REST request and multiple flows per 
> REST request (bulk).
> - Collected data: controller programming time (from first to last flow) 
> measured from the REST script, switch programming time (from first to last 
> flow) measured by polling the OVS, flow confirmation delay (time after T1) 
> measured by polling the operational DS.
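> 
> Just to illustrate the single flow per REST request variation, here is a 
> minimal sketch of a test script (Python + requests). The controller address, 
> the admin/admin credentials and the simplified drop-flow body are assumptions; 
> a real payload has to follow the flow-node-inventory model:
> 
>   import time
>   import requests
> 
>   CTRL = "http://127.0.0.1:8181"   # assumed controller address
>   AUTH = ("admin", "admin")        # default ODL credentials
> 
>   # Simplified drop flow; a real body follows the flow-node-inventory model.
>   def flow_body(i):
>       return {"flow-node-inventory:flow": [{
>           "id": str(i), "table_id": 0, "priority": 100,
>           "match": {"ethernet-match": {"ethernet-type": {"type": 2048}},
>                     "ipv4-destination": "10.%d.%d.%d/32"
>                                         % (i >> 16 & 255, i >> 8 & 255, i & 255)},
>           "instructions": {"instruction": [{"order": 0, "apply-actions":
>               {"action": [{"order": 0, "drop-action": {}}]}}]}}]}
> 
>   def program_flows(num_flows, num_switches):
>       """PUT one flow per REST request, round-robin over the switches,
>       and return the controller programming time T1 in seconds."""
>       start = time.time()
>       for i in range(num_flows):
>           node = "openflow:%d" % (i % num_switches + 1)
>           url = ("%s/restconf/config/opendaylight-inventory:nodes/node/%s/"
>                  "flow-node-inventory:table/0/flow/%d" % (CTRL, node, i))
>           requests.put(url, json=flow_body(i), auth=AUTH).raise_for_status()
>       return time.time() - start
> 
>   print("T1 (REST programming time): %.1f s" % program_flows(100000, 16))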
> 
> 2) Java programming performance:
> 
> - Goal: Measure OF programming rate using the internal Java interface
> - Methodology: Use the bulk-o-matic application (flows in datastore or RPC) to 
> program 100K flows on OF networks of different sizes: 16-32-64-128 switches, 
> etc.
> - Test variations: Use a single flow per request and multiple flows per 
> request (bulk).
> - Collected data: controller programming time (from first to last flow) 
> measured by bulk-o-matic, switch programming time (from first to last flow) 
> measured by polling the OVS, flow confirmation delay (time after T1) measured 
> by polling the operational DS (see the sketch below).
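> 
> Independently of how the flows are pushed (REST or bulk-o-matic), the flow 
> confirmation delay can be measured by polling the operational DS. A minimal 
> sketch, assuming the same controller address and credentials as above:
> 
>   import time
>   import requests
> 
>   CTRL = "http://127.0.0.1:8181"   # assumed controller address
>   AUTH = ("admin", "admin")        # default ODL credentials
> 
>   def operational_flow_count(num_switches, table=0):
>       """Count the flows reported in the operational DS across all switches."""
>       total = 0
>       for sw in range(1, num_switches + 1):
>           url = ("%s/restconf/operational/opendaylight-inventory:nodes/node/"
>                  "openflow:%d/flow-node-inventory:table/%d" % (CTRL, sw, table))
>           resp = requests.get(url, auth=AUTH)
>           if resp.status_code == 200:
>               for t in resp.json().get("flow-node-inventory:table", []):
>                   total += len(t.get("flow", []))
>       return total
> 
>   def confirmation_delay(expected, num_switches, poll_interval=5):
>       """Call right after T1 and return the seconds until 'expected' flows
>       show up in the operational DS."""
>       start = time.time()
>       while operational_flow_count(num_switches) < expected:
>           time.sleep(poll_interval)
>       return time.time() - start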
> 
> 3) Network message processing latency:
> 
> - Goal: Measure OF packet message processing time
> - Methodology: Use a public OF tool (Cbench, MT-Cbench, SDN-blaster) to 
> generate OF packets and measure the delay (latency mode) of the flows received 
> from the controller on OF networks of different sizes: 16-32-64-128 switches, 
> etc.
> - Test variations: Use the controller drop-test application in DS and RPC 
> modes.
> - Collected data: packet processing rate (latency = 1/rate; see the sketch 
> below)
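> 
> A minimal sketch of the latency measurement, wrapping Cbench in its default 
> latency mode. The flag names and the min/max/avg/stdev result line are taken 
> from the stock cbench tool and should be verified against the build actually 
> used:
> 
>   import re
>   import subprocess
> 
>   def packet_latency(controller_ip, switches=16, loops=10, ms_per_test=1000):
>       """Run cbench (latency mode is its default) and derive latency = 1/rate."""
>       cmd = ["cbench", "-c", controller_ip, "-p", "6633",
>              "-s", str(switches), "-l", str(loops), "-m", str(ms_per_test)]
>       out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
>       m = re.search(r"min/max/avg/stdev = [\d.]+/[\d.]+/([\d.]+)/[\d.]+", out)
>       if not m:
>           raise RuntimeError("could not parse cbench output:\n" + out)
>       rate = float(m.group(1))     # average responses/s
>       return 1.0 / rate            # seconds per processed message
> 
>   print("avg processing latency: %.3f ms" % (packet_latency("127.0.0.1") * 1000))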
> 
> 4) Topology scalability:
> 
> - Goal: Scale the OF network and measure the topology learning time.
> - Methodology: Use a public OF tool (Mininet, Multinet) to generate large 
> topologies of different sizes: 1000, 2000, 3000 switches, etc.
> - Collected data: Time for the controller to learn the full topology (see the 
> sketch below).
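> 
> A minimal sketch of the learning-time measurement, polling the operational 
> topology (assuming the flow:1 topology id and the same controller address and 
> credentials as in the other sketches); start Mininet/Multinet first and then 
> call the function:
> 
>   import time
>   import requests
> 
>   CTRL = "http://127.0.0.1:8181"   # assumed controller address
>   AUTH = ("admin", "admin")        # default ODL credentials
>   TOPO = (CTRL + "/restconf/operational/network-topology:network-topology"
>                  "/topology/flow:1")
> 
>   def learned_switches():
>       """Number of nodes currently present in the operational OF topology."""
>       resp = requests.get(TOPO, auth=AUTH)
>       if resp.status_code != 200:
>           return 0
>       return len(resp.json().get("topology", [{}])[0].get("node", []))
> 
>   def topology_learning_time(expected, poll_interval=5, timeout=3600):
>       """Return the seconds until the controller has learned 'expected' switches."""
>       start = time.time()
>       while learned_switches() < expected:
>           if time.time() - start > timeout:
>               raise RuntimeError("topology not fully learned within timeout")
>           time.sleep(poll_interval)
>       return time.time() - start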
> 
> In addition, the same tests (or a subset) should also run in a cluster 
> environment (3-node cluster).
> 
> The main problem we have today for running and automating the above is people 
> resources; so far Jin and Sanjib have offered to help, but more help would be 
> appreciated.
> 
> BR/Luis
> 

