Yes, we also set that up in some tests to avoid excessive logging.

> On Jul 28, 2016, at 11:43 PM, Shuva Jyoti Kar <[email protected]> 
> wrote:
> 
> Also, it would be great if we could run the test with logging in ERROR mode 
> only.
> 
> -----Original Message-----
> From: Luis Gomez [mailto:[email protected]] 
> Sent: Friday, July 29, 2016 12:11 PM
> To: Shuva Jyoti Kar
> Cc: Sanjib Mohapatra; Abhijit Kumbhare; 
> [email protected]; openflowplugin-dev; Jin Gan; BUCI 
> DUAC R&D INDIA PDG SDNC XFT6
> Subject: Re: [integration-dev] OpenFlow tests for next ODL perf paper
> 
> Marcus has a setup with SSD in Intel Lab that we used for 1st paper and also 
> plan to use for the second. Disabling persistence is something we can do in 
> the ODL infra for example.
> 
> BR/Luis
> 
> 
>> On Jul 28, 2016, at 11:34 PM, Shuva Jyoti Kar <[email protected]> 
>> wrote:
>> 
>> Ahhh.... can we have 2 sets - one with SSD and the other with persistence off?
>> 
>> -----Original Message-----
>> From: Luis Gomez [mailto:[email protected]]
>> Sent: Friday, July 29, 2016 12:04 PM
>> To: Shuva Jyoti Kar
>> Cc: Sanjib Mohapatra; Abhijit Kumbhare; 
>> [email protected]; openflowplugin-dev; Jin Gan; 
>> BUCI DUAC R&D INDIA PDG SDNC XFT6
>> Subject: Re: [integration-dev] OpenFlow tests for next ODL perf paper
>> 
>> Thanks Shuva, we will not use it then.
>> 
>> On the other hand, in general for performance tests we either use a fast disk 
>> (SSD) or disable datastore persistence [1] so that the controller does not 
>> need to write every datastore transaction to the physical HDD.
>> 
>> [1] 
>> https://wiki.opendaylight.org/view/Integration/Distribution/Cluster_Scripts
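A common way to disable persistence, per [1], is a one-line config change. The following is a hedged sketch: the file name and property assume a standard OpenDaylight Karaf distribution and are not quoted from this thread.

```
# etc/org.opendaylight.controller.cluster.datastore.cfg  (assumed location)
# Disables journal/snapshot writes for the clustered datastore, so
# transactions are not persisted to disk.
persistent=false
```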
>> 
>> 
>>> On Jul 28, 2016, at 11:28 PM, Shuva Jyoti Kar 
>>> <[email protected]> wrote:
>>> 
>>> It was only experimental. There is some level of testing (triggering 
>>> scenarios like kernel panic) to be done before actually formalizing it, 
>>> so I would not recommend formal testing with it now.
>>> 
>>> 
>>> Thanks
>>> Shuva
>>> -----Original Message-----
>>> From: Sanjib Mohapatra
>>> Sent: Friday, July 29, 2016 11:42 AM
>>> To: Luis Gomez
>>> Cc: Abhijit Kumbhare; [email protected];
>>> openflowplugin-dev; Jin Gan; BUCI DUAC R&D INDIA PDG SDNC XFT6
>>> Subject: RE: [integration-dev] OpenFlow tests for next ODL perf paper
>>> 
>>> Hi Luis
>>> 
>>> Got a recommendation from the dev team to use fsync = off in akka.conf under 
>>> "odl-cluster-data". It was tested at their end and they found performance 
>>> increased 2x.
>>> 
>>> persistence {
>>>   journal {
>>>     leveldb {
>>>       # Set native = off to use a Java-only implementation of leveldb.
>>>       # Note that the Java-only version is not currently considered by Akka
>>>       # to be production quality.
>>>       fsync = off
>>>     }
>>>   }
>>> }
>>> 
>>> Thanks
>>> Sanjib
>>> -----Original Message-----
>>> From: Luis Gomez [mailto:[email protected]]
>>> Sent: 29 July 2016 02:27
>>> To: Sanjib Mohapatra
>>> Cc: Abhijit Kumbhare; [email protected];
>>> openflowplugin-dev; Jin Gan; BUCI DUAC R&D INDIA PDG SDNC XFT6
>>> Subject: Re: [integration-dev] OpenFlow tests for next ODL perf paper
>>> 
>>> 
>>>> On Jul 28, 2016, at 12:24 PM, Sanjib Mohapatra 
>>>> <[email protected]> wrote:
>>>> 
>>>> Hi Luis
>>>> 
>>>> I would like one clarification: by “100K flows on an OF network of 
>>>> different size: 16-32-64-128 switches”, do you mean the same 100K flows 
>>>> for 16, 32, and 64 switches?
>>> 
>>> The goal is to run the perf test on different OF network sizes so we know 
>>> the impact (if any) of network size.
>>> 
>>>> 
>>>> I have some local scripts to run 150K flows on 15 DPNs, 330K flows on 33 
>>>> DPNs, etc. I run them with the following configuration optimisations in 
>>>> the controller nodes. The DPNs are spread equally across controller nodes.
>>> 
>>> If you increase the number of flows from one network to another, you are 
>>> mixing a scalability test into the performance test, which can confuse the 
>>> results: is the perf degradation due to the number of switches, or due to 
>>> the number of flows that have to be stored and programmed? For this reason 
>>> I would prefer to run the test under the same condition of a fixed 100K 
>>> flows divided among the switches available in the network. Testing this 
>>> way also lets us easily compare one network result to another.
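The fixed-total scheme described above can be sketched in a few lines. This is illustrative only (the division and sizes are from the thread; the helper name is mine):

```python
# Fixed 100K flows divided evenly among the switches of each network size,
# so only the network size varies between test runs.
TOTAL_FLOWS = 100_000

def flows_per_switch(num_switches: int) -> int:
    """Flows each switch programs when the total is held fixed."""
    return TOTAL_FLOWS // num_switches

for n in (16, 32, 64, 128):
    print(f"{n} switches -> {flows_per_switch(n)} flows each")
```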
>>> 
>>>> 
>>>>     I.          fsync = off in akka.conf
>>> 
>>> What is this setting?
>>> 
>>>>   II.          changing root logging to ERROR from INFO
>>>>  III.          Set JAVA_MIN_MEM=512M JAVA_MAX_MEM=8192M in karaf
>>>>  IV.          <skip-table-features>true</skip-table-features> in 42-openflowplugin-He.xml
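For reference, a hedged sketch of where each of these settings typically lives in an ODL Karaf distribution. The file paths are assumptions based on the standard Karaf/ODL layout of that era, not quoted from this thread:

```
# I.   fsync = off                -> configuration/initial/akka.conf
#                                    (persistence.journal.leveldb section)
# II.  root logging ERROR         -> etc/org.ops4j.pax.logging.cfg
#                                    (log4j.rootLogger=ERROR, osgi:*)
# III. JAVA_MIN_MEM / JAVA_MAX_MEM -> exported in the shell, or set in bin/setenv
# IV.  <skip-table-features>      -> etc/opendaylight/karaf/42-openflowplugin-He.xml
```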
>>>> 
>>>> I can rerun those scripts, tweaking them to the current requirement. Do 
>>>> let me know if any other configuration optimisation is required. Also, 
>>>> could you please provide the path to the Boron distribution? I ran the 
>>>> scripts below on stable Beryllium.
>>> 
>>> Master distribution is in nexus: 
>>> https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/distribution-karaf/0.5.0-SNAPSHOT/
>>> 
>>> Does your test automation provide all the perf numbers required in this TC: 
>>> controller programming time, switch programming time, and flow confirmation 
>>> delay? If not, that is the next thing to work on.
>>> 
>>>> 
>>>> 
>>>> root@mininet-vm:/home/mininet/integration/test/csit/suites/openflowplugin/Clustering_Bulkomatic# pybot -L TRACE -v MININET_USER:mininet -v USER_HOME:/home/mininet -v CONTROLLER:10.183.181.51 -v CONTROLLER1:10.183.181.52 -v CONTROLLER2:10.183.181.53 -v USER:root -v PASSWORD:rootroot -v WORKSPACE:/home/mininet -v BUNDLEFOLDER:/controller-Be/deploy/current/odl -v DEFAULT_LINUX_PROMPT:\# -v NUM_ODL_SYSTEM:3 -v MININET_PASSWORD:rootroot -v OVS_SWITCH_FILE:Multi_Switch_Medium_Config.py 150K__Cluster_Reconcilliation_Multi_DPN.robot
>>>> ==============================================================================
>>>> Cluster Reconcilliation Multi DPN :: Test suite for Cluster with Bulk Flows...
>>>> ==============================================================================
>>>> Check Shards Status And Initialize Variables :: Check Status for a... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Get Inventory Follower Before Cluster Restart :: Find a follower i... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Start Mininet Connect To Follower Node1 :: Start mininet with conn... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Add Bulk Flow From Follower :: 10000 Flows (10K per DPN) in 15 DPN... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Verify Flows In Switch :: Verify 150K flows are installed.            | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Stop Mininet Connected To Follower Node1 and Exit :: Stop mininet ... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Delete All Flows From Follower Node1 :: 150000 Flows deleted via F... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Cluster Reconcilliation Multi DPN :: Test suite for Cluster with B... | PASS |
>>>> 7 critical tests, 7 passed, 0 failed
>>>> 7 tests total, 7 passed, 0 failed
>>>> ==============================================================================
>>>> 
>>>> 
>>>> root@mininet-vm:/home/mininet/integration/test/csit/suites/openflowplugin/Clustering_Bulkomatic# pybot -L TRACE -v MININET_USER:mininet -v USER_HOME:/home/mininet -v CONTROLLER:10.183.181.51 -v CONTROLLER1:10.183.181.52 -v CONTROLLER2:10.183.181.53 -v USER:root -v PASSWORD:rootroot -v WORKSPACE:/home/mininet -v BUNDLEFOLDER:/controller-Be/deploy/current/odl -v DEFAULT_LINUX_PROMPT:\# -v NUM_ODL_SYSTEM:3 -v MININET_PASSWORD:rootroot -v OVS_SWITCH_FILE:Multi_11Switch_Medium_Config.py 330K__Cluster_Reconcilliation_Multi_DPN.robot
>>>> ==============================================================================
>>>> Cluster Reconcilliation Multi DPN :: Test suite for Cluster with Bulk Flows...
>>>> ==============================================================================
>>>> Check Shards Status And Initialize Variables :: Check Status for a... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Get Inventory Follower Before Cluster Restart :: Find a follower i... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Start Mininet Connect To Follower Node1 :: Start mininet with conn... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Add Bulk Flow From Follower :: 10000 Flows (10K per DPN) in 33 DPN... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Verify Flows In Switch :: Verify 330K flows are installed.            | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Stop Mininet Connected To Follower Node1 and Exit :: Stop mininet ... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Delete All Flows From Follower Node1 :: 330000 Flows deleted via F... | PASS |
>>>> ------------------------------------------------------------------------------
>>>> Cluster Reconcilliation Multi DPN :: Test suite for Cluster with B... | PASS |
>>>> 7 critical tests, 7 passed, 0 failed
>>>> 7 tests total, 7 passed, 0 failed
>>>> ==============================================================================
>>>> Output:  /home/mininet/integration/test/csit/suites/openflowplugin/Clustering_Bulkomatic/output.xml
>>>> Log:     /home/mininet/integration/test/csit/suites/openflowplugin/Clustering_Bulkomatic/log.html
>>>> Report:  /home/mininet/integration/test/csit/suites/openflowplugin/Clustering_Bulkomatic/report.html
>>>> root@mininet-vm:/home/mininet/integration/test/csit/suites/openflowplugin/Clustering_Bulkomatic#
>>>> 
>>>> Thanks
>>>> Sanjib
>>>> 
>>>> From: Luis Gomez [mailto:[email protected]]
>>>> Sent: 28 July 2016 05:17
>>>> To: Abhijit Kumbhare
>>>> Cc: [email protected]; openflowplugin-dev; Jin 
>>>> Gan; Sanjib Mohapatra
>>>> Subject: Re: [integration-dev] OpenFlow tests for next ODL perf 
>>>> paper
>>>> 
>>>> Hi Abhijit,
>>>> 
>>>> We can definitely leverage any automation your team prepares to test all 
>>>> the combinations below, but just to be clear, the ODL perf paper will only 
>>>> include the out-of-the-box Boron release.
>>>> 
>>>> BR/Luis
>>>> 
>>>> On Jul 27, 2016, at 4:23 PM, Abhijit Kumbhare <[email protected]> 
>>>> wrote:
>>>> 
>>>> Hi Luis,
>>>> 
>>>> I was discussing with our OpenFlow team (Manohar/Muthu/Shuva/Vinayak) 
>>>> today about the tests - and one of the things which would be very 
>>>> interesting would be the bulk-o-matic with small/medium/large configs 
>>>> (same as your test 2) and the following combinations:
>>>> 
>>>> 1. Lithium design + the old FRM
>>>> 2. Lithium design + the new FRM => are there improvements?
>>>> 3. He design => bulk-o-matic may not have been used for the He/Li
>>>> 
>>>> They were also planning to discuss this with Sanjib during their daytime 
>>>> today to check whether it has already been done.
>>>> 
>>>> Thanks,
>>>> Abhijit
>>>> 
>>>> On Wed, Jul 27, 2016 at 3:13 PM, Luis Gomez <[email protected]> wrote:
>>>> Hi all,
>>>> 
>>>> I got the action point from last S3P call to start some discussion around 
>>>> OpenFlow tests we can do for next ODL perf paper (Boron).
>>>> 
>>>> I am not sure we will have time for all of the below, but ideally I was 
>>>> thinking of 4 tests:
>>>> 
>>>> 1) REST programming performance:
>>>> 
>>>> - Goal: Measure OF programming rate using NB REST interface
>>>> - Methodology: Use test REST scripts (flows in datastore) to program 100K 
>>>> flows on an OF network of different size: 16-32-64-128 switches, etc...
>>>> - Test variations: Use single flow/REST request and multiple flows/REST 
>>>> (bulk) request.
>>>> - Collected data: controller programming time (from first to last flow) 
>>>> from REST script, switch programming time (from first to last flow) 
>>>> polling the OVS, flow confirmation delay (time after T1) polling the 
>>>> operational DS.
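The REST programming loop above can be sketched roughly as follows. This is a hedged illustration: the RESTCONF path and payload shape assume the Beryllium/Boron-era openflowplugin inventory model, and the helper names are mine, not from the test scripts in this thread.

```python
import json

def flow_url(base: str, node: str, table: int, flow_id: int) -> str:
    """RESTCONF config-datastore URL for one flow (assumed ODL inventory model)."""
    return (f"{base}/restconf/config/opendaylight-inventory:nodes"
            f"/node/{node}/table/{table}/flow/{flow_id}")

def flow_body(flow_id: int, table: int) -> str:
    """Minimal flow payload; real tests would fill in a match and actions."""
    return json.dumps({"flow": [{"id": str(flow_id), "table_id": table,
                                 "priority": 2, "match": {},
                                 "instructions": {"instruction": []}}]})

# With the requests library, each flow would be programmed roughly as:
#   requests.put(flow_url("http://ctrl:8181", "openflow:1", 0, i),
#                data=flow_body(i, 0), auth=("admin", "admin"),
#                headers={"Content-Type": "application/json"})
```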
>>>> 
>>>> 2) Java programming performance:
>>>> 
>>>> - Goal: Measure OF programming rate using internal Java interface
>>>> - Methodology: Use the bulk-o-matic application (flows in datastore or RPC) to 
>>>> program 100K flows on an OF network of different size: 16-32-64-128 
>>>> switches, etc...
>>>> - Test variations: Use single flow/REST request and multiple flows/REST 
>>>> (bulk) request.
>>>> - Collected data: controller programming time (from first to last flow) 
>>>> from bulk-o-matic, switch programming time (from first to last flow) 
>>>> polling the OVS, flow confirmation delay (time after T1) polling the 
>>>> operational DS.
>>>> 
>>>> 3) Network message processing latency:
>>>> 
>>>> - Goal: Measure OF packet message processing time
>>>> - Methodology: Use some OF public tool (Cbench, MT-Cbench, SDN-blaster) to 
>>>> generate OF packets and measure the delay (latency mode) of the received 
>>>> controller flows on an OF network of different size: 16-32-64-128 
>>>> switches, etc...
>>>> - Test variations: Use controller drop-test application in DS and RPC mode.
>>>> - Collected data: packet processing rate (latency=1/rate)
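As a sanity check on the latency=1/rate conversion above (an illustrative sketch with made-up numbers, not measured data):

```python
# Converting a measured packet-processing rate into an average per-packet
# latency, as in "latency = 1/rate".
def latency_us(rate_per_sec: float) -> float:
    """Average processing latency in microseconds at a given rate."""
    return 1_000_000 / rate_per_sec

# e.g. a drop-test responding at 50K packets/s averages 20 us per packet
print(latency_us(50_000))
```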
>>>> 
>>>> 4) Topology scalability:
>>>> 
>>>> - Goal: Scale OF network and measure learning time.
>>>> - Methodology: Use some OF public tool (Mininet, Multinet) to generate 
>>>> different sizes of large topologies: 1000, 2000, 3000 switches, etc...
>>>> - Collected data: Time for the controller to learn about the topology.
>>>> 
>>>> In addition, the same tests (or a subset) should run in a cluster 
>>>> environment (3-node cluster).
>>>> 
>>>> The main problem we have today in running and automating the above is 
>>>> people resources; so far Jin and Sanjib have offered to help, but more 
>>>> help would be appreciated.
>>>> 
>>>> BR/Luis
>>>> 
>>>> _______________________________________________
>>>> integration-dev mailing list
>>>> [email protected]
>>>> https://lists.opendaylight.org/mailman/listinfo/integration-dev
>>> 
>> 
> 

_______________________________________________
openflowplugin-dev mailing list
[email protected]
https://lists.opendaylight.org/mailman/listinfo/openflowplugin-dev