I am currently working with Ryu's switch tester.py and I am getting results I 
don't quite understand, or I have uncovered a bug in tester.py.  When running 
the full of13 test suite, many tests fail which I think should pass.  Most of 
the tests that feature a "goto_table:1" instruction fail.

For a baseline, I set up two OVS switches (version 2.4.90) connected through 
veth ports.  I then start tester.py and get a total test result of 
OK(594) / ERROR(397).

One of the failures appears as follows:

action: 23_SET_NW_TTL (IPv4)
    
ethernet/ipv4(ttl=64)/tcp-->'eth_type=0x0800,actions=set_nw_ttl:32,output:2'    
                     OK
    
ethernet/vlan/ipv4(ttl=64)/tcp-->'eth_type=0x0800,actions=set_nw_ttl:32,output:2'
                    OK
    
ethernet/mpls/ipv4(ttl=64)/tcp-->'actions=pop_mpls:0x0800,goto_table:1','table_id:1,eth_type=0x0800,actions=set_nw_ttl:32,output:2'
 ERROR
        Receiving timeout: no change in tx_packets on target.
    
ethernet/itag/ethernet/ipv4(ttl=64)/tcp-->'actions=pop_pbb,goto_table:1','table_id:1,eth_type=0x0800,actions=set_nw_ttl:32,output:2'
 ERROR
        Failed to add flows: OFPErrorMsg[type=0x02, code=0x00]

The fourth test always fails, but as far as I know OVS is not supposed to 
support MAC-in-MAC (PBB) networks, so that failure is expected.
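That reading is consistent with the error itself: in the OpenFlow 1.3 error 
tables, type 0x02 is OFPET_BAD_ACTION and code 0x00 under that type is 
OFPBAC_BAD_TYPE, i.e. the switch rejected the pop_pbb action outright.  A 
small sketch of the decoding (the table below covers only a handful of 
(type, code) pairs, not the full spec):

```python
# Minimal decoder for a few OpenFlow 1.3 error (type, code) pairs.
# Only a small subset of the spec's error tables is included here.
OF13_ERRORS = {
    (0x01, 0x00): ("OFPET_BAD_REQUEST", "OFPBRC_BAD_VERSION"),
    (0x02, 0x00): ("OFPET_BAD_ACTION", "OFPBAC_BAD_TYPE"),
    (0x03, 0x00): ("OFPET_BAD_INSTRUCTION", "OFPBIC_UNKNOWN_INST"),
    (0x04, 0x00): ("OFPET_BAD_MATCH", "OFPBMC_BAD_TYPE"),
}

def decode_ofp_error(err_type, err_code):
    """Map an OFPErrorMsg (type, code) pair to symbolic names."""
    return OF13_ERRORS.get((err_type, err_code), ("unknown", "unknown"))

print(decode_ofp_error(0x02, 0x00))
# ('OFPET_BAD_ACTION', 'OFPBAC_BAD_TYPE'): the action type itself was
# rejected, i.e. pop_pbb is not supported by this switch.
```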

The third test shouldn't fail but it does.  If I remove all the other tests 
and run ONLY this one test file, it passes:

action: 23_SET_NW_TTL (IPv4)
    
ethernet/ipv4(ttl=64)/tcp-->'eth_type=0x0800,actions=set_nw_ttl:32,output:2'    
                     OK
    
ethernet/vlan/ipv4(ttl=64)/tcp-->'eth_type=0x0800,actions=set_nw_ttl:32,output:2'
                    OK
    
ethernet/mpls/ipv4(ttl=64)/tcp-->'actions=pop_mpls:0x0800,goto_table:1','table_id:1,eth_type=0x0800,actions=set_nw_ttl:32,output:2'
 OK
    
ethernet/itag/ethernet/ipv4(ttl=64)/tcp-->'actions=pop_pbb,goto_table:1','table_id:1,eth_type=0x0800,actions=set_nw_ttl:32,output:2'
 ERROR
        Failed to add flows: OFPErrorMsg[type=0x02, code=0x00]

So it appears that tester.py does not clean up sufficiently after each test 
file is run.  Is it correct to interpret the data this way?
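One way to check the cleanup hypothesis directly would be to dump the target 
switch's flow tables between test files and look for leftovers from the 
previous file ("br-target" below is a placeholder for whatever the target 
bridge is actually called in your setup):

```shell
# Dump all flows on the target bridge, then just table 1, between
# test files; any flows surviving from the previous file point to
# incomplete cleanup in tester.py.
ovs-ofctl -O OpenFlow13 dump-flows br-target
ovs-ofctl -O OpenFlow13 dump-flows br-target table=1
```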

As an experiment, I added a time.sleep(2) in the test loop as follows:

        self.logger.info('--- Test start ---')
        test_keys = tests.keys()
        test_keys.sort()
        for file_name in test_keys:
            time.sleep(2)  # added 2-second delay between test files
            report = self._test_file_execute(tests[file_name])
            for result, descriptions in report.items():
                test_report.setdefault(result, [])
                test_report[result].extend(descriptions)
        self._test_end(msg='---  Test end  ---', report=test_report)

With the delay in place, the full run now yields OK(621) / ERROR(370), i.e. 
about 27 more tests pass than before.  So some tests appear to be affected by 
a race condition in the setup.
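If it really is a race, a fixed sleep is fragile: too short and tests still 
fail, too long and the suite crawls.  Polling for the expected condition with 
a timeout is more robust.  A generic sketch of the pattern (wait_until is my 
own helper, not part of tester.py):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.1):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True if the predicate became true in time, False otherwise.
    Generic helper, not part of tester.py.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Instead of an unconditional time.sleep(2) before each test file, one
# could wait for whatever condition the previous file's teardown should
# have established (e.g. the target's flow tables being empty again).
ready = wait_until(lambda: True)  # placeholder predicate
```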

Can someone familiar with tester.py advise me as to a next step to take?



----------------------------
Alan Deikman
ZNYX Networks, Inc.



------------------------------------------------------------------------------
_______________________________________________
Ryu-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/ryu-devel