Hello everybody,

I've installed 1G NetFPGA cards working as OpenFlow switches. They are connected to each other, and two hosts are attached to each of them, as in the figure:

HOST1--------OFS1------------OFS2--------HOST3
               |                |
               |                |
             HOST2            HOST4

The controller is Beacon, running with the default bundles. We are trying to characterize the platform by measuring the packet loss rate as a function of the load on the 1 Gbps link between the OpenFlow switches. The tool we are using to measure packet loss is Iperf. We have noticed that the UDP datagram size used with Iperf has a very strong impact on the results. Below are the loss rates Iperf reports for a 10 Mbps UDP flow between HOST1 and HOST2, with 1 Gbps of background traffic using 1470-byte datagrams (500 Mbps from HOST1 to HOST2 and 500 Mbps from HOST3 to HOST4), as a function of the datagram size of the 10 Mbps flow (a sketch of how we run the sweep follows the table):

Datagram size (bytes)    Packet loss rate (%)
100                      0.00043
200                      0.11
250                      0.37
500                      1.8
750                      4
1000                     5
1250                     9.3
1500                     13
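
For reference, the sweep can be scripted roughly like the following minimal Python sketch. The receiver hostname is a placeholder, it assumes an "iperf -s -u" server is already running on the receiving host, and it uses the standard iperf2 flags (-u for UDP, -b for the target bitrate, -l for the datagram size, -t for the duration); adjust to your setup.

import subprocess

RECEIVER = "host2"  # placeholder: run "iperf -s -u" there first
SIZES = [100, 200, 250, 500, 750, 1000, 1250, 1500]  # bytes, as in the table

for size in SIZES:
    # 10 Mbps UDP probe flow; only the datagram size (-l) varies.
    # The 1 Gbps background traffic (two 500 Mbps flows with -b 500M
    # -l 1470) is started separately and kept running for the whole sweep.
    subprocess.run(
        ["iperf", "-c", RECEIVER, "-u", "-b", "10M",
         "-l", str(size), "-t", "30"],
        check=True,
    )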

These are not the results I expected: the differences are very large, and the losses for small datagram sizes are surprisingly low (see the quick packet-rate calculation below). Could this have something to do with the design of the OpenFlow switch? Maybe the packet preprocessor or some other module gives this advantage to small datagrams. I've been looking for documentation, module designs, functions, etc. for the OpenFlow switch implementation on the NetFPGA, but I've only found them for the 10G NetFPGA card, and as far as I know there are many differences between the two. Does anyone have an explanation for these results?
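
For context, here is a quick back-of-the-envelope Python sketch of the packet rates a constant 10 Mbps flow implies at each datagram size (counting UDP payload bytes only, so the true on-the-wire packet rates are slightly lower):

# Packets per second implied by a constant 10 Mbps flow at each
# datagram size from the table above (payload bytes only).
for size in [100, 200, 250, 500, 750, 1000, 1250, 1500]:
    pps = 10e6 / (size * 8)
    print("%5d bytes -> %7.0f packets/s" % (size, pps))

So the 100-byte runs actually push about 12,500 packets/s through the switches, versus roughly 830 packets/s at 1500 bytes, which, at least from a per-packet processing point of view, makes the low loss for small datagrams even more surprising to me.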
Thanks in advance.

Pedro Bastida.

