Re: bhyve with vlans - host and vm can't pass traffic
On 4/22/2015 8:34 PM, Scott O'Connell wrote: I tried your suggestions. I was successful in changing the vmhost01 bridge to include vlan100 and tap0, and in the VM (dev) binding the address directly to vtnet0. On the VMHOST:

tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<LINKSTATE>
        ether 00:bd:4c:d1:02:00
        media: Ethernet autoselect
        status: active
        Opened by PID 888
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 02:d3:e4:02:03:00
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 6 priority 128 path cost 200
        member: vlan100 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 5 priority 128 path cost 200

In the VM:

vtnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=80028<VLAN_MTU,JUMBO_MTU,LINKSTATE>
        ether 00:a0:98:2b:34:37
        inet 10.0.1.6 netmask 0xffffff00 broadcast 10.0.1.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=63<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

The same results with regard to connectivity. Both the VMHOST and the VM can communicate everywhere, except with each other. I'm not sure how much detail to post, or what protocol I should be testing from the tcpdump, but here are a couple of relevant portions.
Captured on the VMHOST with tcpdump -i tap0 -n -vv, and on the VM with tcpdump -i vtnet0 -n -vv. A ping from the VM (10.0.1.6) to the VMHOST (10.0.1.17):

Captured on tap0:
18:18:40.656407 IP (tos 0x0, ttl 64, id 2398, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.6 > 10.0.1.17: ICMP echo request, id 46082, seq 689, length 64
18:18:40.656429 IP (tos 0x0, ttl 64, id 3824, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->55a3)!)
    10.0.1.17 > 10.0.1.6: ICMP echo reply, id 46082, seq 689, length 64

Captured on vtnet0:
18:18:40.906203 IP (tos 0x0, ttl 64, id 2398, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.6 > 10.0.1.17: ICMP echo request, id 46082, seq 689, length 64
18:18:40.906366 IP (tos 0x0, ttl 64, id 3824, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->55a3)!)
    10.0.1.17 > 10.0.1.6: ICMP echo reply, id 46082, seq 689, length 64

100% packet loss on the ping. Here is the same traffic from both systems between the VM (10.0.1.6) and the switch (10.0.1.1) through the VMHOST:

Captured on tap0:
18:23:42.712065 IP (tos 0x0, ttl 64, id 2858, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.6 > 10.0.1.1: ICMP echo request, id 58626, seq 2, length 64
18:23:42.712595 IP (tos 0x0, ttl 255, id 2858, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.1 > 10.0.1.6: ICMP echo reply, id 58626, seq 2, length 64

Captured on vtnet0:
18:23:43.141890 IP (tos 0x0, ttl 64, id 2858, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.6 > 10.0.1.1: ICMP echo request, id 58626, seq 2, length 64
18:23:43.142553 IP (tos 0x0, ttl 255, id 2858, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.1 > 10.0.1.6: ICMP echo reply, id 58626, seq 2, length 64

100% packet success on the ping. I'm never quite sure when bad checksums in tcpdump or Wireshark are expected and when they aren't, but it appears that is where the problem lies here.
With that said, if I'm understanding this correctly, and checksums are the problem, I'm not sure what to try next. Thanks again! Hi Scott, It certainly appears that ICMP echo reply packets are being returned but the host isn't processing them for some reason. Do you have any firewalls running on either system? You might try including a -e in the tcpdump command line arguments. IIRC, that will also show you VLAN and MAC address info from the packet headers. Maybe one of the network kernel developers could provide some additional insight as to what may be happening in this scenario. -Matthew ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to freebsd-net-unsubscr...@freebsd.org
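[Editor's sketch, not from the thread.] The "bad cksum" on the locally generated echo reply is typical when the stack defers checksumming to hardware offload; on a tap/bridge path nothing ever fills the checksum in, so the peer may discard the packet. Assuming that is the cause here, disabling TX/RX checksum offload on the interfaces involved (names taken from the thread; the flags are no-ops on interfaces that don't support them) is a low-risk experiment:

```shell
# On the VMHOST: turn off checksum offload on both bridge members
ifconfig tap0 -txcsum -rxcsum
ifconfig vlan100 -txcsum -rxcsum
# Inside the VM: likewise on the virtio NIC
ifconfig vtnet0 -txcsum -rxcsum
```

If host-to-VM pings start succeeding afterwards, the offload/bridge interaction was the culprit.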
[Bug 194855] VLAN interface MTU does not change on changing the base interface MTU
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194855 Gleb Smirnoff gleb...@freebsd.org changed:

           What    |Removed                 |Added
           CC      |                        |gleb...@freebsd.org
           Assignee|freebsd-net@FreeBSD.org |gleb...@freebsd.org

-- You are receiving this mail because: You are the assignee for the bug.
Re: Testing Congestion Control Algorithms
This is veering somewhat off tangent for the freebsd-net list, but... On 04/23/2015 21:15, Karlis Laivins wrote: Hello once again, Before I dive in the TEACUP, I wanted to clarify this - should I build the testbed to consist of FreeBSD machines, will I be able to use congestion control module (.ko) that was created by modifying the cc_newreno (written in C) in TEACUP, or will I have to rewrite it in Python? Short answer: TEACUP doesn't implement CC algorithms per se. It focuses on controlling all the data generation capture tools, end hosts and bottleneck to start/log/stop TCP performance tests using whatever CC algorithm you select. You will need to copy-paste-edit about 5 lines of TEACUP's python code so TEACUP will recognise and kldload your new module on your FreeBSD end hosts. For the long answer -- ping me offlist. Sorry, if this question seems silly, but I have limited time to do the tests and I want to be sure that I don't have to redo something in a language that I haven't used yet. TEACUP takes a bit of time to set up the end hosts and bottleneck router with the right tools. But once you have it running you'll be able to iterate across a range of TCP and path parameters quite efficiently. cheers, gja Thank you in advance for your answer! With Best Regards, Karlis On Thu, Apr 23, 2015 at 12:45 PM, grenville armitage garmit...@swin.edu.au wrote: On 04/23/2015 17:17, Karlis Laivins wrote: Hi, I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). 
Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. Thank you in advance for your assistance. Lars is right, the ns-2 tangent is starting to diverge from freebsd-net@ Indeed, I would suggest you don't bother with ns-2 -- it wont help you do meaningful comparisons to a kernel-resident cc module you develop under FreeBSD. If you have the time and inclination to build a small testbed using a couple of physical hosts, you might find this tool useful -- http://caia.swin.edu.au/tools/teacup My colleague and I built TEACUP (TCP Experiment Automation Controlled Using Python) to automate many aspects of running TCP performance experiments in our small, specially-constructed physical testbed. TEACUP enables repeatable testing of different TCP algorithms over a range of emulated network path conditions, bottleneck rate limits and bottleneck queuing disciplines. (e.g. I've used it to experiment with custom FreeBSD CC modules vs conventional FreeBSD and Linux CC algorithms.) A key caveat: TEACUP assumes your physical testbed is a multi-host/single-bottleneck dumbbell-like topology with suitably configured end hosts and Linux-based bottleneck router (see http://caia.swin.edu.au/reports/150210C/CAIA-TR-150210C.pdf for an example). TEACUP does not try to run experiments over arbitrary network paths or the wider Internet. This has satisfied our use-cases, other people's mileage may vary :-) We've released TEACUP in case it may be useful to other researchers who already have (or are interested in setting up) similar network testbeds. 
(Small note -- we recently found a small bug in some of the v0.9 data analysis code, which will be fixed when v0.9.2 comes out RSN.) cheers, gja -- Professor Grenville Armitage Centre for Advanced Internet Architectures School of Software and Electrical Engineering Faculty of Science, Engineering and Technology Swinburne University of Technology, Australia http://caia.swin.edu.au/cv/garmitage
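[Editor's sketch.] To make gja's "copy-paste-edit about 5 lines of TEACUP's python code" concrete, here is a purely hypothetical Python sketch (all names invented; TEACUP's real variables and functions differ) of what such a registration amounts to: telling the controller which kernel module to kldload on FreeBSD end hosts and which sysctl value selects the algorithm:

```python
# Hypothetical sketch of registering a custom CC module with a TEACUP-like
# controller. Only the FreeBSD command strings (kldload, the
# net.inet.tcp.cc.algorithm sysctl) are real; everything else is invented.
CC_ALGOS = {
    # algorithm name -> FreeBSD kernel module to load (None = built in)
    'newreno': None,
    'cubic': 'cc_cubic',
}

def host_init_cmds(algo, algos=CC_ALGOS):
    """Return the commands a controller would run on a FreeBSD host for algo."""
    if algo not in algos:
        raise ValueError('unknown CC algorithm: %s' % algo)
    cmds = []
    if algos[algo] is not None:
        cmds.append('kldload %s' % algos[algo])
    cmds.append('sysctl net.inet.tcp.cc.algorithm=%s' % algo)
    return cmds

# Registering the modified NewReno from this thread is then a one-line edit:
CC_ALGOS['mynewreno'] = 'cc_mynewreno'
print(host_init_cmds('mynewreno'))
```

The point being: the CC algorithm itself stays in C as a .ko; only the bookkeeping that loads and selects it lives in Python.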
IPSec Performance under Xen
Hello, I have set up 2 VMs under Xen, each running one IPsec endpoint. Everything seems to work fine, but (measured with benchmarks/iperf) the performance drops from ~10 Gb/s on a non-IPsec kernel to ~200 Mb/s with IPsec compiled in, regardless of whether IPsec is actually in use or not. I have read about the reasoning why IPSEC isn't enabled in GENERIC, but wanted to ask if this is the kind of performance hit one has to expect. I have observed this on FreeBSD 10.1 and 10-STABLE, both amd64. The hypervisor is running Xen 4.4 with a Linux 3.16 Dom0.
Re: IPSec Performance under Xen
On 24.04.2015 01:00, Sydney Meyer wrote: Hello, I have set up 2 VM's under Xen running each one IPSec-Endpoint. Everything seems to work fine, but (measured with benchmarks/iperf) the performance drops from ~10 Gb/s on a non-IPSec-Kernel to ~200 Mb/s with IPSec compiled in, regardless of whether actually using IPSec or not. I have read about reasoning why IPSec isn't enabled in GENERIC, but wanted to ask if this is the kind of performance hit one has to expect. Hi, I have a guess. Since you use iperf, I think the main bottleneck is the fact that the socket has a PCB. When IPSEC is compiled into the kernel, it enables the code that initializes the PCB's security policy inp_sp via ipsec_init_policy(). Then, on output, each packet that has an associated PCB (iperf uses sockets, so it has a PCB) goes through a bunch of checks, including several lookups that take exclusive locks. Even if you don't use any security policies, ALL packets that have an associated PCB will go through these unneeded checks. I am not very familiar with this code, but maybe George or Robert can answer why we do this for every PCB. Why not initialize inp_sp only when the application actually configures a policy via setsockopt(IP_IPSEC_POLICY)? -- WBR, Andrey V. Elsukov
Re: IPSec Performance under Xen
On 24.04.2015 01:00, Sydney Meyer wrote: Hello, I have set up 2 VM's under Xen running each one IPSec-Endpoint. Everything seems to work fine, but (measured with benchmarks/iperf) the performance drops from ~10 Gb/s on a non-IPSec-Kernel to ~200 Mb/s with IPSec compiled in, regardless of whether actually using IPSec or not. Can you test this patch to see the difference? It isn't a fix; it is just to see how much avoiding the PCB check helps.

--- ip_output.c	(revision 281867)
+++ ip_output.c	(working copy)
@@ -482,7 +482,7 @@ again:
 sendit:
 #ifdef IPSEC
-	switch(ip_ipsec_output(m, inp, &flags, &error)) {
+	switch(ip_ipsec_output(m, NULL, &flags, &error)) {
 	case 1:
 		goto bad;
 	case -1:

-- WBR, Andrey V. Elsukov
Re: IPSec Performance under Xen
Hello Andrey, first off, thank you for your explanation. As for your hint, I am not a C programmer, but I think I have a better understanding of the issue now. I believe this is a known issue and the reason why IPSEC isn't in GENERIC, afaik from this discussion (https://lists.freebsd.org/pipermail/freebsd-hackers/2009-April/028364.html). I have compiled the patched kernel and am installing it on the VMs now.. will get back to you. S. On Apr 24, 2015, at 01:26, Andrey V. Elsukov bu7c...@yandex.ru wrote: On 24.04.2015 01:00, Sydney Meyer wrote: Hello, I have set up 2 VM's under Xen running each one IPSec-Endpoint. Everything seems to work fine, but (measured with benchmarks/iperf) the performance drops from ~10 Gb/s on a non-IPSec-Kernel to ~200 Mb/s with IPSec compiled in, regardless of whether actually using IPSec or not. Can you test this patch to see the difference? It isn't a fix; it is just to see how much avoiding the PCB check helps.

--- ip_output.c	(revision 281867)
+++ ip_output.c	(working copy)
@@ -482,7 +482,7 @@ again:
 sendit:
 #ifdef IPSEC
-	switch(ip_ipsec_output(m, inp, &flags, &error)) {
+	switch(ip_ipsec_output(m, NULL, &flags, &error)) {
 	case 1:
 		goto bad;
 	case -1:

-- WBR, Andrey V. Elsukov
Re: IPSec Performance under Xen
On 24.04.2015 03:55, Sydney Meyer wrote: Andrey, with your patch applied, the performance drop while using the IPSEC-enabled kernel without actual IPSec traffic seems to be gone. I haven't tested IPSec itself yet, as I had to start from scratch with new VMs, but I will set up an IPSec connection and report back. Thank you. But I think something will not work if you try it with IPSec. Probably, if you use some IKE software, it will not work with this patch. -- WBR, Andrey V. Elsukov
Re: IPSec Performance under Xen
You're right.. strongswan fails/hangs with:

initiating IKE_SA host-host[1] to 10.0.30.66
generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(HASH_ALG) ]
sending packet: from 10.0.30.114[500] to 10.0.30.66[500] (1148 bytes)
received packet: from 10.0.30.66[500] to 10.0.30.114[500] (456 bytes)
parsed IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(HASH_ALG) N(MULT_AUTH) ]
authentication of 'sun.strongswan.org' (myself) with pre-shared key
establishing CHILD_SA host-host
generating IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH N(ESP_TFC_PAD_N) SA TSi TSr N(MOBIKE_SUP) N(ADD_6_ADDR) N(MULT_AUTH) N(EAP_ONLY) ]
sending packet: from 10.0.30.114[4500] to 10.0.30.66[4500] (444 bytes)
retransmit 1 of request with message ID 1
sending packet: from 10.0.30.114[4500] to 10.0.30.66[4500] (444 bytes)
retransmit 2 of request with message ID 1
sending packet: from 10.0.30.114[4500] to 10.0.30.66[4500] (444 bytes)
..

S. On Apr 24, 2015, at 03:00, Andrey V. Elsukov a...@freebsd.org wrote: On 24.04.2015 03:55, Sydney Meyer wrote: Andrey, with your patch applied the performance drop while using the IPSEC-enabled kernel without doing actual IPSec traffic seems to be gone. I haven't tested IPSec itself yet, as i had to start from scratch with new VM's but i will set up a IPSec connection and report back. Thank you. But I think something will not work if you try it with IPSec. Probably if you use some IKE software, it will not work with this patch. -- WBR, Andrey V. Elsukov
Testing Congestion Control Algorithms
Hi, I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. Thank you in advance for your assistance. Best Regards, Karlis
Re: Testing Congestion Control Algorithms
Hi, On 2015-4-23, at 09:17, Karlis Laivins karlis.laiv...@gmail.com wrote: I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. I think you are moving away from the scope where freebsd-net is the correct mailing list. There are literally thousands of research papers comparing congestion control algorithms and other TCP improvements. I suggest you check some of those (Sally Floyd's papers http://www.icir.org/floyd/ are still a good starting point) and read up on what the ICCRG has done: https://irtf.org/iccrg Lars
Re: Testing Congestion Control Algorithms
On 4/23/15 3:17 PM, Karlis Laivins wrote: Hi, I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. Thank you in advance for your assistance. There is a setup called 'imunes' (from memory) that uses the virtual stack capabilities (VNET/VIMAGE) to give you a fully configurable test network within a single computer. See http://imunes.tel.fer.hr/ I'm not sure if the CC stuff can be set up per vnet, but if it can, then you could use that.. or do it yourself using vnet jails and dummynet for link latency/throughput. If Marko is around he may be able to tell you about the current state of imunes. Best Regards, Karlis
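[Editor's sketch, assuming a kernel built with VIMAGE; the module name cc_mynewreno is invented.] The do-it-yourself route Julian mentions, one leg of a dumbbell built from an epair and a vnet jail, could look like this. Whether the CC algorithm can be selected per vnet is exactly the open question above, so treat the last line as an experiment:

```shell
# Create a virtual back-to-back link and a vnet jail on one end
ifconfig epair0 create                  # yields epair0a + epair0b
jail -c name=sender vnet persist
ifconfig epair0a vnet sender            # move one end into the jail
jexec sender ifconfig epair0a 10.0.0.1/24 up
ifconfig epair0b 10.0.0.2/24 up
# Load the module under test and (experimentally) select it inside the jail
kldload cc_mynewreno
jexec sender sysctl net.inet.tcp.cc.algorithm=mynewreno
```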
Re: igb(4) watchdog timeout, lagg(4) fails
Regarding Harald Schmalzbauer's message of 20.02.2015 14:17 (localtime): (https://lists.freebsd.org/pipermail/freebsd-stable/2015-February/081810.html) Regarding Harald Schmalzbauer's message of 11.02.2015 20:48 (localtime): Regarding Jack Vogel's message of 11.02.2015 18:31 (localtime): tdh and tdt mean the head and tail indices of the ring, and these values are obviously severely borked :) Hello Jack, could you find some time to have a look at this problem? The reported values don't bother me, but the watchdog timeout does, which happens on NICs that are PCIe-connected via the PCH. Please see my previous findings. I think the most significant hint for my problem is the link_irq, which becomes garbage at the first watchdog timeout occurrence, as previously described: … For the record: Rebooting the machine (ESXi guest only!) brought the stalled igb1 back to operation. The guest has 2 igb (Kawela) ports, one from a NIC (Intel ET Dual Port 82576) at CPU-PCIe and the second port from an identical NIC, but connected via PCH-PCIe. The watchdog timeout problem only occurs with the port from the PCH-PCIe-connected NIC (falsified)! After the reboot the suspicious dev.igb.1.link_irq=848 turned into: dev.igb.0.link_irq: 3 dev.igb.1.link_irq: 4 Jack, I'd like to let you know that dev.igb.1.link_irq again shows garbage after the watchdog timeout problem occurred again: dev.igb.1.link_irq: 1458 I can imagine that resetting goes wrong and ends in loss of link_irq. I now have to reboot the guest to get igb1 back to a working state; then the link_irq will show 4 again, but I can't tell you which came first, the timeout reset or the link_irq jam. I guess the latter can't be the case, but I have no idea about the code. Thanks for any help; currently my lagg setup is permanently degraded :-( Would be nice to have it back in a working state :-) -Harry
Re: TCP Initial Window 10 MFC
On 03/04/14 at 10:22P, hiren panchasara wrote: On Tue, Mar 4, 2014 at 7:38 PM, Lawrence Stewart lstew...@freebsd.org wrote: skip I lost the battle of wills on this topic and 10.0 shipped with IW10 enabled by default :( As for having it configurable, it is a trivial patch which perhaps, Hiren, you might be willing to take a stab at? I obviously did not manage to carve out the time last year to push forward with the agenda I proposed in this thread, but I will get back to it at some point. Hi Lawrence, Let's fix it the right way if possible. Below is a rough/untested quick patch I came up with. Is this how you were planning to have nonstandard sysctl knob designed? A bit more updated patch: https://people.freebsd.org/~hiren/initcwnd.patch How do we go about the existing knob 'sysctl net.inet.tcp.experimental.initcwnd10' ? cheers, Hiren
Re: Testing Congestion Control Algorithms
On 04/23/15 at 10:17P, Karlis Laivins wrote: Hi, I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. Comparing performance of different cc algorithms might be tricky as they are designed with different goals in mind. I suggest you stick with your modified newreno vs original newreno and see if your modifications are letting you do whatever that you have decided. You may want to look at dummynet(4). This can be a useful read http://info.iet.unipi.it/~luigi/papers/20091201-dummynet.pdf siftr(4) is a handy tool to have around to make sure your congestion control module is behaving the way you want. cheers, Hiren
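[Editor's sketch; the receiver address 192.0.2.10 is invented.] To make the dummynet(4)/siftr(4) suggestion concrete, a minimal setup on the FreeBSD sender might look like this: shape outbound TCP through an emulated bottleneck, then log per-packet cwnd/ssthresh so the module's behaviour can be inspected:

```shell
# Emulate a constrained bottleneck with ipfw/dummynet
kldload dummynet
ipfw pipe 1 config bw 10Mbit/s delay 40ms queue 50
ipfw add 100 pipe 1 tcp from any to 192.0.2.10 out
# siftr logs per-packet TCP state (cwnd, ssthresh, ...) for analysis
kldload siftr
sysctl net.inet.siftr.logfile=/var/log/siftr.log
sysctl net.inet.siftr.enabled=1
```

Run the transfer, set net.inet.siftr.enabled back to 0, and compare the logged cwnd evolution of the modified module against stock newreno under identical pipe settings.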
Re: Testing Congestion Control Algorithms
Hello once again, Before I dive into TEACUP, I wanted to clarify this - if I build the testbed to consist of FreeBSD machines, will I be able to use the congestion control module (.ko) that was created by modifying cc_newreno (written in C) in TEACUP, or will I have to rewrite it in Python? Sorry if this question seems silly, but I have limited time to do the tests and I want to be sure that I don't have to redo something in a language that I haven't used yet. Thank you in advance for your answer! With Best Regards, Karlis On Thu, Apr 23, 2015 at 12:45 PM, grenville armitage garmit...@swin.edu.au wrote: On 04/23/2015 17:17, Karlis Laivins wrote: Hi, I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. Thank you in advance for your assistance. Lars is right, the ns-2 tangent is starting to diverge from freebsd-net@ Indeed, I would suggest you don't bother with ns-2 -- it won't help you do meaningful comparisons to a kernel-resident cc module you develop under FreeBSD.
If you have the time and inclination to build a small testbed using a couple of physical hosts, you might find this tool useful -- http://caia.swin.edu.au/tools/teacup My colleague and I built TEACUP (TCP Experiment Automation Controlled Using Python) to automate many aspects of running TCP performance experiments in our small, specially-constructed physical testbed. TEACUP enables repeatable testing of different TCP algorithms over a range of emulated network path conditions, bottleneck rate limits and bottleneck queuing disciplines. (e.g. I've used it to experiment with custom FreeBSD CC modules vs conventional FreeBSD and Linux CC algorithms.) A key caveat: TEACUP assumes your physical testbed is a multi-host/single-bottleneck dumbbell-like topology with suitably configured end hosts and Linux-based bottleneck router (see http://caia.swin.edu.au/reports/150210C/CAIA-TR-150210C.pdf for an example). TEACUP does not try to run experiments over arbitrary network paths or the wider Internet. This has satisfied our use-cases, other people's mileage may vary :-) We've released TEACUP in case it may be useful to other researchers who already have (or are interested in setting up) similar network testbeds. (Small note -- we recently found a small bug in some of the v0.9 data analysis code, which will be fixed when v0.9.2 comes out RSN.) cheers, gja
Re: Testing Congestion Control Algorithms
Hi, Thank you very much for this suggestion! I will try to build the testbed and use the tool suggested by you. Best Regards, Karlis On Thu, Apr 23, 2015 at 12:45 PM, grenville armitage garmit...@swin.edu.au wrote: On 04/23/2015 17:17, Karlis Laivins wrote: Hi, I am currently working on a modification of TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now, I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but that's just a statistical tool (from what I hear) and the results are far from reality (please correct me, if I am wrong). Please recommend a tool or way I can test the performance of the congestion control algorithm in a real environment (sender side - 2 Computers, one connected to the wireless network, other to a wire, receiver - one PC, running FTP server, both senders each sending a big file at the same time). I would like to get comparable performance results from each of the existing congestion control algorithm as well as the new one I have created by modifying the NewReno algorithm. Thank you in advance for your assistance. Lars is right, the ns-2 tangent is starting to diverge from freebsd-net@ Indeed, I would suggest you don't bother with ns-2 -- it wont help you do meaningful comparisons to a kernel-resident cc module you develop under FreeBSD. If you have the time and inclination to build a small testbed using a couple of physical hosts, you might find this tool useful -- http://caia.swin.edu.au/tools/teacup My colleague and I built TEACUP (TCP Experiment Automation Controlled Using Python) to automate many aspects of running TCP performance experiments in our small, specially-constructed physical testbed. 
TEACUP enables repeatable testing of different TCP algorithms over a range of emulated network path conditions, bottleneck rate limits and bottleneck queuing disciplines. (e.g. I've used it to experiment with custom FreeBSD CC modules vs conventional FreeBSD and Linux CC algorithms.) A key caveat: TEACUP assumes your physical testbed is a multi-host/single-bottleneck dumbbell-like topology with suitably configured end hosts and Linux-based bottleneck router (see http://caia.swin.edu.au/reports/150210C/CAIA-TR-150210C.pdf for an example). TEACUP does not try to run experiments over arbitrary network paths or the wider Internet. This has satisfied our use-cases, other people's mileage may vary :-) We've released TEACUP in case it may be useful to other researchers who already have (or are interested in setting up) similar network testbeds. (Small note -- we recently found a small bug in some of the v0.9 data analysis code, which will be fixed when v0.9.2 comes out RSN.) cheers, gja
Re: Testing Congestion Control Algorithms
On 04/23/2015 17:17, Karlis Laivins wrote:

Hi,

I am currently working on a modification of the TCP NewReno congestion control algorithm. It seems that I have been able to write a working module. Now I am looking for a way to test the performance of the built-in congestion control algorithms and the new algorithm. I have heard about the NS-2 simulator, and I am trying to compile and configure it now, but from what I hear it is only a statistical simulation tool and its results are far from reality (please correct me if I am wrong).

Please recommend a tool or way to test the performance of a congestion control algorithm in a real environment (sender side: two computers, one connected to the wireless network, the other wired; receiver side: one PC running an FTP server, with both senders each sending a big file at the same time). I would like to get comparable performance results for each of the existing congestion control algorithms as well as the new one I have created by modifying NewReno. Thank you in advance for your assistance.

Lars is right, the ns-2 tangent is starting to diverge from freebsd-net@. Indeed, I would suggest you don't bother with ns-2 -- it won't help you do meaningful comparisons against a kernel-resident cc module you develop under FreeBSD.

If you have the time and inclination to build a small testbed using a couple of physical hosts, you might find this tool useful -- http://caia.swin.edu.au/tools/teacup

My colleague and I built TEACUP (TCP Experiment Automation Controlled Using Python) to automate many aspects of running TCP performance experiments in our small, specially-constructed physical testbed. TEACUP enables repeatable testing of different TCP algorithms over a range of emulated network path conditions, bottleneck rate limits and bottleneck queuing disciplines. (E.g. I've used it to experiment with custom FreeBSD CC modules vs conventional FreeBSD and Linux CC algorithms.)

A key caveat: TEACUP assumes your physical testbed is a multi-host/single-bottleneck dumbbell-like topology with suitably configured end hosts and a Linux-based bottleneck router (see http://caia.swin.edu.au/reports/150210C/CAIA-TR-150210C.pdf for an example). TEACUP does not try to run experiments over arbitrary network paths or the wider Internet. This has satisfied our use-cases; other people's mileage may vary :-)

We've released TEACUP in case it may be useful to other researchers who already have (or are interested in setting up) similar network testbeds. (Small note -- we recently found a small bug in some of the v0.9 data analysis code, which will be fixed when v0.9.2 comes out RSN.)

cheers,
gja
Re: net.inet.ip.forwarding impact on throughput
On Tue, Apr 21, 2015 at 12:47:45PM -0700, Scott Larson wrote:

We're in the process of migrating our network into the future with 40G at the core, including our firewall/traffic routers with 40G interfaces. An issue which this exposed, and which threw me for a week, turns out to be directly related to net.inet.ip.forwarding, and I'm looking to get some insight on what exactly is occurring as a result of using it.

What I am seeing is that when that knob is set to 0, an identical pair of what will be PF/relayd servers with direct DAC links between each other using Chelsio T580s can sustain around 38 Gb/s on iperf runs. However, the moment I set that knob to 1, throughput collapses into the 3 to 5 Gb/s range. As the old gear this is replacing is all GigE, I'd never witnessed this. Twiddling net.inet.ip.fastforwarding has no apparent effect. I've not found any docs going in depth on what deeper changes enabling forwarding makes to the network stack. Does it ultimately put a lower priority on traffic where the server functioning as the packet router is the final endpoint, in exchange for having more resources available to route traffic across interfaces, as would generally be the case?

Enabling forwarding disables LRO and TSO, and that probably accounts for a large part of the difference in throughput that you've observed. The number of packets passing through the stack (and not the amount of data passing through) is the dominant bottleneck. fastforwarding _should_ make a difference, but only if packets actually take the fast-forward path. Check the counters available via netstat:

# netstat -sp ip | grep forwarded

Regards,
Navdeep
Scott Larson
Lead Systems Administrator, Wiredrive
https://www.wiredrive.com/
T 310 823 8238 x1106 | M 310 904 8818
Re: IPSec Performance under Xen
Sure.. I'll get back to you.

S.

On Apr 24, 2015, at 01:26, Andrey V. Elsukov bu7c...@yandex.ru wrote:

On 24.04.2015 01:00, Sydney Meyer wrote:

Hello, I have set up two VMs under Xen, each running one IPSec endpoint. Everything seems to work fine, but (measured with benchmarks/iperf) the performance drops from ~10 Gb/s on a non-IPSec kernel to ~200 Mb/s with IPSec compiled in, regardless of whether IPSec is actually used or not.

Can you test this patch to see the difference? It isn't a fix; it is just to see how much avoiding the PCB check helps.

--- ip_output.c (revision 281867)
+++ ip_output.c (working copy)
@@ -482,7 +482,7 @@ again:
 sendit:
 #ifdef IPSEC
-	switch(ip_ipsec_output(m, inp, flags, error)) {
+	switch(ip_ipsec_output(m, NULL, flags, error)) {
 	case 1:
 		goto bad;
 	case -1:

-- 
WBR, Andrey V. Elsukov
Re: IPSec Performance under Xen
Andrey,

with your patch applied, the performance drop while using the IPSec-enabled kernel without doing actual IPSec traffic seems to be gone. I haven't tested IPSec itself yet, as I had to start from scratch with new VMs, but I will set up an IPSec connection and report back.

S.

On Apr 24, 2015, at 01:26, Andrey V. Elsukov bu7c...@yandex.ru wrote:
[snip -- same patch as quoted in the previous message]