ipfw: Can't see other flows in pipe

2009-03-06 Thread Sebastian Mellmann
Hi everyone!

I've got the following ipfw rules:


cmd=ipfw
in_if=em0
out_if=em1
webclient_upload_bandwidth=1024Kbit/s
webclient_download_bandwidth=6144Kbit/s
bottleneck_bandwidth=100Mbit/s
client_rtt_delay=10
queue_size=50

client1_subnet=192.168.5.0/26
server1_subnet=192.168.7.0/24

$cmd pipe 100 config mask all bw $webclient_upload_bandwidth queue $queue_size delay $client_rtt_delay

$cmd pipe 200 config mask all bw $webclient_download_bandwidth queue $queue_size delay $client_rtt_delay

$cmd add pipe 100 all from $client1_subnet to $server1_subnet in recv $in_if

$cmd add pipe 200 all from $server1_subnet to $client1_subnet out xmit $in_if

$cmd pipe 500 config bw $bottleneck_bandwidth
$cmd add pipe 500 all from any to any via $in_if

$cmd pipe 510 config bw $bottleneck_bandwidth
$cmd add pipe 510 all from any to any via $out_if


For testing purposes I've got 4 concurrent downloads via scp from the
server1_subnet to the client1_subnet.
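
(For reference, a rough sketch of how that load is generated; the user
name and the file path are placeholders:)

#!/bin/sh
# Sketch of the test load: one scp download per client host, all running
# in parallel. 'user' and /tmp/bigfile are placeholders.
for client in 192.168.5.1 192.168.5.2 192.168.5.3 192.168.5.4; do
    ssh user@$client "scp user@192.168.7.1:/tmp/bigfile /dev/null" &
done
wait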

ipfw pipe show gives me the following:

00510: 100.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp   192.168.5.4/47753    192.168.7.1/22     610244 609078476    2  104   1
00100:   1.024 Mbit/s    0 ms   50 sl. 4 queues (64 buckets) droptail
    mask: 0xff 0xffffffff/0xffff -> 0xffffffff/0xffff
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
 18 tcp   192.168.5.4/47753    192.168.7.1/22      15067    820472    0    0   0
 29 tcp   192.168.5.1/59724    192.168.7.1/22      64519   3512539    0    0   0
 34 tcp   192.168.5.2/58805    192.168.7.1/22      64035   3481423    0    0   0
 54 tcp   192.168.5.3/40995    192.168.7.1/22      66705   3633640    0    0   0
00305: unlimited         0 ms   50 sl. 0 queues (1 buckets) droptail
00310: unlimited         0 ms   50 sl. 0 queues (1 buckets) droptail
00200:   6.144 Mbit/s    0 ms   50 sl. 4 queues (64 buckets) droptail
    mask: 0xff 0xffffffff/0xffff -> 0xffffffff/0xffff
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  2 tcp   192.168.7.1/22       192.168.5.2/58805  121901 182399179   29 43124 234
 47 tcp   192.168.7.1/22       192.168.5.3/40995  126392 189093880   43 64124 241
 51 tcp   192.168.7.1/22       192.168.5.1/59724  122550 183349839   34 50624 251
 60 tcp   192.168.7.1/22       192.168.5.4/47753   28565  42735852    0    0  55
00315: unlimited         0 ms   50 sl. 0 queues (1 buckets) droptail
00500: 100.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp   192.168.5.4/47753    192.168.7.1/22     609337 607754332    2 1552   0
00300: unlimited         0 ms   50 sl. 0 queues (1 buckets) droptail


Why do I see only ONE connection inside the 500/510 pipes?
I thought I would see every connection going through those pipes.



Regards,
Sebastian





Re: IPFW DUMMYNET: Several pipes after each other

2009-01-27 Thread Sebastian Mellmann
Ian Smith wrote:

 That's a very good ipfw tutorial; parts of it are a bit outdated
 (FreeBSD 4.x), but it covers a lot of useful background. I just skimmed
 lots of it now, but nothing I read jarred, unlike the Handbook section.

   If I choose the default (50 packets), it means it takes approx. 100 ms
   (600 kbit / 6144 kbit/s) to fill the queue.
   So the question is: Which value to choose for the queue?

 It's going to depend on lots of things, your workload, upstream push ..
 you could start with more like the default and adjust as necessary?

Sounds like a good idea.
I'll just start with the default and see if it works in my setup.

 And you'll surely need a much larger queue for this pipe, at 100Mbit/s.

   As already asked above:

   How do I know whether the queue is large enough for my needs?

 I'm never sure, so tend to experiment.  How fast your hardware is and
 your kern.hz setting could be significant factors, as could the TCP/UDP
 mix and other factors I know little about.  Reducing reported packet
 drops is about all I've used for a guide so far.  This one is a FreeBSD
 4.8 box, a 2.4GHz P4 doing little but being a filtering bridge between
 an 8192/384kbps ADSL link and nests of mostly XP boxes in 3 LAN groups:

 !ipfw pipe show | egrep 'tcp|bit'
 00010: 256.000 Kbit/s    0 ms   30 KB 1 queues (1 buckets) droptail
   0 tcp   192.168.0.23/1043    207.46.17.61/80    7196387 2897628161  0    0 9706
 00020:   5.120 Mbit/s    0 ms   50 KB 1 queues (1 buckets) droptail
   0 tcp   207.46.17.61/80      192.168.0.23/1043  9977802 12858014698 0    0 63260

 00040:  96.000 Kbit/s    0 ms   20 KB 1 queues (1 buckets) droptail
   0 tcp   192.168.0.45/1037   66.249.89.147/443   2315107 299340364   0    0 2086
 00050:   1.536 Mbit/s    0 ms   40 KB 1 queues (1 buckets) droptail
   0 tcp  66.249.89.147/443     192.168.0.45/1037  3279021 3802388928  0    0 22433

 00060: 192.000 Kbit/s    0 ms   30 KB 1 queues (1 buckets) droptail
   0 tcp   192.168.0.64/1032   207.46.106.36/1863  1847947 563209421   0    0 141
 00070:   3.072 Mbit/s    0 ms   40 KB 1 queues (1 buckets) droptail
   0 tcp  207.46.106.36/1863    192.168.0.64/1032  2438211 3075075035  0    0 4550

 It's nearly all streaming rather than more interactive traffic, so
 pipe latency isn't so much of a concern.  Anyway, I rarely actually
 catch any traffic still in-queue, which you can stare at for tuning.

 Also, that's aggregate traffic, not per IP as with your masks (which
 look maybe wider than necessary; 0xffff covers a /16), so you may
 wind up with lots of separate queues sharing a pipe, which may look
 very different.  How many hosts, and how much memory to spare for each?


Is there any chance to get the dropped-packet counts for _each_ queue
(e.g. logged to a file for further investigation)?
Does ipfw provide anything for this?

I'm mainly running experiments with different kinds of settings
(bandwidth limits, variable delay, packet-drop probability, etc.) and I
want to see how many packets are actually dropped by ipfw.
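
Right now the best I can think of is polling 'ipfw pipe show'
periodically (a crude sketch; the log path and interval are arbitrary):

#!/bin/sh
# Crude workaround: snapshot the per-queue counters (including the Drp
# column) every 10 seconds for later offline analysis.
while :; do
    date '+%s' >> /var/log/dummynet-stats.log
    ipfw pipe show >> /var/log/dummynet-stats.log
    sleep 10
done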


 HTH, Ian

Regards and thanks for the help so far,
Sebastian




Re: IPFW DUMMYNET: Several pipes after each other

2009-01-26 Thread Sebastian Mellmann
Ian Smith wrote:
On Thu, 22 Jan 2009 08:10:09 +0100 (CET)
  
   So far I've got those rules:
  
   in_if=em0
   out_if=em1
   management_if=em2
   in_ip=100.100.100.1
   out_ip=200.200.200.1
   management_ip=172.16.0.201
   client1_subnet=192.168.5.0/26
   client2_subnet=192.168.6.0/26
   server_subnet=192.168.7.0/24
  
   download_bandwidth=6144Kbit/s
   upload_bandwidth=1024Kbit/s
   delay=0
   queue_size=10

 10 slots, i.e. packets, is likely too small a queue size at these rates.
 You'll want to check the dropped-packet stats from 'ipfw pipe show' for
 that; see the section in ipfw(8) about calculating sizes / delays.


I had a look at the ipfw howto on the freebsd site [1], but I'm not 100%
sure how to choose a good value for the queue size.

If I choose the default (50 packets), it means it takes approx. 100 ms
(600 kbit / 6144 kbit/s) to fill the queue.
So the question is: Which value to choose for the queue?
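
For reference, the arithmetic behind that 100 ms figure (a quick sketch,
assuming 1500-byte packets):

# 50 slots * 1500 bytes * 8 bits = 600 kbit of queued data; at
# 6144 kbit/s a full queue drains in 600/6144 ~= 0.098 s (98 ms):
echo "50 * 1500 * 8 / 6144000" | bc -l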

 I suggest using 'in recv' and 'out xmit' rather than via for these, for
 the sake of clarity.  'in recv' and 'in via' come to the same thing, as
 only the receive interface is known on inbound packets, but 'out via'
 applies to packets that were *received* on the specified interface as
 well as those going out on that interface after routing, which can lead
 to surprising results sometimes, and being more specific never hurts ..

Thanks for the hint.
I'll change that.
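
So the forwarding rules become (same pipes, just the more specific
direction and interface matching):

# Rewritten with explicit direction + interface, per Ian's suggestion:
$cmd add pipe 100 all from $client1_subnet to $server_subnet in recv $in_if
$cmd add pipe 200 all from $server_subnet to $client1_subnet out xmit $in_if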

   But when I have a look at the pipes with 'ipfw show' I can only see
   packets going through pipe 50, and nothing goes through the other
   pipes (which actually makes sense, since IPFW works that way?).

 IPFW works that way if you (likely) have net.inet.ip.fw.one_pass=1 .. so
 that packets exiting from pipes aren't seen by the firewall again.  If
 you set one_pass=0, packets are reinjected into the firewall at the rule
 following the pipe (or queue) action, which is what you want to do here.

Actually this is also described in the manpage of ipfw(8).
Shame on me ;-)
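
So the one-liner I need on this box (the knob defaults to 1):

# Re-inject packets into the ruleset after each pipe, so they can pass
# through the overall bottleneck pipe and then the per-client pipes:
sysctl net.inet.ip.fw.one_pass=0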

 And you'll surely need a much larger queue for this pipe, at 100Mbit/s.


As already asked above:

How do I know whether the queue is large enough for my needs?


 cheers, Ian

Regards,
Sebastian

[1] http://www.freebsd-howto.com/HOWTO/Ipfw-HOWTO





Re: shell scripting, how to auto-timeout?

2009-01-22 Thread Sebastian Mellmann

 The java process has the potential to run forever, and I want it to
 run for at most 1 second then get killed.  I could write a parent
 script that somehow gets the PID of the child script, but the problem
 is that the java program writes to standard out, the result of the
 program is written to standard out.  I also don't really want to share
 a PID with a temporary file.

 So what I might do is this:

 /usr/local/bin/java PipeGenerator $* &
 sleep 1
 kill the java command if not already killed

   

What about using 'pidof' to check whether the process still has a PID,
and killing it if so?
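
Something along these lines, for instance (an untested sketch; I grab
the PID with $! instead, since pidof isn't in the FreeBSD base system):

#!/bin/sh
# Untested sketch: start the java command in the background, give it one
# second, then kill it if it is still running. Stdout is untouched.
/usr/local/bin/java PipeGenerator "$@" &
pid=$!
sleep 1
kill $pid 2>/dev/null
wait $pid 2>/dev/null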

 Also with the above code I would be waiting for 1 second even if the
 java process finished sooner.  But that is a penalty I'm willing to
 pay, unless there is a more elegant solution.

 How do I do this?
   


Regards,
Sebastian


IPFW DUMMYNET: Several pipes after each other

2009-01-21 Thread Sebastian Mellmann
Hi!

I'm using FreeBSD 7.0 with IPFW DUMMYNET enabled.

I've got a problem with creating a ruleset which allows me to limit the
overall bandwidth of a link and afterwards pass the packets to another
pipe for processing.

So far I've got those rules:

in_if=em0
out_if=em1
management_if=em2
in_ip=100.100.100.1
out_ip=200.200.200.1
management_ip=172.16.0.201
client1_subnet=192.168.5.0/26
client2_subnet=192.168.6.0/26
server_subnet=192.168.7.0/24

download_bandwidth=6144Kbit/s
upload_bandwidth=1024Kbit/s
delay=0
queue_size=10

cmd=ipfw

$cmd add 10 allow all from any to any via lo0

$cmd pipe 100 config mask src-ip 0xffff bw $upload_bandwidth queue $queue_size delay $delay
$cmd pipe 200 config mask dst-ip 0xffff bw $download_bandwidth queue $queue_size

$cmd add pipe 100 all from $client1_subnet to $server_subnet in via $in_if
$cmd add pipe 200 all from $server_subnet to $client1_subnet out via $in_if

$cmd add pipe 100 all from $client2_subnet to $server_subnet in via $in_if
$cmd add pipe 200 all from $server_subnet to $client2_subnet out via $in_if

$cmd add 1 allow all from any to any via $management_if
$cmd add 2 allow all from any to any via $in_if
$cmd add 3 allow all from any to any via $out_if

---

What I want to add now is the possibility to limit the bandwidth of the
whole link, e.g. to 100Mbit/s.

I've tried to add a pipe:

$cmd pipe 50 config bw 100Mbit/s queue $queue_size
$cmd add pipe 50 all from any to any via $in_if


But when I have a look at the pipes with 'ipfw show' I can only see
packets going through pipe 50, and nothing goes through the other pipes
(which actually makes sense, since IPFW works that way?).


Any help is much appreciated.

Thanks in advance.

Sebastian M.

