question about delayed ACKs on OpenBSD
Hello, I've noticed somewhat different behaviour with regard to delayed ACKs on OpenBSD. The other systems I tested (2 Linux distros, win2k/xp) pretty much acted as I've always seen it: 1 ACK per at most 2 segments, but with no bigger delay than some arbitrary value (looking at the RFC, no more than 500 ms, usually less) - thus in practice, 1 ACK every 2 segments, assuming latency is low enough. For my ridiculously asymmetric line, 24:1 (6144/256), a single full-speed download eats roughly 2/3+ of the upload for ACKs alone, partly due to hefty ADSL overhead (and, after looking at the PPPoA RFC, 2 ATM cells used for just 1 ACK). On OpenBSD, though, the result was a generally perfect 66% of segments ACKed: looking at tcpdump output, the receiving side sent ACKs precisely after receiving 1,2,1,2,1,2... segments. The test was made on a LAN between two OpenBSD 4.0 boxes (GENERIC kernel), limiting the speed with one queue (and with none as well) on the sending host, as needed. Speed didn't seem to matter, though - the behaviour was the same at 256 kbit as at 100 Mbit. Assuming it's intended behaviour - what are the reasons for implementing it this way?
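The "2 ATM cells per ACK" claim above can be checked with back-of-envelope arithmetic. This sketch assumes VC-multiplexed PPPoA (2-byte PPP protocol field) and a minimal 40-byte TCP/IP ACK; LLC encapsulation would add a few more bytes but gives the same cell count here:

```shell
#!/bin/sh
# Why one bare ACK costs 2 ATM cells on PPPoA (illustrative assumptions,
# not measurements from the original post):
ack=40        # bytes: minimal TCP/IP ACK (20 IP + 20 TCP, no options)
ppp=2         # bytes: PPP protocol field (VC-muxed PPPoA; an assumption)
aal5=8        # bytes: AAL5 trailer
payload=$((ack + ppp + aal5))      # 50 bytes must be carried
cells=$(( (payload + 47) / 48 ))   # each ATM cell carries 48 payload bytes
wire=$((cells * 53))               # each cell is 53 bytes on the wire
echo "$cells cells, $wire bytes on the wire per ACK"
# prints: 2 cells, 106 bytes on the wire per ACK
```

So a 40-byte ACK becomes 106 bytes on the ATM link, which is where the upstream starvation on a 24:1 line comes from.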
Re: Is doing a network restore from bsd.rd at all possible?
smith wrote: If you successfully do this, can you post how you did it? The magic is in bsd's ftp(1) -o flag, which makes it a somewhat similar beast to wget. It can also pull the file over http or, since 4.0, https - check the AUTO-FETCHING FILES section of the man page; it's quite a flexible tool. As for recovering / cloning using bsd.rd, you could simply do something like:

newfs /dev/rwd1e
mount -o async /dev/wd1e /mnt
cd /mnt
ftp -o - ftp://openbsd.example.com/partition.dump | restore rvf -
cd /
umount /mnt

One remark though - use or prepare a larger /tmp before doing so, or you may irritate restore quite a bit if you recover some larger filesystem.
Re: Is doing a network restore from bsd.rd at all possible?
So my question is this: is doing a remote network restore using 'bsd.rd' at all possible (or even suggested/recommended), or are directly attached devices (IDE/SCSI/USB drives, tape drives) the only supported restore(8) sources with 'bsd.rd'? You can pipe ftp's output to restore.
Re: Mail gateway behind MS Exchange
Cedric Brisseau wrote: I think spamd can't help a lot since mails aren't received directly. Maybe you have similar cases with spamassassin+clamav or relaydb, procmail? postfix (with basic smtpd restrictions, which can do wonders), clamav + spamassassin (with bayes enabled) run from amavisd. You can set up clean/spam/virus/etc. quarantines nicely with amavisd, so there's no need to worry that some very important mail could be misclassified and discarded (I guess that's what you meant by not received directly). The bayes filter in SA is, in my case and after decent training, effectively 100% accurate, with minimally adjusted default bayes scores.
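For reference, the postfix + amavisd glue described above boils down to a couple of main.cf settings. This is a hypothetical minimal fragment, not the poster's actual configuration - the restriction list and the amavisd port are illustrative:

```
# main.cf (illustrative): basic smtpd restrictions
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain

# hand accepted mail to amavisd-new for SpamAssassin + ClamAV scanning
content_filter = smtp-amavis:[127.0.0.1]:10024
```

amavisd then returns the scanned mail to postfix on a separate listener, and its $final_*_destiny settings decide what ends up quarantined instead of discarded.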
Re: pf queue monitoring
Lawrence Horvath wrote: Is there a way to monitor how much traffic is passing through a queue in bps? Besides pfctl -vvsq, try pftop from ports - it's a great pf monitor, similar in use to top.
Re: Tuning OpenBSD network throughput
Matthew R. Dempsky wrote: I have three machines that I'm using for testing network performance: - 2.0GHz Pentium 4, 256MiB RAM, Ubuntu 6.06, e1000 - 266MHz Pentium II, 192MiB RAM, Debian Unstable, sk98lin - 600MHz Pentium M, 256MiB RAM, OpenBSD 4.0-current, em(4) [cut] Can anyone explain the huge discrepancy here? Can I do anything to get OpenBSD to achieve at least 150 Mbits/sec? Thanks. Besides certain Compex cards (wb driver, 3.8, with queueing under PF), I haven't had any strange bandwidth problems either (testing plenty of fxp, xl, rl, vr cards). Well, there were some, but that's a different subject regarding cbq/hfsc and non-borrowing queues at certain speeds. You can try a few other methods to measure bandwidth. For example:

- two nc's, one reading from /dev/zero, the other writing to /dev/null
- out-of-the-box OpenBSD httpd, with wget writing to /dev/null
- some ftp transfer, or even scp of a file premade with dd (disk shouldn't really be a bottleneck at fast ethernet speeds, and neither should encryption with a not hopelessly old CPU)

Pair these with some simple PF setup using queues, and watch the bandwidth with pftop, systat, ifstat or pfctl -vvsq, to name a few. The time(1) command can be helpful too. Also, remember that queueing works only in the outgoing direction, if you decide to use it.
Re: Network equipment testing with two NICs
Matthew R. Dempsky wrote: Is my guess correct? If so, is it possible to have OpenBSD route traffic both ways across the ethernet cable? Thanks. ICMP replies would go through loopback in such a case. If you wanted to force them over the cable, you could use route(8) to set the routing manually, or use pf and set the reply-to option on the interface where the ICMP request comes in.
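The pf variant mentioned above is a one-line rule. A hypothetical sketch, assuming the request arrives on sk0 and the peer's address on the other end of the cable is 192.168.0.2 (both placeholders):

```
# pf.conf (illustrative): send replies to pings arriving on sk0 back out
# through sk0 to the cable peer, instead of letting them take loopback
pass in on sk0 reply-to (sk0 192.168.0.2) inet proto icmp
```

reply-to records which interface and next hop to use for the return traffic of the state, overriding the normal routing table lookup.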
Re: Network equipment testing with two NICs
Matthew R. Dempsky wrote: On Tue, Aug 01, 2006 at 11:24:17PM +0200, Michal Soltys wrote: icmp's replies would go through loopback in such case. Really? I got the impression from tcpdump that traffic from sk0 to sk1 (whether ICMP request or reply) always went over the ethernet cable, while traffic from sk1 to sk0 did not. I meant the case of the first ping - your requests were sent through the cable, but not the replies. I think I know why you have this asymmetrical situation, but I must check it myself first, before posting further. The output of ``tcpdump -i sk0'' shows only and all packets that were sent through or received from sk0's PHY, right? Yes.
Re: 256 color support for terminals under X
Bihlmaier Andreas wrote: Hello misc@, I stumbled across a problem with all X terminal emulators in OpenBSD (that is xterm and aterm, eterm and rxvt from ports). None of the above seems to support 256 colors. I tried various combinations of $TERM (xterm, xterm-color, xterm-xfree86, xterm-256color) with all the terminals, running and not running screen. Check the simple test scripts at http://frexx.de/xterm-256-notes/ - at least you'll have an answer as to whether the colors work properly.
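If fetching the scripts from that page isn't convenient, a quick stand-in (my own sketch, not one of the linked scripts) is to print all 256 background colors with the standard xterm 256-color escape sequence; on a terminal that really supports them you see 256 distinct cells rather than repeating blocks of 8 or 16:

```shell
#!/bin/sh
# Print the 256 xterm palette entries, 16 per row, using the
# ESC[48;5;Nm background-color sequence.
i=0
while [ "$i" -lt 256 ]; do
    printf '\033[48;5;%dm %3d \033[0m' "$i" "$i"
    [ $(( (i + 1) % 16 )) -eq 0 ] && printf '\n'
    i=$((i + 1))
done
```

If colors 16-255 all render as duplicates of the first 16, the emulator (or the $TERM entry in use) lacks 256-color support.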