Re: WANTLIB/LIB_DEPENDS semantics change

2010-07-03 Thread Marc Espie
Current tests go fine, and actually show some existing problems with
the old infrastructure:
- too complicated
- bsd.port.mk lies about what's going on.

Namely, libspecs were apparently tied to the pkgspec in a LIB_DEPENDS.
But that's not true! bsd.port.mk was only using those specs as a witness
to see whether the LIB_DEPENDS should be used as a RUN_DEPENDS, without ever
checking that the lib actually came from THAT dependency.

The new code *does* really check, for old-style LIB_DEPENDS, that the libspec
indeed comes from the right location. And lo and behold, I've already
caught three ports that were attaching the wrong libspec to a given LIB_DEPENDS.

So, floating the libspecs out is the right thing to do...
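
Roughly, in a port Makefile, the change looks like this (the jpeg dependency
below is just a made-up illustration, and the exact spec syntax is a sketch
from memory, not gospel):

# old style: the libspec rides along inside the LIB_DEPENDS entry
LIB_DEPENDS =	jpeg.>=62::graphics/jpeg

# new style: the libspec floats out into WANTLIB;
# LIB_DEPENDS just names the port the library has to come from
WANTLIB +=	jpeg
LIB_DEPENDS =	graphics/jpeg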



socket buffers

2010-07-03 Thread Stuart Henderson
Does anyone know offhand the reason why network connections fail
if socket buffers are set above 256k?

# sysctl net.inet.tcp.sendspace=262145 
# telnet naiad 80
Trying 2a01:348:108:108:a00:20ff:feda:88b6...
Trying 195.95.187.35...
#

I was thinking of looking into it, but before going down that rabbit
hole I thought I'd ask in case there's a quick answer that somebody
already knows...

(yes, people do use buffers much bigger than this; I looked at some
of the academic ftp mirror sites - it looks like mirrorservice.org will
negotiate 3MB buffers, aarnet 35MB, if you let them - presumably
they try to avoid buffers being a bottleneck for clients reaching
them over a national network of at least 1Gb/s end-to-end).



Re: socket buffers

2010-07-03 Thread Joerg Sonnenberger
On Sat, Jul 03, 2010 at 11:54:17AM +0100, Stuart Henderson wrote:
> Does anyone know offhand the reason why network connections fail
> if socket buffers are set above 256k?

You might have to patch sb_max for that.

Joerg



Re: socket buffers

2010-07-03 Thread Claudio Jeker
On Sat, Jul 03, 2010 at 11:54:17AM +0100, Stuart Henderson wrote:
> Does anyone know offhand the reason why network connections fail
> if socket buffers are set above 256k?
>

There is this magical define in uipc_socket2.c called SB_MAX that limits
the socket buffers to 256k. Going over that limit makes the initial scaling
fail and you end up with no buffer at all.
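
A minimal userland sketch of that guard (this is not the kernel code; it just
assumes the 2010-era default of SB_MAX = 256*1024 and that an over-sized
reservation is refused outright, which is why the connect fails with no
buffer at all):

#include <stdio.h>

#define SB_MAX	(256 * 1024)		/* hard cap on a socket buffer */

static unsigned long sb_max = SB_MAX;	/* the knob Joerg suggests patching */

/* returns 1 if a buffer of size cc may be reserved, 0 if it is refused */
static int
sb_reserve_ok(unsigned long cc)
{
	if (cc == 0 || cc > sb_max)
		return 0;		/* refused: the socket gets no buffer */
	return 1;
}

int
main(void)
{
	unsigned long sizes[] = { 262144, 262145 };
	int i;

	for (i = 0; i < 2; i++)
		printf("sendspace=%lu -> %s\n", sizes[i],
		    sb_reserve_ok(sizes[i]) ? "ok" : "rejected");
	return 0;
}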

> # sysctl net.inet.tcp.sendspace=262145
> # telnet naiad 80
> Trying 2a01:348:108:108:a00:20ff:feda:88b6...
> Trying 195.95.187.35...
> #
>
> I was thinking of looking into it, but before going down that rabbit
> hole I thought I'd ask in case there's a quick answer that somebody
> already knows...
>
> (yes, people do use buffers much bigger than this; I looked at some
> of the academic ftp mirror sites - it looks like mirrorservice.org will
> negotiate 3MB buffers, aarnet 35MB, if you let them - presumably
> they try to avoid buffers being a bottleneck for clients reaching
> them over a national network of at least 1Gb/s end-to-end).
>

35M, that is insane. Either they have machines with infinite memory or you
can kill the boxes easily.

-- 
:wq Claudio



Re: socket buffers

2010-07-03 Thread Joerg Sonnenberger
On Sat, Jul 03, 2010 at 05:40:45PM +0200, Claudio Jeker wrote:
> 35M, that is insane. Either they have machines with infinite memory or you
> can kill the boxes easily.

You don't need 35MB per client connection if interfaces like sendfile(2)
are used. All the kernel has to guarantee in that case is copy-on-write
for the file content as far as it has already been sent. Media
distribution servers normally don't change files in place, so the only
backpressure this creates is on the VFS cache. Let's assume the server
is busy due to a new OpenBSD/Linux/Firefox/whatever release. A lot of
clients will try to fetch a small number of distinct files. The memory
the kernel has to commit is limited by the size of that active set, not
by the number of clients.
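
To make the zero-copy point concrete, here is a rough sketch using the Linux
sendfile(2) signature (OpenBSD has no sendfile; the helper name and error
handling are made up for illustration). The file data is served straight out
of the page/VFS cache, so a thousand clients fetching the same release ISO
share one cached copy instead of a thousand 35MB socket buffers:

#include <sys/types.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* hypothetical helper: stream one file to one connected client socket */
static int
serve_file(int client_sock, const char *path)
{
	struct stat st;
	off_t off = 0;
	int fd = open(path, O_RDONLY);

	if (fd == -1)
		return -1;
	if (fstat(fd, &st) == -1) {
		close(fd);
		return -1;
	}
	while (off < st.st_size) {
		/* the kernel pushes pages from the file cache to the wire;
		 * no userland buffer, no per-client copy of the file */
		ssize_t n = sendfile(client_sock, fd, &off, st.st_size - off);
		if (n <= 0)
			break;	/* error or the peer went away */
	}
	close(fd);
	return (off == st.st_size) ? 0 : -1;
}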

Joerg



Re: socket buffers

2010-07-03 Thread Stuart Henderson
On 2010/07/03 18:17, Joerg Sonnenberger wrote:
> On Sat, Jul 03, 2010 at 05:40:45PM +0200, Claudio Jeker wrote:
> > 35M, that is insane. Either they have machines with infinite memory or you
> > can kill the boxes easily.

some would also say that 16K is insane ;-)

> You don't need 35MB per client connection if interfaces like sendfile(2)
> are used. All the kernel has to guarantee in that case is copy-on-write
> for the file content as far as it has already been sent. Media
> distribution servers normally don't change files in place, so the only
> backpressure this creates is on the VFS cache. Let's assume the server
> is busy due to a new OpenBSD/Linux/Firefox/whatever release. A lot of
> clients will try to fetch a small number of distinct files. The memory
> the kernel has to commit is limited by the size of that active set, not
> by the number of clients.

there is some pretty serious hardware behind it...
http://mirror.aarnet.edu.au/indexabout.html



Re: Call for testing: IPsec diff (update)

2010-07-03 Thread Reyk Floeter
On Fri, Jul 02, 2010 at 10:49:52PM +0200, Reyk Floeter wrote:
> I need people to test the following IPsec diff on existing setups
> running -current.  This diff will add some cool features for the next
> release, but I first need regression testing with plain old setups
> (ipsec.conf with static keying or isakmpd), preferably on IPsec setups
> that are running close to production.  This diff depends on -current and
> my latest changes to enc(4) from earlier this week.
>

Here is an updated diff that will apply to -current.

Index: net/if_bridge.c
===
RCS file: /cvs/src/sys/net/if_bridge.c,v
retrieving revision 1.181
diff -u -p -r1.181 if_bridge.c
--- net/if_bridge.c 2 Jul 2010 02:40:16 -0000   1.181
+++ net/if_bridge.c 3 Jul 2010 17:22:52 -0000
@@ -152,7 +152,8 @@ u_int8_t bridge_filterrule(struct brl_he
 struct mbuf *bridge_filter(struct bridge_softc *, int, struct ifnet *,
 struct ether_header *, struct mbuf *m);
 #endif
-int	bridge_ifenqueue(struct bridge_softc *, struct ifnet *, struct mbuf *);
+int	bridge_ifenqueue(struct bridge_softc *, struct ifnet *, struct mbuf *,
+	    struct ether_header *);
 void   bridge_fragment(struct bridge_softc *, struct ifnet *,
 struct ether_header *, struct mbuf *);
 #ifdef INET
@@ -1143,7 +1144,7 @@ bridge_output(struct ifnet *ifp, struct 
mc = m1;
}
 
-   error = bridge_ifenqueue(sc, dst_if, mc);
+   error = bridge_ifenqueue(sc, dst_if, mc, eh);
if (error)
continue;
}
@@ -1160,7 +1161,7 @@ sendunicast:
splx(s);
return (ENETDOWN);
}
-   bridge_ifenqueue(sc, dst_if, m);
+   bridge_ifenqueue(sc, dst_if, m, eh);
splx(s);
return (0);
 }
@@ -1372,7 +1373,7 @@ bridgeintr_frame(struct bridge_softc *sc
bridge_fragment(sc, dst_if, eh, m);
else {
s = splnet();
-   bridge_ifenqueue(sc, dst_if, m);
+   bridge_ifenqueue(sc, dst_if, m, eh);
splx(s);
}
 }
@@ -1665,7 +1666,7 @@ bridge_broadcast(struct bridge_softc *sc
if ((len - ETHER_HDR_LEN) > dst_if->if_mtu)
bridge_fragment(sc, dst_if, eh, mc);
else {
-   bridge_ifenqueue(sc, dst_if, mc);
+   bridge_ifenqueue(sc, dst_if, mc, eh);
}
}
 
@@ -1757,7 +1758,7 @@ bridge_span(struct bridge_softc *sc, str
continue;
}
 
-   error = bridge_ifenqueue(sc, ifp, mc);
+   error = bridge_ifenqueue(sc, ifp, mc, eh);
if (error)
continue;
}
@@ -2402,7 +2403,7 @@ bridge_ipsec(struct bridge_softc *sc, st
 
s = spltdb();
 
-   tdb = gettdb(spi, dst, proto);
+   tdb = gettdb(ifp->if_rdomain, spi, dst, proto);
if (tdb != NULL && (tdb->tdb_flags & TDBF_INVALID) == 0 &&
tdb->tdb_xform != NULL) {
if (tdb->tdb_first_use == 0) {
@@ -2457,7 +2458,7 @@ bridge_ipsec(struct bridge_softc *sc, st
switch (af) {
 #ifdef INET
case AF_INET:
-   if ((encif = enc_getif(0,
+   if ((encif = enc_getif(tdb->tdb_rdomain,
tdb->tdb_tap)) == NULL ||
pf_test(dir, encif,
m, NULL) != PF_PASS) {
@@ -2468,7 +2469,7 @@ bridge_ipsec(struct bridge_softc *sc, st
 #endif /* INET */
 #ifdef INET6
case AF_INET6:
-   if ((encif = enc_getif(0,
+   if ((encif = enc_getif(tdb->tdb_rdomain,
tdb->tdb_tap)) == NULL ||
pf_test6(dir, encif,
m, NULL) != PF_PASS) {
@@ -2720,7 +2721,7 @@ bridge_fragment(struct bridge_softc *sc,
if ((ifp->if_capabilities & IFCAP_VLAN_MTU) &&
(len - sizeof(struct ether_vlan_header) <= ifp->if_mtu)) {
s = splnet();
-   bridge_ifenqueue(sc, ifp, m);
+   bridge_ifenqueue(sc, ifp, m, eh);
splx(s);
return;
}
@@ -2790,7 +2791,7 @@ bridge_fragment(struct bridge_softc *sc,
}
bcopy(eh, mtod(m, caddr_t), sizeof(*eh));
s = splnet();
-   error = bridge_ifenqueue(sc, ifp, m);
+   error = bridge_ifenqueue(sc, ifp, m, eh);
if (error) {
splx(s);
   

Re: socket buffers

2010-07-03 Thread Rod Whitworth
On Sat, 3 Jul 2010 17:46:22 +0100, Stuart Henderson wrote:

> there is some pretty serious hardware behind it...
> http://mirror.aarnet.edu.au/indexabout.html

Those guys have some serious uses for that equipment in addition to
being a great source of ftp mirrors.

They are ready (or very close to it) to handle data from the Australian and
New Zealand SKA sites (Square Kilometre Array, http://www.skatelescope.org/).
The data is measured in terabytes per day.

BTW their OpenBSD mirror currently has 4.7 pkgs for some archs (I did
not check all but amd64 is there) but not i386. Weird.
*** NOTE *** Please DO NOT CC me. I am subscribed to the list.
Mail to the sender address that does not originate at the list server is 
tarpitted. The reply-to: address is provided for those who feel compelled to 
reply off list. Thank you.

Rod/
---
This life is not the real thing.
It is not even in Beta.
If it was, then OpenBSD would already have a man page for it.