Re: multi-pool malloc wip diff
On Wed, May 11, 2016 at 10:04:48AM +0200, Otto Moerbeek wrote:

> On Fri, Apr 29, 2016 at 08:42:00AM +0200, Otto Moerbeek wrote:
>
> > Hi,
> >
> > new diff in http://www.drijf.net/openbsd/malloc/
> >
> > Should fix the issue Ted spotted and contains initial code to only set
> > up multiple pools if threaded. This one is only lightly tested by me,
> > but I wanted to post this before I'll be away for a semi-long weekend.
> >
> > I don't think this is ready for wide testing, but of course I
> > encourage playing, testing, reviewing or printing out and using as
> > wallpaper ;-)
> >
> > -Otto
>
> And I just published a new diff at http://www.drijf.net/openbsd/malloc
>
> Changes:
>
> - diff against current. Due to the TIB work quite some things changed.
>   The most important change is that I now have direct access to the
>   thread id. Since this is a random value, I can use the lower bits
>   directly to compute the "home" pool for a thread.
>
> - Realized wrterror() has for some years been unconditionally a
>   dead(-end) function. I'm using this in the multi-thread related
>   parts, but it could probably be used in other places as well.
>
> Please be very sure you are running current packages, otherwise you
> are not testing what you think you are testing.
>
> -Otto

New diff posted at http://www.drijf.net/openbsd/malloc. The intention is
that this is close to committable. But while it is a straight merge of the
most recent diff with the current src, a bug might have crept in. So
beware etc.

-Otto
Re: bigger mbuf clusters for sosend()
> On 22 Aug 2016, at 03:36, Hrvoje Popovski wrote:
>
> On 13.8.2016. 10:59, Claudio Jeker wrote:
>> This diff refactors the uio to mbuf code to make use of bigger buffers
>> (up to 64k) and also switches the MCLGET to use M_WAIT like the MGET
>> calls in the same function. I see no point in not waiting for a cluster
>> and instead chaining lots of mbufs together as a consequence.
>>
>> This makes, in my opinion, the code easier to read and allows for
>> further optimizations (like using non-DMA reachable mbufs for AF_UNIX
>> sockets).
>>
>> This increased the performance of loopback connections significantly
>> when I tested this at n2k16.
>
> Hi,
>
> it seems that this patch speeds up forwarding by about 40kpps, at least
> with my test box and with this setup :)
> Which means that -current can forward full 10Gbps with 1500-byte
> packets :)

that's kind of nuts cos this shouldn't affect the forwarding path at all.
i'm keen for it to go in still though. claudio?

dlg

> pf=NO
> ddb.panic=1
> ddb.console=1
> kern.pool_debug=0
> kern.maxclusters=32768
> net.inet.ip.forwarding=1
> net.inet.ip.ifq.maxlen=8192
>
> sending from 12.0.0.11 to 11.0.0.11
>
> # netstat -rnf inet | grep ix
> 11/8             192.168.11.2       UGS    0 118844402     - 8 ix0
> 12/8             192.168.12.2       UGS    0         0     - 8 ix1
> 192.168.11.0/30  192.168.11.1       UC     1         0     - 4 ix0
> 192.168.11.1     a0:36:9f:2e:96:a0  UHLl   0         1     - 1 ix0
> 192.168.11.2     90:e2:ba:1a:df:85  UHLc   1         2     - 4 ix0
> 192.168.11.3     192.168.11.1       UHb    0         0     - 1 ix0
> 192.168.12.0/30  192.168.12.1       UC     0         0     - 4 ix1
> 192.168.12.1     a0:36:9f:2e:96:a1  UHLl   0         0     - 1 ix1
> 192.168.12.3     192.168.12.1       UHb    0         0     - 1 ix1
>
> without patch
> sending    receiving
> 800Kpps    800kpps
> 840Kpps    840kpps
> 850Kpps    770kpps
> 1.4Mpps    690kpps
> 14Mpps     685kpps
>
> with patch
> sending    receiving
> 800kpps    800kpps
> 880kpps    880kpps
> 890kpps    790kpps
> 1.4Mpps    700kpps
> 14Mpps     700kpps
>
> # netstat -i 1
>             em0 in           em0 out          total in         total out
>    packets  errs   packets  errs  colls   packets  errs   packets  errs  colls
>          2     0         2     0      0    888799     0    888779     0      0
>          1     0         1     0      0    889240     0    889124     0      0
>          1     0         1     0      0    889296     0    889407     0      0
>          1     0         1     0      0    888941     0    888932     0      0
>          1     0         1     0      0    889268     0    889291     0      0
>          1     0         1     0      0    889095     0    889200     0      0
>
> OpenBSD 6.0-current (GENERIC.MP) #0: Sun Aug 21 18:57:24 CEST 2016
> r...@x3550m4.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> RTC BIOS diagnostic error 80
> real mem = 34315051008 (32725MB)
> avail mem = 33270591488 (31729MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 2.7 @ 0x7e67c000 (84 entries)
> bios0: vendor IBM version "-[D7E146CUS-1.82]-" date 04/09/2015
> bios0: IBM IBM System x3550 M4 Server -[7914T91]-
> acpi0 at bios0: rev 2
> acpi0: sleep states S0 S5
> acpi0: tables DSDT FACP TCPA ERST HEST HPET APIC MCFG OEM0 OEM1 SLIT
> SRAT SLIC SSDT SSDT SSDT SSDT DMAR
> acpi0: wakeup devices MRP1(S4) DCC0(S4) MRP3(S4) MRP5(S4) EHC2(S5)
> PEX0(S5) PEX7(S5) EHC1(S5) IP2P(S3) MRPB(S4) MRPC(S4) MRPD(S4) MRPF(S4)
> MRPG(S4) MRPH(S4) MRPI(S4) [...]
> acpitimer0 at acpi0: 3579545 Hz, 24 bits
> acpihpet0 at acpi0: 14318179 Hz
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 0 (boot processor)
> cpu0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2400.35 MHz
> cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,SENSOR,ARAT
> cpu0: 256KB 64b/line 8-way L2 cache
> cpu0: smt 0, core 0, package 0
> mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
> cpu0: apic clock running at 100MHz
> cpu0: mwait min=64, max=64, C-substates=0.2.1.1, IBE
> cpu1 at mainbus0: apid 2 (application processor)
> cpu1: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2400.00 MHz
> cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,E
Re: add option for disabling TLS session tickets to libtls
On Sun, Aug 21, 2016 at 02:25:15PM -0400, Ted Unangst wrote:
> Andreas Bartelt wrote:
> > Since the use of TLS session tickets potentially interferes with forward
> > secrecy on a per-session basis, I'd personally prefer an opt-in in
> > libtls as well as in httpd with regard to its usage. However, such a
> > semantic change would not be transparent. Any opinions on this?
>
> Defaulting to off makes sense to me. It's the marginally safer option and at
> small scale probably not a performance concern. But if the default results in
> 900 "tutorials" telling people to turn it back on because web scale, then all
> we've done is make things difficult.

While I agree, it is important to turn them on for HTTP servers or any
other protocol that does a lot of reconnects. This should also include
the magic to make them work across multiple processes (see my relayd
diff for that -- which uses the libssl callback madness though). Without
tickets the full TLS handshake will be made for every reconnect, which
is a common mode of operation for HTTP.

Also I think tickets are a bit safer than the session cache (which AFAIK
is also on by default for servers) and probably the fallback mode.
Client-side tickets should be enabled since they are just passed along
to the next connect without processing them.

> > As kind of a first step, the attached diff adds a function to libtls
> > which allows to (optionally) disable the use of TLS session tickets.
>
> Can you please add an option to enable tickets? That makes it easier to write
> software that works with either default.

-- 
:wq Claudio
Re: add option for disabling TLS session tickets to libtls
On 08/21/16 20:25, Ted Unangst wrote:
> Andreas Bartelt wrote:
>> Since the use of TLS session tickets potentially interferes with forward
>> secrecy on a per-session basis, I'd personally prefer an opt-in in
>> libtls as well as in httpd with regard to its usage. However, such a
>> semantic change would not be transparent. Any opinions on this?
>
> Defaulting to off makes sense to me. It's the marginally safer option and
> at small scale probably not a performance concern. But if the default
> results in 900 "tutorials" telling people to turn it back on because web
> scale, then all we've done is make things difficult.

I'm not so sure that disabling session tickets is only marginally safer.
Please correct me if the following analysis is wrong.

With session tickets disabled:
- in case forward secrecy is not enabled and the attacker somehow obtains
  the server's private key -> the attacker can decrypt past, present and
  future TLS traffic to this server
- in case forward secrecy is enabled and the attacker somehow obtains the
  server's private key -> the attacker can conduct active MITM attacks on
  present and future TLS traffic to this server. However, passive MITM
  attacks won't succeed.

With session tickets enabled:
- in case the attacker somehow obtains the secret key which is used by
  the server to encrypt all of its session tickets -> the attacker can
  conduct passive MITM attacks on all TLS traffic to this server within
  the scope (i.e., lifetime) of the obtained secret key. This is because
  TLS clients send their session tickets back to the server during
  session resumption, which enables a relatively straightforward way of
  snooping them on the wire. Decrypted session tickets might also enable
  active interference with their corresponding TLS sessions (e.g., the
  attacker could actively resume them).

In my opinion, the security of this TLS extension strongly depends on the
assumptions about the attacker's capabilities and on the absence of other
vulnerabilities (e.g., some kind of key leakage similar to heartbleed?).
That being said, I still think that this TLS extension can be deployed
with reasonable security. However, it doesn't look to me like a
conservative ``default'' configuration.

>> As kind of a first step, the attached diff adds a function to libtls
>> which allows to (optionally) disable the use of TLS session tickets.
>
> Can you please add an option to enable tickets? That makes it easier to
> write software that works with either default.

A diff, which also disables session tickets by default in libtls, is
attached.

Index: src/lib/libtls/tls.h
===
RCS file: /cvs/src/lib/libtls/tls.h,v
retrieving revision 1.33
diff -u -p -u -r1.33 tls.h
--- src/lib/libtls/tls.h	12 Aug 2016 15:10:59 -0000	1.33
+++ src/lib/libtls/tls.h	22 Aug 2016 03:59:02 -0000
@@ -41,6 +41,9 @@ extern "C" {
 #define TLS_WANT_POLLIN		-2
 #define TLS_WANT_POLLOUT	-3
 
+#define TLS_SESSION_TICKETS_DISABLE	0
+#define TLS_SESSION_TICKETS_ENABLE	1
+
 struct tls;
 struct tls_config;
 
@@ -73,6 +76,9 @@ int tls_config_set_keypair_mem(struct tl
     size_t _cert_len, const uint8_t *_key, size_t _key_len);
 void tls_config_set_protocols(struct tls_config *_config, uint32_t _protocols);
 void tls_config_set_verify_depth(struct tls_config *_config, int _verify_depth);
+
+void tls_config_enable_session_tickets(struct tls_config *_config);
+void tls_config_disable_session_tickets(struct tls_config *_config);
 
 void tls_config_prefer_ciphers_client(struct tls_config *_config);
 void tls_config_prefer_ciphers_server(struct tls_config *_config);

Index: src/lib/libtls/tls_config.c
===
RCS file: /cvs/src/lib/libtls/tls_config.c,v
retrieving revision 1.27
diff -u -p -u -r1.27 tls_config.c
--- src/lib/libtls/tls_config.c	13 Aug 2016 13:15:53 -0000	1.27
+++ src/lib/libtls/tls_config.c	22 Aug 2016 03:59:02 -0000
@@ -193,6 +193,8 @@ tls_config_new(void)
 	tls_config_set_protocols(config, TLS_PROTOCOLS_DEFAULT);
 	tls_config_set_verify_depth(config, 6);
 
+	tls_config_disable_session_tickets(config);
+
 	tls_config_prefer_ciphers_server(config);
 
 	tls_config_verify(config);
@@ -524,6 +526,18 @@ void
 tls_config_set_verify_depth(struct tls_config *config, int verify_depth)
 {
 	config->verify_depth = verify_depth;
+}
+
+void
+tls_config_enable_session_tickets(struct tls_config *config)
+{
+	config->session_tickets = TLS_SESSION_TICKETS_ENABLE;
+}
+
+void
+tls_config_disable_session_tickets(struct tls_config *config)
+{
+	config->session_tickets = TLS_SESSION_TICKETS_DISABLE;
 }
 
 void

Index: src/lib/libtls/tls_init.3
===
RCS file: /cvs/src/lib/libtls/tls_init.3,v
retrieving revision 1.66
diff -u -p -u -r1.66 tls_init.3
--- src/lib/libtls/tls_init.3	18 Aug 2016 15:
Re: ftp: avoid casts in networking functions
On Sun, 21 Aug 2016 14:23:33 -0600, "Todd C. Miller" wrote:

> We could get rid of struct sockinet entirely but I wasn't sure it
> was worth it.

Here's a diff that replaces sockunion with the more common
sockaddr_union.

 - todd

Index: usr.bin/ftp/ftp.c
===
RCS file: /cvs/src/usr.bin/ftp/ftp.c,v
retrieving revision 1.99
diff -u -p -u -r1.99 ftp.c
--- usr.bin/ftp/ftp.c	20 Aug 2016 20:18:42 -0000	1.99
+++ usr.bin/ftp/ftp.c	22 Aug 2016 01:17:13 -0000
@@ -83,20 +83,13 @@
 
 #include "ftp_var.h"
 
-union sockunion {
-	struct sockinet {
-		u_char si_len;
-		u_char si_family;
-		u_short si_port;
-	} su_si;
-	struct sockaddr_in su_sin;
-	struct sockaddr_in6 su_sin6;
+union sockaddr_union {
+	struct sockaddr sa;
+	struct sockaddr_in sin;
+	struct sockaddr_in6 sin6;
 };
-#define su_len		su_si.si_len
-#define su_family	su_si.si_family
-#define su_port		su_si.si_port
 
-union sockunion myctladdr, hisctladdr, data_addr;
+union sockaddr_union myctladdr, hisctladdr, data_addr;
 
 int	data = -1;
 int	abrtflag = 0;
@@ -259,12 +252,12 @@ hookup(char *host, char *port)
 		ares = NULL;
 	}
 #endif /* !SMALL */
-	if (getsockname(s, (struct sockaddr *)&myctladdr, &namelen) < 0) {
+	if (getsockname(s, &myctladdr.sa, &namelen) < 0) {
 		warn("getsockname");
 		code = -1;
 		goto bad;
 	}
-	if (hisctladdr.su_family == AF_INET) {
+	if (hisctladdr.sa.sa_family == AF_INET) {
 		tos = IPTOS_LOWDELAY;
 		if (setsockopt(s, IPPROTO_IP, IP_TOS, (char *)&tos, sizeof(int)) < 0)
 			warn("setsockopt TOS (ignored)");
@@ -1280,9 +1273,9 @@ initconn(void)
 	struct addrinfo *ares;
 #endif
 
-	if (myctladdr.su_family == AF_INET6
-	    && (IN6_IS_ADDR_LINKLOCAL(&myctladdr.su_sin6.sin6_addr)
-	    || IN6_IS_ADDR_SITELOCAL(&myctladdr.su_sin6.sin6_addr))) {
+	if (myctladdr.sa.sa_family == AF_INET6
+	    && (IN6_IS_ADDR_LINKLOCAL(&myctladdr.sin6.sin6_addr)
+	    || IN6_IS_ADDR_SITELOCAL(&myctladdr.sin6.sin6_addr))) {
 		warnx("use of scoped address can be troublesome");
 	}
 #ifndef SMALL
@@ -1306,7 +1299,7 @@ initconn(void)
 reinit:
 	if (passivemode) {
 		data_addr = myctladdr;
-		data = socket(data_addr.su_family, SOCK_STREAM, 0);
+		data = socket(data_addr.sa.sa_family, SOCK_STREAM, 0);
 		if (data < 0) {
 			warn("socket");
 			return (1);
@@ -1324,7 +1317,7 @@ reinit:
 		    sizeof(on)) < 0)
 			warn("setsockopt (ignored)");
 #endif /* !SMALL */
-		switch (data_addr.su_family) {
+		switch (data_addr.sa.sa_family) {
 		case AF_INET:
 			if (epsv4 && !epsv4bad) {
 				int ov;
@@ -1397,7 +1390,7 @@ reinit:
 		if (!pasvcmd)
 			goto bad;
 		if (strcmp(pasvcmd, "PASV") == 0) {
-			if (data_addr.su_family != AF_INET) {
+			if (data_addr.sa.sa_family != AF_INET) {
 				fputs(
 "Passive mode AF mismatch. Shouldn't happen!\n", ttyout);
 				goto bad;
@@ -1416,18 +1409,18 @@ reinit:
 				goto bad;
 			}
 			memset(&data_addr, 0, sizeof(data_addr));
-			data_addr.su_family = AF_INET;
-			data_addr.su_len = sizeof(struct sockaddr_in);
-			data_addr.su_sin.sin_addr.s_addr =
+			data_addr.sin.sin_family = AF_INET;
+			data_addr.sin.sin_len = sizeof(struct sockaddr_in);
+			data_addr.sin.sin_addr.s_addr =
 			    htonl(pack4(addr, 0));
-			data_addr.su_port = htons(pack2(port, 0));
+			data_addr.sin.sin_port = htons(pack2(port, 0));
 		} else if (strcmp(pasvcmd, "LPSV") == 0) {
 			if (code / 10 == 22 && code != 228) {
 				fputs("wrong server: return code must be 228\n",
 				    ttyout);
 				goto bad;
 			}
-			switch (data_addr.su_family) {
+			switch (data_addr.sa.sa_family) {
 			case AF_INET:
 				error = sscanf(pasv,
 				    "%u,%u,%u,%u,%u,%u,%u,%u,%u",
@@ -1447,11 +1440,11 @@ reinit:
 			}
 			memset(&data_addr, 0, sizeof(data_addr));
-			data_addr.su_family = AF_INET;
-
Re: ld.so initarray support
On Fri, 19 Aug 2016, Mark Kettenis wrote:
> > From: Philip Guenther
> > Date: Thu, 18 Aug 2016 21:09:10 -0700
> >
> > On Thursday, August 18, 2016, Mark Kettenis wrote:
> > ...
> > > > > There is a functional change here. Our current ld.so doesn't run
> > > > > DT_INIT and DT_FINI for the main executable. The ELF standard is
> > > > > a bit ambiguous about this, but Linux does run those for the
> > > > > main executable. And Solaris allegedly does as well. So my diff
> > > > > changes that.
> > > >
> > > > ld.so doesn't run them because __start() in csu does! Note that
> > > > csu needs to run them for static executables, and we use the
> > > > same crt0.o for both static and dynamic executables. I think
> > > > you're double executing them with this.
> > >
> > > We're not double executing because we don't create a DT_INIT entry
> > > for them.
> >
> > Hmm, is that a bug? Static and dynamic should ideally behave the same
> > for all these, no?
>
> Ah, perhaps I wasn't clear. We don't create DT_INIT for both static
> and dynamic executables.

Hmm, I'm trying to decide if that's a bug or not.

> You raise an interesting question though. Traditional static
> executables cannot have DT_INIT since they don't have a .dynamic
> section. But static PIE executables can have DT_INIT. So should our
> self-relocation code attempt to execute it?

To talk mostly at myself...

It's an underdocumented part of the ELF standard how code in .init
sections gets executed, and how that interacts with setting DT_INIT.
The Solaris 11 linker guide says:

    The sections .init and .fini provide a runtime initialization and
    termination code block, respectively. The compiler drivers typically
    supply .init and .fini sections with files they add to the beginning
    and end of your input file list. These compiler provided files have
    the effect of encapsulating the .init and .fini code from your
    relocatable objects into individual functions. These functions are
    identified by the reserved symbol names _init and _fini respectively.
    When creating a dynamic object, the link-editor identifies these
    symbols with the .dynamic tags DT_INIT and DT_FINI accordingly.
    These tags identify the associated sections so they can be called by
    the runtime linker.

We agree with that for shared libraries, but for executables we don't,
presumably because we can't depend on the viability of the DT_INIT hook:
non-PIE static executables don't have a dynamic section at all. So
instead the .init section code ends up in a function __init(): note the
extra underbar. That function is then called from __start().

So I think it's fine for you to change ld.so to execute the DT_INIT
function for executables: it won't normally be set, but if code
explicitly sets it then we'll be fine... as long as they aren't
*depending* on doing that to disable execution of .init section code...
but if someone did that they deserve to lose: if you don't want .init
code, then DON'T INCLUDE IT.

The same may apply to executing DT_INIT functions for static PIE
executables.

...but in the end we still need to be able to support static non-PIE,
which means that at least in some cases _start() has to execute .init
code and we don't have a great way to handle that case differently in
_start(). So for now we need to not handle .init section code in
executables via DT_INIT, which makes calling DT_INIT of executables,
whether dynamic or static PIE, mostly a moot point and subject to
whatever we want to do.

Philip
ftp: avoid casts in networking functions
This adds struct sockaddr to sockunion so we can just use &foo.su_sa
instead of (struct sockaddr *)&foo. No functional difference but it lets
the compiler warn when appropriate.

The diff also syncs the types in struct sockinet to match the first
three fields of struct sockaddr_in. We could get rid of struct sockinet
entirely but I wasn't sure it was worth it.

 - todd

Index: usr.bin/ftp/ftp.c
===
RCS file: /cvs/src/usr.bin/ftp/ftp.c,v
retrieving revision 1.99
diff -u -p -u -r1.99 ftp.c
--- usr.bin/ftp/ftp.c	20 Aug 2016 20:18:42 -0000	1.99
+++ usr.bin/ftp/ftp.c	21 Aug 2016 20:20:32 -0000
@@ -85,10 +85,11 @@
 
 union sockunion {
 	struct sockinet {
-		u_char si_len;
-		u_char si_family;
-		u_short si_port;
+		u_int8_t si_len;
+		sa_family_t si_family;
+		in_port_t si_port;
 	} su_si;
+	struct sockaddr su_sa;
 	struct sockaddr_in su_sin;
 	struct sockaddr_in6 su_sin6;
 };
@@ -259,7 +260,7 @@ hookup(char *host, char *port)
 		ares = NULL;
 	}
 #endif /* !SMALL */
-	if (getsockname(s, (struct sockaddr *)&myctladdr, &namelen) < 0) {
+	if (getsockname(s, &myctladdr.su_sa, &namelen) < 0) {
 		warn("getsockname");
 		code = -1;
 		goto bad;
@@ -1516,8 +1517,8 @@ reinit:
 	} else
 		goto bad;
 
-	for (error = connect(data, (struct sockaddr *)&data_addr,
-	    data_addr.su_len); error != 0 && errno == EINTR;
+	for (error = connect(data, &data_addr.su_sa, data_addr.su_len);
+	    error != 0 && errno == EINTR;
 	    error = connect_wait(data))
 		continue;
 	if (error != 0) {
@@ -1573,7 +1574,7 @@ noport:
 			warn("setsockopt IPV6_PORTRANGE (ignored)");
 		break;
 	}
-	if (bind(data, (struct sockaddr *)&data_addr, data_addr.su_len) < 0) {
+	if (bind(data, &data_addr.su_sa, data_addr.su_len) < 0) {
 		warn("bind");
 		goto bad;
 	}
@@ -1584,7 +1585,7 @@ noport:
 		warn("setsockopt (ignored)");
 #endif /* !SMALL */
 	namelen = sizeof(data_addr);
-	if (getsockname(data, (struct sockaddr *)&data_addr, &namelen) < 0) {
+	if (getsockname(data, &data_addr.su_sa, &namelen) < 0) {
 		warn("getsockname");
 		goto bad;
 	}
@@ -1610,9 +1611,9 @@ noport:
 	if (tmp.su_family == AF_INET6)
 		tmp.su_sin6.sin6_scope_id = 0;
 	af_tmp = (tmp.su_family == AF_INET) ? 1 : 2;
-	if (getnameinfo((struct sockaddr *)&tmp,
-	    tmp.su_len, hname, sizeof(hname),
-	    pbuf, sizeof(pbuf), NI_NUMERICHOST | NI_NUMERICSERV)) {
+	if (getnameinfo(&tmp.su_sa, tmp.su_len, hname, sizeof(hname),
+	    pbuf, sizeof(pbuf),
+	    NI_NUMERICHOST | NI_NUMERICSERV)) {
 		result = ERROR;
 	} else {
 		result = command("EPRT |%d|%s|%s|",
@@ -1694,7 +1695,7 @@ dataconn(const char *lmode)
 	if (passivemode)
 		return (fdopen(data, lmode));
 
-	s = accept(data, (struct sockaddr *) &from, &fromlen);
+	s = accept(data, &from.su_sa, &fromlen);
 	if (s < 0) {
 		warn("accept");
 		(void)close(data), data = -1;
Re: add option for disabling TLS session tickets to libtls
Andreas Bartelt wrote:
> Since the use of TLS session tickets potentially interferes with forward
> secrecy on a per-session basis, I'd personally prefer an opt-in in
> libtls as well as in httpd with regard to its usage. However, such a
> semantic change would not be transparent. Any opinions on this?

Defaulting to off makes sense to me. It's the marginally safer option and
at small scale probably not a performance concern. But if the default
results in 900 "tutorials" telling people to turn it back on because web
scale, then all we've done is make things difficult.

> As kind of a first step, the attached diff adds a function to libtls
> which allows to (optionally) disable the use of TLS session tickets.

Can you please add an option to enable tickets? That makes it easier to
write software that works with either default.
Re: bigger mbuf clusters for sosend()
On 13.8.2016. 10:59, Claudio Jeker wrote:
> This diff refactors the uio to mbuf code to make use of bigger buffers
> (up to 64k) and also switches the MCLGET to use M_WAIT like the MGET
> calls in the same function. I see no point in not waiting for a cluster
> and instead chaining lots of mbufs together as a consequence.
>
> This makes, in my opinion, the code easier to read and allows for
> further optimizations (like using non-DMA reachable mbufs for AF_UNIX
> sockets).
>
> This increased the performance of loopback connections significantly
> when I tested this at n2k16.

Hi,

it seems that this patch speeds up forwarding by about 40kpps, at least
with my test box and with this setup :)
Which means that -current can forward full 10Gbps with 1500-byte packets :)

pf=NO
ddb.panic=1
ddb.console=1
kern.pool_debug=0
kern.maxclusters=32768
net.inet.ip.forwarding=1
net.inet.ip.ifq.maxlen=8192

sending from 12.0.0.11 to 11.0.0.11

# netstat -rnf inet | grep ix
11/8             192.168.11.2       UGS    0 118844402     - 8 ix0
12/8             192.168.12.2       UGS    0         0     - 8 ix1
192.168.11.0/30  192.168.11.1       UC     1         0     - 4 ix0
192.168.11.1     a0:36:9f:2e:96:a0  UHLl   0         1     - 1 ix0
192.168.11.2     90:e2:ba:1a:df:85  UHLc   1         2     - 4 ix0
192.168.11.3     192.168.11.1       UHb    0         0     - 1 ix0
192.168.12.0/30  192.168.12.1       UC     0         0     - 4 ix1
192.168.12.1     a0:36:9f:2e:96:a1  UHLl   0         0     - 1 ix1
192.168.12.3     192.168.12.1       UHb    0         0     - 1 ix1

without patch
sending    receiving
800Kpps    800kpps
840Kpps    840kpps
850Kpps    770kpps
1.4Mpps    690kpps
14Mpps     685kpps

with patch
sending    receiving
800kpps    800kpps
880kpps    880kpps
890kpps    790kpps
1.4Mpps    700kpps
14Mpps     700kpps

# netstat -i 1
            em0 in           em0 out          total in         total out
   packets  errs   packets  errs  colls   packets  errs   packets  errs  colls
         2     0         2     0      0    888799     0    888779     0      0
         1     0         1     0      0    889240     0    889124     0      0
         1     0         1     0      0    889296     0    889407     0      0
         1     0         1     0      0    888941     0    888932     0      0
         1     0         1     0      0    889268     0    889291     0      0
         1     0         1     0      0    889095     0    889200     0      0

OpenBSD 6.0-current (GENERIC.MP) #0: Sun Aug 21 18:57:24 CEST 2016
r...@x3550m4.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
RTC BIOS diagnostic error 80
real mem = 34315051008 (32725MB)
avail mem = 33270591488 (31729MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0x7e67c000 (84 entries)
bios0: vendor IBM version "-[D7E146CUS-1.82]-" date 04/09/2015
bios0: IBM IBM System x3550 M4 Server -[7914T91]-
acpi0 at bios0: rev 2
acpi0: sleep states S0 S5
acpi0: tables DSDT FACP TCPA ERST HEST HPET APIC MCFG OEM0 OEM1 SLIT
SRAT SLIC SSDT SSDT SSDT SSDT DMAR
acpi0: wakeup devices MRP1(S4) DCC0(S4) MRP3(S4) MRP5(S4) EHC2(S5)
PEX0(S5) PEX7(S5) EHC1(S5) IP2P(S3) MRPB(S4) MRPC(S4) MRPD(S4) MRPF(S4)
MRPG(S4) MRPH(S4) MRPI(S4) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpihpet0 at acpi0: 14318179 Hz
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2400.35 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,SENSOR,ARAT
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
cpu0: apic clock running at 100MHz
cpu0: mwait min=64, max=64, C-substates=0.2.1.1, IBE
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2400.00 MHz
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,SENSOR,ARAT
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2400.00 MHz
cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,
Re: fgetwln(3) fails to report most encoding errors
Ingo Schwarze writes:
> Hi,
>
> did i mention already that libc wide character code is buggy as hell?
> I looked at another very simple function of only 30 lines of code
> and promptly found another bug.
>
> The fgetwln(3) manual is quite explicit that the "fgetwln() function
> may also fail ... for any of the errors specified for ... mbrtowc(3)"
> and that it must return NULL in case of failure. That's sensible;
> we shouldn't expect programmers to inspect ferror(3) or errno(2)
> after getting a function return value indicating success.
>
> However, after reading a single valid character, fgetwln(3) will
> mistreat all subsequent encoding errors as newlines - returning
> success when encountering an invalid encoding, but still setting
> both errno(3) and the stdio error indicator.
>
> OK to commit the following patch?

Makes sense to me.

> Note that it will make programs using fgetwln(3), in particular in
> ports, error out more frequently, and no longer permit reading of
> streams containing encoding errors with this function. But trying
> to do so wasn't reliable in the past anyway, because encoding errors
> right after newlines already caused the function to error out.
>
> Also note that FreeBSD and NetBSD contain the same bug.
> Actually, i found the bug because i played with FreeBSD rev(1) code
> on OpenBSD and was surprised by its absurd behaviour when fed input
> containing encoding errors.
>
> Yours,
>   Ingo
>
>
> Index: stdio/fgetwln.c
> ===
> RCS file: /cvs/src/lib/libc/stdio/fgetwln.c,v
> retrieving revision 1.1
> diff -p -U11 -r1.1 fgetwln.c
> --- stdio/fgetwln.c	12 Jan 2015 20:58:07 -0000	1.1
> +++ stdio/fgetwln.c	21 Aug 2016 14:00:32 -0000
> @@ -59,23 +59,23 @@ fgetwln(FILE * __restrict fp, size_t *le
>  
>  	len = 0;
>  	while ((wc = __fgetwc_unlock(fp)) != WEOF) {
> #define	GROW	512
>  		if (len >= fp->_lb._size / sizeof(wchar_t) &&
>  		    __slbexpand(fp, len + GROW))
>  			goto error;
>  		*((wchar_t *)fp->_lb._base + len++) = wc;
>  		if (wc == L'\n')
>  			break;
>  	}
> -	if (len == 0)
> +	if (len == 0 || fp->_flags & __SERR)
>  		goto error;
>  
>  	FUNLOCKFILE(fp);
>  	*lenp = len;
>  	return (wchar_t *)fp->_lb._base;
>  
>  error:
>  	FUNLOCKFILE(fp);
>  	*lenp = 0;
>  	return NULL;
>  }

-- 
jca | PGP : 0x1524E7EE / 5135 92C1 AD36 5293 2BDF DDCC 0DFA 74AE 1524 E7EE
add option for disabling TLS session tickets to libtls
Hello,

LibreSSL enables the use of the TLS session ticket extension [RFC 5077,
or, according to comments in the source code, its older version RFC 4507]
by default, and libtls currently doesn't provide an API call for
disabling this feature. Consequently, OpenBSD's httpd has TLS session
tickets enabled by default and doesn't provide an option to turn this TLS
extension off. Moreover, there's currently no way to provide a specific
policy with regard to the use of TLS session tickets (e.g., the lifetime
of the corresponding secret key which is used for encrypting all session
tickets, the encryption scheme for session tickets, etc.).

Since the use of TLS session tickets potentially interferes with forward
secrecy on a per-session basis, I'd personally prefer an opt-in in libtls
as well as in httpd with regard to its usage. However, such a semantic
change would not be transparent. Any opinions on this?

As kind of a first step, the attached diff adds a function to libtls
which allows to (optionally) disable the use of TLS session tickets.
Best regards
Andreas

Index: src/lib/libtls/tls.h
===
RCS file: /cvs/src/lib/libtls/tls.h,v
retrieving revision 1.33
diff -u -p -u -r1.33 tls.h
--- src/lib/libtls/tls.h	12 Aug 2016 15:10:59 -0000	1.33
+++ src/lib/libtls/tls.h	21 Aug 2016 15:08:32 -0000
@@ -41,6 +41,9 @@ extern "C" {
 #define TLS_WANT_POLLIN		-2
 #define TLS_WANT_POLLOUT	-3
 
+#define TLS_SESSION_TICKETS_DISABLE	0
+#define TLS_SESSION_TICKETS_ENABLE	1
+
 struct tls;
 struct tls_config;
 
@@ -73,6 +76,8 @@ int tls_config_set_keypair_mem(struct tl
     size_t _cert_len, const uint8_t *_key, size_t _key_len);
 void tls_config_set_protocols(struct tls_config *_config, uint32_t _protocols);
 void tls_config_set_verify_depth(struct tls_config *_config, int _verify_depth);
+
+void tls_config_disable_session_tickets(struct tls_config *_config);
 
 void tls_config_prefer_ciphers_client(struct tls_config *_config);
 void tls_config_prefer_ciphers_server(struct tls_config *_config);

Index: src/lib/libtls/tls_config.c
===
RCS file: /cvs/src/lib/libtls/tls_config.c,v
retrieving revision 1.27
diff -u -p -u -r1.27 tls_config.c
--- src/lib/libtls/tls_config.c	13 Aug 2016 13:15:53 -0000	1.27
+++ src/lib/libtls/tls_config.c	21 Aug 2016 15:08:32 -0000
@@ -193,6 +193,8 @@ tls_config_new(void)
 	tls_config_set_protocols(config, TLS_PROTOCOLS_DEFAULT);
 	tls_config_set_verify_depth(config, 6);
 
+	config->session_tickets = TLS_SESSION_TICKETS_ENABLE;
+
 	tls_config_prefer_ciphers_server(config);
 
 	tls_config_verify(config);
@@ -524,6 +526,12 @@ void
 tls_config_set_verify_depth(struct tls_config *config, int verify_depth)
 {
 	config->verify_depth = verify_depth;
+}
+
+void
+tls_config_disable_session_tickets(struct tls_config *config)
+{
+	config->session_tickets = TLS_SESSION_TICKETS_DISABLE;
 }
 
 void

Index: src/lib/libtls/tls_init.3
===
RCS file: /cvs/src/lib/libtls/tls_init.3,v
retrieving revision 1.66
diff -u -p -u -r1.66 tls_init.3
--- src/lib/libtls/tls_init.3	18 Aug 2016 15:43:12 -0000	1.66
+++ src/lib/libtls/tls_init.3	21 Aug 2016 15:08:32 -0000
@@ -39,6 +39,7 @@
 .Nm tls_config_set_keypair_mem ,
 .Nm tls_config_set_protocols ,
 .Nm tls_config_set_verify_depth ,
+.Nm tls_config_disable_session_tickets ,
 .Nm tls_config_prefer_ciphers_client ,
 .Nm tls_config_prefer_ciphers_server ,
 .Nm tls_config_clear_keys ,
@@ -119,6 +120,8 @@
 .Fn tls_config_set_protocols "struct tls_config *config" "uint32_t protocols"
 .Ft "void"
 .Fn tls_config_set_verify_depth "struct tls_config *config" "int verify_depth"
+.Ft "void"
+.Fn tls_config_disable_session_tickets "struct tls_config *config"
 .Ft "void"
 .Fn tls_config_prefer_ciphers_client "struct tls_config *config"
 .Ft "void"

Index: src/lib/libtls/tls_internal.h
===
RCS file: /cvs/src/lib/libtls/tls_internal.h,v
retrieving revision 1.39
diff -u -p -u -r1.39 tls_internal.h
--- src/lib/libtls/tls_internal.h	15 Aug 2016 15:44:58 -0000	1.39
+++ src/lib/libtls/tls_internal.h	21 Aug 2016 15:08:32 -0000
@@ -64,6 +64,7 @@ struct tls_config {
 	int ecdhecurve;
 	struct tls_keypair *keypair;
 	uint32_t protocols;
+	int session_tickets;
 	int verify_cert;
 	int verify_client;
 	int verify_depth;

Index: src/lib/libtls/tls_server.c
===
RCS file: /cvs/src/lib/libtls/tls_server.c,v
retrieving revision 1.24
diff -u -p -u -r1.24 tls_server.c
--- src/lib/libtls/tls_server.c	18 Aug 2016 15:52:03 -0000	1.24
+++ src/lib/libtls/tls_server.c	21 Aug 2016 15:08:32 -0000
@@ -113,6 +113,9 @@ tls_configure_server_ssl(struct tls *ctx
 	if (ctx->config->ciphers_server == 1)
 		SSL_CTX_set_options(*ssl_ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);
 
+	if (ctx->config->session_tickets == TLS_SESSION
OFW/FDT clock "framework"
Here is an attempt to handle the clocks on armv7 in a similar way as we do with gpio, pinctrl and regulators.

Currently the API is fairly limited. There is an interface to get the frequency at which a clock is running, and there is an interface to enable a clock. Some devices have multiple clocks. In that case the device tree node for the device is supposed to have a "clock-names" property, and you can use those names to query/enable a particular clock. If there is only one clock, you can pass NULL instead. There are also interfaces with an _idx suffix. Those accept an index to specify a particular clock. These should not be used unless there is no "clock-names" property.

For an example of how to use these APIs, see the changes to com_fdt.c. There no longer is a need to hardcode the frequencies for particular hardware!

The clock framework handles "simple" clocks such as "fixed-clock" and "fixed-factor-clock" all by itself. More complicated clock devices need to register themselves with the framework. The diff contains changes to sxiccmu(4) to add support for a few of the clocks found on the Allwinner A20. Here the clocks are nicely contained under /clocks in the device tree. The code simply walks that part of the tree and registers the clocks it recognizes. We don't attach a full device driver for these. There simply are too many clocks, and having a driver for each would just create a lot of dmesg spam.

I think for sunxi(4) I've found a design that doesn't require an insane amount of code. And once all the sunxi drivers have been converted, some of the existing code will disappear.

Any thoughts on the design? If it looks ok, I'd like to commit the code in dev/ofw/ofw_clock.[ch] and continue to write the sunxi code necessary to support clocks for the serial ports. Once that's done, the com_fdt.c bit can be committed.
Index: dev/ofw/ofw_clock.c
===================================================================
RCS file: dev/ofw/ofw_clock.c
diff -N dev/ofw/ofw_clock.c
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ dev/ofw/ofw_clock.c	21 Aug 2016 14:53:15 -0000
@@ -0,0 +1,212 @@
+/*	$OpenBSD$	*/
+/*
+ * Copyright (c) 2016 Mark Kettenis
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+
+LIST_HEAD(, clock_device) clock_devices =
+	LIST_HEAD_INITIALIZER(clock_devices);
+
+void
+clock_register(struct clock_device *cd)
+{
+	cd->cd_cells = OF_getpropint(cd->cd_node, "#clock-cells", 0);
+	cd->cd_phandle = OF_getpropint(cd->cd_node, "phandle", 0);
+	if (cd->cd_phandle == 0)
+		return;
+
+	LIST_INSERT_HEAD(&clock_devices, cd, cd_list);
+}
+
+uint32_t
+clock_get_frequency_cells(uint32_t *cells)
+{
+	struct clock_device *cd;
+	uint32_t phandle = cells[0];
+	int node;
+
+	LIST_FOREACH(cd, &clock_devices, cd_list) {
+		if (cd->cd_phandle == phandle)
+			break;
+	}
+
+	if (cd && cd->cd_get_frequency)
+		return cd->cd_get_frequency(cd->cd_cookie, &cells[1]);
+
+	node = OF_getnodebyphandle(phandle);
+	if (node == 0)
+		return 0;
+
+	if (OF_is_compatible(node, "fixed-clock"))
+		return OF_getpropint(node, "clock-frequency", 0);
+
+	if (OF_is_compatible(node, "fixed-factor-clock")) {
+		uint32_t mult, div, freq;
+
+		mult = OF_getpropint(node, "clock-mult", 1);
+		div = OF_getpropint(node, "clock-div", 1);
+		freq = clock_get_frequency(node, NULL);
+		return (freq * mult) / div;
+	}
+
+	return 0;
+}
+
+void
+clock_enable_cells(uint32_t *cells)
+{
+	struct clock_device *cd;
+	uint32_t phandle = cells[0];
+
+	LIST_FOREACH(cd, &clock_devices, cd_list) {
+		if (cd->cd_phandle == phandle)
+			break;
+	}
+
+	if (cd && cd->cd_enable)
+		cd->cd_enable(cd->cd_cookie, &cells[1], 1);
+}
+
+uint32_t *
+clock_next_clock(uint32_t *cells)
+{
+	uint32_t phandle = cells[0];
+	int node, ncells;
+
+	node = OF_getnodebyphandle(phandle);
+	if (node == 0)
+		return NULL;
+
+	ncells = OF_getpropint(node, "#clo
fgetwln(3) fails to report most encoding errors
Hi,

did i mention already that libc wide character code is buggy as hell? I looked at another very simple function of only 30 lines of code and promptly found another bug.

The fgetwln(3) manual is quite explicit that the "fgetwln() function may also fail ... for any of the errors specified for ... mbrtowc(3)" and that it must return NULL in case of failure. That's sensible; we shouldn't expect programmers to inspect ferror(3) or errno(2) after getting a function return value indicating success. However, after reading a single valid character, fgetwln(3) will mistreat all subsequent encoding errors as newlines - returning success when encountering an invalid encoding, but still setting both errno(2) and the stdio error indicator.

OK to commit the following patch?

Note that it will make programs using fgetwln(3), in particular in ports, error out more frequently, and no longer permit reading streams containing encoding errors with this function. But trying to do so wasn't reliable in the past anyway, because encoding errors right after newlines already caused the function to error out.

Also note that FreeBSD and NetBSD contain the same bug. Actually, i found the bug because i played with FreeBSD rev(1) code on OpenBSD and was surprised by its absurd behaviour when fed input containing encoding errors.
Yours,
  Ingo

Index: stdio/fgetwln.c
===================================================================
RCS file: /cvs/src/lib/libc/stdio/fgetwln.c,v
retrieving revision 1.1
diff -p -U11 -r1.1 fgetwln.c
--- stdio/fgetwln.c	12 Jan 2015 20:58:07 -0000	1.1
+++ stdio/fgetwln.c	21 Aug 2016 14:00:32 -0000
@@ -59,23 +59,23 @@ fgetwln(FILE * __restrict fp, size_t *le
 	len = 0;
 	while ((wc = __fgetwc_unlock(fp)) != WEOF) {
 #define	GROW	512
 		if (len >= fp->_lb._size / sizeof(wchar_t) &&
 		    __slbexpand(fp, len + GROW))
 			goto error;
 		*((wchar_t *)fp->_lb._base + len++) = wc;
 		if (wc == L'\n')
 			break;
 	}
-	if (len == 0)
+	if (len == 0 || fp->_flags & __SERR)
 		goto error;
 	FUNLOCKFILE(fp);
 	*lenp = len;
 	return (wchar_t *)fp->_lb._base;
 error:
 	FUNLOCKFILE(fp);
 	*lenp = 0;
 	return NULL;
 }