Hi tech@
I see there have been quite a few changes to iwm, but unfortunately on
the latest snapshot (July 26th) the performance is abysmal: what used to
be decently fast now runs at between 0.5 and 2 kB/s. Ping rates go from
10ms to 1000ms and back again over a few packets, with some packets lost.
On Fri, Jul 29, 2016 at 08:07:14PM -0400, Ted Unangst wrote:
> There's a sched_yield() in taskq_thread(). Something's not quite right.
It is a sched_pause() in taskq_thread().
If I replace my yield() with sched_pause(), the userland hangs
during splicing. Looks like SPCF_SHOULDYIELD is not set.
Alexander Bluhm wrote:
> On Fri, Jul 29, 2016 at 06:46:52PM -0400, Ted Unangst wrote:
> > Alexander Bluhm wrote:
> > > + /* Avoid user land starvation. */
> > > + yield();
> >
> > you don't need to yield here, the task framework should do that for you.
>
> Perhaps the framework should do that, bu
Use the style from the man page examples for getaddrinfo, which makes a
bit more sense.
No functional change intended, and prior to the do/while => for
transition, no .o files were harmed.
OK?
/Alexander
Index: netcat.c
===
RCS fi
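For reference, the for-loop style from the getaddrinfo(3) man page examples looks roughly like this. This is a self-contained sketch with a hard-coded numeric address, not the actual netcat.c diff:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct addrinfo hints, *res, *res0;
	int error, s, found = 0;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;
	hints.ai_socktype = SOCK_STREAM;
	hints.ai_flags = AI_NUMERICHOST;	/* no DNS lookup needed */
	error = getaddrinfo("127.0.0.1", "80", &hints, &res0);
	if (error) {
		fprintf(stderr, "%s\n", gai_strerror(error));
		return 1;
	}
	/* Walk the result list with a for loop, man-page style. */
	for (res = res0; res; res = res->ai_next) {
		s = socket(res->ai_family, res->ai_socktype,
		    res->ai_protocol);
		if (s == -1)
			continue;
		found = 1;
		close(s);
	}
	freeaddrinfo(res0);
	if (found)
		printf("socket created\n");
	else
		printf("no usable address\n");
	return 0;
}
```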
On Fri, Jul 29, 2016 at 06:46:52PM -0400, Ted Unangst wrote:
> Alexander Bluhm wrote:
> > + /* Avoid user land starvation. */
> > + yield();
>
> you don't need to yield here, the task framework should do that for you.
Perhaps the framework should do that, but it does not. When I run
my splic
Mark Kettenis wrote:
> > From: "Ted Unangst"
> > Date: Fri, 29 Jul 2016 18:38:20 -0400
> >
> > I'm a little confused about the following.
> >
> > > @@ -520,7 +522,7 @@ uaddr_lin_select(struct vm_map *map, str
> > > /* Deal with guardpages: search for space with one extra page. */
> > > guard
> From: "Ted Unangst"
> Date: Fri, 29 Jul 2016 18:38:20 -0400
>
> I'm a little confused about the following.
>
> > @@ -520,7 +522,7 @@ uaddr_lin_select(struct vm_map *map, str
> > /* Deal with guardpages: search for space with one extra page. */
> > guard_sz = ((map->flags & VM_MAP_GUARD
Alexander Bluhm wrote:
> Hi,
>
> Spliced TCP sockets become faster if we put the output part into
> its own task thread. This is inspired by userland copy where we
> also have to go through the scheduler. This gives the socket buffer
> a chance to be filled up and tcp_output() is called less oft
On Fri, Jul 29, 2016 at 3:31 PM, Mark Kettenis wrote:
>> Date: Thu, 28 Jul 2016 09:47:42 +0200
>> From: Patrick Wildt
>>
>> There is something I missed in the previous diff. When the PTE is not
>> valid, the mapping behind the virtual address of course isn't valid.
>> A flush to an unmapped page
Mark Kettenis wrote:
> The diff below fixes a couple of potential integer overflows in the
> uvm address selection code. Most of these are in code that is
> disabled, such as uaddr_lin_select and the stuff dealing with guard
> pages (guard_sz/guardsz is currently always 0). But I think the
> over
> Date: Thu, 28 Jul 2016 09:47:42 +0200
> From: Patrick Wildt
>
> There is something I missed in the previous diff. When the PTE is not
> valid, the mapping behind the virtual address of course isn't valid.
> A flush to an unmapped page will give us a translation fault. So only
> flush if the p
Hi,
Spliced TCP sockets become faster if we put the output part into
its own task thread. This is inspired by userland copy where we
also have to go through the scheduler. This gives the socket buffer
a chance to be filled up and tcp_output() is called less often and
with bigger chunks.
ok?
bl
The diff below fixes a couple of potential integer overflows in the
uvm address selection code. Most of these are in code that is
disabled, such as uaddr_lin_select and the stuff dealing with guard
pages (guard_sz/guardsz is currently always 0). But I think the
overflow in uvm_addr_fitspace() and
On Wed, 20 Jul 2016 12:36:45 +0200
Vincent Gross wrote:
> This is a completely mechanical diff to get rid of the 7-params
> madness in in6_selectsrc().
>
> I also apply the same treatment to in_selectsrc() for consistency.
>
> Ok?
... and of course I forgot to initialize a variable and broke a
On Thu, Jul 28, 2016 at 10:48:55PM +0200, Remi Locherer wrote:
> The resolver supports more than 3 nameservers.
>
fixed, thanks.
jmc
> Index: resolv.conf.5
> ===
> RCS file: /cvs/src/share/man/man5/resolv.conf.5,v
> retrieving revis
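As an illustration of a resolv.conf listing more than three nameserver lines (the addresses here are documentation prefixes, not real servers):

```
nameserver 192.0.2.1
nameserver 192.0.2.2
nameserver 192.0.2.3
nameserver 2001:db8::1
```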
On Wed, 27 Jul 2016 23:57:48 -0400, "Ted Unangst" wrote:
> Amazing this hasn't resulted in bugs. The prototype for all the update
> functions takes a size_t argument, not an unsigned int.
OK millert@
- todd
Hi,
I verified that regression test
src/regress/lib/libssl/unit/tls_ext_alpn.c fails on these cases:
- proto_invalid_len5, 7, 8
- proto_invalid_missing1 - 5
- proto_invalid_missing8, 9
To correct these failures, ssl_parse_clienthello_tlsext() and
ssl_parse_serverhello_tlsext() in ssl/t1_lib.c