Re: [GSoC 2017] Number of sockets

2017-08-25 Thread Joan Lledó
> Instead of a linked list, how about using a dynamic array?
>
>   https://en.wikipedia.org/wiki/Dynamic_array
>
> This would give you constant-time lookups, amortized constant time
> insertions and deletions, and better data locality and cache behavior.
>
> What do you think?

It's a good idea; on the lwip mailing list someone also proposed using
a hash table. A linked list is a simple solution that works for now,
but another approach would be better.
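To make the trade-off concrete, here is a minimal sketch of the dynamic-array approach under discussion. It is not the translator's actual code; the `struct lwip_sock` stub and all function names are hypothetical, but it shows how geometric growth keeps insertion amortized O(1) while lookup by descriptor stays O(1), unlike the linked list's O(n) scan.

```c
#include <stdlib.h>
#include <string.h>

struct lwip_sock { int in_use; };      /* stand-in for the real socket struct */

struct sock_table {
  struct lwip_sock *slots;             /* contiguous storage: good cache locality */
  size_t capacity;
};

/* Ensure the table can hold index `fd`; grow geometrically so that
   insertions stay amortized O(1).  Returns 0 on success, -1 on OOM. */
static int sock_table_reserve(struct sock_table *t, size_t fd)
{
  if (fd < t->capacity)
    return 0;
  size_t newcap = t->capacity ? t->capacity : 16;
  while (newcap <= fd)
    newcap *= 2;
  struct lwip_sock *p = realloc(t->slots, newcap * sizeof *p);
  if (!p)
    return -1;
  /* Zero the newly added slots so `in_use` starts false. */
  memset(p + t->capacity, 0, (newcap - t->capacity) * sizeof *p);
  t->slots = p;
  t->capacity = newcap;
  return 0;
}

/* O(1) lookup by descriptor, unlike the linked list's O(n) scan. */
static struct lwip_sock *sock_table_get(struct sock_table *t, size_t fd)
{
  if (fd >= t->capacity || !t->slots[fd].in_use)
    return NULL;
  return &t->slots[fd];
}
```

A hash table would also give near-constant lookups, but the array keeps descriptors dense and needs no hashing; the cost is that capacity never shrinks in this sketch.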



Re: [GSoC 2017] Number of sockets

2017-08-24 Thread Mark H Weaver
Hi,

Joan Lledó  writes:

> As I mentioned in my previous post about ioctl operations[1], LwIP
> establishes a maximum limit of sockets, which is a big problem for a
> system like the Hurd. Now that I've finished all the tasks in my
> initial proposal, I thought it was a good idea to spend some time
> studying this issue and trying to find a solution.
>
> The problem is this: in LwIP, sockets are stored in a fixed-size
> array, which is the best solution for embedded systems: it is a
> natural way of indexing items without needing additional variables,
> and it allows the user to directly access any socket by its index in
> constant time. If we remove the limit on the number of sockets, then
> we cannot keep using an array, and another solution is required to
> handle all this.
>
> In the LwIP translator, I've chosen to replace the array with a linked
> list, and that has some implications: the need to iterate the list any
> time we need to create, destroy or look up a particular socket,

It would be good to avoid having socket lookup require time proportional
to the number of sockets.  The number of sockets may grow quite large in
some applications.

Instead of a linked list, how about using a dynamic array?

  https://en.wikipedia.org/wiki/Dynamic_array

This would give you constant-time lookups, amortized constant time
insertions and deletions, and better data locality and cache behavior.

What do you think?

  Mark



Re: [GSoC 2017] Number of sockets

2017-08-23 Thread Kalle Olavi Niemitalo
Joan Lledó  writes:

> They are using the 2-clause BSD license; is it OK to apply the patch
> with that license?

The 2-clause BSD license is OK for my lwip_poll patches.
I would prefer that you squash the bug fixes into the original
patch.




Re: [GSoC 2017] Number of sockets

2017-08-23 Thread Joan Lledó
>
> I meant the S_io_select and S_io_select_timeout functions should
> be able to return without replying to the RPC, if the socket is
> not immediately ready.  They would instead put the RPC in a
> queue, and if the socket later becomes ready, another thread
> would remove the RPC from the queue and reply to it.  That way,
> the translator would not need so many threads, saving some
> memory.  This scheme seems difficult to implement using
> lwip_select or lwip_poll because those functions block the
> calling thread until the desired event occurs.  Instead, it
> should be built on top of the lower-level lwIP functions.
> It would also benefit from changes in GNU MIG.
>
> However, I see the current pfinet already uses a thread per
> io_select, so your lwIP port is not worse in this respect.
>
> S_socket_recv (or S_io_read) has a similar issue but it is less
> serious there because one application thread calling recv causes
> only one translator thread to block.  (AFAIK, glibc does not have
> a Hurd-specific version of lio_listio, which could start multiple
> asynchronous operations with one call and thus cause multiple
> threads to block in various translator processes.)

Those issues are not a problem for lwip.

They are using the 2-clause BSD license; is it OK to apply the patch
with that license?

Thanks again.



Re: [GSoC 2017] Number of sockets

2017-08-17 Thread Kalle Olavi Niemitalo
Joan Lledó  writes:

> I've got a question about your patch. Why did you say that io_select
> without a thread per socket would require a different solution? I'm
> studying the patch because I'd like to send it to the lwip
> maintainers, and I don't find the problem.

I meant the S_io_select and S_io_select_timeout functions should
be able to return without replying to the RPC, if the socket is
not immediately ready.  They would instead put the RPC in a
queue, and if the socket later becomes ready, another thread
would remove the RPC from the queue and reply to it.  That way,
the translator would not need so many threads, saving some
memory.  This scheme seems difficult to implement using
lwip_select or lwip_poll because those functions block the
calling thread until the desired event occurs.  Instead, it
should be built on top of the lower-level lwIP functions.
It would also benefit from changes in GNU MIG.

However, I see the current pfinet already uses a thread per
io_select, so your lwIP port is not worse in this respect.

S_socket_recv (or S_io_read) has a similar issue but it is less
serious there because one application thread calling recv causes
only one translator thread to block.  (AFAIK, glibc does not have
a Hurd-specific version of lio_listio, which could start multiple
asynchronous operations with one call and thus cause multiple
threads to block in various translator processes.)
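The queuing scheme described above can be sketched as follows. This is only an illustration, not translator code: `mach_port_t` is a stand-in typedef, the reply send is elided, and in the real translator `park_rpc` would be reached from S_io_select returning MIG_NO_REPLY while `wake_rpcs` would run from an lwIP event callback.

```c
#include <stdlib.h>

typedef unsigned int mach_port_t;   /* stand-in for the Mach type */

/* One parked io_select RPC: instead of a blocked thread, we keep a
   small heap record holding everything needed to reply later. */
struct pending_select {
  mach_port_t reply_port;           /* where to send the eventual reply */
  int sockfd;                       /* socket being waited on */
  int events;                       /* requested SELECT_* mask */
  struct pending_select *next;
};

static struct pending_select *pending_head;

/* Called when the socket is not immediately ready: queue the RPC and
   return without replying, freeing the service thread. */
static void park_rpc(mach_port_t reply, int fd, int events)
{
  struct pending_select *p = malloc(sizeof *p);
  if (!p)
    return;
  p->reply_port = reply;
  p->sockfd = fd;
  p->events = events;
  p->next = pending_head;
  pending_head = p;
}

/* Called when lwIP reports `ready_events` on `fd`: unlink every
   matching entry and reply to it.  Returns how many were woken. */
static int wake_rpcs(int fd, int ready_events)
{
  int woken = 0;
  struct pending_select **pp = &pending_head;
  while (*pp) {
    struct pending_select *p = *pp;
    if (p->sockfd == fd && (p->events & ready_events)) {
      *pp = p->next;                /* unlink */
      /* ... send the io_select reply to p->reply_port here ... */
      free(p);
      woken++;
    } else {
      pp = &p->next;
    }
  }
  return woken;
}
```

As noted above, the hard part is not this queue but wiring it to lwIP's lower-level callbacks instead of lwip_select/lwip_poll, and unlinking entries when glibc destroys the reply port.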



Re: [GSoC 2017] Number of sockets

2017-08-17 Thread Joan Lledó
Hello again Kalle,

I've got a question about your patch. Why did you say that io_select
without a thread per socket would require a different solution? I'm
studying the patch because I'd like to send it to the lwip maintainers,
and I don't find the problem.

2017-08-15 9:50 GMT+02:00 Joan Lledó :
> Hello Kalle,
>
> I've applied and tested your patches and they seem to be working fine.
> translator is now using lwip_poll() instead of lwip_select().
>
> I've pushed the changes to github. Thank you very much.



Re: [GSoC 2017] Number of sockets

2017-08-15 Thread Joan Lledó
Hello Kalle,

I've applied and tested your patches and they seem to be working fine.
translator is now using lwip_poll() instead of lwip_select().

I've pushed the changes to github. Thank you very much.



Re: [GSoC 2017] Number of sockets

2017-08-13 Thread Kalle Olavi Niemitalo
Joan Lledó  writes:

> About your patches, which lwip version are you using?

Commit b82396f361ec9ce45cf0e997eb7ee5cc5aac12ec from your
lwip-hurd repository.  (I noticed the CRLFs because they ended up
in the diff.)

> Will you send them to the lwip patch tracker?

I don't intend to, because I have not tried running the code, and
because io_select without a thread per socket will require a
different solution.



Re: [GSoC 2017] Number of sockets

2017-08-13 Thread Joan Lledó
> Too bad there is no lwip_poll function.
> Might the LwIP folks be amenable to adding one?

Yes, they talked about that in their mailing list[1].

> Your size calculations seem wrong.
> glibc defines fd_mask = __fd_mask = long int.
> FD_SETSIZE is the number of bits that fit in fd_set.
> But then lwip_selscan does
>
>   size_t setsize = ((maxfdp1 / (8*(int)sizeof(fd_mask))) + 1) > FD_SETSIZE ?
>     ((maxfdp1 / (8*(int)sizeof(fd_mask))) + 1) : FD_SETSIZE;
>   fd_mask lreadset[setsize], lwriteset[setsize], lexceptset[setsize];
> [...]
>   memset(lreadset, 0, setsize);
>
> which has two bugs:
> * It treats FD_SETSIZE as the minimum number of fd_mask elements,
>   even though it was supposed to be the number of bits.
>   That seems to waste some memory, but it isn't serious.
> * It clears only the first setsize bytes of each array.
>   It should be doing memset(lreadset, 0, setsize * sizeof(fd_mask)).
>   Likewise in the memcpy calls later.
>   This seems likely to cause real problems.
>
> lwip_io_select_common has similar bugs plus it seems to leak
> lreadset, lwriteset, or lexceptset in some cases.

> The handling of LWIP_SOCKET_OFFSET looks inconsistent.
> Suppose LWIP_SOCKET_OFFSET is 100 and LWIP_SOCKET_OPEN_COUNT is
> defined.  The first alloc_socket call sets newsock->count = 100
> and returns 100.  However, if get_socket(100) is then called,
> it first subtracts LWIP_SOCKET_OFFSET from s, resulting in 0,
> and then compares sock->count to 0; won't match.

I'll work on those bugs. Thanks for the time you spent reviewing my
commit; I appreciate your feedback.

> Why do the files have CRLF in the lwip-hurd repository?
> They don't in the lwip repository.

I didn't realize, and don't know why, I'll fix it.

About your patches, which lwip version are you using? Will you send
them to the lwip patch tracker?

--
[1] http://lists.nongnu.org/archive/html/lwip-devel/2017-08/msg00082.html



Re: [GSoC 2017] Number of sockets

2017-08-13 Thread Kalle Olavi Niemitalo
The handling of LWIP_SOCKET_OFFSET looks inconsistent.
Suppose LWIP_SOCKET_OFFSET is 100 and LWIP_SOCKET_OPEN_COUNT is
defined.  The first alloc_socket call sets newsock->count = 100
and returns 100.  However, if get_socket(100) is then called,
it first subtracts LWIP_SOCKET_OFFSET from s, resulting in 0,
and then compares sock->count to 0; won't match.

I guess this mismatch does not harm the Hurd if you define
LWIP_SOCKET_OFFSET as 0 there, but I'd like to see it fixed
anyway.
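One consistent convention that would fix the mismatch described above: store the offset-free index internally and translate at the boundary in both directions. This is only a sketch of the idea, not lwIP's actual code; the stub struct and function names are hypothetical.

```c
#define LWIP_SOCKET_OFFSET 100

struct lwip_sock_stub { int count; };   /* stand-in for struct lwip_sock */

/* Store the internal (offset-free) index; return the external
   descriptor with the offset applied, as lwip_socket() must. */
static int alloc_socket_fixed(struct lwip_sock_stub *s, int index)
{
  s->count = index;
  return index + LWIP_SOCKET_OFFSET;
}

/* get_socket must undo the offset the same way before comparing,
   so that get_socket(alloc_socket(...)) always matches. */
static int get_socket_matches(const struct lwip_sock_stub *s, int extfd)
{
  return s->count == extfd - LWIP_SOCKET_OFFSET;
}
```

With the code as reported above, `count` holds the external descriptor while the comparison uses the internal index, so the two sides never agree once the offset is nonzero.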



Re: [GSoC 2017] Number of sockets

2017-08-13 Thread Kalle Olavi Niemitalo
Kalle Olavi Niemitalo  writes:

> I wonder how hard it would be to implement those select ops in
> the translator without hogging a thread for each.

To clarify: if one thread in a program calls select on an fd_set
that lists ten sockets, then glibc sends ten io_select requests
before it waits for any response, and this costs ten threads in
the lwip translator if none of the sockets is ready.  Adding
lwip_poll does not help with that.  The struct lwip_select_cb
would have to be allocated from the heap and there would have to
be a way to unlink and free it if glibc destroys the reply port.



Re: [GSoC 2017] Number of sockets

2017-08-12 Thread Kalle Olavi Niemitalo
Joan Lledó  writes:

> Since Glibc calls the io_select() operation each time the user
> calls send() or recv(), in practice such sockets are just
> unusable.

Too bad there is no lwip_poll function.
Might the LwIP folks be amenable to adding one?

> [3] 
> https://github.com/jlledom/lwip-hurd/commit/ad7bf3a9342c9e75e042ce1040797392b98d24df

Your size calculations seem wrong.
glibc defines fd_mask = __fd_mask = long int.
FD_SETSIZE is the number of bits that fit in fd_set.
But then lwip_selscan does

  size_t setsize = ((maxfdp1 / (8*(int)sizeof(fd_mask))) + 1) > FD_SETSIZE ?
    ((maxfdp1 / (8*(int)sizeof(fd_mask))) + 1) : FD_SETSIZE;
  fd_mask lreadset[setsize], lwriteset[setsize], lexceptset[setsize];
[...]
  memset(lreadset, 0, setsize);

which has two bugs:
* It treats FD_SETSIZE as the minimum number of fd_mask elements,
  even though it was supposed to be the number of bits.
  That seems to waste some memory, but it isn't serious.
* It clears only the first setsize bytes of each array.
  It should be doing memset(lreadset, 0, setsize * sizeof(fd_mask)).
  Likewise in the memcpy calls later.
  This seems likely to cause real problems.

lwip_io_select_common has similar bugs plus it seems to leak
lreadset, lwriteset, or lexceptset in some cases.

I wonder how hard it would be to implement those select ops in
the translator without hogging a thread for each.