Re: Several questions concerning libeio internals(+)

2015-12-16 Thread Nick Zavaritsky

> You need some communications mechanism for your threads - that's outside
> the scope of libeio, really.
> 
>> However, since there’s a single result queue, it’s impossible to route the 
>> completed request to a particular thread.
> 
> If it's impossible to route completed requests to the particular thread
> that wants the result, then nothing in libeio can fix that, since it is
> impossible.

I believe it would be useful if libeio routed the request back to the issuing 
thread.

It is technically possible to build another layer on top of libeio to process 
the result queue and to route results to particular threads. This approach has 
serious drawbacks:

 * you can’t get request pointers from eio_poll(); even worse, the function 
destroys requests, hence in order to route requests somewhere you have to 
duplicate them;

 * you have to route requests from the completion callback; the original 
callback has to be remembered somehow, and all functions like eio_open have to 
be wrapped;

 * it is inefficient, since a request has to travel from a worker thread to 
the dispatcher thread and only then to its final destination.

If you consider this feature to be out of scope for libeio, I am fine with that.
___
libev mailing list
libev@lists.schmorp.de
http://lists.schmorp.de/mailman/listinfo/libev

Re: Several questions concerning libeio internals(+)

2015-12-16 Thread Marc Lehmann
On Wed, Dec 16, 2015 at 07:06:18PM +0300, Nick Zavaritsky  
wrote:
> Consider eio_open (const char *path, int flags, mode_t mode, int pri, eio_cb 
> cb, void *data).
> 
> Assume thread A calls eio_open; I want the completion callback to be invoked 
> in the same thread.

You need some communications mechanism for your threads - that's outside
the scope of libeio, really.

> However, since there’s a single result queue, it’s impossible to route the 
> completed request to a particular thread.

If it's impossible to route completed requests to the particular thread
that wants the result, then nothing in libeio can fix that, since it is
impossible. Since it is, in general, possible to route requests (or any
data structure) to specific threads, I think that statement is wrong. What
that mechanism is varies widely between implementations, so I am not sure
libeio should force a specific one.

-- 
The choice of a   Deliantra, the free code+content MORPG
  -==- _GNU_  http://www.deliantra.net
  ==-- _   generation
  ---==---(_)__  __   __  Marc Lehmann
  --==---/ / _ \/ // /\ \/ /  schm...@schmorp.de
  -=/_/_//_/\_,_/ /_/\_\


Re: Several questions concerning libeio internals(+)

2015-12-16 Thread Nick Zavaritsky

> As for queues, what in the single result queue doesn't work the way you
> want?  You can associate state with each request either by making the
> struct larger or using the data member, similarly to libev.

Consider eio_open (const char *path, int flags, mode_t mode, int pri, eio_cb 
cb, void *data).

Assume thread A calls eio_open; I want the completion callback to be invoked in 
the same thread.

With libeio’s current design one can submit requests from multiple threads. 
The want_poll callback can wake multiple threads at once, though that is going 
to be inefficient.

However, since there’s a single result queue, it’s impossible to route the 
completed request to a particular thread.



Re: Several questions concerning libeio internals(+)

2015-12-16 Thread Marc Lehmann
On Wed, Dec 16, 2015 at 01:46:35PM +0300, Nick Zavaritsky  
wrote:
> >> eio_init() initializes thread local state; a thread gets a private result 
> >> queue + callbacks. There is the single global request queue + a set of 
> >> worker threads. Once a task is complete it moves into the corresponding 
> >> result queue. The embedding model is essentially the same: eio_poll fetches 
> >> tasks from the thread’s private result queue, registered callbacks are 
> >> invoked when the result queue state changes.
> > 
> > That sounds rather slow and/or unportable - what problem are you trying to
> > solve that you can't solve at the moment?
> 
> We have several threads, each running its own event loop, and we want a 
> completed request to return to the issuing thread. Since there is a single 
> result queue in libeio, it doesn’t work the way we want.
> 
> What do you find slow or unportable about separate result queues?

Thread-local state is either unportable or slow; in any case, it's overhead. 
Not knowing what you do and why you do it means I can only speculate, of 
course.

As for queues, what in the single result queue doesn't work the way you
want?  You can associate state with each request either by making the
struct larger or using the data member, similarly to libev.

> >> (2) Why does ALLOC macro lock pool->wrklock?
> > 
> > The request is shared between threads, and taking the lock addresses
> > concerns about tearing. There have been layout changes, and (most notably) 
> > req->flags is now a sig_atomic_t, but neither change guarantees that it
> > will work, so it's a "rather be safe than sorry" approach, especially as
> > it isn't a performance-sensitive place.
> 
> Isn’t it true that no two threads access a request simultaneously, the 
> cancelled field being the exception?

No, but the main concern is the cancelled field - accessing it might
non-atomically write the flags field, as C (in the past) made no
guarantees about atomicity between threads.

It could surely be optimised "most everywhere", but if it doesn't hurt,
erring on the correct side is a virtue.



Re: Several questions concerning libeio internals(+)

2015-12-16 Thread Nick Zavaritsky

> I think libeio already is usable from multiple threads?
> 
>> eio_init() initializes thread local state; a thread gets a private result 
>> queue + callbacks. There is the single global request queue + a set of 
>> worker threads. Once a task is complete it moves into the corresponding 
> >> result queue. The embedding model is essentially the same: eio_poll fetches 
>> from the thread’s private result queue, registered callbacks are invoked 
>> when the result queue state changes.
> 
> That sounds rather slow and/or unportable - what problem are you trying to
> solve that you can't solve at the moment?

We have several threads, each running its own event loop, and we want a 
completed request to return to the issuing thread. Since there is a single 
result queue in libeio, it doesn’t work the way we want.

What do you find slow or unportable about separate result queues?

>> (2) Why does ALLOC macro lock pool->wrklock?
> 
> The request is shared between threads, and taking the lock addresses
> concerns about tearing. There have been layout changes, and (most notably) 
> req->flags is now a sig_atomic_t, but neither change guarantees that it
> will work, so it's a "rather be safe than sorry" approach, especially as
> it isn't a performance-sensitive place.

Isn’t it true that no two threads access a request simultaneously, the 
cancelled field being the exception?

Re: Several questions concerning libeio internals(+)

2015-12-15 Thread Marc Lehmann
On Tue, Dec 15, 2015 at 08:22:45PM +0300, Nick Zavaritsky  
wrote:
> I’ve implemented support for using libeio from multiple threads for 
> tarantool.org. Any interest in this feature?

I think libeio already is usable from multiple threads?

> eio_init() initializes thread local state; a thread gets a private result 
> queue + callbacks. There is the single global request queue + a set of worker 
> threads. Once a task is complete it moves into the corresponding result 
> queue. The embedding model is essentially the same: eio_poll fetches tasks from 
> the thread’s private result queue, registered callbacks are invoked when the 
> result queue state changes.

That sounds rather slow and/or unportable - what problem are you trying to
solve that you can't solve at the moment?

> (1) What is the purpose of the workers list? It is never used besides worker 
> (un)registration.

Right now, libeio relies on condvars, which are often relatively fair
in which threads they actually wake up. Obviously, it would be better
if "hot" worker threads were reused first, and this is a step towards
implementing that. It turned out not to be so easy, but libeio hasn't
really seen a formal release yet either :)

> (2) Why does ALLOC macro lock pool->wrklock?

The request is shared between threads, and taking the lock addresses
concerns about tearing. There have been layout changes, and (most notably) 
req->flags is now a sig_atomic_t, but neither change guarantees that it
will work, so it's a "rather be safe than sorry" approach, especially as
it isn't a performance-sensitive place.

> (3) Several pool attributes seem redundant. For instance, nready is 
> essentially req_queue.size and npending==res_queue.size. They aren’t 
> redundant, are they?

Well, they are not always identical, but the main concern here is public
vs. private API. Maybe they could be optimised away, but the design in
this area is still in flux.



Re: Several questions concerning libeio internals(+)

2015-12-15 Thread Chris Brody
Nick, I am sure these changes will be able to help others in the
future. I hope you are willing to publish them as a Gist or something
(along with an explicit license statement) in case they do not make it
into the libeio core.

Thanks,
Chris

On Tue, Dec 15, 2015 at 12:22 PM, Nick Zavaritsky  wrote:
> Hi!
>
> I’ve implemented support for using libeio from multiple threads for 
> tarantool.org. Any interest in this feature?
>
> eio_init() initializes thread local state; a thread gets a private result 
> queue + callbacks. There is the single global request queue + a set of worker 
> threads. Once a task is complete it moves into the corresponding result 
> queue. The embedding model is essentially the same: eio_poll fetches tasks from 
> the thread’s private result queue, registered callbacks are invoked when the 
> result queue state changes.
>
> It would be great if you answer several questions about libeio internals.
>
> (1) What is the purpose of the workers list? It is never used besides worker 
> (un)registration.
>
> (2) Why does ALLOC macro lock pool->wrklock?
>
> (3) Several pool attributes seem redundant. For instance, nready is 
> essentially req_queue.size and npending==res_queue.size. They aren’t 
> redundant, are they?
>
> Regards.


Several questions concerning libeio internals(+)

2015-12-15 Thread Nick Zavaritsky
Hi!

I’ve implemented support for using libeio from multiple threads for 
tarantool.org. Any interest in this feature?

eio_init() initializes thread local state; a thread gets a private result queue 
+ callbacks. There is the single global request queue + a set of worker 
threads. Once a task is complete it moves into the corresponding result queue. 
The embedding model is essentially the same: eio_poll fetches tasks from the 
thread’s private result queue, registered callbacks are invoked when the result 
queue state changes.

It would be great if you could answer several questions about libeio internals.

(1) What is the purpose of the workers list? It is never used besides worker 
(un)registration.

(2) Why does ALLOC macro lock pool->wrklock?

(3) Several pool attributes seem redundant. For instance, nready is essentially 
req_queue.size and npending==res_queue.size. They aren’t redundant, are they?

Regards.