Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-11 Thread Stefan Hajnoczi
On Tue, Apr 09, 2013 at 01:10:29PM +0800, liu ping fan wrote:
 On Mon, Apr 8, 2013 at 7:46 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
  On Tue, Apr 02, 2013 at 05:49:57PM +0800, liu ping fan wrote:
  On Thu, Mar 28, 2013 at 9:40 PM, Stefan Hajnoczi stefa...@gmail.com 
  wrote:
   On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
   On 28/03/2013 08:55, Liu Ping Fan wrote:
   3rd. block layer's AioContext will block other AioContexts on the 
same thread.
  
   I cannot understand this.
  
   The plan is for BlockDriverState to be bound to an AioContext.  That
   means each thread is set up with one AioContext.  BlockDriverStates that
   are used in that thread will first be bound to its AioContext.
  
   It's not very useful to have multiple AioContext in the same thread.
  
  But it can be the case that we detach and re-attach different
  devices (and their AioContexts) to the same thread.  I think io_flush
  is designed for synchronous waiting, but for NetClientState we need
  something else.  So if we use AioContext, is it proper to extend
  qemu_aio_set_fd_handler() with a readable/writeable interface?
 
  Devices don't have AioContexts, threads do.  When you bind a device to
  an AioContext the AioContext already exists independent of the device.
 
 Oh, yes.  Let me put it this way: we switch devices among
 different threads.  Then if a NetClientState happens to live on the same
 thread as a BlockDriverState, it will not be responsive until the
 BlockDriverState has finished its in-flight work.

It's partially true that devices sharing an event loop may be less
responsive.  That's why we have the option of a 1:1 device-to-thread
mapping.

But remember that QEMU code is (meant to be) designed for event loops.
Therefore, it must not block and should return to the event loop as
quickly as possible.  So a block and net device in the same event loop
shouldn't inconvenience each other dramatically if the device-to-thread
mappings are reasonable given the host machine, workload, etc.

  Unfortunately I don't understand your question about io_flush and
  readable/writeable qemu_aio_set_fd_handler().
 
 As for readable/writable, I mean something like IOCanReadHandler.  If a
 NetClientState is not readable, it simply does not poll for the G_IO_IN
 event; it does not block.  io_flush, on the other hand, blocks on pending
 AIO operations.  These behaviors are different, so I suggest extending
 qemu_aio_set_fd_handler() with readable/writeable support.

I see, thanks for explaining.

In another thread Kevin suggested a solution:

Basically, io_flush() and qemu_aio_wait() should be removed.  Instead
we'll push the synchronous wait into the block layer, which is the only
user.

We can do that by introducing a .bdrv_drain() function which is similar
to io_flush().  Then qemu_drain_all(), which uses qemu_aio_wait(), can
be changed to call .bdrv_drain() and then run an event loop
iteration.

In other words, the event loop shouldn't know about io_flush().
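
Roughly, the direction looks like this (only a sketch of the idea, not the
actual patches; the callback name, the list walk and the "run one event
loop iteration" step are all illustrative):

/* Draining becomes a block-layer operation instead of an event-loop hook. */
void bdrv_drain_all(void)
{
    BlockDriverState *bs;
    bool busy;

    do {
        busy = false;
        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            if (bs->drv && bs->drv->bdrv_drain) {
                /* returns true while requests are still in flight */
                busy |= bs->drv->bdrv_drain(bs);
            }
        }
        if (busy) {
            /* let completion callbacks run; in QEMU this would be one
             * blocking pass of the main loop / aio_poll() */
            run_one_event_loop_iteration();   /* placeholder, not a real API */
        }
    } while (busy);
}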

I will try to send patches for this today.

Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-08 Thread Stefan Hajnoczi
On Tue, Apr 02, 2013 at 05:49:57PM +0800, liu ping fan wrote:
 On Thu, Mar 28, 2013 at 9:40 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
  On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
  On 28/03/2013 08:55, Liu Ping Fan wrote:
  3rd. block layer's AioContext will block other AioContexts on the 
   same thread.
 
  I cannot understand this.
 
  The plan is for BlockDriverState to be bound to an AioContext.  That
  means each thread is set up with one AioContext.  BlockDriverStates that
  are used in that thread will first be bound to its AioContext.
 
  It's not very useful to have multiple AioContext in the same thread.
 
 But it can be the case that we detach and re-attach different
 devices (and their AioContexts) to the same thread.  I think io_flush
 is designed for synchronous waiting, but for NetClientState we need
 something else.  So if we use AioContext, is it proper to extend
 qemu_aio_set_fd_handler() with a readable/writeable interface?

Devices don't have AioContexts, threads do.  When you bind a device to
an AioContext the AioContext already exists independent of the device.

Unfortunately I don't understand your question about io_flush and
readable/writeable qemu_aio_set_fd_handler().

Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-08 Thread Stefan Hajnoczi
On Mon, Apr 01, 2013 at 04:15:06PM +0800, liu ping fan wrote:
 On Thu, Mar 28, 2013 at 10:55 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 
  On Thu, Mar 28, 2013 at 03:55:51PM +0800, Liu Ping Fan wrote:
  It seems the AioContext vs glib issue hasn't been settled yet.  My take
  is that glib is preferable *if* we don't need to write too many
  helpers/wrappers on top (then we're basically back to rolling our own
  AioContext).
 
  I was surprised by the amount of code required to listen on a file
  descriptor.  Are you sure there isn't a glib way of doing this that
  avoids rolling our own GSource?
 
 I will dive more into the code, following mdroth's suggestion.  Currently the
 main issue is that glib seems to have no support for a readable/writable check.

I mentioned this in another reply.  I'm not sure what you mean by glib
does not support readable or writeable.  It has G_IO_IN and G_IO_OUT for
readable/writeable events.

  In the next series, please drop the hub re-entrancy stuff and virtio-net
  data plane.  Instead just focus on systematically moving existing net
  clients onto the event loop (net/*.c and NICs).  The only controversial
  issue there is AioContext vs glib, and once that's settled we can merge
  the patches.
 
 What about the core (queue.c, net.c)?  Should I send it out at the
 same time, or after the backend (client) conversion is finished?

I think the core changes are necessary to build the converted net
clients, so they should be part of the series.

Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-08 Thread liu ping fan
On Mon, Apr 8, 2013 at 7:49 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Mon, Apr 01, 2013 at 04:15:06PM +0800, liu ping fan wrote:
 On Thu, Mar 28, 2013 at 10:55 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 
  On Thu, Mar 28, 2013 at 03:55:51PM +0800, Liu Ping Fan wrote:
  It seems the AioContext vs glib issue hasn't been settled yet.  My take
  is that glib is preferable *if* we don't need to write too many
  helpers/wrappers on top (then we're basically back to rolling our own
  AioContext).
 
  I was surprised by the amount of code required to listen on a file
  descriptor.  Are you sure there isn't a glib way of doing this that
  avoids rolling our own GSource?
 
 I will dive more into the code, following mdroth's suggestion.  Currently the
 main issue is that glib seems to have no support for a readable/writable check.

 I mentioned this in another reply.  I'm not sure what you mean by glib
 does not support readable or writeable.  It has G_IO_IN and G_IO_OUT for
 readable/writeable events.

I answered this in another reply.
  In the next series, please drop the hub re-entrancy stuff and virtio-net
  data plane.  Instead just focus on systematically moving existing net
  clients onto the event loop (net/*.c and NICs).  The only controversial
  issue there is AioContext vs glib, and once that's settled we can merge
  the patches.
 
 What about the core (queue.c, net.c)?  Should I send it out at the
 same time, or after the backend (client) conversion is finished?

 I think the core changes are necessary to build the converted net
 clients, so they should be part of the series.

OK, I will send out all of them in v4.

 Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-08 Thread liu ping fan
On Mon, Apr 8, 2013 at 7:46 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Tue, Apr 02, 2013 at 05:49:57PM +0800, liu ping fan wrote:
 On Thu, Mar 28, 2013 at 9:40 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
  On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
  On 28/03/2013 08:55, Liu Ping Fan wrote:
  3rd. block layer's AioContext will block other AioContexts on the 
   same thread.
 
  I cannot understand this.
 
  The plan is for BlockDriverState to be bound to an AioContext.  That
  means each thread is set up with one AioContext.  BlockDriverStates that
  are used in that thread will first be bound to its AioContext.
 
  It's not very useful to have multiple AioContext in the same thread.
 
 But it can be the case that we detach and re-attach different
 devices (and their AioContexts) to the same thread.  I think io_flush
 is designed for synchronous waiting, but for NetClientState we need
 something else.  So if we use AioContext, is it proper to extend
 qemu_aio_set_fd_handler() with a readable/writeable interface?

 Devices don't have AioContexts, threads do.  When you bind a device to
 an AioContext the AioContext already exists independent of the device.

Oh, yes.  Let me put it this way: we switch devices among
different threads.  Then if a NetClientState happens to live on the same
thread as a BlockDriverState, it will not be responsive until the
BlockDriverState has finished its in-flight work.

 Unfortunately I don't understand your question about io_flush and
 readable/writeable qemu_aio_set_fd_handler().

As for readable/writable, I mean something like IOCanReadHandler.  If a
NetClientState is not readable, it simply does not poll for the G_IO_IN
event; it does not block.  io_flush, on the other hand, blocks on pending
AIO operations.  These behaviors are different, so I suggest extending
qemu_aio_set_fd_handler() with readable/writeable support.
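Something like the following, just to illustrate the idea (the names, the
extra "can read"/"can write" parameters and the helper are invented for this
sketch; this is not an existing interface):

/* Hypothetical extension: optional predicates in the spirit of
 * IOCanReadHandler.  Returning 0 means "do not poll this event for now",
 * instead of blocking the way io_flush() does. */
typedef int IOCanRWHandler(void *opaque);               /* invented name */

void qemu_aio_set_fd_handler_ext(int fd,                /* invented name */
                                 IOHandler *io_read,
                                 IOHandler *io_write,
                                 IOCanRWHandler *io_can_read,
                                 IOCanRWHandler *io_can_write,
                                 void *opaque);

/* In a GSource-style prepare(), the predicates would simply mask which
 * events we ask poll() for: */
static void update_poll_events(GPollFD *pfd, IOCanRWHandler *can_read,
                               IOCanRWHandler *can_write, void *opaque)
{
    pfd->events = 0;
    if (!can_read || can_read(opaque)) {
        pfd->events |= G_IO_IN;
    }
    if (!can_write || can_write(opaque)) {
        pfd->events |= G_IO_OUT;
    }
}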

Regards,
Pingfan

 Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-02 Thread liu ping fan
On Thu, Mar 28, 2013 at 9:40 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
 On 28/03/2013 08:55, Liu Ping Fan wrote:
 3rd. block layer's AioContext will block other AioContexts on the same 
  thread.

 I cannot understand this.

 The plan is for BlockDriverState to be bound to an AioContext.  That
 means each thread is set up with one AioContext.  BlockDriverStates that
 are used in that thread will first be bound to its AioContext.

 It's not very useful to have multiple AioContext in the same thread.

But it can be the case that we detach and re-attach different
devices (and their AioContexts) to the same thread.  I think io_flush
is designed for synchronous waiting, but for NetClientState we need
something else.  So if we use AioContext, is it proper to extend
qemu_aio_set_fd_handler() with a readable/writeable interface?

Thanks,
Pingfan

 Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-04-01 Thread liu ping fan
On Thu, Mar 28, 2013 at 10:55 PM, Stefan Hajnoczi stefa...@gmail.com wrote:

 On Thu, Mar 28, 2013 at 03:55:51PM +0800, Liu Ping Fan wrote:
  From: Liu Ping Fan pingf...@linux.vnet.ibm.com
 
  This series aims to make the whole network layer re-entrant; here it only
  covers the backend and frontend,
  and for the net core, separate patches have been sent out.  All of this
  will prepare us for
  moving towards making the network layer multi-threaded.
  Finally it would be something like
 qemu -object io-thread,id=thread0 \
   -device virtio-net,rx[0]=thread0,tx[0]=thread0
 
  The brief of the whole aim and plan is documented on
http://wiki.qemu.org/Features/network_reentrant
 
  The main issue is about GSource or AioContext,
http://marc.info/?t=13631545332r=1w=3
  And I summarize the main points:
disadvantage for current AioContext
 1st. need to define and expand interface for other fd events, while glib 
  open this interface for user *
 2nd. need to add support for IOCanReadHandler, while gsource provide 
  prepare, check method to allow more flexible control
 3rd. block layer's AioContext will block other AioContexts on the same 
  thread.
 4th. need more document
   disadvantage for glib
 1st. if more than one fds on the same GSource, need re-implement 
  something like aio_set_file_handler
 
  Since I have succeeded in porting the frontend onto glib, there is no obstacle to using
  glib.
 
 
  v1-v2:
1.NetClientState can associate with up to 2 GSource, for virtio net, one 
  for tx, one for rx,
  so vq can run on different threads.
2.make network front-end onto glib, currently virtio net dataplane
 
 
  Liu Ping Fan (4):
net: port tap onto glib
net: resolve race of tap backend and its peer
net: port hub onto glib
net: port virtio net onto glib
 
   hw/qdev-properties-system.c |1 +
   hw/virtio-net.c |  165 
  +++
   hw/virtio.c |6 ++
   hw/virtio.h |2 +
   include/net/net.h   |   27 +++
   include/net/queue.h |   14 
   net/hub.c   |   34 -
   net/net.c   |   97 +
   net/queue.c |4 +-
   net/tap.c   |   62 +---
   10 files changed, 397 insertions(+), 15 deletions(-)

 It seems the AioContext vs glib issue hasn't been settled yet.  My take
 is that glib is preferable *if* we don't need to write too many
 helpers/wrappers on top (then we're basically back to rolling our own
 AioContext).

 I was surprised by the amount of code required to listen on a file
 descriptor.  Are you sure there isn't a glib way of doing this that
 avoids rolling our own GSource?

I will dive more into the code, following mdroth's suggestion.  Currently the
main issue is that glib seems to have no support for a readable/writable check.

 In the next series, please drop the hub re-entrancy stuff and virtio-net
 data plane.  Instead just focus on systematically moving existing net
 clients onto the event loop (net/*.c and NICs).  The only controversial
 issue there is AioContext vs glib, and once that's settled we can merge
 the patches.

What about the core (queue.c, net.c)?  Should I send it out at the
same time, or after the backend (client) conversion is finished?

 Please avoid layering violations - for example a comment about
 virtio-net in net.h, a comment about vhost in tap, or putting
 net_source_funcs in net.h.  I think converting all existing net clients
 will help make the code changes appropriate and eliminate these kinds of
 hacks which are because you're focussing just on virtio, tap, and hub
 here.

Ok, will convert all.

Regards,
Ping Fan

 Stefan



[Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-03-28 Thread Liu Ping Fan
From: Liu Ping Fan pingf...@linux.vnet.ibm.com

This series aims to make the whole network layer re-entrant; here it only covers the backend
and frontend,
and for the net core, separate patches have been sent out.  All of this will
prepare us for
moving towards making the network layer multi-threaded.
Finally it would be something like
   qemu -object io-thread,id=thread0 \
 -device virtio-net,rx[0]=thread0,tx[0]=thread0

The brief of the whole aim and plan is documented on
  http://wiki.qemu.org/Features/network_reentrant

The main issue is the choice between GSource and AioContext,
  http://marc.info/?t=13631545332r=1w=3
And I summarize the main points:
  disadvantages of the current AioContext
   1st. we need to define and expand the interface for other fd events, while glib
already opens this interface to users
   2nd. we need to add support for IOCanReadHandler, while GSource provides
prepare/check methods that allow more flexible control
   3rd. the block layer's AioContext will block other AioContexts on the same
thread.
   4th. it needs more documentation
 disadvantages of glib
   1st. if more than one fd is on the same GSource, we need to re-implement something
like aio_set_fd_handler

Since I have succeeded in porting the frontend onto glib, there is no obstacle to using
glib.


v1->v2:
  1. A NetClientState can be associated with up to 2 GSources; for virtio-net, one for
tx and one for rx,
so the vqs can run on different threads.
  2. Move the network front-end onto glib; currently the virtio-net dataplane.


Liu Ping Fan (4):
  net: port tap onto glib
  net: resolve race of tap backend and its peer
  net: port hub onto glib
  net: port virtio net onto glib

 hw/qdev-properties-system.c |1 +
 hw/virtio-net.c |  165 +++
 hw/virtio.c |6 ++
 hw/virtio.h |2 +
 include/net/net.h   |   27 +++
 include/net/queue.h |   14 
 net/hub.c   |   34 -
 net/net.c   |   97 +
 net/queue.c |4 +-
 net/tap.c   |   62 +---
 10 files changed, 397 insertions(+), 15 deletions(-)

-- 
1.7.4.4




Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-03-28 Thread Paolo Bonzini
On 28/03/2013 08:55, Liu Ping Fan wrote:
   disadvantage for current AioContext
1st. need to define and expand the interface for other fd events, while glib
 already opens this interface to users

True.

2nd. need to add support for IOCanReadHandler, while GSource provides
 prepare/check methods that allow more flexible control

The AioFlushHandler is basically the same thing as the IOCanReadHandler.
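
From memory, both look roughly like this (signatures reproduced from memory,
details may differ):

typedef int IOCanReadHandler(void *opaque);
typedef int AioFlushHandler(void *opaque);

/* Both are predicates over an opaque pointer.  The difference lies in how
 * callers use the result: an IOCanReadHandler returning 0 just suppresses
 * read polling, while the io_flush result tells the aio event loop whether
 * there is still pending work worth blocking for. */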

3rd. block layer's AioContext will block other AioContexts on the same 
 thread.

I cannot understand this.

4th. needs more documentation

I really don't think so.

  disadvantage for glib
1st. if more than one fd is on the same GSource, we need to re-implement something
 like aio_set_fd_handler




Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-03-28 Thread Stefan Hajnoczi
On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
 On 28/03/2013 08:55, Liu Ping Fan wrote:
 3rd. block layer's AioContext will block other AioContexts on the same 
  thread.
 
 I cannot understand this.

The plan is for BlockDriverState to be bound to an AioContext.  That
means each thread is set up with one AioContext.  BlockDriverStates that
are used in that thread will first be bound to its AioContext.

It's not very useful to have multiple AioContext in the same thread.

Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-03-28 Thread Stefan Hajnoczi
On Thu, Mar 28, 2013 at 03:55:51PM +0800, Liu Ping Fan wrote:
 From: Liu Ping Fan pingf...@linux.vnet.ibm.com
 
 This series aims to make the whole network layer re-entrant; here it only covers the
 backend and frontend,
 and for the net core, separate patches have been sent out.  All of this will
 prepare us for
 moving towards making the network layer multi-threaded.
 Finally it would be something like
qemu -object io-thread,id=thread0 \
  -device virtio-net,rx[0]=thread0,tx[0]=thread0
 
 The brief of the whole aim and plan is documented on
   http://wiki.qemu.org/Features/network_reentrant
 
 The main issue is about GSource or AioContext,
   http://marc.info/?t=13631545332r=1w=3
 And I summarize the main points:
   disadvantage for current AioContext
1st. need to define and expand interface for other fd events, while glib 
 open this interface for user *
2nd. need to add support for IOCanReadHandler, while gsource provide 
 prepare, check method to allow more flexible control
3rd. block layer's AioContext will block other AioContexts on the same 
 thread.
4th. need more document
  disadvantage for glib
1st. if more than one fds on the same GSource, need re-implement something 
 like aio_set_file_handler
 
 Since I have succeeded in porting the frontend onto glib, there is no obstacle to using
 glib.
 
 
 v1-v2:
   1.NetClientState can associate with up to 2 GSource, for virtio net, one 
 for tx, one for rx, 
 so vq can run on different threads.
   2.make network front-end onto glib, currently virtio net dataplane
 
 
 Liu Ping Fan (4):
   net: port tap onto glib
   net: resolve race of tap backend and its peer
   net: port hub onto glib
   net: port virtio net onto glib
 
  hw/qdev-properties-system.c |1 +
  hw/virtio-net.c |  165 
 +++
  hw/virtio.c |6 ++
  hw/virtio.h |2 +
  include/net/net.h   |   27 +++
  include/net/queue.h |   14 
  net/hub.c   |   34 -
  net/net.c   |   97 +
  net/queue.c |4 +-
  net/tap.c   |   62 +---
  10 files changed, 397 insertions(+), 15 deletions(-)

It seems the AioContext vs glib issue hasn't been settled yet.  My take
is that glib is preferable *if* we don't need to write too many
helpers/wrappers on top (then we're basically back to rolling our own
AioContext).

I was surprised by the amount of code required to listen on a file
descriptor.  Are you sure there isn't a glib way of doing this that
avoids rolling our own GSource?
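
For reference, the simplest pure-glib route is a GIOChannel watch (a minimal
standalone example, not QEMU code; whether it is flexible enough for our net
clients is exactly the question):

#include <glib.h>
#include <unistd.h>

static gboolean fd_ready(GIOChannel *chan, GIOCondition cond, gpointer data)
{
    if (cond & G_IO_IN) {
        g_print("fd is readable\n");
    }
    if (cond & G_IO_OUT) {
        g_print("fd is writable\n");
    }
    return TRUE;    /* keep the watch installed */
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    GIOChannel *chan = g_io_channel_unix_new(STDIN_FILENO);

    /* ask only for readability here; adding G_IO_OUT on this fd would
     * wake the loop whenever the fd is writable */
    g_io_add_watch(chan, G_IO_IN, fd_ready, NULL);
    g_main_loop_run(loop);
    return 0;
}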

In the next series, please drop the hub re-entrancy stuff and virtio-net
data plane.  Instead just focus on systematically moving existing net
clients onto the event loop (net/*.c and NICs).  The only controversial
issue there is AioContext vs glib, and once that's settled we can merge
the patches.

Please avoid layering violations - for example a comment about
virtio-net in net.h, a comment about vhost in tap, or putting
net_source_funcs in net.h.  I think converting all existing net clients
will help make the code changes appropriate and eliminate these kinds of
hacks which are because you're focussing just on virtio, tap, and hub
here.

Stefan



Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib

2013-03-28 Thread mdroth
On Thu, Mar 28, 2013 at 03:55:52PM +0100, Stefan Hajnoczi wrote:
 On Thu, Mar 28, 2013 at 03:55:51PM +0800, Liu Ping Fan wrote:
  From: Liu Ping Fan pingf...@linux.vnet.ibm.com
  
  This series aims to make the whole network layer re-entrant; here it only
  covers the backend and frontend,
  and for the net core, separate patches have been sent out.  All of this
  will prepare us for
  moving towards making the network layer multi-threaded.
  Finally it would be something like
 qemu -object io-thread,id=thread0 \
   -device virtio-net,rx[0]=thread0,tx[0]=thread0
  
  The brief of the whole aim and plan is documented on
http://wiki.qemu.org/Features/network_reentrant
  
  The main issue is about GSource or AioContext,
http://marc.info/?t=13631545332r=1w=3
  And I summarize the main points:
disadvantage for current AioContext
 1st. need to define and expand interface for other fd events, while glib 
  open this interface for user *
 2nd. need to add support for IOCanReadHandler, while gsource provide 
  prepare, check method to allow more flexible control
 3rd. block layer's AioContext will block other AioContexts on the same 
  thread.
 4th. need more document
   disadvantage for glib
 1st. if more than one fds on the same GSource, need re-implement 
  something like aio_set_file_handler
  
  Since I have succeeded in porting the frontend onto glib, there is no obstacle to using
  glib.
  
  
  v1-v2:
1.NetClientState can associate with up to 2 GSource, for virtio net, one 
  for tx, one for rx, 
  so vq can run on different threads.
2.make network front-end onto glib, currently virtio net dataplane
  
  
  Liu Ping Fan (4):
net: port tap onto glib
net: resolve race of tap backend and its peer
net: port hub onto glib
net: port virtio net onto glib
  
   hw/qdev-properties-system.c |1 +
   hw/virtio-net.c |  165 
  +++
   hw/virtio.c |6 ++
   hw/virtio.h |2 +
   include/net/net.h   |   27 +++
   include/net/queue.h |   14 
   net/hub.c   |   34 -
   net/net.c   |   97 +
   net/queue.c |4 +-
   net/tap.c   |   62 +---
   10 files changed, 397 insertions(+), 15 deletions(-)
 
 It seems the AioContext vs glib issue hasn't been settled yet.  My take
 is that glib is preferable *if* we don't need to write too many
 helpers/wrappers on top (then we're basically back to rolling our own
 AioContext).
 
 I was surprised by the amount of code required to listen on a file
 descriptor.  Are you sure there isn't a glib way of doing this that
 avoids rolling our own GSource?

GIOChannel provides a pre-baked GSource you can pass around, but this
might add more confusion to the mix when you consider things like
Slirp which will likely require a custom GSource. It also assumes a
different signature than GSourceFunc for the callback which further adds
to the confusion.

Keeping the interfaces centered around normal GSources I think would help
to avoid the proliferation of more event-handling registration
mechanisms in the future, and make conversions easier if we do need to
change things.

A generic GSource for handling FDs that we can re-use for basic use
cases would help curtail some of the boilerplate later for common fd
handlers. Probably doesn't make sense to generalize until we reach a
decision on glib vs. aiocontext though.
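
As a rough illustration only (a sketch, not code from this series), such a
generic fd GSource might look like:

#include <glib.h>

typedef struct FDSource {
    GSource source;
    GPollFD pfd;
} FDSource;

static gboolean fd_source_prepare(GSource *source, gint *timeout)
{
    *timeout = -1;      /* no timeout; we only wake up on fd events */
    return FALSE;       /* must poll before we know whether we're ready */
}

static gboolean fd_source_check(GSource *source)
{
    FDSource *fds = (FDSource *)source;
    return (fds->pfd.revents & fds->pfd.events) != 0;
}

static gboolean fd_source_dispatch(GSource *source, GSourceFunc cb,
                                   gpointer user_data)
{
    return cb ? cb(user_data) : FALSE;
}

static GSourceFuncs fd_source_funcs = {
    .prepare  = fd_source_prepare,
    .check    = fd_source_check,
    .dispatch = fd_source_dispatch,
};

GSource *fd_source_new(int fd, GIOCondition events)
{
    FDSource *fds = (FDSource *)g_source_new(&fd_source_funcs,
                                             sizeof(FDSource));

    fds->pfd.fd = fd;
    fds->pfd.events = events;          /* e.g. G_IO_IN | G_IO_OUT */
    g_source_add_poll(&fds->source, &fds->pfd);
    return &fds->source;
}

A net client could then attach such a source to the right GMainContext and
adjust pfd.events from a prepare() hook (e.g. drop G_IO_IN while its receive
queue is full), which is where the IOCanReadHandler-style flexibility would
come from.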

 
 In the next series, please drop the hub re-entrancy stuff and virtio-net
 data plane.  Instead just focus on systematically moving existing net
 clients onto the event loop (net/*.c and NICs).  The only controversial
 issue there is AioContext vs glib, and once that's settled we can merge
 the patches.
 
 Please avoid layering violations - for example a comment about
 virtio-net in net.h, a comment about vhost in tap, or putting
 net_source_funcs in net.h.  I think converting all existing net clients
 will help make the code changes appropriate and eliminate these kinds of
 hacks which are because you're focussing just on virtio, tap, and hub
 here.
 
 Stefan