Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-06-04 Thread Amos Kong
On Fri, May 31, 2013 at 11:02:54AM +0800, Amos Kong wrote:
 On Fri, May 31, 2013 at 08:35:28AM +0800, Amos Kong wrote:
  On Thu, May 30, 2013 at 04:54:41PM +0300, Michael S. Tsirkin wrote:
   On Tue, May 28, 2013 at 08:25:56AM -0400, Luiz Capitulino wrote:


2. Can you actually show the problem does exist so that we ensure this is
   not premature optimization? Might be a good idea to have this in the
   commit log
   
 (which is to couple the event with the query command) is
 appropriate. We're in user-space already, many things could slow
 the guest down apart from the event generation.
 
 Two questions:
 
  1. Do we know how slow (or how many packets are actually dropped)
 if the mac is changed too often *and* the event is always sent?

We always disable the interface first, then change the macaddr.
But we just got a patch that allows the guest to change the macaddr of
virtio-net while the interface is running.

| commit 2dcd0cce551983afe2f900125457f10bb5d980ae
| Author: Jiri Pirko jpi...@redhat.com
| Date:   Tue Dec 11 15:33:56 2012 -0500
| 
| [virt] virtio_net: allow to change mac when iface is running

  2. Does this solution consider what happens if the QMP client does
 respond timely to the event by issuing the query-rx-filter command?

We assume that the QMP client (management) cares about the mac changing
event, and will query the latest rx-filter state and sync it to the
macvtap device.

1) If the QMP client responds timely to the event: that's what we expected :)
   
   Won't this slow down the guest? If not, why?
 
 If the guest changes rx-filter configs frequently and management always
 queries very timely, this will slow down the guest.
 
 We should detect and handle such abnormal behavior from management.

That's not abnormal. Management is doing what it should do.

Maybe using the event throttle API can solve the management side of the
problem, but I still think we need a reproducible test-case to ensure
we're doing the right thing.
   
   I agree we should make sure this code is tested.
   It's pretty easy: run ifconfig in a loop in guest.
   
   Amos, did you try this? Probably should otherwise
   we don't really know whether the logic works.
  
  With v4 patch (without using event throttle)
  
  1. continually query rx-filter from monitor
  # while true; do echo info rx-filter | nc -U /tmp/m; done
  
  2. change mac in guest repeatedly
  # while true; do
   ifconfig eth1 down; ifconfig eth1 hw ether 12:00:00:00:00:00
   ifconfig eth1 down; ifconfig eth1 hw ether 14:00:00:00:00:00
   done
  
  One iteration (down if, change mac, up if) takes about 3,500,000 ns in the
  guest; some events will be ignored by qemu. I will try to only query when a
  NIC_RX_FILTER_CHANGED event arrives from the QMP monitor, querying ASAP.
  
  == Resource usage:
  
  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
16387 root  20   0 2375m 326m 6684 R 104.2  4.2   8:32.16 qemu-system-x86
  
  The loop script takes about 10% of guest CPU (1 core); the guest is not slow.
  
  
  If we don't use the flag (same effect as management taking 0 ns to
  respond and complete the query after the event comes):
 
 
 PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 4317 root  20   0 2377m 285m 6664 R 103.4  3.7   2:06.82 qemu-system-x86
  
  guest is very slow (no response to keyboard input); output:
  "clocksource tsc unstable".
 
 ^^  The reproduce rate : about 10%
 
 I did the following tests; it seems that repeatedly executing the VQ
 command with large data will cause the guest to hang.
 
 I could not slow the guest by repeatedly changing the rx-filter (see test1).

I did more tests in a clean environment, and found that the guest hang/slowdown
(no response from the monitor) is caused by flooding events. I could not
reproduce it with upstream qemu [1].

If I set event_throttle to 1 ~ 1000, the problem doesn't occur.

It's easier to reproduce this problem by changing the vlan config: not
because it passes more data with the VQ cmd, but because it causes more
events.


In this case, we can set event_throttle to 1 for the NIC_RX_FILTER_CHANGED
event to avoid slowing the guest. The 1 ms delay should be acceptable?
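To make the throttling proposal concrete, here is a minimal Python model of
what a per-event rate limit does (this is only a sketch of the behavior; the
class and callback names are illustrative, not qemu's actual C API): within
the rate window the newest event replaces any pending one, so a flood of
guest-triggered changes collapses to a couple of deliveries while management
still ends up seeing the latest state.

```python
import time

class EventThrottle:
    """Model of QMP event rate-limiting: deliver at most one event per
    `rate_ms` window; the newest event replaces any pending one, so the
    client always ends up observing the latest state."""

    def __init__(self, rate_ms, emit):
        self.rate = rate_ms / 1000.0
        self.emit = emit            # callback delivering the event to the client
        self.last = 0.0             # time of the last delivery
        self.pending = None         # newest suppressed event, if any

    def queue(self, event):
        now = time.monotonic()
        if now - self.last >= self.rate:
            self.last = now
            self.pending = None     # a direct emit supersedes any pending event
            self.emit(event)
        else:
            self.pending = event    # coalesce: keep only the latest

    def flush(self):
        # called by a timer when the rate window expires
        if self.pending is not None:
            self.last = time.monotonic()
            self.emit(self.pending)
            self.pending = None

delivered = []
t = EventThrottle(1, delivered.append)   # 1 ms window, as proposed above
for i in range(1000):                    # guest triggers a burst of changes
    t.queue({"event": "NIC_RX_FILTER_CHANGED", "seq": i})
t.flush()
```

The burst typically collapses to the first event plus the coalesced final
one, which is the property that protects both the monitor client and the
guest from event floods.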

And we can continue to use that flag to reduce the events.


[1] 6a4e17711442849bf2cc731ccddef5a2a2d92d29 (Sun Apr 14 18:10:28 2013)

 NOTE: Test1,2,3,4 didn't use control flag, emit all the events to monitor.
 
 == test1 (applied v4)
 I tried to test by repeatedly change promisc mode (on/off).
 The difference is that there is no data 

Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-06-04 Thread Amos Kong
On Tue, Jun 04, 2013 at 02:43:11PM +0800, Amos Kong wrote:
 


 I did more tests in a clean environment, and found that the guest hang/slowdown
 (no response from the monitor) is caused by flooding events. I could not
 reproduce it with upstream qemu [1].
 
 If I set event_throttle to 1 ~ 1000, the problem doesn't occur.
 
 It's easier to reproduce this problem by changing the vlan config: not
 because it passes more data with the VQ cmd, but because it causes more
 events.
 
 
 In this case, we can set event_throttle to 1 for the NIC_RX_FILTER_CHANGED
 event to avoid slowing the guest. The 1 ms delay should be acceptable?

Just discussed with mst in IRC.

Here we have two problems:
(1) a huge number of events will flood the monitor client (management);
(2) emitting a huge number of events will slow the guest itself.

Both the flag (nc->rxfilter_notify_enabled) and the event_throttle API
can be used to avoid problem (1).

Event_throttle API can clearly avoid problem (2).

In real testing, I found it's difficult to reproduce problem (2) if we
already use the flag. It seems the response time is large enough that some
events get dropped, so the guest cannot be slowed down.

Michael told me that we have many ways to slow the guest itself, so it's
not a big issue here.

We care about the delay in responding to the event, so we should only use
the control flag (as in my patch v4).
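The control-flag scheme described above can be sketched in a few lines.
This is a Python model of the idea only (the real code is a C field,
nc->rxfilter_notify_enabled, inside qemu; the class and method names here
are hypothetical): emit the event only when there is no un-read event
outstanding, and re-arm the flag when management issues query-rx-filter.

```python
class NetClient:
    """Model of the un-read-event flag: one event per NIC stays pending
    until management reads the state, so a guest hammering the rx-filter
    cannot flood the monitor."""

    def __init__(self, name, emit):
        self.name = name
        self.emit = emit                 # sends a QMP event to the client
        self.notify_enabled = True       # True = no un-read event outstanding

    def rx_filter_changed(self):
        # guest touched the rx-filter (mac/vlan/promisc change)
        if self.notify_enabled:
            self.emit({"event": "NIC_RX_FILTER_CHANGED", "name": self.name})
            self.notify_enabled = False  # suppress until management reads

    def query_rx_filter(self):
        # management read the current state: re-arm the notification
        self.notify_enabled = True
        return {"name": self.name, "promisc": False}  # state fields elided

events = []
nic = NetClient("net0", events.append)
for _ in range(10000):        # guest hammers the rx-filter
    nic.rx_filter_changed()   # only the first change emits an event
nic.query_rx_filter()         # management queries, re-arming the flag
nic.rx_filter_changed()       # the next change emits again
```

Note the trade-off discussed in this thread: the flag bounds the number of
un-read events (problem 1) and keeps delivery immediate, but unlike the
throttle it does not by itself limit how often the emit path runs.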

What's your opinion?



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-06-04 Thread Michael S. Tsirkin
On Tue, Jun 04, 2013 at 03:42:19PM +0800, Amos Kong wrote:
 On Tue, Jun 04, 2013 at 02:43:11PM +0800, Amos Kong wrote:
  
 
 
  I did more tests in a clean environment, and found that the guest hang/slowdown
  (no response from the monitor) is caused by flooding events. I could not
  reproduce it with upstream qemu [1].
  
  If I set event_throttle to 1 ~ 1000, the problem doesn't occur.
  
  It's easier to reproduce this problem by changing the vlan config: not
  because it passes more data with the VQ cmd, but because it causes more
  events.
  
  
  In this case, we can set event_throttle to 1 for the NIC_RX_FILTER_CHANGED
  event to avoid slowing the guest. The 1 ms delay should be acceptable?
 
 Just discussed with mst in IRC.
 
 Here we have two problems:
 (1) a huge number of events will flood the monitor client (management);
 (2) emitting a huge number of events will slow the guest itself.
 
 Both the flag (nc->rxfilter_notify_enabled) and the event_throttle API
 can be used to avoid problem (1).
 
 Event_throttle API can clearly avoid problem (2).
 
 In real testing, I found it's difficult to reproduce problem (2) if we
 already use the flag. It seems the response time is large enough that some
 events get dropped, so the guest cannot be slowed down.
 
 Michael told me that we have many ways to slow the guest itself, so it's
 not a big issue here.
 
 We care about the delay in responding to the event, so we should only use
 the control flag (as in my patch v4).
 
 What's your opinion?

Sounds reasonable.
Pls send v3 and we'll discuss :)



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-30 Thread Amos Kong
On Tue, May 28, 2013 at 08:25:56AM -0400, Luiz Capitulino wrote:
 On Tue, 28 May 2013 06:43:04 +0800
 Amos Kong ak...@redhat.com wrote:

CC: netdev, vlad

Currently we create and open the macvtap device via libvirt (management),
and pass the fd to the qemu process. The macvtap device works in promiscuous
mode; we want to sync the rx-filter setup of virtio-net to the macvtap
device for better performance.

qemu might be a non-privileged process without permission to set up the
macvtap device, so we want to add a QMP event to notify management to
execute the setup.

mac-programming over macvtap patch for qemu:
http://lists.nongnu.org/archive/html/qemu-devel/2013-05/msg03337.html
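The management-side loop this event enables can be sketched as follows.
This is only an illustration of the flow, assuming hypothetical callables
for the QMP client and the privileged macvtap programming that libvirt
would do; it is not libvirt's actual API:

```python
def handle_events(qmp_events, query_rx_filter, apply_to_macvtap):
    """React to each NIC_RX_FILTER_CHANGED by querying the *latest*
    rx-filter state over QMP and pushing it down to the macvtap device.
    Querying on each event (rather than carrying state in the event)
    means a lost or coalesced event still converges to the newest state."""
    for ev in qmp_events:
        if ev.get("event") != "NIC_RX_FILTER_CHANGED":
            continue
        name = ev["data"]["name"]
        state = query_rx_filter(name)    # QMP: query-rx-filter
        apply_to_macvtap(name, state)    # privileged work qemu cannot do

applied = []
handle_events(
    [{"event": "NIC_RX_FILTER_CHANGED", "data": {"name": "net0"}}],
    lambda name: {"unicast-table": ["12:00:00:00:00:00"]},  # stand-in QMP client
    lambda name, state: applied.append((name, state)),      # stand-in macvtap setup
)
```

The key point of the design is that the privileged step stays in management,
which (as noted later in the thread) can also validate the request instead
of blindly applying whatever qemu reports.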


Can we reconsider setting up macvtap in qemu directly, i.e. opening some
rx-filter setup permission to the qemu process? I will do some investigation.
 

  On Mon, May 27, 2013 at 09:24:28AM -0400, Luiz Capitulino wrote:
   On Mon, 27 May 2013 09:10:11 -0400
   Luiz Capitulino lcapitul...@redhat.com wrote:
   
 We use the QMP event to notify management about the mac changing.
 
 In this thread, we _wrongly_ considered using the qmp approach to delay
 the event to avoid the flooding.
 
   eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
 
 Now we have a solution (using a flag to turn the notify on/off) to avoid
 the flooding: only emit the event if we have no un-read event.
 
 If we want to emit the event (flag is on), we wish the event to be sent
 ASAP (so event_throttle isn't needed).

Unfortunately this doesn't answer my question. I did understand why you're
not using the event throttle API (which is because you don't want to slow
down the guest, not the QMP client).

My point is whether coupling the event with the query command is really
justified or even if it really fixes the problem. Two points:

 1. Coupling them is bad design, and will probably strike back, as we plan
    for a better API for events where events can be disabled
   
   I meant we may in the future, for example, introduce the ability to
   disable commands (and events). One could argue that the event w/o a query
   command is not that useful, as events can be lost. But losing an event is
   one thing, not having it because it got disabled by a side effect is
   another.
  
  event_throttle() couples the event in the QMP framework, but we use flags
  to disable the event at the real source (emit points/senders).
  
  If we set evstate->rate to -1, we can ignore the events in
  monitor_protocol_event_queue(), but we could not control the event
  emitting of each emit point (each nic).
   
   But anyway, my main point in this thread is to make sure we at least
   justify having this coupling. Aren't we optimizing prematurely? Aren't
   we optimizing for a corner case? That's what I want to see answered.
  
  If it's a corner case, we don't need a general API to disable events.
 
 If it's a corner case, is it really worth fixing?
 
 I think what we need is a real-world test-case to show us we're
 doing the right thing.
 
  We can disable this event by a flag, and introduce a new API
  if we have same request from other events.
  
 2. Can you actually show the problem does exist so that we ensure this is
    not premature optimization? Might be a good idea to have this in the
    commit log

  (which is to couple the event with the query command) is
  appropriate. We're in user-space already, many things could slow
  the guest down apart from the event generation.
  
  Two questions:
  
   1. Do we know how slow (or how many packets are actually dropped)
  if the mac is changed too often *and* the event is always sent?
 
 We always disable the interface first, then change the macaddr.
 But we just got a patch that allows the guest to change the macaddr of
 virtio-net while the interface is running.
 
 | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
 | Author: Jiri Pirko jpi...@redhat.com
 | Date:   Tue Dec 11 15:33:56 2012 -0500
 | 
 | [virt] virtio_net: allow to change mac when iface is running
 
   2. Does this solution consider what happens if the QMP client does
  respond timely to the event by issuing the query-rx-filter
  command?
 
 We assume that the QMP client (management) cares about the mac 
 changing
 event, and will query the latest rx-filter state and sync to macvtap
 device.
 
 1) If QMP client respond timely to the event: that's what we expected 
 :)

Won't this slow down the guest? If not, why?
  
  If the guest changes rx-filter configs frequently and management always
  queries very timely, this will slow down the guest.
  
  We should detect and handle such abnormal behavior from management.
 
 That's not abnormal. Management is doing what it should do.
 
 Maybe using the event throttle API 

Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-30 Thread Michael S. Tsirkin
On Tue, May 28, 2013 at 08:25:56AM -0400, Luiz Capitulino wrote:
 On Tue, 28 May 2013 06:43:04 +0800
 Amos Kong ak...@redhat.com wrote:
 
  On Mon, May 27, 2013 at 09:24:28AM -0400, Luiz Capitulino wrote:
   On Mon, 27 May 2013 09:10:11 -0400
   Luiz Capitulino lcapitul...@redhat.com wrote:
   
 We use the QMP event to notify management about the mac changing.
 
 In this thread, we _wrongly_ considered to use qmp approach to delay
 the event for avoiding the flooding.
 
   eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
 
 Now we have a solution (using a flag to turn on/off the notify) to 
 avoid the
 flooding, only emit the event if we have no un-read event.
 
 If we want to (flag is on) emit the event, we wish the event be sent 
 ASAP
 (so event_throttle isn't needed).

Unfortunately this doesn't answer my question. I did understand why 
you're
not using the event throttle API (which is because you don't want to 
slow down
the guest, not the QMP client).

My point is whether coupling the event with the query command is really
justified or even if it really fixes the problem. Two points:

 1. Coupling them is bad design, and will probably strike back, as we 
plan
for a better API for events where events can be disabled
   
   I meant we may in the future, for example, introduce the ability to
   disable commands (and events). One could argue that the event w/o a query
   command is not that useful, as events can be lost. But losing an event is
   one thing, not having it because it got disabled by a side effect is
   another.
  
  event_throttle() couples the event in the QMP framework, but we use flags
  to disable the event at the real source (emit points/senders).
  
  If we set evstate->rate to -1, we can ignore the events in
  monitor_protocol_event_queue(), but we could not control the event
  emitting of each emit point (each nic).
   
   But anyway, my main point in this thread is to make sure we at least
   justify having this coupling. Aren't we optimizing prematurely? Aren't
   we optimizing for a corner case? That's what I want to see answered.
  
  If it's a corner case, we don't need a general API to disable events.
 
 If it's a corner case, is it really worth fixing?
 
 I think what we need is a real-world test-case to show us we're
 doing the right thing.
 
  We can disable this event by a flag, and introduce a new API
  if we have same request from other events.
  
 2. Can you actually show the problem does exist so that we ensure this 
is
not premature optimization? Might be a good idea to have this in the
commit log

  (which is to couple the event with the query command) is
  appropriate. We're in user-space already, many things could slow
  the guest down apart from the event generation.
  
  Two questions:
  
   1. Do we know how slow (or how many packets are actually dropped)
  if the mac is changed too often *and* the event is always sent?
 
 We always disable the interface first, then change the macaddr.
 But we just got a patch that allows the guest to change the macaddr of
 virtio-net while the interface is running.
 
 | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
 | Author: Jiri Pirko jpi...@redhat.com
 | Date:   Tue Dec 11 15:33:56 2012 -0500
 | 
 | [virt] virtio_net: allow to change mac when iface is running
 
   2. Does this solution consider what happens if the QMP client does
  respond timely to the event by issuing the query-rx-filter
  command?
 
 We assume that the QMP client (management) cares about the mac 
 changing
 event, and will query the latest rx-filter state and sync to macvtap
 device.
 
 1) If QMP client respond timely to the event: that's what we expected 
 :)

Won't this slow down the guest? If not, why?
  
  If the guest changes rx-filter configs frequently and management always
  queries very timely, this will slow down the guest.
  
  We should detect and handle such abnormal behavior from management.
 
 That's not abnormal. Management is doing what it should do.
 
 Maybe using the event throttle API can solve the management side of the
 problem, but I still think we need a reproducible test-case to ensure
 we're doing the right thing.

I agree we should make sure this code is tested.
It's pretty easy: run ifconfig in a loop in guest.

Amos, did you try this? Probably should otherwise
we don't really know whether the logic works.


  Management (the qmp client) always responds timely to the event in the
  beginning. If the guest changes the rx-filter very frequently and
  continuously, then we increase evstate->rate, or even disable the event.
  
  In the normal usecase, we should consider packet losing first (caused by
  event delay + the delay is used by management to execute the 

Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-30 Thread Michael S. Tsirkin
On Thu, May 30, 2013 at 09:50:35PM +0800, Amos Kong wrote:
 On Tue, May 28, 2013 at 08:25:56AM -0400, Luiz Capitulino wrote:
  On Tue, 28 May 2013 06:43:04 +0800
  Amos Kong ak...@redhat.com wrote:
 
 CC: netdev, vlad
 
 Currently we create and open the macvtap device via libvirt (management),
 and pass the fd to the qemu process. The macvtap device works in promiscuous
 mode; we want to sync the rx-filter setup of virtio-net to the macvtap
 device for better performance.
 
 qemu might be a non-privileged process without permission to set up the
 macvtap device, so we want to add a QMP event to notify management to
 execute the setup.
 
 mac-programming over macvtap patch for qemu:
 http://lists.nongnu.org/archive/html/qemu-devel/2013-05/msg03337.html
 
 
 Can we reconsider setting up macvtap in qemu directly, i.e. opening some
 rx-filter setup permission to the qemu process? I will do some investigation.
  

I don't think that's a good idea. I expect management to do more
than blindly apply anything qemu tells it to.
For example, check that host admin actually allowed this,
or verify that MAC is unique on this host.

-- 
MST



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-30 Thread Amos Kong
On Thu, May 30, 2013 at 04:54:41PM +0300, Michael S. Tsirkin wrote:
 On Tue, May 28, 2013 at 08:25:56AM -0400, Luiz Capitulino wrote:
  On Tue, 28 May 2013 06:43:04 +0800
  Amos Kong ak...@redhat.com wrote:
  
   On Mon, May 27, 2013 at 09:24:28AM -0400, Luiz Capitulino wrote:
On Mon, 27 May 2013 09:10:11 -0400
Luiz Capitulino lcapitul...@redhat.com wrote:

  We use the QMP event to notify management about the mac changing.
  
  In this thread, we _wrongly_ considered to use qmp approach to delay
  the event for avoiding the flooding.
  
eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
  
  Now we have a solution (using a flag to turn on/off the notify) to 
  avoid the
  flooding, only emit the event if we have no un-read event.
  
  If we want to (flag is on) emit the event, we wish the event be 
  sent ASAP
  (so event_throttle isn't needed).
 
 Unfortunately this doesn't answer my question. I did understand why 
 you're
 not using the event throttle API (which is because you don't want to 
 slow down
 the guest, not the QMP client).
 
 My point is whether coupling the event with the query command is 
 really
 justified or even if it really fixes the problem. Two points:
 
  1. Coupling them is bad design, and will probably strike back, as we 
 plan
 for a better API for events where events can be disabled

I meant we may in the future, for example, introduce the ability to disable
commands (and events). One could argue that the event w/o a query command
is not that useful, as events can be lost. But losing an event is one thing,
not having it because it got disabled by a side effect is another.
   
   event_throttle() couples the event in the QMP framework, but we use flags
   to disable the event at the real source (emit points/senders).
   
   If we set evstate->rate to -1, we can ignore the events in
   monitor_protocol_event_queue(), but we could not control the event
   emitting of each emit point (each nic).

But anyway, my main point in this thread is to make sure we at least
justify having this coupling. Aren't we optimizing prematurely? Aren't
we optimizing for a corner case? That's what I want to see answered.
   
   If it's a corner case, we don't need a general API to disable events.
  
  If it's a corner case, is it really worth fixing?
  
  I think what we need is a real-world test-case to show us we're
  doing the right thing.
  
   We can disable this event by a flag, and introduce a new API
   if we have same request from other events.
   
  2. Can you actually show the problem does exist so that we ensure 
 this is
 not premature optimization? Might be a good idea to have this in 
 the
 commit log
 
   (which is to couple the event with the query command) is
   appropriate. We're in user-space already, many things could slow
   the guest down apart from the event generation.
   
   Two questions:
   
1. Do we know how slow (or how many packets are actually dropped)
   if the mac is changed too often *and* the event is always 
   sent?
  
  We always disable the interface first, then change the macaddr.
  But we just got a patch that allows the guest to change the macaddr of
  virtio-net while the interface is running.
  
  | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
  | Author: Jiri Pirko jpi...@redhat.com
  | Date:   Tue Dec 11 15:33:56 2012 -0500
  | 
  | [virt] virtio_net: allow to change mac when iface is running
  
2. Does this solution consider what happens if the QMP client 
   does
   respond timely to the event by issuing the query-rx-filter
   command?
  
  We assume that the QMP client (management) cares about the mac 
  changing
  event, and will query the latest rx-filter state and sync to macvtap
  device.
  
  1) If QMP client respond timely to the event: that's what we 
  expected :)
 
 Won't this slow down the guest? If not, why?
   
   If the guest changes rx-filter configs frequently and management always
   queries very timely, this will slow down the guest.
   
   We should detect and handle such abnormal behavior from management.
  
  That's not abnormal. Management is doing what it should do.
  
  Maybe using the event throttle API can solve the management side of the
  problem, but I still think we need a reproducible test-case to ensure
  we're doing the right thing.
 
 I agree we should make sure this code is tested.
 It's pretty easy: run ifconfig in a loop in guest.
 
 Amos, did you try this? Probably should otherwise
 we don't really know whether the logic works.

With v4 patch (without using event throttle)

1. continually query rx-filter from monitor
# while true; do echo info 

Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-30 Thread Amos Kong
On Fri, May 31, 2013 at 08:35:28AM +0800, Amos Kong wrote:
 On Thu, May 30, 2013 at 04:54:41PM +0300, Michael S. Tsirkin wrote:
  On Tue, May 28, 2013 at 08:25:56AM -0400, Luiz Capitulino wrote:
   On Tue, 28 May 2013 06:43:04 +0800
   Amos Kong ak...@redhat.com wrote:
   
On Mon, May 27, 2013 at 09:24:28AM -0400, Luiz Capitulino wrote:
 On Mon, 27 May 2013 09:10:11 -0400
 Luiz Capitulino lcapitul...@redhat.com wrote:
 
   We use the QMP event to notify management about the mac changing.
   
   In this thread, we _wrongly_ considered to use qmp approach to 
   delay
   the event for avoiding the flooding.
   
 eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 
   1000);
   
   Now we have a solution (using a flag to turn on/off the notify) 
   to avoid the
   flooding, only emit the event if we have no un-read event.
   
   If we want to (flag is on) emit the event, we wish the event be 
   sent ASAP
   (so event_throttle isn't needed).
  
  Unfortunately this doesn't answer my question. I did understand why 
  you're
  not using the event throttle API (which is because you don't want 
  to slow down
  the guest, not the QMP client).
  
  My point is whether coupling the event with the query command is 
  really
  justified or even if it really fixes the problem. Two points:
  
   1. Coupling them is bad design, and will probably strike back, as 
  we plan
  for a better API for events where events can be disabled
 
  I meant we may in the future, for example, introduce the ability to
  disable commands (and events). One could argue that the event w/o a query
  command is not that useful, as events can be lost. But losing an event is
  one thing, not having it because it got disabled by a side effect is
  another.

event_throttle() couples the event in the QMP framework, but we use flags
to disable the event at the real source (emit points/senders).

If we set evstate->rate to -1, we can ignore the events in
monitor_protocol_event_queue(), but we could not control the event
emitting of each emit point (each nic).
 
 But anyway, my main point in this thread is to make sure we at least
 justify having this coupling. Aren't we optimizing prematurely? Aren't
 we optimizing for a corner case? That's what I want to see answered.

If it's a corner case, we don't need a general API to disable events.
   
   If it's a corner case, is it really worth fixing?
   
   I think what we need is a real-world test-case to show us we're
   doing the right thing.
   
We can disable this event by a flag, and introduce a new API
if we have same request from other events.

   2. Can you actually show the problem does exist so that we ensure 
  this is
  not premature optimization? Might be a good idea to have this 
  in the
  commit log
  
(which is to couple the event with the query command) is
appropriate. We're in user-space already, many things could slow
the guest down apart from the event generation.

Two questions:

 1. Do we know how slow (or how many packets are actually 
dropped)
if the mac is changed too often *and* the event is always 
sent?
   
   We always disable the interface first, then change the macaddr.
   But we just got a patch that allows the guest to change the macaddr of
   virtio-net while the interface is running.
   
   | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
   | Author: Jiri Pirko jpi...@redhat.com
   | Date:   Tue Dec 11 15:33:56 2012 -0500
   | 
   | [virt] virtio_net: allow to change mac when iface is running
   
 2. Does this solution consider what happens if the QMP client 
does
respond timely to the event by issuing the query-rx-filter
command?
   
   We assume that the QMP client (management) cares about the mac 
   changing
   event, and will query the latest rx-filter state and sync to 
   macvtap
   device.
   
   1) If QMP client respond timely to the event: that's what we 
   expected :)
  
  Won't this slow down the guest? If not, why?

If the guest changes rx-filter configs frequently and management always
queries very timely, this will slow down the guest.

We should detect and handle such abnormal behavior from management.
   
   That's not abnormal. Management is doing what it should do.
   
   Maybe using the event throttle API can solve the management side of the
   problem, but I still think we need a reproducible test-case to ensure
   we're doing the right thing.
  
  I agree we should make sure this code is tested.
  It's pretty easy: run ifconfig in a loop in guest.
  
  

Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-28 Thread Luiz Capitulino
On Tue, 28 May 2013 06:43:04 +0800
Amos Kong ak...@redhat.com wrote:

 On Mon, May 27, 2013 at 09:24:28AM -0400, Luiz Capitulino wrote:
  On Mon, 27 May 2013 09:10:11 -0400
  Luiz Capitulino lcapitul...@redhat.com wrote:
  
We use the QMP event to notify management about the mac changing.

In this thread, we _wrongly_ considered using the qmp approach to delay
the event to avoid the flooding.

  eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);

Now we have a solution (using a flag to turn the notify on/off) to avoid
the flooding: only emit the event if we have no un-read event.

If we want to emit the event (flag is on), we wish the event to be sent
ASAP (so event_throttle isn't needed).
   
   Unfortunately this doesn't answer my question. I did understand why you're
   not using the event throttle API (which is because you don't want to slow
   down the guest, not the QMP client).
   
   My point is whether coupling the event with the query command is really
   justified or even if it really fixes the problem. Two points:
   
1. Coupling them is bad design, and will probably strike back, as we plan
   for a better API for events where events can be disabled
  
  I meant we may in the future, for example, introduce the ability to disable
  commands (and events). One could argue that the event w/o a query command
  is not that useful, as events can be lost. But losing an event is one
  thing, not having it because it got disabled by a side effect is another.
 
 event_throttle() couples the event in the QMP framework, but we use flags
 to disable the event at the real source (emit points/senders).
 
 If we set evstate->rate to -1, we can ignore the events in
 monitor_protocol_event_queue(), but we could not control the event
 emitting of each emit point (each nic).
  
  But anyway, my main point in this thread is to make sure we at least
  justify having this coupling. Aren't we optimizing prematurely? Aren't
  we optimizing for a corner case? That's what I want to see answered.
 
 If it's a corner case, we don't need a general API to disable events.

If it's a corner case, is it really worth fixing?

I think what we need is a real-world test-case to show us we're
doing the right thing.

 We can disable this event by a flag, and introduce a new API
 if we have same request from other events.
 
2. Can you actually show the problem does exist so that we ensure this is
   not premature optimization? Might be a good idea to have this in the
   commit log
   
 (which is to couple the event with the query command) is
 appropriate. We're in user-space already, many things could slow
 the guest down apart from the event generation.
 
 Two questions:
 
  1. Do we know how slow (or how many packets are actually dropped)
 if the mac is changed too often *and* the event is always sent?

We always disable the interface first, then change the macaddr.
But we just got a patch that allows the guest to change the macaddr of
virtio-net while the interface is running.

| commit 2dcd0cce551983afe2f900125457f10bb5d980ae
| Author: Jiri Pirko jpi...@redhat.com
| Date:   Tue Dec 11 15:33:56 2012 -0500
| 
| [virt] virtio_net: allow to change mac when iface is running

  2. Does this solution consider what happens if the QMP client does
 respond timely to the event by issuing the query-rx-filter
 command?

We assume that the QMP client (management) cares about the mac changing
event, and will query the latest rx-filter state and sync to macvtap
device.

1) If QMP client respond timely to the event: that's what we expected :)
   
   Won't this slow down the guest? If not, why?
 
 If the guest changes rx-filter configs frequently and management always
 queries very timely, this will slow down the guest.
 
 We should detect and handle such abnormal behavior from management.

That's not abnormal. Management is doing what it should do.

Maybe using the event throttle API can solve the management side of the
problem, but I still think we need a reproducible test-case to ensure we're
doing the right thing.

 Management (the qmp client) always responds timely to the event in the
 beginning. If the guest changes the rx-filter very frequently and
 continuously, then we increase evstate->rate, or even disable the event.
 
 In the normal use case, we should first consider packet loss (caused by
 the event delay plus the delay management needs to execute the change)
 
 ---
 BTW, currently we cannot test in a real environment. Once the related
 libvirt work finishes, we can evaluate with real delays, packet
 loss, etc.
 
 The worst case is that we cannot accept the delay (packet loss); then
 we need to consider another solution for mac programming of macvtap.
 
2) If the QMP client doesn't respond promptly to the event: packets might drop.
   If we change the mac when the interface is running, we can accept trivial
   packet drops.

Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-27 Thread Amos Kong
On Fri, May 24, 2013 at 08:51:36AM -0400, Luiz Capitulino wrote:
 On Fri, 24 May 2013 15:10:16 +0300
 Michael S. Tsirkin m...@redhat.com wrote:
 
  On Thu, May 23, 2013 at 01:26:33PM -0400, Luiz Capitulino wrote:
   On Thu, 23 May 2013 20:18:34 +0300
   Michael S. Tsirkin m...@redhat.com wrote:
   
On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
 On Thu, 16 May 2013 18:17:23 +0300
 Michael S. Tsirkin m...@redhat.com wrote:
 
   The
   existing throttling approach ensures that if the event includes 
   latest
   guest information, then the host doesn't even have to do a 
   query, and
   is guaranteed that reacting to the final event will always see 
   the most
   recent request.  But most importantly, if the existing throttling 
   works,
   why do we have to invent a one-off approach for this event 
   instead of
   reusing existing code?
 
 Sorry to restart this week old discussion, but I'm now reviewing the 
 patch
 in question and I dislike how we're coupling the event and the query
 command.
 
  Because of the 1st issue above. A large delay because we
 
 Has this been measured? How long is this large delay?
 
 Also, is it impossible for management to issue query-rx-filter
 on a reasonable rate that would also cause the same problems?
 IOW, how can we be sure we're fixing anything without trying it
 on a real use-case scenario?

Play with priorities, you can make management arbitrarily slow.  It's
just not sane to assume any timing guarantees for tasks running on
Linux.
   
   Would you mind elaborating? I'm not sure I understand how this answers
   my questions.
  
  Maybe I don't understand the questions.
  You are asking why the usual throttling isn't sufficient?
  This was discussed in this thread already.
  That's because it would introduce a huge delay if the guest
  changes the mac too often. People don't expect that
  changing a mac is something they should do slowly.

 You meant shouldn't?
 
 If I got it correctly, all you want to avoid is to call qobject_from_jsonf()
 and monitor_protocol_event() in the mac change path, because this will
 slow down the guest. Did I get it?

No.

We use the QMP event to notify management about the mac changing.

In this thread, we _wrongly_ considered using the QMP throttle approach
to delay the event to avoid the flooding.

  eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);

Now we have a solution (using a flag to turn the notify on/off) to avoid the
flooding: only emit the event if there is no un-read event.

If we want to emit the event (the flag is on), we want it sent ASAP
(so event_throttle isn't needed).
 
 If I did, my main point is whether or not the solution you're proposing
 (which is to couple the event with the query command) is
 appropriate. We're in user-space already, many things could slow
 the guest down apart from the event generation.
 
 Two questions:
 
  1. Do we know how slow (or how many packets are actually dropped)
 if the mac is changed too often *and* the event is always sent?

We always disable the interface first, then change the macaddr.
But we now have a patch that allows the guest to change the macaddr
of virtio-net while the interface is running.

| commit 2dcd0cce551983afe2f900125457f10bb5d980ae
| Author: Jiri Pirko jpi...@redhat.com
| Date:   Tue Dec 11 15:33:56 2012 -0500
| 
| [virt] virtio_net: allow to change mac when iface is running

  2. Does this solution consider what happens if the QMP client does
 respond timely to the event by issuing the query-rx-filter
 command?

We assume that the QMP client (management) cares about the mac-change
event, and will query the latest rx-filter state and sync it to the
macvtap device.

1) If the QMP client responds promptly to the event: that's what we expected :)

2) If the QMP client doesn't respond promptly to the event: packets might drop.
   If we change the mac when the interface is running, we can accept trivial
   packet drops.

For the second condition, we need to test in a real environment once libvirt
finishes the work of processing events.

-- 
Amos.
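
The flow described above — event, then query, then reprogramming macvtap — would
look roughly like this on the QMP wire. The event name and the query-rx-filter
command come from this thread; the reply fields are illustrative assumptions
only, since the actual schema is defined by the final version of the patches:

```json
{ "event": "NIC_RX_FILTER_CHANGED",
  "data": { "name": "vnet0" },
  "timestamp": { "seconds": 1368697518, "microseconds": 326866 } }

{ "execute": "query-rx-filter", "arguments": { "name": "vnet0" } }

{ "return": [ { "name": "vnet0",
                "promiscuous": false,
                "multicast": "normal",
                "main-mac": "52:54:00:12:34:56",
                "unicast-table": [] } ] }
```

Management then applies the returned filter state to the macvtap device; the
query also re-arms the notification for that NIC.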



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-27 Thread Luiz Capitulino
On Mon, 27 May 2013 17:34:25 +0800
Amos Kong ak...@redhat.com wrote:

 On Fri, May 24, 2013 at 08:51:36AM -0400, Luiz Capitulino wrote:
  On Fri, 24 May 2013 15:10:16 +0300
  Michael S. Tsirkin m...@redhat.com wrote:
  
   On Thu, May 23, 2013 at 01:26:33PM -0400, Luiz Capitulino wrote:
On Thu, 23 May 2013 20:18:34 +0300
Michael S. Tsirkin m...@redhat.com wrote:

 On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
  On Thu, 16 May 2013 18:17:23 +0300
  Michael S. Tsirkin m...@redhat.com wrote:
  
The
existing throttling approach ensures that if the event includes 
latest
guest information, then the host doesn't even have to do a 
query, and
is guaranteed that reacting to the final event will always see 
the most
recent request.  But most importantly, if the existing 
throttling works,
why do we have to invent a one-off approach for this event 
instead of
reusing existing code?
  
  Sorry to restart this week old discussion, but I'm now reviewing 
  the patch
  in question and I dislike how we're coupling the event and the query
  command.
  
   Because of the 1st issue above. A large delay because we
  
  Has this been measured? How long is this large delay?
  
  Also, is it impossible for management to issue query-rx-filter
  on a reasonable rate that would also cause the same problems?
  IOW, how can we be sure we're fixing anything without trying it
  on a real use-case scenario?
 
 Play with priorities, you can make management arbitrarily slow.  It's
 just not sane to assume any timing guarantees for tasks running on
 Linux.

Would you mind elaborating? I'm not sure I understand how this answers
my questions.
   
   Maybe I don't understand the questions.
   You are asking why the usual throttling isn't sufficient?
   This was discussed in this thread already.
   That's because it would introduce a huge delay if the guest
   changes the mac too often. People don't expect that
   changing a mac is something they should do slowly.
 
  You meant shouldn't?
  
  If I got it correctly, all you want to avoid is to call qobject_from_jsonf()
  and monitor_protocol_event() in the mac change path, because this will
  slow down the guest. Did I get it?
 
 No.
 
 We use the QMP event to notify management about the mac changing.
 
 In this thread, we _wrongly_ considered using the QMP throttle approach
 to delay the event to avoid the flooding.
 
   eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
 
 Now we have a solution (using a flag to turn the notify on/off) to avoid the
 flooding: only emit the event if there is no un-read event.
 
 If we want to emit the event (the flag is on), we want it sent ASAP
 (so event_throttle isn't needed).

Unfortunately this doesn't answer my question. I did understand why you're
not using the event throttle API (which is because you don't want to slow down
the guest, not the QMP client).

My point is whether coupling the event with the query command is really
justified or even if it really fixes the problem. Two points:

 1. Coupling them is bad design, and will probably strike back, as we plan
for a better API for events where events can be disabled

 2. Can you actually show the problem does exist so that we ensure this is
not premature optimization? Might be a good idea to have this in the
commit log

  (which is to couple the event with the query command) is
  appropriate. We're in user-space already, many things could slow
  the guest down apart from the event generation.
  
  Two questions:
  
   1. Do we know how slow (or how many packets are actually dropped)
  if the mac is changed too often *and* the event is always sent?
 
 We always disable the interface first, then change the macaddr.
 But we now have a patch that allows the guest to change the macaddr
 of virtio-net while the interface is running.
 
 | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
 | Author: Jiri Pirko jpi...@redhat.com
 | Date:   Tue Dec 11 15:33:56 2012 -0500
 | 
 | [virt] virtio_net: allow to change mac when iface is running
 
   2. Does this solution consider what happens if the QMP client does
  respond timely to the event by issuing the query-rx-filter
  command?
 
 We assume that the QMP client (management) cares about the mac-change
 event, and will query the latest rx-filter state and sync it to the
 macvtap device.
 
 1) If the QMP client responds promptly to the event: that's what we expected :)

Won't this slow down the guest? If not, why?

 
 2) If the QMP client doesn't respond promptly to the event: packets might drop.
If we change the mac when the interface is running, we can accept trivial
packet drops.
 
 For the second condition, we need to test in a real environment once libvirt
 finishes the work of processing events.
 




Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-27 Thread Luiz Capitulino
On Mon, 27 May 2013 09:10:11 -0400
Luiz Capitulino lcapitul...@redhat.com wrote:

  We use the QMP event to notify management about the mac changing.
  
  In this thread, we _wrongly_ considered to use qmp approach to delay
  the event for avoiding the flooding.
  
eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
  
  Now we have a solution (using a flag to turn on/off the notify) to avoid the
  flooding, only emit the event if we have no un-read event.
  
  If we want to (flag is on) emit the event, we wish the event be sent ASAP
  (so event_throttle isn't needed).
 
 Unfortunately this doesn't answer my question. I did understand why you're
 not using the event throttle API (which is because you don't want to slow down
 the guest, not the QMP client).
 
 My point is whether coupling the event with the query command is really
 justified or even if it really fixes the problem. Two points:
 
  1. Coupling them is bad design, and will probably strike back, as we plan
 for a better API for events where events can be disabled

I meant we may in the future, for example, introduce the ability to disable
commands (and events). One could argue that the event w/o a query command
is not that useful, as events can be lost. But losing an event is one thing,
not having it because it got disabled by a side effect is another.

But anyway, my main point in this thread is to make sure we at least
justify having this coupling. Aren't we optimizing prematurely? Aren't
we optimizing for a corner case? That's what I want to see answered.



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-27 Thread Amos Kong
On Mon, May 27, 2013 at 09:24:28AM -0400, Luiz Capitulino wrote:
 On Mon, 27 May 2013 09:10:11 -0400
 Luiz Capitulino lcapitul...@redhat.com wrote:
 
   We use the QMP event to notify management about the mac changing.
   
   In this thread, we _wrongly_ considered using the QMP throttle approach
   to delay the event to avoid the flooding.
   
 eg: monitor_protocol_event_throttle(NIC_RX_FILTER_CHANGED, 1000);
   
   Now we have a solution (using a flag to turn the notify on/off) to avoid
   the flooding: only emit the event if there is no un-read event.
   
   If we want to emit the event (the flag is on), we want it sent ASAP
   (so event_throttle isn't needed).
  
  Unfortunately this doesn't answer my question. I did understand why you're
  not using the event throttle API (which is because you don't want to slow 
  down
  the guest, not the QMP client).
  
  My point is whether coupling the event with the query command is really
  justified or even if it really fixes the problem. Two points:
  
   1. Coupling them is bad design, and will probably strike back, as we plan
  for a better API for events where events can be disabled
 
 I meant we may in the future, for example, introduce the ability to disable
 commands (and events). One could argue that the event w/o a query command
 is not that useful, as events can be lost. But losing an event is one thing,
 not having it because it got disabled by a side effect is another.

event_throttle() couples the event to the QMP framework, but we use flags
to disable the event at the real source (emit points/senders).

If we set evstate->rate to -1, we can ignore the events in
monitor_protocol_event_queue(), but we cannot control the event
emission of each emit point (each NIC).
 
 But anyway, my main point in this thread is to make sure we at least
 justify having this coupling. Aren't we optimizing prematurely? Aren't
 we optimizing for a corner case? That's what I want to see answered.

If it's a corner case, we don't need a general API to disable the event.

We can disable this event by a flag, and introduce a new API
if we have the same request from other events.

   2. Can you actually show the problem does exist so that we ensure this is
  not premature optimization? Might be a good idea to have this in the
  commit log
  
(which is to couple the event with the query command) is
appropriate. We're in user-space already, many things could slow
the guest down apart from the event generation.

Two questions:

 1. Do we know how slow (or how many packets are actually dropped)
if the mac is changed too often *and* the event is always sent?
   
   We always disable the interface first, then change the macaddr.
   But we now have a patch that allows the guest to change the macaddr
   of virtio-net while the interface is running.
   
   | commit 2dcd0cce551983afe2f900125457f10bb5d980ae
   | Author: Jiri Pirko jpi...@redhat.com
   | Date:   Tue Dec 11 15:33:56 2012 -0500
   | 
   | [virt] virtio_net: allow to change mac when iface is running
   
 2. Does this solution consider what happens if the QMP client does
respond timely to the event by issuing the query-rx-filter
command?
   
   We assume that the QMP client (management) cares about the mac-change
   event, and will query the latest rx-filter state and sync it to the
   macvtap device.
   
   1) If the QMP client responds promptly to the event: that's what we expected :)
  
  Won't this slow down the guest? If not, why?

If the guest changes rx-filter configs frequently and management always
queries right after each event, this will slow down the guest.

We should detect and process abnormal behavior from management.
Management (the QMP client) always responds promptly to the event in the
beginning. If the guest changes the rx-filter very frequently and
continuously, we then increase the evstate->rate, or even disable the event.

In the normal use case, we should first consider packet loss (caused by
the event delay plus the delay management needs to execute the change)

---
BTW, currently we cannot test in a real environment. Once the related
libvirt work finishes, we can evaluate with real delays, packet
loss, etc.

The worst case is that we cannot accept the delay (packet loss); then
we need to consider another solution for mac programming of macvtap.

   2) If the QMP client doesn't respond promptly to the event: packets might drop.
  If we change the mac when the interface is running, we can accept trivial
  packet drops.
   
   For the second condition, we need to test in a real environment once
   libvirt finishes the work of processing events.

-- 
Amos.



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-24 Thread Michael S. Tsirkin
On Thu, May 23, 2013 at 01:26:33PM -0400, Luiz Capitulino wrote:
 On Thu, 23 May 2013 20:18:34 +0300
 Michael S. Tsirkin m...@redhat.com wrote:
 
  On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
   On Thu, 16 May 2013 18:17:23 +0300
   Michael S. Tsirkin m...@redhat.com wrote:
   
 The
 existing throttling approach ensures that if the event includes latest
 guest information, then the host doesn't even have to do a query, 
 and
 is guaranteed that reacting to the final event will always see the 
 most
 recent request.  But most importantly, if the existing throttling 
 works,
 why do we have to invent a one-off approach for this event instead of
 reusing existing code?
   
   Sorry to restart this week old discussion, but I'm now reviewing the patch
   in question and I dislike how we're coupling the event and the query
   command.
   
Because of the 1st issue above. A large delay because we
   
   Has this been measured? How long is this large delay?
   
   Also, is it impossible for management to issue query-rx-filter
   on a reasonable rate that would also cause the same problems?
   IOW, how can we be sure we're fixing anything without trying it
   on a real use-case scenario?
  
  Play with priorities, you can make management arbitrarily slow.  It's
  just not sane to assume any timing guarantees for tasks running on
  Linux.
 
 Would you mind elaborating? I'm not sure I understand how this answers
 my questions.

Maybe I don't understand the questions.
You are asking why the usual throttling isn't sufficient?
This was discussed in this thread already.
That's because it would introduce a huge delay if the guest
changes the mac too often. People don't expect that
changing a mac is something they should do slowly.

Event throttling was not designed for events where
guest asks management to do something.

-- 
MST



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-24 Thread Luiz Capitulino
On Fri, 24 May 2013 15:10:16 +0300
Michael S. Tsirkin m...@redhat.com wrote:

 On Thu, May 23, 2013 at 01:26:33PM -0400, Luiz Capitulino wrote:
  On Thu, 23 May 2013 20:18:34 +0300
  Michael S. Tsirkin m...@redhat.com wrote:
  
   On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
On Thu, 16 May 2013 18:17:23 +0300
Michael S. Tsirkin m...@redhat.com wrote:

  The
  existing throttling approach ensures that if the event includes 
  latest
  guest information, then the host doesn't even have to do a 
  query, and
  is guaranteed that reacting to the final event will always see the 
  most
  recent request.  But most importantly, if the existing throttling 
  works,
  why do we have to invent a one-off approach for this event instead 
  of
  reusing existing code?

Sorry to restart this week old discussion, but I'm now reviewing the 
patch
in question and I dislike how we're coupling the event and the query
command.

 Because of the 1st issue above. A large delay because we

Has this been measured? How long is this large delay?

Also, is it impossible for management to issue query-rx-filter
on a reasonable rate that would also cause the same problems?
IOW, how can we be sure we're fixing anything without trying it
on a real use-case scenario?
   
   Play with priorities, you can make management arbitrarily slow.  It's
   just not sane to assume any timing guarantees for tasks running on
   Linux.
  
  Would you mind elaborating? I'm not sure I understand how this answers
  my questions.
 
 Maybe I don't understand the questions.
 You are asking why the usual throttling isn't sufficient?
 This was discussed in this thread already.
 That's because it would introduce a huge delay if the guest
 changes the mac too often. People don't expect that
 changing a mac is something they should do slowly.

You meant shouldn't?

If I got it correctly, all you want to avoid is to call qobject_from_jsonf()
and monitor_protocol_event() in the mac change path, because this will
slow down the guest. Did I get it?

If I did, my main point is whether or not the solution you're proposing
(which is to couple the event with the query command) is
appropriate. We're in user-space already, many things could slow
the guest down apart from the event generation.

Two questions:

 1. Do we know how slow (or how many packets are actually dropped)
if the mac is changed too often *and* the event is always sent?

 2. Does this solution consider what happens if the QMP client does
respond timely to the event by issuing the query-rx-filter
command?



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-23 Thread Amos Kong
On Tue, May 21, 2013 at 11:51:17AM +0300, Michael S. Tsirkin wrote:
 On Tue, May 21, 2013 at 01:04:55PM +0800, Amos Kong wrote:
 

snip
+event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
+monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
+qobject_decref(event_data);
+
 return VIRTIO_NET_OK;
 }

   
   This makes it easy for guest to flood management with
   spurious events.
  
   How about we set a flag after this, and avoid sending any more
   events until management queries the filter status?
  
  As you discussed in this thread, we need a flag to turn on/off
  the event notification to avoid the flooding.
  
  But we could not set the flag on the first mac-table change to turn off
  the notification.

I'm wrong. Management's query for the first event will enable
notification. If management doesn't query, there is no problem here,
because we only need to know the latest rx-filter state.

  Because one action (executing one command in the guest) might
  cause multiple events.

|| To clarify what I am proposing:
|| - on info mac-table - clear flag
|| - on mac-table change - test and set flag
||   if was not set - send event to management
||   if was set - do not send event
 
 I still think it will work.

Yes, it works; it effectively avoids the event flooding.

 since the event does not have any
 information, what does it matter that we send one and not many events?
 
  It would be flexible to add a parameter for query-mac-table to change
  the flag. Or add a new command to change the flag.
   
  -- 
  Amos.
 
 Looks a bit too complex, to me.

-- 
Amos.



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-23 Thread Luiz Capitulino
On Thu, 16 May 2013 18:17:23 +0300
Michael S. Tsirkin m...@redhat.com wrote:

  The
  existing throttling approach ensures that if the event includes latest
  guest information, then the host doesn't even have to do a query, and
  is guaranteed that reacting to the final event will always see the most
  recent request.  But most importantly, if the existing throttling works,
  why do we have to invent a one-off approach for this event instead of
  reusing existing code?

Sorry to restart this week old discussion, but I'm now reviewing the patch
in question and I dislike how we're coupling the event and the query
command.

 Because of the 1st issue above. A large delay because we

Has this been measured? How long is this large delay?

Also, is it impossible for management to issue query-rx-filter
on a reasonable rate that would also cause the same problems?
IOW, how can we be sure we're fixing anything without trying it
on a real use-case scenario?

 exceed an arbitrary throttling rate would be bad
 for the guest. Contrast with delay in e.g.
 device delete event.
 The throttling mechanism is good for events that host cares
 about, not for events that guest cares about.
 
  -- 
  Eric Blake   eblake redhat com+1-919-301-3266
  Libvirt virtualization library http://libvirt.org
  
 
 
 




Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-23 Thread Michael S. Tsirkin
On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
 On Thu, 16 May 2013 18:17:23 +0300
 Michael S. Tsirkin m...@redhat.com wrote:
 
   The
   existing throttling approach ensures that if the event includes latest
   guest information, then the host doesn't even have to do a query, and
   is guaranteed that reacting to the final event will always see the most
   recent request.  But most importantly, if the existing throttling works,
   why do we have to invent a one-off approach for this event instead of
   reusing existing code?
 
 Sorry to restart this week old discussion, but I'm now reviewing the patch
 in question and I dislike how we're coupling the event and the query
 command.
 
  Because of the 1st issue above. A large delay because we
 
 Has this been measured? How long is this large delay?
 
 Also, is it impossible for management to issue query-rx-filter
 on a reasonable rate that would also cause the same problems?
 IOW, how can we be sure we're fixing anything without trying it
 on a real use-case scenario?

Play with priorities, you can make management arbitrarily slow.  It's
just not sane to assume any timing guarantees for tasks running on
Linux.

  exceed an arbitrary throttling rate would be bad
  for the guest. Contrast with delay in e.g.
  device delete event.
  The throttling mechanism is good for events that host cares
  about, not for events that guest cares about.
  
   -- 
   Eric Blake   eblake redhat com+1-919-301-3266
   Libvirt virtualization library http://libvirt.org
   
  
  
  



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-23 Thread Luiz Capitulino
On Thu, 23 May 2013 20:18:34 +0300
Michael S. Tsirkin m...@redhat.com wrote:

 On Thu, May 23, 2013 at 11:54:03AM -0400, Luiz Capitulino wrote:
  On Thu, 16 May 2013 18:17:23 +0300
  Michael S. Tsirkin m...@redhat.com wrote:
  
The
existing throttling approach ensures that if the event includes latest
guest information, then the host doesn't even have to do a query, and
is guaranteed that reacting to the final event will always see the most
recent request.  But most importantly, if the existing throttling works,
why do we have to invent a one-off approach for this event instead of
reusing existing code?
  
  Sorry to restart this week old discussion, but I'm now reviewing the patch
  in question and I dislike how we're coupling the event and the query
  command.
  
   Because of the 1st issue above. A large delay because we
  
  Has this been measured? How long is this large delay?
  
  Also, is it impossible for management to issue query-rx-filter
  on a reasonable rate that would also cause the same problems?
  IOW, how can we be sure we're fixing anything without trying it
  on a real use-case scenario?
 
 Play with priorities, you can make management arbitrarily slow.  It's
 just not sane to assume any timing guarantees for tasks running on
 Linux.

Would you mind elaborating? I'm not sure I understand how this answers
my questions.



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-21 Thread Michael S. Tsirkin
On Tue, May 21, 2013 at 01:04:55PM +0800, Amos Kong wrote:
 On Thu, May 16, 2013 at 03:17:45PM +0300, Michael S. Tsirkin wrote:
  On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
   Introduce this new QMP event to notify management after guest changes
   mac-table configuration.
 
 
   +event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
   +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
   +qobject_decref(event_data);
   +
return VIRTIO_NET_OK;
}

  
  Sorry, pls ignore my previous mail, I see you actually
  emit this on rx mode change as well.
  
  I find the name misleading or at least it mislead me :)
  RX_FILTER_CHANGED?
 
 Agree.
 
 What we query contains some of the rx modes and the mac-table content,
 so rx_filter is better.
 
 I will also change monitor cmd to 'query-rx-filter'.
  
   @@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, 
   uint8_t cmd,
{
struct virtio_net_ctrl_mac mac_data;
size_t s;
   +QObject *event_data;

if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
if (iov_size(iov, iov_cnt) != sizeof(n->mac)) {
   @@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, 
   uint8_t cmd,
n->mac_table.multi_overflow = 1;
}

   +event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
   +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
   +qobject_decref(event_data);
   +
return VIRTIO_NET_OK;
}
   
  
  This makes it easy for guest to flood management with
  spurious events.
 
  How about we set a flag after this, and avoid sending any more
  events until management queries the filter status?
 
 As you discussed in this thread, we need a flag to turn on/off
 the event notification to avoid the flooding.
 
 But we could not set the flag on the first mac-table change to turn off
 the notification, because one action (executing one command in the guest)
 might cause multiple events.

I still think it will work, since the event does not have any
information, what does it matter that we send one and not many events?

 It would be flexible to add a parameter for query-mac-table to change
 the flag. Or add a new command to change the flag.
  
 
 -- 
   Amos.

Looks a bit too complex, to me.

-- 
MST



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-20 Thread Amos Kong
On Thu, May 16, 2013 at 03:17:45PM +0300, Michael S. Tsirkin wrote:
 On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
  Introduce this new QMP event to notify management after guest changes
  mac-table configuration.


  +event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
  +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
  +qobject_decref(event_data);
  +
   return VIRTIO_NET_OK;
   }
   
 
 Sorry, pls ignore my previous mail, I see you actually
 emit this on rx mode change as well.
 
 I find the name misleading or at least it mislead me :)
 RX_FILTER_CHANGED?

Agree.

What we query contains some of the rx modes and the mac-table content,
so rx_filter is better.

I will also change monitor cmd to 'query-rx-filter'.
 
  @@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t 
  cmd,
   {
   struct virtio_net_ctrl_mac mac_data;
   size_t s;
  +QObject *event_data;
   
   if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
   if (iov_size(iov, iov_cnt) != sizeof(n->mac)) {
  @@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t 
  cmd,
   n->mac_table.multi_overflow = 1;
   }
   
  +event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
  +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
  +qobject_decref(event_data);
  +
   return VIRTIO_NET_OK;
   }
  
 
 This makes it easy for guest to flood management with
 spurious events.

 How about we set a flag after this, and avoid sending any more
 events until management queries the filter status?

As you discussed in this thread, we need a flag to turn on/off
the event notification to avoid the flooding.

But we could not set the flag on the first mac-table change to turn off
the notification, because one action (executing one command in the guest)
might cause multiple events.

It would be flexible to add a parameter for query-mac-table to change
the flag. Or add a new command to change the flag.
 

-- 
Amos.



[Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Amos Kong
Introduce this new QMP event to notify management after the guest changes
the mac-table configuration.

Signed-off-by: Amos Kong ak...@redhat.com
---
 QMP/qmp-events.txt| 14 ++
 hw/net/virtio-net.c   | 12 
 include/monitor/monitor.h |  1 +
 monitor.c |  1 +
 4 files changed, 28 insertions(+)

diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
index 92fe5fb..24d62df 100644
--- a/QMP/qmp-events.txt
+++ b/QMP/qmp-events.txt
@@ -154,6 +154,20 @@ Data:
 "path": "/machine/peripheral/virtio-net-pci-0" },
   "timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
 
+MAC_TABLE_CHANGED
+-----------------
+
+Emitted when the mac-table configuration is changed by the guest.
+
+Data:
+
+- "name": net client name (json-string)
+
+{ "event": "MAC_TABLE_CHANGED",
+  "data": { "name": "vnet0" },
+  "timestamp": { "seconds": 1368697518, "microseconds": 326866 } }
+
 DEVICE_TRAY_MOVED
 -
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index bed0822..a9b8f53 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -21,6 +21,8 @@
 #include "hw/virtio/virtio-net.h"
 #include "net/vhost_net.h"
 #include "hw/virtio/virtio-bus.h"
+#include "qapi/qmp/qjson.h"
+#include "monitor/monitor.h"
 
 #define VIRTIO_NET_VM_VERSION 11
 
@@ -395,6 +397,7 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, uint8_t cmd,
 {
 uint8_t on;
 size_t s;
+QObject *event_data;
 
 s = iov_to_buf(iov, iov_cnt, 0, &on, sizeof(on));
 if (s != sizeof(on)) {
@@ -417,6 +420,10 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, uint8_t cmd,
 return VIRTIO_NET_ERR;
 }
 
+event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
+monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
+qobject_decref(event_data);
+
 return VIRTIO_NET_OK;
 }
 
@@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t cmd,
 {
 struct virtio_net_ctrl_mac mac_data;
 size_t s;
+QObject *event_data;
 
 if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
 if (iov_size(iov, iov_cnt) != sizeof(n->mac)) {
@@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t cmd,
 n->mac_table.multi_overflow = 1;
 }
 
+event_data = qobject_from_jsonf("{ 'name': %s }", n->netclient_name);
+monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
+qobject_decref(event_data);
+
 return VIRTIO_NET_OK;
 }
 
diff --git a/include/monitor/monitor.h b/include/monitor/monitor.h
index 1a6cfcf..e88c70f 100644
--- a/include/monitor/monitor.h
+++ b/include/monitor/monitor.h
@@ -40,6 +40,7 @@ typedef enum MonitorEvent {
 QEVENT_BLOCK_JOB_ERROR,
 QEVENT_BLOCK_JOB_READY,
 QEVENT_DEVICE_DELETED,
+QEVENT_MAC_TABLE_CHANGED,
 QEVENT_DEVICE_TRAY_MOVED,
 QEVENT_SUSPEND,
 QEVENT_SUSPEND_DISK,
diff --git a/monitor.c b/monitor.c
index 62aaebe..9e51545 100644
--- a/monitor.c
+++ b/monitor.c
@@ -490,6 +490,7 @@ static const char *monitor_event_names[] = {
 [QEVENT_BLOCK_JOB_ERROR] = "BLOCK_JOB_ERROR",
 [QEVENT_BLOCK_JOB_READY] = "BLOCK_JOB_READY",
 [QEVENT_DEVICE_DELETED] = "DEVICE_DELETED",
+[QEVENT_MAC_TABLE_CHANGED] = "MAC_TABLE_CHANGED",
 [QEVENT_DEVICE_TRAY_MOVED] = "DEVICE_TRAY_MOVED",
 [QEVENT_SUSPEND] = "SUSPEND",
 [QEVENT_SUSPEND_DISK] = "SUSPEND_DISK",
-- 
1.8.1.4




Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
 Introduce this new QMP event to notify management after guest changes
 mac-table configuration.
 
 Signed-off-by: Amos Kong ak...@redhat.com
 ---
  QMP/qmp-events.txt| 14 ++
  hw/net/virtio-net.c   | 12 
  include/monitor/monitor.h |  1 +
  monitor.c |  1 +
  4 files changed, 28 insertions(+)
 
 diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
 index 92fe5fb..24d62df 100644
 --- a/QMP/qmp-events.txt
 +++ b/QMP/qmp-events.txt
 @@ -154,6 +154,20 @@ Data:
  path: /machine/peripheral/virtio-net-pci-0 },
timestamp: { seconds: 1265044230, microseconds: 450486 } }
  
 +MAC_TABLE_CHANGED
 +-
 +
 +Emitted mac-table configuration is changed by the guest.
 +
 +Data:
 +
 +- name: net client name (json-string)
 +
 +{ event: MAC_TABLE_CHANGED,
 +  data: { name: vnet0 },
 +  timestamp: { seconds: 1368697518, microseconds: 326866 }}
 +}
 +

It seems clear that if management wants to know about
RX filter changes, it also cares about RX mode changes.
So what's the plan for RX mode changes?
Want to add more events or extend this one?
I'd like to see how it all works together.




Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
 Introduce this new QMP event to notify management after guest changes
 mac-table configuration.
 
 Signed-off-by: Amos Kong ak...@redhat.com
 ---
  QMP/qmp-events.txt| 14 ++
  hw/net/virtio-net.c   | 12 
  include/monitor/monitor.h |  1 +
  monitor.c |  1 +
  4 files changed, 28 insertions(+)
 
 diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
 index 92fe5fb..24d62df 100644
 --- a/QMP/qmp-events.txt
 +++ b/QMP/qmp-events.txt
 @@ -154,6 +154,20 @@ Data:
  path: /machine/peripheral/virtio-net-pci-0 },
timestamp: { seconds: 1265044230, microseconds: 450486 } }
  
 +MAC_TABLE_CHANGED
 +-
 +
 +Emitted mac-table configuration is changed by the guest.
 +
 +Data:
 +
 +- name: net client name (json-string)
 +
 +{ event: MAC_TABLE_CHANGED,
 +  data: { name: vnet0 },
 +  timestamp: { seconds: 1368697518, microseconds: 326866 }}
 +}
 +
  DEVICE_TRAY_MOVED
  -
  
 diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
 index bed0822..a9b8f53 100644
 --- a/hw/net/virtio-net.c
 +++ b/hw/net/virtio-net.c
 @@ -21,6 +21,8 @@
  #include hw/virtio/virtio-net.h
  #include net/vhost_net.h
  #include hw/virtio/virtio-bus.h
 +#include qapi/qmp/qjson.h
 +#include monitor/monitor.h
  
  #define VIRTIO_NET_VM_VERSION11
  
 @@ -395,6 +397,7 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
 uint8_t cmd,
  {
  uint8_t on;
  size_t s;
 +QObject *event_data;
  
  s = iov_to_buf(iov, iov_cnt, 0, on, sizeof(on));
  if (s != sizeof(on)) {
 @@ -417,6 +420,10 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
 uint8_t cmd,
  return VIRTIO_NET_ERR;
  }
  
 +event_data = qobject_from_jsonf({ 'name': %s }, n-netclient_name);
 +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
 +qobject_decref(event_data);
 +
  return VIRTIO_NET_OK;
  }
  

Sorry, pls ignore my previous mail, I see you actually
emit this on rx mode change as well.

I find the name misleading, or at least it misled me :)
RX_FILTER_CHANGED?

 @@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t 
 cmd,
  {
  struct virtio_net_ctrl_mac mac_data;
  size_t s;
 +QObject *event_data;
  
  if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
  if (iov_size(iov, iov_cnt) != sizeof(n-mac)) {
 @@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t 
 cmd,
  n-mac_table.multi_overflow = 1;
  }
  
 +event_data = qobject_from_jsonf({ 'name': %s }, n-netclient_name);
 +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
 +qobject_decref(event_data);
 +
  return VIRTIO_NET_OK;
  }
 

This makes it easy for guest to flood management with
spurious events.
How about we set a flag after this, and avoid sending any more
events until management queries the filter status?
 



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Luiz Capitulino
On Thu, 16 May 2013 15:17:45 +0300
Michael S. Tsirkin m...@redhat.com wrote:

 On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
  Introduce this new QMP event to notify management after guest changes
  mac-table configuration.
  
  Signed-off-by: Amos Kong ak...@redhat.com
  ---
   QMP/qmp-events.txt| 14 ++
   hw/net/virtio-net.c   | 12 
   include/monitor/monitor.h |  1 +
   monitor.c |  1 +
   4 files changed, 28 insertions(+)
  
  diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
  index 92fe5fb..24d62df 100644
  --- a/QMP/qmp-events.txt
  +++ b/QMP/qmp-events.txt
  @@ -154,6 +154,20 @@ Data:
   path: /machine/peripheral/virtio-net-pci-0 },
 timestamp: { seconds: 1265044230, microseconds: 450486 } }
   
  +MAC_TABLE_CHANGED
  +-
  +
  +Emitted mac-table configuration is changed by the guest.
  +
  +Data:
  +
  +- name: net client name (json-string)
  +
  +{ event: MAC_TABLE_CHANGED,
  +  data: { name: vnet0 },
  +  timestamp: { seconds: 1368697518, microseconds: 326866 }}
  +}
  +
   DEVICE_TRAY_MOVED
   -
   
  diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
  index bed0822..a9b8f53 100644
  --- a/hw/net/virtio-net.c
  +++ b/hw/net/virtio-net.c
  @@ -21,6 +21,8 @@
   #include hw/virtio/virtio-net.h
   #include net/vhost_net.h
   #include hw/virtio/virtio-bus.h
  +#include qapi/qmp/qjson.h
  +#include monitor/monitor.h
   
   #define VIRTIO_NET_VM_VERSION11
   
  @@ -395,6 +397,7 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
  uint8_t cmd,
   {
   uint8_t on;
   size_t s;
  +QObject *event_data;
   
   s = iov_to_buf(iov, iov_cnt, 0, on, sizeof(on));
   if (s != sizeof(on)) {
  @@ -417,6 +420,10 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
  uint8_t cmd,
   return VIRTIO_NET_ERR;
   }
   
  +event_data = qobject_from_jsonf({ 'name': %s }, n-netclient_name);
  +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
  +qobject_decref(event_data);
  +
   return VIRTIO_NET_OK;
   }
   
 
 Sorry, pls ignore my previous mail, I see you actually
 emit this on rx mode change as well.
 
 I find the name misleading or at least it mislead me :)
 RX_FILTER_CHANGED?
 
  @@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t 
  cmd,
   {
   struct virtio_net_ctrl_mac mac_data;
   size_t s;
  +QObject *event_data;
   
   if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
   if (iov_size(iov, iov_cnt) != sizeof(n-mac)) {
  @@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t 
  cmd,
   n-mac_table.multi_overflow = 1;
   }
   
  +event_data = qobject_from_jsonf({ 'name': %s }, n-netclient_name);
  +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
  +qobject_decref(event_data);
  +
   return VIRTIO_NET_OK;
   }
  
 
 This makes it easy for guest to flood management with
 spurious events.
 How about we set a flag after this, and avoid sending any more
 events until management queries the filter status?

We have an API for that, look at monitor_protocol_event_init().

  
 




Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 08:24:03AM -0400, Luiz Capitulino wrote:
 On Thu, 16 May 2013 15:17:45 +0300
 Michael S. Tsirkin m...@redhat.com wrote:
 
  On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
   Introduce this new QMP event to notify management after guest changes
   mac-table configuration.
   
   Signed-off-by: Amos Kong ak...@redhat.com
   ---
QMP/qmp-events.txt| 14 ++
hw/net/virtio-net.c   | 12 
include/monitor/monitor.h |  1 +
monitor.c |  1 +
4 files changed, 28 insertions(+)
   
   diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
   index 92fe5fb..24d62df 100644
   --- a/QMP/qmp-events.txt
   +++ b/QMP/qmp-events.txt
   @@ -154,6 +154,20 @@ Data:
path: /machine/peripheral/virtio-net-pci-0 },
  timestamp: { seconds: 1265044230, microseconds: 450486 } }

   +MAC_TABLE_CHANGED
   +-
   +
   +Emitted mac-table configuration is changed by the guest.
   +
   +Data:
   +
   +- name: net client name (json-string)
   +
   +{ event: MAC_TABLE_CHANGED,
   +  data: { name: vnet0 },
   +  timestamp: { seconds: 1368697518, microseconds: 326866 }}
   +}
   +
DEVICE_TRAY_MOVED
-

   diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
   index bed0822..a9b8f53 100644
   --- a/hw/net/virtio-net.c
   +++ b/hw/net/virtio-net.c
   @@ -21,6 +21,8 @@
#include hw/virtio/virtio-net.h
#include net/vhost_net.h
#include hw/virtio/virtio-bus.h
   +#include qapi/qmp/qjson.h
   +#include monitor/monitor.h

#define VIRTIO_NET_VM_VERSION11

   @@ -395,6 +397,7 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
   uint8_t cmd,
{
uint8_t on;
size_t s;
   +QObject *event_data;

s = iov_to_buf(iov, iov_cnt, 0, on, sizeof(on));
if (s != sizeof(on)) {
   @@ -417,6 +420,10 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
   uint8_t cmd,
return VIRTIO_NET_ERR;
}

   +event_data = qobject_from_jsonf({ 'name': %s }, n-netclient_name);
   +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
   +qobject_decref(event_data);
   +
return VIRTIO_NET_OK;
}

  
  Sorry, pls ignore my previous mail, I see you actually
  emit this on rx mode change as well.
  
  I find the name misleading or at least it mislead me :)
  RX_FILTER_CHANGED?
  
   @@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, 
   uint8_t cmd,
{
struct virtio_net_ctrl_mac mac_data;
size_t s;
   +QObject *event_data;

if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
if (iov_size(iov, iov_cnt) != sizeof(n-mac)) {
   @@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, 
   uint8_t cmd,
n-mac_table.multi_overflow = 1;
}

   +event_data = qobject_from_jsonf({ 'name': %s }, n-netclient_name);
   +monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
   +qobject_decref(event_data);
   +
return VIRTIO_NET_OK;
}
   
  
  This makes it easy for guest to flood management with
  spurious events.
  How about we set a flag after this, and avoid sending any more
  events until management queries the filter status?
 
 We have an API for that, look at monitor_protocol_event_init().

You mean monitor_protocol_event_throttle?
So what happens if guest triggers more
MAC changes per second?

   
  



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Luiz Capitulino
On Thu, 16 May 2013 15:45:17 +0300
Michael S. Tsirkin m...@redhat.com wrote:

 On Thu, May 16, 2013 at 08:24:03AM -0400, Luiz Capitulino wrote:
  On Thu, 16 May 2013 15:17:45 +0300
  Michael S. Tsirkin m...@redhat.com wrote:
  
   On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
Introduce this new QMP event to notify management after guest changes
mac-table configuration.

Signed-off-by: Amos Kong ak...@redhat.com
---
 QMP/qmp-events.txt| 14 ++
 hw/net/virtio-net.c   | 12 
 include/monitor/monitor.h |  1 +
 monitor.c |  1 +
 4 files changed, 28 insertions(+)

diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
index 92fe5fb..24d62df 100644
--- a/QMP/qmp-events.txt
+++ b/QMP/qmp-events.txt
@@ -154,6 +154,20 @@ Data:
 path: /machine/peripheral/virtio-net-pci-0 },
   timestamp: { seconds: 1265044230, microseconds: 450486 } }
 
+MAC_TABLE_CHANGED
+-
+
+Emitted mac-table configuration is changed by the guest.
+
+Data:
+
+- name: net client name (json-string)
+
+{ event: MAC_TABLE_CHANGED,
+  data: { name: vnet0 },
+  timestamp: { seconds: 1368697518, microseconds: 326866 }}
+}
+
 DEVICE_TRAY_MOVED
 -
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index bed0822..a9b8f53 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -21,6 +21,8 @@
 #include hw/virtio/virtio-net.h
 #include net/vhost_net.h
 #include hw/virtio/virtio-bus.h
+#include qapi/qmp/qjson.h
+#include monitor/monitor.h
 
 #define VIRTIO_NET_VM_VERSION11
 
@@ -395,6 +397,7 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
uint8_t cmd,
 {
 uint8_t on;
 size_t s;
+QObject *event_data;
 
 s = iov_to_buf(iov, iov_cnt, 0, on, sizeof(on));
 if (s != sizeof(on)) {
@@ -417,6 +420,10 @@ static int virtio_net_handle_rx_mode(VirtIONet *n, 
uint8_t cmd,
 return VIRTIO_NET_ERR;
 }
 
+event_data = qobject_from_jsonf({ 'name': %s }, 
n-netclient_name);
+monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
+qobject_decref(event_data);
+
 return VIRTIO_NET_OK;
 }
 
   
   Sorry, pls ignore my previous mail, I see you actually
   emit this on rx mode change as well.
   
   I find the name misleading or at least it mislead me :)
   RX_FILTER_CHANGED?
   
@@ -425,6 +432,7 @@ static int virtio_net_handle_mac(VirtIONet *n, 
uint8_t cmd,
 {
 struct virtio_net_ctrl_mac mac_data;
 size_t s;
+QObject *event_data;
 
 if (cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
 if (iov_size(iov, iov_cnt) != sizeof(n-mac)) {
@@ -497,6 +505,10 @@ static int virtio_net_handle_mac(VirtIONet *n, 
uint8_t cmd,
 n-mac_table.multi_overflow = 1;
 }
 
+event_data = qobject_from_jsonf({ 'name': %s }, 
n-netclient_name);
+monitor_protocol_event(QEVENT_MAC_TABLE_CHANGED, event_data);
+qobject_decref(event_data);
+
 return VIRTIO_NET_OK;
 }

   
   This makes it easy for guest to flood management with
   spurious events.
   How about we set a flag after this, and avoid sending any more
   events until management queries the filter status?
  
  We have an API for that, look at monitor_protocol_event_init().
 
 You mean monitor_protocol_event_throttle?
 So what happens if guest triggers more
 MAC changes per second?

The QMP client will receive the newest one.

 

   
 




Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Eric Blake
On 05/16/2013 05:07 AM, Amos Kong wrote:
 Introduce this new QMP event to notify management after guest changes
 mac-table configuration.
 
 Signed-off-by: Amos Kong ak...@redhat.com
 ---
  QMP/qmp-events.txt| 14 ++
  hw/net/virtio-net.c   | 12 
  include/monitor/monitor.h |  1 +
  monitor.c |  1 +
  4 files changed, 28 insertions(+)
 
 diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
 index 92fe5fb..24d62df 100644
 --- a/QMP/qmp-events.txt
 +++ b/QMP/qmp-events.txt
 @@ -154,6 +154,20 @@ Data:
  path: /machine/peripheral/virtio-net-pci-0 },
timestamp: { seconds: 1265044230, microseconds: 450486 } }
  
 +MAC_TABLE_CHANGED
 +-
 +
 +Emitted mac-table configuration is changed by the guest.
 +
 +Data:
 +
 +- name: net client name (json-string)
 +
 +{ event: MAC_TABLE_CHANGED,
 +  data: { name: vnet0 },
 +  timestamp: { seconds: 1368697518, microseconds: 326866 }}
 +}

Is it worth trying to also provide details about the change as part of
the event, to avoid having to do a round-trip query- command just to
learn what the new values are?

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Eric Blake
On 05/16/2013 06:17 AM, Michael S. Tsirkin wrote:
 On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
 Introduce this new QMP event to notify management after guest changes
 mac-table configuration.

 
 This makes it easy for guest to flood management with
 spurious events.
 How about we set a flag after this, and avoid sending any more
 events until management queries the filter status?
  

Or use rate-limiting, similar to what we have done for other
guest-triggered events (such as BALLOON_CHANGE), where management can
then tweak the maximum frequency at which it is willing to receive events.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 08:56:42AM -0600, Eric Blake wrote:
 On 05/16/2013 05:07 AM, Amos Kong wrote:
  Introduce this new QMP event to notify management after guest changes
  mac-table configuration.
  
  Signed-off-by: Amos Kong ak...@redhat.com
  ---
   QMP/qmp-events.txt| 14 ++
   hw/net/virtio-net.c   | 12 
   include/monitor/monitor.h |  1 +
   monitor.c |  1 +
   4 files changed, 28 insertions(+)
  
  diff --git a/QMP/qmp-events.txt b/QMP/qmp-events.txt
  index 92fe5fb..24d62df 100644
  --- a/QMP/qmp-events.txt
  +++ b/QMP/qmp-events.txt
  @@ -154,6 +154,20 @@ Data:
   path: /machine/peripheral/virtio-net-pci-0 },
 timestamp: { seconds: 1265044230, microseconds: 450486 } }
   
  +MAC_TABLE_CHANGED
  +-
  +
  +Emitted mac-table configuration is changed by the guest.
  +
  +Data:
  +
  +- name: net client name (json-string)
  +
  +{ event: MAC_TABLE_CHANGED,
  +  data: { name: vnet0 },
  +  timestamp: { seconds: 1368697518, microseconds: 326866 }}
  +}
 
 Is it worth trying to also provide details about the change as part of
 the event, to avoid having to do a round-trip query- command just to
 learn what the new values are?
 
 -- 
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org
 

That really depends on the device though.
Some give you incremental add/delete mac commands,
others might let you replace the whole rx filter
in one go.

So if yes I'd say we should dump the whole table
not what changed.

-- 
MST



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 08:58:38AM -0600, Eric Blake wrote:
 On 05/16/2013 06:17 AM, Michael S. Tsirkin wrote:
  On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
  Introduce this new QMP event to notify management after guest changes
  mac-table configuration.
 
  
  This makes it easy for guest to flood management with
  spurious events.
  How about we set a flag after this, and avoid sending any more
  events until management queries the filter status?
   
 
 Or use rate-limiting, similar to what we have done for other
 guest-triggered events (such as BALLOON_CHANGE), where management can
 then tweak the maximum frequency at which it is willing to receive events.
 
 -- 
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org
 

I'm not sure how management would set the rate, though,
and any throttling here might hurt the guest,
unlike the balloon.

OTOH, what I proposed kind of moderates itself automatically.

-- 
MST



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 06:03:26PM +0300, Michael S. Tsirkin wrote:
 On Thu, May 16, 2013 at 08:58:38AM -0600, Eric Blake wrote:
  On 05/16/2013 06:17 AM, Michael S. Tsirkin wrote:
   On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
   Introduce this new QMP event to notify management after guest changes
   mac-table configuration.
  
   
   This makes it easy for guest to flood management with
   spurious events.
   How about we set a flag after this, and avoid sending any more
   events until management queries the filter status?

  
  Or use rate-limiting, similar to what we have done for other
  guest-triggered events (such as BALLOON_CHANGE), where management can
  then tweak the maximum frequency at which it is willing to receive events.
  
  -- 
  Eric Blake   eblake redhat com+1-919-301-3266
  Libvirt virtualization library http://libvirt.org
  
 
 I'm not sure how would management set the rate though,
 and any throttling here might hurt the guest,
 unlike the balloon.
 
 OTOH what I proposed kind of moderates itself automatically.

To clarify the issue:
- the guest might be changing MACs a lot not because it
is malicious, but because it has a reason to do so;
delaying the filter update for such a guest
would drop lots of packets.

To clarify what I am proposing:
- on "info mac-table" - clear the flag
- on mac-table change - test and set the flag:
  if it was not set - send an event to management
  if it was set - do not send an event

This way management does not get events faster than
it can handle them.

 -- 
 MST



Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Eric Blake
On 05/16/2013 09:03 AM, Michael S. Tsirkin wrote:
 On Thu, May 16, 2013 at 08:58:38AM -0600, Eric Blake wrote:
 On 05/16/2013 06:17 AM, Michael S. Tsirkin wrote:
 On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
 Introduce this new QMP event to notify management after guest changes
 mac-table configuration.


 This makes it easy for guest to flood management with
 spurious events.
 How about we set a flag after this, and avoid sending any more
 events until management queries the filter status?
  

 Or use rate-limiting, similar to what we have done for other
 guest-triggered events (such as BALLOON_CHANGE), where management can
 then tweak the maximum frequency at which it is willing to receive events.

 -- 
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org

 
 I'm not sure how would management set the rate though,
 and any throttling here might hurt the guest,
 unlike the balloon.

If I understand how memballoon throttling works, throttling is NOT
guest-visible; it merely controls how frequently the guest can trigger
an event to the host.  The host always gets the latest guest status, but
only after a timeout has occurred since the last event sent (therefore,
2 back-to-back changes mean that the second event isn't sent until the
timeout elapses; 3 back-to-back means that the 2nd is dropped and only
the first and third changes get sent, with the 3rd waiting until after
the timeout).  That is, not all changes reach the host, the first change
always happens immediately, but subsequent changes may be deferred until
the timeout elapses, but the host will eventually see the final change,
and no slower than the frequency it configures for the throttling.

Or are you arguing that if the guest makes a request, but the host
waits until a second has elapsed before it even gets the event to act on
the request, then the guest has suffered a performance loss?

 
 OTOH what I proposed kind of moderates itself automatically.

Your approach (no more events until the host has acknowledged) has a
potential problem if the host misses the event (because of a libvirtd
restart, for example - but then again, on a libvirtd restart, libvirt
should probably query current state to get itself back in sync); and
also means that the host sees stale event data if subsequent events were
squelched because the host hasn't reacted to the first event yet.  The
existing throttling approach ensures that if the event includes latest
guest information, then the host doesn't even have to do a query, and
is guaranteed that reacting to the final event will always see the most
recent request.  But most importantly, if the existing throttling works,
why do we have to invent a one-off approach for this event instead of
reusing existing code?

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Michael S. Tsirkin
On Thu, May 16, 2013 at 09:12:36AM -0600, Eric Blake wrote:
 On 05/16/2013 09:03 AM, Michael S. Tsirkin wrote:
  On Thu, May 16, 2013 at 08:58:38AM -0600, Eric Blake wrote:
  On 05/16/2013 06:17 AM, Michael S. Tsirkin wrote:
  On Thu, May 16, 2013 at 07:07:24PM +0800, Amos Kong wrote:
  Introduce this new QMP event to notify management after guest changes
  mac-table configuration.
 
 
  This makes it easy for guest to flood management with
  spurious events.
  How about we set a flag after this, and avoid sending any more
  events until management queries the filter status?
   
 
  Or use rate-limiting, similar to what we have done for other
  guest-triggered events (such as BALLOON_CHANGE), where management can
  then tweak the maximum frequency at which it is willing to receive events.
 
  -- 
  Eric Blake   eblake redhat com+1-919-301-3266
  Libvirt virtualization library http://libvirt.org
 
  
  I'm not sure how management would set the rate, though,
  and any throttling here might hurt the guest,
  unlike the balloon.
 
 If I understand how memballoon throttling works, throttling is NOT
 guest-visible; it merely controls how frequently the guest can trigger
 an event to the host.  The host always gets the latest guest status, but
 only after a timeout has occurred since the last event sent (therefore,
 2 back-to-back changes mean that the second event isn't sent until the
 timeout elapses; 3 back-to-back means that the 2nd is dropped and only
 the first and third changes get sent, with the 3rd waiting until after
 the timeout).  That is, not all changes reach the host, the first change
 always happens immediately, but subsequent changes may be deferred until
 the timeout elapses; the host will eventually see the final change,
 and no slower than the frequency it configures for the throttling.
 
 Or are you arguing that if the guest makes a request, but the host
 waits until a second has elapsed before it even gets the event to act on
 the request, then the guest has suffered a performance loss?

Yes, that's what I'm saying.

  
  OTOH what I proposed kind of moderates itself automatically.
 
 Your approach (no more events until the host has acknowledged) has a
 potential problem if the host misses the event (because of a libvirtd
 restart, for example - but then again, on a libvirtd restart, libvirt
 should probably query current state to get itself back in sync);

exactly
 and
 also means that the host sees stale event data if subsequent events were
 squelched because the host hasn't reacted to the first event yet.

So, let's not send any data in the event. Amos's patch does exactly
that.
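The "send a data-free event, then stay silent until management queries" scheme can be sketched as follows. This is a hypothetical model of the approach under discussion, not Amos's actual patch; the names (`MacTableNotify`, `mac_table_changed`, `query_rx_filter`) are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the flag-based scheme: one MAC_TABLE_CHANGED event is
 * emitted (carrying no data), and further guest changes are silent
 * until management issues query-rx-filter, which re-arms the event. */
typedef struct {
    bool event_outstanding;  /* set on emit, cleared on query */
    int events_sent;         /* events actually emitted */
} MacTableNotify;

/* Guest changed the mac-table: emit at most one outstanding event. */
static void mac_table_changed(MacTableNotify *n)
{
    if (!n->event_outstanding) {
        n->event_outstanding = true;
        n->events_sent++;  /* emit data-free MAC_TABLE_CHANGED here */
    }
    /* otherwise: management hasn't queried yet, stay silent */
}

/* Management queried the rx-filter state: re-arm the event. */
static void query_rx_filter(MacTableNotify *n)
{
    /* the reply would carry the current rx-filter state */
    n->event_outstanding = false;
}
```

This shows why the event carries no stale data: management always learns the current state from the query reply, and the guest cannot flood management with events, since at most one event is outstanding at a time.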

 The
 existing throttling approach ensures that if the event includes latest
 guest information, then the host doesn't even have to do a query, and
 is guaranteed that reacting to the final event will always see the most
 recent request.  But most importantly, if the existing throttling works,
 why do we have to invent a one-off approach for this event instead of
 reusing existing code?

Because of the 1st issue above. A large delay because we
exceed an arbitrary throttling rate would be bad
for the guest. Contrast with delay in e.g.
device delete event.
The throttling mechanism is good for events that host cares
about, not for events that guest cares about.

 -- 
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org
 





Re: [Qemu-devel] [PATCH v2 1/2] net: introduce MAC_TABLE_CHANGED event

2013-05-16 Thread Eric Blake
On 05/16/2013 09:17 AM, Michael S. Tsirkin wrote:

 The
 existing throttling approach ensures that if the event includes latest
 guest information, then the host doesn't even have to do a query, and
 is guaranteed that reacting to the final event will always see the most
 recent request.  But most importantly, if the existing throttling works,
 why do we have to invent a one-off approach for this event instead of
 reusing existing code?
 
 Because of the 1st issue above. A large delay because we
 exceed an arbitrary throttling rate would be bad
 for the guest. Contrast with delay in e.g.
 device delete event.
 The throttling mechanism is good for events that host cares
 about, not for events that guest cares about.

Alright, your argument has me convinced :)  Looks like we DO want to
react to the guest as fast as possible, for less missed traffic in the
guest, but also without overwhelming the host with events.

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org


