Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-18 Thread Markus Armbruster
Juan Quintela quint...@redhat.com writes:

[...]
 I have thought a little bit about hotplug and migration, and haven't
 arrived at a nice solution.

 - Disabling hotplug/unplug during migration: easy to do.  But it is not
   exactly user friendly (we are here).

 - Allowing hotplug during migration. Solutions:
   * allow the transfer of hotplug events over the migration protocol
 (makes it even more complicated, but not too much.  The big problem is
 if the hotplug succeeds on the source but not on the destination, ...)
   * migrate the device list in stage 3: it fixes the hotplug problem
 nicely, but it creates a new problem: after migrating all the RAM, we
 can still hit errors like an unreadable disk, etc.  Not funny.
   * insert your nice idea here

 As far as I can see, if we send the device list first, we can create the
 full machine at the destination, but hotplug is interesting to manage.
 Sending the device list late solves hotplug, but allows errors after
 migrating all memory (also known as: why didn't you tell me *sooner*).

I figure the errors relevant here happen mostly in device backends (host
parts).

Maybe updating just backends is easier than full device hot plug.
Configure backends before migrating memory, to catch errors.
Reconfigure backends afterwards for hot plug[*].  Then build the
machine (sketched below, after the footnote).

You still get errors from frontends (device models) after migrating
memory, but they should be rare.

[...]

[*] You could do it in the middle to catch errors as early as
possible, but I doubt it's worth the trouble.
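
A rough sketch of that ordering on the destination side, to make the
sequence concrete (all helper names here are hypothetical; nothing in
this snippet is existing QEMU code):

    /* Hypothetical destination-side ordering for the scheme above. */
    static int incoming_migration(QEMUFile *f)
    {
        /* 1. Configure backends (host parts) up front, so host-side
         *    errors (unreadable disk, missing tap, ...) fail the
         *    migration before any RAM has been transferred. */
        if (configure_backends(f) < 0) {
            return -EINVAL;
        }

        /* 2. Transfer RAM; this is the long part. */
        if (migrate_ram(f) < 0) {
            return -EIO;
        }

        /* 3. Reconfigure backends to pick up hotplug events that
         *    happened on the source while RAM was in flight. */
        if (reconfigure_backends(f) < 0) {
            return -EINVAL;
        }

        /* 4. Finally build the machine (frontends / device models);
         *    errors are still possible here, but should be rare. */
        return build_machine(f);
    }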


Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-15 Thread Anthony Liguori

On 07/15/2011 02:59 AM, Paolo Bonzini wrote:

On 07/14/2011 06:07 PM, Avi Kivity wrote:


Maybe we can do this via a magic subsection whose contents are the
hotplug event.


What about making the device list just another thing that has to be
migrated live, together with block and ram?


In an ideal world, you would only create the backends on the destination 
node and all of the devices would be created through the migration process.
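
Mechanically, the device list would then register itself as a live
section the way RAM and block already do.  A hypothetical registration
call (register_savevm_live() is the real savevm entry point of this
era; the devlist handlers are made up for illustration):

    /* Sketch: register a "devlist" live section alongside "ram" and
     * "block".  devlist_save_live()/devlist_load() do not exist. */
    register_savevm_live(NULL, "devlist", 0, 1,
                         NULL,              /* set_params          */
                         devlist_save_live, /* iterative save      */
                         NULL,              /* final save_state    */
                         devlist_load,      /* load on destination */
                         NULL);             /* opaque              */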


Regards,

Anthony Liguori


Paolo





Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-14 Thread Avi Kivity

On 07/14/2011 10:14 AM, Umesh Deshpande wrote:

The following patch deals with VCPU and iothread starvation during the
migration of a guest.  Currently the iothread is responsible for
performing the migration.  It holds qemu_mutex during the migration,
which keeps VCPUs from entering qemu mode and delays their return to the
guest.  The guest migration, executed as an iohandler, also delays the
execution of other iohandlers.  In the following patch, the migration
has been moved to a separate thread to reduce qemu_mutex contention and
iohandler starvation.


@@ -260,10 +260,15 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
         return 0;
     }

+    if (stage != 3)
+        qemu_mutex_lock_iothread();


Please read CODING_STYLE, especially the bit about braces.

Does this mean that the following code is sometimes executed without 
qemu_mutex?  I don't think any of it is thread safe.


Even just reading memory is not thread safe.  You either have to copy it 
into a buffer under lock, or convert the memory API to RCU.
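
The copy option would look roughly like this (a sketch of the locking
pattern only, not existing code; buffer management and error handling
are elided):

    /* Snapshot the page under the iothread mutex, then do the slow
     * write without holding the lock. */
    static void save_page_copy(QEMUFile *f, ram_addr_t addr)
    {
        uint8_t buf[TARGET_PAGE_SIZE];

        qemu_mutex_lock_iothread();
        memcpy(buf, qemu_get_ram_ptr(addr), TARGET_PAGE_SIZE);
        qemu_mutex_unlock_iothread();

        /* VCPUs and other iohandlers can run while we push the copy. */
        qemu_put_buffer(f, buf, TARGET_PAGE_SIZE);
    }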


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-14 Thread Stefan Hajnoczi
On Thu, Jul 14, 2011 at 9:36 AM, Avi Kivity a...@redhat.com wrote:
 On 07/14/2011 10:14 AM, Umesh Deshpande wrote:
 @@ -260,10 +260,15 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
          return 0;
      }

 +    if (stage != 3)
 +        qemu_mutex_lock_iothread();

 Please read CODING_STYLE, especially the bit about braces.

Please use scripts/checkpatch.pl to check coding style before
submitting patches to the list.

You can also set up git's pre-commit hook to automatically run checkpatch.pl:
http://blog.vmsplice.net/2011/03/how-to-automatically-run-checkpatchpl.html

Stefan


Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-14 Thread Anthony Liguori

On 07/14/2011 03:36 AM, Avi Kivity wrote:

On 07/14/2011 10:14 AM, Umesh Deshpande wrote:

The following patch deals with VCPU and iothread starvation during the
migration of a guest.  Currently the iothread is responsible for
performing the migration.  It holds qemu_mutex during the migration,
which keeps VCPUs from entering qemu mode and delays their return to the
guest.  The guest migration, executed as an iohandler, also delays the
execution of other iohandlers.  In the following patch, the migration
has been moved to a separate thread to reduce qemu_mutex contention and
iohandler starvation.


@@ -260,10 +260,15 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
         return 0;
     }

+    if (stage != 3)
+        qemu_mutex_lock_iothread();


Please read CODING_STYLE, especially the bit about braces.

Does this mean that the following code is sometimes executed without
qemu_mutex? I don't think any of it is thread safe.


That was my reaction too.

I think the most rational thing to do is have a separate thread and a 
pair of producer/consumer queues.


The I/O thread can push virtual addresses and sizes to the queue for the 
migration thread to compress/write() to the fd.  The migration thread 
can then push sent regions onto a separate queue for the I/O thread to 
mark as dirty.


Regards,

Anthony Liguori



Even just reading memory is not thread safe. You either have to copy it
into a buffer under lock, or convert the memory API to RCU.





Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-14 Thread Avi Kivity

On 07/14/2011 03:30 PM, Anthony Liguori wrote:

Does this mean that the following code is sometimes executed without
qemu_mutex? I don't think any of it is thread safe.



That was my reaction too.

I think the most rational thing to do is have a separate thread and a 
pair of producer/consumer queues.


The I/O thread can push virtual addresses and sizes to the queue for 
the migration thread to compress/write() to the fd.  The migration 
thread can then push sent regions onto a separate queue for the I/O 
thread to mark as dirty.


Even virtual addresses are not safe enough, because of hotunplug.  
Without some kind of locking, you have to copy the data.


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-14 Thread Anthony Liguori

On 07/14/2011 07:32 AM, Avi Kivity wrote:

On 07/14/2011 03:30 PM, Anthony Liguori wrote:

Does this mean that the following code is sometimes executed without
qemu_mutex? I don't think any of it is thread safe.



That was my reaction too.

I think the most rational thing to do is have a separate thread and a
pair of producer/consumer queues.

The I/O thread can push virtual addresses and sizes to the queue for
the migration thread to compress/write() to the fd. The migration
thread can then push sent regions onto a separate queue for the I/O
thread to mark as dirty.


Even virtual addresses are not safe enough, because of hotunplug.
Without some kind of locking, you have to copy the data.


We don't know yet how we're going to implement hot unplug, so let's not
worry about this for now.


I think a reference-count-based approach is really the only sane thing
to do, and if we did that, it wouldn't be a problem, since the reference
would be owned by the I/O thread and would live until the migration
thread is done with the VA.
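
In that scheme each RAM region would carry a counter along these lines
(hypothetical types and names; QEMU has no such API today, and the GCC
__sync builtins stand in for whatever atomic ops we'd settle on):

    /* The I/O thread takes a reference before queuing a VA for the
     * migration thread; hot unplug drops its own reference, and the
     * memory is only freed when the last user is done with it. */
    typedef struct RefBlock {
        void *host;       /* host VA backing the guest RAM */
        int   refcount;
    } RefBlock;

    static void ref_block(RefBlock *b)
    {
        __sync_fetch_and_add(&b->refcount, 1);
    }

    static void unref_block(RefBlock *b)
    {
        if (__sync_sub_and_fetch(&b->refcount, 1) == 0) {
            qemu_free(b->host);   /* the actual unplug happens here */
            qemu_free(b);
        }
    }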


Regards,

Anthony Liguori







Re: [Qemu-devel] [RFC] New thread for the VM migration

2011-07-14 Thread Avi Kivity

On 07/14/2011 07:49 PM, Anthony Liguori wrote:


I think a reference count based approach is really the only sane thing 
to do and if we did that, it wouldn't be a problem since the reference 
would be owned by the I/O thread and would live until the migration 
thread is done with the VA.




I was thinking about RCU.
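
With RCU the migration thread would read the RAM list without any lock,
and unplug would wait out the readers before freeing.  Roughly, using
urcu-style primitives and assuming a simple singly linked ram_blocks
list (QEMU has no RCU implementation at this point, so this is purely
illustrative):

    /* Reader (migration thread): cheap, never blocks the writer. */
    static void send_block(QEMUFile *f)
    {
        RAMBlock *block;

        rcu_read_lock();
        block = rcu_dereference(ram_blocks);
        qemu_put_buffer(f, block->host, block->length);
        rcu_read_unlock();
    }

    /* Writer (hot unplug): unlink, wait for readers, then free. */
    static void unplug_block(RAMBlock *block)
    {
        rcu_assign_pointer(ram_blocks, block->next);
        synchronize_rcu();    /* no reader can still see the block */
        qemu_free(block->host);
        qemu_free(block);
    }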

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
