Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-20 Thread Kevin Wolf
On 19.10.2010 19:09, Anthony Liguori wrote:
> On 10/19/2010 11:54 AM, Ayal Baron wrote:
>> ----- "Anthony Liguori" anth...@codemonkey.ws wrote:
>>> On 10/19/2010 07:48 AM, Dor Laor wrote:
>>>> - Live snapshots
>>>>   - We were asked to add this feature for external qcow2
>>>>     images. Would a simple approach of fsync + tracking each
>>>>     requested backing file (it can be per vDisk) and re-opening
>>>>     the new image be accepted?
>>>
>>> I had assumed that this would involve:
>>>
>>> qemu -hda windows.img
>>>
>>> (qemu) snapshot ide0-disk0 snap0.img
>>>
>>> 1) create snap0.img internally by doing the equivalent of `qemu-img
>>>    create -f qcow2 -b windows.img snap0.img'
>>> 2) bdrv_flush('ide0-disk0')
>>> 3) bdrv_open(snap0.img)
>>> 4) bdrv_close(windows.img)
>>> 5) rename('windows.img', 'windows.img.tmp')
>>> 6) rename('snap0.img', 'windows.img')
>>> 7) rename('windows.img.tmp', 'snap0.img')
>>
>> All the rename logic assumes files; we need to take devices into
>> account as well (namely LVs).
>
> Sure, just s/rename/lvrename/g.

That would mean that you need to have both the backing file and the new
COW image on LVs.

> The renaming step can be optional and a management tool can take care
> of that.  It's really just there for convenience, since the user
> expectation is that when you give a name for a snapshot, that name
> refers to the snapshot, not to the new in-use image.

I think that depends on the terminology you use.

If you call it "taking a snapshot", people will probably expect the
snapshot to be a new file while they continue to work on the same file
(and they may not understand that removing the snapshot destroys the
main image).

If you call it something like "creating a new branch", they will expect
the old file to stay as it is while they create something new on top of
it.

So maybe we shouldn't start doing renames (which we cannot do for
anything but files anyway; consider not only LVs, but also nbd or http
backends), but rather think of a good name for the operation.

Kevin


Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-20 Thread Ayal Baron

- Kevin Wolf kw...@redhat.com wrote:

> On 19.10.2010 19:09, Anthony Liguori wrote:
>> On 10/19/2010 11:54 AM, Ayal Baron wrote:
>>> All the rename logic assumes files; we need to take devices into
>>> account as well (namely LVs).
>>
>> Sure, just s/rename/lvrename/g.
>
> That would mean that you need to have both the backing file and the
> new COW image on LVs.

That is indeed how we work (LVs all the way), and you are correct that
qemu should not assume this; but as Anthony said, the rename step should
be optional (and we would opt to go without it), if it exists at all.

 


Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-20 Thread Anthony Liguori

On 10/20/2010 04:18 AM, Kevin Wolf wrote:

> On 19.10.2010 19:09, Anthony Liguori wrote:
>> Sure, just s/rename/lvrename/g.
>
> That would mean that you need to have both the backing file and the
> new COW image on LVs.


Yeah, I guess there are two options.  You could force the user to create
the new leaf image, or you could make the command take a blockdev spec
excluding the backing_file attribute and have qemu automatically insert
that attribute into the spec before creating the bs.
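
As a rough sketch of the second option (QDict, qdict_haskey() and
qstring_from_str() are real helpers in the tree; the function itself and
the point where the open path would consume the spec are assumptions):

/* Hypothetical helper: fill in backing_file from the currently open
 * image before the new bs is created from the user's blockdev spec. */
static void snapshot_fill_backing(BlockDriverState *old_bs, QDict *spec)
{
    /* reject specs where the user supplied backing_file themselves */
    assert(!qdict_haskey(spec, "backing_file"));

    qdict_put(spec, "backing_file", qstring_from_str(old_bs->filename));
}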



> I think that depends on the terminology you use.
>
> If you call it "taking a snapshot", people will probably expect the
> snapshot to be a new file while they continue to work on the same file
> (and they may not understand that removing the snapshot destroys the
> main image).
>
> If you call it something like "creating a new branch", they will
> expect the old file to stay as it is while they create something new
> on top of it.
>
> So maybe we shouldn't start doing renames (which we cannot do for
> anything but files anyway; consider not only LVs, but also nbd or http
> backends), but rather think of a good name for the operation.


Yeah, that's a reasonable point.

Regards,

Anthony Liguori






Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Dor Laor

On 10/19/2010 04:11 AM, Chris Wright wrote:

> * Juan Quintela (quint...@redhat.com) wrote:
>> Please send in any agenda items you are interested in covering.
>
> - 0.13.X -stable handoff
> - 0.14 planning
> - threadlet work
> - virtfs proposals



- Live snapshots
  - We were asked to add this feature for external qcow2
    images. Would a simple approach of fsync + tracking each requested
    backing file (it can be per vDisk) and re-opening the new image
    be accepted?
  - Integration with FS freeze for consistent guest app snapshots
    Many apps do not sync their RAM state to disk correctly or
    frequently enough. Physical-world backup software calls fs freeze
    on XFS and VSS on Windows to make the backup consistent.
    In order to integrate this with live snapshots we need a guest
    agent to trigger the guest fs freeze (a sketch of the guest side
    follows below).
    We can either have qemu communicate with the agent directly through
    virtio-serial, or have a mgmt daemon use virtio-serial to
    communicate with the guest in addition to QMP messages about the
    live snapshot state.
    Preferences? The first solution complicates qemu while the second
    complicates mgmt.
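
For concreteness, a minimal sketch of the guest-agent side of a freeze
request; FIFREEZE/FITHAW are the real Linux ioctls (linux/fs.h, since
2.6.29), while the hard-coded mount list and the surrounding scaffolding
are illustrative assumptions:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

static int set_frozen(const char *mnt, int freeze)
{
    int fd, ret;

    fd = open(mnt, O_RDONLY);   /* any fd on the filesystem will do */
    if (fd < 0) {
        perror(mnt);
        return -1;
    }
    /* FIFREEZE returns only once in-flight writes are stable on disk */
    ret = ioctl(fd, freeze ? FIFREEZE : FITHAW, 0);
    if (ret < 0) {
        perror(freeze ? "FIFREEZE" : "FITHAW");
    }
    close(fd);
    return ret;
}

int main(void)
{
    /* illustrative list; freezing the root fs would block most of
     * userspace, so a real agent would enumerate mounts carefully */
    const char *mounts[] = { "/data", "/var/lib/db" };
    int i, n = sizeof(mounts) / sizeof(mounts[0]);

    for (i = 0; i < n; i++) {
        set_frozen(mounts[i], 1);   /* freeze everything first */
    }
    /* ... the host takes the live snapshot here ... */
    for (i = 0; i < n; i++) {
        set_frozen(mounts[i], 0);   /* then thaw */
    }
    return 0;
}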


Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Avi Kivity

On 10/19/2010 02:48 PM, Dor Laor wrote:
> - Integration with FS freeze for consistent guest app snapshots
>   [...]
>   We can either have qemu communicate with the agent directly through
>   virtio-serial, or have a mgmt daemon use virtio-serial to
>   communicate with the guest in addition to QMP messages about the
>   live snapshot state.
>   Preferences? The first solution complicates qemu while the second
>   complicates mgmt.


Third option: make the freeze path management -> qemu -> virtio-blk ->
guest kernel -> file systems.  The advantage is that it's easy to
associate file systems with a block device this way.




Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Dor Laor

On 10/19/2010 02:55 PM, Avi Kivity wrote:
> Third option: make the freeze path management -> qemu -> virtio-blk ->
> guest kernel -> file systems.  The advantage is that it's easy to
> associate file systems with a block device this way.


OTOH the userspace freeze path already exists, and now you create
another path. What about an FS that spans LVM over multiple drives?
IDE/SCSI?



Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Avi Kivity

On 10/19/2010 02:58 PM, Dor Laor wrote:
> On 10/19/2010 02:55 PM, Avi Kivity wrote:
>> Third option: make the freeze path management -> qemu -> virtio-blk ->
>> guest kernel -> file systems. [...]
>
> OTOH the userspace freeze path already exists, and now you create
> another path.


I guess we would still have a userspace daemon; instead of talking to
virtio-serial it talks to virtio-blk.  So:

  management -> qemu -> virtio-blk -> guest driver -> kernel fs
  resolver -> daemon -> apps

Yuck.


> What about an FS that spans LVM over multiple drives? IDE/SCSI?


Good points.



Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Anthony Liguori

On 10/19/2010 08:03 AM, Avi Kivity wrote:
> I guess we would still have a userspace daemon; instead of talking to
> virtio-serial it talks to virtio-blk.  So:
>
>   management -> qemu -> virtio-blk -> guest driver -> kernel fs
>   resolver -> daemon -> apps
>
> Yuck.


Yeah, in Windows I'm pretty sure the freeze API is a userspace
concept.  Various apps can hook into it to serialize their state.

At the risk of stealing Mike's thunder, we've actually been working on a
simple guest agent for exactly this type of task.  Mike is planning an
RFC for later this week, but for those who are interested the repo is at
http://repo.or.cz/w/qemu/mdroth.git


Regards,

Anthony Liguori





Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Anthony Liguori

On 10/19/2010 07:48 AM, Dor Laor wrote:
> - Live snapshots
>   - We were asked to add this feature for external qcow2
>     images. Would a simple approach of fsync + tracking each requested
>     backing file (it can be per vDisk) and re-opening the new image
>     be accepted?


I had assumed that this would involve:

qemu -hda windows.img

(qemu) snapshot ide0-disk0 snap0.img

1) create snap0.img internally by doing the equivalent of `qemu-img
   create -f qcow2 -b windows.img snap0.img'
2) bdrv_flush('ide0-disk0')
3) bdrv_open(snap0.img)
4) bdrv_close(windows.img)
5) rename('windows.img', 'windows.img.tmp')
6) rename('snap0.img', 'windows.img')
7) rename('windows.img.tmp', 'snap0.img')
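
In block-layer terms, roughly the following; a sketch only, assuming the
QEMU tree (bdrv_flush, bdrv_open and bdrv_close are the real entry
points named above, but the signatures, error handling and the create
helper are approximations, not actual QEMU code):

#include <stdio.h>    /* rename(), snprintf() */
#include <limits.h>   /* PATH_MAX */

/* Hypothetical stand-in for step 1, the in-process equivalent of
 * `qemu-img create -f qcow2 -b old_img snap_img'. */
int create_qcow2_with_backing(const char *snap_img, const char *old_img);

static int do_live_snapshot(BlockDriverState *bs,
                            const char *old_img, const char *snap_img)
{
    char tmp[PATH_MAX];

    if (create_qcow2_with_backing(snap_img, old_img) < 0) { /* step 1 */
        return -1;
    }

    bdrv_flush(bs);                                         /* step 2 */

    /* steps 3+4: switch the device over to the new leaf image */
    bdrv_close(bs);
    if (bdrv_open(bs, snap_img, BDRV_O_RDWR) < 0) {
        return -1;
    }

    /* steps 5-7: the (optional) rename dance so the snapshot ends up
     * under the user-supplied name */
    snprintf(tmp, sizeof(tmp), "%s.tmp", old_img);
    rename(old_img, tmp);
    rename(snap_img, old_img);
    rename(tmp, snap_img);
    return 0;
}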

Regards,

Anthony Liguori




Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Avi Kivity

On 10/19/2010 03:22 PM, Anthony Liguori wrote:
> I had assumed that this would involve:
> [...]



Looks reasonable.

It would be interesting to look at this as a use case for the threading
work.  We should eventually be able to create a snapshot without
stalling vcpus (stalling I/O is of course allowed).




Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Anthony Liguori

On 10/19/2010 07:48 AM, Dor Laor wrote:
> - Live snapshots
>   [...]
> - Integration with FS freeze for consistent guest app snapshots
>   [...]


- usb-ccid (aka external device modules)

We probably won't get to it on today's call, but we should try to queue
this topic up for discussion.  We have a similar situation with vtpm (an
existing device model that wants to integrate with QEMU).  My position
so far has been that we should avoid external device models because of
the difficulty of integrating QEMU features with them.


However, I'd like to hear opinions from a wider audience.

Regards,

Anthony Liguori




Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Anthony Liguori

On 10/19/2010 08:27 AM, Avi Kivity wrote:
> It would be interesting to look at this as a use case for the
> threading work.  We should eventually be able to create a snapshot
> without stalling vcpus (stalling I/O is of course allowed).


If we had another block-level command, like bdrv_aio_freeze(), that
queued all pending requests until the given callback completed, it would
be very easy to do this entirely asynchronously.  For instance:

bdrv_aio_freeze(create_snapshot)

create_snapshot():
  bdrv_aio_flush(done_flush)

done_flush():
  bdrv_open(...)
  bdrv_close(...)
  ...

Of course, closing a device while it's being frozen is probably a recipe
for disaster, but you get the idea :-)
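
To make the queueing side concrete, a minimal sketch assuming a
per-device frozen flag and a deferred-request list; none of these fields
or helpers exist in the real BlockDriverState:

/* All hypothetical: bs->frozen, bs->frozen_reqs, BlockRequest and
 * bdrv_submit_request() are made up for illustration; only the QTAILQ
 * macros are real QEMU infrastructure. */
void bdrv_aio_freeze(BlockDriverState *bs, void (*cb)(BlockDriverState *bs))
{
    bs->frozen = 1;   /* from here on, new requests are queued, not run */
    cb(bs);           /* the callback kicks off its own aio work */
}

void bdrv_thaw(BlockDriverState *bs)
{
    BlockRequest *req;

    bs->frozen = 0;
    /* replay everything that arrived while the device was frozen */
    while ((req = QTAILQ_FIRST(&bs->frozen_reqs)) != NULL) {
        QTAILQ_REMOVE(&bs->frozen_reqs, req, entry);
        bdrv_submit_request(bs, req);
    }
}

The completion path of the callback's last aio operation would then be
the natural place to call bdrv_thaw().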


Regards,

Anthony Liguori




Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Stefan Hajnoczi
On Tue, Oct 19, 2010 at 2:33 PM, Anthony Liguori anth...@codemonkey.ws wrote:
> If we had another block-level command, like bdrv_aio_freeze(), that
> queued all pending requests until the given callback completed, it
> would be very easy to do this entirely asynchronously.

bdrv_aio_freeze() or any mechanism to deal with pending requests in
the generic block code would be a good step for future live support
of other operations like truncate.

Stefan


Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Avi Kivity

On 10/19/2010 03:38 PM, Stefan Hajnoczi wrote:
> bdrv_aio_freeze() or any mechanism to deal with pending requests in
> the generic block code would be a good step for future live support
> of other operations like truncate.


+ logical disk grow, etc.



Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Ayal Baron

----- Anthony Liguori anth...@codemonkey.ws wrote:
> I had assumed that this would involve:
> [...]
> 5) rename('windows.img', 'windows.img.tmp')
> 6) rename('snap0.img', 'windows.img')
> 7) rename('windows.img.tmp', 'snap0.img')

All the rename logic assumes files; we need to take devices into
account as well (namely LVs).
Also, just to make sure: this should support multiple images (a
concurrent snapshot of all of them, or of a subset).
Otherwise it looks good.

 


Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Anthony Liguori

On 10/19/2010 11:54 AM, Ayal Baron wrote:
> All the rename logic assumes files; we need to take devices into
> account as well (namely LVs).


Sure, just s/rename/lvrename/g.

The renaming step can be optional and a management tool can take care
of that.  It's really just there for convenience, since the user
expectation is that when you give a name for a snapshot, that name
refers to the snapshot, not to the new in-use image.



> Also, just to make sure: this should support multiple images (a
> concurrent snapshot of all of them, or of a subset).


Yeah, concurrent is a little trickier.  The simple solution is for a
management tool to just do a stop + multiple snapshots + cont.  That's
equivalent to what we'd do if we didn't do it asynchronously, which is
probably how we'd do the first implementation anyway.


But in the long term, I think the most elegant solution would be to
expose the freeze API via QMP and let a management tool freeze multiple
devices, then start taking snapshots, then unfreeze them when all
snapshots are complete; roughly the flow sketched below.
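
From the management side that flow might look something like this;
QmpSession, qmp_cmd() and all three command names ("blockdev-freeze",
"blockdev-snapshot-sync", "blockdev-thaw") are hypothetical, only the
freeze-all / snapshot-all / thaw-all ordering is the point:

/* Everything here is hypothetical plumbing; the ordering is the idea. */
typedef struct QmpSession QmpSession;
int qmp_cmd(QmpSession *s, const char *cmd, ...); /* NULL-terminated args */

static void snapshot_devices(QmpSession *s, const char **devs,
                             const char **snaps, int n)
{
    int i;

    for (i = 0; i < n; i++) {       /* freeze every device first */
        qmp_cmd(s, "blockdev-freeze", "device", devs[i], NULL);
    }
    for (i = 0; i < n; i++) {       /* take all the snapshots */
        qmp_cmd(s, "blockdev-snapshot-sync", "device", devs[i],
                "snapshot-file", snaps[i], NULL);
    }
    for (i = 0; i < n; i++) {       /* thaw only once all are done */
        qmp_cmd(s, "blockdev-thaw", "device", devs[i], NULL);
    }
}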


Regards,

Anthony Liguori




Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Ayal Baron

- Anthony Liguori anth...@codemonkey.ws wrote:

> On 10/19/2010 11:54 AM, Ayal Baron wrote:
>> All the rename logic assumes files; we need to take devices into
>> account as well (namely LVs).
>
> Sure, just s/rename/lvrename/g.

No can do.  In our setup, LVM runs in a clustered environment in a
single-writer, multiple-readers configuration.  A VM may be running on a
reader, which is not allowed to lvrename (it would corrupt the entire
VG).

 
> The renaming step can be optional and a management tool can take care
> of that.  It's really just there for convenience, since the user
> expectation is that when you give a name for a snapshot, that name
> refers to the snapshot, not to the new in-use image.

So keeping it optional is good.

 
> Yeah, concurrent is a little trickier.  The simple solution is for a
> management tool to just do a stop + multiple snapshots + cont.
>
> But in the long term, I think the most elegant solution would be to
> expose the freeze API via QMP and let a management tool freeze
> multiple devices, then start taking snapshots, then unfreeze them when
> all snapshots are complete.

qemu should call the freeze as part of the process (for all of the
relevant devices), then take the snapshots, then thaw.

 


Re: [Qemu-devel] Re: KVM call agenda for Oct 19

2010-10-19 Thread Anthony Liguori

On 10/19/2010 03:57 PM, Ayal Baron wrote:

> qemu should call the freeze as part of the process (for all of the
> relevant devices), then take the snapshots, then thaw.


Yeah, I'm not opposed to us providing simpler interfaces in addition to,
or in lieu of, the lower-level interfaces.


Regards,

Anthony Liguori



KVM call agenda for Oct 19

2010-10-18 Thread Juan Quintela

Please send in any agenda items you are interested in covering.

thanks,

Juan.


Re: KVM call agenda for Oct 19

2010-10-18 Thread Chris Wright
* Juan Quintela (quint...@redhat.com) wrote:
> Please send in any agenda items you are interested in covering.

- 0.13.X -stable handoff
- 0.14 planning
- threadlet work
- virtfs proposals