I'm tired of shell-scripting to wait for completion of a block pull,
when virsh can be taught to do the same. I couldn't quite reuse
vshWatchJob, as this is not a case of a long-running command where
a second thread must be used to probe job status (at least, not unless
I make virsh start doing
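The kind of waiting at issue can be sketched as a small polling loop. This is an illustrative sketch only, not virsh code: wait_for_block_job and the (cur, end) progress tuple are assumptions loosely modeled on what virDomainGetBlockJobInfo reports.

```python
import time

def wait_for_block_job(get_info, poll_interval=0.5, timeout=None):
    """Poll a block job until it completes or disappears.

    get_info is a callable returning (cur, end) progress, or None once
    the job is no longer listed (finished or aborted).  Returns True on
    completion, False if the optional timeout expires first.
    """
    start = time.time()
    while True:
        info = get_info()
        if info is None:          # job gone: treat as complete
            return True
        cur, end = info
        if end > 0 and cur >= end:  # fully pulled/copied
            return True
        if timeout is not None and time.time() - start >= timeout:
            return False
        time.sleep(poll_interval)

# Simulated job that reports completion on the third poll.
progress = iter([(10, 100), (60, 100), (100, 100)])
done = wait_for_block_job(lambda: next(progress, None), poll_interval=0)
```

The design choice here is the same one the cover letter alludes to: polling from a single thread is enough when completion is detectable from the job info itself, so no second watcher thread is required.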
This extends domainsnapshot XML to add a new element under each
disk of a disk snapshot:
  <disk name='vda'>
    <source file='/path/to/live'/>
    <mirror file='/path/to/mirror'/>
  </disk>
For now, if a mirror is requested, the snapshot must be external,
and the mirror is assumed to use the same driver format (qcow2 or
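Put together, a disk snapshot requesting a mirror might look like the following sketch; the file paths are placeholders, and elements other than the new <mirror> follow existing domainsnapshot syntax:

```xml
<domainsnapshot>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/path/to/live'/>
      <mirror file='/path/to/mirror'/>
    </disk>
  </disks>
</domainsnapshot>
```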
In order to track a block copy job across libvirtd restarts, we
need to save internal XML that tracks the name of the file
holding the mirror. Displaying this name in dumpxml might also
be useful to the user, even if we don't yet have a way to (re-)
start a domain with mirroring enabled up front.
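For illustration, a dumpxml of a domain with an active mirror might then show something along these lines (a sketch, assuming the mirror name is surfaced as a <mirror> child of the running domain's <disk>; attributes besides file are placeholders):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/live'/>
  <mirror file='/path/to/mirror'/>
  <target dev='vda' bus='virtio'/>
</disk>
```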
The hardest part of this patch is figuring out how to provide proper
security labeling and lock manager setup for the mirror, as well as
rolling it all back on error.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Decide
when mirrors are allowed.
(qemuDomainSnapshotDiskPrepare): Prepare
For now, disk migration via block copy job is not implemented. But
when we do implement it, we have to deal with the fact that qemu does
not provide an easy way to re-start a qemu process with mirroring
still intact (it _might_ be possible by using qemu -S then an
initial 'drive-mirror' with disk
The new block copy storage migration sequence requires both the
'drive-mirror' and 'drive-reopen' monitor commands, which have
been proposed[1] for qemu 1.1. Someday (probably for qemu 1.2),
these commands may also be added to the 'transaction' monitor
command for even more power, but we don't
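As a rough illustration of the proposed monitor sequence, the two commands might be issued like this. This is a sketch of the proposal, not a stable interface: the argument names ("target", "format", "new-image-file") and the device alias are assumptions based on the discussion referenced above, and were still subject to change:

```json
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio-disk0",
                 "target": "/path/to/mirror",
                 "format": "qcow2" } }

{ "execute": "drive-reopen",
  "arguments": { "device": "drive-virtio-disk0",
                 "new-image-file": "/path/to/mirror" } }
```

The first command starts mirroring writes to the destination; the second pivots the guest onto the destination once the copy is in sync.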
Handle the new type of block copy event and info. Of course,
this patch does nothing until a later patch actually allows the
creation/abort of a block copy job. And we'd really love to
have an event without having to poll for the transition between
pull and mirroring, but that will have to wait
This is the bare minimum to end a copy job (of course, until a
later patch adds the ability to start a copy job, this patch
doesn't do much in isolation; I've just split the patches to
ease the review).
This patch intentionally avoids SELinux, lock manager, and audit
actions. Also, if libvirtd
Minimal patch to wire up all the pieces in the previous patches
to actually enable a block copy job. By minimal, I mean that
qemu creates the file (that is, no REUSE_EXT flag support yet),
SELinux must be disabled, a lock manager is not informed, and
the audit logs aren't updated. But those will
This new API provides additional flexibility over what can be
crammed on top of virDomainBlockRebase (namely, the ability to
specify an arbitrary destination format, for things like copying
qcow2 into qed without having to pre-create the destination), at
the expense that it cannot be backported
This copies some of the checks from snapshots regarding testing
when a file already exists. In the process, I noticed snapshots
had hard-to-read logic, as well as a missing sanity check:
REUSE_EXT should require the destination to already be present.
* src/qemu/qemu_driver.c
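The existence checks described above can be sketched as follows. This is an illustrative model, not the qemu_driver.c code: check_copy_dest and the REUSE_EXT bit value are hypothetical stand-ins for the flag semantics the patch describes.

```python
import os

# Hypothetical flag bit standing in for the patch's REUSE_EXT flag.
REUSE_EXT = 1 << 0

def check_copy_dest(path, flags):
    """Sanity-check a block copy destination.

    Without REUSE_EXT, qemu creates the file, so it must not already
    exist (refuse to clobber data); with REUSE_EXT, the caller promises
    a pre-created file, so it must already be present.
    """
    exists = os.path.exists(path)
    if (flags & REUSE_EXT) and not exists:
        raise ValueError("REUSE_EXT requires '%s' to already exist" % path)
    if not (flags & REUSE_EXT) and exists:
        raise ValueError("refusing to overwrite existing file '%s'" % path)
```

The second check is the ordinary safety rule; the first is the missing sanity check noted above, since reusing a file that does not exist is always a caller error.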
RHEL-only
drive-mirror and drive-reopen are still under upstream qemu
discussion; as a result, RHEL decided to backport things under
a downstream name. Accommodate these differences.
There are a few differences between the upstream proposal[1]
and the RHEL build. The RHEL build uses the
This copies heavily from qemuDomainSnapshotCreateSingleDiskActive(),
in order to set the SELinux label, obtain locking manager lease, and
audit the fact that we hand a new file over to qemu. Alas, releasing
the lease and label on failure or at the end of the mirroring is a
trickier prospect (we
Expose the full abilities of virDomainBlockCopy.
* tools/virsh.c (blockJobImpl): Add --format option for block copy.
* tools/virsh.pod (blockcopy): Document this.
---
tools/virsh.c | 26 --
tools/virsh.pod | 10 +-
 2 files changed, 25 insertions(+), 11 deletions(-)
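An illustrative invocation of the resulting command; the --format option is the one added by this patch, while the domain name, disk target, and destination path are placeholders:

```
# copy vda into a freshly created qcow2 file, then check progress
virsh blockcopy dom vda /path/to/copy --format qcow2
virsh blockjob dom vda --info
```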
This completes the public API for using mirrored snapshots as a
means of performing live storage migration. Of course, it will
take additional patches to actually provide the implementation.
The idea here is that oVirt can start with a domain with 'vda'
open on /path1/to/old.qcow2 with a base
Almost trivial; the trick was dealing with the fact that we're
stuck with 'unsigned long bandwidth' due to earlier design
decisions. Also, prefer the correct flag names (although we
intentionally chose the _SHALLOW and _REUSE_EXT values to be
equal, to allow for loose handling in our code).
*
Delete a mirrored snapshot by picking which of the two files in the
mirror to reopen. This is not atomic, so we update the snapshot
in place as we iterate through each successful disk. Since we limited
mirrored snapshots to transient domains, there is no persistent
configuration to update. This