My initial idea for tackling this is that we shouldn't be using unique
names for the patch disk. We could use a name like vm instance uuid +
'patch' (this seems to go against the methodology used everywhere
else, but since these disks appear to be untracked and one-time-use at
this point, maybe that doesn't matter), or generate a non-random uuid
somehow based on the system VM name or some other attribute unique to
that system VM (suggestions welcome). Then if the patch disk already
exists we just format and reuse it, otherwise we create it as normal.
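
Just to illustrate what I mean by a non-random uuid, something along
these lines would do it (purely a sketch; the helper name and the use
of Java's name-based UUIDs are my own, not anything in the codebase
today):

    import java.util.UUID;

    // Rough sketch only: derive a stable (non-random) patch disk name
    // from the system VM's instance name, so a reboot maps back to the
    // same LV instead of creating a new one.
    public class PatchDiskNameSketch {
        static String patchDiskName(String vmInstanceName) {
            // UUID.nameUUIDFromBytes() is deterministic, so the same VM
            // name always produces the same UUID, and thus the same LV name.
            UUID stable = UUID.nameUUIDFromBytes(vmInstanceName.getBytes());
            return stable.toString() + "-patchdisk";
        }

        public static void main(String[] args) {
            // e.g. "v-2-VM" prints the same name on every run, so at start
            // we can check whether that LV already exists, reformat and
            // reuse it, and only lvcreate when it's genuinely missing.
            System.out.println(patchDiskName("v-2-VM"));
        }
    }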

This seems like the most reasonable way to make sure we're not leaving
orphaned disks behind on every system VM reboot/crash. Or perhaps they
should just be tracked as member volumes in the database (I'm not sure
what the original reasoning was for how it's done now), but maybe a
static name for the patch disk is a reasonable triage at this point?

On Sun, Sep 9, 2012 at 10:31 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
> I've got an issue with the CLVM on KVM support: it seems that the
> patch disks are created on the fly when a system VM is started. If I
> reboot a system VM 5 times I'll end up with 5 patch disks. I'm the one
> who submitted the CLVM patch, and I don't see much difference between
> what we're doing with CLVM and what the code does for everything else,
> so I thought I'd ask:
>
> Is this an issue for other backing stores as well (accumulating patch
> disks for system VMs)? If not, where is it handled?
>
> Any suggestions on how to go about fixing it? I see I could
> potentially hack into StopCommand, rebootVM/cleanupVM/stopVM, detect
> the patch disk and lvremove it, but then again if it doesn't go down
> on purpose (say a host crash) I'll still be leaking patch disks.
>
> Is it safe to assume that any patch disk that's not currently open is
> safe to delete? (These are generated on the fly and not really tracked
> anywhere in the database, right?)
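
For the "not currently open" check, lvs exposes an open bit in the
lv_attr field (the 6th character is 'o' when something has the device
open). A rough illustration, with a made-up LV path, could look like:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Illustrative only: ask lvs for the lv_attr string and look at the
    // 6th character, which is 'o' when a process has the LV device open.
    public class LvOpenCheckSketch {
        static boolean isLvOpen(String lvPath) throws Exception {
            Process p = new ProcessBuilder(
                    "lvs", "--noheadings", "-o", "lv_attr", lvPath).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String attrs = r.readLine();
                p.waitFor();
                if (attrs == null) {
                    return false; // LV not found / no output
                }
                attrs = attrs.trim();
                // lv_attr position 6 is the device-open flag
                return attrs.length() >= 6 && attrs.charAt(5) == 'o';
            }
        }

        public static void main(String[] args) throws Exception {
            // hypothetical path; real VG/LV names would come from the pool
            System.out.println(isLvOpen("/dev/vg_cloudstack/some-patch-lv"));
        }
    }

That only tells you the LV isn't in use at that instant, though, which
is why I'd still rather fix the naming so we never orphan them in the
first place.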
