Thanks. So does this affect all hypervisors/storage backends that use
patch disks, or should I code my solution specifically for KVM?
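
In case it helps the discussion, here's a rough sketch of what I have in
mind for the KVM/CLVM side (just an illustration, assuming the
vm-name-patch-disk naming schema mentioned below and plain lvs/lvremove
invocations; the class and method names are made up, not existing agent
code):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class PatchDiskCleaner {

    // Run a command, capture stdout/stderr, and return the output lines.
    private static List<String> run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        List<String> out = new ArrayList<String>();
        BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = r.readLine()) != null) {
            out.add(line.trim());
        }
        if (p.waitFor() != 0) {
            throw new RuntimeException(cmd[0] + " failed: " + out);
        }
        return out;
    }

    // Delete any LV in the volume group whose name ends in "-patch-disk"
    // and that LVM does not report as open on this host. Meant to be
    // called from the stop/reboot/cleanup handlers once the VM is known
    // to be down; on CLVM the open bit only reflects the local host.
    public static void cleanupPatchDisks(String volumeGroup) throws Exception {
        // Each line looks like: "v-2-VM-patch-disk -wi-ao----"
        for (String line : run("lvs", "--noheadings", "-o",
                "lv_name,lv_attr", volumeGroup)) {
            String[] fields = line.split("\\s+");
            if (fields.length < 2 || !fields[0].endsWith("-patch-disk")) {
                continue; // not a patch disk under the proposed naming schema
            }
            // 6th attribute character is 'o' while the device is open locally
            boolean openHere = fields[1].length() >= 6
                    && fields[1].charAt(5) == 'o';
            if (!openHere) {
                run("lvremove", "-f", volumeGroup + "/" + fields[0]);
            }
        }
    }
}

The caveat about clustered LVM still applies: the open bit only tells me
about the local host, so I'd only call this from the stop/reboot/cleanup
paths once the VM is known to be down everywhere.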

On Mon, Sep 10, 2012 at 11:43 AM, Edison Su <edison...@citrix.com> wrote:
>
>
>> -----Original Message-----
>> From: Marcus Sorensen [mailto:shadow...@gmail.com]
>> Sent: Sunday, September 09, 2012 9:32 PM
>> To: cloudstack-dev@incubator.apache.org
>> Subject: cleaning up patch disks
>>
>> I've got an issue with the CLVM-on-KVM support: it seems that the
>> patch disks are created on the fly when a system VM is started, so if
>> I reboot a system VM 5 times I end up with 5 patch disks. I'm the one
>> who submitted the CLVM patch, and I don't see much difference between
>> what we're doing with CLVM and what is done for every other backing
>> store, so I thought I'd ask:
>>
>> Is this an issue for other backing stores as well (accumulating patch
>> disks for system VMs)? If not, where is it handled?
>
>
> It's a bug: patch disks are not cleaned up after a system VM is stopped.
>
>>
>> Any suggestions on how to go about fixing it? I see I could
>> potentially hack into StopCommand and rebootVM/cleanupVM/stopVM, detect
>> the patch disk and lvremove it, but then again, if the VM doesn't go
>> down on purpose (say, a host crash), I'll still be leaking patch disks.
>>
>> Is it safe to assume that any patch disk that's not currently open is
>> safe to delete? (These are generated on the fly and not really tracked
>> anywhere in the database, right?)
>
> If it's created on shared storage shared by multiple KVM hosts, then it's not
> easy to know whether a given patch disk is open or not.
> Normally, we can delete the patch disk for every
> stopcommand/stopvm/rebootvm/cleanupvm command.
> If a host crashes, the CS management server will send a command to other hosts
> in the cluster to clean up the VM, so we still get a chance to clean up the
> patch disk anyway.
> As you said in another mail, we can use the naming schema vm-name-patch-disk
> for patch disks.
> Patches are welcome!
>
