[
https://issues.apache.org/jira/browse/CLOUDSTACK-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298765#comment-15298765
]
ASF GitHub Bot commented on CLOUDSTACK-9368:
--------------------------------------------
GitHub user nvazquez opened a pull request:
https://github.com/apache/cloudstack/pull/1560
CLOUDSTACK-9368: DS template copies don’t get deleted in VMware ESXi with
multiple clusters and zone wide storage
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9368
### Introduction
In some production environments with multiple clusters it was noticed that
unused templates were consuming too much primary storage: the template
cleanup task was not deleting marked templates on ESXi.
### Description of the problem
Suppose we have multiple clusters `(c1, c2, ..., cN)` in a data center and a
template `T` from which we deploy VMs on `c1`.
Suppose now that we expunge those VMs and no other VM instance from
template `T` remains. The actual workflow was:
1. CloudStack marks the template for cleanup after `storage.cleanup.interval`
seconds by setting `marked_for_gc = 1` in the `template_spool_ref` table for
that template.
2. After another `storage.cleanup.interval` seconds, a `DestroyCommand` is
sent to delete the template from primary storage.
3. In `VmwareResource`, the command is processed: it first picks a
random cluster, say `ci != c1`, to look for the template VM (using the
volume's path) and destroy it. But since the template lives on `c1`, it cannot
be found and is not deleted. The entry in `template_spool_ref` is removed, but
the actual template on the hypervisor side is not.
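The two-phase cleanup in steps 1 and 2 can be sketched as follows. This is a
minimal, self-contained model for illustration only; the class and method
names are made up and do not correspond to CloudStack's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class TemplateGcSketch {
    // template id -> marked_for_gc flag, standing in for template_spool_ref
    static Map<String, Boolean> templateSpoolRef = new HashMap<>();

    // One run of the periodic cleanup task (every storage.cleanup.interval seconds).
    static void cleanupPass() {
        templateSpoolRef.entrySet().removeIf(e -> {
            if (e.getValue()) {
                // Second pass: template already marked, so send the destroy
                // request and drop the template_spool_ref row.
                System.out.println("DestroyCommand for template " + e.getKey());
                return true;
            }
            // First pass: only mark the unused template for garbage collection.
            e.setValue(true);
            return false;
        });
    }

    public static void main(String[] args) {
        templateSpoolRef.put("T", false);
        cleanupPass(); // marks T (marked_for_gc = 1)
        cleanupPass(); // issues the DestroyCommand and removes the entry
    }
}
```

The bug described in step 3 is that the destroy issued in the second pass can
silently miss the template on the hypervisor while the bookkeeping row is
removed anyway.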
### Proposed solution
We propose to fix the problem shown in step 3 by not picking a random
cluster to look for the template VM, but searching through the datastore
instead. This way the template VM is deleted in every case, independent of
the random cluster selection.
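The difference between the two lookup strategies can be sketched like this.
All names here are illustrative stand-ins, not CloudStack or vSphere API
calls; the point is only that a per-cluster search can miss a template that a
datacenter-wide (datastore-scoped) search always finds:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class TemplateLookupSketch {
    // cluster name -> template VMs present on that cluster
    static Map<String, Set<String>> clusters = new HashMap<>();

    // Old behavior: inspect only one randomly picked cluster, so the
    // template is missed whenever the pick differs from c1.
    static boolean findInRandomCluster(String template, Random rnd) {
        List<String> names = new ArrayList<>(clusters.keySet());
        String picked = names.get(rnd.nextInt(names.size()));
        return clusters.get(picked).contains(template);
    }

    // Fixed behavior: search across the whole datacenter (all clusters
    // sharing the zone-wide datastore), so the template is always found.
    static boolean findInDatacenter(String template) {
        return clusters.values().stream().anyMatch(s -> s.contains(template));
    }

    public static void main(String[] args) {
        clusters.put("c1", new HashSet<>(Set.of("T"))); // T was deployed on c1
        clusters.put("c2", new HashSet<>());
        clusters.put("c3", new HashSet<>());
        System.out.println("datacenter-wide lookup finds T: " + findInDatacenter("T"));
    }
}
```

With three clusters and the template only on `c1`, the random lookup fails
roughly two times out of three, which matches the leaked templates observed
in production.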
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nvazquez/cloudstack gcbug
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/cloudstack/pull/1560.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1560
----
commit 656bba0cc3281d40c2609eafc0f7ae691a8863e8
Author: nvazquez <[email protected]>
Date: 2016-05-20T16:01:03Z
CLOUDSTACK-9368: Find vm on datacenter instead of randomly choosing a
cluster
----
> Fix for Support configurable NFS version for Secondary Storage mounts
> ---------------------------------------------------------------------
>
> Key: CLOUDSTACK-9368
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9368
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the
> default.)
> Components: VMware
> Affects Versions: 4.9.0
> Reporter: Nicolas Vazquez
> Fix For: 4.9.0
>
>
> This issue addresses a problem introduced in
> [CLOUDSTACK-9252|https://issues.apache.org/jira/browse/CLOUDSTACK-9252] in
> which the NFS version couldn't be changed after host resources were
> configured on startup (for hosts using `VmwareResource`); as the host
> parameters didn't include the `nfs.version` key, it was set to `null`.
> h4. Proposed solution
> In this proposed solution, `nfsVersion` is passed in `NfsTO` through
> `CopyCommand` to `VmwareResource`, which checks whether an NFS version is
> already configured. If not, it uses the one sent in the command and sets it
> on its storage processor and storage handler. After that setup, it proceeds
> to execute the command.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)