erma07 opened a new pull request, #13114:
URL: https://github.com/apache/cloudstack/pull/13114

   ### Description
   
   This PR adds a new global setting **`xenserver.create.full.clone`** that 
mirrors the existing `vmware.create.full.clone` behaviour for the 
XenServer/XCP-ng hypervisor.
   
   Until now, XenServer-backed VMs created from a template have always been 
provisioned as **linked clones** (`VDI.clone()`), which on file-backed SRs 
(NFS/EXT/XFS) produces a copy-on-write disk that depends on the cached template 
VDI. This has two practical drawbacks:
   
   1. The cached template VDI cannot be removed while any linked-clone child 
still exists (XenServer enforces a VHD parent ref-count), so template cleanup 
is blocked.
   2. There is a hard ceiling of ~30 linked clones per VHD chain in XenServer.
   
   When the new setting is enabled (per-pool or globally), the XenServer 
storage processor instead issues `VDI.copy(conn, sr)` for the template→volume 
operation, producing a fully-independent VDI with no parent VHD relationship. 
Deleting the template afterwards is harmless to existing VMs, and the per-chain 
clone ceiling no longer applies.
   
   **Default value is `false`**, so existing deployments are unaffected unless 
an operator opts in.
   
   How it's wired up:
   
   - New `ConfigKey<Boolean> XenserverCreateCloneFull` in `StorageManager` 
(StoragePool scope, dynamic, default `"false"`), registered via 
`StorageManagerImpl.getConfigKeys()`.
   - `AncientDataMotionStrategy` keeps 
`addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest` byte-for-byte 
unchanged. A new sibling helper 
`addFullCloneAndDiskprovisiongStrictnessFlagOnXenServerDest` writes the 
per-pool XenServer value onto `PrimaryDataStoreTO.fullCloneFlag`, and a new 
dispatcher `addFullCloneAndDiskprovisiongStrictnessFlagOnDest` switches on 
`dataTO.getHypervisorType()` to call the right per-hypervisor helper. All seven 
existing call sites that used to call the VMware helper now call the dispatcher 
instead — VMware behaviour is preserved exactly.
   - `PrimaryDataStoreTO.fullCloneFlag` already exists and is 
hypervisor-agnostic; no TO change required.
   - On the agent, `XenServerStorageProcessor.cloneVolumeFromBaseTemplate` 
reads `((PrimaryDataStoreTO) destStore).isFullCloneFlag()` and, when true, 
calls `tmpltvdi.copy(conn, tmpltvdi.getSR(conn))` instead of 
`tmpltvdi.createClone(conn, new HashMap<>())`. `Xenserver625StorageProcessor` 
does not override the method, so all XenServer versions are covered with a 
single edit.
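   The dispatch and the agent-side decision described above can be sketched in a self-contained form. All class and method names below other than `setFullCloneFlag`/`isFullCloneFlag` are illustrative stand-ins, not the actual CloudStack code; the real logic lives in `AncientDataMotionStrategy` and `XenServerStorageProcessor`:

   ```java
   import java.util.EnumMap;
   import java.util.Map;

   // Simplified stand-ins for CloudStack types; names are illustrative only.
   enum HypervisorType { VMware, XenServer, KVM, Hyperv, None }

   class PrimaryDataStoreTO {
       private Boolean fullCloneFlag; // null = flag never written (linked clone on XenServer)
       void setFullCloneFlag(boolean f) { fullCloneFlag = f; }
       Boolean isFullCloneFlag() { return fullCloneFlag; }
   }

   public class FullCloneDispatchSketch {
       // Stand-in for the per-pool settings lookup, keyed by hypervisor for brevity.
       static final Map<HypervisorType, Boolean> createFullClone = new EnumMap<>(HypervisorType.class);

       // Mirrors the shape of addFullCloneAndDiskprovisiongStrictnessFlagOnDest:
       // only VMware and XenServer destinations get the flag written, everything
       // else falls through and keeps today's behaviour.
       static void addFullCloneFlagOnDest(HypervisorType hv, PrimaryDataStoreTO dest) {
           switch (hv) {
               case VMware:
                   dest.setFullCloneFlag(createFullClone.getOrDefault(HypervisorType.VMware, true));
                   break;
               case XenServer:
                   dest.setFullCloneFlag(createFullClone.getOrDefault(HypervisorType.XenServer, false));
                   break;
               default:
                   // KVM, Hyperv, None, ...: leave fullCloneFlag untouched.
                   break;
           }
       }

       // Agent-side decision, mirroring cloneVolumeFromBaseTemplate: copy when the
       // flag is explicitly true, clone otherwise (including when it is null).
       static String templateToVolumeOp(PrimaryDataStoreTO dest) {
           return Boolean.TRUE.equals(dest.isFullCloneFlag()) ? "VDI.copy" : "VDI.clone";
       }

       public static void main(String[] args) {
           createFullClone.put(HypervisorType.XenServer, true); // operator opt-in

           PrimaryDataStoreTO xen = new PrimaryDataStoreTO();
           addFullCloneFlagOnDest(HypervisorType.XenServer, xen);
           System.out.println("XenServer dest -> " + FullCloneDispatchSketch.templateToVolumeOp(xen));

           PrimaryDataStoreTO kvm = new PrimaryDataStoreTO();
           addFullCloneFlagOnDest(HypervisorType.KVM, kvm);
           System.out.println("KVM dest -> " + FullCloneDispatchSketch.templateToVolumeOp(kvm));
       }
   }
   ```

   The key design point the sketch illustrates: a destination whose hypervisor type never hits a `case` keeps a `null` flag, so the agent stays on the linked-clone path by default.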
   
   The legacy `CitrixCreateCommandWrapper.CreateCommand` path is intentionally 
left untouched — `new CreateCommand(...)` is only constructed by test code in 
this repository, so the wrapper is unreachable in production today.
   
   <!-- Fixes: # -->
   
   ### Types of changes
   
   - [ ] Breaking change (fix or feature that would cause existing 
functionality to change)
   - [x] New feature (non-breaking change which adds functionality)
   - [ ] Bug fix (non-breaking change which fixes an issue)
   - [ ] Enhancement (improves an existing feature and functionality)
   - [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
   - [ ] Build/CI
   - [ ] Test (unit or integration test code)
   
   ### Feature/Enhancement Scale or Bug Severity
   
   #### Feature/Enhancement Scale
   
   - [ ] Major
   - [x] Minor
   
   ### Screenshots (if appropriate):
   
   _Global Settings UI showing the new `xenserver.create.full.clone` key — 
TODO: attach screenshot._
   
   ### How Has This Been Tested?
   
   **Unit tests**
   
   - Added `AncientDataMotionStrategyTest.testAddFullCloneFlagOnXenServerDest`, 
mirroring the existing VMware test: it overrides the `XenserverCreateCloneFull` 
default, sets `dataTO.getHypervisorType()` to `HypervisorType.XenServer`, calls 
the new XenServer helper, and verifies that `setFullCloneFlag(true)` is invoked 
on the destination `PrimaryDataStoreTO`.
   - Existing `testAddFullCloneFlagOnVMwareDest` and 
`testAddFullCloneFlagOnNotVmwareDest` continue to pass — the VMware helper was 
not modified.
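   The assertion at the heart of the new test can be expressed without the mocking framework; a hand-rolled sketch (all names illustrative, not the real test code), using a fake TO that records whether the flag was written:

   ```java
   // Minimal fake of the destination store TO that records the flag written to it.
   class RecordingDataStoreTO {
       Boolean recordedFullCloneFlag; // stays null if the helper never writes it
       void setFullCloneFlag(boolean f) { recordedFullCloneFlag = f; }
   }

   public class XenServerFullCloneFlagTestSketch {
       // Stand-in for the per-pool XenserverCreateCloneFull lookup; assumed true
       // here, i.e. the operator has overridden the false default.
       static boolean xenserverCreateCloneFull = true;

       // Stand-in for addFullCloneAndDiskprovisiongStrictnessFlagOnXenServerDest.
       static void addXenServerFullCloneFlag(RecordingDataStoreTO dest) {
           dest.setFullCloneFlag(xenserverCreateCloneFull);
       }

       public static void main(String[] args) {
           RecordingDataStoreTO dest = new RecordingDataStoreTO();
           addXenServerFullCloneFlag(dest);
           // The real test verifies exactly this: setFullCloneFlag(true) was
           // invoked on the destination TO.
           if (dest.recordedFullCloneFlag != Boolean.TRUE) {
               throw new AssertionError("expected setFullCloneFlag(true) to be invoked");
           }
           System.out.println("setFullCloneFlag(true) invoked as expected");
       }
   }
   ```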
   
   **Manual test environment** _(fill in)_
   
   - CloudStack version: `<version>`
   - XenServer / XCP-ng version: `<version>`
   - Primary storage type tested: `<NFS / EXT / LVM / iSCSI>`
   - Template format / size: `<format / size>`
   
   **Manual scenarios** _(fill in results)_
   
   1. With `xenserver.create.full.clone=false` (default), deploy a VM from a 
template and run on the XenServer host:
      ```
      xe vdi-list params=uuid,sm-config name-label=ROOT-<vmid>
      ```
      Expect `sm-config:vhd-parent` to be present (linked clone, today's 
behaviour). _Result: ☐_
   
   2. Set `xenserver.create.full.clone=true` on the pool (Infrastructure → 
Primary Storage → Settings) and deploy another VM from the same template. 
Re-run the command above and expect `sm-config:vhd-parent` to be **absent**, 
and `physical-utilisation` to be on the same order of magnitude as 
`virtual-size`. _Result: ☐_
   
   3. Delete the cached template VDI on the primary storage. The 
full-clone-backed VM keeps running normally; a linked-clone-backed VM would 
lose data. Restore the template afterwards. _Result: ☐_
   
   4. Repeat (1)–(3) on a VMware pool to confirm the `vmware.create.full.clone` 
path is unaffected by the helper refactor. _Result: ☐_
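   The checks in scenarios (1) and (2) can be scripted. A sketch, assuming `sm-config:vhd-parent` is the distinguishing key in the `xe vdi-list` output; it is shown here against canned sample strings rather than a live host, so the exact sample values are illustrative:

   ```shell
   #!/bin/sh
   # Classify a VDI as linked or full clone from its sm-config string.
   # On a real host, the input would come from something like:
   #   xe vdi-list params=sm-config name-label=ROOT-<vmid> --minimal
   classify_vdi() {
       # $1 = the sm-config string for the VDI
       case "$1" in
           *vhd-parent*) echo "linked-clone" ;;
           *)            echo "full-clone" ;;
       esac
   }

   # Canned samples standing in for live xe output:
   linked_sample="vhd-parent: <parent-uuid>; host_<opaque-ref>: RW"
   full_sample="host_<opaque-ref>: RW"

   classify_vdi "$linked_sample"   # expected result before enabling the setting
   classify_vdi "$full_sample"     # expected result after enabling it
   ```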
   
   #### How did you try to break this feature and the system with this change?
   
   - **Mixed hypervisors:** verified that the XenServer branch of the 
dispatcher's `switch` only fires when `dataTO.getHypervisorType()` is 
`XenServer`; pools with `KVM`, `Hyperv`, etc. fall through to the default branch 
and `fullCloneFlag` is left untouched (linked-clone behaviour preserved).
   - **No hypervisor type:** if `Volume.getHypervisorType()` returns 
`HypervisorType.None` (storage pool not bound to a cluster with a known 
hypervisor), the dispatch falls through to the default branch and the volume is 
linked-cloned — safe default, no regression.
   - **VMware regression:** the original 
`addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest` is byte-for-byte 
unchanged; the seven call sites that previously called it now go through the 
dispatcher, which routes VMware traffic back into the same method.
   - **Managed storage path (out of scope, documented):** flows that route 
through `StorageSystemDataMotionStrategy` (e.g. SolidFire) bypass the 
dispatcher — same VMware-side limitation today, behaviour preserved.
   - **Legacy `CreateCommand` path (out of scope, documented):** 
`CitrixCreateCommandWrapper.execute(CreateCommand)` still always linked-clones, 
but `new CreateCommand(...)` has no production callers in this repository (only 
test code).
   - **Default value:** `false`, so the change is a no-op on upgrade until an 
operator opts in.
   

