Copilot commented on code in PR #11633:
URL: https://github.com/apache/cloudstack/pull/11633#discussion_r2727273111


##########
engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java:
##########
@@ -2893,6 +2894,20 @@ protected void migrate(final VMInstanceVO vm, final long srcHostId, final Deploy
         }
     }
 
+    private void postMigrationRelease(VirtualMachineProfile profile, long vmId, long srcHostId) {
+        List<String> poolsToReleaseOnOrigin = volumeMgr.postMigrationReleaseDatastoresOnOriginHost(profile, vmId);
+        if (CollectionUtils.isEmpty(poolsToReleaseOnOrigin)) {
+            return;
+        }
+        // Unmount the datastores from this host only (useful for SolidFire 1:1 plugin zone-wide storage migrations)
+        UnmountDatastoresCommand cmd = new UnmountDatastoresCommand(poolsToReleaseOnOrigin);
+        try {
+            _agentMgr.send(srcHostId, cmd);
+        } catch (AgentUnavailableException | OperationTimedoutException e) {
+            throw new RuntimeException(e);

Review Comment:
   Throwing a RuntimeException wrapping AgentUnavailableException or 
OperationTimedoutException may cause the entire migration to fail unexpectedly. 
Consider logging the error and handling it gracefully instead of propagating it 
as a RuntimeException, since this is a cleanup operation that happens after the 
VM has successfully migrated. The migration has already succeeded at this point 
(line 2887), so failing here could leave the system in an inconsistent state.
   ```suggestion
            s_logger.warn("Failed to unmount datastores on origin host " + srcHostId +
                    " after migration of VM id=" + vmId + ". Datastores to unmount: " + poolsToReleaseOnOrigin, e);
   ```



##########
plugins/storage/volume/solidfire/src/main/java/org/apache/cloudstack/storage/datastore/driver/SolidFirePrimaryDataStoreDriver.java:
##########
@@ -118,6 +119,7 @@ public class SolidFirePrimaryDataStoreDriver implements PrimaryDataStoreDriver {
     @Inject private VolumeDao volumeDao;
     @Inject private VolumeDetailsDao volumeDetailsDao;
     @Inject private VolumeDataFactory volumeFactory;
+    @Inject private AgentManager agentManager;

Review Comment:
   The AgentManager is injected but never used in this class. This appears to 
be an unnecessary import and injection. Consider removing this if it's not 
needed for the functionality being implemented.



##########
plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java:
##########
@@ -4645,6 +4655,29 @@ protected Answer execute(PrepareForMigrationCommand cmd) {
         }
     }
 
+    private void prepareDatastoreForZoneWideManagedStorageInterClusterMigration(DiskTO disk, VmwareHypervisorHost hyperHost) throws Exception {
+        Map<String, String> details = disk.getDetails();
+        if (MapUtils.isEmpty(details) || !details.containsKey(DiskTO.SCOPE) ||
+                !details.containsKey(DiskTO.MANAGED) && !details.containsKey(DiskTO.INTER_CLUSTER_MIGRATION)) {

Review Comment:
   The boolean logic in this condition is incorrect. Because `&&` binds tighter
than `||` in Java, the expression `!details.containsKey(DiskTO.MANAGED) &&
!details.containsKey(DiskTO.INTER_CLUSTER_MIGRATION)` only triggers the early
return when BOTH keys are missing. It should be
`(!details.containsKey(DiskTO.MANAGED) ||
!details.containsKey(DiskTO.INTER_CLUSTER_MIGRATION))`, with OR instead of AND.
   
   With the current logic, if exactly one of the two keys is present the method
does not return early, but the presence checks on lines 4666-4668 then evaluate
to false anyway, so the early return is ineffective in that case.
   
   The intended logic is: return early if SCOPE is missing, or if either
MANAGED or INTER_CLUSTER_MIGRATION is missing.
   ```suggestion
                (!details.containsKey(DiskTO.MANAGED) || !details.containsKey(DiskTO.INTER_CLUSTER_MIGRATION))) {
   ```



##########
plugins/storage/volume/solidfire/src/main/java/org/apache/cloudstack/storage/datastore/driver/SolidFirePrimaryDataStoreDriver.java:
##########
@@ -25,6 +25,7 @@
 
 import javax.inject.Inject;
 
+import com.cloud.agent.AgentManager;

Review Comment:
   The `AgentManager` import is unused: the field it supports is injected but
never referenced anywhere in this class. Consider removing this import along
with the unused field injection.
   ```suggestion
   
   ```



##########
plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java:
##########
@@ -4645,6 +4655,29 @@ protected Answer execute(PrepareForMigrationCommand cmd) {
         }
     }
 
+    private void prepareDatastoreForZoneWideManagedStorageInterClusterMigration(DiskTO disk, VmwareHypervisorHost hyperHost) throws Exception {
+        Map<String, String> details = disk.getDetails();
+        if (MapUtils.isEmpty(details) || !details.containsKey(DiskTO.SCOPE) ||
+                !details.containsKey(DiskTO.MANAGED) && !details.containsKey(DiskTO.INTER_CLUSTER_MIGRATION)) {
+            return;
+        }
+
+        VmwareContext context = hyperHost.getContext();
+        boolean isManaged = details.containsKey(DiskTO.MANAGED) && BooleanUtils.toBoolean(details.get(DiskTO.MANAGED));
+        boolean isZoneWideStorage = details.containsKey(DiskTO.SCOPE) && details.get(DiskTO.SCOPE).equalsIgnoreCase(ScopeType.ZONE.name());
+        boolean isInterClusterMigration = details.containsKey(DiskTO.INTER_CLUSTER_MIGRATION) && BooleanUtils.toBoolean(details.get(DiskTO.INTER_CLUSTER_MIGRATION));
+
+        if (isManaged && isZoneWideStorage && isInterClusterMigration) {
+            s_logger.debug(String.format("Preparing storage on destination cluster for host %s", hyperHost.getHyperHostName()));
+            String iScsiName = details.get(DiskTO.IQN); // details should not be null for managed storage (it may or may not be null for non-managed storage)
+            String datastoreName = VmwareResource.getDatastoreName(iScsiName);
+            s_logger.debug(String.format("Ensuring datastore %s is mounted on destination cluster", datastoreName));
+            _storageProcessor.prepareManagedDatastore(context, hyperHost, datastoreName,
+                    details.get(DiskTO.IQN), details.get(DiskTO.STORAGE_HOST),
+                    Integer.parseInt(details.get(DiskTO.STORAGE_PORT)));

Review Comment:
   There's a potential NumberFormatException if 
details.get(DiskTO.STORAGE_PORT) returns null or a non-numeric value. While the 
comment on line 4672 suggests details should not be null for managed storage, 
there's no explicit validation that STORAGE_PORT is present or valid before 
parsing. Consider adding a null check and validation before calling 
Integer.parseInt().
   ```suggestion
            String storagePortString = details.get(DiskTO.STORAGE_PORT);
            if (StringUtils.isBlank(storagePortString) || !NumberUtils.isDigits(storagePortString)) {
                s_logger.warn(String.format("Skipping preparation of managed datastore %s due to invalid storage port value: %s",
                        datastoreName, storagePortString));
                return;
            }
            int storagePort = Integer.parseInt(storagePortString);
            _storageProcessor.prepareManagedDatastore(context, hyperHost, datastoreName,
                    details.get(DiskTO.IQN), details.get(DiskTO.STORAGE_HOST), storagePort);
   ```
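   A self-contained sketch of the validation this suggestion describes
(`parsePortOrNull` is a hypothetical helper, not part of the PR; it uses plain
JDK checks in place of `StringUtils.isBlank` and `NumberUtils.isDigits`):

```java
public class PortParseDemo {
    /** Returns the port as an Integer, or null for any invalid input. */
    static Integer parsePortOrNull(String raw) {
        // Reject null, blank, non-digit, and over-long values up front so
        // Integer.parseInt below can never throw NumberFormatException.
        if (raw == null || raw.isBlank() || raw.length() > 5
                || !raw.chars().allMatch(Character::isDigit)) {
            return null;
        }
        int port = Integer.parseInt(raw);
        return (port >= 1 && port <= 65535) ? port : null; // valid TCP port range
    }
}
```

   With this in place the caller can log a warning and skip datastore
preparation on a null result, instead of letting `Integer.parseInt(null)`
abort the operation with a NumberFormatException.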



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
