Copilot commented on code in PR #12321:
URL: https://github.com/apache/cloudstack/pull/12321#discussion_r2645052127


##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -2941,16 +2709,14 @@ public Pair<Boolean, String> checkIfReadyVolumeFitsInStoragePoolWithStorageAcces
                     List<String> srcStorageAccessGroups = _storagePoolAccessGroupMapDao.getStorageAccessGroups(srcPoolId);
                     List<String> destStorageAccessGroups = _storagePoolAccessGroupMapDao.getStorageAccessGroups(destPool.getId());
 
-                    logger.debug(String.format("Storage access groups on source storage %s are %s and destination storage %s are %s",
-                            srcPool, srcStorageAccessGroups, destPool, destStorageAccessGroups));
+                    logger.debug("Storage access groups on source storage {} are {} and destination storage {} are {}",
+                            srcPool, srcStorageAccessGroups, destPool, destStorageAccessGroups);
 
                     if (CollectionUtils.isEmpty(srcStorageAccessGroups) && CollectionUtils.isEmpty(destStorageAccessGroups)) {
                         return new Pair<>(true, "Success");
                     }
 
                     if (CollectionUtils.isNotEmpty(srcStorageAccessGroups) && CollectionUtils.isNotEmpty(destStorageAccessGroups)) {
-                        List<String> intersection = new ArrayList<>(srcStorageAccessGroups);
-                        intersection.retainAll(destStorageAccessGroups);
 

Review Comment:
   The removed lines computed the intersection of the storage access groups on the source and destination pools. That computation no longer appears in the updated code: the surrounding conditions are still checked, but the common access groups are never determined, and that check was likely what decided whether migration between the two pools is possible. Please verify this validation was not dropped unintentionally.
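   If the check is still needed, it is essentially a retainAll() over the two lists. A minimal, self-contained sketch of that pattern (illustrative only; the class and method names are made up and the empty-intersection handling is an assumption, not the PR's code):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Illustrative sketch of the intersection check the removed lines performed.
   public class StorageAccessGroupIntersection {

       static boolean haveCommonAccessGroup(List<String> srcStorageAccessGroups,
                                            List<String> destStorageAccessGroups) {
           List<String> intersection = new ArrayList<>(srcStorageAccessGroups);
           intersection.retainAll(destStorageAccessGroups); // keep only the groups present on both pools
           return !intersection.isEmpty();
       }

       public static void main(String[] args) {
           System.out.println(haveCommonAccessGroup(List.of("sag1", "sag2"), List.of("sag2", "sag3"))); // true
           System.out.println(haveCommonAccessGroup(List.of("sag1"), List.of("sag3")));                 // false
       }
   }
   ```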



##########
api/src/main/java/org/apache/cloudstack/api/command/admin/swift/AddSwiftCmd.java:
##########
@@ -83,25 +82,20 @@ public long getEntityOwnerId() {
 
     @Override
     public void execute() {
-        Map<String, String> dm = new HashMap<String, String>();
+        Map<String, String> dm = new HashMap<>();
         dm.put(ApiConstants.ACCOUNT, getAccount());
         dm.put(ApiConstants.USERNAME, getUsername());
         dm.put(ApiConstants.KEY, getKey());
 
-        try{
-            ImageStore result = _storageService.discoverImageStore(null, getUrl(), "Swift", null, dm);
-            ImageStoreResponse storeResponse = null;
-            if (result != null) {
-                storeResponse = _responseGenerator.createImageStoreResponse(result);
-                storeResponse.setResponseName(getCommandName());
-                storeResponse.setObjectName("secondarystorage");
-                setResponseObject(storeResponse);
-            } else {
-                throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to add Swift secondary storage");
-            }
-        } catch (DiscoveryException ex) {
-            logger.warn("Exception: ", ex);
-            throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, ex.getMessage());
+        ImageStore result = _storageService.discoverImageStore(null, getUrl(), "Swift", null, dm);
+        ImageStoreResponse storeResponse;
+        if (result != null) {
+            storeResponse = _responseGenerator.createImageStoreResponse(result);
+            storeResponse.setResponseName(getCommandName());
+            storeResponse.setObjectName("secondarystorage");
+            setResponseObject(storeResponse);
+        } else {
+            throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to add Swift secondary storage");
         }

Review Comment:
   Removing the try-catch changes the error handling: the previous code caught DiscoveryException and wrapped it in a ServerApiException. Since DiscoveryException is a checked exception and was also removed from the throws clause in StorageService.java, the catch block could not have stayed as written (it would no longer compile), so its removal is consistent with the interface change. Still, verify that the implementation really no longer throws it and that discovery failures continue to surface as a meaningful API error rather than an unhandled runtime exception.
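   For context, a small standalone illustration (stand-in types, not the real com.cloud.* classes) of why the catch block has to go once the throws clause is dropped:

   ```java
   // Stand-in types for illustration only; not the CloudStack classes.
   public class CheckedExceptionIllustration {

       static class DiscoveryException extends Exception { }

       // After the interface change: discoverImageStore no longer declares DiscoveryException.
       static String discoverImageStore(String url) {
           return "image store at " + url;
       }

       public static void main(String[] args) {
           // try {
           //     discoverImageStore("swift://example");
           // } catch (DiscoveryException ex) {   // javac: exception is never thrown in the corresponding try block
           // }
           System.out.println(discoverImageStore("swift://example"));
       }
   }
   ```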



##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -1683,7 +1519,7 @@ public boolean deletePool(DeletePoolCmd cmd) {
         }
 
         if (sPool.getPoolType() == StoragePoolType.DatastoreCluster) {
-            // FR41 yet to handle on failure of deletion of any of the child storage pool
+            // FR41 yet to handle on failure of deletion any child storage pool

Review Comment:
   The grammar error "deletion any child storage pool" should be corrected to 
"deletion of any child storage pool".
   ```suggestion
               // FR41 yet to handle on failure of deletion of any child storage pool
   ```



##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -2119,30 +1953,30 @@ public void cleanupStorage(boolean recurring) {
                         if (logger.isDebugEnabled()) {
                             snapshot = _snapshotDao.findById(snapshotDataStoreVO.getSnapshotId());
                             if (snapshot == null) {
-                                logger.warn(String.format("Did not find snapshot [ID: %d] for which store reference is in destroying state; therefore, it cannot be destroyed.", snapshotDataStoreVO.getSnapshotId()));
+                                logger.warn("Did not find snapshot [ID: {}] for which store reference is in destroying state; therefore, it cannot be destroyed.",
+                                        snapshotDataStoreVO.getSnapshotId());
                                 continue;
                             }
                             snapshotUuid = snapshot.getUuid();
                         }
 
                         try {
                             if (logger.isDebugEnabled()) {
-                                logger.debug(String.format("Verifying if snapshot [%s] is in destroying state in %s data store ID: %d.", snapshotUuid, storeRole, snapshotDataStoreVO.getDataStoreId()));
+                                logger.debug("Verifying if snapshot [{}] is in destroying state in {} data store ID: {}.",
+                                        snapshotUuid, storeRole, snapshotDataStoreVO.getDataStoreId());
                             }
                             SnapshotInfo snapshotInfo = snapshotFactory.getSnapshot(snapshotDataStoreVO.getSnapshotId(), snapshotDataStoreVO.getDataStoreId(), snapshotDataStoreVO.getRole());
                             if (snapshotInfo != null) {
-                                if (logger.isDebugEnabled()) {
-                                    logger.debug(String.format("Snapshot [%s] in destroying state found in %s data store [%s]; therefore, it will be destroyed.", snapshotUuid, storeRole, snapshotInfo.getDataStore().getUuid()));
-                                }
+                                logger.debug("Snapshot [{}] in destroying state found in %s data store [{}]; therefore, it will be destroyed.",
+                                        snapshotUuid, storeRole, snapshotInfo.getDataStore().getUuid());

Review Comment:
   The direct logger.debug() call on line 1970 is no longer wrapped in an if (logger.isDebugEnabled()) check, which is inconsistent with the immediately preceding conditional debug logging on lines 1964-1967 (parameterized logging makes the guard mostly unnecessary, but mixing both styles in the same block is confusing). More importantly, the new format string still contains a leftover `%s` placeholder ("in %s data store"), so the literal "%s" would appear in the log and the arguments would be substituted into the wrong positions.
   ```suggestion
                                   if (logger.isDebugEnabled()) {
                                       logger.debug("Snapshot [{}] in destroying state found in {} data store [{}]; therefore, it will be destroyed.",
                                               snapshotUuid, storeRole, snapshotInfo.getDataStore().getUuid());
                                   }
   ```



##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -3569,7 +3324,7 @@ public boolean storagePoolHasEnoughSpace(List<Pair<Volume, DiskProfile>> volumeD
                 volumeVO = volumeDao.findById(volume.getId());
             }
 
-            // this if statement should resolve to true at most once per execution of the for loop its contained within (for a root disk that is
+            // this if statement should resolve to true at most once per execution of the for loop it's contained within (for a root disk that is

Review Comment:
   This change is actually correct: the sentence means "the for loop it is contained within", and "it's" is the contraction of "it is"; the possessive "its" in the old comment was the error. If the contraction reads awkwardly in a code comment, it could be spelled out instead:
   ```suggestion
               // this if statement should resolve to true at most once per execution of the for loop it is contained within (for a root disk that is
   ```



##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -4698,9 +4451,9 @@ public ObjectStore discoverObjectStore(String name, String url, Long size, Strin
             store = lifeCycle.initialize(params);
         } catch (Exception e) {
             if (logger.isDebugEnabled()) {
-                logger.debug("Failed to add object store: " + e.getMessage(), e);
+                logger.debug("Failed to add object store: {}", e.getMessage(), e);
             }
-            throw new CloudRuntimeException("Failed to add object store: " + e.getMessage(), e);
+            throw new CloudRuntimeException("Failed to add object store: {}" + e.getMessage(), e);

Review Comment:
   The exception message now contains a stray "{}" placeholder. Unlike the logger call above it, CloudRuntimeException does not perform SLF4J-style formatting, so "Failed to add object store: {}" + e.getMessage() produces a message with a literal "{}" in front of the exception text. Keep plain string concatenation here and drop the placeholder.
   ```suggestion
               throw new CloudRuntimeException("Failed to add object store: " + e.getMessage(), e);
   ```



##########
plugins/storage/image/swift/src/main/java/org/apache/cloudstack/storage/datastore/driver/SwiftImageStoreDriverImpl.java:
##########
@@ -83,15 +80,11 @@ public String createEntityExtractUrl(DataStore store, String installPath, ImageF
         String containerName = SwiftUtil.getContainerName(dataObject.getType().toString(), dataObject.getId());
         String objectName = installPath.split("\\/")[1];
         // Get extract url expiration interval set in global configuration (in seconds)
-        int urlExpirationInterval = Integer.parseInt(_configDao.getValue(Config.ExtractURLExpirationInterval.toString()));
+        int urlExpirationInterval = ConfigurationManager.ExtractURLExpirationInterval.value();
 
         URL swiftUrl = SwiftUtil.generateTempUrl(swiftTO, containerName, objectName, tempKey, urlExpirationInterval);
-        if (swiftUrl != null) {
-            logger.debug("Swift temp-url: " + swiftUrl.toString());
-            return swiftUrl.toString();
-        }
-
-        throw new CloudRuntimeException("Unable to create extraction URL");
+        logger.debug("Swift temp-url: {}", swiftUrl);
+        return swiftUrl.toString();

Review Comment:
   The removal of the null check for swiftUrl can cause a NullPointerException. The previous code checked swiftUrl for null and threw a CloudRuntimeException otherwise; the new code calls swiftUrl.toString() unconditionally, which throws an NPE if SwiftUtil.generateTempUrl() returns null. Either guarantee that generateTempUrl() never returns null, or restore the previous guard, for example:
   ```suggestion
           if (swiftUrl == null) {
               throw new CloudRuntimeException("Unable to create extraction URL");
           }
           return swiftUrl.toString();
   ```



##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -997,7 +855,7 @@ public PrimaryDataStoreInfo createPool(CreateStoragePoolCmd cmd) throws Resource
             throw new InvalidParameterValueException("zone id can't be null, if scope is zone");
         }
 
-        HypervisorType hypervisorType = HypervisorType.KVM;
+        HypervisorType hypervisorType; // defaults to HypervisorType.KVM

Review Comment:
   The comment on line 858 is misleading. The variable is now declared without 
initialization, so it doesn't "default" to HypervisorType.KVM. It will be 
assigned a value later in the conditional blocks starting at line 859. Consider 
removing this comment or updating it to accurately reflect that the 
hypervisorType will be assigned based on the scope.
   ```suggestion
           HypervisorType hypervisorType; // will be set based on the provided scope/hypervisor
   ```



##########
engine/components-api/src/main/java/com/cloud/storage/StorageManager.java:
##########
@@ -351,24 +378,19 @@ static Boolean getFullCloneConfiguration(Long storeId) {
 
     /**
      * This comment is relevant to managed storage only.
-     *
      *  Long clusterId = only used for managed storage
-     *
      *  Some managed storage can be more efficient handling VM templates (via cloning) if it knows the capabilities of the compute cluster it is dealing with.
      *  If the compute cluster supports UUID resigning and the storage system can clone a volume from a volume, then this determines how much more space a
      *  new root volume (that makes use of a template) will take up on the storage system.
-     *
      *  For example, if a storage system can clone a volume from a volume and the compute cluster supports UUID resigning (relevant for hypervisors like
      *  XenServer and ESXi that put virtual disks in clustered file systems), then the storage system will need to determine if it already has a copy of
      *  the template or if it will need to create one first before cloning the template to a new volume to be used for the new root disk (assuming the root
-     *  disk is being deployed from a template). If the template doesn't already exists on the storage system, then you need to take into consideration space
+     *  disk is being deployed from a template). If the template doesn't already exist on the storage system, then you need to take into consideration space

Review Comment:
   The grammar error "doesn't already exists" should be corrected to "doesn't already exist" (the bare form "exist" is required after the auxiliary "doesn't").



##########
api/src/main/java/org/apache/cloudstack/api/command/admin/storage/UpdateStoragePoolCmd.java:
##########
@@ -131,7 +131,7 @@ public ApiCommandResourceType getApiResourceType() {
         return ApiCommandResourceType.StoragePool;
     }
 
-    public Map<String,String> getDetails() {
+    public Map<String,Object> getDetails() {

Review Comment:
   The return type of getDetails() has changed from Map<String,String> to Map<String,Object>. This is a breaking API change for callers that expect Map<String,String>. Verify that all callers of this method have been updated to handle the new return type, particularly the extractApiParamAsMap method in StorageManagerImpl, which now expects Map<String,Object>.
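   For reference, a caller that still needs string values would have to convert explicitly. A minimal, self-contained sketch (not the PR's code; the stringification policy shown is an assumption):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative only: adapting a consumer of the old Map<String,String> details
   // to the new Map<String,Object> returned by getDetails().
   public class DetailsAdapterExample {

       static Map<String, String> toStringDetails(Map<String, Object> details) {
           Map<String, String> out = new HashMap<>();
           for (Map.Entry<String, Object> e : details.entrySet()) {
               out.put(e.getKey(), e.getValue() == null ? null : String.valueOf(e.getValue()));
           }
           return out;
       }

       public static void main(String[] args) {
           Map<String, Object> details = new HashMap<>();
           details.put("capacityBytes", 1073741824L);
           System.out.println(toStringDetails(details)); // {capacityBytes=1073741824}
       }
   }
   ```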



##########
engine/components-api/src/main/java/com/cloud/storage/StorageManager.java:
##########
@@ -232,6 +232,43 @@ public interface StorageManager extends StorageService {
             "Storage", "true", "Allow SSVMs to try copying public templates 
from one secondary storage to another instead of downloading them from the 
source.",
             true, ConfigKey.Scope.Zone, null);
 
+    ConfigKey<Integer> VmDiskThrottlingIopsReadRate = new ConfigKey<>(
+            Integer.class,
+            "vm.disk.throttling.iops_read_rate",
+            "Storage",
+            "0",
+            "Default disk I/O read rate in requests per second allowed in User vm's disk.",
+            true,
+            ConfigKey.Scope.Global,
+            null);
+    ConfigKey<Integer> VmDiskThrottlingIopsWriteRate = new ConfigKey<>(
+            Integer.class,
+            "vm.disk.throttling.iops_write_rate",
+            "Storage",
+            "0",
+            "Default disk I/O writerate in requests per second allowed in User vm's disk.",
+            true,
+            ConfigKey.Scope.Global,
+            null);
+    ConfigKey<Integer> VmDiskThrottlingBytesReadRate = new ConfigKey<>(
+            Integer.class,
+            "vm.disk.throttling.bytes_read_rate",
+            "Storage",
+            "0",
+            "Default disk I/O read rate in bytes per second allowed in User vm's disk.",
+            true,
+            ConfigKey.Scope.Global,
+            null);
+    ConfigKey<Integer> VmDiskThrottlingBytesWriteRate = new ConfigKey<>(
+            Integer.class,
+            "vm.disk.throttling.bytes_write_rate",
+            "Advanced",

Review Comment:
   The category for this ConfigKey should be "Storage" to be consistent with 
the other VM disk throttling ConfigKeys defined above (lines 235-262). 
Currently, it's set to "Advanced" while the others use "Storage".
   ```suggestion
               "Storage",
   ```



##########
api/src/main/java/org/apache/cloudstack/api/command/admin/storage/AddImageStoreS3CMD.java:
##########
@@ -48,15 +48,14 @@
 import org.apache.cloudstack.api.response.ImageStoreResponse;
 
 import com.cloud.exception.ConcurrentOperationException;
-import com.cloud.exception.DiscoveryException;
 import com.cloud.exception.InsufficientCapacityException;
 import com.cloud.exception.NetworkRuleConflictException;
 import com.cloud.exception.ResourceAllocationException;
 import com.cloud.exception.ResourceUnavailableException;
 import com.cloud.storage.ImageStore;
 
-@APICommand(name = "addImageStoreS3", description = "Adds S3 Image Store", responseObject = ImageStoreResponse.class, since = "4.7.0",
-        requestHasSensitiveInfo = true, responseHasSensitiveInfo = false)
+@APICommand(name = "addImageStoreS3", description = "Adds S3 Image Store", responseObject = ImageStoreResponse.class,
+        since = "4.7.0", responseHasSensitiveInfo = false)

Review Comment:
   Removing requestHasSensitiveInfo = true from the @APICommand annotation drops the marker that this command's request carries sensitive information (access keys, secret keys). If any downstream processing relies on this metadata to handle sensitive request data appropriately, removing the flag could have security implications. Verify that this removal is intentional and that sensitive parameter handling is maintained through other means.
   ```suggestion
           since = "4.7.0", requestHasSensitiveInfo = true, responseHasSensitiveInfo = false)
   ```



##########
server/src/main/java/com/cloud/storage/StorageManagerImpl.java:
##########
@@ -1288,11 +1131,11 @@ public PrimaryDataStoreInfo updateStoragePool(UpdateStoragePoolCmd cmd) throws I
 
             if (dataStoreLifeCycle instanceof PrimaryDataStoreLifeCycle) {
                 if (updatedCapacityBytes != null) {
-                    details.put(PrimaryDataStoreLifeCycle.CAPACITY_BYTES, updatedCapacityBytes != null ? String.valueOf(updatedCapacityBytes) : null);
+                    details.put(PrimaryDataStoreLifeCycle.CAPACITY_BYTES, String.valueOf(updatedCapacityBytes));
                     _storagePoolDao.updateCapacityBytes(id, updatedCapacityBytes);
                 }
                 if (updatedCapacityIops != null) {
-                    details.put(PrimaryDataStoreLifeCycle.CAPACITY_IOPS, updatedCapacityIops != null ? String.valueOf(updatedCapacityIops) : null);
+                    details.put(PrimaryDataStoreLifeCycle.CAPACITY_IOPS, String.valueOf(updatedCapacityIops));

Review Comment:
   The null ternaries removed on lines 1134 and 1138 only mattered if updatedCapacityBytes or updatedCapacityIops could be null at that point, in which case String.valueOf() would store the literal string "null". The surrounding if checks on lines 1133 and 1137 already guarantee the values are non-null, so the ternaries were redundant defensive code and removing them is safe, just slightly less defensive against future refactoring.
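   For completeness, a standalone snippet showing what the removed guard was protecting against:

   ```java
   public class ValueOfNullExample {
       public static void main(String[] args) {
           Long updatedCapacityBytes = null;
           // String.valueOf on a null Long yields the four-character string "null",
           // which is what the removed ternaries were guarding against.
           System.out.println(String.valueOf(updatedCapacityBytes)); // prints: null
           // Inside an `if (updatedCapacityBytes != null)` block that guard is redundant.
       }
   }
   ```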



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
