[ https://issues.apache.org/jira/browse/CLOUDSTACK-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254519#comment-15254519 ]
ASF GitHub Bot commented on CLOUDSTACK-8745:
--------------------------------------------
Github user swill commented on a diff in the pull request:
https://github.com/apache/cloudstack/pull/713#discussion_r60792255
--- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
@@ -0,0 +1,229 @@
+#!/usr/bin/env python
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from nose.plugins.attrib import attr
+from marvin.cloudstackTestCase import cloudstackTestCase
+from marvin.cloudstackAPI import (enableStorageMaintenance,
+ cancelStorageMaintenance
+ )
+from marvin.lib.utils import (cleanup_resources,
+ validateList)
+from marvin.lib.base import (Account,
+ VirtualMachine,
+ ServiceOffering,
+ Cluster,
+ StoragePool,
+ Volume)
+from marvin.lib.common import (get_zone,
+ get_domain,
+ get_template,
+ list_hosts
+ )
+from marvin.codes import PASS
+
+
+def maintenance(self, storageid):
+ """enables maintenance mode of a Storage pool"""
+
+ cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
+ cmd.id = storageid
+ return self.api_client.enableStorageMaintenance(cmd)
+
+
+def cancelmaintenance(self, storageid):
+ """cancel maintenance mode of a Storage pool"""
+
+ cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
+ cmd.id = storageid
+ return self.api_client.cancelStorageMaintenance(cmd)
+
+
+class testHaPoolMaintenance(cloudstackTestCase):
+
+ @classmethod
+ def setUpClass(cls):
+ try:
+ cls._cleanup = []
+ cls.testClient = super(
+ testHaPoolMaintenance,
+ cls).getClsTestClient()
+ cls.api_client = cls.testClient.getApiClient()
+ cls.services = cls.testClient.getParsedTestDataConfig()
+ # Get Domain, Zone, Template
+ cls.domain = get_domain(cls.api_client)
+ cls.zone = get_zone(
+ cls.api_client,
+ cls.testClient.getZoneForTests())
+ cls.template = get_template(
+ cls.api_client,
+ cls.zone.id,
+ cls.services["ostype"]
+ )
+ cls.hypervisor = cls.testClient.getHypervisorInfo()
+ cls.services['mode'] = cls.zone.networktype
+ cls.services["virtual_machine"]["zoneid"] = cls.zone.id
+ cls.services["virtual_machine"]["template"] = cls.template.id
+ cls.clusterWithSufficientPool = None
+ clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
+
+ if not validateList(clusters)[0]:
+
+ cls.debug(
+ "check list cluster response for zone id %s" %
+ cls.zone.id)
+
+ for cluster in clusters:
+ cls.pool = StoragePool.list(cls.api_client,
+ clusterid=cluster.id,
+ keyword="NetworkFilesystem"
+ )
+
+ if not validateList(cls.pool)[0]:
+
+ cls.debug(
+ "check list cluster response for zone id %s" %
+ cls.zone.id)
+
+ if len(cls.pool) >= 2:
+ cls.clusterWithSufficientPool = cluster
+ break
+ if not cls.clusterWithSufficientPool:
+ return
+
+ cls.services["service_offerings"][
+ "tiny"]["offerha"] = "True"
+
+ cls.services_off = ServiceOffering.create(
+ cls.api_client,
+ cls.services["service_offerings"]["tiny"])
+ cls._cleanup.append(cls.services_off)
+
+ except Exception as e:
+ cls.tearDownClass()
+ raise Exception("Warning: Exception in setup : %s" % e)
+ return
+
+ def setUp(self):
+
+ self.apiClient = self.testClient.getApiClient()
+ self.dbclient = self.testClient.getDbConnection()
+ self.cleanup = []
+ if not self.clusterWithSufficientPool:
+ self.skipTest(
+ "sufficient storage not available in any cluster for zone
%s" %
+ self.zone.id)
+ self.account = Account.create(
--- End diff --
This is the only case I have tested so far. :)
```
# ./run_marvin_single_tests.sh /data/shared/marvin/mct-zone3-kvm3-kvm4.cfg
+ marvinCfg=/data/shared/marvin/mct-zone3-kvm3-kvm4.cfg
+ '[' -z /data/shared/marvin/mct-zone3-kvm3-kvm4.cfg ']'
+ cd /data/git/cs2/cloudstack/test/integration
+ nosetests --with-marvin --marvin-config=/data/shared/marvin/mct-zone3-kvm3-kvm4.cfg -s -a tags=advanced component/maint/test_ha_pool_maintenance.py
==== Marvin Init Started ====
=== Marvin Parse Config Successful ===
=== Marvin Setting TestData Successful===
==== Log Folder Path: /tmp//MarvinLogs//Apr_22_2016_21_33_55_KPLSCU. All logs will be available here ====
=== Marvin Init Logging Successful===
==== Marvin Init Successful ====
===final results are now copied to: /tmp//MarvinLogs/test_ha_pool_maintenance_74DSU4===
[root@cs2 cloudstack]# cat /tmp/MarvinLogs/test_ha_pool_maintenance_74DSU4/results.txt
put storage in maintenance mode and start ha vm and check usage ... SKIP: sufficient storage not available in any cluster for zone 47059670-62f8-46a4-9874-2a845f9d1b19
----------------------------------------------------------------------
Ran 1 test in 0.227s
OK (SKIP=1)
```
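The SKIP above is the setUpClass precondition kicking in: no cluster in that zone had at least two NFS primary storage pools. A minimal sketch of that precondition check on its own, useful for picking a suitable environment before running the test (it reuses the same Marvin calls the test makes; the helper name and the `apiclient`/`zoneid` arguments are placeholders, not part of the PR):
```
from marvin.lib.base import Cluster, StoragePool


def find_cluster_with_two_nfs_pools(apiclient, zoneid):
    """Return the first cluster with at least two NFS primary pools, else None."""
    clusters = Cluster.list(apiclient, zoneid=zoneid) or []
    for cluster in clusters:
        # Same filter the test uses: NFS ("NetworkFilesystem") pools in this cluster.
        pools = StoragePool.list(apiclient,
                                 clusterid=cluster.id,
                                 keyword="NetworkFilesystem") or []
        if len(pools) >= 2:
            return cluster
    return None
```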
> After a volume is migrated; the usage table still shows the old volume id
> -------------------------------------------------------------------------
>
> Key: CLOUDSTACK-8745
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8745
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the
> default.)
> Components: marvin, Usage
> Affects Versions: 4.5.0
> Reporter: prashant kumar mishra
> Assignee: prashant kumar mishra
>
> After a volume is migrated, the usage table still shows the old volume id.
> Steps to verify:
> ==========
> 1. Create an HA-enabled VM with both a root and a data disk
> 2. Add one more primary storage to the same cluster
> 3. Put the original (old) primary storage, which holds the root and data disks
> of the VM created in step 1, into maintenance mode
> 4. Start the VM that was stopped as part of step 3
> 5. Check for VOLUME.DELETE and VOLUME.CREATE events for the root disk and data
> disk in the usage_event table (a sketch of this check follows below)
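For step 5, a rough sketch of how that usage_event check could look from inside a Marvin test, using the `self.dbclient` connection the test already sets up; the column names (`type`, `resource_id`) and the `old_volume_id`/`new_volume_id` variables are assumptions for illustration, not values taken from the PR:
```
# Pull volume usage events from the usage_event table via Marvin's DB connection.
rows = self.dbclient.execute(
    "SELECT type, resource_id FROM usage_event "
    "WHERE type IN ('VOLUME.DELETE', 'VOLUME.CREATE')")
deleted = [r[1] for r in rows if r[0] == 'VOLUME.DELETE']
created = [r[1] for r in rows if r[0] == 'VOLUME.CREATE']
# old_volume_id / new_volume_id would be captured before and after the storage
# migration; the reported bug is that usage keeps pointing at the old id.
self.assertIn(old_volume_id, deleted, "expected a VOLUME.DELETE for the old volume")
self.assertIn(new_volume_id, created, "expected a VOLUME.CREATE for the new volume")
```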
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)