[jira] [Created] (CLOUDSTACK-9870) Xenserver no longer leaves the most recent snapshot in primary for incremental snapshots

2017-04-11 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9870:
--

 Summary: Xenserver no longer leaves the most recent snapshot in 
primary for incremental snapshots
 Key: CLOUDSTACK-9870
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9870
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Snapshot
Affects Versions: 4.9.0.1
 Environment: Cloudstack with xenserver
Reporter: subhash yedugundla


CloudStack always keeps the most recent snapshot in primary storage; the rest 
are removed once they are copied to secondary storage. Currently this behaviour 
has changed: even the most recent snapshot is removed, so every new snapshot 
becomes a full snapshot.
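The intended retention rule can be sketched as follows (a minimal Python sketch of the behaviour described above, not CloudStack's actual Java code; the function name is hypothetical):

```python
def prune_primary_snapshots(snapshots_on_primary):
    """Return the snapshots that may be deleted from primary storage
    after they have been copied to secondary storage.

    The most recent snapshot (last in the oldest-to-newest list) is
    kept on primary so the next snapshot can be taken incrementally
    against it; deleting it forces every subsequent snapshot to be
    a full one, which is the regression described in this issue."""
    if not snapshots_on_primary:
        return []
    return snapshots_on_primary[:-1]  # keep only the newest on primary
```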



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9870) Xenserver no longer leaves the most recent snapshot in primary for incremental snapshots

2017-04-11 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965449#comment-15965449
 ] 

subhash yedugundla commented on CLOUDSTACK-9870:


The following PR should be taken care of once this issue is fixed:
https://github.com/apache/cloudstack/pull/1740

> Xenserver no longer leaves the most recent snapshot in primary for 
> incremental snapshots
> 
>
> Key: CLOUDSTACK-9870
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9870
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot
>Affects Versions: 4.9.0.1
> Environment: Cloudstack with xenserver
>Reporter: subhash yedugundla
>
> CloudStack always keeps the most recent snapshot in primary storage; the 
> rest are removed once they are copied to secondary storage. Currently this 
> behaviour has changed: even the most recent snapshot is removed, so every 
> new snapshot becomes a full snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9889) Dedicate guest vlan ip range to a domain

2017-04-23 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9889:
--

 Summary: Dedicate guest vlan ip range to a domain
 Key: CLOUDSTACK-9889
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9889
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: subhash yedugundla






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CLOUDSTACK-9889) Dedicate guest vlan ip range to a domain

2017-04-25 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-9889:
---
Description: 
List of scenarios covered

*Dedication of guest VLAN range*
Admin should be allowed to dedicate guest VLAN ranges to a domain.
While dedicating a guest VLAN range:
If the range overlaps with any of the existing dedicated ranges, extend the 
existing dedicated range; otherwise add it as a new dedicated guest VLAN range.
If the range does not exist in the system, the request should fail.
If any VLAN in the range is in use by a network that belongs to an account of 
a different domain, the request should fail.

*Releasing guest VLAN range*
Admin should be allowed to release a guest VLAN range that is dedicated to a 
domain back to the system pool.
If the range is not dedicated to the domain, the request should fail.
Even if one or more of the VLANs belonging to the range is in use by a network, 
the range should be released back to the system pool.
The VLANs that are in use should continue to be used by the network.

*Network implementation*
If the network belongs to an account that has a dedicated range of VLANs, a 
VLAN from the account's dedicated range should be allocated to the network.
If an account uses up all of its dedicated VLANs, the next network created for 
the account should be assigned a VLAN that belongs to the domain pool.
Otherwise, a VLAN should be allocated from the free pool, i.e. the VLAN range 
belonging to the zone.

*Network creation*
If a VLAN id is specified and it belongs to a dedicated VLAN range of a 
different account/domain, the creation should fail.
If a VLAN id is specified and it belongs to the system pool but the network 
owner has a dedicated range of VLANs, the creation should fail.

*Domain deletion*
Guest IP ranges and VLANs dedicated to the domain should be released back to 
the free pool.

*Account deletion*
When an account is deleted, the IP ranges and VLANs associated with the account 
should be moved to the system pool.
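The "extend an overlapping range, otherwise add a new one" rule from the dedication scenario can be sketched like this (illustrative Python with hypothetical names; CloudStack itself implements this in Java):

```python
def dedicate_range(existing, new_range):
    """existing: list of (start, end) VLAN ranges already dedicated to
    the domain. If new_range overlaps (or is adjacent to) one of them,
    extend that range in place; otherwise append new_range as a new
    dedicated guest VLAN range."""
    start, end = new_range
    for i, (s, e) in enumerate(existing):
        if start <= e + 1 and end >= s - 1:  # overlap or adjacency
            existing[i] = (min(s, start), max(e, end))
            return existing
    existing.append(new_range)
    return existing
```

Validation steps from the scenario list (range must exist in the system, no VLAN in use by another domain's network) would run before this merge.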

> Dedicate guest vlan ip range to a domain
> 
>
> Key: CLOUDSTACK-9889
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9889
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: subhash yedugundla
>
> List of scenarios covered
> *Dedication of guest VLAN range*
> Admin should be allowed to dedicate guest VLAN ranges to a domain.
> While dedicating a guest VLAN range:
> If the range overlaps with any of the existing dedicated ranges, extend the 
> existing dedicated range; otherwise add it as a new dedicated guest VLAN range.
> If the range does not exist in the system, the request should fail.
> If any VLAN in the range is in use by a network that belongs to an account of 
> a different domain, the request should fail.
> *Releasing guest VLAN range*
> Admin should be allowed to release a guest VLAN range that is dedicated to a 
> domain back to the system pool.
> If the range is not dedicated to the domain, the request should fail.
> Even if one or more of the VLANs belonging to the range is in use by a 
> network, the range should be released back to the system pool.
> The VLANs that are in use should continue to be used by the network.
> *Network implementation*
> If the network belongs to an account that has a dedicated range of VLANs, a 
> VLAN from the account's dedicated range should be allocated to the network.
> If an account uses up all of its dedicated VLANs, the next network created 
> for the account should be assigned a VLAN that belongs to the domain pool.
> Otherwise, a VLAN should be allocated from the free pool, i.e. the VLAN range 
> belonging to the zone.
> *Network creation*
> If a VLAN id is specified and it belongs to a dedicated VLAN range of a 
> different account/domain, the creation should fail.
> If a VLAN id is specified and it belongs to the system pool but the network 
> owner has a dedicated range of VLANs, the creation should fail.
> *Domain deletion*
> Guest IP ranges and VLANs dedicated to the domain should be released back to 
> the free pool.
> *Account deletion*
> When an account is deleted, the IP ranges and VLANs associated with the 
> account should be moved to the system pool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CLOUDSTACK-9889) Dedicate guest vlan range to a domain

2017-04-25 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-9889:
---
Summary: Dedicate guest vlan  range to a domain  (was: Dedicate guest vlan 
ip range to a domain)

> Dedicate guest vlan  range to a domain
> --
>
> Key: CLOUDSTACK-9889
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9889
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: subhash yedugundla
>
> List of scenarios covered
> *Dedication of guest VLAN range*
> Admin should be allowed to dedicate guest VLAN ranges to a domain.
> While dedicating a guest VLAN range:
> If the range overlaps with any of the existing dedicated ranges, extend the 
> existing dedicated range; otherwise add it as a new dedicated guest VLAN range.
> If the range does not exist in the system, the request should fail.
> If any VLAN in the range is in use by a network that belongs to an account of 
> a different domain, the request should fail.
> *Releasing guest VLAN range*
> Admin should be allowed to release a guest VLAN range that is dedicated to a 
> domain back to the system pool.
> If the range is not dedicated to the domain, the request should fail.
> Even if one or more of the VLANs belonging to the range is in use by a 
> network, the range should be released back to the system pool.
> The VLANs that are in use should continue to be used by the network.
> *Network implementation*
> If the network belongs to an account that has a dedicated range of VLANs, a 
> VLAN from the account's dedicated range should be allocated to the network.
> If an account uses up all of its dedicated VLANs, the next network created 
> for the account should be assigned a VLAN that belongs to the domain pool.
> Otherwise, a VLAN should be allocated from the free pool, i.e. the VLAN range 
> belonging to the zone.
> *Network creation*
> If a VLAN id is specified and it belongs to a dedicated VLAN range of a 
> different account/domain, the creation should fail.
> If a VLAN id is specified and it belongs to the system pool but the network 
> owner has a dedicated range of VLANs, the creation should fail.
> *Domain deletion*
> Guest IP ranges and VLANs dedicated to the domain should be released back to 
> the free pool.
> *Account deletion*
> When an account is deleted, the IP ranges and VLANs associated with the 
> account should be moved to the system pool.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-10040) Upload volume fails when management server can not reach the URL

2017-08-08 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-10040:
---

 Summary: Upload volume fails when management server can not reach 
the URL
 Key: CLOUDSTACK-10040
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10040
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Volumes
Affects Versions: 4.10.0.0
Reporter: subhash yedugundla


Uploading a volume from the internet fails when the management server cannot 
reach the URL.

During volume upload, the validity of the URL is first checked from the 
management server, and the volume is then downloaded to secondary storage 
through the SSVM. There are customer environments where the management server 
does not have internet access (a possibility under some customers' security 
policies). The upload then fails even though the SSVM has an internet 
connection.
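One way to express the fix the report implies is a small decision helper (a hypothetical sketch, not CloudStack code): prefer the component that actually performs the download.

```python
def choose_url_checker(mgmt_has_internet, ssvm_reachable):
    """Decide which component should perform the pre-download URL
    validity check. Preferring the SSVM, which does the actual
    download anyway, avoids failing the upload job when only the
    management server lacks internet access."""
    if ssvm_reachable:
        return "ssvm"
    if mgmt_has_internet:
        return "management-server"
    raise RuntimeError("no component can reach the volume URL")
```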




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10041) Support to use ikev1 for site-to-site VPN connections

2017-08-09 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-10041:
---

 Summary: Support to use ikev1 for site-to-site VPN connections
 Key: CLOUDSTACK-10041
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10041
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Virtual Router
Affects Versions: 4.10.0.0
Reporter: subhash yedugundla


Currently the VR initiates connections using IKEv2 by default with strongSwan. 
If the customer gateway does not support IKEv2, the connections fail with 
errors. To avoid this, this feature introduces a new parameter, ikeversion, 
which determines the IKE version the virtual router uses to initiate the 
connection.
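For reference, strongSwan selects the IKE version per connection via the standard `keyexchange` option; a connection forced to IKEv1 would look roughly like this (connection name, peers and subnets are made-up placeholders):

```
conn s2s-example
    keyexchange=ikev1      # would be driven by the new ikeversion parameter
    left=%defaultroute
    leftsubnet=10.1.1.0/24
    right=203.0.113.10
    rightsubnet=192.168.0.0/24
    auto=start
```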



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10131) After copying the template charging for that template is stopped

2017-11-06 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-10131:
---

 Summary: After copying the template charging for that template is 
stopped 
 Key: CLOUDSTACK-10131
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10131
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Template, Usage
Affects Versions: 4.5.2
Reporter: subhash yedugundla


[Repro Steps]
1. Register a template in Zone 1.
2. Copy the template from Zone 1 to Zone 2.
3. Once copied, delete the template from Zone 1.
4. Check the charging of the template: charging of the template in Zone 2 has 
stopped.

Sample data related to the issue


mysql> select id,type,state,description,created from event where description 
like "%4186%" and type like "%template%";
+---------+-----------------+-----------+-----------------------------------------------------------------------------------------+---------------------+
| id      | type            | state     | description                                                                             | created             |
+---------+-----------------+-----------+-----------------------------------------------------------------------------------------+---------------------+
| 1964466 | TEMPLATE.CREATE | Completed | Successfully completed creating template. Id: 4186 name: PerfTestVM2                    | 2015-06-15 09:48:57 |
| 2411011 | TEMPLATE.COPY   | Scheduled | copying template: 4186 from zone: 3 to zone: 1                                          | 2015-07-22 03:57:32 |
| 2411012 | TEMPLATE.COPY   | Started   | copying template. copying template: 4186 from zone: 3 to zone: 1                        | 2015-07-22 03:57:32 |
| 2411458 | TEMPLATE.COPY   | Completed | Successfully completed copying template. copying template: 4186 from zone: 3 to zone: 1 | 2015-07-22 04:47:08 |
| 2412521 | TEMPLATE.DELETE | Scheduled | Deleting template 4186                                                                  | 2015-07-22 06:46:18 |
| 2412522 | TEMPLATE.DELETE | Started   | deleting template. Template Id: 4186                                                    | 2015-07-22 06:46:18 |
| 2412523 | TEMPLATE.DELETE | Completed | Successfully completed deleting template. Template Id: 4186                             | 2015-07-22 06:46:18 |
+---------+-----------------+-----------+-----------------------------------------------------------------------------------------+---------------------+

You can see that the template's Zone 3 record is not removed:

mysql> select * from template_zone_ref where template_id=4186;
+------+---------+-------------+---------------------+---------------------+---------------------+
| id   | zone_id | template_id | created             | last_updated        | removed             |
+------+---------+-------------+---------------------+---------------------+---------------------+
| 3974 | 3       | 4186        | 2015-06-15 09:48:57 | 2015-06-15 09:48:57 | NULL                | <= not removed
| 4845 | 1       | 4186        | 2015-07-22 04:47:08 | 2015-07-22 04:47:08 | 2015-07-22 06:46:18 |
+------+---------+-------------+---------------------+---------------------+---------------------+

However, charging stopped not only for the template in Zone 1 but also for the 
one in Zone 3:

+---------+---------------------+---------------------+---------+-----------------------------------------------------------+---------------+--------------------+----------+-------------+
| id      | start_date          | end_date            | zone_id | description                                               | usage_display | raw_usage          | usage_id | size        |
+---------+---------------------+---------------------+---------+-----------------------------------------------------------+---------------+--------------------+----------+-------------+
| 5998486 | 2015-06-14 15:00:00 | 2015-06-15 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 4.808055 Hrs  | 4.808055400848389  | 4186     | 14995087360 |
(... intermediate rows elided ...)
| 7834096 | 2015-07-19 15:00:00 | 2015-07-20 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 24 Hrs        | 24                 | 4186     | 14995087360 |
| 7889426 | 2015-07-20 15:00:00 | 2015-07-21 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 24 Hrs        | 24                 | 4186     | 14995087360 |
| 7945799 | 2015-07-21 15:00:00 | 2015-07-22 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 15.771667 Hrs | 15.771666526794434 | 4186     | 14995087360 |
| 7945801 | 2015-07-21 15:00:00 | 2015-07-22 14:59:59 | 1       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 1.986111 Hrs  | 1.9861112833023071 | 4186     | 14995087360 |
+---------+---------------------+---------------------+---------+-----------------------------------------------------------+---------------+--------------------+----------+-------------+



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10133) Local storage overprovisioning for ext file system

2017-11-07 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-10133:
---

 Summary: Local storage overprovisioning for ext file system
 Key: CLOUDSTACK-10133
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10133
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.10.1.0
Reporter: subhash yedugundla


Currently CloudStack does not allow local storage to be overprovisioned for 
the ext file system.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10134) Performance improvement for applying port forwarding rules

2017-11-07 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-10134:
---

 Summary: Performance improvement for applying port forwarding rules
 Key: CLOUDSTACK-10134
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10134
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Virtual Router
Reporter: subhash yedugundla


Repro Steps
Steps to reproduce:
1. Allocate an IP address.
2. Add port forwarding rules to the IP address repeatedly.
3. Check how long each rule takes to apply.

The time for each rule goes up with every new rule that gets added.
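The report does not state the root cause; a common pattern behind this symptom is that every apply re-programs the complete rule set on the VR. A sketch under that assumption (illustrative Python, hypothetical names):

```python
def apply_port_forwarding_full(all_rules, program):
    """Behaviour consistent with the report: each apply re-programs the
    entire rule set, so adding the Nth rule costs O(N) and adding N
    rules one at a time costs O(N^2) in total."""
    for rule in all_rules:
        program(rule)


def apply_port_forwarding_delta(new_rules, program):
    """Possible improvement: program only the rules that changed, so
    each apply stays O(1) regardless of how many rules already exist."""
    for rule in new_rules:
        program(rule)
```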



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10135) ACL rules order is not maintained for ACL_OUTBOUND in VPC VR

2017-11-08 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-10135:
---

 Summary: ACL rules order is not maintained for ACL_OUTBOUND in VPC 
VR
 Key: CLOUDSTACK-10135
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10135
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Virtual Router
Affects Versions: 4.10.0.0
Reporter: subhash yedugundla


Repro steps
1. Create a VPC with a super CIDR (172.16.0.0/16).
2. Create a custom ACL with at least 3 ACL_OUTBOUND rules with sequence 
numbers out of order, e.g. 15, 10, 20.
3. Create a tier with the above ACL.
4. Deploy an instance in the tier.
5. On the router, the ACL rules won't be in sequence-number order.
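The fix the report implies can be sketched as follows (illustrative Python with hypothetical names, not CloudStack's actual Java code): sort ACL items by sequence number before programming them on the VR.

```python
def order_acl_rules(rules):
    """Sort ACL items by ascending sequence number before programming
    them on the VPC VR, so the router evaluates them in the order the
    numbers define (10, 15, 20 -- not the creation order 15, 10, 20)."""
    return sorted(rules, key=lambda rule: rule["number"])
```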




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CLOUDSTACK-10135) ACL rules order is not maintained for ACL_OUTBOUND in VPC VR

2017-11-08 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-10135:

Labels:   (was: pr)

> ACL rules order is not maintained for ACL_OUTBOUND in VPC VR
> 
>
> Key: CLOUDSTACK-10135
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10135
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: subhash yedugundla
>
> Repro steps
> 1. Create a VPC with a super CIDR (172.16.0.0/16).
> 2. Create a custom ACL with at least 3 ACL_OUTBOUND rules with sequence 
> numbers out of order, e.g. 15, 10, 20.
> 3. Create a tier with the above ACL.
> 4. Deploy an instance in the tier.
> 5. On the router, the ACL rules won't be in sequence-number order.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CLOUDSTACK-10135) ACL rules order is not maintained for ACL_OUTBOUND in VPC VR

2017-11-08 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-10135:

Labels: pr  (was: )

> ACL rules order is not maintained for ACL_OUTBOUND in VPC VR
> 
>
> Key: CLOUDSTACK-10135
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10135
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.10.0.0
>Reporter: subhash yedugundla
>
> Repro steps
> 1. Create a VPC with a super CIDR (172.16.0.0/16).
> 2. Create a custom ACL with at least 3 ACL_OUTBOUND rules with sequence 
> numbers out of order, e.g. 15, 10, 20.
> 3. Create a tier with the above ACL.
> 4. Deploy an instance in the tier.
> 5. On the router, the ACL rules won't be in sequence-number order.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-8849) Usage job should stop usage generation in case of any exception

2015-09-14 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8849:
--

 Summary: Usage job should stop usage generation in case of any 
exception
 Key: CLOUDSTACK-8849
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8849
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Usage
Affects Versions: 4.5.2
Reporter: subhash yedugundla


The usage server should stop completely after hitting an exception. If it 
continues, it can produce wrong usage data.
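The fail-fast behaviour requested here can be sketched as follows (a minimal Python sketch with hypothetical names; the actual usage server is Java):

```python
def run_usage_job(parsers):
    """Run the per-resource usage parsers for one aggregation cycle.
    Fail fast: any exception aborts the whole cycle instead of letting
    later parsers persist records derived from incomplete data."""
    results = []
    for parse in parsers:
        try:
            results.append(parse())
        except Exception as e:
            # Stop completely, as the issue requests; continuing would
            # generate wrong usage data.
            raise RuntimeError(f"usage generation aborted: {e}") from e
    return results
```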



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8850) revertSnapshot command does not work

2015-09-14 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8850:
--

 Summary: revertSnapshot command does not work
 Key: CLOUDSTACK-8850
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8850
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: subhash yedugundla


The revertSnapshot command does not work, but it still appears in the 
documentation, so the documentation needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8850) revertSnapshot command does not work

2015-09-14 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-8850:
---
Affects Version/s: 4.5.2

> revertSnapshot command does not work
> 
>
> Key: CLOUDSTACK-8850
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8850
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
>Reporter: subhash yedugundla
>
> The revertSnapshot command does not work, but it still appears in the 
> documentation, so the documentation needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8849) Usage job should stop usage generation in case of any exception

2015-09-15 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14745169#comment-14745169
 ] 

subhash yedugundla commented on CLOUDSTACK-8849:


Fixed in the following pull request
https://github.com/apache/cloudstack/pull/827

> Usage job should stop usage generation in case of any exception
> ---
>
> Key: CLOUDSTACK-8849
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8849
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Affects Versions: 4.5.2
>Reporter: subhash yedugundla
>
> The usage server should stop completely after hitting an exception. If it 
> continues, it can produce wrong usage data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8908) After copying the template charging for that template is stopped

2015-09-24 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8908:
--

 Summary: After copying the template charging for that template is 
stopped 
 Key: CLOUDSTACK-8908
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8908
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Template, Usage
Affects Versions: 4.5.2
Reporter: subhash yedugundla


[Repro Steps]
1. Register a template in Zone 1.
2. Copy the template from Zone 1 to Zone 2.
3. Once copied, delete the template from Zone 1.
4. Check the charging of the template: charging of the template in Zone 2 has 
stopped.

Sample data related to the issue


mysql> select id,type,state,description,created from event where description 
like "%4186%" and type like "%template%";
+---------+-----------------+-----------+-----------------------------------------------------------------------------------------+---------------------+
| id      | type            | state     | description                                                                             | created             |
+---------+-----------------+-----------+-----------------------------------------------------------------------------------------+---------------------+
| 1964466 | TEMPLATE.CREATE | Completed | Successfully completed creating template. Id: 4186 name: PerfTestVM2                    | 2015-06-15 09:48:57 |
| 2411011 | TEMPLATE.COPY   | Scheduled | copying template: 4186 from zone: 3 to zone: 1                                          | 2015-07-22 03:57:32 |
| 2411012 | TEMPLATE.COPY   | Started   | copying template. copying template: 4186 from zone: 3 to zone: 1                        | 2015-07-22 03:57:32 |
| 2411458 | TEMPLATE.COPY   | Completed | Successfully completed copying template. copying template: 4186 from zone: 3 to zone: 1 | 2015-07-22 04:47:08 |
| 2412521 | TEMPLATE.DELETE | Scheduled | Deleting template 4186                                                                  | 2015-07-22 06:46:18 |
| 2412522 | TEMPLATE.DELETE | Started   | deleting template. Template Id: 4186                                                    | 2015-07-22 06:46:18 |
| 2412523 | TEMPLATE.DELETE | Completed | Successfully completed deleting template. Template Id: 4186                             | 2015-07-22 06:46:18 |
+---------+-----------------+-----------+-----------------------------------------------------------------------------------------+---------------------+

You can see that the template's Zone 3 record is not removed:

mysql> select * from template_zone_ref where template_id=4186;
+------+---------+-------------+---------------------+---------------------+---------------------+
| id   | zone_id | template_id | created             | last_updated        | removed             |
+------+---------+-------------+---------------------+---------------------+---------------------+
| 3974 | 3       | 4186        | 2015-06-15 09:48:57 | 2015-06-15 09:48:57 | NULL                | <= not removed
| 4845 | 1       | 4186        | 2015-07-22 04:47:08 | 2015-07-22 04:47:08 | 2015-07-22 06:46:18 |
+------+---------+-------------+---------------------+---------------------+---------------------+

However, charging stopped not only for the template in Zone 1 but also for the 
one in Zone 3:

+---------+---------------------+---------------------+---------+-----------------------------------------------------------+---------------+--------------------+----------+-------------+
| id      | start_date          | end_date            | zone_id | description                                               | usage_display | raw_usage          | usage_id | size        |
+---------+---------------------+---------------------+---------+-----------------------------------------------------------+---------------+--------------------+----------+-------------+
| 5998486 | 2015-06-14 15:00:00 | 2015-06-15 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 4.808055 Hrs  | 4.808055400848389  | 4186     | 14995087360 |
(... intermediate rows elided ...)
| 7834096 | 2015-07-19 15:00:00 | 2015-07-20 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 24 Hrs        | 24                 | 4186     | 14995087360 |
| 7889426 | 2015-07-20 15:00:00 | 2015-07-21 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 24 Hrs        | 24                 | 4186     | 14995087360 |
| 7945799 | 2015-07-21 15:00:00 | 2015-07-22 14:59:59 | 3       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 15.771667 Hrs | 15.771666526794434 | 4186     | 14995087360 |
| 7945801 | 2015-07-21 15:00:00 | 2015-07-22 14:59:59 | 1       | Template Id:4186 Size:14995087360 VirtualSize:16106127360 | 1.986111 Hrs  | 1.9861112833023071 | 4186     | 14995087360 |
+---------+---------------------+---------------------+---------+-----------------------------------------------------------+---------------+--------------------+----------+-------------+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8921) snapshot_store_ref table should store actual size of back snapshot in secondary storage

2015-09-29 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8921:
--

 Summary: snapshot_store_ref table should store actual size of back 
snapshot in secondary storage
 Key: CLOUDSTACK-8921
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8921
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Snapshot, Usage
Affects Versions: 4.5.2
 Environment: Hypervisor: Xen Server
Hypervisor Version: 6.2 + SP1 
Reporter: subhash yedugundla


CCP stores the physical-utilisation of the snapshot in the physical_size column 
of the table "snapshot_store_ref". That was fixed as part of 
https://issues.apache.org/jira/browse/CLOUDSTACK-7842.

1. From DB:
=

mysql> select * from snapshot_store_ref where id=586 \G;
*** 1. row ***
id: 586
store_id: 1
snapshot_id: 305
created: 2015-06-15 06:06:22
last_updated: NULL
job_id: NULL
store_role: Image
size: 5368709120
physical_size: 13312 ---> This is the size we are storing in the DB
parent_snapshot_id: 0
install_path: snapshots/2/233/e8d888a5-41c0-4a17-b1b7-f9a6fd50c0d3
state: Ready
update_count: 2
ref_cnt: 0
updated: 2015-06-15 06:06:36
volume_id: 233
1 row in set (0.00 sec)

2. From File System:
=
[root@kirangoleta2 233]# ls -lh
total 2.1M
-rw-r--r--. 1 root root 2.1M Jun 15 11:36 
e8d888a5-41c0-4a17-b1b7-f9a6fd50c0d3.vhd ---> Physical file size

3. From Xen Server:
=
xe vdi-list name-label=newtest_ROOT-203_20150615060620 params=all

uuid ( RO) : 74a4185e-74fe-4cec-875b-060572cc675d
name-label ( RW): newtest_ROOT-203_20150615060620
name-description ( RW):
is-a-snapshot ( RO): true
snapshot-of ( RO): 02789581-7bfc-45bd-8e59-c35515d2b605
snapshots ( RO):
snapshot-time ( RO): 20150615T06:09:29Z
allowed-operations (SRO): forget; generate_config; update; resize; destroy; 
clone; copy; snapshot
current-operations (SRO):
sr-uuid ( RO): 73ff08fb-b341-c71c-e2c7-be6c8d395126
sr-name-label ( RO): 347c06fb-f7dd-3613-aa82-db5b82181d77
vbd-uuids (SRO):
crashdump-uuids (SRO):
virtual-size ( RO): 5368709120
physical-utilisation ( RO): 14848 ---> This is the size xen server reports as 
consumed
location ( RO): 74a4185e-74fe-4cec-875b-060572cc675d
type ( RO): User
sharable ( RO): false
read-only ( RO): false
storage-lock ( RO): false
managed ( RO): true
parent ( RO): 
missing ( RO): false
other-config (MRW): content_id: ad6423f7-e2c3-7ea4-be8d-573ad155511e
xenstore-data (MRO):
sm-config (MRO): vhd-parent: 73a33517-e9c5-48c6-89e7-70e37905a74a
on-boot ( RW): persist
allow-caching ( RW): false
metadata-latest ( RO): false
metadata-of-pool ( RO): 
tags (SRW):

We are storing the physical-utilisation reported by XenServer. Interestingly, 
ACS stores the physical file size in the case of VMware environments.

EXPECTED BEHAVIOR
==
The physical size of the snapshot file should be stored in the physical_size 
column of the table "snapshot_store_ref".

ACTUAL BEHAVIOR
==
In the case of XenServer, CCP stores the physical-utilisation of the snapshot 
in the physical_size column of the table "snapshot_store_ref".
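The expected behaviour amounts to recording the on-disk size of the backed-up VHD file rather than the hypervisor-reported physical-utilisation. A hypothetical helper illustrating the expected value (not CloudStack code):

```python
import os

def expected_physical_size(vhd_path):
    """Expected: physical_size should hold the on-disk size of the
    backed-up snapshot file on secondary storage (e.g. the 2.1M VHD
    above), not the VDI physical-utilisation XenServer reports for
    the source snapshot (14848 bytes in the xe output above)."""
    return os.path.getsize(vhd_path)
```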





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-8921) snapshot_store_ref table should store actual size of back snapshot in secondary storage

2015-09-29 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-8921:
---
Description: 
CCP stores the physical-utilisation of the snapshot in the physical_size column 
of the table "snapshot_store_ref". That was fixed as part of 
https://issues.apache.org/jira/browse/CLOUDSTACK-7842.

1. From DB:
=
mysql> select * from snapshot_store_ref where id=586 \G;
*** 1. row ***
id: 586
store_id: 1
snapshot_id: 305
created: 2015-06-15 06:06:22
last_updated: NULL
job_id: NULL
store_role: Image
size: 5368709120
physical_size: 13312 ---> This is the size we are storing in the DB
parent_snapshot_id: 0
install_path: snapshots/2/233/e8d888a5-41c0-4a17-b1b7-f9a6fd50c0d3
state: Ready
update_count: 2
ref_cnt: 0
updated: 2015-06-15 06:06:36
volume_id: 233
1 row in set (0.00 sec)

2. From File System:
=
[root@kirangoleta2 233]# ls -lh
total 2.1M
-rw-r--r--. 1 root root 2.1M Jun 15 11:36 
e8d888a5-41c0-4a17-b1b7-f9a6fd50c0d3.vhd ---> Physical file size

3. From Xen Server:
=
xe vdi-list name-label=newtest_ROOT-203_20150615060620 params=all

uuid ( RO) : 74a4185e-74fe-4cec-875b-060572cc675d
name-label ( RW): newtest_ROOT-203_20150615060620
name-description ( RW):
is-a-snapshot ( RO): true
snapshot-of ( RO): 02789581-7bfc-45bd-8e59-c35515d2b605
snapshots ( RO):
snapshot-time ( RO): 20150615T06:09:29Z
allowed-operations (SRO): forget; generate_config; update; resize; destroy; 
clone; copy; snapshot
current-operations (SRO):
sr-uuid ( RO): 73ff08fb-b341-c71c-e2c7-be6c8d395126
sr-name-label ( RO): 347c06fb-f7dd-3613-aa82-db5b82181d77
vbd-uuids (SRO):
crashdump-uuids (SRO):
virtual-size ( RO): 5368709120
physical-utilisation ( RO): 14848 ---> This is the size xen server reports as 
consumed
location ( RO): 74a4185e-74fe-4cec-875b-060572cc675d
type ( RO): User
sharable ( RO): false
read-only ( RO): false
storage-lock ( RO): false
managed ( RO): true
parent ( RO): 
missing ( RO): false
other-config (MRW): content_id: ad6423f7-e2c3-7ea4-be8d-573ad155511e
xenstore-data (MRO):
sm-config (MRO): vhd-parent: 73a33517-e9c5-48c6-89e7-70e37905a74a
on-boot ( RW): persist
allow-caching ( RW): false
metadata-latest ( RO): false
metadata-of-pool ( RO): 
tags (SRW):

I see that we are storing the physical-utilisation reported by XenServer.
Interestingly, I see that ACS stores the physical file size in the case of a 
VMware environment.

EXPECTED BEHAVIOR
==
It is expected to see the physical size of the snapshot file in the 
physical_size column of the table "snapshot_store_ref".

ACTUAL BEHAVIOR
==
In the case of XenServer, ACS is storing the physical-utilisation of the 
snapshot in the physical_size column of the table "snapshot_store_ref".
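The expected behavior described above amounts to reading the snapshot file's on-disk size rather than asking the hypervisor for its physical-utilisation. A minimal sketch of that idea (the helper name and paths are illustrative, not ACS code; assumes GNU coreutils `stat`):

```shell
# Illustrative helper: the value that should land in
# snapshot_store_ref.physical_size is the on-disk byte size of the snapshot
# file, not the hypervisor-reported physical-utilisation.
snapshot_file_size() {
    stat -c '%s' "$1"   # GNU coreutils; prints the file size in bytes
}

# Demonstration against a throwaway file standing in for the .vhd:
tmpvhd=$(mktemp)
head -c 13312 /dev/zero > "$tmpvhd"
snapshot_file_size "$tmpvhd"   # prints 13312
rm -f "$tmpvhd"
```

On XenServer the two numbers diverge (13312 reported vs the ~2.1M file in the `ls -lh` output above), which is exactly the discrepancy this issue describes.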



[jira] [Created] (CLOUDSTACK-8922) Unable to delete IP tag

2015-09-29 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8922:
--

 Summary: Unable to delete IP tag
 Key: CLOUDSTACK-8922
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8922
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.5.2
Reporter: subhash yedugundla


1. Acquire new IP address 
2. Create tags for the IP 
3. Delete the tag from Step#2 

An error occurs at Step#3: the delete tag operation fails with 
"Acct[f4d0c381-e0b7-4aed-aa90-3336d42f7540-7100017] does not have 
permission to operate within domain id\u003d632"

TROUBLESHOOTING
==
Acquire new IP address
*
{noformat}
2014-11-19 15:08:15,870 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(catalina-exec-20:ctx-faed32b5 ctx-712308cb ctx-401bfcaf) submit async 
job-72419, details: AsyncJobVO {id:72419, userId: 17, accountId: 15, 
instanceType: IpAddress, instanceId: 672, cmd: 
org.apache.cloudstack.api.command.user.address.AssociateIPAddrCmd, cmdInfo: 
{"id":"672","response":"json","cmdEventType":"NET.IPASSIGN","ctxUserId":"17","zoneid":"a117e75f-d02e-4074-806d-889c61261394","httpmethod":"GET","ctxAccountId":"15","networkid":"0ca7c69e-c281-407b-a152-2559c10a81a6","ctxStartEventId":"166725","signature":"3NZRU6dIBxg1HMDiP/MkY2agRn4\u003d","apikey":"tuwHXs1AfpQheJeJ9BcLdoVxIBCItASnguAbus76AMUcIXuyFgHOJiIB44fO57p_bBaqyfppmxrC-rQSb-nxXg"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 345048681027, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: null}
2014-11-19 15:08:15,870 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-68:ctx-fca9add6 job-72419) Executing AsyncJobVO {id:72419, 
userId: 17, accountId: 15, instanceType: IpAddress, instanceId: 672, cmd: 
org.apache.cloudstack.api.command.user.address.AssociateIPAddrCmd, cmdInfo: 
{"id":"672","response":"json","cmdEventType":"NET.IPASSIGN","ctxUserId":"17","zoneid":"a117e75f-d02e-4074-806d-889c61261394","httpmethod":"GET","ctxAccountId":"15","networkid":"0ca7c69e-c281-407b-a152-2559c10a81a6","ctxStartEventId":"166725","signature":"3NZRU6dIBxg1HMDiP/MkY2agRn4\u003d","apikey":"tuwHXs1AfpQheJeJ9BcLdoVxIBCItASnguAbus76AMUcIXuyFgHOJiIB44fO57p_bBaqyfppmxrC-rQSb-nxXg"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 345048681027, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: null}
2014-11-19 15:08:15,890 DEBUG [c.c.u.AccountManagerImpl] 
(API-Job-Executor-68:ctx-fca9add6 job-72419 ctx-96bbdee5) Access to 
Ntwk[216|Guest|8] granted to 
Acct[f4d0c381-e0b7-4aed-aa90-3336d42f7540-7100017] by DomainChecker
2014-11-19 15:08:15,911 DEBUG [c.c.n.IpAddressManagerImpl] 
(API-Job-Executor-68:ctx-fca9add6 job-72419 ctx-96bbdee5) Successfully 
associated ip address 210.140.170.160 to network Ntwk[216|Guest|8]
2014-11-19 15:08:15,922 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(API-Job-Executor-68:ctx-fca9add6 job-72419 ctx-96bbdee5) Complete async 
job-72419, jobStatus: SUCCEEDED, resultCode: 0, result: 
org.apache.cloudstack.api.response.IPAddressResponse/ipaddress/{"id":"3d7c3a2a-1f2d-46dc-9905-4a7ce620e7e9","ipaddress":"210.140.170.160","allocated":"2014-11-19T15:08:15+0900","zoneid":"a117e75f-d02e-4074-806d-889c61261394","zonename":"tesla","issourcenat":false,"account":"7100017","domainid":"cc27e32c-6acd-4fdf-a1e5-734cef8a7fe0","domain":"7100017","forvirtualnetwork":true,"isstaticnat":false,"issystem":false,"associatednetworkid":"0ca7c69e-c281-407b-a152-2559c10a81a6","associatednetworkname":"network1","networkid":"79132c74-fe77-4bd5-9915-ce7c577fb95f","state":"Allocating","physicalnetworkid":"4a00ce42-6a30-4494-afdd-3531d883237b","tags":[],"isportable":false}
2014-11-19 15:08:15,932 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-68:ctx-fca9add6 job-72419) Remove job-72419 from job 
monitoring

*** 1. row ***
               id: 72419
          job_cmd: org.apache.cloudstack.api.command.user.address.AssociateIPAddrCmd
       job_status: 1
    job_init_msid: 345048681027
job_complete_msid: 345048681027
          created: 2014-11-19 06:08:15
     last_updated: 2014-11-19 06:08:15
1 row in set (0.00 sec)
{noformat}


Create Tag

[jira] [Updated] (CLOUDSTACK-8939) VM Snapshot size with memory is not correctly calculated in cloud.usage_event (XenServer)

2015-10-06 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-8939:
---
Description: 
1. Created a VM snapshot with memory on a VM with 512 MB RAM.
2. List all the VBDs for the VM:
xe vbd-list vm-name-label=i-2-43-VM empty=false params=vdi-uuid
vdi-uuid ( RO): fbe638dd-02c5-42f5-96b2-7b3d73e68658
Verify the size of the snapshot disk and its parent (I understand from the code 
we only check one parent in the chain)
# xe vdi-list params=physical-utilisation,sm-config,is-a-snapshot 
uuid=fbe638dd-02c5-42f5-96b2-7b3d73e68658
is-a-snapshot ( RO)   : false
physical-utilisation ( RO): 38124032 <-
   sm-config (MRO): 
host_OpaqueRef:52c1ec01-cef6-d4fd-7d81-68795a853ee0: RW; vhd-parent: 
993e3859-8be9-414c-819b-61095ab2eff1
parent:
# xe vdi-list params=physical-utilisation,sm-config,is-a-snapshot 
uuid=993e3859-8be9-414c-819b-61095ab2eff1
is-a-snapshot ( RO)   : false
physical-utilisation ( RO): 119816704 <-
   sm-config (MRO): vhd-blocks: 
eJxrYAAB0QcMWAELduFBDwSgNAsaHxcQARGMNHMOQfsJARFGRgkGRkWgOxmxGiWCwz5c9qLLU+o+YoGooIBUp6CgIJ2sIxnQKxzoZQ8M0Dof09s/1AMALcYD3A==;
 vhd-parent: 4b7ee66c-de53-47bb-b3d5-306d40a0ea08
At this stage, not counting the "suspend VDI", the size looks as follows: 
38124032 + 119816704 = 157940736 = 150.6 MB
Now, let's calculate the "suspend VDI" size and add it to the above value:
Note my test VM has 512 MB of RAM
# xe vdi-list name-label=Suspend\ image params=all

uuid ( RO): e2ae7a47-c410-479b-9e2b-5f05c01f1b4f
  name-label ( RW): Suspend image
name-description ( RW): Suspend image
   is-a-snapshot ( RO): true
physical-utilisation ( RO): 6144
<--- 
   sm-config (MRO): vhd-parent: c286583b-dd47-4a21-a5fe-ba2f671f1163
Looking at the parent
# xe vdi-list uuid=c286583b-dd47-4a21-a5fe-ba2f671f1163 params=all

uuid ( RO): c286583b-dd47-4a21-a5fe-ba2f671f1163
  name-label ( RW): base copy
   is-a-snapshot ( RO): false
virtual-size ( RO): 782237696
physical-utilisation ( RO): 252154368   
<--- 
   sm-config (MRO): vhd-blocks: eJz7/x8ZNDA0MEDAASiNyucAAHMeEPs=
 
So the "suspend VDI" + its parent has a "physical utilisation" of 6144 + 
252154368 = 252160512 = 240.5 MB
Now, if we add it to the previously calculated disk sizes, we'll get:
xapi (phys util): 157940736 (non-memory snap disks)+ 252160512 (memory snap 
disks) = 410101248 = 391.1 MB
Let's verify the size in cloud.usage_event now:
select * from cloud.usage_event where resource_name like "i-2-43-VM%" \G
*** 1. row ***
           id: 528
         type: VMSNAPSHOT.CREATE
   account_id: 2
      created: 2015-04-14 11:48:56
      zone_id: 1
  resource_id: 43
resource_name: i-2-43-VM_VS_20150414114830
  offering_id: NULL
  template_id: 64
         size: 119863296 <-
resource_type: NULL
    processed: 0
 virtual_size: NULL
1 row in set (0.01 sec)
Overall, the snapshot size in cloud.usage_event != size calculated from xapi 
objects based on the code
xapi (phys util): 157940736 (non-memory snap disks)+ 252160512 (memory snap 
disks) = 410101248 = 391.1 MB
but:
usage_event: 119863296 = 114.3 MB
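The byte arithmetic in the report can be re-derived directly from the physical-utilisation values quoted in the xe output above (note that 252160512 bytes is roughly 240.5 MB), and the usage_event size clearly does not match the xapi-derived total:

```shell
# Re-derive the totals from the physical-utilisation values quoted above.
nonmem=$((38124032 + 119816704))   # non-memory snapshot disks
mem=$((6144 + 252154368))          # suspend VDI + its parent
total=$((nonmem + mem))
usage=119863296                    # size recorded in cloud.usage_event

echo "$nonmem $mem $total"         # 157940736 252160512 410101248
[ "$total" -ne "$usage" ] && echo "usage_event size does not match xapi total"
```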


[jira] [Updated] (CLOUDSTACK-8939) VM Snapshot size with memory is not correctly calculated in cloud.usage_event (XenServer)

2015-10-06 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-8939:
---
Summary: VM Snapshot size with memory is not correctly calculated in 
cloud.usage_event (XenServer)  (was: VM Snapshot size with memory is not 
correctly calculated in cloud.usage_event ()


[jira] [Created] (CLOUDSTACK-8939) VM Snapshot size with memory is not correctly calculated in cloud.usage_event (

2015-10-06 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8939:
--

 Summary: VM Snapshot size with memory is not correctly calculated 
in cloud.usage_event (
 Key: CLOUDSTACK-8939
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8939
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.2
Reporter: subhash yedugundla


1. Created a VM snapshot with memory on a VM with 512 MB RAM.
2. List all the VBDs for the VM:
xe vbd-list vm-name-label=i-2-43-VM empty=false params=vdi-uuid
vdi-uuid ( RO): fbe638dd-02c5-42f5-96b2-7b3d73e68658
Verify the size of the snapshot disk and its parent (I understand from the code 
we only check one parent in the chain)
# xe vdi-list params=physical-utilisation,sm-config,is-a-snapshot 
uuid=fbe638dd-02c5-42f5-96b2-7b3d73e68658
is-a-snapshot ( RO)   : false
physical-utilisation ( RO): 38124032 <-
   sm-config (MRO): 
host_OpaqueRef:52c1ec01-cef6-d4fd-7d81-68795a853ee0: RW; vhd-parent: 
993e3859-8be9-414c-819b-61095ab2eff1
parent:
# xe vdi-list params=physical-utilisation,sm-config,is-a-snapshot 
uuid=993e3859-8be9-414c-819b-61095ab2eff1
is-a-snapshot ( RO)   : false
physical-utilisation ( RO): 119816704 <-
   sm-config (MRO): vhd-blocks: 
eJxrYAAB0QcMWAELduFBDwSgNAsaHxcQARGMNHMOQfsJARFGRgkGRkWgOxmxGiWCwz5c9qLLU+o+YoGooIBUp6CgIJ2sIxnQKxzoZQ8M0Dof09s/1AMALcYD3A==;
 vhd-parent: 4b7ee66c-de53-47bb-b3d5-306d40a0ea08
At this stage, not counting the "suspend VDI", the size looks as follows: 
38124032 + 119816704 = 157940736 = 150.6 MB
Now, let's calculate the "suspend VDI" size and add it to the above value:
Note my test VM has 512 MB of RAM
# xe vdi-list name-label=Suspend\ image params=all

uuid ( RO): e2ae7a47-c410-479b-9e2b-5f05c01f1b4f
  name-label ( RW): Suspend image
name-description ( RW): Suspend image
   is-a-snapshot ( RO): true
physical-utilisation ( RO): 6144
<--- 
   sm-config (MRO): vhd-parent: c286583b-dd47-4a21-a5fe-ba2f671f1163
Looking at the parent
# xe vdi-list uuid=c286583b-dd47-4a21-a5fe-ba2f671f1163 params=all

uuid ( RO): c286583b-dd47-4a21-a5fe-ba2f671f1163
  name-label ( RW): base copy
   is-a-snapshot ( RO): false
virtual-size ( RO): 782237696
physical-utilisation ( RO): 252154368   
<--- 
   sm-config (MRO): vhd-blocks: eJz7/x8ZNDA0MEDAASiNyucAAHMeEPs=
 
So the "suspend VDI" + its parent has a "physical utilisation" of 6144 + 
252154368 = 252160512 = 240.5 MB
Now, if we add it to the previously calculated disk sizes, we'll get:
xapi (phys util): 157940736 (non-memory snap disks)+ 252160512 (memory snap 
disks) = 410101248 = 391.1 MB
Let's verify the size in cloud.usage_event now:
select * from cloud.usage_event where  resource_name like "i-2-43-VM%" \G

*** 1. row ***
           id: 528
         type: VMSNAPSHOT.CREATE
   account_id: 2
      created: 2015-04-14 11:48:56
      zone_id: 1
  resource_id: 43
resource_name: i-2-43-VM_VS_20150414114830
  offering_id: NULL
  template_id: 64
         size: 119863296 <-
resource_type: NULL
    processed: 0
 virtual_size: NULL
1 row in set (0.01 sec)
Overall, the snapshot size in cloud.usage_event != size calculated from xapi 
objects based on the code
xapi (phys util): 157940736 (non-memory snap disks)+ 252160512 (memory snap 
disks) = 410101248 = 391.1 MB
but:
usage_event: 119863296 = 114.3 MB



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8944) Template download possible from new secondary storages before the download is 100 % complete

2015-10-10 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-8944:
--

 Summary: Template download possible from new secondary storages 
before the download is 100 % complete
 Key: CLOUDSTACK-8944
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8944
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.2
 Environment: xenserver host with nfs storage
Reporter: subhash yedugundla


ISSUE
==
The secondary storage's parent is NULL in the database after the secondary 
storage is added from the CloudStack GUI, which in turn leads to an invalid 
download URL for a template.
 

TROUBLESHOOTING
===

The parameters provided when the secondary storage was created:
Name: 
Provider:NFS 
Zone:dev3-z1 
Server:192.168.125.12 
Path:/vol/dev03/test 



When we add secondary storage:

{noformat}
2015-06-11 07:27:40,686 TRACE [c.c.u.d.T.Statement] 
(catalina-exec-19:ctx-11906a2c ctx-550a6e46) (logid:0fb48736) Closing: 
com.mysql.jdbc.JDBC4PreparedStatement@7e703121: INSERT INTO image_store 
(image_store.id, image_store.name, image_store.uuid, image_store.protocol, 
image_store.url, image_store.image_provider_name, image_store.data_center_id, 
image_store.scope, image_store.created, image_store.role, image_store.parent, 
image_store.total_size, image_store.used_bytes) VALUES (0, _binary'sec3', 
_binary'471d5edc-424e-41fb-a21e-47e53670fe62', _binary'nfs', 
_binary'nfs://10.104.49.65/nfs/sec3', _binary'NFS', 1, 'ZONE', '2015-06-11 
01:57:40', 'Image', null, null, null) 


mysql> select * from image_store where id=3 \G; 
*** 1. row *** 
id: 3 
name: sec3 
image_provider_name: NFS 
protocol: nfs 
url: nfs://10.104.49.65/nfs/sec3 
data_center_id: 1 
scope: ZONE 
role: Image 
uuid: 471d5edc-424e-41fb-a21e-47e53670fe62 
parent: NULL 
created: 2015-06-11 01:57:40 
removed: NULL 
total_size: NULL 
used_bytes: NULL 
1 row in set (0.00 sec) 
{noformat}


Template download fails if the parent is NULL.
The URL published when the customer extracts the template gives a 403 
Forbidden error.

Example :
Template id:3343 
The URL is below. 
https://210-140-168-1.systemip.idcfcloud.com/userdata/8aa50513-e60e-481f-989d-5bbd119504df.ova
 

The template is stored on the new mount-point (je01v-secstr01-02 )

{noformat}
root@s-1-VM:/var/www/html/userdata# df -h 
Filesystem Size Used Avail Use% Mounted on 
rootfs 276M 144M 118M 55% / 
udev 10M 0 10M 0% /dev 
tmpfs 201M 224K 201M 1% /run 
/dev/disk/by-uuid/1458767f-a01a-4237-89e8-930f8c42fffe 276M 144M 118M 55% / 
tmpfs 5.0M 0 5.0M 0% /run/lock 
tmpfs 515M 0 515M 0% /run/shm 
/dev/sda1 45M 22M 21M 51% /boot 
/dev/sda6 98M 5.6M 88M 6% /home 
/dev/sda8 368M 11M 339M 3% /opt 
/dev/sda10 63M 5.3M 55M 9% /tmp 
/dev/sda7 610M 518M 61M 90% /usr 
/dev/sda9 415M 248M 146M 63% /var 
10.133.245.11:/je01v-secstr01-01 16T 11T 5.5T 66% 
/mnt/SecStorage/8c0f1709-5d1d-3f0e-b100-ccfb873cf3ff 
10.133.245.11:/je01v-secstr01-02 5.9T 4.0T 1.9T 69% 
/mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a **THIS ONE 

From the SSVM:

root@s-1-VM:/var/www/html/userdata# ls -lah | grep 3343 
lrwxrwxrwx 1 root root 83 May 20 06:11 8aa50513-e60e-481f-989d-5bbd119504df.ova 
-> 
/mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova
 

{noformat}

The symbolic link is 
"/mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova".
 
We assumed the problem is that the link contains "null" directory. 
The correct symbolic link should be 
"/mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"
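The broken link path can be understood as a straight string concatenation in which a NULL parent becomes the literal path segment "null". A hedged sketch of that failure mode (the function and path layout are illustrative, not the actual ACS code):

```shell
# Illustrative reconstruction of how the symlink target is assembled; when the
# image_store 'parent' value is missing/empty, the path gains a literal "null"
# segment, producing the faulty link observed on the SSVM.
build_link_target() {
    parent="$1"; tmpl_path="$2"
    echo "/mnt/SecStorage/${parent:-null}/${tmpl_path}"
}

build_link_target "" \
    "template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"
# faulty link: the second path segment is the literal string "null"

build_link_target "22836274-19c4-301a-80d8-690f16530e0a" \
    "template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"
# correct link: the segment is the store's mount-point UUID
```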





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8944) Template download possible from new secondary storages before the download is 100 % complete

2015-10-10 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14951994#comment-14951994
 ] 

subhash yedugundla commented on CLOUDSTACK-8944:


When a new secondary storage is added, the mount point is updated only after 
all public templates, volumes and snapshots are completely copied to the new 
storage. In the meantime, when somebody tries a template download, the first 
template store with the template in the Downloaded state gets selected. 
However, the template goes to the DOWNLOADED state as soon as the download to 
that store starts. So making sure to serve only a template whose download is 
100% complete would prevent this issue
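The proposed guard can be sketched as a selection step that skips store entries whose copy is not yet fully complete. Everything below is a hypothetical illustration of the selection logic, not the actual CloudStack code; the field names are invented:

```shell
# Pick the first template-store row whose download is truly 100% complete.
# stdin rows: "<store_id> <state> <download_pct>"
select_complete_store() {
    awk '$2 == "DOWNLOADED" && $3 == 100 { print $1; exit }'
}

# Store 3 reports DOWNLOADED but is only 42% copied (the bug scenario);
# store 1 is genuinely complete and should be the one selected.
printf '3 DOWNLOADED 42\n1 DOWNLOADED 100\n' | select_complete_store   # prints 1
```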

> Template download possible from new secondary storages before the download is 
> 100 % complete
> 
>
> Key: CLOUDSTACK-8944
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8944
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
> Environment: xenserver host with nfs storage
>Reporter: subhash yedugundla
>
> ISSUE
> ==
> Secondary Storage ( Parent is NULL in the database )
> after  secondary storage is added  from the CloudStack GUI which in turn 
> leads to a invalid download URL for a template
>  
> TROUBLESHOOTING
> ===
> The parameter provided when the Secondary storage created 
> Name: 
> Provider:NFS 
> Zone:dev3-z1 
> Server:192.168.125.12 
> Path:/vol/dev03/test 
>   when we add secondary storage
> {noformat}
> 2015-06-11 07:27:40,686 TRACE [c.c.u.d.T.Statement] 
> (catalina-exec-19:ctx-11906a2c ctx-550a6e46) (logid:0fb48736) Closing: 
> com.mysql.jdbc.JDBC4PreparedStatement@7e703121: INSERT INTO image_store 
> (image_store.id, image_store.name, image_store.uuid, image_store.protocol, 
> image_store.url, image_store.image_provider_name, image_store.data_center_id, 
> image_store.scope, image_store.created, image_store.role, image_store.parent, 
> image_store.total_size, image_store.used_bytes) VALUES (0, _binary'sec3', 
> _binary'471d5edc-424e-41fb-a21e-47e53670fe62', _binary'nfs', 
> _binary'nfs://10.104.49.65/nfs/sec3', _binary'NFS', 1, 'ZONE', '2015-06-11 
> 01:57:40', 'Image', null, null, null) 
> mysql> select * from image_store where id=3 \G; 
> *** 1. row *** 
> id: 3 
> name: sec3 
> image_provider_name: NFS 
> protocol: nfs 
> url: nfs://10.104.49.65/nfs/sec3 
> data_center_id: 1 
> scope: ZONE 
> role: Image 
> uuid: 471d5edc-424e-41fb-a21e-47e53670fe62 
> parent: NULL 
> created: 2015-06-11 01:57:40 
> removed: NULL 
> total_size: NULL 
> used_bytes: NULL 
> 1 row in set (0.00 sec) 
> {noformat}
> Template download fails if the parent is NULL.
> The URL published when the customer extracts the template gives a 403 
> Forbidden error. 
> Example :
> Template id:3343 
> The URL is below. 
> https://210-140-168-1.systemip.idcfcloud.com/userdata/8aa50513-e60e-481f-989d-5bbd119504df.ova
>  
> The template is stored on the new mount-point (je01v-secstr01-02 )
> {noformat}
> root@s-1-VM:/var/www/html/userdata# df -h 
> Filesystem Size Used Avail Use% Mounted on 
> rootfs 276M 144M 118M 55% / 
> udev 10M 0 10M 0% /dev 
> tmpfs 201M 224K 201M 1% /run 
> /dev/disk/by-uuid/1458767f-a01a-4237-89e8-930f8c42fffe 276M 144M 118M 55% / 
> tmpfs 5.0M 0 5.0M 0% /run/lock 
> tmpfs 515M 0 515M 0% /run/shm 
> /dev/sda1 45M 22M 21M 51% /boot 
> /dev/sda6 98M 5.6M 88M 6% /home 
> /dev/sda8 368M 11M 339M 3% /opt 
> /dev/sda10 63M 5.3M 55M 9% /tmp 
> /dev/sda7 610M 518M 61M 90% /usr 
> /dev/sda9 415M 248M 146M 63% /var 
> 10.133.245.11:/je01v-secstr01-01 16T 11T 5.5T 66% 
> /mnt/SecStorage/8c0f1709-5d1d-3f0e-b100-ccfb873cf3ff 
> 10.133.245.11:/je01v-secstr01-02 5.9T 4.0T 1.9T 69% 
> /mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a **THIS ONE 
> From the SSVM
> root@s-1-VM:/var/www/html/userdata# ls -lah | grep 3343 
> lrwxrwxrwx 1 root root 83 May 20 06:11 
> 8aa50513-e60e-481f-989d-5bbd119504df.ova -> 
> /mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova
>  
> {noformat}
> The symbolic link is 
> "/mnt/SecStorage/null/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova".
>  
> We assume the problem is that the link contains a "null" directory. 
> The correct symbolic link should be 
> "/mnt/SecStorage/22836274-19c4-301a-80d8-690f16530e0a/template/tmpl/19/3343/d93d6fcf-bb4e-3287-8346-a7781c39ecdb.ova"
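The "null" path segment is consistent with the image store's parent being NULL and then concatenated into the symlink path (Java string concatenation renders a null reference as the literal "null"). A minimal Python sketch of the intended guard, with hypothetical names, not the actual CloudStack code:

```python
def template_symlink_target(parent, template_path):
    """Build the symlink target for an extracted template.

    `parent` stands for the secondary-storage mount point recorded for
    the image store. A missing (None) parent must be rejected up front;
    otherwise naive concatenation yields a broken
    '/mnt/SecStorage/null/...' path like the one seen in the SSVM.
    """
    if parent is None:
        raise ValueError("image store has no parent mount point; "
                         "refusing to build a 'null' symlink")
    return f"/mnt/SecStorage/{parent}/{template_path}"
```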



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CLOUDSTACK-8850) revertSnapshot command does not work

2015-10-13 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla resolved CLOUDSTACK-8850.

Resolution: Fixed

Fixed in commit ids 7d83ca9e0aea4744474f62a3a3ccee4b5fa8090d

f2d4820773cf62b430889b39d67336cbd61ec378


> revertSnapshot command does not work
> 
>
> Key: CLOUDSTACK-8850
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8850
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
>Reporter: subhash yedugundla
>
> The revertSnapshot command does not work, but it still appears in the 
> documentation, so the documentation needs to be updated.





[jira] [Reopened] (CLOUDSTACK-8850) revertSnapshot command does not work

2015-10-13 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla reopened CLOUDSTACK-8850:


> revertSnapshot command does not work
> 
>
> Key: CLOUDSTACK-8850
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8850
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
>Reporter: subhash yedugundla
>
> The revertSnapshot command does not work, but it still appears in the 
> documentation, so the documentation needs to be updated.





[jira] [Issue Comment Deleted] (CLOUDSTACK-8850) revertSnapshot command does not work

2015-10-13 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-8850:
---
Comment: was deleted

(was: Fixed in commit ids 7d83ca9e0aea4744474f62a3a3ccee4b5fa8090d

f2d4820773cf62b430889b39d67336cbd61ec378
)

> revertSnapshot command does not work
> 
>
> Key: CLOUDSTACK-8850
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8850
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2
>Reporter: subhash yedugundla
>
> The revertSnapshot command does not work, but it still appears in the 
> documentation, so the documentation needs to be updated.





[jira] [Created] (CLOUDSTACK-9553) Usage event is not getting record for snapshots in a specific scenario

2016-10-20 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9553:
--

 Summary: Usage event is not getting record for snapshots in a 
specific scenario
 Key: CLOUDSTACK-9553
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9553
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Usage
Affects Versions: 4.9.0
 Environment: vmware 4.5.1
Reporter: subhash yedugundla
 Fix For: Future


1. Create a scheduled snapshot of the volume
2. Delete the snapshot schedule before the run of the usage job for the day. 
3. The usage job completes successfully, but there is an error message "post 
process snapshot failed".
4. The snapshot.create event is captured in the event table, but not in the 
usage event table





[jira] [Commented] (CLOUDSTACK-9553) Usage event is not getting record for snapshots in a specific scenario

2016-10-20 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15591480#comment-15591480
 ] 

subhash yedugundla commented on CLOUDSTACK-9553:


An exception is raised when the corresponding schedule is not available, which 
in turn results in skipping the code that generates the usage event.
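The fix can be sketched as follows; the function and field names are illustrative, not the actual CloudStack API. The point is that a deleted schedule must not abort usage-event generation:

```python
def post_process_snapshot(snapshot, schedules, usage_events):
    """Sketch of the corrected flow: tolerate a missing snapshot
    schedule so the usage event is still emitted (illustrative names)."""
    schedule = schedules.get(snapshot["schedule_id"])  # None if deleted
    if schedule is not None:
        # Normal schedule bookkeeping only happens when it still exists.
        schedule["async_job_id"] = None
    # Emit the usage event unconditionally, whether or not the schedule
    # was deleted before the usage job ran.
    usage_events.append(("SNAPSHOT.CREATE", snapshot["id"]))
```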


> Usage event is not getting record for snapshots in a specific scenario
> --
>
> Key: CLOUDSTACK-9553
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9553
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Affects Versions: 4.9.0
> Environment: vmware 4.5.1
>Reporter: subhash yedugundla
> Fix For: Future
>
>
> 1. Create a scheduled snapshot of the volume
> 2. Delete the snapshot schedule before the run of the usage job for the day. 
> 3. The usage job completes successfully, but there is an error message "post 
> process snapshot failed".
> 4. The snapshot.create event is captured in the event table, but not in the 
> usage event table





[jira] [Updated] (CLOUDSTACK-9553) Usage event is not getting recorded for snapshots in a specific scenario

2016-10-20 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-9553:
---
Summary: Usage event is not getting recorded for snapshots in a specific 
scenario  (was: Usage event is not getting record for snapshots in a specific 
scenario)

> Usage event is not getting recorded for snapshots in a specific scenario
> 
>
> Key: CLOUDSTACK-9553
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9553
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Affects Versions: 4.9.0
> Environment: vmware 4.5.1
>Reporter: subhash yedugundla
> Fix For: Future
>
>
> 1. Create a scheduled snapshot of the volume
> 2. Delete the snapshot schedule before the run of the usage job for the day. 
> 3. The usage job completes successfully, but there is an error message "post 
> process snapshot failed".
> 4. The snapshot.create event is captured in the event table, but not in the 
> usage event table





[jira] [Commented] (CLOUDSTACK-9553) Usage event is not getting recorded for snapshots in a specific scenario

2016-10-20 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15591501#comment-15591501
 ] 

subhash yedugundla commented on CLOUDSTACK-9553:


Fixed in pull request https://github.com/apache/cloudstack/pull/1714

> Usage event is not getting recorded for snapshots in a specific scenario
> 
>
> Key: CLOUDSTACK-9553
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9553
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Affects Versions: 4.9.0
> Environment: vmware 4.5.1
>Reporter: subhash yedugundla
> Fix For: Future
>
>
> 1. Create a scheduled snapshot of the volume
> 2. Delete the snapshot schedule before the run of the usage job for the day. 
> 3. The usage job completes successfully, but there is an error message "post 
> process snapshot failed".
> 4. The snapshot.create event is captured in the event table, but not in the 
> usage event table





[jira] [Created] (CLOUDSTACK-9554) Juniper Contrail plug-in is publishing events to wrong message bus

2016-10-20 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9554:
--

 Summary: Juniper Contrail plug-in is publishing events to wrong 
message bus
 Key: CLOUDSTACK-9554
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9554
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: eventbus
Affects Versions: 4.9.0
 Environment: Citrix Xenserver with juniper contrail. 
Reporter: subhash yedugundla
 Fix For: 4.10.0.0


The Juniper Contrail plugin is publishing events to the internal message bus 
instead of the event bus, which can lead to a deadlock in the following 
scenario:

1. Create a VM in cloudstack with xenserver
2. Create firewall rules on contrail for the same
3. Delete the vm from xenserver directly. 
4. Next power state sync cycle would create a deadlock
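The distinction can be sketched as follows, with illustrative names (not the CloudStack plugin API): publishing to the external event bus is fire-and-forget, while the internal in-process bus runs subscribers synchronously on the caller's thread, which is what makes a deadlock possible during power-state sync:

```python
class EventPublisher:
    """Illustrative sketch of the two buses (not the CloudStack API)."""

    def __init__(self):
        self.external_queue = []        # drained out-of-process (e.g. AMQP)
        self.internal_subscribers = []  # invoked synchronously, in-process

    def publish_internal(self, event):
        # Risky for plugin events: subscribers run on the caller's
        # thread, while the caller (e.g. the power-state sync cycle)
        # may still hold management-server locks.
        for subscriber in self.internal_subscribers:
            subscriber(event)

    def publish_external(self, event):
        # Safe: just enqueue; delivery happens on another thread or
        # process, so subscribers can never re-enter the caller's locks.
        self.external_queue.append(event)
```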





[jira] [Resolved] (CLOUDSTACK-9554) Juniper Contrail plug-in is publishing events to wrong message bus

2016-10-20 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla resolved CLOUDSTACK-9554.

Resolution: Fixed

> Juniper Contrail plug-in is publishing events to wrong message bus
> --
>
> Key: CLOUDSTACK-9554
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9554
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: eventbus
>Affects Versions: 4.9.0
> Environment: Citrix Xenserver with juniper contrail. 
>Reporter: subhash yedugundla
> Fix For: 4.10.0.0
>
>
> The Juniper Contrail plugin is publishing events to the internal message bus 
> instead of the event bus, which can lead to a deadlock in the following 
> scenario:
> 1. Create a VM in cloudstack with xenserver
> 2. Create firewall rules on contrail for the same
> 3. Delete the vm from xenserver directly. 
> 4. Next power state sync cycle would create a deadlock





[jira] [Commented] (CLOUDSTACK-9554) Juniper Contrail plug-in is publishing events to wrong message bus

2016-10-20 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15591611#comment-15591611
 ] 

subhash yedugundla commented on CLOUDSTACK-9554:


The following pull request is raised for this issue

https://github.com/apache/cloudstack/pull/1715

> Juniper Contrail plug-in is publishing events to wrong message bus
> --
>
> Key: CLOUDSTACK-9554
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9554
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: eventbus
>Affects Versions: 4.9.0
> Environment: Citrix Xenserver with juniper contrail. 
>Reporter: subhash yedugundla
> Fix For: 4.10.0.0
>
>
> The Juniper Contrail plugin is publishing events to the internal message bus 
> instead of the event bus, which can lead to a deadlock in the following 
> scenario:
> 1. Create a VM in cloudstack with xenserver
> 2. Create firewall rules on contrail for the same
> 3. Delete the vm from xenserver directly. 
> 4. Next power state sync cycle would create a deadlock





[jira] [Reopened] (CLOUDSTACK-9554) Juniper Contrail plug-in is publishing events to wrong message bus

2016-10-20 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla reopened CLOUDSTACK-9554:


> Juniper Contrail plug-in is publishing events to wrong message bus
> --
>
> Key: CLOUDSTACK-9554
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9554
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: eventbus
>Affects Versions: 4.9.0
> Environment: Citrix Xenserver with juniper contrail. 
>Reporter: subhash yedugundla
> Fix For: 4.10.0.0
>
>
> The Juniper Contrail plugin is publishing events to the internal message bus 
> instead of the event bus, which can lead to a deadlock in the following 
> scenario:
> 1. Create a VM in cloudstack with xenserver
> 2. Create firewall rules on contrail for the same
> 3. Delete the vm from xenserver directly. 
> 4. Next power state sync cycle would create a deadlock





[jira] [Created] (CLOUDSTACK-9555) Charging of template stops in a certain use case

2016-10-20 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9555:
--

 Summary: Charging of template stops in a certain use case
 Key: CLOUDSTACK-9555
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9555
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Usage
Affects Versions: 4.9.0
 Environment: All Hypervisors
Reporter: subhash yedugundla
 Fix For: 4.10.0.0


Charging of template stops in the following use case

Step1:Register a Template(Name:A) to Zone1
Step2:Copy the template(Name:A) to Zone 2
Step3:Delete the template(Name:A) of only Zone2
Step4:Copy the template(Name:A) to Zone 2
Step5:Check the charging of the template
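The later summary rename points at the cause: after Step 4 the template_zone_ref row is still marked Removed. A plausible sketch of the intended re-copy behavior, modeling template_zone_ref as a dict (illustrative, not the actual schema handling):

```python
def register_template_in_zone(template_zone_ref, template_id, zone_id, now):
    """Sketch of the expected copy-to-zone behavior: if a
    (template, zone) row already exists but is marked removed, the
    re-copy must clear the removed marker so charging resumes."""
    key = (template_id, zone_id)
    row = template_zone_ref.get(key)
    if row is None:
        template_zone_ref[key] = {"created": now, "removed": None}
    else:
        row["removed"] = None      # revive instead of leaving it 'Removed'
        row["last_updated"] = now
```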





[jira] [Updated] (CLOUDSTACK-9555) when a template is deleted and then copied over again , it is still marked as Removed in template_zone_ref table

2016-10-20 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-9555:
---
Summary: when a template is deleted and then copied over again , it is 
still marked as Removed in template_zone_ref table  (was: Charging of template 
stops in a certain use case)

> when a template is deleted and then copied over again , it is still marked as 
> Removed in template_zone_ref table
> 
>
> Key: CLOUDSTACK-9555
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9555
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Usage
>Affects Versions: 4.9.0
> Environment: All Hypervisors
>Reporter: subhash yedugundla
> Fix For: 4.10.0.0
>
>
> Charging of template stops in the following use case
> Step1:Register a Template(Name:A) to Zone1
> Step2:Copy the template(Name:A) to Zone 2
> Step3:Delete the template(Name:A) of only Zone2
> Step4:Copy the template(Name:A) to Zone 2
> Step5:Check the charging of the template





[jira] [Created] (CLOUDSTACK-9557) Deploy from VMsnapshot fails with exception if source template is removed or made private

2016-10-21 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9557:
--

 Summary: Deploy from VMsnapshot fails with exception if source 
template is removed or made private
 Key: CLOUDSTACK-9557
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9557
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Template
Affects Versions: 4.8.0
 Environment: Any Hypervisor
Reporter: subhash yedugundla
 Fix For: 4.8.1


Steps to reproduce the issue
i) Upload a template as admin user and make sure "public" is selected when 
uploading it.
ii) Now login as a user to CloudPlatform and deploy a VM with the template 
created in step i).
iii) Create a VM snapshot as the user for the VM in step ii). Once created 
deploy a VM from the snapshot ( this will work as expected)
iv) Now login as admin again, edit the template created in step i) and uncheck 
"public". This makes the template private (or else delete the template from 
the UI).
v) Login as same user as in step ii) and try to create a VM from the same 
snapshot ( created in step iii)). This will fail now.






[jira] [Commented] (CLOUDSTACK-9557) Deploy from VMsnapshot fails with exception if source template is removed or made private

2016-10-21 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15594577#comment-15594577
 ] 

subhash yedugundla commented on CLOUDSTACK-9557:


Deploying a VM from a snapshot and deploying a VM from a template take the 
same code path, and that path restricts the deployment because deployment from 
private templates should not be allowed. So the check is changed when a 
vmsnapshot is part of the parameters for the deployVM command.
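The adjusted check can be sketched as follows; the names and fields are illustrative, not the actual code path. When the deployment originates from a VM snapshot, the disk chain comes from the snapshot, so the template permission check is skipped:

```python
def check_template_access(template, account, from_vm_snapshot):
    """Sketch of the adjusted access check (illustrative names):
    a deployment from a VM snapshot keeps working even if the source
    template was later made private or deleted."""
    if from_vm_snapshot:
        # Disk chain comes from the snapshot, not the template.
        return True
    if template is None or template.get("removed"):
        return False
    return template["public"] or template["owner"] == account
```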

> Deploy from VMsnapshot fails with exception if source template is removed or 
> made private
> -
>
> Key: CLOUDSTACK-9557
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9557
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.8.0
> Environment: Any Hypervisor
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Steps to reproduce the issue
> i) Upload a template as admin user and make sure "public" is selected when 
> uploading it.
> ii) Now login as a user to CloudPlatform and deploy a VM with the template 
> created in step i).
> iii) Create a VM snapshot as the user for the VM in step ii). Once created 
> deploy a VM from the snapshot ( this will work as expected)
> iv) Now login as admin again, edit the template created in step i) and 
> uncheck "public". This makes the template private (or else delete the 
> template from the UI).
> v) Login as same user as in step ii) and try to create a VM from the same 
> snapshot ( created in step iii)). This will fail now.





[jira] [Commented] (CLOUDSTACK-9557) Deploy from VMsnapshot fails with exception if source template is removed or made private

2016-10-21 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15594755#comment-15594755
 ] 

subhash yedugundla commented on CLOUDSTACK-9557:



The following is the pull request for this
https://github.com/apache/cloudstack/pull/1721


> Deploy from VMsnapshot fails with exception if source template is removed or 
> made private
> -
>
> Key: CLOUDSTACK-9557
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9557
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.8.0
> Environment: Any Hypervisor
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Steps to reproduce the issue
> i) Upload a template as admin user and make sure "public" is selected when 
> uploading it.
> ii) Now login as a user to CloudPlatform and deploy a VM with the template 
> created in step i).
> iii) Create a VM snapshot as the user for the VM in step ii). Once created 
> deploy a VM from the snapshot ( this will work as expected)
> iv) Now login as admin again, edit the template created in step i) and 
> uncheck "public". This makes the template private (or else delete the 
> template from the UI).
> v) Login as same user as in step ii) and try to create a VM from the same 
> snapshot ( created in step iii)). This will fail now.





[jira] [Created] (CLOUDSTACK-9558) Cleanup the snapshots on the primary storage of Xenserver after VM/Volume is expunged

2016-10-21 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9558:
--

 Summary: Cleanup the snapshots on the primary storage of Xenserver 
after VM/Volume is expunged
 Key: CLOUDSTACK-9558
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9558
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Volumes
Affects Versions: 4.8.0
 Environment: Xen Server
Reporter: subhash yedugundla
 Fix For: 4.8.1


Steps to reproduce the issue
===
i) Deploy a new VM in CCP on Xenserver
ii) Create a snapshot for the volume created in step i) from CCP. This step 
creates a snapshot on the primary storage and keeps it there, as it is used as 
the reference for incremental snapshots.
iii) Now destroy and expunge the VM created in step i).
You will notice that the volume for the VM created in step i) is deleted from 
the primary storage. However, the snapshot created on primary (as part of step 
ii)) still exists there and needs to be deleted manually by the admin.
The snapshot remains on the primary storage even after the volume is deleted.
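The proposed cleanup can be sketched with an illustrative storage model (not the actual XenServer resource code): when a volume is expunged, the cached base snapshot kept on primary for incremental snapshots should be removed along with it:

```python
def expunge_volume(primary_store, volume_id):
    """Sketch of the proposed fix: expunging a volume also deletes the
    leftover base snapshot kept on the primary store (illustrative
    model: the store is a dict of {volume_id: vdi} maps)."""
    primary_store["volumes"].pop(volume_id, None)
    # Proposed addition: drop the cached base snapshot too, so the
    # admin no longer has to delete it by hand.
    primary_store["snapshots"].pop(volume_id, None)
```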





[jira] [Updated] (CLOUDSTACK-9557) Deploy from VMsnapshot fails with exception if source template is removed or made private

2016-10-21 Thread subhash yedugundla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subhash yedugundla updated CLOUDSTACK-9557:
---
Description: 
Steps to reproduce the issue
i) Upload a template as admin user and make sure "public" is selected when 
uploading it.
ii) Now login as a user to CloudStack and deploy a VM with the template created 
in step i).
iii) Create a VM snapshot as the user for the VM in step ii). Once created 
deploy a VM from the snapshot ( this will work as expected)
iv) Now login as admin again, edit the template created in step i) and uncheck 
"public". This makes the template private (or else delete the template from 
the UI).
v) Login as same user as in step ii) and try to create a VM from the same 
snapshot ( created in step iii)). This will fail now.


  was:
Steps to reproduce the issue
i) Upload a template as admin user and make sure "public" is selected when 
uploading it.
ii) Now login as a user to CloudPlatform and deploy a VM with the template 
created in step i).
iii) Create a VM snapshot as the user for the VM in step ii). Once created 
deploy a VM from the snapshot ( this will work as expected)
iv) Now login as admin again , edit the template created in step i) and Uncheck 
"public". This is make the template as private ( or else delete the template 
from UI)
v) Login as same user as in step ii) and try to create a VM from the same 
snapshot ( created in step iii)). This will fail now.



> Deploy from VMsnapshot fails with exception if source template is removed or 
> made private
> -
>
> Key: CLOUDSTACK-9557
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9557
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.8.0
> Environment: Any Hypervisor
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> Steps to reproduce the issue
> i) Upload a template as admin user and make sure "public" is selected when 
> uploading it.
> ii) Now login as a user to CloudStack and deploy a VM with the template 
> created in step i).
> iii) Create a VM snapshot as the user for the VM in step ii). Once created 
> deploy a VM from the snapshot ( this will work as expected)
> iv) Now login as admin again, edit the template created in step i) and 
> uncheck "public". This makes the template private (or else delete the 
> template from the UI).
> v) Login as same user as in step ii) and try to create a VM from the same 
> snapshot ( created in step iii)). This will fail now.





[jira] [Created] (CLOUDSTACK-9559) Deleting zone without deleting the secondary storage under the zone should not be allowed

2016-10-21 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9559:
--

 Summary: Deleting zone without deleting the secondary storage 
under the zone should not be allowed
 Key: CLOUDSTACK-9559
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9559
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Secondary Storage
Affects Versions: 4.8.0
 Environment: All Hypervisors
Reporter: subhash yedugundla
 Fix For: 4.8.1


When a zone is deleted without deleting the corresponding secondary storage, 
and there are templates or volumes in that secondary storage, it won't be 
possible to delete them from ACS.
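The proposed validation can be sketched as follows, modeling image_store rows as dicts (illustrative, not the actual API): zone deletion is refused while live secondary-storage stores remain in the zone:

```python
def delete_zone(zone_id, image_stores):
    """Sketch of the proposed guard: refuse to delete a zone while
    secondary-storage image stores still exist in it (illustrative
    row shape: data_center_id + removed marker)."""
    remaining = [s for s in image_stores
                 if s["data_center_id"] == zone_id and s["removed"] is None]
    if remaining:
        raise RuntimeError(
            f"zone {zone_id} still has {len(remaining)} image store(s); "
            "delete secondary storage first")
    return True
```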





[jira] [Created] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-24 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9560:
--

 Summary: Root volume of deleted VM left unremoved
 Key: CLOUDSTACK-9560
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Volumes
Affects Versions: 4.8.0
 Environment: XenServer
Reporter: subhash yedugundla
 Fix For: 4.8.1


In the following scenario the root volume is left unremoved.
Steps to reproduce the issue
1. Create a VM.
2. Stop this VM.
3. On the page of the volume of the VM, click 'Download Volume' icon.
4. Wait for the popup screen to display and cancel out with/without clicking 
the download link.
5. Destroy the VM
Even after the corresponding VM is deleted and expunged, the root volume is 
left unremoved, in the 'Expunging' state.






[jira] [Commented] (CLOUDSTACK-9560) Root volume of deleted VM left unremoved

2016-10-24 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15601842#comment-15601842
 ] 

subhash yedugundla commented on CLOUDSTACK-9560:


When a download is attempted, a symbolic link is created in the SSVM and a 
mapping is placed in the volume_store_ref table. The link expires after the 
global setting 'extract.url.expiration.interval', which defaults to 4 hours, 
and expired links are cleaned up 'extract.url.cleanup.interval' after expiry, 
which defaults to 2 hours. Deletion of a volume is restricted while there is 
an entry for the volume in the volume_store_ref table. After expiry, the 
cleanup thread clears the entries in the volume_store_ref table, but not the 
entries in the volumes table. If the VM destroy is attempted after the 
volume_store_ref cleanup has run, the volumes are cleaned up. However, if the 
VM is destroyed before the link expires, this issue occurs. So volume cleanup 
is being added to the cleanup of expired entries.
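The fix described above can be sketched as follows, modeling the two tables as dicts (illustrative, not the actual DAO code): when an expired extract-URL entry is purged, a volume stuck in 'Expunging' is expunged as well:

```python
def cleanup_expired_extract_urls(volume_store_ref, volumes, now):
    """Sketch of the fix: removing an expired extract-URL entry also
    finishes expunging the volume whose VM was already destroyed
    (illustrative table model: dicts keyed by volume id)."""
    for vol_id, entry in list(volume_store_ref.items()):
        if entry["expires_at"] <= now:
            del volume_store_ref[vol_id]             # existing behavior
            vol = volumes.get(vol_id)
            if vol and vol["state"] == "Expunging":  # proposed addition
                vol["removed"] = now
```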

> Root volume of deleted VM left unremoved
> 
>
> Key: CLOUDSTACK-9560
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9560
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Volumes
>Affects Versions: 4.8.0
> Environment: XenServer
>Reporter: subhash yedugundla
> Fix For: 4.8.1
>
>
> In the following scenario the root volume is left unremoved.
> Steps to reproduce the issue
> 1. Create a VM.
> 2. Stop this VM.
> 3. On the page of the volume of the VM, click 'Download Volume' icon.
> 4. Wait for the popup screen to display and cancel out with/without clicking 
> the download link.
> 5. Destroy the VM
> Even after the corresponding VM is deleted and expunged, the root volume is 
> left unremoved, in the 'Expunging' state.





[jira] [Created] (CLOUDSTACK-9572) Snapshot on primary storage not cleaned up after Storage migration

2016-10-27 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9572:
--

 Summary: Snapshot on primary storage not cleaned up after Storage 
migration
 Key: CLOUDSTACK-9572
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9572
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller
Affects Versions: 4.8.0
 Environment: Xen Server
Reporter: subhash yedugundla
 Fix For: 4.8.1


Issue Description
===
1. Create an instance on the local storage on any host
2. Create a scheduled snapshot of the volume.
3. Wait until ACS has created the snapshot. ACS creates a snapshot on local 
storage and transfers it to secondary storage, but the latest snapshot stays 
on local storage. This is as expected.
4. Migrate the instance to another XenServer host with the ACS UI and Storage 
Live Migration.
5. The snapshot on the old host's local storage is not cleaned up and stays 
there, so local storage fills up with unneeded snapshots.






[jira] [Created] (CLOUDSTACK-9585) UI doesn't give an option to select the xentools version for non ROOT users

2016-11-09 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9585:
--

 Summary: UI doesn't give an option to select the xentools version 
for non ROOT users
 Key: CLOUDSTACK-9585
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9585
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.8.0
 Environment: Xen Server
Reporter: subhash yedugundla
 Fix For: 4.8.1


The UI doesn't give an option to select the xentools version while registering 
a template for any user other than the ROOT admin. Templates registered by 
other users are marked as 'xenserver56', which results in unusable VMs due to 
the device_id:002 issue with Windows if the template has a xentools version 
higher than 6.1.

Repro Steps
Register a template as any user other than a ROOT domain admin, and the UI 
doesn't give an option to select the xentools version.






[jira] [Created] (CLOUDSTACK-9589) vmName entries from host_details table for the VM's whose state is Expunging should be deleted during upgrade from older versions

2016-11-10 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9589:
--

 Summary: vmName entries from host_details table for the VM's whose 
state is Expunging should be deleted during upgrade from older versions
 Key: CLOUDSTACK-9589
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9589
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Baremetal
Affects Versions: 4.4.4
 Environment: Baremetal zone
Reporter: subhash yedugundla
 Fix For: 4.8.1


Having vmName entries in host_details for VMs in the 'Expunging' state causes 
deployment of VMs with matching host tags to fail, so they are removed during 
the upgrade.
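The upgrade cleanup can be sketched as follows, modeling host_details rows as dicts (illustrative, not the actual upgrade SQL): vmName rows are dropped when the named VM is in the Expunging state:

```python
def purge_expunged_vm_names(host_details, vm_states):
    """Sketch of the upgrade cleanup: keep only vmName rows whose VM
    is not Expunging (illustrative row shape: name/value dicts)."""
    return [row for row in host_details
            if not (row["name"] == "vmName"
                    and vm_states.get(row["value"]) == "Expunging")]
```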





[jira] [Created] (CLOUDSTACK-9592) Empty responses from site to site connection status are not handled properly

2016-11-10 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9592:
--

 Summary: Empty responses from site to site connection status are 
not handled properly
 Key: CLOUDSTACK-9592
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9592
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Affects Versions: 4.8.0
 Environment: Any Hypervisor
Reporter: subhash yedugundla
 Fix For: 4.8.1


The VPN connection status check sometimes gives responses like the following:

Processing: { Ans: , MgmtId: 7203499016310, via: 1(10.147.28.37), Ver: v1, 
Flags: 110, 
[{"com.cloud.agent.api.CheckS2SVpnConnectionsAnswer":{"ipToConnected":{},"ipToDetail":{},"details":"","result":true,"wait":0}}]
 }
2016-09-27 08:52:19,211 DEBUG [c.c.a.t.Request] 
(RouterStatusMonitor-1:ctx-c20f391d) (logid:c217239d) Seq 
1-2315413158421863581: Received: { Ans: , MgmtId: 7203499016310, via: 
1(10.147.28.37), Ver: v1, Flags: 110,
{ CheckS2SVpnConnectionsAnswer }

In the above scenario, the processing of this response wrongly assumes the 
connection is disconnected even though it is not. As a result, two consecutive 
alerts appear in the logs, and as emails, even though no actual disconnection 
and reconnection took place:

Site-to-site Vpn Connection XYZ-VPN on router r-197-VM(id: 197) just switch 
from Disconnected to Connected
Site-to-site Vpn Connection to D1 site to site VPN on router r-372-VM(id: 372) 
just switch from Connected to Disconnected
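The intended handling can be sketched as follows. This is a minimal illustration (the class, enum, and method names are hypothetical, not the actual CloudStack code): when the answer's ipToConnected map is empty, the router reported nothing, so the previous state should be kept rather than assuming "Disconnected".

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: derive the next site-to-site VPN connection state
// from an answer's ipToConnected map. An empty map carries no evidence,
// so the previous state is preserved instead of flipping to Disconnected.
public class S2SVpnStateSketch {

    enum State { CONNECTED, DISCONNECTED }

    static State nextState(Map<String, Boolean> ipToConnected,
                           String peerIp, State previous) {
        if (ipToConnected == null || ipToConnected.isEmpty()) {
            return previous;      // empty answer: keep the known state
        }
        Boolean connected = ipToConnected.get(peerIp);
        if (connected == null) {
            return previous;      // no entry for this peer either
        }
        return connected ? State.CONNECTED : State.DISCONNECTED;
    }

    public static void main(String[] args) {
        // An empty response must not flip a connected tunnel to disconnected.
        assert nextState(new HashMap<>(), "10.147.28.37", State.CONNECTED)
                == State.CONNECTED;

        // An explicit "false" entry is a real disconnect.
        Map<String, Boolean> down = new HashMap<>();
        down.put("10.147.28.37", false);
        assert nextState(down, "10.147.28.37", State.CONNECTED)
                == State.DISCONNECTED;
    }
}
```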






[jira] [Created] (CLOUDSTACK-9595) Transactions are not getting retried in case of database deadlock errors

2016-11-11 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9595:
--

 Summary: Transactions are not getting retried in case of database 
deadlock errors
 Key: CLOUDSTACK-9595
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9595
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.8.0
Reporter: subhash yedugundla
 Fix For: 4.8.1


A customer is seeing occasional 'Deadlock found when trying to get lock; 
try restarting transaction' errors in their management server logs. It 
happens at least once a day. The following is the error seen:

2015-12-09 19:23:19,450 ERROR [cloud.api.ApiServer] 
(catalina-exec-3:ctx-f05c58fc ctx-39c17156 ctx-7becdf6e) unhandled exception 
executing api command: [Ljava.lang.String;@230a6e7f
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: 
com.mysql.jdbc.JDBC4PreparedStatement@74f134e3: DELETE FROM 
instance_group_vm_map WHERE instance_group_vm_map.instance_id = 941374
at com.cloud.utils.db.GenericDaoBase.expunge(GenericDaoBase.java:1209)
at sun.reflect.GeneratedMethodAccessor360.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at 
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy237.expunge(Unknown Source)
at 
com.cloud.vm.UserVmManagerImpl$2.doInTransactionWithoutResult(UserVmManagerImpl.java:2593)
at 
com.cloud.utils.db.TransactionCallbackNoReturn.doInTransaction(TransactionCallbackNoReturn.java:25)
at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:57)
at com.cloud.utils.db.Transaction.execute(Transaction.java:45)
at com.cloud.utils.db.Transaction.execute(Transaction.java:54)
at 
com.cloud.vm.UserVmManagerImpl.addInstanceToGroup(UserVmManagerImpl.java:2575)
at 
com.cloud.vm.UserVmManagerImpl.updateVirtualMachine(UserVmManagerImpl.java:2332)





[jira] [Commented] (CLOUDSTACK-9595) Transactions are not getting retried in case of database deadlock errors

2016-11-11 Thread subhash yedugundla (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656822#comment-15656822
 ] 

subhash yedugundla commented on CLOUDSTACK-9595:


Problem Statement
-----------------
MySQLTransactionRollbackException is seen frequently in the logs.

Root Cause
----------
Attempts to lock rows in the core data access layer fail when the database 
detects a possible deadlock. However, the operations are not retried in that 
case, so retries are being introduced here.

Solution
--------
Operations are retried after a short wait when a deadlock exception occurs.
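The retry described above can be sketched like this. It is an illustrative, self-contained model, not the actual CloudStack Transaction API (the class and exception names here are hypothetical): the transaction body is re-run a bounded number of times, sleeping briefly between attempts, and the deadlock error is propagated only after the last attempt fails.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of retry-on-deadlock: re-run a transaction body a
// few times with a short back-off before giving up.
public class DeadlockRetrySketch {

    // Stand-in for the driver's deadlock/rollback exception.
    static class DeadlockException extends RuntimeException { }

    static <T> T retryOnDeadlock(Callable<T> tx, int maxAttempts, long waitMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return tx.call();        // run the transaction body
            } catch (DeadlockException e) {
                if (attempt >= maxAttempts) {
                    throw e;             // exhausted retries: propagate
                }
                Thread.sleep(waitMs);    // wait, then retry
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Deadlocks twice, then succeeds on the third attempt.
        Integer result = retryOnDeadlock(() -> {
            if (++calls[0] < 3) throw new DeadlockException();
            return 42;
        }, 5, 10L);
        assert result == 42;
        assert calls[0] == 3;
    }
}
```

A real implementation would also have to ensure the transaction body is idempotent (or fully rolled back) before each retry.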






[jira] [Created] (CLOUDSTACK-9410) Data Disk shown as "detached" in XS

2016-06-08 Thread subhash yedugundla (JIRA)
subhash yedugundla created CLOUDSTACK-9410:
--

 Summary: Data Disk shown as "detached" in XS
 Key: CLOUDSTACK-9410
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9410
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Volumes
Affects Versions: 4.8.0
Reporter: subhash yedugundla
Priority: Minor


===
Issue Description
===
1. Create a data-disk
2. Attach data-disk to an instance. Name in XenCenter for the data-disk looks 
like "i-2-762-VM-DATA"
3. Detach data-disk from instance. Name will change to "detached"

The issue is that one would like the detached disk to keep the name given to 
the volume when it was created, rather than "detached"; the naming convention 
for detached disks is misleading. Even though everything should be managed 
through CloudStack, when multiple DATA disks are detached and all show the 
same name in XenCenter, it causes confusion.



