[jira] [Commented] (CLOUDSTACK-9588) Add Load Balancer functionality in Network page is redundant.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866375#comment-15866375
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9588:


Github user nitin-maharana commented on the issue:

https://github.com/apache/cloudstack/pull/1758
  
ping @rajesh-battala @karuturi 


> Add Load Balancer functionality in Network page is redundant.
> -
>
> Key: CLOUDSTACK-9588
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9588
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> Steps to Reproduce:
> Network -> Select any network -> Observe the Add Load Balancer tab
> The "Add Load Balancer" functionality is redundant.
> The above is used to create an LB rule without any public IP.
> Resolution:
> There exists similar functionality in Network -> Any Network -> Details Tab -> 
> View IP Addresses -> Any public IP -> Configuration Tab -> Observe Load 
> Balancing.
> The above is used to create an LB rule with a public IP. This is a more 
> convenient way of creating an LB rule as the IP is involved.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9618) Load Balancer configuration page does not have "Source" method in the drop down list

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866499#comment-15866499
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9618:


Github user nitin-maharana commented on the issue:

https://github.com/apache/cloudstack/pull/1786
  
ping @sateesh-chodapuneedi @rajesh-battala @karuturi 


> Load Balancer configuration page does not have "Source" method in the drop 
> down list
> 
>
> Key: CLOUDSTACK-9618
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9618
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> If we create an isolated network with a NetScaler-published service offering 
> for the Load Balancing service, the load balancing configuration UI does not 
> show "Source" as one of the supported LB methods in the drop-down list. It 
> only shows the "Round-Robin" and "LeastConnection" methods in the list. 
> However, the API successfully creates an LB rule with "Source" as the LB method.





[jira] [Commented] (CLOUDSTACK-9772) Perform HEAD request to retrieve header information

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865345#comment-15865345
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9772:


Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1934
  
@remibergsma Good point, I was aware of that difference, which I think 
doesn't help make systems reliable.
Another improvement would be to remove this function and refactor the code 
to read sizes from the template objects in the DB or from the GET requests 
when downloading them.
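As a sketch of the change being discussed (a hypothetical helper, not the actual UriUtils code), a `HEAD` request returns the same `Content-Length` header as a `GET` without transferring the response body:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class RemoteSize {
    // Parse a Content-Length header value; -1 when absent or malformed.
    static long parseContentLength(String value) {
        if (value == null) {
            return -1L;
        }
        try {
            return Long.parseLong(value.trim());
        } catch (NumberFormatException e) {
            return -1L;
        }
    }

    // Hypothetical helper: fetch the remote file size via HEAD instead of GET,
    // so only headers cross the wire and no template data is downloaded.
    static long headContentLength(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("HEAD");   // headers only, no body
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        try {
            return parseContentLength(conn.getHeaderField("Content-Length"));
        } finally {
            conn.disconnect();
        }
    }
}
```

This keeps a management-server restart from re-downloading every template body just to learn its size.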


> Perform HEAD request to retrieve header information
> ---
>
> Key: CLOUDSTACK-9772
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9772
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Template
>Affects Versions: 4.2.0, 4.2.1, 4.3.0, 4.4.0, 4.5.0, 4.3.1, 4.4.1, 4.4.2, 
> 4.4.3, 4.3.2, 4.5.1, 4.4.4, 4.5.2, 4.6.0, 4.6.1, 4.6.2, 4.7.0, 4.7.1, 4.8.0, 
> 4.9.0, 4.8.1.1, 4.9.0.1, 4.5.2.2
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>
> The function in UriUtils which checks the template file size of an 
> arbitrary URL sends a `GET` request only to read the response headers. A 
> `HEAD` request is the correct way of retrieving such information from the 
> response headers.
> This was affecting the restart of a management server, since all templates 
> were retrieved when receiving the startup command from the secondary storage 
> sysvm.





[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865344#comment-15865344
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1939
  
@blueorangutan test


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the listUsageRecords API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain too, as the account 
> information has both account and accountid.
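The gap described above can be sketched as a response object that carries the domain name alongside the domain id (a hypothetical class for illustration, not the actual UsageRecordResponse):

```java
public class UsageRecordResponseSketch {
    private String domainId;
    private String domainName; // the field the issue asks for, alongside domainid

    // Populate both identifiers, mirroring how account/accountid are both returned.
    void setDomain(String domainId, String domainName) {
        this.domainId = domainId;
        this.domainName = domainName;
    }

    String getDomainId() { return domainId; }
    String getDomainName() { return domainName; }
}
```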





[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865323#comment-15865323
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1939
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-483


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the listUsageRecords API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain too, as the account 
> information has both account and accountid.





[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865347#comment-15865347
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1939
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the listUsageRecords API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain too, as the account 
> information has both account and accountid.





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865671#comment-15865671
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865694#comment-15865694
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
Packaging result: ✔centos6 ✔centos7 ✖debian. JID-485


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865709#comment-15865709
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) 
has been kicked to run smoke tests


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865708#comment-15865708
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@blueorangutan test centos7 xenserver-65sp1


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865703#comment-15865703
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@borisstoyanov unsupported parameters provided. Supported mgmt server os 
are: `centos6, centos7, ubuntu`. Supported hypervisors are: `kvm-centos6, 
kvm-centos7, kvm-ubuntu, xenserver-65sp1, xenserver-62sp1, vmware-60u2, 
vmware-55u3, vmware-51u1, vmware-50u1`


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865376#comment-15865376
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user kishankavala commented on the issue:

https://github.com/apache/cloudstack/pull/1939
  
LGTM

API response:


admin
8c91c04e-f282-11e6-8a09-d4ae52cb9a54
8c91ab6a-f282-11e6-8a09-d4ae52cb9a54
ROOT
a6401f5b-b090-4a64-9d73-c04369d15ca8

VM-5e458101-b6a7-477e-9086-ee659ce0a700 running time (ServiceOffering: 1) 
(Template: 111)

0.12778 Hrs
1
0.12778
5e458101-b6a7-477e-9086-ee659ce0a700
VM-5e458101-b6a7-477e-9086-ee659ce0a700
2b10fa98-c167-444f-b9c0-752611a9f267
02567d08-f283-11e6-8a09-d4ae52cb9a54
5e458101-b6a7-477e-9086-ee659ce0a700
Simulator
2017-02-14'T'07:07:15+00:00
2017-02-14'T'07:15:00+00:00



> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the listUsageRecords API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain too, as the account 
> information has both account and accountid.





[jira] [Commented] (CLOUDSTACK-8654) CoreOS support

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865390#comment-15865390
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8654:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1244


> CoreOS support
> --
>
> Key: CLOUDSTACK-8654
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8654
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> - Support CoreOS OS type while registering template/ISO
> - Add UI option to supply user data (cloud-config) while deploying Vm





[jira] [Commented] (CLOUDSTACK-9655) The template which is registered in all zones will be deleted by deleting 1 template on any zone

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865410#comment-15865410
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9655:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1818
  
@ustcweizhou in the above snippet, it is adding the zoneid if it is not 
cross-zone. If no zoneid is provided, it will default to -1, which means 
cross-zone. So, this check is required. 
This fix just enhances the message if it is a cross-zone template. It is 
not trying to do the delete only in that zone; that is not possible.
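The zoneid defaulting described above can be sketched as follows (a simplified, hypothetical helper, not the actual CloudStack delete path):

```java
public class TemplateZoneResolver {
    static final long CROSS_ZONE = -1L;

    // Simplified sketch of the behavior discussed in the comment: when no
    // zoneid is supplied, it defaults to -1, meaning cross-zone (delete
    // everywhere). For a cross-zone template, a supplied zoneid still results
    // in a cross-zone delete, which is why the UI warning is being added.
    static long resolveZoneScope(Long zoneId, boolean crossZoneTemplate) {
        if (zoneId == null) {
            return CROSS_ZONE;
        }
        return crossZoneTemplate ? CROSS_ZONE : zoneId;
    }
}
```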


> The template which is registered in all zones will be deleted by deleting 1 
> template on any zone
> 
>
> Key: CLOUDSTACK-9655
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9655
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> For a cross-zone template, trying to delete a copy of it in one zone will 
> delete it from all the zones without showing any warning in the UI.





[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865526#comment-15865526
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
@blueorangutan package


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of the volume ID in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> events, for example: "Scheduled async job for creating snapshot for volume 
> Id:270". From the notification alone, the user is not able to identify the 
> volume, so the event description was modified to use the UUID.
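The change described above can be sketched as a helper that formats the event description with the user-visible UUID instead of the internal database ID (hypothetical method name, for illustration only):

```java
public class EventDescription {
    // Hypothetical sketch: build the snapshot-scheduled event description
    // using the volume's UUID, which users can look up, rather than the
    // internal numeric database ID (e.g. "Id:270").
    static String snapshotScheduled(String volumeUuid) {
        return "Scheduled async job for creating snapshot for volume Id:" + volumeUuid;
    }
}
```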





[jira] [Commented] (CLOUDSTACK-9763) vpc: can not ssh to instance after vpc restart

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865524#comment-15865524
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9763:


Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1919
  
The VPC VR maintains metadata 
(http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/virtual_machines/user-data.html)
 as static files in /var/www/html/metadata. When a VR is destroyed and 
recreated (e.g. by "restart with cleanup"), this metadata is rebuilt by 
createVmDataCommandForVMs(). The public-keys entry is missing from that 
function, so it becomes empty after the rebuild, and a request for 
latest/meta-data/public-keys no longer returns the correct key.

This PR adds public-keys to the rebuild.
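The rebuild described above can be sketched roughly as follows (hypothetical names and structure, not the actual createVmDataCommandForVMs() code); each entry ends up as a static file served under /var/www/html/metadata:

```java
import java.util.ArrayList;
import java.util.List;

public class VmMetadataSketch {
    // Hypothetical sketch of the metadata entries a VR rebuild should emit,
    // each as a {filename, contents} pair. The fix is to include the
    // public-keys entry so latest/meta-data/public-keys is not left empty
    // after a "restart with cleanup".
    static List<String[]> buildMetadataFiles(String vmName, String publicKey) {
        List<String[]> files = new ArrayList<>();
        files.add(new String[] {"local-hostname", vmName});
        // Previously omitted, which made the rebuilt entry empty:
        files.add(new String[] {"public-keys", publicKey == null ? "" : publicKey});
        return files;
    }
}
```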


> vpc: can not ssh to instance after vpc restart
> --
>
> Key: CLOUDSTACK-9763
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9763
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router, VPC
>Affects Versions: 4.8.0
>Reporter: Joakim Sernbrant
>
> Restart with Cleanup of a VPC does not update the public-key metadata; it is 
> explicitly set to null in 
> https://github.com/apache/cloudstack/blob/master/server/src/com/cloud/network/router/CommandSetupHelper.java#L614
> Rebooting instances relying on metadata (e.g. CoreOS) will no longer have the 
> correct public key configured.





[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865552#comment-15865552
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-484


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of the volume ID in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> events, for example: "Scheduled async job for creating snapshot for volume 
> Id:270". From the notification alone, the user is not able to identify the 
> volume, so the event description was modified to use the UUID.





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865670#comment-15865670
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@blueorangutan package


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865698#comment-15865698
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@blueorangutan test centos7 xenserver-65sp2


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV vms are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Updated] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread Jayant Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayant Patil updated CLOUDSTACK-9781:
-
Summary: ACS records ID in events tables instead of UUID.  (was: CCP 
records ID in events tables instead of UUID.)

> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of the volume ID in CCP events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> events, for example: "Scheduled async job for creating snapshot for volume 
> Id:270". From the notification alone, the user is not able to identify the 
> volume, so the event description was modified to use the UUID.





[jira] [Commented] (CLOUDSTACK-9721) Remove deprecated/unused global configuration parameter - consoleproxy.loadscan.interval

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865443#comment-15865443
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9721:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1881
  
configuration cleanup. needs BVT


> Remove deprecated/unused global configuration parameter - 
> consoleproxy.loadscan.interval
> 
>
> Key: CLOUDSTACK-9721
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9721
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
> Fix For: 4.10.0.0
>
>
> CloudStack doesn't use the "consoleproxy.loadscan.interval" parameter.





[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865556#comment-15865556
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of the volume ID in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> events, for example: "Scheduled async job for creating snapshot for volume 
> Id:270". From the notification alone, the user is not able to identify the 
> volume, so the event description was modified to use the UUID.





[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865554#comment-15865554
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
@blueorangutan test


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of the volume ID in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> events, for example: "Scheduled async job for creating snapshot for volume 
> Id:270". From the notification alone, the user is not able to identify the 
> volume, so the event description was modified to use the UUID.





[jira] [Commented] (CLOUDSTACK-8737) Remove out-of-band VR reboot code based on persistent VR configuration changes

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865435#comment-15865435
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8737:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1882
  
unused class is removed. needs BVT


> Remove out-of-band VR reboot code based on persistent VR configuration changes
> --
>
> Key: CLOUDSTACK-8737
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8737
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.6.0
>Reporter: Koushik Das
>Assignee: Koushik Das
> Fix For: 4.6.0
>
>
> VR reboot was required to reprogram rules in case the VR was stopped and 
> started outside of CS. With persistent VR configuration changes (added in 
> 4.6), the rules are persisted across a stop-start of the VR, so there is no 
> need to reboot the VR. Refer to the following discussion on the dev list.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3cac13e3c1-3719-4b48-a35d-dbc4ba704...@schubergphilis.com%3e





[jira] [Commented] (CLOUDSTACK-9655) The template which is registered in all zones will be deleted by deleting 1 template on any zone

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865464#comment-15865464
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9655:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1818
  
@karuturi in my understanding, the zoneid is not necessary if the template is 
not cross-zone.
If the template is cross-zone, the zoneid should be passed when the template is 
deleted from the 'Zones' tab (meaning the template will be deleted from that 
zone only). Otherwise, the template will be deleted from all zones (this is why 
I suggest adding the new button on the template details page, so that the 
zoneid will not be set).
Please correct me if I am wrong.
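The rule described in the comment above can be sketched as follows. This is an illustrative sketch only: the method name `zonesToDeleteFrom` and its parameters are assumptions, not CloudStack's actual template-manager API.

```java
import java.util.Collections;
import java.util.List;

public class TemplateDeleteSketch {

    /** Decide which zones a template delete request applies to (illustrative names). */
    static List<String> zonesToDeleteFrom(String requestedZoneId, List<String> templateZones) {
        if (requestedZoneId != null) {
            // zoneid supplied, e.g. the delete was issued from the 'Zones' tab:
            // remove the template from that zone only.
            return Collections.singletonList(requestedZoneId);
        }
        // No zoneid: the template is removed from every zone it exists in,
        // which is why a details-page delete button (no zoneid set) behaves
        // differently from a per-zone delete.
        return templateZones;
    }
}
```

Under this sketch, a cross-zone delete issued without a zoneid silently covers all zones, which is the surprising behavior the issue reports.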



> The template which is registered in all zones will be deleted by deleting 1 
> template on any zone
> 
>
> Key: CLOUDSTACK-9655
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9655
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> for a crosszone template, trying to delete a copy of it in one zone will 
> delete it from all the zones without showing any warning in UI.





[jira] [Commented] (CLOUDSTACK-9724) VPC tier network restart with cleanup, missing public ip on VR interface

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865397#comment-15865397
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9724:


Github user jayapalu commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1885#discussion_r100985099
  
--- Diff: server/src/com/cloud/network/IpAddressManagerImpl.java ---
@@ -460,6 +460,12 @@ boolean checkIfIpAssocRequired(Network network, 
boolean postApplyRules, List 0) {
+if (network.getVpcId() != null) {
--- End diff --

Improved. Added more details


> VPC tier network restart with cleanup, missing public ip on VR interface
> 
>
> Key: CLOUDSTACK-9724
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9724
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.10.0.0
>
>
> On vpc tier network restart with clean up missing secondary ip addresses on 
> the VR public interface.
> 1. Create a vpc and deploy a vm in tier.
> 2. Acquire a public ip and configure PF rule
> 3. check that the VR interface has two ip addresses.
> 4. Restart the tier network with cleanup.
> 5. After restart in VR interface ip (PF rule configured) is missed.





[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865426#comment-15865426
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8896:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/873


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85
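The DEBUG line above encodes the allocator's capacity test: a pool is accepted when the bytes already allocated plus the asking size stay within maxSize scaled by the disable threshold. A minimal sketch, assuming the comparison shape (the method name and class are illustrative, not the actual StorageManagerImpl code):

```java
public class PoolCapacityCheck {

    /** True when the pool can accept askingSize more bytes without
     *  crossing the allocated-capacity disable threshold. */
    static boolean hasEnoughAllocatedCapacity(long maxSize, long totalAllocated,
                                              long askingSize, double threshold) {
        return totalAllocated + askingSize <= (long) (maxSize * threshold);
    }

    public static void main(String[] args) {
        // Values from the DEBUG line in the description.
        long maxSize = 3_298_534_883_328L;   // ~3 TiB
        long allocated = 24_096_276_480L;    // ~22.4 GiB
        double threshold = 0.85;

        // With askingSize 0 the check trivially passes -- which is the bug:
        // a Ready volume moved in from another pool contributes nothing to the ask.
        System.out.println(hasEnoughAllocatedCapacity(maxSize, allocated, 0L, threshold));
    }
}
```

With askingSize forced to 0, every moved-in volume passes the check, so allocation can creep past 100%.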





[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865425#comment-15865425
 ] 

ASF subversion and git services commented on CLOUDSTACK-8896:
-

Commit abd7860e68f3465f4c79fed657f27ef1737b92f1 in cloudstack's branch 
refs/heads/4.9 from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=abd7860 ]

Merge pull request #873 from karuturi/CLOUDSTACK-8896

CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

This issue occurs when a volume in Ready state is moved across storage
pools.

While finding if the storage pool has enough space, it has a check to
consider the size of non-Ready volumes only. This is true if the volume
to be attached to a vm is in the same storage pool. But, if the volume
is in another storage pool and has to be moved to a vm's storage pool,
the size of the volume should be considered in doing the space check.

The fix computes the asking size when the volume is not in Ready state or
when the volume is on a different storage pool.

Testing:
I couldn't write unit tests for it. This class is not in a unit-testable state.

manually tested in the below environment
1. xenserver 6.5 setup with 2 clusters and a host each in each of them.
2. added storage tags for the primary storage.
3. created two service offerings with the storage tags.
4. deployed two vms using newly created offerings in step 3.
5. at this stage, there are two vms one on each host with root disks on the 
corresponding primary.
6. create a data disk and attach it to vm1
7. detach the data disk. now the data disk is in the primary storage of the 
cluster of vm1 (let us say primary1)
8. attach this data disk to vm2(running on a host in different cluster)
9. the volume should be moved to the primary storage of another cluster and 
op_host_capacity should be accordingly updated.

* pr/873:
  CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

Signed-off-by: Rajani Karuturi 
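The asking-size rule the commit message describes can be sketched as below. The enum and parameter names are assumptions for illustration; the real logic lives in StorageManagerImpl with CloudStack's own types.

```java
public class AskingSizeSketch {
    enum VolumeState { ALLOCATED, READY }

    /** Bytes the target pool must still reserve for this volume. */
    static long askingSize(VolumeState state, long volumeSize,
                           Long volumePoolId, long targetPoolId) {
        boolean onTargetPool = volumePoolId != null && volumePoolId == targetPoolId;
        if (state != VolumeState.READY || !onTargetPool) {
            // Not yet provisioned on this pool: its full size counts
            // against the pool's capacity.
            return volumeSize;
        }
        // Already Ready on this pool: its size is in totalAllocated already,
        // so asking for it again would double-count.
        return 0L;
    }
}
```

The pre-fix behavior corresponds to returning 0 for any Ready volume, even one living on a different pool, which is exactly the case the merge addresses.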


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85









[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865421#comment-15865421
 ] 

ASF subversion and git services commented on CLOUDSTACK-8896:
-

Commit abd7860e68f3465f4c79fed657f27ef1737b92f1 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=abd7860 ]

Merge pull request #873 from karuturi/CLOUDSTACK-8896

CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

This issue occurs when a volume in Ready state is moved across storage
pools.

While finding if the storage pool has enough space, it has a check to
consider the size of non-Ready volumes only. This is true if the volume
to be attached to a vm is in the same storage pool. But, if the volume
is in another storage pool and has to be moved to a vm's storage pool,
the size of the volume should be considered in doing the space check.

The fix computes the asking size when the volume is not in Ready state or
when the volume is on a different storage pool.

Testing:
I couldn't write unit tests for it. This class is not in a unit-testable state.

manually tested in the below environment
1. xenserver 6.5 setup with 2 clusters and a host each in each of them.
2. added storage tags for the primary storage.
3. created two service offerings with the storage tags.
4. deployed two vms using newly created offerings in step 3.
5. at this stage, there are two vms one on each host with root disks on the 
corresponding primary.
6. create a data disk and attach it to vm1
7. detach the data disk. now the data disk is in the primary storage of the 
cluster of vm1 (let us say primary1)
8. attach this data disk to vm2(running on a host in different cluster)
9. the volume should be moved to the primary storage of another cluster and 
op_host_capacity should be accordingly updated.

* pr/873:
  CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

Signed-off-by: Rajani Karuturi 


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85





[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865476#comment-15865476
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


GitHub user jayantpatil1234 opened a pull request:

https://github.com/apache/cloudstack/pull/1940

CLOUDSTACK-9781:ACS records ID in events tables instead of UUID.

ISSUE
=
Wrong presentation of volume id in ACS events.
While creating a snapshot, only the internal volume ID is mentioned in the 
event. For example, “Scheduled async job for creating snapshot for volume 
Id:270". Looking at the notification, the user cannot identify the volume, so 
the event description was modified to use the UUID.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CS-48313

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1940.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1940


commit 7f216e8960e466acbd610fd90d44583bb8b6c15e
Author: Jayant Patil 
Date:   2017-02-14T08:51:30Z

CLOUDSTACK-9781:ACS records ID in events tables instead of UUID.




> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of volume id in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> event. For example, “Scheduled async job for creating snapshot for volume 
> Id:270". Looking at the notification, the user cannot identify the volume, so 
> the event description was modified to use the UUID.
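The change the issue asks for is in how the event description string is composed: use the volume's UUID, which users can resolve through the API, instead of the internal database ID. A sketch under assumed method names (the message text is taken from the example in the description):

```java
public class EventDescriptionSketch {

    /** Before: internal DB id, meaningless to API users. */
    static String describeWithId(long volumeId) {
        return "Scheduled async job for creating snapshot for volume Id:" + volumeId;
    }

    /** After: the volume's UUID, the identifier the API exposes. */
    static String describeWithUuid(String volumeUuid) {
        return "Scheduled async job for creating snapshot for volume Id: " + volumeUuid;
    }
}
```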





[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865528#comment-15865528
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of volume id in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the 
> event. For example, “Scheduled async job for creating snapshot for volume 
> Id:270". Looking at the notification, the user cannot identify the volume, so 
> the event description was modified to use the UUID.





[jira] [Commented] (CLOUDSTACK-9763) vpc: can not ssh to instance after vpc restart

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865547#comment-15865547
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9763:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1919
  
I agree with @serbaut: different from the password, which is applied to the VR 
only once, the public keys should be set in the VR each time a VR is recreated.

LGTM +1


> vpc: can not ssh to instance after vpc restart
> --
>
> Key: CLOUDSTACK-9763
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9763
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router, VPC
>Affects Versions: 4.8.0
>Reporter: Joakim Sernbrant
>
> Restart with Cleanup of a VPC does not update the public-key metadata; it is 
> explicitly set to null in 
> https://github.com/apache/cloudstack/blob/master/server/src/com/cloud/network/router/CommandSetupHelper.java#L614
> Rebooting instances relying on metadata (e.g. coreos) will no longer have the 
> correct public key configured.
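The distinction drawn in the discussion — password applied once, public key needed on every VR recreation — suggests the metadata built for a recreated VR should carry the stored key rather than null. A sketch; the map keys and the "saved_password" sentinel are assumptions for illustration, not the exact CommandSetupHelper code:

```java
import java.util.HashMap;
import java.util.Map;

public class VrMetadataSketch {

    /** Metadata the management server would (re)send to a recreated VR for one VM. */
    static Map<String, String> metadataOnVrRecreate(String savedPublicKey) {
        Map<String, String> md = new HashMap<>();
        // The password is applied to the VR only once at deploy time; on a
        // recreate a sentinel is sent instead of the real secret.
        md.put("password", "saved_password");
        // The public key must be re-sent on every recreate; the reported bug
        // passes null here, wiping the key from the VR's metadata service.
        md.put("public-key", savedPublicKey == null ? "" : savedPublicKey);
        return md;
    }
}
```

With this shape, instances that read their public key from metadata on boot (e.g. coreos) keep working across a restart-with-cleanup.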





[jira] [Commented] (CLOUDSTACK-9697) Better error message on UI user if tries to shrink the VM ROOT volume size

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865737#comment-15865737
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9697:


Github user rashmidixit commented on the issue:

https://github.com/apache/cloudstack/pull/1855
  
@sadhugit I have updated the bug description based on your comments.


> Better error message on UI user if tries to shrink the VM ROOT volume size
> --
>
> Key: CLOUDSTACK-9697
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9697
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.9.1.0
>
>
> If a user tries to shrink the size of the root volume of a VM, the operation 
> fails with an error 
> Going from existing size of 10737418240 to size of 8589934592 would shrink 
> the volume.Need to sign off by supplying the shrinkok parameter with value of 
> true.
> Instead, the UI can simply disallow the shrink operation on the ROOT volume 
> and throw a more user-friendly message in the UI:
> "Shrink operation on ROOT volume not supported"
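The suggested UI-side guard reduces to a size comparison made before the resize call is issued. A minimal sketch (the method name is illustrative; the error string is the one proposed in the description):

```java
public class ShrinkGuardSketch {

    /** Returns an error message for a disallowed ROOT-volume shrink, or null if the resize is OK. */
    static String validateResize(boolean isRootVolume, long currentSize, long newSize) {
        if (isRootVolume && newSize < currentSize) {
            // Block the shrink up front instead of surfacing the backend's
            // "would shrink the volume ... shrinkok" error to the user.
            return "Shrink operation on ROOT volume not supported";
        }
        return null;
    }
}
```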





[jira] [Updated] (CLOUDSTACK-9697) Better error message on UI user if tries to shrink the VM ROOT volume size

2017-02-14 Thread Rashmi Dixit (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rashmi Dixit updated CLOUDSTACK-9697:
-
Summary: Better error message on UI user if tries to shrink the VM ROOT 
volume size  (was: Better error message user if tries to shrink the VM ROOT 
volume size)

> Better error message on UI user if tries to shrink the VM ROOT volume size
> --
>
> Key: CLOUDSTACK-9697
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9697
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.9.1.0
>
>
> If a user tries to shrink the size of the root volume of a VM, the operation 
> fails with an error 
> Going from existing size of 10737418240 to size of 8589934592 would shrink 
> the volume.Need to sign off by supplying the shrinkok parameter with value of 
> true.
> Instead, the UI can simply not allow shrink operation on the ROOT volume. 
> Throw a more user friendly message
> "Shrink operation on ROOT volume not supported"





[jira] [Updated] (CLOUDSTACK-9697) Better error message on UI user if tries to shrink the VM ROOT volume size

2017-02-14 Thread Rashmi Dixit (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rashmi Dixit updated CLOUDSTACK-9697:
-
Description: 
If a user tries to shrink the size of the root volume of a VM, the operation 
fails with an error 
Going from existing size of 10737418240 to size of 8589934592 would shrink the 
volume.Need to sign off by supplying the shrinkok parameter with value of true.

Instead, the UI can simply not allow shrink operation on the ROOT volume. Throw 
a more user friendly message on the UI
"Shrink operation on ROOT volume not supported"


  was:
If a user tries to shrink the size of the root volume of a VM, the operation 
fails with an error 
Going from existing size of 10737418240 to size of 8589934592 would shrink the 
volume.Need to sign off by supplying the shrinkok parameter with value of true.

Instead, the UI can simply not allow shrink operation on the ROOT volume. Throw 
a more user friendly message
"Shrink operation on ROOT volume not supported"



> Better error message on UI user if tries to shrink the VM ROOT volume size
> --
>
> Key: CLOUDSTACK-9697
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9697
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.9.1.0
>
>
> If a user tries to shrink the size of the root volume of a VM, the operation 
> fails with an error 
> Going from existing size of 10737418240 to size of 8589934592 would shrink 
> the volume.Need to sign off by supplying the shrinkok parameter with value of 
> true.
> Instead, the UI can simply disallow the shrink operation on the ROOT volume 
> and throw a more user-friendly message in the UI:
> "Shrink operation on ROOT volume not supported"





[jira] [Commented] (CLOUDSTACK-9699) Metrics: Add a global setting to enable/disable Metrics view

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865740#comment-15865740
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9699:


Github user rashmidixit commented on the issue:

https://github.com/apache/cloudstack/pull/1884
  
@rhtyd Thanks for the update. 


> Metrics: Add a global setting to enable/disable Metrics view
> 
>
> Key: CLOUDSTACK-9699
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9699
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.7.0, 4.8.0, 4.9.0
>Reporter: Rashmi Dixit
>Assignee: Rashmi Dixit
> Fix For: 4.10.0.0
>
> Attachments: enable-metrics-flag.PNG, metrics-disabled.PNG, 
> metrics-enabled.PNG
>
>
> The Metrics view for each type of entity basically fires APIs and calculates 
> required values on the client end. For e.g. to display memory usage etc at 
> the zone level, it will fetch all zones. For each zone it will fetch 
> pods->cluster->host->VMs
> For a very large Cloudstack installation this will have a major impact on the 
> performance. 
> Ideally, there should be an API which calculates all this in the backend and 
> the UI should simply show the values. However, for the time, introduce a 
> global setting called enable.metrics which will be set to false. This will 
> cause the metrics button not to be shown on any of the pages.
> If the Admin changes this to true, then the button will be visible and 
> Metrics functionality will work as usual.
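The gating described above is a boolean global setting consulted before rendering the metrics button. A minimal sketch, assuming a simple key-value settings lookup (the setting name `enable.metrics` comes from the description; the lookup API is an assumption):

```java
import java.util.HashMap;
import java.util.Map;

public class MetricsToggleSketch {
    private final Map<String, String> globalSettings = new HashMap<>();

    void set(String key, String value) { globalSettings.put(key, value); }

    /** The metrics UI stays hidden unless the admin flips enable.metrics to true. */
    boolean showMetricsButton() {
        return Boolean.parseBoolean(globalSettings.getOrDefault("enable.metrics", "false"));
    }
}
```

Defaulting to false means large installations avoid the client-side fan-out (zones -> pods -> clusters -> hosts -> VMs) unless an admin opts in.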





[jira] [Commented] (CLOUDSTACK-8886) Limitations is listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866204#comment-15866204
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1939
  
Trillian test result (tid-818)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33181 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1939-t818-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 858.87 | 
test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 350.74 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 165.29 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.22 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 255.96 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 277.32 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 537.76 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 511.00 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1454.05 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 543.71 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1276.54 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.44 | test_volumes.py
test_08_resize_volume | Success | 156.39 | test_volumes.py
test_07_resize_fail | Success | 161.44 | test_volumes.py
test_06_download_detached_volume | Success | 156.31 | test_volumes.py
test_05_detach_volume | Success | 155.78 | test_volumes.py
test_04_delete_attached_volume | Success | 151.19 | test_volumes.py
test_03_download_attached_volume | Success | 156.20 | test_volumes.py
test_02_attach_volume | Success | 95.37 | test_volumes.py
test_01_create_volume | Success | 711.56 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.19 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.73 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.73 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 267.62 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.56 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.89 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.79 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.88 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.16 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.31 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.55 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.05 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.12 | test_templates.py
test_01_create_template | Success | 45.46 | test_templates.py
test_10_destroy_cpvm | Success | 161.57 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.59 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.46 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.56 | test_ssvm.py
test_06_stop_cpvm | Success | 161.70 | test_ssvm.py
test_05_stop_ssvm | Success | 138.81 | test_ssvm.py
test_04_cpvm_internals | Success | 1.13 | test_ssvm.py
test_03_ssvm_internals | Success | 3.23 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.26 | test_snapshots.py
test_04_change_offering_small | Success | 240.79 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.08 | test_service_offerings.py
test_01_create_service_offering | Success | 0.17 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py

[jira] [Commented] (CLOUDSTACK-9763) vpc: can not ssh to instance after vpc restart

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865943#comment-15865943
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9763:


Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1919
  
Thanks for the explanation @serbaut. That is exactly why I asked: so it is 
not a reboot/restart per se. It is a re-deploy; the old virtual machine is 
destroyed and a new one is deployed (or maybe the VM's VHD is reset). By 
VM here I mean the VR (which, at the end of the day, is a VM). 

@serbaut, could you add these explanations to the PR description and the Jira 
ticket (https://issues.apache.org/jira/browse/CLOUDSTACK-9763)? At least for 
me, this was not clear.

Thanks for the fix ;)
Code LGTM


> vpc: can not ssh to instance after vpc restart
> --
>
> Key: CLOUDSTACK-9763
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9763
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router, VPC
>Affects Versions: 4.8.0
>Reporter: Joakim Sernbrant
>
> Restart with Cleanup of a VPC does not update the public-key metadata; it is 
> explicitly set to null in 
> https://github.com/apache/cloudstack/blob/master/server/src/com/cloud/network/router/CommandSetupHelper.java#L614
> Rebooted instances that rely on metadata (e.g. CoreOS) will no longer have the 
> correct public key configured.
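
The behaviour described above can be sketched as follows. This is a minimal illustration with hypothetical class and method names, not CloudStack's actual code: the point is that a fresh deploy propagates the stored SSH public key into the metadata, while restart-with-cleanup rebuilds it with null.

```java
// Hedged sketch with hypothetical names (not CloudStack's actual classes): the bug
// is that restart-with-cleanup rebuilds the VR's metadata with a null public key,
// while a fresh deployment propagates the key recorded for the instance.
public class VmMetadataSketch {

    // Metadata entry served to instances through the VR (consumed by e.g. CoreOS).
    static class Metadata {
        final String publicKey;
        Metadata(String publicKey) { this.publicKey = publicKey; }
    }

    // Fresh deploy: the stored SSH public key is propagated into the metadata.
    static Metadata buildOnDeploy(String storedPublicKey) {
        return new Metadata(storedPublicKey);
    }

    // Restart with cleanup (the buggy path): the key is explicitly set to null,
    // so instances that re-read metadata after a reboot lose their key.
    static Metadata buildOnRestartWithCleanup() {
        return new Metadata(null);
    }

    public static void main(String[] args) {
        System.out.println(buildOnDeploy("ssh-rsa AAAA... user@host").publicKey);
        System.out.println(buildOnRestartWithCleanup().publicKey);
    }
}
```

A fix along these lines would make the restart path reuse the key already stored for the VM instead of passing null.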



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-14 Thread Jayant Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayant Patil updated CLOUDSTACK-9781:
-
Description: 
ISSUE
=
Wrong presentation of the volume id in ACS events.
While creating a snapshot, only the internal volume ID is mentioned in the event. For 
example, “Scheduled async job for creating snapshot for volume Id:270”. From the 
notification alone, the user cannot identify the volume, so the event 
description was modified to include the UUID.

  was:
ISSUE
=
Wrong presentation of volume id in CCP events.
While creating a snapshot, only volume ID is mentioned in the events. For 
example, “Scheduled async job for creating snapshot for volume Id:270". On 
looking into the notification, user is not able to identify the volume. So 
modified event description with UUID.


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of the volume id in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the event. For 
> example, “Scheduled async job for creating snapshot for volume Id:270”. From the 
> notification alone, the user cannot identify the volume, so the event 
> description was modified to include the UUID.
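
The change described above amounts to formatting the event description with the resource's externally visible UUID instead of its internal database id. A minimal sketch, with hypothetical names (not the actual CloudStack event code):

```java
// Hedged sketch (hypothetical names): replace the internal numeric id in event
// descriptions with the volume's UUID, which is what users see elsewhere in the
// API and UI and can therefore actually look up.
public class EventDescriptionSketch {

    static class Volume {
        final long id;       // internal database id, e.g. 270
        final String uuid;   // externally visible identifier
        Volume(long id, String uuid) { this.id = id; this.uuid = uuid; }
    }

    // Before: only the internal id appears, which users cannot resolve.
    static String describeBefore(Volume v) {
        return "Scheduled async job for creating snapshot for volume Id:" + v.id;
    }

    // After: the UUID identifies the volume unambiguously for the user.
    static String describeAfter(Volume v) {
        return "Scheduled async job for creating snapshot for volume Id: " + v.uuid;
    }

    public static void main(String[] args) {
        // Example values only; the UUID below is made up for illustration.
        Volume v = new Volume(270L, "0df1e2a4-3b55-44d2-9e17-9f7e4a1b2c3d");
        System.out.println(describeBefore(v));
        System.out.println(describeAfter(v));
    }
}
```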



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865418#comment-15865418
 ] 

ASF subversion and git services commented on CLOUDSTACK-8896:
-

Commit bec9115a617ecac27e5b5785c8e838a535764f7d in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bec9115 ]

CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

This issue occurs when a volume in Ready state is moved across storage
pools.

While finding if the storage pool has enough space, it has a check to
consider the size of non Ready volumes only. This is true if the volume
to be attached to a vm is in the same storage pool. But, if the volume
is in another storage pool and has to be moved to a vm's storage pool,
the size of the volume should be considered in doing the space check.

The fix computes the asking size when the volume is not in Ready state or when the
volume is on a different storage pool.
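
The commit message above can be illustrated with a small sketch. The names here are hypothetical; the real check lives in `StorageManagerImpl` and is considerably more involved:

```java
// Hedged sketch (hypothetical names): the space check must count a Ready volume's
// size when the volume lives on a different storage pool and has to be moved.
public class CapacityCheckSketch {

    enum VolumeState { READY, ALLOCATED }

    // Before the fix, only non-Ready volumes contributed to the asking size, so
    // moving a Ready volume across pools asked the target pool for 0 bytes.
    static long askingSize(VolumeState state, long volumeSize,
                           long volumePoolId, long targetPoolId) {
        if (state != VolumeState.READY || volumePoolId != targetPoolId) {
            return volumeSize;
        }
        return 0L; // Ready and already on the target pool: space already counted
    }

    // Mirrors the log line quoted in the issue: the pool passes if allocated +
    // asking stays under maxSize scaled by the disable threshold (e.g. 0.85).
    static boolean hasEnoughSpace(long totalAllocated, long asking,
                                  long maxSize, double disableThreshold) {
        return totalAllocated + asking <= (long) (maxSize * disableThreshold);
    }

    public static void main(String[] args) {
        // A Ready ~24 GB data disk moving from pool 1 to pool 2: full size is asked.
        long asking = askingSize(VolumeState.READY, 24096276480L, 1L, 2L);
        System.out.println(asking);
        System.out.println(hasEnoughSpace(24096276480L, asking,
                3298534883328L, 0.85));
    }
}
```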


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865423#comment-15865423
 ] 

ASF subversion and git services commented on CLOUDSTACK-8896:
-

Commit bec9115a617ecac27e5b5785c8e838a535764f7d in cloudstack's branch 
refs/heads/4.9 from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bec9115 ]

CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

This issue occurs when a volume in Ready state is moved across storage
pools.

While finding if the storage pool has enough space, it has a check to
consider the size of non Ready volumes only. This is true if the volume
to be attached to a vm is in the same storage pool. But, if the volume
is in another storage pool and has to be moved to a vm's storage pool,
the size of the volume should be considered in doing the space check.

The fix computes the asking size when the volume is not in Ready state or when the
volume is on a different storage pool.


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865419#comment-15865419
 ] 

ASF subversion and git services commented on CLOUDSTACK-8896:
-

Commit abd7860e68f3465f4c79fed657f27ef1737b92f1 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=abd7860 ]

Merge pull request #873 from karuturi/CLOUDSTACK-8896

CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%. This 
issue occurs when a volume in Ready state is moved across storage
pools.

While finding if the storage pool has enough space, it has a check to
consider the size of non Ready volumes only. This is true if the volume
to be attached to a vm is in the same storage pool. But, if the volume
is in another storage pool and has to be moved to a vm's storage pool,
the size of the volume should be considered in doing the space check.

The fix computes the asking size when the volume is not in Ready state or when the
volume is on a different storage pool.

Testing:
I couldn't write unit tests for it; this class is not in a unit-testable state.

Manually tested in the below environment:
1. XenServer 6.5 setup with 2 clusters and one host in each of them.
2. added storage tags for the primary storage.
3. created two service offerings with the storage tags.
4. deployed two vms using newly created offerings in step 3.
5. at this stage, there are two vms one on each host with root disks on the 
corresponding primary.
6. create a data disk and attach it to vm1
7. detach the data disk. now the data disk is in the primary storage of the 
cluster of vm1 (let us say primary1)
8. attach this data disk to vm2(running on a host in different cluster)
9. the volume should be moved to the primary storage of another cluster and 
op_host_capacity should be accordingly updated.

* pr/873:
  CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%

Signed-off-by: Rajani Karuturi 


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8896) Allocated percentage of storage can go beyond 100%

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865422#comment-15865422
 ] 

ASF subversion and git services commented on CLOUDSTACK-8896:
-

Commit 2aeca0d34fc1e7352a9a287b965202cdd1a7f6a5 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2aeca0d ]

Merge release branch 4.9 to master

* 4.9:
  CLOUDSTACK-8896: allocated percentage of storage pool going beyond 100%


> Allocated percentage of storage can go beyond 100%
> --
>
> Key: CLOUDSTACK-8896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.5.2, 4.6.0
>Reporter: Rajani Karuturi
>Assignee: Rajani Karuturi
>
> This issue occurs when a volume in Ready state is moved across storage pools.
> Let us say there is a data volume, volume0 in Ready state in a cluster scope 
> primary storage primary0.
> Now, when an operation is attempted to attach this volume to a vm in another 
> cluster, the volume is moved to the new cluster and the asking size is zero 
> at this time.
> you can observe logs like below with asking size 0 in the management server 
> logs.
> 2015-09-22 08:49:02,754 DEBUG [c.c.s.StorageManagerImpl] 
> (Work-Job-Executor-6:ctx-27e0990a job-37/job-38 ctx-985e5ad0) 
> (logid:a0a97129) Checking pool: 1 for volume allocation 
> [Vol[8|vm=null|DATADISK]], maxSize : 3298534883328, totalAllocatedSize : 
> 24096276480, askingSize : 0, allocated disable threshold: 0.85



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Commented] (CLOUDSTACK-9623) Deploying virtual machine fails due to "Couldn't find vlanId" in Basic Zone

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866502#comment-15866502
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9623:


Github user nitin-maharana commented on the issue:

https://github.com/apache/cloudstack/pull/1792
  
ping @sateesh-chodapuneedi @karuturi 


> Deploying virtual machine fails due to "Couldn't find vlanId" in Basic Zone
> ---
>
> Key: CLOUDSTACK-9623
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9623
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Nitin Kumar Maharana
>
> In a Basic zone, deploying a VM fails with an unexpected exception.
> This is reproducible only when Java asserts are enabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866903#comment-15866903
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
Trillian test result (tid-821)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 38820 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1829-t821-xenserver-65sp1.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 527.66 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1345.35 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | `Failure` | 353.82 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 555.85 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 736.79 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 320.48 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 151.41 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 506.02 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 313.39 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 661.28 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 867.14 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.53 | test_volumes.py
test_08_resize_volume | Success | 100.66 | test_volumes.py
test_07_resize_fail | Success | 105.69 | test_volumes.py
test_06_download_detached_volume | Success | 20.24 | test_volumes.py
test_05_detach_volume | Success | 100.21 | test_volumes.py
test_04_delete_attached_volume | Success | 10.14 | test_volumes.py
test_03_download_attached_volume | Success | 15.19 | test_volumes.py
test_02_attach_volume | Success | 10.64 | test_volumes.py
test_01_create_volume | Success | 392.06 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 175.94 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 130.74 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 191.61 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.57 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.15 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 70.78 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.07 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.10 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.12 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.16 | test_vm_life_cycle.py
test_01_stop_vm | Success | 25.18 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 80.49 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.10 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.13 | test_templates.py
test_01_create_template | Success | 45.33 | test_templates.py
test_10_destroy_cpvm | Success | 191.53 | test_ssvm.py
test_09_destroy_ssvm | Success | 198.64 | test_ssvm.py
test_08_reboot_cpvm | Success | 111.43 | test_ssvm.py
test_07_reboot_ssvm | Success | 143.68 | test_ssvm.py
test_06_stop_cpvm | Success | 161.51 | test_ssvm.py
test_05_stop_ssvm | Success | 138.77 | test_ssvm.py
test_04_cpvm_internals | Success | 1.15 | test_ssvm.py
test_03_ssvm_internals | Success | 3.47 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 20.94 | test_snapshots.py
test_04_change_offering_small | Success | 128.93 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.06 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.09 | test_secondary_storage.py

[jira] [Created] (CLOUDSTACK-9782) Host HA

2017-02-14 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-9782:
---

 Summary: Host HA
 Key: CLOUDSTACK-9782
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9782
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: Future, 4.11.0.0


CloudStack lacks a way to reliably fence a host. The idea of the host-ha 
feature is to provide a general-purpose HA framework, with hypervisor-specific 
implementations that can use additional mechanisms such as OOBM (IPMI-based 
power management) to reliably investigate, recover, and fence a host. This 
feature handles scenarios associated with host crashes, reliable fencing of 
hosts, and HA of VMs.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Host+HA



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8856) Primary Storage Used(type tag with value 2) related tag is not showing in listCapacity api response

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867284#comment-15867284
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8856:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/865
  
### ACS CI BVT Run
 **Summary:**
 Build Number 340
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_non_contigiousvlan.py

 * test_extendPhysicalNetworkVlan Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suites:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_routers_network_ops.py
test_disk_offerings.py


> Primary Storage Used(type tag with value 2) related tag is not showing in 
> listCapacity api response
> ---
>
> Key: CLOUDSTACK-8856
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8856
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>
> Actual behavior:
> Primary Storage Used (type tag with value 2) related tag is shown in the 
> listCapacity api response only when the 'sortby=Usage' parameter is removed 
> from the listCapacity api call.
> Expected behavior:
> Primary Storage Used (type tag with value 2) related tag should be shown in the 
> listCapacity api response when the 'sortby=Usage' parameter is included in the 
> listCapacity api call.
> Steps to reproduce:
> 1. Login to cloudstack as admin and launch one vm successfully.
> 2. Make sure that the firebug tool is enabled and navigate to Dashboard.
> 3. Right click on the 'listCapacity' api and select 'Copy Location'.
> 4. Remove the 'response=json' parameter from the copied url and paste the 
> modified url in a new tab.
> 5. Check for 'type tag with value 2' in the 'listCapacity' api xml response.
> listCapacity API Command: 
> http://10.81.29.87/client/api?command=listCapacity=Qp0XOEUVLrZHBZrQp7ame96IzXE%3D=true=usage&_=1417697306264
> Response:
> count: 9
> 
> type | zoneid | zonename | capacityused | capacitytotal | percentused
> --- | --- | --- | --- | --- | ---
> 5 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 3 | 5 | 60
> 4 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 3 | 15 | 20
> 7 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 1 | 5 | 20
> 6 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 227751755776 | 4395909513216 | 5.18
> 0 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 2952790016 | 124470096896 | 2.37
> 1 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 2000 | 100800 | 1.98
> 3 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 29339156480 | 8791819026432 | 0.33
> 9 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 8388608 | 981911732224 | 0
> 19 | 25cb4986-c286-4937-a0ec-b290307eaa4f | XenRT-Zone-0 | 0 | 0 | 0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9705) Unauthenticated API allows Admin password reset

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867253#comment-15867253
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9705:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1865
  
 ### ACS CI BVT Run
 **Summary:**
 Build Number 321
 Hypervisor xenserver
 NetworkType Advanced
 Passed=104
 Failed=0
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**

**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suites:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_routers_network_ops.py
test_disk_offerings.py


> Unauthenticated API allows Admin password reset
> ---
>
> Key: CLOUDSTACK-9705
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9705
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> The "unauthenticated API" allows a caller to reset CloudStack administrator 
> passwords. This presents a security risk becaues it allows for privilege 
> escallation attacks. First, if the unauthenticated API is listening on the 
> network (instead of locally) then any user on the network can reset admin 
> passwords. If, the API is only listening locally, then any user with access 
> to the local box can resset admin passwords. This would allow them to access 
> other hosts within the cloudstack deployment.
> While it may be important to provide a recovery mechanism for admin passwords 
> that have been lost or hyjacked, such a solution needs to be secure. We 
> should either remove this feature from the Unauthenticated API, or provide a 
> solution that is less open to abuse.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9707) deployVirtualMachine API should fail if hostid is specified and host doesn't have enough resources

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867261#comment-15867261
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9707:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1868
  
### ACS CI BVT Run
 **Summary:**
 Build Number 323
 Hypervisor xenserver
 NetworkType Advanced
 Passed=104
 Failed=0
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**

**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_routers_network_ops.py
test_disk_offerings.py


> deployVirtualMachine API should fail if hostid is specified and host doesn't 
> have enough resources
> --
>
> Key: CLOUDSTACK-9707
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9707
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> When the hostid parameter is used and the given host is running out of 
> capacity, the VM gets deployed on another host. If a host id is specified, 
> the deployment should happen on the given host and fail if that host is out 
> of capacity. Currently, if we fail once, we retry deployment on the entire 
> zone without the given host id. That retry, which tries other hosts, should 
> only be attempted if no host id was given.
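The expected semantics can be sketched in Python; the structures below (capacity sets, host lists) are toy stand-ins for the real deployment planner:

```python
def deploy_vm(requested_host, hosts_with_capacity, all_hosts):
    """Honor an explicitly requested host strictly: fail instead of
    silently retrying elsewhere. Only scan the zone when no host id
    was given."""
    if requested_host is not None:
        if requested_host in hosts_with_capacity:
            return requested_host
        raise RuntimeError("host %s lacks capacity; not retrying elsewhere"
                           % requested_host)
    for host in all_hosts:  # no pinned host: any host with capacity will do
        if host in hosts_with_capacity:
            return host
    raise RuntimeError("no host in the zone has capacity")
```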



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9682) Block VM migration to a storage which is in maintenance mode

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867274#comment-15867274
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9682:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1838
  
 ### ACS CI BVT Run
 **Summary:**
 Build Number 336
 Hypervisor xenserver
 NetworkType Advanced
 Passed=104
 Failed=0
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**

**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_routers_network_ops.py
test_disk_offerings.py


> Block VM migration to a storage which is in maintenance mode
> -
>
> Key: CLOUDSTACK-9682
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9682
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Description
> Put a VMFS storage pool (cluster-wide/zone-wide) in maintenance mode and try 
> to migrate a VM through an API call to that storage pool
> Steps
> 1. Put one of the storage pools in a cluster in maintenance mode
> 2. Via an API call, migrate a VM from one cluster to another, to the above 
> storage pool
> 3. Even though the storage pool is in maintenance mode, the migration task is 
> initiated and completed without any error
> Expectation
> CloudStack should block this kind of migration
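The missing validation amounts to a guard like the sketch below; the state names are assumptions based on CloudStack's storage pool states:

```python
# States in which a pool must not receive migrated volumes (assumed names).
BLOCKED_POOL_STATES = {"Maintenance", "PrepareForMaintenance", "ErrorInMaintenance"}

def validate_migration_target(pool):
    """Raise before any migration work starts if the target pool is in
    (or entering) maintenance mode."""
    if pool["state"] in BLOCKED_POOL_STATES:
        raise ValueError("storage pool %s is in state %s; migration blocked"
                         % (pool["name"], pool["state"]))
    return pool["name"]
```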



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867281#comment-15867281
 ] 

ASF subversion and git services commented on CLOUDSTACK-8886:
-

Commit 9d8eebf68d0750cef7836bfdd5413af9863993e3 in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9d8eebf ]

Merge pull request #1939 from Accelerite/CLOUDSTACK-8886

CLOUDSTACK-8886: Limitations in listUsageRecords output, listUsageRecords does 
not return domain. As @kansal is inactive, created a new branch and raised the 
PR. This is a continuation of PR #858.
This closes #858

Problem: Only domainid is returned by the usageReports API call. The CloudStack 
documentation mentions "domain" as being in the usage response. The API should 
really return the domain as well, since account information has both account 
and accountid.

Fix: A missing setDomainName call at the time of creating the response.

* pr/1939:
  CLOUDSTACK-8886: Limitations in listUsageRecords output, listUsageRecords 
does not return domain - Fixed and tests added

Signed-off-by: Rajani Karuturi 
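The omission behind the fix can be illustrated in Python; every name below is a hypothetical stand-in for the actual Java response builder:

```python
def create_usage_response(record, account_names, domain_names):
    """Build a usage record response. The reported bug is equivalent to
    leaving out the "domain" line below, so callers only ever saw
    domainid."""
    return {
        "account": account_names[record["account_id"]],
        "accountid": record["account_id"],
        "domainid": record["domain_id"],
        # the fix: also resolve and set the human-readable domain name
        "domain": domain_names[record["domain_id"]],
    }
```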


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the usageReports API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain as well, since account 
> information has both account and accountid.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867276#comment-15867276
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1939
  
merging


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the usageReports API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain as well, since account 
> information has both account and accountid.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867278#comment-15867278
 ] 

ASF subversion and git services commented on CLOUDSTACK-8886:
-

Commit f17d27dd93e7c1b0ba60afdf78276a8b07c4dff0 in cloudstack's branch 
refs/heads/master from [~kansal]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=f17d27d ]

CLOUDSTACK-8886: Limitations in listUsageRecords output, listUsageRecords does 
not return domain - Fixed and tests added


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the usageReports API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain as well, since account 
> information has both account and accountid.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9782) Host HA

2017-02-14 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867363#comment-15867363
 ] 

Wei Zhou commented on CLOUDSTACK-9782:
--

very good

> Host HA
> ---
>
> Key: CLOUDSTACK-9782
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9782
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.11.0.0
>
>
> CloudStack lacks a way to reliably fence a host. The idea of the host-ha 
> feature is to provide a general-purpose HA framework and hypervisor-specific 
> implementations that can use additional mechanisms such as OOBM (IPMI-based 
> power management) to reliably investigate, recover, and fence a host. This 
> feature handles scenarios involving server crashes, reliable fencing of 
> hosts, and HA of VMs.
> FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Host+HA



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8663) Snapshot Improvements

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867380#comment-15867380
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8663:


GitHub user anshul1886 opened a pull request:

https://github.com/apache/cloudstack/pull/1941

CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume
snapshots to exist together

Reverting VM to disk only snapshot in Xenserver corrupts VM

Stale NFS secondary storage on XS leads to volume creation failure from 
snapshot

Fixed various concerns raised in #672 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anshul1886/cloudstack-1 CLOUDSTACK-8663

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1941.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1941


commit ca84fd4ffd5d80aef9c7624365d29d7b2aeb3225
Author: Anshul Gangwar 
Date:   2015-07-24T09:15:20Z

CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume
snapshots to exist together

Reverting VM to disk only snapshot in Xenserver corrupts VM

Stale NFS secondary storage on XS leads to volume creation failure from 
snapshot




> Snapshot Improvements
> -
>
> Key: CLOUDSTACK-8663
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8663
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
> Fix For: Future
>
>
> Split volume snapshot process
> Allow VM snapshot and volume snapshots to exist together



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8862) Issuing multiple attach-volume commands simultaneously can be problematic

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867265#comment-15867265
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8862:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1900
  
### ACS CI BVT Run
 **Summary:**
 Build Number 324
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_03_RVR_Network_check_router_state Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Issuing multiple attach-volume commands simultaneously can be problematic
> -
>
> Key: CLOUDSTACK-8862
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8862
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.5.0, 4.5.1, 4.5.2, 4.6.0
> Environment: N/A
>Reporter: Mike Tutkowski
> Fix For: Future
>
>
> If a user submits two volumeAttach commands around the same time, the first 
> one can succeed while the second one can fail and can lead CloudStack to ask 
> the underlying storage plug-in to remove the volume from a given ACL (but the 
> volume should be in the ACL because the first attachVolume command succeeded).
> A somewhat similar problem can happen if you submit the second attachVolume 
> command to another VM in the same cluster.
> Proposed solution:
> A data volume should make use of a new column in the volumes table: 
> attach_state (or some name like that).
> This column can have five possible values: null (for root disks), detached 
> (default state for data volumes), attaching, attached, and detaching.
> When an attachVolume command is submitted, the volume should immediately be 
> placed into the "attaching" state. If a transition to that state is not 
> possible, an exception is thrown (for example, if you're already in the 
> "attached" state, you can't transition to the "attaching" state).
> A similar kind of logic already exists for volume snapshots.
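The proposed attach_state column and its allowed transitions can be modeled as a small state machine. This is a sketch of the proposal, not existing CloudStack code:

```python
# Allowed transitions; a failed attach/detach falls back to the old state.
VALID_TRANSITIONS = {
    "detached": {"attaching"},
    "attaching": {"attached", "detached"},
    "attached": {"detaching"},
    "detaching": {"detached", "attached"},
}

class DataVolume:
    def __init__(self):
        # root disks would carry None instead; data volumes start detached
        self.attach_state = "detached"

    def transition(self, new_state):
        """Reject impossible transitions, e.g. a second attach while the
        volume is already attaching or attached."""
        if new_state not in VALID_TRANSITIONS[self.attach_state]:
            raise RuntimeError("cannot go from %s to %s"
                               % (self.attach_state, new_state))
        self.attach_state = new_state
```

With this in place, the second of two near-simultaneous attachVolume commands fails fast at the detached-to-attaching step instead of later asking the storage plug-in to undo the first command's ACL change.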



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9182) Some running VMs turned off on manual migration when auto migration failed while host preparing for maintenance

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867268#comment-15867268
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9182:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1252
  
### ACS CI BVT Run
 **Summary:**
 Build Number 327
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Some running VMs turned off on manual migration when auto migration failed 
> while host preparing for maintenance
> ---
>
> Key: CLOUDSTACK-9182
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9182
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VMware
>Affects Versions: 4.5.2
> Environment: vCenter 5.0
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
>
> When a host is put into maintenance, CloudStack schedules migration for all 
> the running VMs present on the host. This scheduling is managed by the High 
> Availability (HA) worker thread. Every time a migration fails, CloudStack 
> re-schedules the migration to be executed after 10 minutes.
> In this case, CloudStack fails to migrate some VMs automatically while host 
> is preparing for maintenance and admin tried to migrate them manually. All 
> these VMs are turned off after manual migration.
> Steps:
> - Put a host into maintenance
> - Scheduled migration failed for a VM and CloudStack re-scheduled it.
> - Before the next scheduled migration, manually migrate the VM to a different 
> host.
> When the next scheduled migration was started by the HA worker, it failed 
> because there was a mismatch between the source host saved in the HA work job 
> and the actual source host. If the migration fails due to this mismatch, the 
> VM is stopped on the host it resides on.
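The safer behavior, cancelling a stale HA work item instead of stopping the VM, can be sketched as follows (the structures are hypothetical):

```python
def run_scheduled_migration(work_item, current_host_of):
    """If the VM has already left the source host recorded in the work
    item (e.g. it was migrated manually), cancel the stale job rather
    than stopping the VM on its new host."""
    vm = work_item["vm"]
    if current_host_of[vm] != work_item["source_host"]:
        return "cancelled"
    return "migrated"
```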



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9165) unable to use reserved IP range in a network for external VMs

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867271#comment-15867271
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9165:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1246
  
 ### ACS CI BVT Run
 **Summary:**
 Build Number 328
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> unable to use reserved IP range in a network for external VMs
> -
>
> Key: CLOUDSTACK-9165
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9165
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: sudharma jain
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9783) Improve metrics view performance

2017-02-14 Thread Rohit Yadav (JIRA)
Rohit Yadav created CLOUDSTACK-9783:
---

 Summary: Improve metrics view performance
 Key: CLOUDSTACK-9783
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Rohit Yadav
Assignee: Rohit Yadav
 Fix For: 4.9.3.0, Future, 4.10.0.0


Metrics view is a pure frontend feature, where several API calls are made to 
generate the metrics view tabular data. In very large environments, rendering 
these tables can take a lot of time, especially when latency is high. The 
improvement task is to reimplement this feature by moving the logic to the 
backend, so that metrics calculations happen at the backend and the final 
result can be served by a single API request.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9784) GPU detail not displayed in GPU tab of management server UI.

2017-02-14 Thread Nitesh Sarda (JIRA)
Nitesh Sarda created CLOUDSTACK-9784:


 Summary: GPU detail not displayed in GPU tab of management server 
UI.
 Key: CLOUDSTACK-9784
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9784
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Nitesh Sarda


ISSUE
==

When the GPU tab of the host is selected on the management server UI, no GPU 
details are displayed.

RESOLUTION
==

In the JavaScript file "system.js", while fetching the GPU details, the sort 
functionality in the data provider returns an undefined value and hence throws 
an exception. The fix handles the undefined output gracefully to avoid the 
exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9706) Retry deleting snapshot if deleteSnapshot command failed

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867256#comment-15867256
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9706:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1867
  
### ACS CI BVT Run
 **Summary:**
 Build Number 322
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Retry deleting snapshot if deleteSnapshot command failed 
> -
>
> Key: CLOUDSTACK-9706
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9706
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Currently, when we delete a snapshot, we first mark it as being in the 
> destroyed state and then delete it on storage if it can be deleted. If the 
> storage-side deletion fails, we never retry it, which fills up storage.
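A retry loop of the kind the issue asks for might look like the sketch below; the retry policy and error type are assumptions:

```python
def delete_snapshot_with_retry(delete_on_storage, max_attempts=3):
    """Mark-then-delete: the snapshot is already in the destroyed state;
    keep retrying the storage-side delete instead of giving up after a
    single failure. Returns the attempt number that succeeded."""
    for attempt in range(1, max_attempts + 1):
        try:
            delete_on_storage()
            return attempt
        except OSError:
            if attempt == max_attempts:
                raise  # give up only after exhausting all attempts
```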



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867283#comment-15867283
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/858


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the usageReports API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain as well, since account 
> information has both account and accountid.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8886) Limitations in listUsageRecords output - listUsageRecords does not return "domain"

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867282#comment-15867282
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8886:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1939


> Limitations in listUsageRecords output - listUsageRecords does not return 
> "domain"
> --
>
> Key: CLOUDSTACK-8886
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8886
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>
> Only domainid is returned by the usageReports API call.
> The CloudStack documentation mentions "domain" as being in the usage 
> response. The API should really return the domain as well, since account 
> information has both account and accountid.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867291#comment-15867291
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
LGTM. @karuturi this can be merged.


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)