[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315825#comment-16315825
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


blueorangutan commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355901892
 
 
   @rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted 
as I make progress.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> In the case of the XenServer/XCP and KVM hypervisors, CloudStack volumes and 
> templates are a single virtual disk, since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). VMware volumes and templates, however, 
> are in OVA format: archives that can contain a complete VM, including multiple 
> VMDKs and other files such as ISOs. Currently, CloudStack only supports 
> template creation from OVA files containing a single disk. If a user creates a 
> template from an OVA file containing more than one disk and launches an 
> instance from that template, only the first disk is attached to the new 
> instance and the other disks are ignored.
> Similarly for uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM results in only one VMDK being attached to the VM.
> This behavior should be improved for VMware to support OVA files with multiple 
> disks for both uploaded volumes and templates, i.e. if a user creates a 
> template from an OVA file containing more than one disk and launches an 
> instance from that template, the first disk should be attached to the new 
> instance as the ROOT disk, and volumes should be created from the other VMDK 
> disks in the OVA file and attached to the instance.
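
For illustration, the sketch below (not CloudStack code; the class and method 
names are made up, and it assumes Apache commons-compress is on the classpath) 
shows the mapping described above: an OVA is a tar archive whose VMDK entries 
can be enumerated in order, with the first disk treated as the ROOT volume and 
the remaining disks becoming additional DATA volumes. A real implementation 
would parse the OVF descriptor inside the archive to determine disk order 
rather than rely on entry order.

{noformat}
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

// Hypothetical helper: list the VMDK entries of an OVA (a tar archive) in order.
public class OvaDiskLister {

    public static List<String> listVmdkNames(String ovaPath) throws IOException {
        List<String> disks = new ArrayList<>();
        try (TarArchiveInputStream tar =
                new TarArchiveInputStream(new FileInputStream(ovaPath))) {
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                if (!entry.isDirectory()
                        && entry.getName().toLowerCase().endsWith(".vmdk")) {
                    disks.add(entry.getName());
                }
            }
        }
        return disks;
    }

    public static void main(String[] args) throws IOException {
        List<String> disks = listVmdkNames(args[0]);
        for (int i = 0; i < disks.size(); i++) {
            // First disk becomes the ROOT volume; the rest become DATA volumes.
            String role = (i == 0) ? "ROOT" : "DATA-" + i;
            System.out.println(role + ": " + disks.get(i));
        }
    }
}
{noformat}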



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315824#comment-16315824
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


rhtyd commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355901848
 
 
   @blueorangutan package


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> In the case of the XenServer/XCP and KVM hypervisors, CloudStack volumes and 
> templates are a single virtual disk, since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). VMware volumes and templates, however, 
> are in OVA format: archives that can contain a complete VM, including multiple 
> VMDKs and other files such as ISOs. Currently, CloudStack only supports 
> template creation from OVA files containing a single disk. If a user creates a 
> template from an OVA file containing more than one disk and launches an 
> instance from that template, only the first disk is attached to the new 
> instance and the other disks are ignored.
> Similarly for uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM results in only one VMDK being attached to the VM.
> This behavior should be improved for VMware to support OVA files with multiple 
> disks for both uploaded volumes and templates, i.e. if a user creates a 
> template from an OVA file containing more than one disk and launches an 
> instance from that template, the first disk should be attached to the new 
> instance as the ROOT disk, and volumes should be created from the other VMDK 
> disks in the OVA file and attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315818#comment-16315818
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


rhtyd commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the private 
the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355901271
 
 
   @jayapalu can you address the outstanding questions and look at the 
failures?
   @ustcweizhou since you had fixed the issue in a different way, can you help 
review this PR? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks, t1 and t2, then delete the t1 network. Both 
> VRs get into the MASTER state.
> r-269-QA - was the BACKUP VR. On deleting the t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 
> interface and the 10.2.1.1 IP on the eth4 interface.
> After some time it configured the 10.2.1.1 IP on eth4 again and it became 
> MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:02:ac brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.172/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 

[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315810#comment-16315810
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

borisstoyanov commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary 
Storage for KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355899839
 
 
   SSVM tests are not related to this, let me have a quick look at the template 
ones.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315793#comment-16315793
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

rhtyd commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug level 
in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355898749
 
 
   Tests LGTM; all the failures are environment related, not caused by this PR.
   Per Daan's comment, I'm counting his review as an LGTM, since the squash 
merge will remove the merge commits. I'll merge this based on two LGTMs (code 
review) and the test results.
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so much that it 
> could overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}
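
As a minimal illustration of why the category level matters (a sketch only, 
using the log4j 1.x API that the system VM agents ship with; the category names 
follow the quoted log lines and the exact names in log4j-cloud.xml may differ): 
once the parent category is raised from DEBUG to INFO, isDebugEnabled() returns 
false and the per-request console-proxy messages above are no longer written to 
/var/log.

{noformat}
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class LogLevelDemo {
    public static void main(String[] args) {
        BasicConfigurator.configure(); // simple console appender for the demo

        Logger proxyLog =
            Logger.getLogger("cloud.consoleproxy.ConsoleProxyHttpHandlerHelper");

        // With the parent category at DEBUG, every token decode is logged.
        Logger.getLogger("cloud").setLevel(Level.DEBUG);
        if (proxyLog.isDebugEnabled()) {
            proxyLog.debug("decode token. host: 10.252.2.10");   // emitted
        }

        // Raising the category to INFO drops the per-request debug lines,
        // which is what keeps /var/log from filling up on the CPVM/SSVM.
        Logger.getLogger("cloud").setLevel(Level.INFO);
        if (proxyLog.isDebugEnabled()) {
            proxyLog.debug("decode token. port: 5903");          // skipped
        }
    }
}
{noformat}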



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315792#comment-16315792
 ] 

ASF subversion and git services commented on CLOUDSTACK-10215:
--

Commit 981286f93cd79d0583637ee806b0faf674bf39fc in cloudstack's branch 
refs/heads/master from Bitworks Software, Ltd
[ https://gitbox.apache.org/repos/asf?p=cloudstack.git;h=981286f ]

CLOUDSTACK-10215: Excessive log4j debug level in CPVM could lead to FS overflow 
(#2391)

Fixed excessive log levels for systemvm agents.


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so much that it 
> could overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315787#comment-16315787
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

rhtyd commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug level 
in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355898749
 
 
   Tests LGTM; all the failures are environment related, not caused by this PR.
   Per Daan's comment, I'm counting his review as an LGTM, since the squash 
merge will remove the merge commits. I'll merge this based on two LGTMs (code 
review) and the test results.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so much that it 
> could overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315791#comment-16315791
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

rhtyd closed pull request #2391: CLOUDSTACK-10215: Excessive log4j debug level 
in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/systemvm/agent/conf/log4j-cloud.xml 
b/systemvm/agent/conf/log4j-cloud.xml
index f4ad65ed66e..749d2fe8964 100644
--- a/systemvm/agent/conf/log4j-cloud.xml
+++ b/systemvm/agent/conf/log4j-cloud.xml
@@ -87,11 +87,11 @@ under the License.
 
 
 
-  
+  
 
 
 
-  
+  
 
 
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so much that it 
> could overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10178) Hotfixes to make 4.10 work

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315784#comment-16315784
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10178:
-

bwsw closed pull request #2320: CLOUDSTACK-10178: Hotfixes to make 4.10 work
URL: https://github.com/apache/cloudstack/pull/2320
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/core/src/com/cloud/storage/template/TemplateLocation.java 
b/core/src/com/cloud/storage/template/TemplateLocation.java
index e52a635dc68..d10d05ae971 100644
--- a/core/src/com/cloud/storage/template/TemplateLocation.java
+++ b/core/src/com/cloud/storage/template/TemplateLocation.java
@@ -26,6 +26,7 @@
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.Properties;
+import java.util.Arrays;
 
 import org.apache.log4j.Logger;
 
@@ -81,12 +82,12 @@ public boolean purge() {
 boolean purged = true;
 String[] files = _storage.listFiles(_templatePath);
 for (String file : files) {
-boolean r = _storage.delete(file);
-if (!r) {
+boolean isRemoved = _storage.delete(file);
+if (!isRemoved) {
 purged = false;
 }
 if (s_logger.isDebugEnabled()) {
-s_logger.debug((r ? "R" : "Unable to r") + "emove " + file);
+s_logger.debug((isRemoved ? "Removed " : "Unable to remove") + 
file);
 }
 }
 
@@ -97,43 +98,60 @@ public boolean load() throws IOException {
 try (FileInputStream strm = new FileInputStream(_file);) {
 _props.load(strm);
 } catch (IOException e) {
-s_logger.warn("Unable to load the template properties", e);
+s_logger.warn("Unable to load the template properties for '" + 
_file + "': ", e);
 }
 
 for (ImageFormat format : ImageFormat.values()) {
-String ext = _props.getProperty(format.getFileExtension());
+String currentExtension = format.getFileExtension();
+String ext = _props.getProperty(currentExtension);
 if (ext != null) {
+if (s_logger.isDebugEnabled()) {
+s_logger.debug("File extension '" + currentExtension + "' 
was found in '" + _file + "'.");
+}
 FormatInfo info = new FormatInfo();
 info.format = format;
-info.filename = _props.getProperty(format.getFileExtension() + 
".filename");
+info.filename = _props.getProperty(currentExtension + 
".filename");
 if (info.filename == null) {
+if (s_logger.isDebugEnabled()) {
+s_logger.debug("Property '" + currentExtension + 
".filename' was not found in '" + _file + "'. Current format is ignored.");
+}
 continue;
 }
-info.size = 
NumbersUtil.parseLong(_props.getProperty(format.getFileExtension() + ".size"), 
-1);
+if (s_logger.isDebugEnabled()) {
+s_logger.debug("Property '" + currentExtension + 
".filename' was found in '" + _file + "'. Current format will be parsed.");
+}
+info.size = 
NumbersUtil.parseLong(_props.getProperty(currentExtension + ".size"), -1);
 _props.setProperty("physicalSize", Long.toString(info.size));
-info.virtualSize = 
NumbersUtil.parseLong(_props.getProperty(format.getFileExtension() + 
".virtualsize"), -1);
+info.virtualSize = 
NumbersUtil.parseLong(_props.getProperty(currentExtension + ".virtualsize"), 
-1);
 _formats.add(info);
 
 if (!checkFormatValidity(info)) {
 _isCorrupted = true;
 s_logger.warn("Cleaning up inconsistent information for " 
+ format);
 }
+} else {
+if (s_logger.isDebugEnabled()) {
+s_logger.debug("Format extension '" + currentExtension + 
"' wasn't found in '" + _file + "'.");
+}
 }
 }
 
 if (_props.getProperty("uniquename") == null || 
_props.getProperty("virtualsize") == null) {
+if (s_logger.isDebugEnabled()) {
+s_logger.debug("Property 'uniquename' or 'virtualsize' weren't 
found in '" + _file + "'. Loading failed.");
+}
 return false;
 }
-
 return (_formats.size() > 0);
 }
 
 public boolean save() {
 for (FormatInfo info : _formats) {
-_props.setProperty(info.format.getFileExtension(), "true");
-
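
To make the property lookups in the diff above easier to follow, here is a 
small self-contained sketch (the sample values are invented; the keys mirror 
those read in TemplateLocation.load()) of the template.properties layout the 
loader expects:

{noformat}
import java.io.StringReader;
import java.util.Properties;

public class TemplatePropertiesDemo {
    public static void main(String[] args) throws Exception {
        // Invented sample content; the keys match those used in TemplateLocation.load().
        String sample =
                "qcow2=true\n" +
                "qcow2.filename=template.qcow2\n" +
                "qcow2.size=1048576\n" +
                "qcow2.virtualsize=2147483648\n" +
                "uniquename=routing-template\n" +
                "virtualsize=2147483648\n";

        Properties props = new Properties();
        props.load(new StringReader(sample));

        String ext = "qcow2";
        if (props.getProperty(ext) != null) {
            // The same per-format lookups the loader performs.
            String filename = props.getProperty(ext + ".filename");
            long size = Long.parseLong(props.getProperty(ext + ".size", "-1"));
            long virtualSize = Long.parseLong(props.getProperty(ext + ".virtualsize", "-1"));
            System.out.println(filename + " size=" + size + " virtualSize=" + virtualSize);
        }

        // load() reports failure if either of these top-level keys is missing.
        boolean valid = props.getProperty("uniquename") != null
                && props.getProperty("virtualsize") != null;
        System.out.println("valid=" + valid);
    }
}
{noformat}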

[jira] [Commented] (CLOUDSTACK-10188) Resource Accounting for primary storage is Broken

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315783#comment-16315783
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10188:
-

rhtyd commented on issue #2362: CLOUDSTACK-10188 - Resource Accounting for 
primary storage is Broken when Domains are in use
URL: https://github.com/apache/cloudstack/pull/2362#issuecomment-355898495
 
 
   @bwsw sure, please fix the conflicts.
   Ideally, today we are supposed to freeze the branch and accept only 
critical/blocker fixes until rc1 is put to a vote. I'll start a discussion on 
dev@ to gather consensus. Meanwhile, if you have the time/bandwidth, do work on 
it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Resource Accounting for primary storage is Broken
> -
>
> Key: CLOUDSTACK-10188
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10188
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0, 4.10.0.0, 4.11.0.0
>Reporter: Ivan Kudryavtsev
>
> When storage is expunged, the domain's primary storage resource counter is 
> not updated. This leads to a situation where the domain's primary storage 
> statistics become inflated (the counter only increases and never decreases).
> Setting the global scheduled task resourcecount.check.interval > 0 provides a 
> workaround but does not truly fix the problem, because when accounts inside 
> domains allocate and deallocate primary_storage intensively, operations end up 
> being blocked.
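
As a conceptual sketch of the accounting problem (this is not the CloudStack 
implementation; the class and method names are illustrative): the per-domain 
counter has to be decremented on expunge as well as incremented on allocation, 
otherwise only a periodic full recount, i.e. the resourcecount.check.interval 
workaround, can bring it back in line.

{noformat}
import java.util.concurrent.atomic.AtomicLong;

// Conceptual model of a per-domain primary_storage counter; names are illustrative.
public class DomainStorageCounter {
    private final AtomicLong usedBytes = new AtomicLong();

    public void onVolumeAllocated(long sizeBytes) {
        usedBytes.addAndGet(sizeBytes);
    }

    // The reported bug: this decrement was not happening for the domain on
    // volume expunge, so the counter only ever grew until it hit the limit.
    public void onVolumeExpunged(long sizeBytes) {
        usedBytes.addAndGet(-sizeBytes);
    }

    // The workaround: a scheduled recount from the actual volume records,
    // analogous to setting resourcecount.check.interval > 0.
    public void recount(long actualBytesFromDb) {
        usedBytes.set(actualBytesFromDb);
    }

    public long get() {
        return usedBytes.get();
    }
}
{noformat}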



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10178) Hotfixes to make 4.10 work

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315779#comment-16315779
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10178:
-

rhtyd commented on issue #2320: CLOUDSTACK-10178: Hotfixes to make 4.10 work
URL: https://github.com/apache/cloudstack/pull/2320#issuecomment-355898241
 
 
   Sure thanks @bwsw 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Hotfixes to make 4.10 work
> --
>
> Key: CLOUDSTACK-10178
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10178
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Ivan Kudryavtsev
>
> # Fixes bugs with missing IPv6 network definitions in basic zones, which lead 
> to exceptions on the KVM agent if it uses SGs (management server and agent 
> affected)
> # Fixes the case where a template is created from a snapshot (CLOUDSTACK-10140, 
> merged https://github.com/apache/cloudstack/pull/2322)
> # Fixes the Ubuntu/Debian br_netfilter dependency (CLOUDSTACK-10138, merged 
> CLOUDSTACK-10138: Load br_netfilter in security_group management script)
> # Fixes a quota plugin bug (https://github.com/apache/cloudstack/pull/2326)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9921) NPE when garbage collector is running

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315776#comment-16315776
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9921:


rhtyd closed pull request #2139: CLOUDSTACK-9921: NPE when storage garbage 
collector is running.
URL: https://github.com/apache/cloudstack/pull/2139
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/engine/schema/src/com/cloud/storage/dao/SnapshotDao.java 
b/engine/schema/src/com/cloud/storage/dao/SnapshotDao.java
index fe635586370..1c11f9b6180 100755
--- a/engine/schema/src/com/cloud/storage/dao/SnapshotDao.java
+++ b/engine/schema/src/com/cloud/storage/dao/SnapshotDao.java
@@ -61,4 +61,6 @@
 
 void updateVolumeIds(long oldVolId, long newVolId);
 
+List listByStatusNotIn(long volumeId, Snapshot.State... 
status);
+
 }
diff --git a/engine/schema/src/com/cloud/storage/dao/SnapshotDaoImpl.java 
b/engine/schema/src/com/cloud/storage/dao/SnapshotDaoImpl.java
index a6941cf5165..560edc93816 100755
--- a/engine/schema/src/com/cloud/storage/dao/SnapshotDaoImpl.java
+++ b/engine/schema/src/com/cloud/storage/dao/SnapshotDaoImpl.java
@@ -69,6 +69,7 @@
 private SearchBuilder AccountIdSearch;
 private SearchBuilder InstanceIdSearch;
 private SearchBuilder StatusSearch;
+private SearchBuilder notInStatusSearch;
 private GenericSearchBuilder CountSnapshotsByAccount;
 @Inject
 ResourceTagDao _tagsDao;
@@ -187,6 +188,11 @@ protected void init() {
 StatusSearch.and("status", StatusSearch.entity().getState(), 
SearchCriteria.Op.IN);
 StatusSearch.done();
 
+notInStatusSearch  = createSearchBuilder();
+notInStatusSearch.and("volumeId", 
notInStatusSearch.entity().getVolumeId(), SearchCriteria.Op.EQ);
+notInStatusSearch.and("status", notInStatusSearch.entity().getState(), 
SearchCriteria.Op.NOTIN);
+notInStatusSearch.done();
+
 CountSnapshotsByAccount = createSearchBuilder(Long.class);
 CountSnapshotsByAccount.select(null, Func.COUNT, null);
 CountSnapshotsByAccount.and("account", 
CountSnapshotsByAccount.entity().getAccountId(), SearchCriteria.Op.EQ);
@@ -352,4 +358,12 @@ public void updateVolumeIds(long oldVolId, long newVolId) {
 UpdateBuilder ub = getUpdateBuilder(snapshot);
 update(ub, sc, null);
 }
+
+@Override
+public List listByStatusNotIn(long volumeId, Snapshot.State... 
status) {
+SearchCriteria sc = this.notInStatusSearch.create();
+sc.setParameters("volumeId", volumeId);
+sc.setParameters("status", (Object[]) status);
+return listBy(sc, null);
+}
 }
diff --git a/server/src/com/cloud/vm/UserVmManagerImpl.java 
b/server/src/com/cloud/vm/UserVmManagerImpl.java
index df50f5a9162..892965c2daa 100644
--- a/server/src/com/cloud/vm/UserVmManagerImpl.java
+++ b/server/src/com/cloud/vm/UserVmManagerImpl.java
@@ -231,10 +231,10 @@
 import com.cloud.storage.GuestOSVO;
 import com.cloud.storage.SnapshotVO;
 import com.cloud.storage.Storage;
+import com.cloud.storage.Snapshot;
 import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.Storage.StoragePoolType;
 import com.cloud.storage.Storage.TemplateType;
-import com.cloud.storage.Snapshot;
 import com.cloud.storage.StorageManager;
 import com.cloud.storage.StoragePool;
 import com.cloud.storage.StoragePoolStatus;
@@ -5449,11 +5449,20 @@ public UserVm moveVMToUser(final AssignVMCmd cmd) 
throws ResourceAllocationExcep
 }
 }
 
+final List volumes = _volsDao.findByInstance(cmd.getVmId());
+
+for (VolumeVO volume : volumes) {
+List snapshots = 
_snapshotDao.listByStatusNotIn(volume.getId(), 
Snapshot.State.Destroyed,Snapshot.State.Error);
+if (snapshots != null && snapshots.size() > 0) {
+throw new InvalidParameterValueException(
+"Snapshots exists for volume: "+ volume.getName()+ ", 
Detach volume or remove snapshots for volume before assigning VM to another 
user.");
+}
+}
+
 DataCenterVO zone = _dcDao.findById(vm.getDataCenterId());
 
 // Get serviceOffering and Volumes for Virtual Machine
 final ServiceOfferingVO offering = 
_serviceOfferingDao.findByIdIncludingRemoved(vm.getId(), 
vm.getServiceOfferingId());
-final List volumes = _volsDao.findByInstance(cmd.getVmId());
 
 //Remove vm from instance group
 removeInstanceFromInstanceGroup(cmd.getVmId());
@@ -5508,16 +5517,6 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 
_resourceLimitMgr.incrementResourceCount(newAccount.getAccountId(), 

[jira] [Commented] (CLOUDSTACK-9921) NPE when garbage collector is running

2018-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315777#comment-16315777
 ] 

ASF subversion and git services commented on CLOUDSTACK-9921:
-

Commit 8442a4d9dfcfdd7bb92e9039a7923f1c3915fddf in cloudstack's branch 
refs/heads/master from [~jay_accelerite]
[ https://gitbox.apache.org/repos/asf?p=cloudstack.git;h=8442a4d ]

CLOUDSTACK-9921: Fix NPE when storage garbage collector is running (#2139)

Steps to reproduce the issue:

Deploy a VM.
Take a snapshot of the root volume.
Delete the snapshot.
Before the garbage collector has run, shut down the VM and assign the VM to 
another user.
When the garbage collector executes, an NPE shows up in the logs.
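
Condensed from the PR diff quoted earlier in this thread, the essence of the 
fix is a guard in moveVMToUser that rejects the assignment while any 
non-destroyed snapshot still references one of the VM's volumes. The sketch 
below uses stand-in types (the real code uses VolumeVO, SnapshotVO, SnapshotDao 
and InvalidParameterValueException) so it can run on its own:

{noformat}
import java.util.Arrays;
import java.util.List;

public class AssignVmGuard {

    enum SnapshotState { BackedUp, Destroyed, Error }

    record Volume(String name, List<SnapshotState> snapshotStates) {}

    // Mirrors _snapshotDao.listByStatusNotIn(volumeId, Destroyed, Error):
    // returns the volume's snapshots whose state is NOT in the excluded set.
    static List<SnapshotState> listByStatusNotIn(Volume v, SnapshotState... excluded) {
        List<SnapshotState> ex = Arrays.asList(excluded);
        return v.snapshotStates().stream().filter(s -> !ex.contains(s)).toList();
    }

    static void checkVolumesBeforeAssign(List<Volume> volumes) {
        for (Volume volume : volumes) {
            List<SnapshotState> live =
                listByStatusNotIn(volume, SnapshotState.Destroyed, SnapshotState.Error);
            if (!live.isEmpty()) {
                throw new IllegalArgumentException("Snapshots exist for volume: "
                        + volume.name() + ", detach the volume or remove its snapshots"
                        + " before assigning the VM to another user.");
            }
        }
    }

    public static void main(String[] args) {
        Volume root = new Volume("ROOT-1", List.of(SnapshotState.BackedUp));
        try {
            checkVolumesBeforeAssign(List.of(root));
        } catch (IllegalArgumentException e) {
            // A live snapshot still blocks the move, avoiding the later NPE.
            System.out.println(e.getMessage());
        }
    }
}
{noformat}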

> NPE when garbage collector is running
> -
>
> Key: CLOUDSTACK-9921
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9921
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: jay
>
> Steps to reproduce the issue:
> 1. Deploy a VM
> 2. Take a snapshot of the root volume
> 3. Delete the snapshot
> 4. Before the garbage collector has run, shut down the VM and assign the VM to 
> another user.
> 5. When the garbage collector executes, an NPE shows up in the logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9921) NPE when garbage collector is running

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315775#comment-16315775
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9921:


rhtyd commented on issue #2139: CLOUDSTACK-9921: NPE when storage garbage 
collector is running.
URL: https://github.com/apache/cloudstack/pull/2139#issuecomment-355898136
 
 
   Test LGTM, failures not related to this PR.
   Merging this based on code reviews and test results.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NPE when garbage collector is running
> -
>
> Key: CLOUDSTACK-9921
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9921
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: jay
>
> Steps to reproduce the issue:
> 1. Deploy a VM
> 2. Take a snapshot of the root volume
> 3. Delete the snapshot
> 4. Before the garbage collector has run, shut down the VM and assign the VM to 
> another user.
> 5. When the garbage collector executes, an NPE shows up in the logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10188) Resource Accounting for primary storage is Broken

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315755#comment-16315755
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10188:
-

bwsw commented on issue #2362: CLOUDSTACK-10188 - Resource Accounting for 
primary storage is Broken when Domains are in use
URL: https://github.com/apache/cloudstack/pull/2362#issuecomment-355895431
 
 
   @rhtyd Do we still have a chance of including this in 4.11? I went over the 
Travis logs and only one test fails, and that looks like a timeout rather than 
a real failure. We have already resolved the conflicts twice against a moving 
master; I can't even imagine how many more times we will be asked to do so if 
we wait for 4.12.
   And I believe this is a critical bug for users who use domains, because it 
blocks creating new VMs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Resource Accounting for primary storage is Broken
> -
>
> Key: CLOUDSTACK-10188
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10188
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0, 4.10.0.0, 4.11.0.0
>Reporter: Ivan Kudryavtsev
>
> When storage is expunged, the domain's primary storage resource counter is 
> not updated. This leads to a situation where the domain's primary storage 
> statistics become inflated (the counter only increases and never decreases).
> Setting the global scheduled task resourcecount.check.interval > 0 provides a 
> workaround but does not truly fix the problem, because when accounts inside 
> domains allocate and deallocate primary_storage intensively, operations end up 
> being blocked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315714#comment-16315714
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

blueorangutan commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug 
level in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355889787
 
 
   Trillian test result (tid-2067)
   Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 48261 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2391-t2067-kvm-centos7.zip
   Intermitten failure detected: /marvin/tests/smoke/test_accounts.py
   Intermitten failure detected: 
/marvin/tests/smoke/test_deploy_virtio_scsi_vm.py
   Intermitten failure detected: /marvin/tests/smoke/test_deploy_vm_iso.py
   Intermitten failure detected: /marvin/tests/smoke/test_iso.py
   Intermitten failure detected: /marvin/tests/smoke/test_ssvm.py
   Intermitten failure detected: /marvin/tests/smoke/test_volumes.py
   Intermitten failure detected: /marvin/tests/smoke/test_vpc_redundant.py
   Smoke tests completed. 60 look OK, 7 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   ContextSuite context=TestTemplateHierarchy>:setup | `Error` | 1533.76 | 
test_accounts.py
   ContextSuite context=TestDeployVirtioSCSIVM>:setup | `Error` | 0.00 | 
test_deploy_virtio_scsi_vm.py
   test_deploy_vm_from_iso | `Error` | 1510.84 | test_deploy_vm_iso.py
   test_01_create_iso_with_checksum_sha1 | `Error` | 65.31 | test_iso.py
   test_02_create_iso_with_checksum_sha256 | `Error` | 65.28 | test_iso.py
   test_03_create_iso_with_checksum_md5 | `Error` | 65.30 | test_iso.py
   test_04_create_iso_with_no_checksum | `Error` | 65.28 | test_iso.py
   test_01_create_iso | `Failure` | 1510.97 | test_iso.py
   ContextSuite context=TestISO>:setup | `Error` | 3026.92 | test_iso.py
   test_05_stop_ssvm | `Failure` | 98.16 | test_ssvm.py
   test_07_resize_fail | `Failure` | 15.22 | test_volumes.py
   test_02_redundant_VPC_default_routes | `Failure` | 866.20 | 
test_vpc_redundant.py
   test_05_rvpc_multi_tiers | `Failure` | 340.75 | test_vpc_redundant.py
   test_05_rvpc_multi_tiers | `Error` | 386.28 | test_vpc_redundant.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so much that it 
> could overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315676#comment-16315676
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


blueorangutan commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355885085
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315675#comment-16315675
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


rhtyd commented on issue #2298: CLOUDSTACK-9620: Enhancements for managed 
storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355885076
 
 
   @blueorangutan test 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315674#comment-16315674
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


rhtyd commented on issue #2298: CLOUDSTACK-9620: Enhancements for managed 
storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355885069
 
 
   Travis keeps failing, please check @mike-tutkowski


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315653#comment-16315653
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


blueorangutan commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355882665
 
 
   Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1614


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10216) Support MHz resource limit and accounting for VM instances

2018-01-07 Thread Ivan Kudryavtsev (JIRA)
Ivan Kudryavtsev created CLOUDSTACK-10216:
-

 Summary: Support MHz resource limit and accounting for VM instances
 Key: CLOUDSTACK-10216
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10216
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Ivan Kudryavtsev


Right now ACS supports core and memory limits for accounts and domains, but an 
administrator might also like to use an additional synthetic parameter, "Total 
MHz", which enables extra accounting of VM instance resources per account or 
domain.

The resource for each VM is accounted as vm_cores_count x core_frequency; a minimal sketch of this accounting follows.
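
A minimal sketch of the proposed accounting, assuming a hypothetical VmProfile 
with a core count and per-core frequency (these names are illustrative and not 
part of the CloudStack API):

{code:java}
import java.util.List;

// Illustrative sketch only: each VM consumes vm_cores_count x core_frequency MHz
// against a synthetic "Total MHz" limit on its account or domain.
public class MhzAccounting {

    record VmProfile(int coreCount, int coreFrequencyMhz) {}

    static long totalMhzUsed(List<VmProfile> vms) {
        return vms.stream()
                  .mapToLong(vm -> (long) vm.coreCount() * vm.coreFrequencyMhz())
                  .sum();
    }

    static boolean withinLimit(List<VmProfile> vms, long accountLimitMhz) {
        return totalMhzUsed(vms) <= accountLimitMhz;
    }

    public static void main(String[] args) {
        List<VmProfile> vms = List.of(new VmProfile(4, 2000), new VmProfile(2, 2500));
        System.out.println("Total MHz used: " + totalMhzUsed(vms));        // 13000
        System.out.println("Within 16000 MHz limit: " + withinLimit(vms, 16000)); // true
    }
}
{code}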




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315641#comment-16315641
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


rhtyd commented on issue #2298: CLOUDSTACK-9620: Enhancements for managed 
storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355880852
 
 
   @blueorangutan package 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315642#comment-16315642
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


blueorangutan commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355880884
 
 
   @rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted 
as I make progress.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315634#comment-16315634
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski opened a new pull request #2298: CLOUDSTACK-9620: Enhancements 
for managed storage
URL: https://github.com/apache/cloudstack/pull/2298
 
 
   Allowed zone-wide primary storage based on a custom plug-in to be added via 
the GUI in a KVM-only environment (previously this only worked for XenServer 
and VMware)
   
   Added support for root disks on managed storage with KVM
   
   Added support for volume snapshots with managed storage on KVM
   
   Enabled creating a template directly from a volume (i.e. without having to 
go through a volume snapshot) on KVM with managed storage
   
   Only allowed the resizing of a volume for managed storage on KVM if the 
volume in question is either not attached to a VM or is attached to a VM in the 
Stopped state (see the sketch after this list)
   
   Included support for Reinstall VM on KVM with managed storage
   
   Enabled offline migration on KVM from non-managed storage to managed storage 
and vice versa
   
   Included support for online storage migration on KVM with managed storage 
(NFS and Ceph to managed storage)
   
   Added support to download (extract) a managed-storage volume to a QCOW2 file
   
   When uploading a file from outside of CloudStack to CloudStack, set the min 
and max IOPS, if applicable.
   
   Included support for the KVM auto-convergence feature
   
   The compression flag was actually added in version 1.0.3 (103) as 
opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
   
   On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
from the guest OS (as opposed to doing so from CloudStack), we need to pass to 
the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
   
   Added a new Global Setting: kvm.storage.live.migration.wait
   
   For XenServer, added a check to enforce that only volumes from zone-wide 
managed storage can be storage motioned from a host in one cluster to a host in 
another cluster (cannot do so at the time being with volumes from 
cluster-scoped managed storage)
   
   Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
with one or more snapshots.
   
   Enabled for managed storage with VMware: Template caching, create snapshot, 
delete snapshot, create volume from snapshot, and create template from snapshot
   
   Added an SIOC API plug-in to support VMware SIOC
   
   When starting a VM that uses managed storage in a cluster other than the one 
it last was running in, we need to remove the reference to the iSCSI volume 
from the original cluster.
   
   Added the ability to revert a volume to a snapshot
   
   Enabled cluster-scoped managed storage
   
   Added support for VMware dynamic discovery
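
   One item above, the rule that a managed-storage volume on KVM may only be 
resized when it is detached or its VM is Stopped, amounts to a simple state 
check. A minimal sketch of such a guard, with hypothetical names rather than 
CloudStack's actual classes:

{code:java}
// Illustrative sketch only: VmState and checkManagedKvmResizeAllowed are
// hypothetical names, not CloudStack code.
public class ManagedKvmResizeGuard {

    enum VmState { RUNNING, STOPPED }

    /**
     * @param attachedVmState state of the VM the volume is attached to, or null if detached
     */
    static void checkManagedKvmResizeAllowed(boolean managedStorage, boolean kvmHypervisor,
                                             VmState attachedVmState) {
        if (!managedStorage || !kvmHypervisor) {
            return; // the restriction only applies to managed storage on KVM
        }
        if (attachedVmState != null && attachedVmState != VmState.STOPPED) {
            throw new IllegalStateException(
                "Managed-storage volumes on KVM can only be resized while detached "
                    + "or while the attached VM is Stopped");
        }
    }

    public static void main(String[] args) {
        checkManagedKvmResizeAllowed(true, true, null);             // detached: allowed
        checkManagedKvmResizeAllowed(true, true, VmState.STOPPED);  // VM stopped: allowed
        try {
            checkManagedKvmResizeAllowed(true, true, VmState.RUNNING);
        } catch (IllegalStateException e) {
            System.out.println("Rejected: " + e.getMessage());      // VM running: rejected
        }
    }
}
{code}

   The guard simply returns for volumes outside the managed-storage-on-KVM case, 
so other resize paths are unaffected.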
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the 

[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315604#comment-16315604
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski opened a new pull request #2298: CLOUDSTACK-9620: Enhancements 
for managed storage
URL: https://github.com/apache/cloudstack/pull/2298
 
 
   Allowed zone-wide primary storage based on a custom plug-in to be added via 
the GUI in a KVM-only environment (previously this only worked for XenServer 
and VMware)
   
   Added support for root disks on managed storage with KVM
   
   Added support for volume snapshots with managed storage on KVM
   
   Enabled creating a template directly from a volume (i.e. without having to 
go through a volume snapshot) on KVM with managed storage
   
   Only allowed the resizing of a volume for managed storage on KVM if the 
volume in question is either not attached to a VM or is attached to a VM in the 
Stopped state
   
   Included support for Reinstall VM on KVM with managed storage
   
   Enabled offline migration on KVM from non-managed storage to managed storage 
and vice versa
   
   Included support for online storage migration on KVM with managed storage 
(NFS and Ceph to managed storage)
   
   Added support to download (extract) a managed-storage volume to a QCOW2 file
   
   When uploading a file from outside of CloudStack to CloudStack, set the min 
and max IOPS, if applicable.
   
   Included support for the KVM auto-convergence feature
   
   The compression flag was actually added in version 1.0.3 (103) as 
opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
   
   On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
from the guest OS (as opposed to doing so from CloudStack), we need to pass to 
the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
   
   Added a new Global Setting: kvm.storage.live.migration.wait
   
   For XenServer, added a check to enforce that only volumes from zone-wide 
managed storage can be storage motioned from a host in one cluster to a host in 
another cluster (cannot do so at the time being with volumes from 
cluster-scoped managed storage)
   
   Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
with one or more snapshots.
   
   Enabled for managed storage with VMware: Template caching, create snapshot, 
delete snapshot, create volume from snapshot, and create template from snapshot
   
   Added an SIOC API plug-in to support VMware SIOC
   
   When starting a VM that uses managed storage in a cluster other than the one 
it last was running in, we need to remove the reference to the iSCSI volume 
from the original cluster.
   
   Added the ability to revert a volume to a snapshot
   
   Enabled cluster-scoped managed storage
   
   Added support for VMware dynamic discovery
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the 

[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315572#comment-16315572
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


blueorangutan commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the 
private the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355870673
 
 
   Trillian test result (tid-2068)
   Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 29768 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2128-t2068-kvm-centos7.zip
   Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
   Intermittent failure detected: /marvin/tests/smoke/test_router_dhcphosts.py
   Intermittent failure detected: /marvin/tests/smoke/test_router_dns.py
   Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
   Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
   Smoke tests completed. 65 look OK, 2 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_01_vpc_privategw_acl | `Failure` | 41.03 | test_privategw_acl.py
   test_02_vpc_privategw_static_routes | `Failure` | 101.85 | 
test_privategw_acl.py
   test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 111.95 | 
test_privategw_acl.py
   test_04_rvpc_privategw_static_routes | `Failure` | 157.04 | 
test_privategw_acl.py
   test_02_attach_volume | `Failure` | 669.80 | test_volumes.py
   test_07_resize_fail | `Failure` | 15.23 | test_volumes.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: 

[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315561#comment-16315561
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


blueorangutan commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355870139
 
 
   Trillian test result (tid-2064)
   Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 52205 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2146-t2064-vmware-55u3.zip
   Intermittent failure detected: /marvin/tests/smoke/test_deploy_vgpu_enabled_vm.py
   Intermittent failure detected: /marvin/tests/smoke/test_deploy_vm_root_resize.py
   Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
   Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
   Smoke tests completed. 62 look OK, 5 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_3d_gpu_support | `Failure` | 431.76 | test_deploy_vgpu_enabled_vm.py
   test_00_deploy_vm_root_resize | `Error` | 0.17 | 
test_deploy_vm_root_resize.py
   test_08_migrate_vm | `Error` | 56.01 | test_vm_life_cycle.py
   test_01_create_volume | `Failure` | 202.18 | test_volumes.py
   test_02_redundant_VPC_default_routes | `Failure` | 1192.86 | 
test_vpc_redundant.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are one single virtual disk in case of 
> XenServer/XCP and KVM hypervisors since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. And currently, 
> Cloudstack only supports Template creation based on OVA files containing a 
> single disk. If a user creates a template from a OVA file containing more 
> than 1 disk and launches an instance using this template, only the first disk 
> is attached to the new instance and other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK to being attached to the 
> VM.
> This behavior needs to be improved in VMWare to support OVA files with 
> multiple disks for both uploaded volumes and templates. i.e. If a user 
> creates a template from a OVA file containing more than 1 disk and launches 
> an instance using this template, the first disk should be attached to the new 
> instance as the ROOT disk and volumes should be created based on other VMDK 
> disks in the OVA file and should be attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315542#comment-16315542
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355868824
 
 
   @rhtyd Once we get a successful run of (standard) tests back, I can then 
re-run all of the managed-storage regression tests.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315541#comment-16315541
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski opened a new pull request #2298: CLOUDSTACK-9620: Enhancements 
for managed storage
URL: https://github.com/apache/cloudstack/pull/2298
 
 
   Allowed zone-wide primary storage based on a custom plug-in to be added via 
the GUI in a KVM-only environment (previously this only worked for XenServer 
and VMware)
   
   Added support for root disks on managed storage with KVM
   
   Added support for volume snapshots with managed storage on KVM
   
   Enabled creating a template directly from a volume (i.e. without having to 
go through a volume snapshot) on KVM with managed storage
   
   Only allowed the resizing of a volume for managed storage on KVM if the 
volume in question is either not attached to a VM or is attached to a VM in the 
Stopped state
   
   Included support for Reinstall VM on KVM with managed storage
   
   Enabled offline migration on KVM from non-managed storage to managed storage 
and vice versa
   
   Included support for online storage migration on KVM with managed storage 
(NFS and Ceph to managed storage)
   
   Added support to download (extract) a managed-storage volume to a QCOW2 file
   
   When uploading a file from outside of CloudStack to CloudStack, set the min 
and max IOPS, if applicable.
   
   Included support for the KVM auto-convergence feature
   
   The compression flag was actually added in version 1.0.3 (103) as 
opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
   
   On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
from the guest OS (as opposed to doing so from CloudStack), we need to pass to 
the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
   
   Added a new Global Setting: kvm.storage.live.migration.wait
   
   For XenServer, added a check to enforce that only volumes from zone-wide 
managed storage can be storage motioned from a host in one cluster to a host in 
another cluster (cannot do so at the time being with volumes from 
cluster-scoped managed storage)
   
   Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
with one or more snapshots.
   
   Enabled for managed storage with VMware: Template caching, create snapshot, 
delete snapshot, create volume from snapshot, and create template from snapshot
   
   Added an SIOC API plug-in to support VMware SIOC
   
   When starting a VM that uses managed storage in a cluster other than the one 
it last was running in, we need to remove the reference to the iSCSI volume 
from the original cluster.
   
   Added the ability to revert a volume to a snapshot
   
   Enabled cluster-scoped managed storage
   
   Added support for VMware dynamic discovery
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the 

[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315534#comment-16315534
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355867559
 
 
   @rhtyd I needed to rename a folder from "sioc" to "vmware-sioc" and now the 
management server starts just fine. I've updated the PR with the new commit. 
Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315513#comment-16315513
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

blueorangutan commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary 
Storage for KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355861731
 
 
   Trillian test result (tid-2063)
   Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 44857 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2379-t2063-kvm-centos7.zip
   Intermittent failure detected: /marvin/tests/smoke/test_public_ip_range.py
   Intermittent failure detected: /marvin/tests/smoke/test_ssvm.py
   Intermittent failure detected: /marvin/tests/smoke/test_templates.py
   Intermittent failure detected: /marvin/tests/smoke/test_usage.py
   Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
   Intermittent failure detected: /marvin/tests/smoke/test_hostha_kvm.py
   Smoke tests completed. 63 look OK, 4 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_01_list_sec_storage_vm | `Failure` | 0.13 | test_ssvm.py
   test_02_list_cpvm_vm | `Failure` | 0.14 | test_ssvm.py
   test_05_stop_ssvm | `Failure` | 99.28 | test_ssvm.py
   test_06_stop_cpvm | `Failure` | 112.30 | test_ssvm.py
   test_01_register_template_direct_download_flag | `Error` | 0.04 | 
test_templates.py
   test_02_deploy_vm_from_direct_download_template | `Error` | 0.00 | 
test_templates.py
   test_03_deploy_vm_wrong_checksum | `Error` | 0.04 | test_templates.py
   test_04_extract_template | `Failure` | 132.38 | test_templates.py
   ContextSuite context=TestISOUsage>:setup | `Error` | 0.00 | test_usage.py
   test_06_download_detached_volume | `Failure` | 147.53 | test_volumes.py
   test_07_resize_fail | `Failure` | 15.42 | test_volumes.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315481#comment-16315481
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


rafaelweingartner commented on issue #2387: CLOUDSTACK-8855 Improve Error 
Message for Host Alert State and reconnect host API.
URL: https://github.com/apache/cloudstack/pull/2387#issuecomment-355854202
 
 
   @rhtyd thanks for spotting that!
   Changes applied and conflicts solved.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315466#comment-16315466
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


rafaelweingartner commented on a change in pull request #2146: CLOUDSTACK-4757: 
Support OVA files with multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#discussion_r160059612
 
 

 ##
 File path: api/src/org/apache/cloudstack/api/command/user/vm/DeployVMCmd.java
 ##
 @@ -443,6 +449,37 @@ public String getKeyboard() {
 return dhcpOptionsMap;
 }
 
+public Map getDataDiskTemplateToDiskOfferingMap() {
+if (diskOfferingId != null && dataDiskTemplateToDiskOfferingList != null) {
 
 Review comment:
   Actually, what I suggested is what is being done in the code now...
   When I commented, the code was different from its current state.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are one single virtual disk in case of 
> XenServer/XCP and KVM hypervisors since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. And currently, 
> Cloudstack only supports Template creation based on OVA files containing a 
> single disk. If a user creates a template from a OVA file containing more 
> than 1 disk and launches an instance using this template, only the first disk 
> is attached to the new instance and other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK to being attached to the 
> VM.
> This behavior needs to be improved in VMWare to support OVA files with 
> multiple disks for both uploaded volumes and templates. i.e. If a user 
> creates a template from a OVA file containing more than 1 disk and launches 
> an instance using this template, the first disk should be attached to the new 
> instance as the ROOT disk and volumes should be created based on other VMDK 
> disks in the OVA file and should be attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10178) Hotfixes to make 4.10 work

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315439#comment-16315439
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10178:
-

bwsw commented on issue #2320: CLOUDSTACK-10178: Hotfixes to make 4.10 work
URL: https://github.com/apache/cloudstack/pull/2320#issuecomment-355845923
 
 
   @rhtyd @wido @GabrielBrascher 
   Since more and more PRs would need to be included in 4.10 to fix bugs, I would 
like to close this; it makes no sense to keep it open given the upcoming 4.11 
release.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Hotfixes to make 4.10 work
> --
>
> Key: CLOUDSTACK-10178
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10178
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Ivan Kudryavtsev
>
> # Fixes absent IPv6 network definition bugs for basic zone which lead to 
> exceptions on KVM agent if it uses SGs (management and agent affected)
> # Fixes the case when template is created from a snapshot (CLOUDSTACK-10140, 
> merged https://github.com/apache/cloudstack/pull/2322)
> # Fixes ubuntu/debian br_netfilter dependency (CLOUDSTACK-10138, merged 
> CLOUDSTACK-10138: Load br_netfilter in security_group management script)
> # Fixes quota plugin bug (https://github.com/apache/cloudstack/pull/2326)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9921) NPE when garbage collector is running

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315430#comment-16315430
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9921:


blueorangutan commented on issue #2139: CLOUDSTACK-9921: NPE when storage 
garbage collector is running.
URL: https://github.com/apache/cloudstack/pull/2139#issuecomment-355845011
 
 
   Trillian test result (tid-2062)
   Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 31723 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2139-t2062-kvm-centos7.zip
   Intermittent failure detected: /marvin/tests/smoke/test_deploy_virtio_scsi_vm.py
   Intermittent failure detected: /marvin/tests/smoke/test_internal_lb.py
   Intermittent failure detected: /marvin/tests/smoke/test_outofbandmanagement.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
   Intermittent failure detected: /marvin/tests/smoke/test_host_maintenance.py
   Smoke tests completed. 65 look OK, 2 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_oobm_zchange_password | `Error` | 5.73 | test_outofbandmanagement.py
   test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
`Failure` | 311.50 | test_vpc_redundant.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NPE when garbage collector is running
> -
>
> Key: CLOUDSTACK-9921
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9921
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: jay
>
> Steps to reproduce issue
> 1. Deploy a VM
> 2. Take snapshot of the root volume
> 3. Delete the snapshot
> 4. Before the garbage collector has run, shut down the VM and assign the VM to 
> another user.
> 5. When the garbage collector executes, an NPE shows up in the logs (see the 
> sketch below).
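
A hedged sketch of the kind of defensive check that would let the garbage 
collector skip such records instead of failing; SnapshotRef and gcSnapshots are 
illustrative names, not CloudStack's actual storage garbage-collector code:

{code:java}
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: skip snapshot records whose backing volume or owner
// reference has gone away (e.g. after the VM was reassigned) instead of
// dereferencing them and aborting the whole GC run with a NullPointerException.
public class SnapshotGcSketch {

    record SnapshotRef(String id, String volumeId, String ownerAccountId) {}

    static void gcSnapshots(List<SnapshotRef> candidates) {
        for (SnapshotRef ref : candidates) {
            if (ref == null || ref.volumeId() == null || ref.ownerAccountId() == null) {
                System.out.println("Skipping incomplete snapshot record: " + ref);
                continue; // log and move on rather than throwing an NPE
            }
            System.out.println("Cleaning up snapshot " + ref.id());
        }
    }

    public static void main(String[] args) {
        gcSnapshots(Arrays.asList(
            new SnapshotRef("s-1", "v-1", "acct-1"),
            new SnapshotRef("s-2", null, "acct-2"))); // missing volume reference
    }
}
{code}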



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315373#comment-16315373
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

blueorangutan commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug 
level in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355835420
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is at DEBUG level and the CPVM logs so much that the 
> /var/log filesystem could overflow (a sketch of lowering the verbosity follows 
> the excerpt below).
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}
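
The excerpt above is per-request DEBUG output from the console proxy. As an 
illustration only (not the change proposed in PR #2391), the com.cloud category 
can be raised to INFO with the log4j 1.x API so these lines are no longer 
emitted:

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Illustration only: raise the com.cloud category from DEBUG to INFO at runtime
// so the console proxy stops writing a log line for every AjaxImageHandler request.
public class RaiseCloudLogLevel {
    public static void main(String[] args) {
        Logger.getLogger("com.cloud").setLevel(Level.INFO);
    }
}
{code}

Typically the level would be set in the log4j configuration file shipped with the 
system VM rather than in code; this snippet only demonstrates the effect.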



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315372#comment-16315372
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

rhtyd commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug level 
in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355835413
 
 
   @blueorangutan test


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is at DEBUG level and the CPVM logs so much that the 
> /var/log filesystem could overflow.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315371#comment-16315371
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


rhtyd commented on issue #2298: CLOUDSTACK-9620: Enhancements for managed 
storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355835398
 
 
   Thanks @mike-tutkowski 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315370#comment-16315370
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355835190
 
 
   That looks like a silly config-file mistake. I made a change to that file 
due to reviewer comments and didn’t re-run the management server. Once I’m back 
at my computer today, I can fix that and let you know when that’s done.
   
   On Jan 7, 2018, at 3:39 AM, Rohit Yadav wrote:
   
   
   @mike-tutkowski can you fix the runtime 
issue? The management server fails to start with the error below. /cc 
@DaanHoogland
   
   2018-01-07 18:27:32,548 ERROR [o.a.c.s.m.w.CloudStackContextLoaderListener] 
(main:null) (logid:) Failed to start CloudStack
   java.io.IOException: Resource 
[jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.11.0.0-SNAPSHOT.jar!/META-INF/cloudstack/sioc/module.properties]
 is expected to exist at 
[classpath:META-INF/cloudstack/vmware-sioc/module.properties] please ensure the 
name property is correct
   at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinition.checkNameMatchesSelf(DefaultModuleDefinition.java:108)
   
   
   
   Due to this issue and without a fix, we can not further test/review this.
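   For reference, CloudStack's Spring module loader derives the expected classpath 
location from the name property inside module.properties, so that property has to 
agree with the directory the file ships in. A minimal sketch of a consistent pair, 
with the parent value assumed purely for illustration:
   
   {noformat}
   # META-INF/cloudstack/sioc/module.properties
   # The 'name' value must match the directory segment ('sioc'); a mismatch such as
   # name=vmware-sioc is what makes checkNameMatchesSelf() reject the module.
   name=sioc
   parent=api
   {noformat}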
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to 

[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315367#comment-16315367
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

blueorangutan commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug 
level in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355834267
 
 
   Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1613


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so verbosely that it 
> can overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9606) While IP address is released, tag are not deleted

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315354#comment-16315354
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9606:


rhtyd commented on issue #1775: CLOUDSTACK-9606: While IP address is released, 
tag are not deleted.
URL: https://github.com/apache/cloudstack/pull/1775#issuecomment-355832401
 
 
   Agreed @DaanHoogland, I'll move this to the 4.12 milestone.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> While IP address is released, tag are not deleted
> -
>
> Key: CLOUDSTACK-9606
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9606
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> IP address release API call (disassociateIpAddress) does not have any 
> mechanism to remove the tags.
> Although the IP address is no longer allocated, the corresponding tag still exists.
> REPRO STEPS
> ==
> 1. Acquire an IP address by Domain-Admin account A. 
> 2. Add tag to the target IP address by Domain-Admin account A. 
> 3. Release the target IP address without deleting the tag. 
> ⇒We found out that the state of the IP address is "Free" at this point, 
> but the tag which was added by Domain-Admin account A still remains. 
> 4. Acquire the target IP address by Domain-Admin account B. 
> ⇒The tag still remains without change. 
> If account B tries to delete the tag, in our lab we can delete it as domain 
> admin, although the customer reported that they could not complete the deletion 
> because of an authorization error.
> EXPECTED BEHAVIOR
> ==
> When we release an IP address, the corresponding tags should be removed from 
> related tables
> ACTUAL BEHAVIOR
> ==
> When we release an IP address, the corresponding tags are not removed from 
> related tables
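Until disassociateIpAddress performs this cleanup itself, the leftover tag can be 
removed explicitly through the tags API. A minimal CloudMonkey sketch, assuming 
domain-admin credentials and a placeholder UUID for the released public IP (a 
workaround shown for illustration, not the eventual fix):

{noformat}
# Hypothetical cleanup of orphaned tags on a released public IP.
# <ip-address-uuid> is a placeholder; deleteTags with no 'tags' argument
# removes all tags attached to the listed resources.
cloudmonkey delete tags resourceids=<ip-address-uuid> resourcetype=PublicIpAddress
{noformat}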



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315355#comment-16315355
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

blueorangutan commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug 
level in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355832422
 
 
   @rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted 
as I make progress.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so verbosely that it 
> can overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315353#comment-16315353
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

rhtyd commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug level 
in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355832371
 
 
   @DaanHoogland sure, a squash merge should be fine to remove the 'merge' 
commits. I'll kick off a round of tests.
   @blueorangutan package


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so verbosely that it 
> can overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315324#comment-16315324
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

DaanHoogland commented on issue #2391: CLOUDSTACK-10215: Excessive log4j debug 
level in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391#issuecomment-355827117
 
 
   @rhtyd I think we can add this to 4.11, no?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so verbosely that it 
> can overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315326#comment-16315326
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski opened a new pull request #2298: CLOUDSTACK-9620: Enhancements 
for managed storage
URL: https://github.com/apache/cloudstack/pull/2298
 
 
   Allowed zone-wide primary storage based on a custom plug-in to be added via 
the GUI in a KVM-only environment (previously this only worked for XenServer 
and VMware)
   
   Added support for root disks on managed storage with KVM
   
   Added support for volume snapshots with managed storage on KVM
   
   Enabled creating a template directly from a volume (i.e. without having to 
go through a volume snapshot) on KVM with managed storage
   
   Only allowed the resizing of a volume for managed storage on KVM if the 
volume in question is either not attached to a VM or is attached to a VM in the 
Stopped state
   
   Included support for Reinstall VM on KVM with managed storage
   
   Enabled offline migration on KVM from non-managed storage to managed storage 
and vice versa
   
   Included support for online storage migration on KVM with managed storage 
(NFS and Ceph to managed storage)
   
   Added support to download (extract) a managed-storage volume to a QCOW2 file
   
   When uploading a file from outside of CloudStack to CloudStack, set the min 
and max IOPS, if applicable.
   
   Included support for the KVM auto-convergence feature
   
   The compression flag was actually added in version 1.0.3 (103) as 
opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
   
   On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
from the guest OS (as opposed to doing so from CloudStack), we need to pass to 
the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
   
   Added a new Global Setting: kvm.storage.live.migration.wait
   
   For XenServer, added a check to enforce that only volumes from zone-wide 
managed storage can be storage motioned from a host in one cluster to a host in 
another cluster (cannot do so at the time being with volumes from 
cluster-scoped managed storage)
   
   Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
with one or more snapshots.
   
   Enabled for managed storage with VMware: Template caching, create snapshot, 
delete snapshot, create volume from snapshot, and create template from snapshot
   
   Added an SIOC API plug-in to support VMware SIOC
   
   When starting a VM that uses managed storage in a cluster other than the one 
it last was running in, we need to remove the reference to the iSCSI volume 
from the original cluster.
   
   Added the ability to revert a volume to a snapshot
   
   Enabled cluster-scoped managed storage
   
   Added support for VMware dynamic discovery
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the 

[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315297#comment-16315297
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


DaanHoogland commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355825410
 
 
   closing to trigger jenkins again


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315300#comment-16315300
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


DaanHoogland commented on issue #2298: CLOUDSTACK-9620: Enhancements for 
managed storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-355825439
 
 
   jenkins do your thing, please.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315301#comment-16315301
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


mike-tutkowski opened a new pull request #2298: CLOUDSTACK-9620: Enhancements 
for managed storage
URL: https://github.com/apache/cloudstack/pull/2298
 
 
   Allowed zone-wide primary storage based on a custom plug-in to be added via 
the GUI in a KVM-only environment (previously this only worked for XenServer 
and VMware)
   
   Added support for root disks on managed storage with KVM
   
   Added support for volume snapshots with managed storage on KVM
   
   Enabled creating a template directly from a volume (i.e. without having to 
go through a volume snapshot) on KVM with managed storage
   
   Only allowed the resizing of a volume for managed storage on KVM if the 
volume in question is either not attached to a VM or is attached to a VM in the 
Stopped state
   
   Included support for Reinstall VM on KVM with managed storage
   
   Enabled offline migration on KVM from non-managed storage to managed storage 
and vice versa
   
   Included support for online storage migration on KVM with managed storage 
(NFS and Ceph to managed storage)
   
   Added support to download (extract) a managed-storage volume to a QCOW2 file
   
   When uploading a file from outside of CloudStack to CloudStack, set the min 
and max IOPS, if applicable.
   
   Included support for the KVM auto-convergence feature
   
   The compression flag was actually added in version 1.0.3 (103) as 
opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
   
   On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
from the guest OS (as opposed to doing so from CloudStack), we need to pass to 
the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
   
   Added a new Global Setting: kvm.storage.live.migration.wait
   
   For XenServer, added a check to enforce that only volumes from zone-wide 
managed storage can be storage motioned from a host in one cluster to a host in 
another cluster (cannot do so at the time being with volumes from 
cluster-scoped managed storage)
   
   Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
with one or more snapshots.
   
   Enabled for managed storage with VMware: Template caching, create snapshot, 
delete snapshot, create volume from snapshot, and create template from snapshot
   
   Added an SIOC API plug-in to support VMware SIOC
   
   When starting a VM that uses managed storage in a cluster other than the one 
it last was running in, we need to remove the reference to the iSCSI volume 
from the original cluster.
   
   Added the ability to revert a volume to a snapshot
   
   Enabled cluster-scoped managed storage
   
   Added support for VMware dynamic discovery
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the 

[jira] [Commented] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315288#comment-16315288
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10215:
-

bwsw opened a new pull request #2391: CLOUDSTACK-10215: Excessive log4j debug 
level in CPVM could lead to FS overflow
URL: https://github.com/apache/cloudstack/pull/2391
 
 
   Jira reference: https://issues.apache.org/jira/browse/CLOUDSTACK-10215


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Excessive log4j debug level in CPVM, SSVM could lead to FS overflow
> ---
>
> Key: CLOUDSTACK-10215
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: cloudstack-agent
>Affects Versions: 4.10.0.0
>Reporter: Ivan Kudryavtsev
>
> The com.cloud scope is set to DEBUG level and the CPVM logs so verbosely that it 
> can overflow the /var/log filesystem.
> {{2018-01-06 06:13:57,069 DEBUG 
> [cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
> AjaxImageHandler 
> /ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. host: 10.252.2.10
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. port: 5903
> 2018-01-06 06:13:57,070 DEBUG 
> [cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
> token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9606) While IP address is released, tag are not deleted

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315275#comment-16315275
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9606:


DaanHoogland commented on issue #1775: CLOUDSTACK-9606: While IP address is 
released, tag are not deleted.
URL: https://github.com/apache/cloudstack/pull/1775#issuecomment-355824250
 
 
   @rhtyd @priyankparihar @syed @rajesh-battala @sarathkouk @borisstoyanov 
   As this PR does not pass its own test in Travis and does not seem to be 
very critical, I suggest we move it to a future release.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> While IP address is released, tag are not deleted
> -
>
> Key: CLOUDSTACK-9606
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9606
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> IP address release API call (disassociateIpAddress) does not have any 
> mechanism to remove the tags.
> Although the IP address is no longer allocated, the corresponding tag still exists.
> REPRO STEPS
> ==
> 1. Acquire an IP address by Domain-Admin account A. 
> 2. Add tag to the target IP address by Domain-Admin account A. 
> 3. Release the target IP address without deleting the tag. 
> ⇒We found out that the state of the IP address is "Free" at this point, 
> but the tag which was added by Domain-Admin account A still remains. 
> 4. Acquire the target IP address by Domain-Admin account B. 
> ⇒The tag still remains without change. 
> If account B tries to delete the tag, in our lab we can delete it as domain 
> admin, although the customer reported that they could not complete the deletion 
> because of an authorization error.
> EXPECTED BEHAVIOR
> ==
> When we release an IP address, the corresponding tags should be removed from 
> related tables
> ACTUAL BEHAVIOR
> ==
> When we release an IP address, the corresponding tags are not removed from 
> related tables



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10215) Excessive log4j debug level in CPVM, SSVM could lead to FS overflow

2018-01-07 Thread Ivan Kudryavtsev (JIRA)
Ivan Kudryavtsev created CLOUDSTACK-10215:
-

 Summary: Excessive log4j debug level in CPVM, SSVM could lead to 
FS overflow
 Key: CLOUDSTACK-10215
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10215
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: cloudstack-agent
Affects Versions: 4.10.0.0
Reporter: Ivan Kudryavtsev


The com.cloud scope is set to DEBUG level and the CPVM logs so verbosely that it 
can overflow the /var/log filesystem.

{{2018-01-06 06:13:57,069 DEBUG 
[cloud.consoleproxy.ConsoleProxyAjaxImageHandler] (Thread-4159:null) 
AjaxImageHandler 
/ajaximg?token=RcHSrvzegyrjZAlc1Wjifcwv9P8WwK3eH63SuIS8WFFGssxymmjdYkZ4-S4ilY1UHxX612Lt_5Xi1Z5JaoCfDSf_UCi8lTIsPEBlDpUEWQg1IblYu0HxvoDugX9J4XgAdpj74qg_U4pOs74dzdZFB50PB_HxcMhzUqd5plH914PmRDw5k0ONaa183CsGa7DcGVvWaR_eYP_8_CArahGAjHt04Kx227tjyMx4Zaju7iNyxpBWxtBC5YJyj8rjv7IeA_0Pevz91pWn6OE1pkeLwGeFSV8pZw4BWg95SG97A-I=2020=1515219237015
2018-01-06 06:13:57,070 DEBUG 
[cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
token. host: 10.252.2.10
2018-01-06 06:13:57,070 DEBUG 
[cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
token. port: 5903
2018-01-06 06:13:57,070 DEBUG 
[cloud.consoleproxy.ConsoleProxyHttpHandlerHelper] (Thread-4159:null) decode 
token. tag: 375c62b5-74d9-4494-8b79-0d7c76cff10f}}
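A minimal sketch of the kind of configuration change this implies, assuming the 
CPVM/SSVM read a log4j 1.x XML configuration (the exact file path is an assumption, 
e.g. /usr/local/cloud/systemvm/conf/log4j-cloud.xml): raising the com.cloud 
category from DEBUG to INFO keeps the per-request console-proxy lines out of 
/var/log.

{noformat}
<!-- Illustrative excerpt only; file location and surrounding configuration are assumed. -->
<category name="com.cloud">
  <!-- was DEBUG, which records every AjaxImageHandler request and decoded token -->
  <priority value="INFO"/>
</category>
{noformat}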



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315236#comment-16315236
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


blueorangutan commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the 
private the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355819947
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting  into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 interface 
> and removed the 10.2.1.1 IP from that interface.
> After some time it configured the 10.2.1.1 IP on eth4 and became MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:02:ac brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.172/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 

[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315235#comment-16315235
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


rhtyd commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the private 
the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355819906
 
 
   @blueorangutan test 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting  into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 interface 
> and removed the 10.2.1.1 IP from that interface.
> After some time it configured the 10.2.1.1 IP on eth4 and became MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:02:ac brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.172/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc 

[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315234#comment-16315234
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


blueorangutan commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the 
private the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355819849
 
 
   Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1612


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting  into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 interface 
> and removed the 10.2.1.1 IP from that interface.
> After some time it configured the 10.2.1.1 IP on eth4 and became MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:02:ac brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.172/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2: 

[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315208#comment-16315208
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


rhtyd commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the private 
the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355818317
 
 
   @blueorangutan package


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting  into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 interface 
> and deleted the 10.2.1.1 IP on the eth4 interface.
> After some time it reconfigured the 10.2.1.1 IP on eth4 and it became MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:02:ac brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.172/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc 

[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315207#comment-16315207
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


rhtyd commented on a change in pull request #2128: CLOUDSTACK-9885: VPCVR: 
Updated to the private the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#discussion_r160047465
 
 

 ##
 File path: server/src/com/cloud/network/guru/PrivateNetworkGuru.java
 ##
 @@ -64,7 +64,7 @@
 @Inject
 EntityManager _entityMgr;
 
-private static final TrafficType[] TrafficTypes = {TrafficType.Guest};
+private static final TrafficType[] TrafficTypes = {TrafficType.PrivateGw};
 
 Review comment:
   Will this cause regression @jayapalu ? /cc @ustcweizhou 
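   For readers following the change: the `TrafficTypes` array above is what the guru consults when deciding whether it can design a given network. A minimal sketch of that pattern, with illustrative class and method names (this is not the exact PrivateNetworkGuru code; only the traffic type from the diff is taken as given):
   
   ```java
   import com.cloud.network.Networks.TrafficType;
   
   // Simplified illustration of how a NetworkGuru's supported traffic types
   // are typically checked; names here are assumptions for illustration only.
   public class IllustrativeGuru {
       // With PR #2128 the list becomes {TrafficType.PrivateGw}, so the guru
       // stops matching plain Guest networks and only designs private gateways.
       private static final TrafficType[] TrafficTypes = {TrafficType.PrivateGw};
   
       public boolean isMyTrafficType(final TrafficType type) {
           for (final TrafficType supported : TrafficTypes) {
               if (supported == type) {
                   return true;
               }
           }
           return false;
       }
   }
   ```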


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting  into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 interface 
> and deleted the 10.2.1.1 IP on the eth4 interface.
> After some time it reconfigured the 10.2.1.1 IP on eth4 and it became MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> 

[jira] [Commented] (CLOUDSTACK-9885) VPC RVR: On deleting first tier and configuring Private GW both VRs becoming MASTER

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315209#comment-16315209
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9885:


blueorangutan commented on issue #2128: CLOUDSTACK-9885: VPCVR: Updated to the 
private the traffic_type
URL: https://github.com/apache/cloudstack/pull/2128#issuecomment-355818323
 
 
   @rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted 
as I make progress.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VPC RVR: On deleting first tier and configuring Private GW  both VRs becoming 
> MASTER
> 
>
> Key: CLOUDSTACK-9885
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9885
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
>Priority: Blocker
> Fix For: 4.10.1.0
>
>
> - Configure two tier networks t1 and t2. Delete the t1 network. Both VRs are 
> getting  into MASTER state.
> r-269-QA - was BACKUP VR. On deleting t1 network it became MASTER.
> {noformat}
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:5d:a4:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.1.33/24 brd 10.1.1.255 scope global eth2
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> root@r-269-QA:~#
> root@r-269-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:01:dc brd ff:ff:ff:ff:ff:ff
> inet 169.254.1.220/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 10.147.46.255 scope global eth1
> 5: eth3:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:de:fc:00:00:29 brd ff:ff:ff:ff:ff:ff
> inet 10.147.52.200/24 brd 10.147.52.255 scope global eth3
> inet 10.147.52.100/24 brd 10.147.52.255 scope global secondary eth3
> 6: eth4:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 02:00:31:e1:00:03 brd ff:ff:ff:ff:ff:ff
> inet 10.1.2.78/24 brd 10.1.2.255 scope global eth4
> inet 10.1.2.1/24 brd 10.1.2.255 scope global secondary eth4
> root@r-269-QA:~# checkrouter.sh
> Status: MASTER
> root@r-269-QA:~#
> {noformat}
> root@r-268-QA - was the MASTER VR. On deleting t1 it deleted its eth2 interface 
> and deleted the 10.2.1.1 IP on the eth4 interface.
> After some time it reconfigured the 10.2.1.1 IP on eth4 and it became MASTER.
> {noformat}
> root@r-268-QA:~# ip a
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 0e:00:a9:fe:02:ac brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.172/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast state UP 
> qlen 1000
> link/ether 06:e4:c8:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.147.46.102/24 brd 

[jira] [Closed] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread Rohit Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Yadav closed CLOUDSTACK-9892.
---
Resolution: Fixed

> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9606) While IP address is released, tag are not deleted

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315206#comment-16315206
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9606:


rhtyd commented on issue #1775: CLOUDSTACK-9606: While IP address is released, 
tag are not deleted.
URL: https://github.com/apache/cloudstack/pull/1775#issuecomment-355816469
 
 
   @priyankparihar can you fix the error in the new marvin test, see Travis's 
failed job for details:
   ```
test_25_CLOUDSTACK_9606| exceptions.NameError | 16.268  | test_tags
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> While IP address is released, tag are not deleted
> -
>
> Key: CLOUDSTACK-9606
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9606
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
>
> IP address release API call (disassociateIpAddress) does not have any 
> mechanism to remove the tags.
> All though the IP address is not allocated, corresponding tag still exists.
> REPRO STEPS
> ==
> 1. Acquire an IP address by Domain-Admin account A. 
> 2. Add tag to the target IP address by Domain-Admin account A. 
> 3. Release the target IP address without deleting the tag. 
> ⇒We found out that the state of the IP address is "Free" at this point, 
> but the tag which added by Domain-Admin account A still remains. 
> 4. Acquire the target IP address by Domain-Admin account B. 
> ⇒The tag still remains without change. 
> If account B tries to delete the tag, in our lab we can delete the tag as 
> domain admin. Although customer reported that they can't complete it because 
> of authority error.
> EXPECTED BEHAVIOR
> ==
> When we release an IP address, the corresponding tags should be removed from 
> related tables
> ACTUAL BEHAVIOR
> ==
> When we release an IP address, the corresponding tags are not removed from 
> related tables
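
A hypothetical sketch (in Java) of the expected cleanup path; the DAO, helper, and method names below are assumptions for illustration and are not the actual CloudStack disassociateIpAddress implementation:
{code}
// Sketch only: remove any resource tags tied to a public IP when it is
// released, so the next account acquiring the address does not inherit
// stale tags. All identifiers here are illustrative assumptions.
public void releasePublicIp(final long ipAddressId) {
    final List<ResourceTagVO> tags =
        resourceTagDao.listBy(ipAddressId, ResourceObjectType.PublicIpAddress);
    for (final ResourceTagVO tag : tags) {
        resourceTagDao.remove(tag.getId());
    }
    // Only after the tags are gone should the address be marked Free.
    ipAddressManager.markIpAsFree(ipAddressId);
}
{code}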



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315191#comment-16315191
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


rhtyd commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355814287
 
 
   @blueorangutan test centos7 vmware-55u3


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are one single virtual disk in case of 
> XenServer/XCP and KVM hypervisors since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. And currently, 
> Cloudstack only supports Template creation based on OVA files containing a 
> single disk. If a user creates a template from a OVA file containing more 
> than 1 disk and launches an instance using this template, only the first disk 
> is attached to the new instance and other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK to being attached to the 
> VM.
> This behavior needs to be improved in VMWare to support OVA files with 
> multiple disks for both uploaded volumes and templates. i.e. If a user 
> creates a template from a OVA file containing more than 1 disk and launches 
> an instance using this template, the first disk should be attached to the new 
> instance as the ROOT disk and volumes should be created based on other VMDK 
> disks in the OVA file and should be attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315192#comment-16315192
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


blueorangutan commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355814305
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-55u3) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are one single virtual disk in case of 
> XenServer/XCP and KVM hypervisors since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. And currently, 
> Cloudstack only supports Template creation based on OVA files containing a 
> single disk. If a user creates a template from a OVA file containing more 
> than 1 disk and launches an instance using this template, only the first disk 
> is attached to the new instance and other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK to being attached to the 
> VM.
> This behavior needs to be improved in VMWare to support OVA files with 
> multiple disks for both uploaded volumes and templates. i.e. If a user 
> creates a template from a OVA file containing more than 1 disk and launches 
> an instance using this template, the first disk should be attached to the new 
> instance as the ROOT disk and volumes should be created based on other VMDK 
> disks in the OVA file and should be attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9813) Use configdrive for userdata, metadata & password

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315188#comment-16315188
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9813:


rhtyd commented on issue #2097: [4.11] CLOUDSTACK-9813: Extending Config Drive 
support
URL: https://github.com/apache/cloudstack/pull/2097#issuecomment-355814239
 
 
   Higher than normal failures, can you check the failures @fmaximus ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use configdrive for userdata, metadata & password 
> --
>
> Key: CLOUDSTACK-9813
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9813
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Controller, Secondary Storage, SystemVM, 
> VMware
>Affects Versions: Future
>Reporter: Eric Waegeman
>Assignee: Kris Sterckx
>
> To avoid the use of an extra VM for the virtual router we implement 
> configdrive for userdata, metadata & password. 
> The configdrive ISO is created on the secondary store and the KVM & VMware 
> plugins are adapted to accept the configdrive ISO as second cdrom.
> Is applicable for isolated, VPC and shared networks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315190#comment-16315190
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

blueorangutan commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary 
Storage for KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355814250
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315187#comment-16315187
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

rhtyd commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary Storage for 
KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355814222
 
 
   @blueorangutan test


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315189#comment-16315189
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


blueorangutan commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355814247
 
 
   Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1611


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are one single virtual disk in case of 
> XenServer/XCP and KVM hypervisors since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. And currently, 
> Cloudstack only supports Template creation based on OVA files containing a 
> single disk. If a user creates a template from a OVA file containing more 
> than 1 disk and launches an instance using this template, only the first disk 
> is attached to the new instance and other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK to being attached to the 
> VM.
> This behavior needs to be improved in VMWare to support OVA files with 
> multiple disks for both uploaded volumes and templates. i.e. If a user 
> creates a template from a OVA file containing more than 1 disk and launches 
> an instance using this template, the first disk should be attached to the new 
> instance as the ROOT disk and volumes should be created based on other VMDK 
> disks in the OVA file and should be attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315186#comment-16315186
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

blueorangutan commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary 
Storage for KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355814124
 
 
   Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1610


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315183#comment-16315183
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


rhtyd commented on a change in pull request #2387: CLOUDSTACK-8855 Improve 
Error Message for Host Alert State and reconnect host API.
URL: https://github.com/apache/cloudstack/pull/2387#discussion_r159857552
 
 

 ##
 File path: 
api/src/org/apache/cloudstack/api/command/admin/host/ReconnectHostCmd.java
 ##
 @@ -101,16 +104,17 @@ public Long getInstanceId() {
 public void execute() {
 try {
 Host result = _resourceService.reconnectHost(this);
-if (result != null) {
-HostResponse response = 
_responseGenerator.createHostResponse(result);
-response.setResponseName(getCommandName());
-this.setResponseObject(response);
-} else {
-throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, 
"Failed to reconnect host");
-}
-} catch (Exception ex) {
-s_logger.warn("Exception: ", ex);
-throw new 
ServerApiException(ApiErrorCode.RESOURCE_UNAVAILABLE_ERROR, ex.getMessage());
+HostResponse response = 
_responseGenerator.createHostResponse(result);
+response.setResponseName(getCommandName());
+this.setResponseObject(response);
+}catch (InvalidParameterValueException e) {
 
 Review comment:
   Space between catch and closing brace?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315185#comment-16315185
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


rhtyd commented on a change in pull request #2387: CLOUDSTACK-8855 Improve 
Error Message for Host Alert State and reconnect host API.
URL: https://github.com/apache/cloudstack/pull/2387#discussion_r159857677
 
 

 ##
 File path: 
api/src/org/apache/cloudstack/api/command/admin/host/ReconnectHostCmd.java
 ##
 @@ -101,16 +104,17 @@ public Long getInstanceId() {
 public void execute() {
 try {
 Host result = _resourceService.reconnectHost(this);
-if (result != null) {
-HostResponse response = 
_responseGenerator.createHostResponse(result);
-response.setResponseName(getCommandName());
-this.setResponseObject(response);
-} else {
-throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, 
"Failed to reconnect host");
-}
-} catch (Exception ex) {
-s_logger.warn("Exception: ", ex);
-throw new 
ServerApiException(ApiErrorCode.RESOURCE_UNAVAILABLE_ERROR, ex.getMessage());
+HostResponse response = 
_responseGenerator.createHostResponse(result);
+response.setResponseName(getCommandName());
+this.setResponseObject(response);
+}catch (InvalidParameterValueException e) {
+throw new ServerApiException(ApiErrorCode.PARAM_ERROR, 
e.getMessage());
+}
+catch (CloudRuntimeException e) {
+s_logger.warn("Exception: ", e);
+throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, 
e.getMessage());
+}catch (AgentUnavailableException e) {
 
 Review comment:
   Same as above^^
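   For clarity, the layout being requested is the usual `} catch (...) {` on one line; a sketch of only the brace placement (the catch bodies stay as written in the PR):
   
   ```java
           } catch (InvalidParameterValueException e) {
               throw new ServerApiException(ApiErrorCode.PARAM_ERROR, e.getMessage());
           } catch (CloudRuntimeException e) {
               s_logger.warn("Exception: ", e);
               throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, e.getMessage());
           } catch (AgentUnavailableException e) {
               // body unchanged from the PR; only the brace placement differs
           }
   ```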


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9813) Use configdrive for userdata, metadata & password

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315182#comment-16315182
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9813:


blueorangutan commented on issue #2097: [4.11] CLOUDSTACK-9813: Extending 
Config Drive support
URL: https://github.com/apache/cloudstack/pull/2097#issuecomment-355814047
 
 
   Trillian test result (tid-2052)
   Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 58531 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2097-t2052-vmware-55u3.zip
   Intermittent failure detected: /marvin/tests/smoke/test_public_ip_range.py
   Intermittent failure detected: /marvin/tests/smoke/test_ssvm.py
   Intermittent failure detected: /marvin/tests/smoke/test_templates.py
   Intermittent failure detected: /marvin/tests/smoke/test_usage.py
   Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
   Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
   Smoke tests completed. 61 look OK, 6 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_01_list_sec_storage_vm | `Failure` | 0.18 | test_ssvm.py
   test_02_list_cpvm_vm | `Failure` | 0.19 | test_ssvm.py
   test_05_stop_ssvm | `Failure` | 105.72 | test_ssvm.py
   test_06_stop_cpvm | `Failure` | 135.59 | test_ssvm.py
   test_04_extract_template | `Failure` | 142.45 | test_templates.py
   ContextSuite context=TestISOUsage>:setup | `Error` | 0.00 | test_usage.py
   test_10_attachAndDetach_iso | `Error` | 17.15 | test_vm_life_cycle.py
   test_06_download_detached_volume | `Failure` | 192.96 | test_volumes.py
   test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 708.76 | 
test_vpc_redundant.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use configdrive for userdata, metadata & password 
> --
>
> Key: CLOUDSTACK-9813
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9813
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Controller, Secondary Storage, SystemVM, 
> VMware
>Affects Versions: Future
>Reporter: Eric Waegeman
>Assignee: Kris Sterckx
>
> To avoid the use of an extra VM for the virtual router we implement 
> configdrive for userdata, metadata & password. 
> The configdrive ISO is created on the secondary store and the KVM & VMware 
> plugins are adapted to accept the configdrive ISO as second cdrom.
> Is applicable for isolated, VPC and shared networks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-8855) Improve Error Message for Host Alert State

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315184#comment-16315184
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8855:


rhtyd commented on a change in pull request #2387: CLOUDSTACK-8855 Improve 
Error Message for Host Alert State and reconnect host API.
URL: https://github.com/apache/cloudstack/pull/2387#discussion_r159857611
 
 

 ##
 File path: 
api/src/org/apache/cloudstack/api/command/admin/host/ReconnectHostCmd.java
 ##
 @@ -101,16 +104,17 @@ public Long getInstanceId() {
 public void execute() {
 try {
 Host result = _resourceService.reconnectHost(this);
-if (result != null) {
-HostResponse response = 
_responseGenerator.createHostResponse(result);
-response.setResponseName(getCommandName());
-this.setResponseObject(response);
-} else {
-throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, 
"Failed to reconnect host");
-}
-} catch (Exception ex) {
-s_logger.warn("Exception: ", ex);
-throw new 
ServerApiException(ApiErrorCode.RESOURCE_UNAVAILABLE_ERROR, ex.getMessage());
+HostResponse response = 
_responseGenerator.createHostResponse(result);
+response.setResponseName(getCommandName());
+this.setResponseObject(response);
+}catch (InvalidParameterValueException e) {
+throw new ServerApiException(ApiErrorCode.PARAM_ERROR, 
e.getMessage());
+}
+catch (CloudRuntimeException e) {
 
 Review comment:
   Fix this to be one same line as closing brace `}`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improve Error Message for Host Alert State
> --
>
> Key: CLOUDSTACK-8855
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8855
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0
>Reporter: Bharat Kumar
>Assignee: Bharat Kumar
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315178#comment-16315178
 ] 

ASF subversion and git services commented on CLOUDSTACK-9892:
-

Commit 4d7a9d82cc2df041c59a6d126b8b5d5228b3de5d in cloudstack's branch 
refs/heads/master from koushik-das
[ https://gitbox.apache.org/repos/asf?p=cloudstack.git;h=4d7a9d8 ]

CLOUDSTACK-9892: Primary storage resource check is broken when using root disk 
size override to deploy VM (#2088)

This happens when the root disk size is overridden. The primary storage limit 
check should be performed based on the overridden size instead of the template 
size. Root disk resize tests were also enabled to run on the simulator.
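
For illustration, a minimal sketch of the corrected sizing logic, assuming the rootdisksize custom parameter is given in GB (GB_TO_BYTES, NumbersUtil and _templateDao follow the PR diff; the limit-check call and the omitted hypervisor validation are illustrative assumptions):
{code}
// Sketch only: compute the size used for the primary storage limit check.
long size = 0;
final Long templateSize = _templateDao.findById(template.getId()).getSize();
if (customParameters.containsKey("rootdisksize")) {
    // The override is specified in GB, so convert before checking limits;
    // using the template size here is what broke the resource check.
    final long rootDiskSizeGb = NumbersUtil.parseLong(customParameters.get("rootdisksize"), 0);
    size = rootDiskSizeGb * GB_TO_BYTES;
} else if (templateSize != null) {
    // For baremetal the template size can be null.
    size = templateSize;
}
_resourceLimitMgr.checkResourceLimit(owner, ResourceType.primary_storage, size);
{code}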

> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315177#comment-16315177
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9892:


rhtyd closed pull request #2088: CLOUDSTACK-9892: Primary storage resource 
check is broken when using …
URL: https://github.com/apache/cloudstack/pull/2088
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/plugins/hypervisors/simulator/src/com/cloud/resource/SimulatorStorageProcessor.java
 
b/plugins/hypervisors/simulator/src/com/cloud/resource/SimulatorStorageProcessor.java
index 9d86bc31b71..b493f6e2bcd 100644
--- 
a/plugins/hypervisors/simulator/src/com/cloud/resource/SimulatorStorageProcessor.java
+++ 
b/plugins/hypervisors/simulator/src/com/cloud/resource/SimulatorStorageProcessor.java
@@ -86,9 +86,17 @@ public Answer copyTemplateToPrimaryStorage(CopyCommand cmd) {
 
 @Override
 public Answer cloneVolumeFromBaseTemplate(CopyCommand cmd) {
+long size = 100;
+DataTO dataTO = cmd.getDestTO();
+if (dataTO instanceof VolumeObjectTO) {
+VolumeObjectTO destVolume = (VolumeObjectTO)dataTO;
+if (destVolume.getSize() != null) {
+size = destVolume.getSize();
+}
+}
 VolumeObjectTO volume = new VolumeObjectTO();
 volume.setPath(UUID.randomUUID().toString());
-volume.setSize(100);
+volume.setSize(size);
 volume.setFormat(Storage.ImageFormat.RAW);
 return new CopyCmdAnswer(volume);
 }
diff --git a/server/src/com/cloud/vm/UserVmManagerImpl.java 
b/server/src/com/cloud/vm/UserVmManagerImpl.java
index df50f5a9162..230708cc720 100644
--- a/server/src/com/cloud/vm/UserVmManagerImpl.java
+++ b/server/src/com/cloud/vm/UserVmManagerImpl.java
@@ -311,9 +311,8 @@
 public class UserVmManagerImpl extends ManagerBase implements UserVmManager, 
VirtualMachineGuru, UserVmService, Configurable {
 private static final Logger s_logger = 
Logger.getLogger(UserVmManagerImpl.class);
 
-private static final int ACQUIRE_GLOBAL_LOCK_TIMEOUT_FOR_COOPERATION = 3; 
// 3
-
-// seconds
+private static final int ACQUIRE_GLOBAL_LOCK_TIMEOUT_FOR_COOPERATION = 3; 
// 3 seconds
+private static final long GB_TO_BYTES = 1024 * 1024 * 1024;
 
 @Inject
 EntityManager _entityMgr;
@@ -3251,6 +3250,19 @@ protected UserVm createVirtualMachine(DataCenter zone, 
ServiceOffering serviceOf
 _templateDao.loadDetails(template);
 }
 
+HypervisorType hypervisorType = null;
+if (template.getHypervisorType() == null || 
template.getHypervisorType() == HypervisorType.None) {
+if (hypervisor == null || hypervisor == HypervisorType.None) {
+throw new InvalidParameterValueException("hypervisor parameter 
is needed to deploy VM or the hypervisor parameter value passed is invalid");
+}
+hypervisorType = hypervisor;
+} else {
+if (hypervisor != null && hypervisor != HypervisorType.None && 
hypervisor != template.getHypervisorType()) {
+throw new InvalidParameterValueException("Hypervisor passed to 
the deployVm call, is different from the hypervisor type of the template");
+}
+hypervisorType = template.getHypervisorType();
+}
+
 long accountId = owner.getId();
 
 assert !(requestedIps != null && (defaultIps.getIp4Address() != null 
|| defaultIps.getIp6Address() != null)) : "requestedIp list and 
defaultNetworkIp should never be specified together";
@@ -3283,11 +3295,25 @@ protected UserVm createVirtualMachine(DataCenter zone, 
ServiceOffering serviceOf
 }
 // check if account/domain is with in resource limits to create a new 
vm
 boolean isIso = Storage.ImageFormat.ISO == template.getFormat();
-// For baremetal, size can be null
-Long tmp = _templateDao.findById(template.getId()).getSize();
 long size = 0;
-if (tmp != null) {
-size = tmp;
+// custom root disk size, resizes base template to larger size
+if (customParameters.containsKey("rootdisksize")) {
+// only KVM, XenServer and VMware supports rootdisksize override
+if (!(hypervisorType == HypervisorType.KVM || hypervisorType == 
HypervisorType.XenServer || hypervisorType == HypervisorType.VMware || 
hypervisorType == HypervisorType.Simulator)) {
+throw new InvalidParameterValueException("Hypervisor " + 
hypervisorType + " does not support rootdisksize override");
+}
+
+Long rootDiskSize = 
NumbersUtil.parseLong(customParameters.get("rootdisksize"), 

[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315176#comment-16315176
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9892:


rhtyd commented on issue #2088: CLOUDSTACK-9892: Primary storage resource check 
is broken when using …
URL: https://github.com/apache/cloudstack/pull/2088#issuecomment-355813791
 
 
   Thanks @DaanHoogland for reviewing. I'll merge this based on test results 
and two code reviews.
   Please submit subsequent improvements as a separate PR - @koushik-das 
@yvsubhash 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9620) Improvements for Managed Storage

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315170#comment-16315170
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9620:


rhtyd commented on issue #2298: CLOUDSTACK-9620: Enhancements for managed 
storage
URL: https://github.com/apache/cloudstack/pull/2298#issuecomment-35581
 
 
   @mike-tutkowski can you fix the runtime issue? The management server fails to 
start with: /cc @DaanHoogland 
   ```
   2018-01-07 18:27:32,548 ERROR [o.a.c.s.m.w.CloudStackContextLoaderListener] 
(main:null) (logid:) Failed to start CloudStack
   java.io.IOException: Resource 
[jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.11.0.0-SNAPSHOT.jar!/META-INF/cloudstack/sioc/module.properties]
 is expected to exist at 
[classpath:META-INF/cloudstack/vmware-sioc/module.properties] please ensure the 
name property is correct
   at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinition.checkNameMatchesSelf(DefaultModuleDefinition.java:108)
   
   ```
   Due to this issue, and without a fix, we cannot test or review this further. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improvements for Managed Storage
> 
>
> Key: CLOUDSTACK-9620
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9620
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Management Server, VMware, XenServer
>Affects Versions: 4.11.0.0
> Environment: KVM, vSphere, and XenServer
>Reporter: Mike Tutkowski
>Assignee: Mike Tutkowski
> Fix For: 4.11.0.0
>
>
> Allowed zone-wide primary storage based on a custom plug-in to be added via 
> the GUI in a KVM-only environment (previously this only worked for XenServer 
> and VMware)
> Added support for root disks on managed storage with KVM
> Added support for volume snapshots with managed storage on KVM
> Enabled creating a template directly from a volume (i.e. without having to go 
> through a volume snapshot) on KVM with managed storage
> Only allowed the resizing of a volume for managed storage on KVM if the 
> volume in question is either not attached to a VM or is attached to a VM in 
> the Stopped state
> Included support for Reinstall VM on KVM with managed storage
> Enabled offline migration on KVM from non-managed storage to managed storage 
> and vice versa
> Included support for online storage migration on KVM with managed storage 
> (NFS and Ceph to managed storage)
> Added support to download (extract) a managed-storage volume to a QCOW2 file
> When uploading a file from outside of CloudStack to CloudStack, set the min 
> and max IOPS, if applicable.
> Included support for the KVM auto-convergence feature
> The compression flag was actually added in version 1.0.3 (103) as opposed 
> to version 1.3.0 (1003000) (changed this to reflect the correct version)
> On KVM when using iSCSI-based managed storage, if the user shuts a VM down 
> from the guest OS (as opposed to doing so from CloudStack), we need to pass 
> to the KVM agent a list of applicable iSCSI volumes that need to be 
> disconnected.
> Added a new Global Setting: kvm.storage.live.migration.wait
> For XenServer, added a check to enforce that only volumes from zone-wide 
> managed storage can be storage motioned from a host in one cluster to a host 
> in another cluster (cannot do so at the time being with volumes from 
> cluster-scoped managed storage)
> Don’t allow Storage XenMotion on a VM that has any managed-storage volume 
> with one or more snapshots.
> Enabled for managed storage with VMware: Template caching, create snapshot, 
> delete snapshot, create volume from snapshot, and create template from 
> snapshot
> Added an SIOC API plug-in to support VMware SIOC
> When starting a VM that uses managed storage in a cluster other than the one 
> it last was running in, we need to remove the reference to the iSCSI volume 
> from the original cluster.
> Added the ability to revert a volume to a snapshot
> Enabled cluster-scoped managed storage
> Added support for VMware dynamic discovery



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315167#comment-16315167
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9892:


DaanHoogland commented on a change in pull request #2088: CLOUDSTACK-9892: 
Primary storage resource check is broken when using …
URL: https://github.com/apache/cloudstack/pull/2088#discussion_r160045607
 
 

 ##
 File path: server/src/com/cloud/vm/UserVmManagerImpl.java
 ##
 @@ -3283,11 +3295,25 @@ protected UserVm createVirtualMachine(DataCenter zone, 
ServiceOffering serviceOf
 }
 // check if account/domain is with in resource limits to create a new 
vm
 boolean isIso = Storage.ImageFormat.ISO == template.getFormat();
-// For baremetal, size can be null
-Long tmp = _templateDao.findById(template.getId()).getSize();
 long size = 0;
-if (tmp != null) {
-size = tmp;
+// custom root disk size, resizes base template to larger size
 
 Review comment:
   again I would have liked to see this extracted


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315168#comment-16315168
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9892:


DaanHoogland commented on a change in pull request #2088: CLOUDSTACK-9892: 
Primary storage resource check is broken when using …
URL: https://github.com/apache/cloudstack/pull/2088#discussion_r160045542
 
 

 ##
 File path: server/src/com/cloud/vm/UserVmManagerImpl.java
 ##
 @@ -3251,6 +3250,19 @@ protected UserVm createVirtualMachine(DataCenter zone, 
ServiceOffering serviceOf
 _templateDao.loadDetails(template);
 }
 
+HypervisorType hypervisorType = null;
 
 Review comment:
   good opportunity to extract and reduce this mega method


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315163#comment-16315163
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

rhtyd closed pull request #2365: CLOUDSTACK-10197: Rename xentools iso for 
XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/discoverer/XcpServerDiscoverer.java
 
b/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/discoverer/XcpServerDiscoverer.java
index 83a9c23617f..d23f7a86c35 100644
--- 
a/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/discoverer/XcpServerDiscoverer.java
+++ 
b/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/discoverer/XcpServerDiscoverer.java
@@ -536,7 +536,7 @@ private void createXsToolsISO() {
 id = _tmpltDao.getNextInSequence(Long.class, "id");
 VMTemplateVO template =
 VMTemplateVO.createPreHostIso(id, isoName, isoName, 
ImageFormat.ISO, true, true, TemplateType.PERHOST, null, null, true, 64, 
Account.ACCOUNT_ID_SYSTEM,
-null, "xen-pv-drv-iso", false, 1, false, 
HypervisorType.XenServer);
+null, "XenServer Tools Installer ISO 
(xen-pv-drv-iso)", false, 1, false, HypervisorType.XenServer);
 _tmpltDao.persist(template);
 } else {
 id = tmplt.getId();
diff --git 
a/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
 
b/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
index f960b9f09b9..97d6118d335 100644
--- 
a/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
+++ 
b/plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
@@ -2592,9 +2592,10 @@ public VDI getIsoVDIByURL(final Connection conn, final 
String vmName, final Stri
 String mountpoint = null;
 if (isoURL.startsWith("xs-tools")) {
 try {
-final Set vdis = VDI.getByNameLabel(conn, isoURL);
+final String actualIsoURL = actualIsoTemplate(conn);
+final Set vdis = VDI.getByNameLabel(conn, actualIsoURL);
 if (vdis.isEmpty()) {
-throw new CloudRuntimeException("Could not find ISO with 
URL: " + isoURL);
+throw new CloudRuntimeException("Could not find ISO with 
URL: " + actualIsoURL);
 }
 return vdis.iterator().next();
 
@@ -2630,6 +2631,22 @@ public VDI getIsoVDIByURL(final Connection conn, final 
String vmName, final Stri
 }
 }
 
+private String actualIsoTemplate(final Connection conn) throws 
BadServerResponse, XenAPIException, XmlRpcException {
+final Host host = Host.getByUuid(conn, _host.getUuid());
+final Host.Record record = host.getRecord(conn);
+final String xenBrand = record.softwareVersion.get("product_brand");
+final String xenVersion = 
record.softwareVersion.get("product_version");
+final String[] items = xenVersion.split("\\.");
+
+// guest-tools.iso for XenServer version 7.0+
+if (xenBrand.equals("XenServer") && Integer.parseInt(items[0]) >= 7) {
+return "guest-tools.iso";
+}
+
+// xs-tools.iso for older XenServer versions
+return "xs-tools.iso";
+}
+
 public String getLabel() {
 final Connection conn = getConnection();
 final String result = callHostPlugin(conn, "ovstunnel", "getLabel");
@@ -3882,9 +3899,10 @@ protected VDI mount(final Connection conn, final String 
vmName, final DiskTO vol
 final String templateName = iso.getName();
 if (templateName.startsWith("xs-tools")) {
 try {
-final Set<VDI> vdis = VDI.getByNameLabel(conn, 
templateName);
+final String actualTemplateName = actualIsoTemplate(conn);
+final Set<VDI> vdis = VDI.getByNameLabel(conn, 
actualTemplateName);
 if (vdis.isEmpty()) {
-throw new CloudRuntimeException("Could not find ISO 
with URL: " + templateName);
+throw new CloudRuntimeException("Could not find ISO 
with URL: " + actualTemplateName);
 }
 return vdis.iterator().next();
 } catch (final XenAPIException e) {


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315162#comment-16315162
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

blueorangutan commented on issue #2365: CLOUDSTACK-10197: Rename xentools iso 
for XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365#issuecomment-355812109
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315161#comment-16315161
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

rhtyd commented on issue #2365: CLOUDSTACK-10197: Rename xentools iso for 
XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365#issuecomment-355812078
 
 
   @blueorangutan test centos7 xenserver-65sp1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315160#comment-16315160
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

rhtyd commented on issue #2365: CLOUDSTACK-10197: Rename xentools iso for 
XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365#issuecomment-355812920
 
 
   Tests LGTM for 7.1, comparing baseline results against #2376. Tests LGTM for 
XenServer 6.5sp1 as well. The failures around template/volume are not related to 
this PR and are also seen in #2376.
   Merging this based on code reviews and test results.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315164#comment-16315164
 ] 

ASF subversion and git services commented on CLOUDSTACK-10197:
--

Commit 25d7d741a7a80fa615c576fda6248e1f1a28dafe in cloudstack's branch 
refs/heads/master from [~kmoossavi]
[ https://gitbox.apache.org/repos/asf?p=cloudstack.git;h=25d7d74 ]

CLOUDSTACK-10197: Rename xentools iso for XenServer 7.0+ (#2365)

The xentools iso has been renamed from xs-tools to guest-tools
starting from XenServer 7.0.
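
For readers skimming the thread: the merged change above selects the tools ISO 
name from the host's product brand and major version. A minimal, standalone 
sketch of that selection rule (illustrative only; the real actualIsoTemplate() 
reads product_brand and product_version from the host record via the XenAPI):

{code}
public class XenToolsIsoNameSketch {
    // Mirrors the rule in the merged diff: XenServer 7.0 and newer ship the
    // tools ISO as guest-tools.iso, older releases keep the xs-tools.iso name.
    static String isoNameFor(final String productBrand, final String productVersion) {
        final int major = Integer.parseInt(productVersion.split("\\.")[0]);
        if ("XenServer".equals(productBrand) && major >= 7) {
            return "guest-tools.iso";
        }
        return "xs-tools.iso";
    }

    public static void main(final String[] args) {
        System.out.println(isoNameFor("XenServer", "7.1.0")); // guest-tools.iso
        System.out.println(isoNameFor("XenServer", "6.5.0")); // xs-tools.iso
    }
}
{code}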

> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315159#comment-16315159
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

rhtyd commented on issue #2365: CLOUDSTACK-10197: Rename xentools iso for 
XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365#issuecomment-355812920
 
 
   Tests LGTM for 7.1, comparing baseline results against #2376. I'll merge this 
after xs-6.5sp1 results are back.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315157#comment-16315157
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


blueorangutan commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355812800
 
 
   @rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted 
as I make progress.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are a single virtual disk in the case of the 
> XenServer/XCP and KVM hypervisors, since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. Currently, CloudStack 
> only supports template creation based on OVA files containing a single disk. 
> If a user creates a template from an OVA file containing more than one disk 
> and launches an instance using this template, only the first disk is attached 
> to the new instance and the other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK being attached to the VM.
> This behavior needs to be improved in VMware to support OVA files with 
> multiple disks for both uploaded volumes and templates, i.e. if a user 
> creates a template from an OVA file containing more than one disk and 
> launches an instance using this template, the first disk should be attached 
> to the new instance as the ROOT disk, and volumes should be created from the 
> other VMDK disks in the OVA file and attached to the instance.
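
To make the multi-disk case concrete, below is a small hypothetical sketch (not 
CloudStack code) that lists the disk entries of an OVF descriptor after it has 
been extracted from the OVA archive; under the proposal above, the first entry 
would back the ROOT disk and the remaining entries would become data volumes.

{code}
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical helper: enumerate the <Disk> entries of an extracted .ovf descriptor.
public class OvfDiskListerSketch {
    private static final String OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1";

    public static void main(final String[] args) throws Exception {
        final DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        // args[0] is a path to an .ovf file already unpacked from the OVA (tar) archive.
        final Document doc = dbf.newDocumentBuilder().parse(new File(args[0]));

        final NodeList disks = doc.getElementsByTagNameNS(OVF_NS, "Disk");
        for (int i = 0; i < disks.getLength(); i++) {
            final Element disk = (Element) disks.item(i);
            // fileRef points at a <File> entry whose href is the actual VMDK inside the OVA.
            System.out.printf("disk %d: id=%s fileRef=%s capacity=%s%n", i,
                    disk.getAttributeNS(OVF_NS, "diskId"),
                    disk.getAttributeNS(OVF_NS, "fileRef"),
                    disk.getAttributeNS(OVF_NS, "capacity"));
        }
    }
}
{code}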



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9921) NPE when garbage collector is running

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315156#comment-16315156
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9921:


blueorangutan commented on issue #2139: CLOUDSTACK-9921: NPE when storage 
garbage collector is running.
URL: https://github.com/apache/cloudstack/pull/2139#issuecomment-355812752
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NPE when garbage collector is running
> -
>
> Key: CLOUDSTACK-9921
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9921
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: jay
>
> Steps to reproduce issue
> 1. Deploy a VM
> 2. Take snapshot of the root volume
> 3. Delete the snapshot
> 4. Before the garbage collector has run, shut down the VM and assign the VM to 
> another user.
> 5. When the garbage collector executes, an NPE shows up in the logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315155#comment-16315155
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


rhtyd commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355812751
 
 
   @blueorangutan package


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are a single virtual disk in the case of the 
> XenServer/XCP and KVM hypervisors, since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. Currently, CloudStack 
> only supports template creation based on OVA files containing a single disk. 
> If a user creates a template from an OVA file containing more than one disk 
> and launches an instance using this template, only the first disk is attached 
> to the new instance and the other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK being attached to the VM.
> This behavior needs to be improved in VMware to support OVA files with 
> multiple disks for both uploaded volumes and templates, i.e. if a user 
> creates a template from an OVA file containing more than one disk and 
> launches an instance using this template, the first disk should be attached 
> to the new instance as the ROOT disk, and volumes should be created from the 
> other VMDK disks in the OVA file and attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9896) ListDedicatedXXX doesn't respect pagination

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315154#comment-16315154
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9896:


DaanHoogland commented on issue #2073: CLOUDSTACK-9896: API: listDedicatedXXX 
should respect pagination
URL: https://github.com/apache/cloudstack/pull/2073#issuecomment-355812722
 
 
   no failures in the test results for test_deploy_virtio_scsi_vm.py or 
test_internal_lb.py
   test_privategw_acl.py failures are really intermittent (ping failures)
   test_volumes.py is unrelated but worrying, as it happens a lot lately (a fixed 
size disk offering does not raise an exception while being resized)
   test_vpc_redundant.py failures are genuine and known to happen a lot
   test_vpc_router_nics.py and test_vpc_vpn.py do not report errors
   
   All in all, no test failures are related /cc @rhtyd 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ListDedicatedXXX doesn't respect pagination
> ---
>
> Key: CLOUDSTACK-9896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>Priority: Minor
>
> The listDedicatedZones, listDedicatedPods, listDedicatedClusters, 
> listDedicatedHosts are not using the pagination filter to return the results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9921) NPE when garbage collector is running

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315153#comment-16315153
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9921:


rhtyd commented on issue #2139: CLOUDSTACK-9921: NPE when storage garbage 
collector is running.
URL: https://github.com/apache/cloudstack/pull/2139#issuecomment-355812697
 
 
   @blueorangutan test


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NPE when garbage collector is running
> -
>
> Key: CLOUDSTACK-9921
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9921
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: jay
>
> Steps to reproduce issue
> 1. Deploy a VM
> 2. Take snapshot of the root volume
> 3. Delete the snapshot
> 4. Before the garbage collector has run, shut down the VM and assign the VM to 
> another user.
> 5. When the garbage collector executes, an NPE shows up in the logs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315151#comment-16315151
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

blueorangutan commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary 
Storage for KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355812505
 
 
   @rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted 
as I make progress.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10146) Bypass Secondary Storage for KVM templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315150#comment-16315150
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10146:
-

rhtyd commented on issue #2379: CLOUDSTACK-10146: Bypass Secondary Storage for 
KVM templates
URL: https://github.com/apache/cloudstack/pull/2379#issuecomment-355812483
 
 
   @blueorangutan package


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Bypass Secondary Storage for KVM templates
> --
>
> Key: CLOUDSTACK-10146
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10146
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.11.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9813) Use configdrive for userdata, metadata & password

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315149#comment-16315149
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9813:


rhtyd commented on a change in pull request #2097: [4.11] CLOUDSTACK-9813: 
Extending Config Drive support
URL: https://github.com/apache/cloudstack/pull/2097#discussion_r160045298
 
 

 ##
 File path: tools/marvin/marvin/config/test_data.py
 ##
 @@ -2186,6 +2371,33 @@
 "Dns": "VpcVirtualRouter"
 }
 },
+"vpc_offering_configdrive_withoutdns": {
 
 Review comment:
   General note wrt changes in `test_data.py` - please move to specific marvin 
test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use configdrive for userdata, metadata & password 
> --
>
> Key: CLOUDSTACK-9813
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9813
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Controller, Secondary Storage, SystemVM, 
> VMware
>Affects Versions: Future
>Reporter: Eric Waegeman
>Assignee: Kris Sterckx
>
> To avoid the use of an extra VM for the virtual router, we implement 
> configdrive for userdata, metadata & password. 
> The configdrive ISO is created on the secondary store, and the KVM & VMware 
> plugins are adapted to accept the configdrive ISO as a second cdrom.
> This is applicable to isolated, VPC and shared networks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9813) Use configdrive for userdata, metadata & password

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315148#comment-16315148
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9813:


rhtyd commented on a change in pull request #2097: [4.11] CLOUDSTACK-9813: 
Extending Config Drive support
URL: https://github.com/apache/cloudstack/pull/2097#discussion_r160045278
 
 

 ##
 File path: tools/appliance/systemvmtemplate/scripts/configure_proxy.sh
 ##
 @@ -0,0 +1,38 @@
+#!/bin/bash
 
 Review comment:
   This is not needed for the new system; look at building with packer and how 
to export the http proxy.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use configdrive for userdata, metadata & password 
> --
>
> Key: CLOUDSTACK-9813
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9813
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Controller, Secondary Storage, SystemVM, 
> VMware
>Affects Versions: Future
>Reporter: Eric Waegeman
>Assignee: Kris Sterckx
>
> To avoid the use of an extra VM for the virtual router, we implement 
> configdrive for userdata, metadata & password. 
> The configdrive ISO is created on the secondary store, and the KVM & VMware 
> plugins are adapted to accept the configdrive ISO as a second cdrom.
> This is applicable to isolated, VPC and shared networks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9896) ListDedicatedXXX doesn't respect pagination

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315144#comment-16315144
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9896:


rhtyd closed pull request #2073: CLOUDSTACK-9896: API: listDedicatedXXX should 
respect pagination
URL: https://github.com/apache/cloudstack/pull/2073
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
 
b/plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
index e7a6f35dce2..7cf193d49be 100644
--- 
a/plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
+++ 
b/plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
@@ -73,6 +73,7 @@
 import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.Pair;
 import com.cloud.utils.db.DB;
+import com.cloud.utils.db.Filter;
 import com.cloud.utils.db.Transaction;
 import com.cloud.utils.db.TransactionCallback;
 import com.cloud.utils.db.TransactionCallbackNoReturn;
@@ -816,6 +817,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 String accountName = cmd.getAccountName();
 Long accountId = null;
 Long affinityGroupId = cmd.getAffinityGroupId();
+Long startIndex = cmd.getStartIndex();
+Long pageSize = cmd.getPageSizeVal();
 
 if (accountName != null) {
 if (domainId != null) {
@@ -827,7 +830,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 throw new InvalidParameterValueException("Please specify the 
domain id of the account: " + accountName);
 }
 }
-Pair<List<DedicatedResourceVO>, Integer> result = 
_dedicatedDao.searchDedicatedZones(zoneId, domainId, accountId, 
affinityGroupId);
+Filter searchFilter = new Filter(DedicatedResourceVO.class, "id", 
true, startIndex, pageSize);
+Pair<List<DedicatedResourceVO>, Integer> result = 
_dedicatedDao.searchDedicatedZones(zoneId, domainId, accountId, 
affinityGroupId, searchFilter);
return new Pair<List<? extends DedicatedResourceVO>, Integer>(result.first(), result.second());
 }
 
@@ -838,6 +842,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 String accountName = cmd.getAccountName();
 Long accountId = null;
 Long affinityGroupId = cmd.getAffinityGroupId();
+Long startIndex = cmd.getStartIndex();
+Long pageSize = cmd.getPageSizeVal();
 
 if (accountName != null) {
 if (domainId != null) {
@@ -849,7 +855,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 throw new InvalidParameterValueException("Please specify the 
domain id of the account: " + accountName);
 }
 }
-Pair<List<DedicatedResourceVO>, Integer> result = 
_dedicatedDao.searchDedicatedPods(podId, domainId, accountId, affinityGroupId);
+Filter searchFilter = new Filter(DedicatedResourceVO.class, "id", 
true, startIndex, pageSize);
+Pair<List<DedicatedResourceVO>, Integer> result = 
_dedicatedDao.searchDedicatedPods(podId, domainId, accountId, affinityGroupId, 
searchFilter);
return new Pair<List<? extends DedicatedResourceVO>, Integer>(result.first(), result.second());
 }
 
@@ -860,6 +867,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 String accountName = cmd.getAccountName();
 Long accountId = null;
 Long affinityGroupId = cmd.getAffinityGroupId();
+Long startIndex = cmd.getStartIndex();
+Long pageSize = cmd.getPageSizeVal();
 
 if (accountName != null) {
 if (domainId != null) {
@@ -871,7 +880,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 throw new InvalidParameterValueException("Please specify the 
domain id of the account: " + accountName);
 }
 }
-Pair<List<DedicatedResourceVO>, Integer> result = 
_dedicatedDao.searchDedicatedClusters(clusterId, domainId, accountId, 
affinityGroupId);
+Filter searchFilter = new Filter(DedicatedResourceVO.class, "id", 
true, startIndex, pageSize);
+Pair<List<DedicatedResourceVO>, Integer> result = 
_dedicatedDao.searchDedicatedClusters(clusterId, domainId, accountId, 
affinityGroupId, searchFilter);
return new Pair<List<? extends DedicatedResourceVO>, Integer>(result.first(), result.second());
 }
 
@@ -881,6 +891,8 @@ public DedicateHostResponse 
createDedicateHostResponse(DedicatedResources resour
 Long domainId = cmd.getDomainId();
 String accountName = cmd.getAccountName();
 Long 

[jira] [Commented] (CLOUDSTACK-9896) ListDedicatedXXX doesn't respect pagination

2018-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315145#comment-16315145
 ] 

ASF subversion and git services commented on CLOUDSTACK-9896:
-

Commit 92a6bc27ff862be72de7f96a2e836d6bb66a353c in cloudstack's branch 
refs/heads/master from [~marcaurele]
[ https://gitbox.apache.org/repos/asf?p=cloudstack.git;h=92a6bc2 ]

CLOUDSTACK-9896: listDedicatedXXX should respect pagination (#2073)

Fixes listDedicatedxxx APIs to respect pagination options.
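
As the merged diff earlier in this thread shows, the change threads the caller's 
startIndex and pageSize into a Filter(DedicatedResourceVO.class, "id", true, 
startIndex, pageSize) that is passed down to the DAO search methods, so only the 
requested window of rows is returned. A tiny self-contained illustration of that 
window semantics (plain Java, not CloudStack code):

{code}
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class PageWindowSketch {
    // Same contract as the DB filter: order ascending, skip startIndex rows,
    // return at most pageSize rows.
    static <T> List<T> page(final List<T> ordered, final long startIndex, final long pageSize) {
        return ordered.stream().skip(startIndex).limit(pageSize).collect(Collectors.toList());
    }

    public static void main(final String[] args) {
        final List<Long> ids = LongStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        System.out.println(page(ids, 0, 4)); // [1, 2, 3, 4]
        System.out.println(page(ids, 4, 4)); // [5, 6, 7, 8]
    }
}
{code}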

> ListDedicatedXXX doesn't respect pagination
> ---
>
> Key: CLOUDSTACK-9896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>Priority: Minor
>
> The listDedicatedZones, listDedicatedPods, listDedicatedClusters, 
> listDedicatedHosts are not using the pagination filter to return the results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9896) ListDedicatedXXX doesn't respect pagination

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315143#comment-16315143
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9896:


rhtyd commented on issue #2073: CLOUDSTACK-9896: API: listDedicatedXXX should 
respect pagination
URL: https://github.com/apache/cloudstack/pull/2073#issuecomment-355812174
 
 
   Tests LGTM, comparing last two runs where errors did not repeat themselves.
   Merging this based on code reviews and test results.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ListDedicatedXXX doesn't respect pagination
> ---
>
> Key: CLOUDSTACK-9896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>Priority: Minor
>
> The listDedicatedZones, listDedicatedPods, listDedicatedClusters, 
> listDedicatedHosts are not using the pagination filter to return the results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315142#comment-16315142
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

blueorangutan commented on issue #2365: CLOUDSTACK-10197: Rename xentools iso 
for XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365#issuecomment-355812109
 
 
   @rhtyd a Trillian-Jenkins test job (centos7 mgmt + xenserver-65sp1) has been 
kicked to run smoke tests


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-10197) XenServer 7.1: Cannot mount xentool iso from cloudstack on VMs

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315141#comment-16315141
 ] 

ASF GitHub Bot commented on CLOUDSTACK-10197:
-

rhtyd commented on issue #2365: CLOUDSTACK-10197: Rename xentools iso for 
XenServer 7.0+
URL: https://github.com/apache/cloudstack/pull/2365#issuecomment-355812078
 
 
   @blueorangutan test centos7 xenserver-65sp1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> XenServer 7.1: Cannot mount  xentool iso from cloudstack on VMs
> ---
>
> Key: CLOUDSTACK-10197
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10197
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
> Environment: XenServer 7.0+
>Reporter: Khosrow Moossavi
> Fix For: Future
>
>
> In XenServer 7.0+ xentools iso has been renamed from *xs-tools* to 
> *guest-tools* so CloudStack fails to attach it to any VM.
> {code}
> (acs-admin) > attach iso 
> virtualmachineid=d13eeff1-2d99-46a9-8fc5-3510df6e9f5e 
> id=e8a56540-0fc3-44de-9911-635d2d8f25c4
> errorcode = 530
> errortext = Failed to attach iso
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315140#comment-16315140
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9892:


rhtyd commented on issue #2088: CLOUDSTACK-9892: Primary storage resource check 
is broken when using …
URL: https://github.com/apache/cloudstack/pull/2088#issuecomment-355812032
 
 
   Test LGTM. Additional review requested - @DaanHoogland @borisstoyanov 
@yvsubhash and others


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM
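
Not part of the PR text, but to spell out the accounting this fix concerns: when 
a root disk size override is supplied at deploy time, the primary storage 
resource check should count the overridden size rather than the template's size. 
A minimal sketch of that rule (names and units are illustrative assumptions, not 
the actual CloudStack implementation):

{code}
public class RootDiskAccountingSketch {
    // Hypothetical helper: bytes to count against the primary storage limit for the ROOT disk.
    static long rootDiskBytesToCount(final long templateSizeBytes, final Long rootDiskSizeGiBOverride) {
        if (rootDiskSizeGiBOverride != null && rootDiskSizeGiBOverride > 0) {
            return rootDiskSizeGiBOverride * 1024L * 1024L * 1024L; // the override wins
        }
        return templateSizeBytes; // no override: count the template's size
    }

    public static void main(final String[] args) {
        System.out.println(rootDiskBytesToCount(8L << 30, null)); // 8589934592 (template size)
        System.out.println(rootDiskBytesToCount(8L << 30, 20L));  // 21474836480 (20 GiB override)
    }
}
{code}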



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9896) ListDedicatedXXX doesn't respect pagination

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315136#comment-16315136
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9896:


blueorangutan commented on issue #2073: CLOUDSTACK-9896: API: listDedicatedXXX 
should respect pagination
URL: https://github.com/apache/cloudstack/pull/2073#issuecomment-355811817
 
 
   Trillian test result (tid-2057)
   Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 36510 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2073-t2057-kvm-centos7.zip
   Intermittent failure detected: 
/marvin/tests/smoke/test_deploy_virtio_scsi_vm.py
   Intermittent failure detected: /marvin/tests/smoke/test_internal_lb.py
   Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
   Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_router_nics.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
   Smoke tests completed. 64 look OK, 3 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_02_vpc_privategw_static_routes | `Failure` | 189.02 | 
test_privategw_acl.py
   test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 193.48 | 
test_privategw_acl.py
   test_04_rvpc_privategw_static_routes | `Failure` | 274.80 | 
test_privategw_acl.py
   test_07_resize_fail | `Failure` | 15.41 | test_volumes.py
   test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 374.07 | 
test_vpc_redundant.py
   test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
`Failure` | 297.73 | test_vpc_redundant.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ListDedicatedXXX doesn't respect pagination
> ---
>
> Key: CLOUDSTACK-9896
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9896
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Reporter: Marc-Aurèle Brothier
>Assignee: Marc-Aurèle Brothier
>Priority: Minor
>
> The listDedicatedZones, listDedicatedPods, listDedicatedClusters, 
> listDedicatedHosts are not using the pagination filter to return the results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-9892) Primary storage resource check is broken when using root disk size override to deploy VM

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315123#comment-16315123
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9892:


blueorangutan commented on issue #2088: CLOUDSTACK-9892: Primary storage 
resource check is broken when using …
URL: https://github.com/apache/cloudstack/pull/2088#issuecomment-355810307
 
 
   Trillian test result (tid-2056)
   Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 36544 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2088-t2056-kvm-centos7.zip
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
   Smoke tests completed. 67 look OK, 0 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Primary storage resource check is broken when using root disk size override 
> to deploy VM
> 
>
> Key: CLOUDSTACK-9892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Koushik Das
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.11.0.0
>
>
> Primary storage resource check for account/domain is broken when using root 
> disk size override to deploy VM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CLOUDSTACK-4757) Support OVA files with multiple disks for templates

2018-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315107#comment-16315107
 ] 

ASF GitHub Bot commented on CLOUDSTACK-4757:


blueorangutan commented on issue #2146: CLOUDSTACK-4757: Support OVA files with 
multiple disks for templates
URL: https://github.com/apache/cloudstack/pull/2146#issuecomment-355807982
 
 
   Trillian test result (tid-2049)
   Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
   Total time taken: 53255 seconds
   Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr2146-t2049-vmware-55u3.zip
   Intermittent failure detected: 
/marvin/tests/smoke/test_deploy_vgpu_enabled_vm.py
   Intermittent failure detected: 
/marvin/tests/smoke/test_deploy_vm_root_resize.py
   Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
   Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
   Smoke tests completed. 63 look OK, 4 have error(s)
   Only failed tests results shown below:
   
   
   Test | Result | Time (s) | Test File
   --- | --- | --- | ---
   test_3d_gpu_support | `Failure` | 404.84 | test_deploy_vgpu_enabled_vm.py
   test_00_deploy_vm_root_resize | `Error` | 0.17 | 
test_deploy_vm_root_resize.py
   test_01_create_volume | `Failure` | 189.95 | test_volumes.py
   test_05_rvpc_multi_tiers | `Failure` | 665.46 | test_vpc_redundant.py
   test_05_rvpc_multi_tiers | `Error` | 726.46 | test_vpc_redundant.py
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support OVA files with multiple disks for templates
> ---
>
> Key: CLOUDSTACK-4757
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4757
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Reporter: Likitha Shetty
>Assignee: Nicolas Vazquez
>Priority: Minor
> Fix For: Future
>
>
> CloudStack volumes and templates are a single virtual disk in the case of the 
> XenServer/XCP and KVM hypervisors, since the files used for templates and 
> volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates 
> are in OVA format, which are archives that can contain a complete VM 
> including multiple VMDKs and other files such as ISOs. Currently, CloudStack 
> only supports template creation based on OVA files containing a single disk. 
> If a user creates a template from an OVA file containing more than one disk 
> and launches an instance using this template, only the first disk is attached 
> to the new instance and the other disks are ignored.
> Similarly with uploaded volumes, attaching an uploaded volume that contains 
> multiple disks to a VM will result in only one VMDK being attached to the VM.
> This behavior needs to be improved in VMware to support OVA files with 
> multiple disks for both uploaded volumes and templates, i.e. if a user 
> creates a template from an OVA file containing more than one disk and 
> launches an instance using this template, the first disk should be attached 
> to the new instance as the ROOT disk, and volumes should be created from the 
> other VMDK disks in the OVA file and attached to the instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

