[jira] [Commented] (CLOUDSTACK-9790) Can't create a Basic Zone (networking problem)

2017-02-16 Thread Kris Sterckx (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871369#comment-15871369
 ] 

Kris Sterckx commented on CLOUDSTACK-9790:
--

Investigating

> Can't create a Basic Zone (networking problem)
> --
>
> Key: CLOUDSTACK-9790
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9790
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.10
>Reporter: Mike Tutkowski
>Assignee: Kris Sterckx
>Priority: Blocker
> Fix For: 4.10
>
>
> A NullPointerException is thrown when trying to create a Basic Zone:
> java.lang.NullPointerException
>   at com.cloud.utils.net.NetUtils.getCidrNetmask(NetUtils.java:956)
>   at com.cloud.configuration.ConfigurationManagerImpl.
> validateIpRange(ConfigurationManagerImpl.java:2924)
> This appears to be related to PR 1579.
> In ConfigurationManagerImpl.java, the new lines at 2924 – 2926 appear to be 
> the problem: https://github.com/apache/cloudstack/pull/1579/files
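For illustration, a minimal null-safe sketch of CIDR-to-netmask conversion (a hypothetical helper, not the actual NetUtils code) shows the kind of guard the stack trace suggests is missing — in a Basic Zone the guest network may carry no CIDR at all, so the input must be checked before it is dereferenced:

```java
public class CidrUtil {
    // Hypothetical helper: return null instead of dereferencing a null CIDR,
    // which is what appears to trigger the NPE in getCidrNetmask.
    public static String cidrToNetmask(String cidr) {
        if (cidr == null || !cidr.contains("/")) {
            return null; // guard: no CIDR configured for this network
        }
        int prefix = Integer.parseInt(cidr.split("/")[1]);
        long mask = prefix == 0 ? 0 : (0xffffffffL << (32 - prefix)) & 0xffffffffL;
        return String.format("%d.%d.%d.%d",
                (mask >> 24) & 0xff, (mask >> 16) & 0xff, (mask >> 8) & 0xff, mask & 0xff);
    }

    public static void main(String[] args) {
        System.out.println(cidrToNetmask("192.168.1.0/24")); // 255.255.255.0
        System.out.println(cidrToNetmask(null));             // null, not an NPE
    }
}
```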



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CLOUDSTACK-9790) Can't create a Basic Zone (networking problem)

2017-02-16 Thread Kris Sterckx (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kris Sterckx reassigned CLOUDSTACK-9790:


Assignee: Kris Sterckx






[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871357#comment-15871357
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user priyankparihar commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Hi @borisstoyanov and @serg38 ,
The test code has been modified. Please take a look; I think it will not fail now. 


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> marketplace, wastes time and disk space, and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes; we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume at 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor specific code needs to be made to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size must be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template iff
>  there is no root disk chaining (i.e. a full clone is used)
> and the Root Disk controller setting is not IDE.
> A previously created ROOT volume can be resized iff
> there is no root disk chaining
> and the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) The UI shows that a resize volume option is added 
> for ROOT disks.
> 2) (refer attached image 2) When the user calls resize volume on a ROOT volume, 
> only the size option is shown. For DATADISK, disk offerings are shown.
> 3) (refer attached image 3) When the user deploys a VM, a new option for root 
> disk size is added.
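The deployment-time checks described above (rootdisksize must be non-zero and greater than the template's size) can be sketched roughly as follows; the class and method names are illustrative, not the actual UserVmManagerImpl code:

```java
public class RootDiskSizeCheck {
    // Illustrative sketch of the rootdisksize validation described above.
    // rootDiskSizeGb is the user-supplied deployVirtualMachine parameter (GB);
    // templateSizeBytes is the template's virtual size in bytes.
    public static void validate(long rootDiskSizeGb, long templateSizeBytes) {
        if (rootDiskSizeGb <= 0) {
            throw new IllegalArgumentException("rootdisksize must be a positive number of GB");
        }
        long requestedBytes = rootDiskSizeGb << 30; // GB -> bytes
        if (requestedBytes <= templateSizeBytes) {
            throw new IllegalArgumentException("rootdisksize must be larger than the template size");
        }
    }
}
```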





[jira] [Commented] (CLOUDSTACK-8284) Primary_storage value is not updating in resource_count table after VM deletion

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871339#comment-15871339
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8284:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1857
  
@adwaitpatankar this is because _resourceLimitMgr.recalculateResourceCount 
only recalculates the resource count for the account (if accountid is set) or the 
domain (if accountid is not set), not for both account and domain.

We had this issue before; it was fixed by replacing 
_resourceLimitMgr.recalculateResourceCount with
{code}
_resourceLimitMgr.decrementResourceCount(vm.getAccountId(), 
ResourceType.primary_storage, rootVol.get(0).isDisplay(), new 
Long(rootVol.get(0).getSize()));
{code}


> Primary_storage value is not updating in resource_count table after VM 
> deletion
> ---
>
> Key: CLOUDSTACK-8284
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8284
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.4.2
>Reporter: Sonali Jadhav
>Assignee: adwait patankar
>
> I was doing some tests to check resource limitation behavior on a 
> sub-domain, an account and a project.
> For the account I didn't set any limits. Instead I set limits on the sub-domain. 
> But this specific primary storage limit (40 GB) acted weird.
> At first, in the sub-domain, I created a new instance (without a project) and 
> assigned 41 GB, and it successfully created the VM. Then I deleted that VM with 
> the Expunge option enabled and tried to create a VM again with disk size 42 GB, 
> which failed with the error "Maximum number of resources of type 
> 'primary_storage' for domain id=3 has been exceeded".
> Then I set the disk size to 39 GB; that also failed with the same error. Now 
> any disk size I try fails. There are no instances or volumes under 
> that sub-domain.
> So I checked the resource_count table: "count" for primary_storage had not 
> been reset to zero.
> I then logged into the same domain from the UI and, in the "Domains" section, 
> clicked "update resource count". After that, the "primary_storage" value in 
> the resource_count table went back to zero.
> For Reference: 
> http://comments.gmane.org/gmane.comp.apache.cloudstack.user/17008 
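As a toy illustration of the fix discussed in the comment above (decrementing both levels rather than recalculating only one), consider a hedged sketch where a count is kept per account and per domain, and a deletion must decrement the account and every ancestor domain. All names here are illustrative, not CloudStack code:

```java
import java.util.HashMap;
import java.util.Map;

public class ResourceCountSketch {
    // count per owner id ("account:1", "domain:3", ...)
    private final Map<String, Long> counts = new HashMap<>();
    // parent domain of each account / sub-domain
    private final Map<String, String> parent = new HashMap<>();

    public void link(String owner, String parentDomain) { parent.put(owner, parentDomain); }

    public void increment(String account, long bytes) { walk(account, bytes); }

    public void decrement(String account, long bytes) { walk(account, -bytes); }

    // Apply the delta to the account and every ancestor domain, mirroring the
    // decrementResourceCount behaviour mentioned above; a recalculation that
    // touches only one level would leave the other level stale.
    private void walk(String owner, long delta) {
        for (String o = owner; o != null; o = parent.get(o)) {
            counts.merge(o, delta, Long::sum);
        }
    }

    public long count(String owner) { return counts.getOrDefault(owner, 0L); }
}
```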





[jira] [Commented] (CLOUDSTACK-9711) Remote Access VPN user add failure ignored when the VR is in stopped state

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871335#comment-15871335
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9711:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1874
  
harmless change, LGTM


> Remote Access VPN user add failure ignored when the VR is in stopped state
> 
>
> Key: CLOUDSTACK-9711
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9711
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.10.0.0
>
>
> 1. Create multiple networks in an account
> 2. Enable remote access vpn.
> 3. Stop the VR for one of the networks.
> 4. Configure a VPN user. 
> If configuring a VPN user in one of the networks fails, the failure is ignored.
> The failure should be shown in the API response.





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871333#comment-15871333
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@ustcweizhou I think it would be good to add support for 16.04/systemd to 
run the mgmt server. But given 12.04 will EOL in the next 2.5 months, let's have 
support for 16.04 in 4.9 as well, drop support for 4.9.


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871332#comment-15871332
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listnetworks
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, list virtualmachines
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}
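The "/ by zero" above suggests page arithmetic that divides by pagesize without a guard. A hedged sketch of a safe page-count calculation (illustrative only, not the actual listNetworks code):

```java
public class PagingSketch {
    // Illustrative guard for the "/ by zero" above: any page arithmetic that
    // divides by pagesize must reject pagesize == 0 up front with a clear
    // parameter error instead of an ArithmeticException.
    public static long pageCount(long totalItems, long pageSize) {
        if (pageSize <= 0) {
            throw new IllegalArgumentException("pagesize must be greater than 0");
        }
        return (totalItems + pageSize - 1) / pageSize; // ceiling division
    }
}
```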





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871330#comment-15871330
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
Trillian build failed:
16:43:38 FAILED - RETRYING: TASK: Wait for default template to be ready 
before returning (1 retries left).
will try again.. 
@blueorangutan test







[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871329#comment-15871329
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@borisstoyanov a Trillian-Jenkins test job (ubuntu mgmt + kvm-ubuntu) has 
been kicked to run smoke tests







[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871327#comment-15871327
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@rhtyd I remember Trillian having some issues when using ubuntu images, but 
cannot find the build failure. 
Will trigger a new one to investigate. 
@blueorangutan test ubuntu kvm-ubuntu







[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871324#comment-15871324
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@rhtyd then I will close this PR and create another PR for 4.10/master.
Anyone who runs 4.9 LTS and wants 16.04 systemd support can feel 
free to merge this PR into their branch.







[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871321#comment-15871321
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1856
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests


> VR on shared network not starting on KVM
> 
>
> Key: CLOUDSTACK-9569
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9569
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.9.0
>Reporter: John Burwell
>Priority: Critical
> Fix For: 4.9.2.0, 4.10.1.0, 4.11.0.0
>
> Attachments: cloud.log
>
>
> A VR for a shared network on KVM fails to complete startup with the following 
> behavior:
> # VR starts on KVM
> # Agent pings VR
> # Increase timeout from 120 seconds to 1200 seconds
> # API configuration starts
> The Management Server reports that the command times out.  Please see the 
> attached {cloud.log} which depicts the activity of the VR through the 
> timeout.  This failure does not occur on VMware.  





[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871320#comment-15871320
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1856
  
@blueorangutan test







[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871298#comment-15871298
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1856
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-501







[jira] [Commented] (CLOUDSTACK-9356) VPC add VPN User fails same error as CLOUDSTACK-8927

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871295#comment-15871295
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9356:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1903
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-500


> VPC add VPN User fails same error as CLOUDSTACK-8927
> 
>
> Key: CLOUDSTACK-9356
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9356
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VPC, XenServer
>Affects Versions: 4.8.0, 4.9.0
> Environment: Two CentOS7 MGMT Servers, Two XenServerClusters, 
> Advanced Networking, VLAN isolated
>Reporter: Thomas
>Priority: Critical
>
> When we try to add a VPN user on a VPC, the following error occurs:
> Management Server:
> ---
> Apr 20 09:24:43 WARN  [resource.virtualnetwork.VirtualRoutingResource] 
> (DirectAgent-68:ctx-de5cbf45) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:43 admin02 server: WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-68:ctx-de5cbf45) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 WARN  [resource.virtualnetwork.VirtualRoutingResource] 
> (DirectAgent-268:ctx-873174f6) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 admin02 server: WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-268:ctx-873174f6) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 WARN  [network.vpn.RemoteAccessVpnManagerImpl] 
> (API-Job-Executor-58:ctx-7f86f610 job-1169 ctx-1073feac) (logid:180e35ed) 
> Unable to apply vpn users
> Apr 20 09:24:47 localhost java.lang.IndexOutOfBoundsException: Index: 1, 
> Size: 1
> Apr 20 09:24:47 localhost at 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)
> Apr 20 09:24:47 localhost at java.util.ArrayList.get(ArrayList.java:429)
> Apr 20 09:24:47 localhost at 
> com.cloud.network.vpn.RemoteAccessVpnManagerImpl.applyVpnUsers(RemoteAccessVpnManagerImpl.java:532)
> Apr 20 09:24:47 localhost at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 20 09:24:47 localhost at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 20 09:24:47 localhost at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 20 09:24:47 localhost at 
> java.lang.reflect.Method.invoke(Method.java:498)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
> Apr 20 09:24:47 localhost at 
> com.sun.proxy.$Proxy234.applyVpnUsers(Unknown Source)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.api.command.user.vpn.AddVpnUserCmd.execute(AddVpnUserCmd.java:122)
> Apr 20 09:24:47 localhost at 
> com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
> Apr 20 09:24:47 localhost at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:554)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> Apr 20 09:24:47 localhost at 
> 

[jira] [Commented] (CLOUDSTACK-9628) Fix Template Size in Swift as Secondary Storage

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871288#comment-15871288
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9628:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1770
  
@blueorangutan test


> Fix Template Size in Swift as Secondary Storage
> ---
>
> Key: CLOUDSTACK-9628
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9628
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0, Future
>Reporter: Syed Ahmed
>
> CloudStack incorrectly uses the physical size as the size of the
> template. Ideally, the size should reflect the virtual size. This
> PR fixes that issue.
> https://github.com/apache/cloudstack/pull/1770
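As a rough sketch of the distinction (names illustrative, not the PR's code): a sparse template's on-disk (physical) size can be far smaller than its provisioned (virtual) size, and the size used for accounting should be the virtual one:

```java
public class TemplateSizeSketch {
    // Illustrative holder for the two sizes of a sparse template image.
    private final long physicalSize; // bytes actually stored (sparse/compressed)
    private final long virtualSize;  // provisioned disk size the guest will see

    public TemplateSizeSketch(long physicalSize, long virtualSize) {
        this.physicalSize = physicalSize;
        this.virtualSize = virtualSize;
    }

    // The size used for quotas/capacity should be the virtual size, which is
    // what the clone will occupy once fully written.
    public long accountedSize() {
        return virtualSize;
    }

    public long physicalSize() {
        return physicalSize;
    }
}
```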





[jira] [Commented] (CLOUDSTACK-9628) Fix Template Size in Swift as Secondary Storage

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871289#comment-15871289
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9628:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1770
  
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been 
kicked to run smoke tests







[jira] [Commented] (CLOUDSTACK-9691) unhandled exception in list snapshot command when a primary store is deleted

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871283#comment-15871283
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9691:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1847
  
merging


> unhandled exception in list snapshot command when a primary store is deleted
> 
>
> Key: CLOUDSTACK-9691
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9691
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>
> Repro steps:
> I have a setup with 3 clusters. For one cluster I deleted the primary storage.
> Now when I traverse to the storage tab I get the exception "Unable to locate 
> datastore with id 1".
> DB entries for deleted primary storage :
> "id"  "name"  "uuid"  "pool_type" "port"  "data_center_id"
> "pod_id""cluster_id" "used_bytes"   "capacity_bytes"
> "host_address"  "user_info" "path"  "created" "removed" "update_time" 
>   "status""storage_provider_name" "scope" "hypervisor" "managed"  
> "capacity_iops"
> "1"   ""  \N  "NetworkFilesystem" "2049"  "1" "1" "1" 
> "4674624913408" "5902284816384" "10.147.28.7"   \N  
> "/export/home/shweta/471.xen.primary"   "2016-08-17 08:14:12"   "2016-08-25 
> 04:54:53"   \N  "Maintenance"   "DefaultPrimary" "CLUSTER"  \N  
> "0" \N
> MS log shows :
> 2016-08-26 14:34:36,709 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-1:ctx-90c9ba3a) (logid:115e39ad) ===START=== 10.233.88.59 – 
> GET 
> command=listSnapshots=json=true=1=20&_=1472202277072
> 2016-08-26 14:34:36,747 ERROR [c.c.a.ApiServer] (catalina-exec-1:ctx-90c9ba3a 
> ctx-94284178) (logid:115e39ad) unhandled exception executing api command: 
> [Ljava.lang.String;@77f27ce8
> com.cloud.utils.exception.CloudRuntimeException: Unable to locate datastore 
> with id 1
> at 
> org.apache.cloudstack.storage.datastore.manager.PrimaryDataStoreProviderManagerImpl.getPrimaryDataStore(PrimaryDataStoreProviderManagerImpl.java:61)
> at 
> org.apache.cloudstack.storage.datastore.DataStoreManagerImpl.getDataStore(DataStoreManagerImpl.java:48)
> at 
> com.cloud.api.ApiResponseHelper.getDataStoreRole(ApiResponseHelper.java:571)
> at 
> com.cloud.api.ApiResponseHelper.createSnapshotResponse(ApiResponseHelper.java:537)
> at 
> org.apache.cloudstack.api.command.user.snapshot.ListSnapshotsCmd.execute(ListSnapshotsCmd.java:117)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:132)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:707)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:538)
> at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:297)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:129)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:126)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 
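The trace above shows listSnapshots dying inside createSnapshotResponse() because the datastore lookup throws CloudRuntimeException for a removed primary store. A minimal sketch of the null-safe pattern a fix could use (class, enum, and method names here are hypothetical illustrations, not the actual CloudStack patch):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    enum DataStoreRole { Primary, Image, Unknown }

    // Stand-in for the datastore registry; a removed primary store simply
    // has no entry, mirroring "Unable to locate datastore with id 1".
    static final Map<Long, DataStoreRole> STORES = new HashMap<>();
    static { STORES.put(2L, DataStoreRole.Primary); }

    // Hypothetical null-safe lookup: return a sentinel instead of throwing,
    // so listSnapshots can still render a response for orphaned snapshots.
    static DataStoreRole getDataStoreRole(long storeId) {
        DataStoreRole role = STORES.get(storeId);
        return role != null ? role : DataStoreRole.Unknown;
    }

    public static void main(String[] args) {
        // Store 1 was deleted: the lookup no longer fails the whole API call.
        if (getDataStoreRole(1L) != DataStoreRole.Unknown) throw new AssertionError();
        if (getDataStoreRole(2L) != DataStoreRole.Primary) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The point of the sketch is only that a deleted store becomes a per-snapshot degradation instead of an unhandled exception that aborts the entire list response.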

[jira] [Commented] (CLOUDSTACK-9691) unhandeled excetion in list snapshot command when a primary store is deleted

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871284#comment-15871284
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9691:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1847
  
oops.. will wait for travis


> unhandeled excetion in list snapshot command when a primary store is deleted
> 
>
> Key: CLOUDSTACK-9691
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9691
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>
> Repro steps:
> I have a setup with 3 clusters. For one cluster I deleted the primary storage.
> Now when I traverse to the Storage tab I get the exception "Unable to locate 
> datastore with id 1".
> DB entries for the deleted primary storage:
> "id": "1"
> "name": ""
> "uuid": \N
> "pool_type": "NetworkFilesystem"
> "port": "2049"
> "data_center_id": "1"
> "pod_id": "1"
> "cluster_id": "1"
> "used_bytes": "4674624913408"
> "capacity_bytes": "5902284816384"
> "host_address": "10.147.28.7"
> "user_info": \N
> "path": "/export/home/shweta/471.xen.primary"
> "created": "2016-08-17 08:14:12"
> "removed": "2016-08-25 04:54:53"
> "update_time": \N
> "status": "Maintenance"
> "storage_provider_name": "DefaultPrimary"
> "scope": "CLUSTER"
> "hypervisor": \N
> "managed": "0"
> "capacity_iops": \N
> The MS log shows:
> 2016-08-26 14:34:36,709 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-1:ctx-90c9ba3a) (logid:115e39ad) ===START=== 10.233.88.59 – 
> GET 
> command=listSnapshots&response=json&listAll=true&page=1&pagesize=20&_=1472202277072
> 2016-08-26 14:34:36,747 ERROR [c.c.a.ApiServer] (catalina-exec-1:ctx-90c9ba3a 
> ctx-94284178) (logid:115e39ad) unhandled exception executing api command: 
> [Ljava.lang.String;@77f27ce8
> com.cloud.utils.exception.CloudRuntimeException: Unable to locate datastore 
> with id 1
> at 
> org.apache.cloudstack.storage.datastore.manager.PrimaryDataStoreProviderManagerImpl.getPrimaryDataStore(PrimaryDataStoreProviderManagerImpl.java:61)
> at 
> org.apache.cloudstack.storage.datastore.DataStoreManagerImpl.getDataStore(DataStoreManagerImpl.java:48)
> at 
> com.cloud.api.ApiResponseHelper.getDataStoreRole(ApiResponseHelper.java:571)
> at 
> com.cloud.api.ApiResponseHelper.createSnapshotResponse(ApiResponseHelper.java:537)
> at 
> org.apache.cloudstack.api.command.user.snapshot.ListSnapshotsCmd.execute(ListSnapshotsCmd.java:117)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:132)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:707)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:538)
> at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:297)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:129)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:126)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 

[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871277#comment-15871277
 ] 

ASF subversion and git services commented on CLOUDSTACK-9752:
-

Commit 49dadc5505d85323b0864f50a2a8e36dd05805e5 in cloudstack's branch 
refs/heads/master from [~nvazquez]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=49dadc5 ]

CLOUDSTACK-9752: [Vmware] Optimization of volume attachness to vm


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching the 
> datastore for vmdk files before creating the volume (the search for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files, deleting them if they exist). This 
> search is not necessary when attaching a volume in the Allocated state, because 
> the volume files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really slow, as we 
> can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}
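The guard the optimization relies on can be sketched as follows (a toy illustration under stated assumptions; the method name `needsStaleVmdkSearch` is hypothetical, not the real CloudStack code): skip the expensive datastore-wide search entirely when the volume being attached is still Allocated.

```java
public class Main {
    enum VolumeState { Allocated, Ready, Destroyed }

    // Hypothetical guard mirroring the optimization: the slow datastore-wide
    // search for stale .vmdk / -flat.vmdk / -delta.vmdk files is only needed
    // when the volume may already have files on the datastore.
    static boolean needsStaleVmdkSearch(VolumeState state) {
        // An Allocated volume has never been created on any datastore,
        // so there is nothing to search for and nothing to delete.
        return state != VolumeState.Allocated;
    }

    public static void main(String[] args) {
        if (needsStaleVmdkSearch(VolumeState.Allocated)) throw new AssertionError();
        if (!needsStaleVmdkSearch(VolumeState.Ready)) throw new AssertionError();
        System.out.println("ok");
    }
}
```

As the timestamps in the log above show (10:16 to 10:25 for three searches), each skipped search saves minutes on a large datastore.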



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871279#comment-15871279
 ] 

ASF subversion and git services commented on CLOUDSTACK-9752:
-

Commit bf2f441211f84b8e6f010a187b3691cbf22fd79e in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bf2f441 ]

Merge pull request #1913 from nvazquez/createVolumeOptimization

CLOUDSTACK-9752: [Vmware] Optimization of volume attachness to vm

## Description

This optimization aims to reduce volume-attach slowness caused by searching the 
datastore for vmdk files before creating the volume (the search for `.vmdk`, 
`-flat.vmdk` and `-delta.vmdk` files, deleting them if they exist). This search 
is not necessary when attaching a volume in the Allocated state, because the 
volume files do not yet exist on the datastore.

On large datastores, this search can make volume attachment really slow, 
as we can see in these log lines:


13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:19:42,567 WARN  
[storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
CreateObjectCommand) Unable to locate VMDK file: 
9ce7731fd38b4045afbb7ce9754abbc1.vmdk
13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on [b5ebda046d613e079b5874b169cd848f]

13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in [b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:22:07,581 WARN  
[storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
CreateObjectCommand) Unable to locate VMDK file: 
9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk

13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
[b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
[b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:25:06,362 WARN  
[storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
CreateObjectCommand) Unable to locate VMDK file: 
9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk


* pr/1913:
  CLOUDSTACK-9752: [Vmware] Optimization of volume attachness to vm

Signed-off-by: Rajani Karuturi 


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching the 
> datastore for vmdk files before creating the volume (the search for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files, deleting them if they exist). This 
> search is not necessary when attaching a volume in the Allocated state, because 
> the volume files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really slow, as we 
> can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 

[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871278#comment-15871278
 ] 

ASF subversion and git services commented on CLOUDSTACK-9752:
-

Commit bf2f441211f84b8e6f010a187b3691cbf22fd79e in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bf2f441 ]

Merge pull request #1913 from nvazquez/createVolumeOptimization

CLOUDSTACK-9752: [Vmware] Optimization of volume attachness to vm

## Description

This optimization aims to reduce volume-attach slowness caused by searching the 
datastore for vmdk files before creating the volume (the search for `.vmdk`, 
`-flat.vmdk` and `-delta.vmdk` files, deleting them if they exist). This search 
is not necessary when attaching a volume in the Allocated state, because the 
volume files do not yet exist on the datastore.

On large datastores, this search can make volume attachment really slow, 
as we can see in these log lines:


13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:19:42,567 WARN  
[storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
CreateObjectCommand) Unable to locate VMDK file: 
9ce7731fd38b4045afbb7ce9754abbc1.vmdk
13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on [b5ebda046d613e079b5874b169cd848f]

13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in [b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:22:07,581 WARN  
[storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
CreateObjectCommand) Unable to locate VMDK file: 
9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk

13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
[b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
(DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
[b5ebda046d613e079b5874b169cd848f]
13-mgmt.log:2016-11-02 10:25:06,362 WARN  
[storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
CreateObjectCommand) Unable to locate VMDK file: 
9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk


* pr/1913:
  CLOUDSTACK-9752: [Vmware] Optimization of volume attachness to vm

Signed-off-by: Rajani Karuturi 


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching the 
> datastore for vmdk files before creating the volume (the search for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files, deleting them if they exist). This 
> search is not necessary when attaching a volume in the Allocated state, because 
> the volume files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really slow, as we 
> can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 

[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871281#comment-15871281
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1913


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching the 
> datastore for vmdk files before creating the volume (the search for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files, deleting them if they exist). This 
> search is not necessary when attaching a volume in the Allocated state, because 
> the volume files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really slow, as we 
> can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871275#comment-15871275
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
merging


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for vm instances which 
> have vm snapshots; the snapshots must be removed before changing the service 
> offering.
> h3. Goal
> Extend the current behaviour by supporting a service offering change for vms 
> which have vm snapshots. In that case, previously taken snapshots (if reverted) 
> should restore the previous service offering, while future snapshots should use 
> the new one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this way 
> a snapshot can be reverted to its original state even though the service 
> offering of the vm instance may have changed.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will record the instance's service offering id as 
> {{service_offering_id}}.
> 3. Reverting to a vm snapshot should restore the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> The vm is expected to have service offering A after the last step.
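The proposed bookkeeping can be sketched with a toy model (field and method names are illustrative only, not the actual CloudStack schema or code): a snapshot records the vm's offering id at take time, and revert restores that recorded id.

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Minimal model of the proposed vm_snapshots.service_offering_id column.
    static final Map<String, Long> vmOffering = new HashMap<>();       // vm -> current offering id
    static final Map<String, Long> snapshotOffering = new HashMap<>(); // snapshot -> offering id at take time

    static void takeSnapshot(String vm, String snap) {
        // New snapshots record the instance's offering id (step 2 of the proposal).
        snapshotOffering.put(snap, vmOffering.get(vm));
    }

    static void revert(String vm, String snap) {
        // Revert restores the snapshot's recorded offering (step 3), not the current one.
        vmOffering.put(vm, snapshotOffering.get(snap));
    }

    public static void main(String[] args) {
        vmOffering.put("vm1", 1L);      // deploy with offering A (id 1)
        takeSnapshot("vm1", "snap1");   // snap1 records offering A
        vmOffering.put("vm1", 2L);      // change to offering B (id 2)
        revert("vm1", "snap1");         // revert to snap1
        if (vmOffering.get("vm1") != 1L) throw new AssertionError();
        System.out.println("ok");       // vm is back on offering A
    }
}
```

This mirrors the example use case above: without the recorded column, the revert would have no way of knowing the vm was on offering A when snap1 was taken.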



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871274#comment-15871274
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
merging


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching the 
> datastore for vmdk files before creating the volume (the search for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files, deleting them if they exist). This 
> search is not necessary when attaching a volume in the Allocated state, because 
> the volume files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really slow, as we 
> can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871270#comment-15871270
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1727


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for vm instances which 
> have vm snapshots; the snapshots must be removed before changing the service 
> offering.
> h3. Goal
> Extend the current behaviour by supporting a service offering change for vms 
> which have vm snapshots. In that case, previously taken snapshots (if reverted) 
> should restore the previous service offering, while future snapshots should use 
> the new one.
> h3. Proposed solution:
> 1. Add a {{service_offering_id}} column to the {{vm_snapshots}} table: this way 
> a snapshot can be reverted to its original state even though the service 
> offering of the vm instance may have changed.
> NOTE: Existing vm snapshots are populated by the update script via {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will record the instance's service offering id as 
> {{service_offering_id}}.
> 3. Reverting to a vm snapshot should restore the snapshot's 
> {{service_offering_id}} value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> The vm is expected to have service offering A after the last step.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871266#comment-15871266
 ] 

ASF subversion and git services commented on CLOUDSTACK-9539:
-

Commit 3a6d98289cfd139f9f2a520e231253ec8ddb9f11 in cloudstack's branch 
refs/heads/master from [~nvazquez]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3a6d982 ]

CLOUDSTACK-9539: Support changing Service offering for instance with VM 
Snapshots




[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871267#comment-15871267
 ] 

ASF subversion and git services commented on CLOUDSTACK-9539:
-

Commit 12497d04bffe0fd7e4f6c381d2d80a5a3852acfa in cloudstack's branch 
refs/heads/master from [~rajanik]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=12497d0 ]

Merge pull request #1727 from nvazquez/change-serv-offering

CLOUDSTACK-9539: Support changing Service offering for instance with VM 
Snapshots

## Actual behaviour

CloudStack doesn't support changing service offering for vm instances which 
have vm snapshots, they should be removed before changing service offering.
## Goal

Extend actual behaviour by supporting changing service offering for vms which 
have vm snapshots. In that case, previously taken snapshots (if reverted) 
should use previous service offering, future snapshots should use the newest.
## Proposed solution:
1. Adding `service_offering_id` column on `vm_snapshots` table: This way 
snapshot can be reverted to original state even though service offering can be 
changed for vm instance.
   NOTE: Existing vm snapshots are populated on update script by:
   `UPDATE vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET 
s.service_offering_id = v.service_offering_id;`
2. New vm snapshots will use instance vm service offering id as 
`service_offering_id`
3. Revert to vm snapshots should use vm snapshot's `service_offering_id` value.
## Example use case:
- Deploy vm using service offering A
- Take vm snapshot -> snap1 (service offering A)
- Stop vm
- Change vm service offering to B
- Revert to VM snapshot snap 1
- Start vm

It is expected that vm has service offering A after last step

* pr/1727:
  CLOUDSTACK-9539: Support changing Service offering for instance with VM 
Snapshots

Signed-off-by: Rajani Karuturi 






[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871272#comment-15871272
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9783:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1944
  
I've created a Trillian environment from the PR and ran the test; it 
passed:

```
test_list_vms_metrics (tests.smoke.test_metrics_api.TestMetrics) ... === 
TestName: test_list_vms_metrics | Status : SUCCESS ===
ok
```


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.
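The backend-aggregation idea can be sketched as below; the field names and shapes are illustrative, not the real API response format.

```python
# Per-VM stats the UI previously fetched with several API calls and joined
# client-side; field names here are illustrative only.
vms = [
    {"id": "vm-1", "host": "h1", "cpu_used": 0.40, "mem_used_mb": 512},
    {"id": "vm-2", "host": "h1", "cpu_used": 0.10, "mem_used_mb": 256},
    {"id": "vm-3", "host": "h2", "cpu_used": 0.70, "mem_used_mb": 1024},
]

def list_vms_metrics(vms):
    """Compute the derived columns server-side so the UI renders a single
    response instead of issuing one round-trip per VM."""
    return [
        {
            "id": vm["id"],
            "host": vm["host"],
            "cpu_used_pct": round(vm["cpu_used"] * 100, 1),
            "mem_used_mb": vm["mem_used_mb"],
        }
        for vm in vms
    ]

rows = list_vms_metrics(vms)
assert [r["cpu_used_pct"] for r in rows] == [40.0, 10.0, 70.0]
```

With high API latency, one request for the whole table replaces N per-row requests, which is where the rendering time goes in large environments.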





[jira] [Commented] (CLOUDSTACK-9628) Fix Template Size in Swift as Secondary Storage

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871262#comment-15871262
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9628:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1770
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-499


> Fix Template Size in Swift as Secondary Storage
> ---
>
> Key: CLOUDSTACK-9628
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9628
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0, Future
>Reporter: Syed Ahmed
>
> CloudStack incorrectly uses the physical size as the size of the
> template. Ideally, the size should reflect the virtual size. This
> PR fixes that issue.
> https://github.com/apache/cloudstack/pull/1770
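The physical/virtual distinction shows up directly in the JSON that `qemu-img info --output=json` emits for a QCOW2 template; the sample values below are illustrative, not from a real template.

```python
import json

# Abridged sample of `qemu-img info --output=json <template>` output.
sample = '{"virtual-size": 21474836480, "actual-size": 1316511744, "format": "qcow2"}'

info = json.loads(sample)
virtual_size = info["virtual-size"]   # provisioned disk size: what should be reported
physical_size = info["actual-size"]   # on-disk (sparse/compressed) size: the bug reported this

# A sparse QCOW2 occupies far fewer bytes than it provisions.
assert virtual_size > physical_size
```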





[jira] [Commented] (CLOUDSTACK-9691) unhandled exception in list snapshot command when a primary store is deleted

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871255#comment-15871255
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9691:


Github user anshul1886 commented on the issue:

https://github.com/apache/cloudstack/pull/1847
  
@karuturi @nvazquez , Added the marvin test from #1735.


> unhandled exception in list snapshot command when a primary store is deleted
> 
>
> Key: CLOUDSTACK-9691
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9691
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>
> Repro steps:
> I have a setup with 3 clusters. For one cluster I deleted the primary storage.
> Now when I traverse to the storage tab I get the exception "Unable to locate 
> datastore with id 1".
> DB entry for the deleted primary storage (columns reflowed for readability):
> id = 1, name = "", uuid = NULL
> pool_type = "NetworkFilesystem", port = 2049
> data_center_id = 1, pod_id = 1, cluster_id = 1
> used_bytes = 4674624913408, capacity_bytes = 5902284816384
> host_address = "10.147.28.7", user_info = NULL
> path = "/export/home/shweta/471.xen.primary"
> created = "2016-08-17 08:14:12", removed = "2016-08-25 04:54:53", update_time = NULL
> status = "Maintenance", storage_provider_name = "DefaultPrimary", scope = "CLUSTER"
> hypervisor = NULL, managed = 0, capacity_iops = NULL
> MS log shows :
> 2016-08-26 14:34:36,709 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-1:ctx-90c9ba3a) (logid:115e39ad) ===START=== 10.233.88.59 – 
> GET 
> command=listSnapshots=json=true=1=20&_=1472202277072
> 2016-08-26 14:34:36,747 ERROR [c.c.a.ApiServer] (catalina-exec-1:ctx-90c9ba3a 
> ctx-94284178) (logid:115e39ad) unhandled exception executing api command: 
> [Ljava.lang.String;@77f27ce8
> com.cloud.utils.exception.CloudRuntimeException: Unable to locate datastore 
> with id 1
> at 
> org.apache.cloudstack.storage.datastore.manager.PrimaryDataStoreProviderManagerImpl.getPrimaryDataStore(PrimaryDataStoreProviderManagerImpl.java:61)
> at 
> org.apache.cloudstack.storage.datastore.DataStoreManagerImpl.getDataStore(DataStoreManagerImpl.java:48)
> at 
> com.cloud.api.ApiResponseHelper.getDataStoreRole(ApiResponseHelper.java:571)
> at 
> com.cloud.api.ApiResponseHelper.createSnapshotResponse(ApiResponseHelper.java:537)
> at 
> org.apache.cloudstack.api.command.user.snapshot.ListSnapshotsCmd.execute(ListSnapshotsCmd.java:117)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:132)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:707)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:538)
> at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:297)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:129)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:126)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 
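One defensive pattern for this failure (a sketch, not the actual patch): when a snapshot references a primary store that has since been removed, degrade to a placeholder role instead of raising, so listSnapshots still succeeds.

```python
# A snapshot row may reference a primary store (id 1 here) that has been
# removed; look-ups must tolerate the miss instead of raising.
stores = {2: {"id": 2, "role": "Primary"}}  # store id 1 was deleted

def get_data_store_role(store_id: int) -> str:
    store = stores.get(store_id)
    if store is None:
        return "Unknown"  # deleted store: degrade gracefully, keep the list API working
    return store["role"]

assert get_data_store_role(1) == "Unknown"   # no exception for the removed store
assert get_data_store_role(2) == "Primary"
```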

[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871239#comment-15871239
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1856
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> VR on shared network not starting on KVM
> 
>
> Key: CLOUDSTACK-9569
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9569
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.9.0
>Reporter: John Burwell
>Priority: Critical
> Fix For: 4.9.2.0, 4.10.1.0, 4.11.0.0
>
> Attachments: cloud.log
>
>
> A VR for a shared network on KVM fails to complete startup with the following 
> behavior:
> # VR starts on KVM
> # Agent pings VR
> # Increase timeout from 120 seconds to 1200 seconds
> # API configuration starts
> The Management Server reports that the command times out. Please see the 
> attached {{cloud.log}}, which depicts the activity of the VR through the 
> timeout. This failure does not occur on VMware.





[jira] [Commented] (CLOUDSTACK-9569) VR on shared network not starting on KVM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871238#comment-15871238
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9569:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1856
  
LGTM.
@blueorangutan package




[jira] [Commented] (CLOUDSTACK-9356) VPC add VPN User fails same error as CLOUDSTACK-8927

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871234#comment-15871234
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9356:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1903
  
@blueorangutan package


> VPC add VPN User fails same error as CLOUDSTACK-8927
> 
>
> Key: CLOUDSTACK-9356
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9356
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VPC, XenServer
>Affects Versions: 4.8.0, 4.9.0
> Environment: Two CentOS7 MGMT Servers, Two XenServerClusters, 
> Advanced Networking, VLAN isolated
>Reporter: Thomas
>Priority: Critical
>
> When we try to add a VPN user on a VPC, the following error occurs:
> Management Server:
> ---
> Apr 20 09:24:43 WARN  [resource.virtualnetwork.VirtualRoutingResource] 
> (DirectAgent-68:ctx-de5cbf45) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:43 admin02 server: WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-68:ctx-de5cbf45) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 WARN  [resource.virtualnetwork.VirtualRoutingResource] 
> (DirectAgent-268:ctx-873174f6) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 admin02 server: WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-268:ctx-873174f6) (logid:180e35ed) Expected 1 answers while 
> executing VpnUsersCfgCommand but received 2
> Apr 20 09:24:47 WARN  [network.vpn.RemoteAccessVpnManagerImpl] 
> (API-Job-Executor-58:ctx-7f86f610 job-1169 ctx-1073feac) (logid:180e35ed) 
> Unable to apply vpn users
> Apr 20 09:24:47 localhost java.lang.IndexOutOfBoundsException: Index: 1, 
> Size: 1
> Apr 20 09:24:47 localhost at 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)
> Apr 20 09:24:47 localhost at java.util.ArrayList.get(ArrayList.java:429)
> Apr 20 09:24:47 localhost at 
> com.cloud.network.vpn.RemoteAccessVpnManagerImpl.applyVpnUsers(RemoteAccessVpnManagerImpl.java:532)
> Apr 20 09:24:47 localhost at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 20 09:24:47 localhost at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 20 09:24:47 localhost at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 20 09:24:47 localhost at 
> java.lang.reflect.Method.invoke(Method.java:498)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
> Apr 20 09:24:47 localhost at 
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
> Apr 20 09:24:47 localhost at 
> com.sun.proxy.$Proxy234.applyVpnUsers(Unknown Source)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.api.command.user.vpn.AddVpnUserCmd.execute(AddVpnUserCmd.java:122)
> Apr 20 09:24:47 localhost at 
> com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
> Apr 20 09:24:47 localhost at 
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:554)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> Apr 20 09:24:47 localhost at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> Apr 20 09:24:47 localhost at 
> 

[jira] [Commented] (CLOUDSTACK-9356) VPC add VPN User fails same error as CLOUDSTACK-8927

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871235#comment-15871235
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9356:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1903
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.



[jira] [Commented] (CLOUDSTACK-9405) listDomains API call takes an extremely long time to respond

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871236#comment-15871236
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9405:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1901
  
LGTM, @ustcweizhou can you add a marvin test to validate api changes?


> listDomains API call takes an extremely long time to respond
> 
>
> Key: CLOUDSTACK-9405
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9405
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.8.0
>Reporter: dsclose
>  Labels: performance
>
> We recently upgraded from Cloudstack 4.5.2 to Cloudstack 4.8.0. Since this 
> update, the listDomains API call has started taking an extremely long time to 
> respond. This has caused issues with our services that rely on this API call. 
> Initially they simply timed out until we increased the thresholds. Now we 
> have processes that used to take a few seconds taking many minutes.
> This is so problematic for us that our organisation has put a halt on further 
> updates of Cloudstack 4.5.2 installations. If reversing the update of zones 
> already on 4.8.0 was feasible, we would have reverted back to 4.5.2.
> Here is a table of the times we're seeing:
> ||CS Version||Domain Count||API Response Time||
> |4.5.2|251|~3s|
> |4.8.0|182|~26s|
> |4.8.0|<10|<1s|
> This small data sample indicates that the response time for zones with a 
> larger amount of domains is significantly worse after the update to 4.8.0. 
> Zones with few domains aren't able to reproduce this issue.
> I recall a bug being resolved recently that concerned reducing the response 
> time for list* API calls. I also recall [~remibergsma] resolving a bug 
> concerning the sorting of the listDomains response. Is it possible that these 
> issues are connected?





[jira] [Commented] (CLOUDSTACK-9462) Systemd packaging for Ubuntu 16.04

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871233#comment-15871233
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9462:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@borisstoyanov can you check Trillian failures?
@ustcweizhou can you check Jenkins/Travis failures? I'm in favour of 
removing 12.04 support (for mgmt server) in 4.9+, since the support for Ubuntu 
12.04 ends in April 2017.


> Systemd packaging for Ubuntu 16.04
> --
>
> Key: CLOUDSTACK-9462
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9462
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> Support for building deb packages that will work on Ubuntu 16.04





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871229#comment-15871229
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
LGTM.


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> steps to reproduce
> 1. Create two isolated guest networks with the same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have the 
> same value (the user can specify the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.
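The failing check likely matches the secondary IP by value alone, so the twin network with the same CIDR triggers a false conflict. A minimal sketch of a network-scoped check (class, field, and method names here are hypothetical, not the actual CloudStack code):

```java
import java.util.List;

public class SecondaryIpCheck {
    static class IpMapping {
        final String ip;
        final long networkId;
        final boolean staticNat;
        IpMapping(String ip, long networkId, boolean staticNat) {
            this.ip = ip;
            this.networkId = networkId;
            this.staticNat = staticNat;
        }
    }

    // Releasing a secondary IP should only be blocked when a static-NAT
    // mapping exists for that IP in the SAME network; matching on the IP
    // string alone wrongly hits the other network with the same CIDR.
    static boolean blockedByStaticNat(String ip, long networkId, List<IpMapping> mappings) {
        return mappings.stream().anyMatch(m ->
                m.staticNat && m.ip.equals(ip) && m.networkId == networkId);
    }
}
```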





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871228#comment-15871228
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
This might affect other/all APIs and would therefore require regression 
testing, but LGTM (did not test this).


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listnetworks
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, list virtualmachines
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}
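A likely culprit for the "/ by zero" above is integer division by the user-supplied pagesize. A minimal sketch of the failure mode and a guard, with hypothetical helper names (not the actual CloudStack pagination code):

```java
public class PageGuard {
    // Hypothetical pagination helper: dividing by the user-supplied
    // pagesize without validation is the classic source of the
    // "Error 530: / by zero" response seen above when pagesize=0.
    static boolean isValidPageSize(long pageSize) {
        return pageSize > 0;
    }

    static long pageCount(long total, long pageSize) {
        if (!isValidPageSize(pageSize)) {
            // Validate instead of dividing, so the API can return a clear
            // parameter error rather than an ArithmeticException.
            throw new IllegalArgumentException("pagesize must be > 0");
        }
        return (total + pageSize - 1) / pageSize; // ceiling division
    }
}
```

listVirtualMachines apparently tolerates pagesize=0 (returning only a count), which suggests the validation is done per-command rather than centrally.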





[jira] [Commented] (CLOUDSTACK-9628) Fix Template Size in Swift as Secondary Storage

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871227#comment-15871227
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9628:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1770
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


> Fix Template Size in Swift as Secondary Storage
> ---
>
> Key: CLOUDSTACK-9628
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9628
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0, Future
>Reporter: Syed Ahmed
>
> CloudStack incorrectly uses the physical size as the size of the
> template. Ideally, the size should reflect the virtual size. This
> PR fixes that issue.
> https://github.com/apache/cloudstack/pull/1770





[jira] [Commented] (CLOUDSTACK-9628) Fix Template Size in Swift as Secondary Storage

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871226#comment-15871226
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9628:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1770
  
LGTM, did not test it.
@blueorangutan package


> Fix Template Size in Swift as Secondary Storage
> ---
>
> Key: CLOUDSTACK-9628
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9628
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.9.0, Future
>Reporter: Syed Ahmed
>
> CloudStack incorrectly uses the physical size as the size of the
> template. Ideally, the size should reflect the virtual size. This
> PR fixes that issue.
> https://github.com/apache/cloudstack/pull/1770





[jira] [Commented] (CLOUDSTACK-9711) Remote Access vpn user add fail ignored when the VR in stopped state

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871222#comment-15871222
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9711:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1874
  
### ACS CI BVT Run
 **Summary:**
 Build Number 349
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_03_RVR_Network_check_router_state Failing since 2 runs


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Remote Access vpn user add fail ignored when the VR in stopped state
> 
>
> Key: CLOUDSTACK-9711
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9711
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.10.0.0
>
>
> 1. Create multiple networks in an account
> 2. Enable remote access vpn.
> 3. Stop the VR for one of the networks.
> 4. Configure a VPN user. 
> If configuring a VPN user in one of the networks fails, the failure is ignored.
> The failure should be shown in the API response.





[jira] [Commented] (CLOUDSTACK-8284) Primary_storage value is not updated in resource_count table after VM deletion

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871221#comment-15871221
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8284:


Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/1857
  
 ### ACS CI BVT Run
 **Summary:**
 Build Number 348
 Hypervisor xenserver
 NetworkType Advanced
 Passed=103
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_03_RVR_Network_check_router_state Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


> Primary_storage value is not updated in resource_count table after VM 
> deletion
> ---
>
> Key: CLOUDSTACK-8284
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8284
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.4.2
>Reporter: Sonali Jadhav
>Assignee: adwait patankar
>
> I was doing some tests for checking resource limitation behavior on 
> sub-domain, a/c and project.
> For the account, I didn't set any limits. Instead, I set limits on the sub-domain. 
> But this specific primary storage limit (40 GB) behaved strangely.
> At first in sub-domain, I created new instance (without project), and 
> assigned 41 GB, and it successfully created VM. Then I deleted that VM with 
> Expunge option enabled, and tried to create vm again with disk size 42, which 
> failed with error "Maximum number of resources of type 'primary_storage' for 
> domain id=3 has been exceeded"
> Then I set the disk size to 39 GB; that also failed with the same error. Now 
> creating a disk of any size fails, even though there are no instances or volumes 
> under that sub-domain.
> So I checked the resource_count table, and the "count" for primary_storage was 
> not set to zero.
> So then I logged into same domain from UI, and in "Domains" section clicked 
> on "update resource count". After which in resource_count table 
> "primary_storage" table value went back to zero.
> For Reference: 
> http://comments.gmane.org/gmane.comp.apache.cloudstack.user/17008 





[jira] [Commented] (CLOUDSTACK-8880) Allocated memory more than total memory on a KVM host

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871220#comment-15871220
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8880:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/847
  
LGTM, @kishankavala please squash your changes and amend git summary with 
jira id.
/cc @karuturi 


> Allocated memory more than total memory on a KVM host
> -
>
> Key: CLOUDSTACK-8880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Reporter: Kishan Kavala
>Assignee: Kishan Kavala
>
> With memory over-provisioning set to 1, when the mgmt server starts VMs in 
> parallel on one host, the memory allocated on that KVM host can be larger 
> than its actual physical memory.
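One way to close such a parallel-start race is a single atomic reservation per host. A minimal sketch under that assumption (illustrative names, not CloudStack's actual capacity manager):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: reserving host memory with a compare-and-set loop
// so concurrent VM starts cannot jointly exceed the host's physical
// memory when the over-provisioning factor is 1.
public class HostMemoryReservation {
    private final long totalMem;                        // physical memory, bytes
    private final AtomicLong reserved = new AtomicLong(0);

    public HostMemoryReservation(long totalMem) {
        this.totalMem = totalMem;
    }

    public boolean tryReserve(long vmMem) {
        while (true) {
            long cur = reserved.get();
            if (cur + vmMem > totalMem) {
                return false;                           // would over-allocate
            }
            if (reserved.compareAndSet(cur, cur + vmMem)) {
                return true;                            // reservation won atomically
            }
            // CAS lost to a concurrent start; retry with the fresh value.
        }
    }
}
```

With over-provisioning factors other than 1, totalMem would be scaled by the factor before reservations are checked.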





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871215#comment-15871215
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
LGTM. /cc @karuturi 


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running works fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachment to VM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871214#comment-15871214
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
LGTM. /cc @karuturi 


> [Vmware] Optimization of volume attachment to VM
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching the 
> datastore for vmdk files before creating the volume (searching for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files to delete them if they exist). This 
> search is not necessary when attaching a volume in Allocated state, because the 
> volume files don't yet exist on the datastore.
> On large datastores, this search can make volume attachment really 
> slow, as we can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}
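The optimization described above reduces to a single state check before the expensive search. A sketch with illustrative names (not the real attach code path):

```java
public class VolumeAttachSketch {
    enum VolumeState { ALLOCATED, READY }

    // An Allocated volume has never been created on the datastore, so the
    // minutes-long search for leftover .vmdk / -flat.vmdk / -delta.vmdk
    // files (visible in the log timestamps above) can be skipped entirely.
    static boolean needsLeftoverFileSearch(VolumeState state) {
        return state != VolumeState.ALLOCATED;
    }
}
```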





[jira] [Commented] (CLOUDSTACK-9728) query to traffic sentinel requesting usage stats is too long, resulting in traffic sentinel sending HTTP 414 error response

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871212#comment-15871212
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9728:


Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1886
  
@rafaelweingartner  Traffic sentinel supports POST requests.



> query to traffic sentinel requesting usage stats is too long, resulting in 
> traffic sentinel sending HTTP 414 error response
> --
>
> Key: CLOUDSTACK-9728
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9728
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Devices
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.10.0.0
>
>
> The query sent to the traffic sentinel to retrieve network usage information 
> is rejected with a 414 error because the string is too long for it to process.
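Since the traffic sentinel supports POST requests, moving the query out of the URL and into the request body removes the request-line length limit behind the 414. A hedged sketch of building such a form body (parameter names are hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PostQuerySketch {
    // Encoding the long usage query into a POST body keeps the request
    // line short no matter how many interfaces the query names, so the
    // server can no longer reject it with HTTP 414 (URI Too Long).
    static String buildBody(String script, String query) {
        return "script=" + URLEncoder.encode(script, StandardCharsets.UTF_8)
             + "&query=" + URLEncoder.encode(query, StandardCharsets.UTF_8);
    }
}
```

The body would then be written to the connection's output stream with Content-Type `application/x-www-form-urlencoded` instead of being appended to the GET URL.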





[jira] [Commented] (CLOUDSTACK-9682) Block VM migration to a storage which is in maintenance mode

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871210#comment-15871210
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9682:


Github user koushik-das commented on the issue:

https://github.com/apache/cloudstack/pull/1838
  
Code changes LGTM
@karuturi This can be merged


> Block VM migration to a storage which is in maintenance mode
> -
>
> Key: CLOUDSTACK-9682
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9682
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Description
> Put a VMFS storage pool (cluster-wide/zone-wide) in maintenance mode and try to 
> migrate a VM via an API call to that storage pool.
> Steps
> 1. Put one of the storage pools in a cluster in maintenance mode
> 2. Via an API call, migrate a VM from one cluster to another, targeting the above 
> storage pool
> 3. Even though the storage pool is in maintenance mode, the migration task is 
> initiated and completed without any error
> Expectation
> CloudStack should block this kind of migration
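The expected validation amounts to a state check on the target pool before the migration task is scheduled. A sketch with illustrative names (not the actual migration planner):

```java
public class MigrationCheckSketch {
    enum PoolState { UP, MAINTENANCE }

    // The migration API should refuse any target storage pool that is in
    // maintenance mode instead of silently completing the move.
    static boolean canMigrateTo(PoolState target) {
        return target != PoolState.MAINTENANCE;
    }
}
```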





[jira] [Commented] (CLOUDSTACK-9691) unhandled exception in list snapshot command when a primary store is deleted

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871208#comment-15871208
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9691:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1847
  
@anshul1886 great, thanks!


> unhandled exception in list snapshot command when a primary store is deleted
> 
>
> Key: CLOUDSTACK-9691
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9691
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>
> Repro steps:
> I have a setup with 3 clusters. For one cluster I deleted the primary storage.
> Now when I navigate to the storage tab, I get the exception "Unable to locate 
> datastore with id 1".
> DB entries for deleted primary storage :
> "id"  "name"  "uuid"  "pool_type" "port"  "data_center_id"
> "pod_id""cluster_id" "used_bytes"   "capacity_bytes"
> "host_address"  "user_info" "path"  "created" "removed" "update_time" 
>   "status""storage_provider_name" "scope" "hypervisor" "managed"  
> "capacity_iops"
> "1"   ""  \N  "NetworkFilesystem" "2049"  "1" "1" "1" 
> "4674624913408" "5902284816384" "10.147.28.7"   \N  
> "/export/home/shweta/471.xen.primary"   "2016-08-17 08:14:12"   "2016-08-25 
> 04:54:53"   \N  "Maintenance"   "DefaultPrimary" "CLUSTER"  \N  
> "0" \N
> MS log shows :
> 2016-08-26 14:34:36,709 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-1:ctx-90c9ba3a) (logid:115e39ad) ===START=== 10.233.88.59 – 
> GET 
> command=listSnapshots=json=true=1=20&_=1472202277072
> 2016-08-26 14:34:36,747 ERROR [c.c.a.ApiServer] (catalina-exec-1:ctx-90c9ba3a 
> ctx-94284178) (logid:115e39ad) unhandled exception executing api command: 
> [Ljava.lang.String;@77f27ce8
> com.cloud.utils.exception.CloudRuntimeException: Unable to locate datastore 
> with id 1
> at 
> org.apache.cloudstack.storage.datastore.manager.PrimaryDataStoreProviderManagerImpl.getPrimaryDataStore(PrimaryDataStoreProviderManagerImpl.java:61)
> at 
> org.apache.cloudstack.storage.datastore.DataStoreManagerImpl.getDataStore(DataStoreManagerImpl.java:48)
> at 
> com.cloud.api.ApiResponseHelper.getDataStoreRole(ApiResponseHelper.java:571)
> at 
> com.cloud.api.ApiResponseHelper.createSnapshotResponse(ApiResponseHelper.java:537)
> at 
> org.apache.cloudstack.api.command.user.snapshot.ListSnapshotsCmd.execute(ListSnapshotsCmd.java:117)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:132)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:707)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:538)
> at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:297)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:129)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:126)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
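A defensive fix direction for the exception above is to fall back gracefully when the referenced datastore no longer exists. A sketch with illustrative names (not the real ApiResponseHelper/DataStoreManager API):

```java
import java.util.Map;

public class DataStoreRoleSketch {
    enum Role { PRIMARY, IMAGE, UNKNOWN }

    // When a snapshot still references a primary store that has been
    // removed, resolve to a neutral role so listSnapshots can keep
    // rendering instead of failing deep inside the response builder
    // with "Unable to locate datastore with id 1".
    static Role roleFor(long storeId, Map<Long, Role> liveStores) {
        Role r = liveStores.get(storeId);
        return r != null ? r : Role.UNKNOWN;
    }
}
```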

[jira] [Commented] (CLOUDSTACK-9691) unhandled exception in list snapshot command when a primary store is deleted

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871205#comment-15871205
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9691:


Github user anshul1886 commented on the issue:

https://github.com/apache/cloudstack/pull/1847
  
@nvazquez, Yeah I am fine with it. I will add the test from your PR to my 
PR.


> unhandled exception in list snapshot command when a primary store is deleted
> 
>
> Key: CLOUDSTACK-9691
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9691
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>
> Repro steps:
> I have a setup with 3 clusters. For one cluster I deleted the primary storage.
> Now when I navigate to the storage tab, I get the exception "Unable to locate 
> datastore with id 1".
> DB entries for deleted primary storage :
> "id"  "name"  "uuid"  "pool_type" "port"  "data_center_id"
> "pod_id""cluster_id" "used_bytes"   "capacity_bytes"
> "host_address"  "user_info" "path"  "created" "removed" "update_time" 
>   "status""storage_provider_name" "scope" "hypervisor" "managed"  
> "capacity_iops"
> "1"   ""  \N  "NetworkFilesystem" "2049"  "1" "1" "1" 
> "4674624913408" "5902284816384" "10.147.28.7"   \N  
> "/export/home/shweta/471.xen.primary"   "2016-08-17 08:14:12"   "2016-08-25 
> 04:54:53"   \N  "Maintenance"   "DefaultPrimary" "CLUSTER"  \N  
> "0" \N
> MS log shows :
> 2016-08-26 14:34:36,709 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-1:ctx-90c9ba3a) (logid:115e39ad) ===START=== 10.233.88.59 – 
> GET 
> command=listSnapshots=json=true=1=20&_=1472202277072
> 2016-08-26 14:34:36,747 ERROR [c.c.a.ApiServer] (catalina-exec-1:ctx-90c9ba3a 
> ctx-94284178) (logid:115e39ad) unhandled exception executing api command: 
> [Ljava.lang.String;@77f27ce8
> com.cloud.utils.exception.CloudRuntimeException: Unable to locate datastore 
> with id 1
> at 
> org.apache.cloudstack.storage.datastore.manager.PrimaryDataStoreProviderManagerImpl.getPrimaryDataStore(PrimaryDataStoreProviderManagerImpl.java:61)
> at 
> org.apache.cloudstack.storage.datastore.DataStoreManagerImpl.getDataStore(DataStoreManagerImpl.java:48)
> at 
> com.cloud.api.ApiResponseHelper.getDataStoreRole(ApiResponseHelper.java:571)
> at 
> com.cloud.api.ApiResponseHelper.createSnapshotResponse(ApiResponseHelper.java:537)
> at 
> org.apache.cloudstack.api.command.user.snapshot.ListSnapshotsCmd.execute(ListSnapshotsCmd.java:117)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:132)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:707)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:538)
> at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:297)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:129)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:126)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2268)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> 

[jira] [Commented] (CLOUDSTACK-9691) unhandled exception in list snapshot command when a primary store is deleted

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871189#comment-15871189
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9691:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1847
  
Hi @anshul1886,
I've deployed and tested your PR by replicating the issue we had in our 
VMware environment, and it passed successfully! I think your solution is much 
cleaner and simpler than mine. Do you agree if I close mine and we go ahead with 
your PR? Would you mind adding the marvin test written in mine to your PR (under 
test_snapshots.py)?


> unhandled exception in list snapshot command when a primary store is deleted
> 
>
> Key: CLOUDSTACK-9691
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9691
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Anshul Gangwar
>
> Repro steps:
> I have a setup with 3 clusters. For one cluster I deleted the primary storage.
> Now when I navigate to the storage tab, I get the exception "Unable to locate 
> datastore with id 1".
> DB entries for deleted primary storage :
> "id"  "name"  "uuid"  "pool_type" "port"  "data_center_id"
> "pod_id""cluster_id" "used_bytes"   "capacity_bytes"
> "host_address"  "user_info" "path"  "created" "removed" "update_time" 
>   "status""storage_provider_name" "scope" "hypervisor" "managed"  
> "capacity_iops"
> "1"   ""  \N  "NetworkFilesystem" "2049"  "1" "1" "1" 
> "4674624913408" "5902284816384" "10.147.28.7"   \N  
> "/export/home/shweta/471.xen.primary"   "2016-08-17 08:14:12"   "2016-08-25 
> 04:54:53"   \N  "Maintenance"   "DefaultPrimary" "CLUSTER"  \N  
> "0" \N
> MS log shows :
> 2016-08-26 14:34:36,709 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-1:ctx-90c9ba3a) (logid:115e39ad) ===START=== 10.233.88.59 – 
> GET 
> command=listSnapshots=json=true=1=20&_=1472202277072
> 2016-08-26 14:34:36,747 ERROR [c.c.a.ApiServer] (catalina-exec-1:ctx-90c9ba3a 
> ctx-94284178) (logid:115e39ad) unhandled exception executing api command: 
> [Ljava.lang.String;@77f27ce8
> com.cloud.utils.exception.CloudRuntimeException: Unable to locate datastore 
> with id 1
> at 
> org.apache.cloudstack.storage.datastore.manager.PrimaryDataStoreProviderManagerImpl.getPrimaryDataStore(PrimaryDataStoreProviderManagerImpl.java:61)
> at 
> org.apache.cloudstack.storage.datastore.DataStoreManagerImpl.getDataStore(DataStoreManagerImpl.java:48)
> at 
> com.cloud.api.ApiResponseHelper.getDataStoreRole(ApiResponseHelper.java:571)
> at 
> com.cloud.api.ApiResponseHelper.createSnapshotResponse(ApiResponseHelper.java:537)
> at 
> org.apache.cloudstack.api.command.user.snapshot.ListSnapshotsCmd.execute(ListSnapshotsCmd.java:117)
> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:132)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:707)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:538)
> at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:297)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:129)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:126)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> 
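The stack trace shows `ApiResponseHelper.getDataStoreRole` asking the datastore manager for a pool whose row has been removed, which surfaces as a `CloudRuntimeException` to the API caller. Below is a minimal defensive-lookup sketch; all class and method names here are illustrative stand-ins, not CloudStack's actual fix:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class SnapshotStoreGuard {
    enum DataStoreRole { Primary, Image }

    // Stand-in for the datastore registry, keyed by primary storage pool id.
    static final Map<Long, DataStoreRole> STORES = new HashMap<>();

    // Return the role only if the pool still exists, instead of throwing
    // "Unable to locate datastore with id ..." for a deleted pool.
    static Optional<DataStoreRole> tryGetDataStoreRole(long poolId) {
        return Optional.ofNullable(STORES.get(poolId));
    }

    public static void main(String[] args) {
        STORES.put(2L, DataStoreRole.Primary);
        // Pool 1 was deleted: fall back to a default role so the snapshot
        // listing can still be rendered.
        DataStoreRole role = tryGetDataStoreRole(1L).orElse(DataStoreRole.Image);
        System.out.println(role); // prints: Image
    }
}
```

Returning an `Optional` lets the response builder fall back to a sensible default instead of aborting the entire listSnapshots call.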

[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871028#comment-15871028
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
Trillian test result (tid-842)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33489 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1947-t842-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 47 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 339.16 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 164.89 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.05 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 244.85 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 285.92 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 523.25 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 504.17 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1403.08 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 557.94 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 748.66 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1296.99 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.32 | test_volumes.py
test_08_resize_volume | Success | 156.01 | test_volumes.py
test_07_resize_fail | Success | 161.01 | test_volumes.py
test_06_download_detached_volume | Success | 155.88 | test_volumes.py
test_05_detach_volume | Success | 150.47 | test_volumes.py
test_04_delete_attached_volume | Success | 151.33 | test_volumes.py
test_03_download_attached_volume | Success | 156.08 | test_volumes.py
test_02_attach_volume | Success | 95.88 | test_volumes.py
test_01_create_volume | Success | 710.89 | test_volumes.py
test_deploy_vm_multiple | Success | 246.96 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.01 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.49 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.14 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.68 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.08 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.64 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.66 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.25 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 45.37 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.02 | test_templates.py
test_05_template_permissions | Success | 0.03 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_01_create_template | Success | 30.42 | test_templates.py
test_10_destroy_cpvm | Success | 131.48 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.58 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.46 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.51 | test_ssvm.py
test_06_stop_cpvm | Success | 131.60 | test_ssvm.py
test_05_stop_ssvm | Success | 163.53 | test_ssvm.py
test_04_cpvm_internals | Success | 1.19 | test_ssvm.py
test_03_ssvm_internals | Success | 3.24 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.07 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.90 | test_snapshots.py
test_04_change_offering_small | Success | 237.32 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.02 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.14 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.08 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.11 | test_secondary_storage.py
test_09_reboot_router | Success | 40.29 | test_routers.py
test_08_start_router | Success | 

[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870994#comment-15870994
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
Trillian test result (tid-840)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 32519 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1829-t840-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 863.95 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 353.40 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 295.14 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 149.68 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 56.05 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 226.06 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 270.35 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 534.10 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 500.82 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1397.70 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 536.94 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.32 | test_volumes.py
test_08_resize_volume | Success | 156.31 | test_volumes.py
test_07_resize_fail | Success | 156.44 | test_volumes.py
test_06_download_detached_volume | Success | 151.17 | test_volumes.py
test_05_detach_volume | Success | 150.71 | test_volumes.py
test_04_delete_attached_volume | Success | 146.10 | test_volumes.py
test_03_download_attached_volume | Success | 151.23 | test_volumes.py
test_02_attach_volume | Success | 88.99 | test_volumes.py
test_01_create_volume | Success | 710.99 | test_volumes.py
test_deploy_vm_multiple | Success | 247.49 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.66 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.20 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.86 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.78 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.83 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.31 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.56 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.07 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.13 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_01_create_template | Success | 30.35 | test_templates.py
test_10_destroy_cpvm | Success | 166.47 | test_ssvm.py
test_09_destroy_ssvm | Success | 138.09 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.61 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.72 | test_ssvm.py
test_06_stop_cpvm | Success | 131.71 | test_ssvm.py
test_05_stop_ssvm | Success | 133.57 | test_ssvm.py
test_04_cpvm_internals | Success | 1.16 | test_ssvm.py
test_03_ssvm_internals | Success | 3.50 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.11 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.12 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.24 | test_snapshots.py
test_04_change_offering_small | Success | 239.62 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_09_reboot_router | Success | 40.32 | test_routers.py
test_08_start_router | Success | 30.27 | test_routers.py
test_07_stop_router | Success | 10.17 | 

[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870742#comment-15870742
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
@priyankparihar The resize test still fails on VMware. It is hard to see the 
reason in the blueorangutan output, but you might want to make sure the test 
provisioning uses full clones by setting vmware.create.full.clone. It can be set 
either globally or per primary storage pool. Full-clone disks are the only way 
disk resize works on VMware.


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> marketplace, wastes time and disk space, and generally makes work more 
> complicated.
> Real life example - a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, that's 
> almost 1 TB. If your storage is expensive and limited SSD this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes, we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering, therefore we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor-specific code needs to pay attention to the 
> VolumeObjectTO's size attribute and use it when cloning 
> from the template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> The new size should be larger than the previous size.
> A newly created ROOT volume will be resized after clone from template iff
> there is no root disk chaining (i.e. a full clone is used)
> and the root disk controller setting is not IDE.
> A previously created root volume can be resized iff
> there is no root disk chaining
> and the root disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in id parameter for resizing.
> deployVirtualMachine API call will allow new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) shows UI that resize volume option is added for 
> ROOT disks.
> 2) (refer attached image 2) when user calls the resize volume on ROOT volume. 
> Here only size option is shown. For DATADISK disk offerings are shown.
> 3) (refer attached image 3) when user deploys 
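The validation rules described in the spec above (root resize takes only a size, not a disk offering; the size must be non-zero and strictly larger than the template size) can be sketched as follows. The class and method names are hypothetical, not the actual UserVmManagerImpl code:

```java
public class RootDiskSizeCheck {
    // Both sizes in GB. Returns the size to provision, or throws when the
    // request violates the rules described in the specification above.
    static long validateRootDiskSize(long requestedGb, long templateGb) {
        if (requestedGb <= 0) {
            throw new IllegalArgumentException("rootdisksize must be a positive size in GB");
        }
        if (requestedGb <= templateGb) {
            throw new IllegalArgumentException(
                "rootdisksize must be larger than the template size (" + templateGb + " GB)");
        }
        return requestedGb;
    }

    public static void main(String[] args) {
        // Example: a 10 GB template deployed onto a 40 GB root disk.
        System.out.println(validateRootDiskSize(40, 10)); // prints: 40
    }
}
```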

[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870735#comment-15870735
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-835)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 43251 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t835-xenserver-65sp1.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_ssvm.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 565.46 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1370.87 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 558.00 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 692.75 | 
test_privategw_acl.py
ContextSuite context=TestSnapshotRootDisk>:teardown | `Error` | 61.66 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 306.09 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 171.74 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 528.69 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 319.44 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 683.44 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 819.50 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1076.78 | 
test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.79 | test_volumes.py
test_08_resize_volume | Success | 90.91 | test_volumes.py
test_07_resize_fail | Success | 101.02 | test_volumes.py
test_06_download_detached_volume | Success | 25.43 | test_volumes.py
test_05_detach_volume | Success | 100.25 | test_volumes.py
test_04_delete_attached_volume | Success | 10.40 | test_volumes.py
test_03_download_attached_volume | Success | 15.30 | test_volumes.py
test_02_attach_volume | Success | 10.68 | test_volumes.py
test_01_create_volume | Success | 392.83 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 219.28 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 130.97 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 222.95 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 27.06 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.24 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 71.10 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.16 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 10.19 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.23 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.26 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 110.95 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.25 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.13 | test_templates.py
test_01_create_template | Success | 51.21 | test_templates.py
test_10_destroy_cpvm | Success | 196.75 | test_ssvm.py
test_09_destroy_ssvm | Success | 229.56 | test_ssvm.py
test_08_reboot_cpvm | Success | 141.83 | test_ssvm.py
test_07_reboot_ssvm | Success | 154.40 | test_ssvm.py
test_06_stop_cpvm | Success | 131.81 | test_ssvm.py
test_05_stop_ssvm | Success | 199.21 | test_ssvm.py
test_04_cpvm_internals | Success | 1.15 | test_ssvm.py
test_03_ssvm_internals | Success | 3.26 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 21.20 | test_snapshots.py
test_04_change_offering_small | Success | 116.24 | test_service_offerings.py

[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachment to VM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870674#comment-15870674
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
LGTM


> [Vmware] Optimization of volume attachment to VM
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching 
> the datastore for vmdk files before creating the volume (searching for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files to delete them if they exist). This 
> search is unnecessary when attaching a volume in Allocated state, because the 
> volume's files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really 
> slow, as we can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}
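The optimization described above boils down to short-circuiting the stale-file sweep for volumes that were never materialized on storage. A hedged sketch with stand-in types (the real change lives in CloudStack's VMware storage resource code, not in these names):

```java
import java.util.List;

public class AttachVolumeSketch {
    enum State { Allocated, Ready }

    // Candidate stale files to search for and delete before creating the
    // volume. A volume still in Allocated state has never been written to
    // the datastore, so the (slow) per-file search can be skipped entirely.
    static List<String> staleFileCandidates(String baseName, State volumeState) {
        if (volumeState == State.Allocated) {
            return List.of(); // nothing can exist on the datastore yet
        }
        return List.of(baseName + ".vmdk",
                       baseName + "-flat.vmdk",
                       baseName + "-delta.vmdk");
    }

    public static void main(String[] args) {
        System.out.println(staleFileCandidates("9ce7731f", State.Allocated).size()); // prints: 0
        System.out.println(staleFileCandidates("9ce7731f", State.Ready).size());     // prints: 3
    }
}
```

Skipping the sweep avoids the three multi-minute datastore searches visible in the log excerpt.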



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachment to VM

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870675#comment-15870675
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
@karuturi This one is ready for merging. LGTM and test results are positive.


> [Vmware] Optimization of volume attachment to VM
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume-attach slowness caused by searching 
> the datastore for vmdk files before creating the volume (searching for {{.vmdk}}, 
> {{-flat.vmdk}} and {{-delta.vmdk}} files to delete them if they exist). This 
> search is unnecessary when attaching a volume in Allocated state, because the 
> volume's files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment really 
> slow, as we can see in these log lines:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9611) Dedicating a Guest VLAN range to Project does not work

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870645#comment-15870645
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9611:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1771#discussion_r101618476
  
--- Diff: server/src/com/cloud/network/NetworkServiceImpl.java ---
@@ -3085,9 +3085,10 @@ public GuestVlan 
dedicateGuestVlanRange(DedicateGuestVlanRangeCmd cmd) {
 // Verify account is valid
 Account vlanOwner = null;
 if (projectId != null) {
-if (accountName != null) {
-throw new InvalidParameterValueException("accountName and 
projectId are mutually exclusive");
-}
+//accountName and projectId are mutually exclusive
--- End diff --

@nitin-maharana I disagree with this change. We'd better keep it as it is.
I suggest making a UI change that adds a new field 'scope' which can be set to 
'domain', 'account', or 'project'.
It is similar to the scope field in the dialog for creating a shared network 
in an advanced zone.

scope='Domain': the domain field is shown.
scope='Account': domain and account fields are shown.
scope='Project': domain and project fields are shown.




> Dedicating a Guest VLAN range to Project does not work
> --
>
> Key: CLOUDSTACK-9611
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9611
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> Trying to dedicate a guest VLAN range to an account fails. If we pass both 
> account and projectid parameters to dedicateGuestVlanRange (which are not 
> described as mutually exclusive in the API documentation), the API layer throws 
> an error saying they are mutually exclusive.
> Steps to Reproduce:
> 
> Create an account. Create a project in that account.
> Go to admin account and change view to the above project.
> Navigate to Infrastructure -> Zone -> Physical Network -> Guest -> Dedicate 
> Guest VLAN range.
> Try to dedicate the guest VLAN range from the project view for the account 
> associated with the project.
> It fails with an error saying accountName and projectId are mutually exclusive.
> Expected:
> 
> The VLAN range should get dedicated to the project account.
> Notes:
> =
> If we do the dedication from the default view then it works fine, as no 
> projectid is associated there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870644#comment-15870644
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-836)
Environment: vmware-60u2 (x2), Advanced Networking with Mgmt server 7
Total time taken: 38546 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t836-vmware-60u2.zip
Intermittent failure detected: /marvin/tests/smoke/test_deploy_vm_root_resize.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_templates.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 839.38 | test_privategw_acl.py
test_00_deploy_vm_root_resize | `Error` | 0.89 | test_deploy_vm_root_resize.py
test_01_vpc_site2site_vpn | Success | 327.61 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 146.75 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 568.39 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 304.47 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 666.43 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 618.40 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1488.85 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 664.14 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 594.49 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1301.28 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 21.07 | test_volumes.py
test_06_download_detached_volume | Success | 55.68 | test_volumes.py
test_05_detach_volume | Success | 100.30 | test_volumes.py
test_04_delete_attached_volume | Success | 10.34 | test_volumes.py
test_03_download_attached_volume | Success | 15.32 | test_volumes.py
test_02_attach_volume | Success | 54.24 | test_volumes.py
test_01_create_volume | Success | 435.99 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.23 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 219.23 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 423.39 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.79 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 202.98 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.84 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.35 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 91.53 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 5.14 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.15 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.23 | test_vm_life_cycle.py
test_01_stop_vm | Success | 5.12 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 241.83 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 15.24 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.14 | test_templates.py
test_01_create_template | Success | 136.19 | test_templates.py
test_10_destroy_cpvm | Success | 236.93 | test_ssvm.py
test_09_destroy_ssvm | Success | 238.99 | test_ssvm.py
test_08_reboot_cpvm | Success | 246.76 | test_ssvm.py
test_07_reboot_ssvm | Success | 369.26 | test_ssvm.py
test_06_stop_cpvm | Success | 176.89 | test_ssvm.py
test_05_stop_ssvm | Success | 173.74 | test_ssvm.py
test_04_cpvm_internals | Success | 1.18 | test_ssvm.py
test_03_ssvm_internals | Success | 3.84 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.16 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.25 | test_snapshots.py
test_04_change_offering_small | Success | 116.86 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.13 | 

[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870615#comment-15870615
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
Trillian test result (tid-831)
Environment: vmware-60u2 (x2), Advanced Networking with Mgmt server 7
Total time taken: 44384 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1913-t831-vmware-60u2.zip
Intermittent failure detected: /marvin/tests/smoke/test_deploy_vgpu_enabled_vm.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 48 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 836.63 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 370.56 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 151.67 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 586.44 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 323.58 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 694.53 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 621.80 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1522.64 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 697.48 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 652.91 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1302.22 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 20.60 | test_volumes.py
test_06_download_detached_volume | Success | 45.64 | test_volumes.py
test_05_detach_volume | Success | 100.19 | test_volumes.py
test_04_delete_attached_volume | Success | 10.16 | test_volumes.py
test_03_download_attached_volume | Success | 15.23 | test_volumes.py
test_02_attach_volume | Success | 78.66 | test_volumes.py
test_01_create_volume | Success | 505.64 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.14 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 227.02 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 201.38 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.76 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 262.23 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.76 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.21 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 61.03 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.10 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 5.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.11 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.22 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.16 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 221.30 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 10.24 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 110.89 | test_templates.py
test_10_destroy_cpvm | Success | 266.67 | test_ssvm.py
test_09_destroy_ssvm | Success | 238.32 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.38 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.09 | test_ssvm.py
test_06_stop_cpvm | Success | 206.56 | test_ssvm.py
test_05_stop_ssvm | Success | 203.36 | test_ssvm.py
test_04_cpvm_internals | Success | 1.03 | test_ssvm.py
test_03_ssvm_internals | Success | 3.27 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.10 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.14 | test_snapshots.py
test_04_change_offering_small | Success | 116.78 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | 

[jira] [Commented] (CLOUDSTACK-9403) Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin test coverage

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870496#comment-15870496
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9403:


Github user mike-tutkowski commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1579#discussion_r101596817
  
--- Diff: server/src/com/cloud/configuration/ConfigurationManagerImpl.java ---
@@ -2909,51 +2911,21 @@ public Vlan doInTransaction(final TransactionStatus status) {
                 String vlanGateway = null;
                 String vlanNetmask = null;
                 boolean sameSubnet = false;
-                if (vlans != null && vlans.size() > 0) {
+                if (CollectionUtils.isNotEmpty(vlans)) {
                     for (final VlanVO vlan : vlans) {
-                        if (ipv4) {
-                            vlanGateway = vlan.getVlanGateway();
-                            vlanNetmask = vlan.getVlanNetmask();
-                            // check if subset or super set or neither.
-                            final NetUtils.SupersetOrSubset val = checkIfSubsetOrSuperset(newVlanGateway, newVlanNetmask, vlan, startIP, endIP);
-                            if (val == NetUtils.SupersetOrSubset.isSuperset) {
-                                // this means that new cidr is a superset of the
-                                // existing subnet.
-                                throw new InvalidParameterValueException("The subnet you are trying to add is a superset of the existing subnet having gateway" + vlan.getVlanGateway()
-                                        + " and netmask  " + vlan.getVlanNetmask());
-                            } else if (val == NetUtils.SupersetOrSubset.neitherSubetNorSuperset) {
-                                // this implies the user is trying to add a new subnet
-                                // which is not a superset or subset of this subnet.
-                                // checking with the other subnets.
-                                continue;
-                            } else if (val == NetUtils.SupersetOrSubset.isSubset) {
-                                // this means he is trying to add to the same subnet.
-                                throw new InvalidParameterValueException("The subnet you are trying to add is a subset of the existing subnet having gateway" + vlan.getVlanGateway()
-                                        + " and netmask  " + vlan.getVlanNetmask());
-                            } else if (val == NetUtils.SupersetOrSubset.sameSubnet) {
-                                sameSubnet = true;
-                                //check if the gateway provided by the user is same as that of the subnet.
-                                if (newVlanGateway != null && !newVlanGateway.equals(vlanGateway)) {
-                                    throw new InvalidParameterValueException("The gateway of the subnet should be unique. The subnet alreaddy has a gateway " + vlanGateway);
-                                }
-                                break;
-                            }
-                        }
-                        if (ipv6) {
-                            if (ip6Gateway != null && !ip6Gateway.equals(network.getIp6Gateway())) {
-                                throw new InvalidParameterValueException("The input gateway " + ip6Gateway + " is not same as network gateway " + network.getIp6Gateway());
-                            }
-                            if (ip6Cidr != null && !ip6Cidr.equals(network.getIp6Cidr())) {
-                                throw new InvalidParameterValueException("The input cidr " + ip6Cidr + " is not same as network ciddr " + network.getIp6Cidr());
-                            }
-                            ip6Gateway = network.getIp6Gateway();
-                            ip6Cidr = network.getIp6Cidr();
-                            _networkModel.checkIp6Parameters(startIPv6, endIPv6, ip6Gateway, ip6Cidr);
-                            sameSubnet = true;
-                        }
+                        vlanGateway = vlan.getVlanGateway();
+                        vlanNetmask = vlan.getVlanNetmask();
+                        sameSubnet = hasSameSubnet(ipv4, vlanGateway, vlanNetmask, newVlanGateway, newVlanNetmask, startIP, endIP,
+                                ipv6, ip6Gateway, ip6Cidr, startIPv6, endIPv6, network);
+                        if (sameSubnet) break;
                     }
+                } else {
+                    vlanGateway = network.getGateway();
+                    vlanNetmask = NetUtils.getCidrNetmask(network.getCidr());
--- End diff --

I believe this is the root of the following blocker for 4.10: 
https://issues.apache.org/jira/browse/CLOUDSTACK-9790
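
To make the failure mode concrete: in a Basic Zone the guest network can have a null CIDR, so the new else-branch call to NetUtils.getCidrNetmask(network.getCidr()) dereferences null. A minimal standalone sketch, where the helper is a simplified stand-in for the real NetUtils method:

```java
// Sketch of why the else branch NPEs, plus the null guard it would need.
public class CidrNetmaskNpeSketch {

    // Simplified stand-in for NetUtils.getCidrNetmask: expects "a.b.c.d/nn".
    static String getCidrNetmask(String cidr) {
        int size = Integer.parseInt(cidr.split("/")[1]); // NPE when cidr is null
        long mask = size == 0 ? 0 : (0xffffffffL << (32 - size)) & 0xffffffffL;
        return String.format("%d.%d.%d.%d",
                mask >> 24 & 0xff, mask >> 16 & 0xff, mask >> 8 & 0xff, mask & 0xff);
    }

    public static void main(String[] args) {
        System.out.println(getCidrNetmask("10.1.1.0/24")); // 255.255.255.0
        String basicZoneCidr = null; // a Basic Zone guest network may have no CIDR
        // Guard sketch: only derive a netmask when a CIDR actually exists.
        String netmask = basicZoneCidr != null ? getCidrNetmask(basicZoneCidr) : null;
        System.out.println("netmask for basic zone network: " + netmask);
    }
}
```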


> Nuage VSP Plugin : Support for SharedNetwork functionality including Marvin
> test coverage
> 
>
> Key: CLOUDSTACK-9403
> URL: 

[jira] [Created] (CLOUDSTACK-9790) Can't create a Basic Zone (networking problem)

2017-02-16 Thread Mike Tutkowski (JIRA)
Mike Tutkowski created CLOUDSTACK-9790:
--

 Summary: Can't create a Basic Zone (networking problem)
 Key: CLOUDSTACK-9790
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9790
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Network Controller
Affects Versions: 4.10
Reporter: Mike Tutkowski
Priority: Blocker
 Fix For: 4.10


A NullPointerException is thrown when trying to create a Basic Zone:

java.lang.NullPointerException
  at com.cloud.utils.net.NetUtils.getCidrNetmask(NetUtils.java:956)
  at com.cloud.configuration.ConfigurationManagerImpl.validateIpRange(ConfigurationManagerImpl.java:2924)

This appears to be related to PR 1579.

In ConfigurationManagerImpl.java, it seems the new lines on 2924 – 2926 are the 
problem: https://github.com/apache/cloudstack/pull/1579/files




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-8880) Allocated memory more than total memory on a KVM host

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870421#comment-15870421
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8880:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/847
  
Trillian test result (tid-837)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28112 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr847-t837-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Test completed. 48 look ok, 1 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 310.76 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 165.96 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.22 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 266.38 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 264.54 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 536.23 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 501.90 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1451.41 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 554.03 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 742.88 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1311.59 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.72 | test_volumes.py
test_08_resize_volume | Success | 156.67 | test_volumes.py
test_07_resize_fail | Success | 161.64 | test_volumes.py
test_06_download_detached_volume | Success | 156.44 | test_volumes.py
test_05_detach_volume | Success | 145.69 | test_volumes.py
test_04_delete_attached_volume | Success | 146.24 | test_volumes.py
test_03_download_attached_volume | Success | 151.33 | test_volumes.py
test_02_attach_volume | Success | 89.15 | test_volumes.py
test_01_create_volume | Success | 718.99 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.21 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 101.07 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 165.59 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 232.97 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.73 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.24 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 36.34 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.14 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.92 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.95 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.19 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 55.68 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.08 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.12 | test_templates.py
test_01_create_template | Success | 65.99 | test_templates.py
test_10_destroy_cpvm | Success | 166.75 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.59 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.56 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.25 | test_ssvm.py
test_06_stop_cpvm | Success | 131.53 | test_ssvm.py
test_05_stop_ssvm | Success | 133.35 | test_ssvm.py
test_04_cpvm_internals | Success | 1.05 | test_ssvm.py
test_03_ssvm_internals | Success | 3.20 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.14 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.15 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.51 | test_snapshots.py
test_04_change_offering_small | Success | 209.68 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.16 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.23 | test_secondary_storage.py
test_09_reboot_router | 

[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870350#comment-15870350
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9783:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1944
  
Just triggered Trillian env to check the test failure. 


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.
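
The proposed direction can be sketched as collapsing many per-entity calls into one server-side aggregation. The types and names below are illustrative only, not the actual CloudStack metrics API:

```java
import java.util.List;

// Illustrative sketch: instead of the UI issuing one API call per host and
// computing derived metrics client-side, the server aggregates once and
// returns a single response for the whole table.
public class MetricsAggregationSketch {

    record Host(String name, double cpuUsedPct, long memUsedMb, long memTotalMb) {}
    record HostMetrics(String name, double cpuUsedPct, double memUsedPct) {}

    // Single backend pass over all hosts; one API response renders the table.
    static List<HostMetrics> listHostMetrics(List<Host> hosts) {
        return hosts.stream()
                .map(h -> new HostMetrics(h.name(), h.cpuUsedPct(),
                        100.0 * h.memUsedMb() / h.memTotalMb()))
                .toList();
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(
                new Host("kvm-1", 42.5, 8192, 16384),
                new Host("kvm-2", 10.0, 4096, 16384));
        listHostMetrics(hosts).forEach(m ->
                System.out.printf("%s cpu=%.1f%% mem=%.1f%%%n",
                        m.name(), m.cpuUsedPct(), m.memUsedPct()));
    }
}
```

Under high latency the win comes from replacing N round-trips with one, independent of where the arithmetic runs.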



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870291#comment-15870291
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9783:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1944
  
Trillian test result (tid-833)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28094 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1944-t833-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_metrics_api.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 355.57 | test_privategw_acl.py
test_list_vms_metrics | `Error` | 10.34 | test_metrics_api.py
test_01_vpc_site2site_vpn | Success | 160.37 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.41 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 307.09 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 313.58 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 534.07 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 521.52 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1301.55 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 560.88 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 760.16 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1282.47 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.78 | test_volumes.py
test_08_resize_volume | Success | 156.38 | test_volumes.py
test_07_resize_fail | Success | 161.46 | test_volumes.py
test_06_download_detached_volume | Success | 156.28 | test_volumes.py
test_05_detach_volume | Success | 155.78 | test_volumes.py
test_04_delete_attached_volume | Success | 151.10 | test_volumes.py
test_03_download_attached_volume | Success | 156.46 | test_volumes.py
test_02_attach_volume | Success | 94.97 | test_volumes.py
test_01_create_volume | Success | 711.44 | test_volumes.py
test_deploy_vm_multiple | Success | 253.11 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.68 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.31 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.88 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.76 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.85 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.31 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 50.49 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.08 | test_templates.py
test_04_extract_template | Success | 5.15 | test_templates.py
test_03_delete_template | Success | 5.10 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 60.53 | test_templates.py
test_10_destroy_cpvm | Success | 161.59 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.58 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.52 | test_ssvm.py
test_06_stop_cpvm | Success | 131.79 | test_ssvm.py
test_05_stop_ssvm | Success | 133.81 | test_ssvm.py
test_04_cpvm_internals | Success | 1.21 | test_ssvm.py
test_03_ssvm_internals | Success | 3.35 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.11 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.03 | test_snapshots.py
test_04_change_offering_small | Success | 234.65 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_09_reboot_router | Success | 35.28 | test_routers.py
test_08_start_router | Success | 30.25 | 

[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870251#comment-15870251
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
@jayantpatil1234 can you please have a look at the volumes and snapshots 
failures.


> ACS records ID in events tables instead of UUID.
> 
>
> Key: CLOUDSTACK-9781
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9781
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Jayant Patil
>
> ISSUE
> =
> Wrong presentation of volume id in ACS events.
> While creating a snapshot, only the internal volume ID is mentioned in the
> event. For example: "Scheduled async job for creating snapshot for volume
> Id:270". Looking at the notification, the user cannot identify which volume
> is meant, so the event description was modified to use the UUID.
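
The fix direction can be illustrated with a before/after of the event text. The UUID below is a made-up example value, and the helper is a hypothetical simplification, not the CloudStack events code:

```java
// Illustration: event descriptions should carry the user-visible UUID,
// not the internal database id.
public class EventDescriptionSketch {

    static String describeSnapshotEvent(String volumeRef) {
        return "Scheduled async job for creating snapshot for volume Id: " + volumeRef;
    }

    public static void main(String[] args) {
        // Before: internal DB id, meaningless to the end user.
        System.out.println(describeSnapshotEvent("270"));
        // After: the volume's UUID, which the user can resolve via listVolumes.
        System.out.println(describeSnapshotEvent("0bacd9f5-1a2b-4c3d-8e9f-123456789abc")); // hypothetical UUID
    }
}
```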



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9783) Improve metrics view performance

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870243#comment-15870243
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9783:


Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1944
  
Thanks for testing/reviewing @borisstoyanov 


> Improve metrics view performance
> 
>
> Key: CLOUDSTACK-9783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9783
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Rohit Yadav
>Assignee: Rohit Yadav
> Fix For: Future, 4.10.0.0, 4.9.3.0
>
>
> Metrics view is a pure frontend feature, where several API calls are made to 
> generate the metrics view tabular data. In very large environments, rendering 
> of these tables can take a lot of time, especially when there is high 
> latency. The improvement task is to reimplement this feature by moving the 
> logic to backend so metrics calculations happen at the backend and final 
> result can be served by a single API request.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870220#comment-15870220
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
@blueorangutan test


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Steps to reproduce:
> 1. Create two isolated guest networks with the same CIDR.
> 2. Deploy a VM on each network.
> 3. Acquire a secondary IP on a NIC of each VM and make sure both have the
> same value (the user can input the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the
> first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.
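
A minimal sketch of the suspected lookup problem (types and method names are hypothetical simplifications, not the actual CloudStack code): because two isolated networks can share a CIDR, a static NAT check keyed on the IP string alone matches the other network's identical secondary IP, while keying on (network, IP) lets the release proceed.

```java
import java.util.List;

// Sketch: checking "is this secondary IP behind static NAT?" by IP value
// alone collides across networks with identical CIDRs.
public class SecondaryIpReleaseSketch {

    record StaticNatRule(long networkId, String destIp) {}

    static boolean blockedByIpOnly(List<StaticNatRule> rules, String ip) {
        return rules.stream().anyMatch(r -> r.destIp().equals(ip));
    }

    static boolean blockedByNetworkAndIp(List<StaticNatRule> rules, long networkId, String ip) {
        return rules.stream().anyMatch(r -> r.networkId() == networkId && r.destIp().equals(ip));
    }

    public static void main(String[] args) {
        // Static NAT targets 10.1.1.50 on network 1 only (step 4 above).
        List<StaticNatRule> rules = List.of(new StaticNatRule(1L, "10.1.1.50"));

        // Releasing 10.1.1.50 on network 2 (step 5):
        System.out.println(blockedByIpOnly(rules, "10.1.1.50"));           // wrongly blocked
        System.out.println(blockedByNetworkAndIp(rules, 2L, "10.1.1.50")); // release proceeds
    }
}
```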



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9781) ACS records ID in events tables instead of UUID.

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870224#comment-15870224
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9781:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1940
  
Trillian test result (tid-832)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28868 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1940-t832-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 45 look ok, 4 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_vpc_site2site_vpn | `Failure` | 175.58 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | `Failure` | 251.53 | test_vpc_vpn.py
test_04_rvpc_privategw_static_routes | `Failure` | 335.65 | test_privategw_acl.py
test_09_delete_detached_volume | `Error` | 10.28 | test_volumes.py
test_08_resize_volume | `Error` | 5.10 | test_volumes.py
test_07_resize_fail | `Error` | 10.24 | test_volumes.py
test_06_download_detached_volume | `Error` | 5.09 | test_volumes.py
test_05_detach_volume | `Error` | 5.08 | test_volumes.py
test_04_delete_attached_volume | `Error` | 5.11 | test_volumes.py
test_03_download_attached_volume | `Error` | 5.10 | test_volumes.py
test_02_attach_volume | `Error` | 68.98 | test_volumes.py
test_01_create_volume | `Error` | 144.36 | test_volumes.py
ContextSuite context=TestVolumes>:teardown | `Error` | 146.37 | test_volumes.py
test_03_delete_vm_snapshots | `Error` | 0.06 | test_vm_snapshots.py
test_02_revert_vm_snapshots | `Error` | 90.25 | test_vm_snapshots.py
test_01_vpc_remote_access_vpn | Success | 61.13 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 292.56 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 520.39 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 506.44 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1402.02 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 527.36 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 761.32 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1265.46 | test_vpc_redundant.py
test_01_create_vm_snapshots | Success | 163.72 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 237.77 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.67 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.16 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.93 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.89 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.16 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.18 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.31 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 50.52 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.37 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.12 | test_templates.py
test_01_create_template | Success | 30.39 | test_templates.py
test_10_destroy_cpvm | Success | 161.68 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.70 | test_ssvm.py
test_08_reboot_cpvm | Success | 102.78 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.66 | test_ssvm.py
test_06_stop_cpvm | Success | 132.06 | test_ssvm.py
test_05_stop_ssvm | Success | 133.65 | test_ssvm.py
test_04_cpvm_internals | Success | 1.16 | test_ssvm.py
test_03_ssvm_internals | Success | 3.49 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.28 | test_snapshots.py
test_04_change_offering_small | Success | 209.55 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success 

[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870222#comment-15870222
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Steps to reproduce:
> 1. Create two isolated guest networks with the same CIDR.
> 2. Deploy VMs on both networks.
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have 
> the same value (the user can specify the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.
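Given the reference to CLOUDSTACK-9779, the likely fix direction is to scope the static-NAT check to the NIC being modified rather than to the bare IP value, since the same address can legitimately exist in two isolated networks. A minimal sketch with hypothetical type and field names (not actual CloudStack code):

```java
import java.util.List;
import java.util.Objects;

public class SecondaryIpRelease {
    // Hypothetical rule representation, for illustration only.
    static class StaticNatRule {
        final long nicId;
        final String destIp;
        StaticNatRule(long nicId, String destIp) { this.nicId = nicId; this.destIp = destIp; }
    }

    /**
     * Returns true if the secondary IP on the given NIC may be released.
     * Only a static-NAT mapping on the SAME NIC blocks the release; a VM
     * in a different isolated network may carry the same IP value.
     */
    static boolean canRelease(long nicId, String ip, List<StaticNatRule> rules) {
        for (StaticNatRule r : rules) {
            if (r.nicId == nicId && Objects.equals(r.destIp, ip)) {
                return false; // mapped on this NIC: refuse the release
            }
        }
        return true; // mappings on other NICs with the same IP do not block
    }
}
```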



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870218#comment-15870218
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
thanks @syed 


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870217#comment-15870217
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@borisstoyanov  I've done my testing on a Linux-based HVM VM, and it works 
as expected after the fix.


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870215#comment-15870215
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-498


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Steps to reproduce:
> 1. Create two isolated guest networks with the same CIDR.
> 2. Deploy VMs on both networks.
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have 
> the same value (the user can specify the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870212#comment-15870212
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-60u2) has 
been kicked to run smoke tests


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870209#comment-15870209
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@blueorangutan test centos7 vmware-60u2


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870202#comment-15870202
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@blueorangutan test centos7 kvm-centos7


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870208#comment-15870208
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running is fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870167#comment-15870167
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Steps to reproduce:
> 1. Create two isolated guest networks with the same CIDR.
> 2. Deploy VMs on both networks.
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have 
> the same value (the user can specify the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870166#comment-15870166
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
@blueorangutan package


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Steps to reproduce:
> 1. Create two isolated guest networks with the same CIDR.
> 2. Deploy VMs on both networks.
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have 
> the same value (the user can specify the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870163#comment-15870163
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listNetworks:
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, listVirtualMachines works:
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}
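The "/ by zero" above suggests the networks listing divides by the caller-supplied pagesize without validating it first. A minimal sketch of the guard, using hypothetical helper names rather than the actual CloudStack code:

```java
public class PageGuard {
    /** Number of pages needed for `total` items; rejects pagesize < 1 instead of dividing by it. */
    static long pageCount(long total, long pageSize) {
        if (pageSize < 1) {
            throw new IllegalArgumentException("pagesize must be >= 1, got " + pageSize);
        }
        return (total + pageSize - 1) / pageSize; // ceiling division
    }

    /** Zero-based start index for a 1-based page number, with the same validation. */
    static long startIndex(long page, long pageSize) {
        if (page < 1 || pageSize < 1) {
            throw new IllegalArgumentException(
                "page and pagesize must be >= 1 (got page=" + page + ", pagesize=" + pageSize + ")");
        }
        return (page - 1) * pageSize;
    }
}
```

With such a check the API would return an invalid-parameter error for `pagesize=0` instead of a raw arithmetic exception.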





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870162#comment-15870162
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
@blueorangutan test


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listNetworks:
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, listVirtualMachines works:
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870153#comment-15870153
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-497


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listNetworks:
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, listVirtualMachines works:
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870152#comment-15870152
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
Packaging result: ✖centos6 ✔centos7 ✔debian. JID-496


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Steps to reproduce:
> 1. Create two isolated guest networks with the same CIDR.
> 2. Deploy VMs on both networks.
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have 
> the same value (the user can specify the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.





[jira] [Updated] (CLOUDSTACK-9763) vpc: can not ssh to instance after vpc restart

2017-02-16 Thread Joakim Sernbrant (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joakim Sernbrant updated CLOUDSTACK-9763:
-
Description: 
Restart with Cleanup of a VPC does not update the public-key metadata; it is 
explicitly set to null in 

https://github.com/apache/cloudstack/blob/master/server/src/com/cloud/network/router/CommandSetupHelper.java#L614

After a reboot, instances relying on metadata (e.g. CoreOS) will no longer have 
the correct public key configured.

Added explanation:
The VPC VR maintains metadata 
(http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/virtual_machines/user-data.html)
 as static files in /var/www/html/metadata. When a VR is destroyed and 
recreated (by e.g. "restart with cleanup") this metadata is rebuilt by 
createVmDataCommandForVMs(). public-keys is missing from that function so it 
becomes empty after the rebuild and a request for latest/meta-data/public-keys 
no longer returns the correct key.



  was:
Restart with Cleanup of a VPC does not update the public-key metadata, it is 
explicitly set to null in 

https://github.com/apache/cloudstack/blob/master/server/src/com/cloud/network/router/CommandSetupHelper.java#L614

Rebooting instances relying on metadata (e.g. coreos) will no longer have the 
correct public key configured.



> vpc: can not ssh to instance after vpc restart
> --
>
> Key: CLOUDSTACK-9763
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9763
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router, VPC
>Affects Versions: 4.8.0
>Reporter: Joakim Sernbrant
>
> Restart with Cleanup of a VPC does not update the public-key metadata; it is 
> explicitly set to null in 
> https://github.com/apache/cloudstack/blob/master/server/src/com/cloud/network/router/CommandSetupHelper.java#L614
> After a reboot, instances relying on metadata (e.g. CoreOS) will no longer have 
> the correct public key configured.
> Added explanation:
> The VPC VR maintains metadata 
> (http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/virtual_machines/user-data.html)
>  as static files in /var/www/html/metadata. When a VR is destroyed and 
> recreated (by e.g. "restart with cleanup") this metadata is rebuilt by 
> createVmDataCommandForVMs(). public-keys is missing from that function so it 
> becomes empty after the rebuild and a request for 
> latest/meta-data/public-keys no longer returns the correct key.
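The rebuild step described above can be sketched as follows. This is an illustrative, hypothetical structure (the real logic lives in CommandSetupHelper's createVmDataCommandForVMs(), which this does not reproduce): the point is that the stored SSH key must be carried through the rebuild rather than emitted as null.

```java
import java.util.ArrayList;
import java.util.List;

public class VmDataRebuild {
    /**
     * Rebuilds a VM's metadata entries as (folder, file, contents) triples.
     * Carrying `publicKey` through is the fix; the reported bug emitted a
     * null value here, so latest/meta-data/public-keys came back empty
     * after "restart with cleanup".
     */
    static List<String[]> buildMetadata(String zone, String publicKey) {
        List<String[]> vmData = new ArrayList<>();
        vmData.add(new String[] {"metadata", "availability-zone", zone});
        vmData.add(new String[] {"metadata", "public-keys", publicKey});
        return vmData;
    }
}
```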





[jira] [Commented] (CLOUDSTACK-9787) No error message while changing guest VM CIDR to a larger value

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870135#comment-15870135
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9787:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1945
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


> No error message while changing guest VM CIDR to a larger value
> 
>
> Key: CLOUDSTACK-9787
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9787
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Example:
> 1. Create a network with CIDR 10.1.1.32/28.
> 2. Edit the network and change the guest VM CIDR to 10.1.1.32/27.
> according to server/src/com/cloud/network/NetworkServiceImpl.java
> {code}
> if (networkCidr != null) {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> networkCidr)) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR : "
> + networkCidr);
> }
> } else {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> network.getCidr())) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR :  "
> + network.getCidr());
> }
> }
> {code}
> this should throw an exception; however, it does not.
> I added some unit tests in 
> utils/src/test/java/com/cloud/utils/net/NetUtilsTest.java
> {code}
> @Test
> public void testIsNetworkAWithinNetworkB() {
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/22", 
> "192.168.30.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/23", 
> "192.168.28.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/22", 
> "192.168.28.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.28.0/22"));
> }
> {code}
> the test fails at 
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));





[jira] [Commented] (CLOUDSTACK-9787) No error message while changing guest VM CIDR to a larger value

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870134#comment-15870134
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9787:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1945
  
@blueorangutan test 


> No error message while changing guest VM CIDR to a larger value
> 
>
> Key: CLOUDSTACK-9787
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9787
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Example:
> 1. Create a network with CIDR 10.1.1.32/28.
> 2. Edit the network and change the guest VM CIDR to 10.1.1.32/27.
> according to server/src/com/cloud/network/NetworkServiceImpl.java
> {code}
> if (networkCidr != null) {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> networkCidr)) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR : "
> + networkCidr);
> }
> } else {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> network.getCidr())) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR :  "
> + network.getCidr());
> }
> }
> {code}
> this should throw an exception; however, it does not.
> I added some unit tests in 
> utils/src/test/java/com/cloud/utils/net/NetUtilsTest.java
> {code}
> @Test
> public void testIsNetworkAWithinNetworkB() {
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/22", 
> "192.168.30.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/23", 
> "192.168.28.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/22", 
> "192.168.28.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.28.0/22"));
> }
> {code}
> the test fails at 
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));





[jira] [Commented] (CLOUDSTACK-9787) No error message while changing guest VM CIDR to a larger value

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870126#comment-15870126
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9787:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1945
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-495


> No error message while changing guest VM CIDR to a larger value
> 
>
> Key: CLOUDSTACK-9787
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9787
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Example:
> 1. Create a network with CIDR 10.1.1.32/28.
> 2. Edit the network and change the guest VM CIDR to 10.1.1.32/27.
> according to server/src/com/cloud/network/NetworkServiceImpl.java
> {code}
> if (networkCidr != null) {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> networkCidr)) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR : "
> + networkCidr);
> }
> } else {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> network.getCidr())) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR :  "
> + network.getCidr());
> }
> }
> {code}
> this should throw an exception; however, it does not.
> I added some unit tests in 
> utils/src/test/java/com/cloud/utils/net/NetUtilsTest.java
> {code}
> @Test
> public void testIsNetworkAWithinNetworkB() {
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/22", 
> "192.168.30.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/23", 
> "192.168.28.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/22", 
> "192.168.28.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.28.0/22"));
> }
> {code}
> the test fails at 
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));





[jira] [Commented] (CLOUDSTACK-9339) Virtual Routers don't handle Multiple Public Interfaces

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870103#comment-15870103
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9339:


Github user remibergsma commented on the issue:

https://github.com/apache/cloudstack/pull/1943
  
This issue was also handled in this PR: 
https://github.com/apache/cloudstack/pull/1821/files, although that was done 
on the Python side. This one seems cleaner. Should we keep both fixes? It 
won't really hurt, I'd say.


> Virtual Routers don't handle Multiple Public Interfaces
> ---
>
> Key: CLOUDSTACK-9339
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9339
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.8.0
>Reporter: dsclose
>Assignee: Murali Reddy
>  Labels: firewall, nat, router
> Fix For: 4.10.0.0, 4.9.1.0
>
>
> There are a series of issues with the way Virtual Routers manage multiple 
> public interfaces. These are more pronounced on redundant virtual router 
> setups. I have not attempted to examine these issues in a VPC context. 
> Outside of a VPC context, however, the following is expected behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on 
> eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue 
> commands to the router.
> * eth2 is the router's public interface. By default, a single public IP is 
> set up on eth2 along with the necessary iptables and ip rules to source-NAT 
> guest traffic to that public IP.
> * When a public IP address is assigned to the router that is on a separate 
> subnet to the source-NAT IP, a new interface is configured, such as eth3, and 
> the IP is assigned to that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how 
> many public subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The 
> master.py script is responsible for setting public interfaces to UP during a 
> keepalived transition. Currently the check_is_up method of the CsIP class 
> brings all interfaces UP on both RvR. A proposed fix for this has been 
> discussed on the mailing list. That fix will leave public interfaces DOWN on 
> RvR allowing the keepalived transition to control the state of public 
> interfaces. Issue #1413 includes a commit that contradicts the proposed fix 
> so it is unclear what the current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. 
> Assuming public interfaces should by default be DOWN on an RvR, we need to 
> accommodate the fact that no keepalived transition occurs as interfaces are 
> created. This means that assigning an IP from a new public subnet will have 
> no effect (as the interface will be down) until the network is restarted 
> with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two 
> iptables rules in the FORWARD chain of the filter table created for eth2 that 
> allow forwarding between eth2 and eth0. Equivalent rules are not created for 
> other public interfaces so forwarded traffic is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, 
> outbound traffic is source-NAT'd to the network's default source-NAT IP. New 
> connections from guests that are destined for public networks are processed 
> like so:
> 1. Traffic is matched against the following rule in the mangle table that 
> marks the connection with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 
> 0x0/0x
> 2. There are no "ip rule" statements that match a connection marked 0x0, so 
> the kernel routes the connection via the default gateway. That gateway is on 
> the source-NAT subnet, so the connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP 
> address is the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>  





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870098#comment-15870098
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listnetworks
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, listing virtual machines succeeds:
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}
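The "/ by zero" error above is the signature of dividing by an unvalidated pagesize somewhere in the listing path. A hedged sketch (hypothetical method, not the actual CloudStack pagination code) of the guard one would expect:

```java
// Hypothetical pagination helper illustrating the missing pagesize guard.
public class Paging {
    // Number of result pages; dividing by an unchecked pageSize of 0
    // reproduces the "/ by zero" ArithmeticException reported above.
    static long pageCount(long total, long pageSize) {
        if (pageSize <= 0) {
            throw new IllegalArgumentException("pagesize must be a positive integer");
        }
        return (total + pageSize - 1) / pageSize; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(pageCount(240, 50)); // 5
        try {
            pageCount(240, 0);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With such a guard the API would return a clear parameter error instead of the opaque error 530.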





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870094#comment-15870094
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> steps to reproduce
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have the 
> same value (the user can input the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.
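A sketch of why step 5 fails (hypothetical types and method names; the real logic lives in NetworkServiceImpl): because both networks use the same CIDR, a static-NAT lookup keyed only on the IP value also matches the mapping that belongs to the other VM's network. Scoping the lookup to the NIC's own network lets the release proceed:

```java
import java.util.*;

public class SecondaryIpRelease {
    // Hypothetical mapping entry: which IP is static-NAT'ed in which network.
    static class NatMapping {
        final String ip;
        final long networkId;
        NatMapping(String ip, long networkId) { this.ip = ip; this.networkId = networkId; }
    }

    // Buggy check: matches any static-NAT mapping with the same IP value,
    // including the one on the *other* network.
    static boolean blockedByIpOnly(List<NatMapping> mappings, String ip) {
        for (NatMapping m : mappings) {
            if (m.ip.equals(ip)) return true;
        }
        return false;
    }

    // Fixed check: the lookup is scoped to the NIC's own network.
    static boolean blockedByIpAndNetwork(List<NatMapping> mappings, String ip, long networkId) {
        for (NatMapping m : mappings) {
            if (m.ip.equals(ip) && m.networkId == networkId) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Static NAT is configured on 10.1.1.10 in network 1 only.
        List<NatMapping> mappings = Arrays.asList(new NatMapping("10.1.1.10", 1L));
        System.out.println(blockedByIpOnly(mappings, "10.1.1.10"));           // true: release on network 2 wrongly blocked
        System.out.println(blockedByIpAndNetwork(mappings, "10.1.1.10", 2L)); // false: release proceeds
    }
}
```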





[jira] [Commented] (CLOUDSTACK-9788) Exception is thrown when listing networks with pagesize 0

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870097#comment-15870097
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9788:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1946
  
@blueorangutan package


> Exception is thrown when listing networks with pagesize 0
> --
>
> Key: CLOUDSTACK-9788
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9788
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> listnetworks
> {code}
> $ cloudmonkey listNetworks listall=true page=1 pagesize=0
> Error 530: / by zero
> {
>   "cserrorcode": ,
>   "errorcode": 530,
>   "errortext": "/ by zero",
>   "uuidList": []
> }
> {code}
> however, listing virtual machines succeeds:
> {code}
> $ cloudmonkey listVirtualMachines listall=true page=1 pagesize=0
> {
>   "count": 240
> }
> {code}





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870093#comment-15870093
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1947
  
@blueorangutan package


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> steps to reproduce
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have the 
> same value (the user can input the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.





[jira] [Commented] (CLOUDSTACK-9787) No error message when changing guest VM CIDR to a larger value

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870088#comment-15870088
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9787:


Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1945
  
@borisstoyanov a Jenkins job has been kicked to build packages. I'll keep 
you posted as I make progress.


> No error message when changing guest VM CIDR to a larger value
> 
>
> Key: CLOUDSTACK-9787
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9787
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> example
> 1. create a network with cidr = 10.1.1.32/28
> 2. edit the network and change guest vm cidr to 10.1.1.32/27
> according to server/src/com/cloud/network/NetworkServiceImpl.java
> {code}
> if (networkCidr != null) {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> networkCidr)) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR : "
> + networkCidr);
> }
> } else {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> network.getCidr())) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR :  "
> + network.getCidr());
> }
> }
> {code}
> This should throw an exception; however, it does not.
> I added some unit tests in 
> utils/src/test/java/com/cloud/utils/net/NetUtilsTest.java
> {code}
> @Test
> public void testIsNetworkAWithinNetworkB() {
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/22", 
> "192.168.30.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/23", 
> "192.168.28.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/22", 
> "192.168.28.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.28.0/22"));
> }
> {code}
> the test fails at 
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));





[jira] [Commented] (CLOUDSTACK-9787) No error message when changing guest VM CIDR to a larger value

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870086#comment-15870086
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9787:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1945
  
@blueorangutan package


> No error message when changing guest VM CIDR to a larger value
> 
>
> Key: CLOUDSTACK-9787
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9787
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> example
> 1. create a network with cidr = 10.1.1.32/28
> 2. edit the network and change guest vm cidr to 10.1.1.32/27
> according to server/src/com/cloud/network/NetworkServiceImpl.java
> {code}
> if (networkCidr != null) {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> networkCidr)) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR : "
> + networkCidr);
> }
> } else {
> if (!NetUtils.isNetworkAWithinNetworkB(guestVmCidr, 
> network.getCidr())) {
> throw new InvalidParameterValueException("Invalid value 
> of Guest VM CIDR. For IP Reservation, Guest VM CIDR  should be a subset of 
> network CIDR :  "
> + network.getCidr());
> }
> }
> {code}
> This should throw an exception; however, it does not.
> I added some unit tests in 
> utils/src/test/java/com/cloud/utils/net/NetUtilsTest.java
> {code}
> @Test
> public void testIsNetworkAWithinNetworkB() {
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.30.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/22", 
> "192.168.30.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/23"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/24", 
> "192.168.28.0/22"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/23", 
> "192.168.28.0/24"));
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.28.0/22", 
> "192.168.28.0/24"));
> assertTrue(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/24", 
> "192.168.28.0/22"));
> }
> {code}
> the test fails at 
> assertFalse(NetUtils.isNetworkAWithinNetworkB("192.168.30.0/23", 
> "192.168.30.0/24"));





[jira] [Commented] (CLOUDSTACK-9363) Can't start a Xen HVM vm when more than 2 volumes attached

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870015#comment-15870015
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9363:


Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1829
  
I've verified this with a Windows HVM-enabled template, with more than 4 
disks (6 data disks in my case), removing the disk at deviceId=4. After a 
reboot the VM boots as expected and the user is able to operate all the data 
disks. One question: how are the Unix machines affected by this, should we 
address testing them? 


> Can't start a Xen HVM vm when more than 2 volumes attached
> --
>
> Key: CLOUDSTACK-9363
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9363
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.6.0, 4.7.1
> Environment: XenServer 6.5
> HVM template
>Reporter: Simon Godard
>Priority: Critical
>
> Starting an HVM VM on XenServer fails when more than 2 volumes are 
> attached to the VM. Attaching the volumes while the VM is running works fine.
> PV VMs are not affected by this problem. The bug seems to have been 
> introduced in this bug fix: 
> https://issues.apache.org/jira/browse/CLOUDSTACK-8826
> Mailing list discussion: http://markmail.org/thread/4nmyra6aofxtu3o2





[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869948#comment-15869948
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@karuturi sure, done! Thanks!


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for VM instances 
> which have VM snapshots; the snapshots must be removed before changing the 
> service offering.
> h3. Goal
> Extend the current behaviour by supporting a service offering change for VMs 
> which have VM snapshots. In that case, previously taken snapshots (if 
> reverted) should use the previous service offering; future snapshots should 
> use the new one.
> h3. Proposed solution:
> 1. Adding {{service_offering_id}} column on {{vm_snapshots}} table: This way 
> snapshot can be reverted to original state even though service offering can 
> be changed for vm instance.
> NOTE: Existing vm snapshots are populated on update script by {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will use instance vm service offering id as 
> {{service_offering_id}}
> 3. Revert to vm snapshots should use vm snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> It is expected that the VM has service offering A after the last step
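A toy sketch of the proposed behaviour (hypothetical names; the real change adds a `service_offering_id` column to `vm_snapshots`): each snapshot records the offering in effect when it was taken, and revert restores that recorded offering rather than the VM's current one.

```java
import java.util.*;

public class VmSnapshotOffering {
    // vm name -> current service offering id
    static Map<String, Long> vmOffering = new HashMap<>();
    // snapshot name -> service offering id captured at snapshot time
    static Map<String, Long> snapshotOffering = new HashMap<>();

    // Taking a snapshot records the VM's current offering.
    static void takeSnapshot(String vm, String snap) {
        snapshotOffering.put(snap, vmOffering.get(vm));
    }

    // Reverting restores the offering recorded with the snapshot.
    static void revert(String vm, String snap) {
        vmOffering.put(vm, snapshotOffering.get(snap));
    }

    public static void main(String[] args) {
        vmOffering.put("vm1", 1L);     // deploy with offering A (id 1)
        takeSnapshot("vm1", "snap1");  // snap1 records offering A
        vmOffering.put("vm1", 2L);     // change to offering B (id 2)
        revert("vm1", "snap1");        // revert restores offering A
        System.out.println(vmOffering.get("vm1")); // 1
    }
}
```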





[jira] [Commented] (CLOUDSTACK-9746) system-vm: logrotate config causes critical failures

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869932#comment-15869932
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9746:


Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut I agree with reducing the size.

However, I suggest to:
(1) remove delaycompress, so that files like syslog.1 or rsyslog.1 will be 
compressed;
(2) keep rotate , as the compressed files are quite small (< 1MB). Keeping 
more compressed files (especially cloud.log) will be helpful for 
troubleshooting in case we hit issues.
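For illustration, a logrotate stanza of the shape being discussed (path, size, and rotation count are illustrative, not the actual system-VM template): size-based rotation with immediate compression (no delaycompress), keeping several compressed rotations for troubleshooting.

```
/var/log/cloud.log {
    size 10M
    rotate 5
    compress
    missingok
    notifempty
}
```

Without delaycompress, only the live log is uncompressed, so the worst-case disk usage per log is roughly its size limit plus a few small compressed archives.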


> system-vm: logrotate config causes critical failures
> 
>
> Key: CLOUDSTACK-9746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9746
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: SystemVM
>Affects Versions: 4.8.0, 4.9.0
>Reporter: Joakim Sernbrant
>Priority: Critical
>
> CLOUDSTACK-6885 changed logrotate from time-based to size-based rotation. 
> This means each log can grow up to twice its configured size (due to 
> delaycompress).
> For example:
> 50M auth.log
> 50M auth.log.1
> 10M cloud.log
> 10M cloud.log.1
> 50M cron.log
> 50M cron.log.1
> 50M messages
> 50M messages.1
> ...
> Some files will grow slowly, but eventually they will reach their maximum 
> size. The total allowed log size with the current config is well beyond the 
> size of the log partition.
> Having a full /dev/log puts the VR in a state where operations on it 
> critically fail.





[jira] [Commented] (CLOUDSTACK-9779) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to load balancing rule

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869908#comment-15869908
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9779:


Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1937#discussion_r101515754
  
--- Diff: server/src/com/cloud/network/NetworkServiceImpl.java ---
@@ -852,7 +852,8 @@ public boolean releaseSecondaryIpFromNic(long 
ipAddressId) {
 throw new InvalidParameterValueException("Can' remove the 
ip " + secondaryIp + "is associate with static NAT rule public IP address id " 
+ publicIpVO.getId());
--- End diff --

@niteshsarda I've created a PR for it: 
https://github.com/apache/cloudstack/pull/1947


> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> 
>
> Key: CLOUDSTACK-9779
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Nitesh Sarda
>
> ISSUE 
> =
> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> load balancing rule
> REPRO STEPS
> ==
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire secondary IP on NICs of both VMs and make sure they have the same 
> value, user can input the IP address.
> 4. Configure a load balancing rule on one of the secondary IP addresses and 
> try releasing the other secondary IP address.
> 5. The operation fails.
> EXPECTED BEHAVIOR
> ==
> The secondary IP address should be released if there are no LB rules 
> associated with it.
> ACTUAL BEHAVIOR
> ==
> Releasing the secondary IP address fails even though there are no LB rules 
> associated with it.





[jira] [Commented] (CLOUDSTACK-9789) Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to static nat

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869906#comment-15869906
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9789:


GitHub user ustcweizhou opened a pull request:

https://github.com/apache/cloudstack/pull/1947

CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated 
static nat which is actually not used



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ustcweizhou/cloudstack RemoveSecondaryIP

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1947.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1947


commit 0f054246b65d003ae7f024b6ef125b2ec1846879
Author: Wei Zhou 
Date:   2017-02-16T13:18:56Z

CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated 
static nat which is actually not used




> Releasing secondary guest IP fails with error VM nic Ip x.x.x.x is mapped to 
> static nat
> ---
>
> Key: CLOUDSTACK-9789
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9789
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> similar to https://issues.apache.org/jira/browse/CLOUDSTACK-9779
> steps to reproduce
> 1. Create two isolated guest networks with same CIDR
> 2. Deploy VMs on both networks
> 3. Acquire a secondary IP on the NICs of both VMs and make sure they have the 
> same value (the user can input the IP address).
> 4. Acquire a public IP and enable static NAT to the secondary IP on the first VM.
> 5. Try to remove the secondary IP on the second VM. The operation fails.





  1   2   >