[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843290#comment-15843290
 ] 

rashmidixit commented on CLOUDSTACK-9604:
-

Github user pdube commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r98259432
  
--- Diff: plugins/hypervisors/xenserver/test/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixRequestWrapperTest.java ---
@@ -436,7 +436,7 @@ public void testResizeVolumeCommand() {
 final Answer answer = wrapper.execute(resizeCommand, citrixResourceBase);
 verify(citrixResourceBase, times(1)).getConnection();
 
-assertFalse(answer.getResult());
+//assertFalse(answer.getResult());
--- End diff --

Why comment this out?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


> Root disk resize support for VMware and XenServer
> -
>
> Key: CLOUDSTACK-9604
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9604
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Priyank Parihar
>Assignee: Priyank Parihar
> Attachments: 1.png, 2.png, 3.png
>
>
> Currently the root size of an instance is locked to that of the template. 
> This creates unnecessary template duplicates, prevents the creation of a 
> marketplace, wastes time and disk space, and generally makes work more 
> complicated.
> Real-life example: a small VPS provider might want to offer the following 
> sizes (in GB):
> 10,20,40,80,160,240,320,480,620
> That's 9 offerings.
> The template selection could look like this, including real disk space used:
> Windows 2008 ~10GB
> Windows 2008+Plesk ~15GB
> Windows 2008+MSSQL ~15GB
> Windows 2012 ~10GB
> Windows 2012+Plesk ~15GB
> Windows 2012+MSSQL ~15GB
> CentOS ~1GB
> CentOS+CPanel ~3GB
> CentOS+Virtualmin ~3GB
> CentOS+Zimbra ~3GB
> CentOS+Docker ~2GB
> Debian ~1GB
> Ubuntu LTS ~1GB
> In this case the total disk space used by templates will be 828 GB, which is 
> almost 1 TB. If your storage is expensive and limited SSD, this can get 
> painful!
> If the root resize feature is enabled we can reduce this to under 100 GB.
> Specifications and Description 
> Administrators don't want to deploy duplicate OS templates of differing 
> sizes just to support different storage packages. Instead, the VM deployment 
> can accept a size for the root disk and adjust the template clone 
> accordingly. In addition, CloudStack already supports data disk resizing for 
> existing volumes; we can extend that functionality to resize existing root 
> disks. 
>   As mentioned, we can leverage the existing design for resizing an existing 
> volume. The difference with root volumes is that we can't resize via disk 
> offering; therefore, we need to verify that no disk offering was passed, just 
> a size. The existing enforcement of new size > existing size will still 
> serve its purpose.
>For deployment-based resize (ROOT volume size different from template 
> size), we pass the rootdisksize parameter when the existing code allocates 
> the root volume. In the process, we validate that the root disk size is > 
> existing template size, and non-zero. This will persist the root volume as 
> the desired size regardless of whether or not the VM is started on deploy. 
> Then hypervisor-specific code needs to pay attention to the 
> VolumeObjectTO's size attribute and use that when doing the work of cloning 
> from template, rather than inheriting the template's size. This can be 
> implemented one hypervisor at a time, and as such there needs to be a check 
> in UserVmManagerImpl to fail unsupported hypervisors with 
> InvalidParameterValueException when the rootdisksize is passed.
>
> Hypervisor specific changes
> XenServer
> Resize ROOT volume is only supported for stopped VMs
> Newly created ROOT volume will be resized after clone from template
> VMware  
> Resize ROOT volume is only supported for stopped VMs.
> New size should be larger than the previous size.
> Newly created ROOT volume will be resized after clone from template iff:
> there is no root disk chaining (i.e., a full clone is used),
> and the Root Disk controller setting is not IDE.
> Previously created ROOT volume can be resized iff:
> there is no root disk chaining,
> and the Root Disk controller setting is not IDE.
> Web Services APIs
> resizeVolume API call will not change, but it will accept volume UUIDs of 
> root volumes in the id parameter for resizing.
> deployVirtualMachine API call will allow a new rootdisksize parameter to be 
> passed. This parameter will be used as the disk size (in GB) when cloning 
> from template.
> UI
> 1) (refer attached image 1) shows that the resize volume option is added for 
> ROOT disks.
> 2) (refer

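The validation flow described above (reject unsupported hypervisors with InvalidParameterValueException, require a non-zero rootdisksize larger than the template size) and the VMware resize conditions can be sketched as below. This is only an illustration: the class, method, and enum names are hypothetical, not the actual UserVmManagerImpl code, and a plain error string stands in for the real exception.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the checks described above; names are hypothetical.
public class RootDiskResizeRules {
    public enum HypervisorType { KVM, XENSERVER, VMWARE, HYPERV }

    // Hypervisors assumed (for this sketch) to honor VolumeObjectTO's size
    // when cloning from template.
    static final Set<HypervisorType> SUPPORTED =
            EnumSet.of(HypervisorType.KVM, HypervisorType.XENSERVER, HypervisorType.VMWARE);

    // Returns null when the rootdisksize override is acceptable, otherwise an
    // error message (stand-in for throwing InvalidParameterValueException).
    public static String validate(long rootDiskSizeGb, long templateSizeGb, HypervisorType type) {
        if (!SUPPORTED.contains(type)) {
            return "rootdisksize override is not supported on " + type;
        }
        if (rootDiskSizeGb <= 0) {
            return "rootdisksize must be non-zero";
        }
        if (rootDiskSizeGb <= templateSizeGb) {
            return "rootdisksize must be larger than the template size";
        }
        return null;
    }

    // VMware-specific rule from the list above: a ROOT volume can be resized
    // only when the VM is stopped, there is no disk chaining (full clone),
    // and the root disk controller is not IDE.
    public static boolean canResizeVmwareRoot(boolean vmStopped, boolean hasDiskChain, String controller) {
        return vmStopped && !hasDiskChain && !"IDE".equalsIgnoreCase(controller);
    }
}
```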
[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843291#comment-15843291
 ] 

rashmidixit commented on CLOUDSTACK-9604:
-

Github user pdube commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r98259959
  
--- Diff: server/src/com/cloud/vm/UserVmManagerImpl.java ---
@@ -3520,27 +3520,17 @@ public UserVmVO doInTransaction(TransactionStatus status) throws InsufficientCap
 }
 rootDiskSize = Long.parseLong(customParameters.get("rootdisksize"));
 
-// only KVM supports rootdisksize override
-if (hypervisorType != HypervisorType.KVM) {
--- End diff --

Why was XS blocked before?





[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843287#comment-15843287
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user pdube commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r98259395
  
--- Diff: plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixResizeVolumeCommandWrapper.java ---
@@ -48,6 +48,11 @@ public Answer execute(final ResizeVolumeCommand command, final CitrixResourceBas
 long newSize = command.getNewSize();
 
 try {
+
+if(command.getCurrentSize() <= newSize) {
+s_logger.info("No need to resize volume: " + volId + ", current size " + command.getCurrentSize() + " is same as new size " + newSize);
+return new ResizeVolumeAnswer(command, true, "success", newSize);
+}
--- End diff --

So you can't increase the size of a volume? This seems flawed
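The condition in the quoted diff returns success for any newSize >= currentSize, so a legitimate grow request would be reported as done without resizing anything. A guard that distinguishes the three cases might look like the following sketch; the names are illustrative, and the quoted wrapper's ResizeVolumeCommand/ResizeVolumeAnswer and the actual XenServer resize call are not reproduced here:

```java
// Sketch of a size guard that only short-circuits true no-ops.
public class ResizeGuard {
    public static String check(long currentSize, long newSize) {
        if (newSize == currentSize) {
            return "noop";    // nothing to do; report success without resizing
        }
        if (newSize < currentSize) {
            return "error";   // shrinking is not supported; fail instead of faking success
        }
        return "resize";      // newSize > currentSize: perform the actual resize
    }
}
```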



[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843285#comment-15843285
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user pdube commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r98259959
  
--- Diff: server/src/com/cloud/vm/UserVmManagerImpl.java ---
@@ -3520,27 +3520,17 @@ public UserVmVO doInTransaction(TransactionStatus status) throws InsufficientCap
 }
 rootDiskSize = Long.parseLong(customParameters.get("rootdisksize"));
 
-// only KVM supports rootdisksize override
-if (hypervisorType != HypervisorType.KVM) {
--- End diff --

Why was XS blocked before?



[jira] [Commented] (CLOUDSTACK-9604) Root disk resize support for VMware and XenServer

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843286#comment-15843286
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9604:


Github user pdube commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1813#discussion_r98259432
  
--- Diff: plugins/hypervisors/xenserver/test/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixRequestWrapperTest.java ---
@@ -436,7 +436,7 @@ public void testResizeVolumeCommand() {
 final Answer answer = wrapper.execute(resizeCommand, citrixResourceBase);
 verify(citrixResourceBase, times(1)).getConnection();
 
-assertFalse(answer.getResult());
+//assertFalse(answer.getResult());
--- End diff --

Why comment this out?



[jira] [Issue Comment Deleted] (CLOUDSTACK-3783) VPC VR not functioning with Openvswitch

2017-01-27 Thread Michael (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael updated CLOUDSTACK-3783:

Comment: was deleted

(was: Was this fixed?  I have the issue in 4.5 and even when trying 4.9)

> VPC VR not functioning with Openvswitch
> ---
>
> Key: CLOUDSTACK-3783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3783
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Devices, SystemVM
>Affects Versions: 4.1.0, 4.2.0
> Environment: Host: ubuntu 13.04 x86_64 (up-to-date). openvswitch: 
> 1.9.0. qemu-kvm 1.4.0. libvirt 1.0.2. 
> Cloudstack configured with advanced networking. Tags for physical networks
> - vswitch0 for public & guest traffic
> - vif9 for storage traffic
> - vif8 for management traffic
> Openvswitch configuration: 
> # ovs-vsctl show
>Bridge "vswitch1"
>Port "vswitch1"
>Interface "vswitch1"
>type: internal
>Port "eth1"
>Interface "eth1"
>Port "vif9"
>tag: 9
>Interface "vif9"
>type: internal
>Bridge "vswitch0"
>Port "vnet1"
>tag: 32
>Interface "vnet1"
>Port "vswitch0"
>Interface "vswitch0"
>type: internal
>Port "vif8"
>tag: 8
>Interface "vif8"
>type: internal
>Port "eth0"
>Interface "eth0"
>Bridge "cloud0"
>Port "vnet0"
>Interface "vnet0"
>Port "cloud0"
>Interface "cloud0"
>type: internal
>ovs_version: "1.9.0"
> /etc/cloudstack/agent/agent.properties: 
> #Storage
> #Tue Jul 23 16:57:16 MDT 2013
> guest.network.device=vswitch0
> workers=5
> private.network.device=vif8
> network.bridge.type=openvswitch
> port=8250
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> pod=1
> libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
> zone=1
> guid=98a9c232-b852-38bb-aec6-9617750429f5
> public.network.device=vswitch0
> cluster=1
> local.storage.uuid=d755f2e8-53ed-40f6-b7bb-923bf3693f09
> domr.scripts.dir=scripts/network/domr/kvm
> LibvirtComputingResource.id=1
>Reporter: Dinu Vlad
>
> When trying to add a VPC, the VR's public interface IP address is not 
> assigned correctly, nor the source nat or the default route. Cloudstack 
> reports the VPC is created successfully, however the VR is left in an 
> "incomplete" state. 
> Relevant agent.log extract: 
> 2013-07-19 16:39:20,961 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.PlugNicCommand
> 2013-07-19 16:39:20,970 DEBUG [kvm.resource.OvsVifDriver] 
> (agentRequest-Handler-2:null) plugging nic=[Nic:Public-192.168.1.68-vlan://32]
> 2013-07-19 16:39:20,970 DEBUG [kvm.resource.OvsVifDriver] 
> (agentRequest-Handler-2:null) creating a vlan dev and bridge for public 
> traffic per traffic label vswitch0
> 2013-07-19 16:39:21,116 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.routing.IpAssocVpcCommand
> 2013-07-19 16:39:21,126 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Executing: 
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh 
> vpc_ipassoc.sh 169.254.2.23  -A  -l 192.168.1.68 -c ethnull -g 192.168.1.1 -m 
> 24 -n 192.168.1.0
> 2013-07-19 16:39:29,107 DEBUG [kvm.resource.LibvirtComputingResource] 
> (UgentTask-5:null) Executing: 
> /usr/share/cloudstack-common/scripts/vm/network/security_group.py 
> get_rule_logs_for_vms
> 2013-07-19 16:39:29,233 DEBUG [kvm.resource.LibvirtComputingResource] 
> (UgentTask-5:null) Execution is successful.
> 2013-07-19 16:39:29,235 DEBUG [cloud.agent.Agent] (UgentTask-5:null) Sending 
> ping: Seq 7-103:  { Cmd , MgmtId: -1, via: 7, Ver: v1, Flags: 11, 
> [{"PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"newStates":{},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType":"Routing","hostId":7,"wait":0}}]
>  }
> 2013-07-19 16:39:29,243 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null) 
> Received response: Seq 7-103:  { Ans: , MgmtId: 112938636298, via: 7, Ver: 
> v1, Flags: 100010, 
> [{"PingAnswer":{"_command":{"hostType":"Routing","hostId":7,"wait":0},"result":true,"wait":0}}]
>  }
> 2013-07-19 16:39:38,707 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Execution is successful.
> 2013-07-19 16:39:38,708 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Device "ethnull" does not exist.
> Cannot find device "ethnull"
> Error: argument "Table_ethnull" is wrong: "table" value is invalid
> 

[jira] [Comment Edited] (CLOUDSTACK-9759) VPC VR ips.json ethNone instead of eth1

2017-01-27 Thread Michael (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842998#comment-15842998
 ] 

Michael edited comment on CLOUDSTACK-9759 at 1/27/17 3:27 PM:
--

Issue also occurred when attempting to use cloudstack 4.5:
VPC's VR public interface is being sent ethnull

2017-01-27 08:49:30,080 DEBUG [c.c.a.t.Request] (AgentManager-Handler-13:null) 
Seq 1-7817123053208338458: Processing:  { Ans: , MgmtId: 22095315174824, via: 
1, Ver: v1, Flags: 110, 
[{"com.cloud.agent.api.StartAnswer":{"vm":{"id":3,"name":"r-3-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian
 GNU/Linux 5.0 (64-bit)","platformEmulator":"Debian GNU/Linux 5","bootArgs":" 
vpccidr=10.0.0.0/8 domain=cs2cloud.internal dns1=8.8.8.8 template=domP 
name=r-3-VM eth0ip=169.254.3.175 eth0mask=255.255.0.0 type=vpcrouter 
disable_rp_filter=true","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"v7KZyL_Jl4j6046LrQDmxw","vncAddr":"10.100.0.11","params":{},"uuid":"51576f1f-803d-4496-a280-ead0eb3dac32","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"eaf789cb-0f98-4178-b68f-bffac63868d6","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"d5c9eef8-56b0-32eb-a7ed-68c0fe33b5e9","id":1,"poolType":"NetworkFilesystem","host":"10.100.0.21","path":"/mnt/clouddisk01/primary","port":2049,"url":"NetworkFilesystem://10.100.0.21/mnt/clouddisk01/primary/?ROLE=Primary=d5c9eef8-56b0-32eb-a7ed-68c0fe33b5e9"}},"name":"ROOT-3","size":312990208,"path":"eaf789cb-0f98-4178-b68f-bffac63868d6","volumeId":3,"vmName":"r-3-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":3,"deviceId":0,"hypervisorType":"KVM"}},"diskSeq":0,"path":"eaf789cb-0f98-4178-b68f-bffac63868d6","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"10.100.0.21","volumeSize":"312990208"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"74d5b224-0fca-4d35-91d5-cf313bac2136","uuid":"4031860c-ca26-4cfb-ade4-2bf2020dd104","ip":"169.254.3.175","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:03:af","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"result":true,"wait":0}},{"com.cloud.agent.api.check.CheckSshAnswer":{"result":true,"wait":0}},{"com.cloud.agent.api.GetDomRVersionAnswer":{"templateVersion":"Cloudstack
 Release 4.5.2 Tue Aug 11 00:42:47 UTC 
2015","scriptsVersion":"8ab35a06add83db9c319d717cd7b181f\n","result":true,"details":"Cloudstack
 Release 4.5.2 Tue Aug 11 00:42:47 UTC 
2015&8ab35a06add83db9c319d717cd7b181f\n","wait":0}},{"com.cloud.agent.api.PlugNicAnswer":{"result":true,"details":"success","wait":0}},{"com.cloud.agent.api.routing.GroupAnswer":{"results":["10.100.65.3
 - vpc_ipassoc - success: Device \"ethnull\" does not exist.\nCannot find 
device \"ethnull\"\narping: unknown iface ethnull\narping: unknown iface 
ethnull\nError: argument \"Table_ethnull\" is wrong: \"table\" value is 
invalid\n\nError: argument \"Table_ethnull\" is wrong: \"table\" value is 
invalid\n\nRTNETLINK answers: No such process\n","10.100.65.3 - 
vpc_privategateway - success: iptables: No chain/target/match by that 
name.\n"],"result":true,"wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"details":"iptables:
 Bad rule (does a matching rule exist in that chain?).\niptables: No 
chain/target/match by that 
name.\n","wait":0}},{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-3-VM","bytesSent":0,"bytesReceived":0,"result":true,"wait":0}}]
 }


was (Author: mabarkdoll):
Issue also occurred when attempting to use cloudstack 4.5:
2017-01-27 08:49:30,080 DEBUG [c.c.a.t.Request] (AgentManager-Handler-13:null) 
Seq 1-7817123053208338458: Processing:  { Ans: , MgmtId: 22095315174824, via: 
1, Ver: v1, Flags: 110, 
[{"com.cloud.agent.api.StartAnswer":{"vm":{"id":3,"name":"r-3-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian
 GNU/Linux 5.0 (64-bit)","platformEmulator":"Debian GNU/Linux 5","bootArgs":" 
vpccidr=10.0.0.0/8 domain=cs2cloud.internal dns1=8.8.8.8 template=domP 
name=r-3-VM eth0ip=169.254.3.175 eth0mask=255.255.0.0 type=vpcrouter 

[jira] [Commented] (CLOUDSTACK-9759) VPC VR ips.json ethNone instead of eth1

2017-01-27 Thread Michael (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842998#comment-15842998
 ] 

Michael commented on CLOUDSTACK-9759:
-

Issue also occurred when attempting to use cloudstack 4.5:
2017-01-27 08:49:30,080 DEBUG [c.c.a.t.Request] (AgentManager-Handler-13:null) 
Seq 1-7817123053208338458: Processing:  { Ans: , MgmtId: 22095315174824, via: 
1, Ver: v1, Flags: 110, 
[{"com.cloud.agent.api.StartAnswer":{"vm":{"id":3,"name":"r-3-VM","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian
 GNU/Linux 5.0 (64-bit)","platformEmulator":"Debian GNU/Linux 5","bootArgs":" 
vpccidr=10.0.0.0/8 domain=cs2cloud.internal dns1=8.8.8.8 template=domP 
name=r-3-VM eth0ip=169.254.3.175 eth0mask=255.255.0.0 type=vpcrouter 
disable_rp_filter=true","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"v7KZyL_Jl4j6046LrQDmxw","vncAddr":"10.100.0.11","params":{},"uuid":"51576f1f-803d-4496-a280-ead0eb3dac32","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"eaf789cb-0f98-4178-b68f-bffac63868d6","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"d5c9eef8-56b0-32eb-a7ed-68c0fe33b5e9","id":1,"poolType":"NetworkFilesystem","host":"10.100.0.21","path":"/mnt/clouddisk01/primary","port":2049,"url":"NetworkFilesystem://10.100.0.21/mnt/clouddisk01/primary/?ROLE=Primary=d5c9eef8-56b0-32eb-a7ed-68c0fe33b5e9"}},"name":"ROOT-3","size":312990208,"path":"eaf789cb-0f98-4178-b68f-bffac63868d6","volumeId":3,"vmName":"r-3-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":3,"deviceId":0,"hypervisorType":"KVM"}},"diskSeq":0,"path":"eaf789cb-0f98-4178-b68f-bffac63868d6","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"10.100.0.21","volumeSize":"312990208"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"74d5b224-0fca-4d35-91d5-cf313bac2136","uuid":"4031860c-ca26-4cfb-ade4-2bf2020dd104","ip":"169.254.3.175","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:03:af","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"result":true,"wait":0}},{"com.cloud.agent.api.check.CheckSshAnswer":{"result":true,"wait":0}},{"com.cloud.agent.api.GetDomRVersionAnswer":{"templateVersion":"Cloudstack
 Release 4.5.2 Tue Aug 11 00:42:47 UTC 
2015","scriptsVersion":"8ab35a06add83db9c319d717cd7b181f\n","result":true,"details":"Cloudstack
 Release 4.5.2 Tue Aug 11 00:42:47 UTC 
2015&8ab35a06add83db9c319d717cd7b181f\n","wait":0}},{"com.cloud.agent.api.PlugNicAnswer":{"result":true,"details":"success","wait":0}},{"com.cloud.agent.api.routing.GroupAnswer":{"results":["10.100.65.3
 - vpc_ipassoc - success: Device \"ethnull\" does not exist.\nCannot find 
device \"ethnull\"\narping: unknown iface ethnull\narping: unknown iface 
ethnull\nError: argument \"Table_ethnull\" is wrong: \"table\" value is 
invalid\n\nError: argument \"Table_ethnull\" is wrong: \"table\" value is 
invalid\n\nRTNETLINK answers: No such process\n","10.100.65.3 - 
vpc_privategateway - success: iptables: No chain/target/match by that 
name.\n"],"result":true,"wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"details":"iptables:
 Bad rule (does a matching rule exist in that chain?).\niptables: No 
chain/target/match by that 
name.\n","wait":0}},{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-3-VM","bytesSent":0,"bytesReceived":0,"result":true,"wait":0}}]
 }

> VPC VR ips.json ethNone instead of eth1
> ---
>
> Key: CLOUDSTACK-9759
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9759
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.9.2.0
> Environment: Centos 6.8, CloudStack 4.9.2.0 advanced networking vlans 
> for guest and public
>Reporter: Michael
>Priority: Critical
> Attachments: cloud.log, cloudstack.tar, messages
>
>
> Cloudstack deployment with advanced networking.
> My VPC VR is incorrectly assigned ethNone inside /etc/cloudstack/ips.json 
> for my eth1 public network interface. If I edit /etc/cloudstack/ips.json, 
> change both occurrences of ethNone to eth1 and restart the VPC, then the 
> network works for the isolated guest networks attached to the VPC VR and for 
> the VR itself. (Note: this issue only occurs with the VPC VR, not with guest 
> networks or guest shared networks.)
> Here is the contents of /etc/cloudstack/ips.json
> root@r31-VM# cat /etc/cloudstack/ips.json
> {
> "eth0": [
> {
> "add": true,
> "broadcast": "169.254.255.255",
> "cidr": 
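The workaround described in the report (replacing both occurrences of ethNone with eth1 in /etc/cloudstack/ips.json, then restarting the VPC) can be scripted. A minimal sketch, assuming a plain text substitution is sufficient and that the interface really should be eth1; `fix_ips_json` is a hypothetical helper, not part of CloudStack:

```python
from pathlib import Path

def fix_ips_json(path="/etc/cloudstack/ips.json"):
    """Replace every occurrence of the bogus 'ethNone' device name with 'eth1'.

    A plain text substitution keeps the file layout untouched apart from the
    device names; the VPC VR still has to be restarted afterwards.
    Returns the number of occurrences patched.
    """
    p = Path(path)
    text = p.read_text()
    count = text.count("ethNone")
    if count:
        p.write_text(text.replace("ethNone", "eth1"))
    return count
```

This only papers over the symptom on a running router; the underlying bug (the management server writing ethNone into ips.json) remains.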

[jira] [Comment Edited] (CLOUDSTACK-3783) VPC VR not functioning with Openvswitch

2017-01-27 Thread Michael (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842961#comment-15842961
 ] 

Michael edited comment on CLOUDSTACK-3783 at 1/27/17 3:11 PM:
--

Was this fixed?  I have the issue in 4.5 and even when trying 4.9


was (Author: mabarkdoll):
Was this fix?  I have the issue in 4.5 and even when trying 4.9

> VPC VR not functioning with Openvswitch
> ---
>
> Key: CLOUDSTACK-3783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3783
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Devices, SystemVM
>Affects Versions: 4.1.0, 4.2.0
> Environment: Host: ubuntu 13.04 x86_64 (up-to-date). openvswitch: 
> 1.9.0. qemu-kvm 1.4.0. libvirt 1.0.2. 
> Cloudstack configured with advanced networking. Tags for physical networks
> - vswitch0 for public & guest traffic
> - vif9 for storage traffic
> - vif8 for management traffic
> Openvswitch configuration: 
> # ovs-vsctl show
>Bridge "vswitch1"
>Port "vswitch1"
>Interface "vswitch1"
>type: internal
>Port "eth1"
>Interface "eth1"
>Port "vif9"
>tag: 9
>Interface "vif9"
>type: internal
>Bridge "vswitch0"
>Port "vnet1"
>tag: 32
>Interface "vnet1"
>Port "vswitch0"
>Interface "vswitch0"
>type: internal
>Port "vif8"
>tag: 8
>Interface "vif8"
>type: internal
>Port "eth0"
>Interface "eth0"
>Bridge "cloud0"
>Port "vnet0"
>Interface "vnet0"
>Port "cloud0"
>Interface "cloud0"
>type: internal
>ovs_version: "1.9.0"
> /etc/cloudstack/agent/agent.properties: 
> #Storage
> #Tue Jul 23 16:57:16 MDT 2013
> guest.network.device=vswitch0
> workers=5
> private.network.device=vif8
> network.bridge.type=openvswitch
> port=8250
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> pod=1
> libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
> zone=1
> guid=98a9c232-b852-38bb-aec6-9617750429f5
> public.network.device=vswitch0
> cluster=1
> local.storage.uuid=d755f2e8-53ed-40f6-b7bb-923bf3693f09
> domr.scripts.dir=scripts/network/domr/kvm
> LibvirtComputingResource.id=1
>Reporter: Dinu Vlad
>
> When trying to add a VPC, the VR's public interface IP address is not 
> assigned correctly, nor the source nat or the default route. Cloudstack 
> reports the VPC is created successfully, however the VR is left in an 
> "incomplete" state. 
> Relevant agent.log extract: 
> 2013-07-19 16:39:20,961 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.PlugNicCommand
> 2013-07-19 16:39:20,970 DEBUG [kvm.resource.OvsVifDriver] 
> (agentRequest-Handler-2:null) plugging nic=[Nic:Public-192.168.1.68-vlan://32]
> 2013-07-19 16:39:20,970 DEBUG [kvm.resource.OvsVifDriver] 
> (agentRequest-Handler-2:null) creating a vlan dev and bridge for public 
> traffic per traffic label vswitch0
> 2013-07-19 16:39:21,116 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.routing.IpAssocVpcCommand
> 2013-07-19 16:39:21,126 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Executing: 
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh 
> vpc_ipassoc.sh 169.254.2.23  -A  -l 192.168.1.68 -c ethnull -g 192.168.1.1 -m 
> 24 -n 192.168.1.0
> 2013-07-19 16:39:29,107 DEBUG [kvm.resource.LibvirtComputingResource] 
> (UgentTask-5:null) Executing: 
> /usr/share/cloudstack-common/scripts/vm/network/security_group.py 
> get_rule_logs_for_vms
> 2013-07-19 16:39:29,233 DEBUG [kvm.resource.LibvirtComputingResource] 
> (UgentTask-5:null) Execution is successful.
> 2013-07-19 16:39:29,235 DEBUG [cloud.agent.Agent] (UgentTask-5:null) Sending 
> ping: Seq 7-103:  { Cmd , MgmtId: -1, via: 7, Ver: v1, Flags: 11, 
> [{"PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"newStates":{},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType":"Routing","hostId":7,"wait":0}}]
>  }
> 2013-07-19 16:39:29,243 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null) 
> Received response: Seq 7-103:  { Ans: , MgmtId: 112938636298, via: 7, Ver: 
> v1, Flags: 100010, 
> [{"PingAnswer":{"_command":{"hostType":"Routing","hostId":7,"wait":0},"result":true,"wait":0}}]
>  }
> 2013-07-19 16:39:38,707 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Execution is successful.
> 2013-07-19 16:39:38,708 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> 

[jira] [Commented] (CLOUDSTACK-3783) VPC VR not functioning with Openvswitch

2017-01-27 Thread Michael (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842961#comment-15842961
 ] 

Michael commented on CLOUDSTACK-3783:
-

Was this fix?  I have the issue in 4.5 and even when trying 4.9

> VPC VR not functioning with Openvswitch
> ---
>
> Key: CLOUDSTACK-3783
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3783
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Network Devices, SystemVM
>Affects Versions: 4.1.0, 4.2.0
> Environment: Host: ubuntu 13.04 x86_64 (up-to-date). openvswitch: 
> 1.9.0. qemu-kvm 1.4.0. libvirt 1.0.2. 
> Cloudstack configured with advanced networking. Tags for physical networks
> - vswitch0 for public & guest traffic
> - vif9 for storage traffic
> - vif8 for management traffic
> Openvswitch configuration: 
> # ovs-vsctl show
>Bridge "vswitch1"
>Port "vswitch1"
>Interface "vswitch1"
>type: internal
>Port "eth1"
>Interface "eth1"
>Port "vif9"
>tag: 9
>Interface "vif9"
>type: internal
>Bridge "vswitch0"
>Port "vnet1"
>tag: 32
>Interface "vnet1"
>Port "vswitch0"
>Interface "vswitch0"
>type: internal
>Port "vif8"
>tag: 8
>Interface "vif8"
>type: internal
>Port "eth0"
>Interface "eth0"
>Bridge "cloud0"
>Port "vnet0"
>Interface "vnet0"
>Port "cloud0"
>Interface "cloud0"
>type: internal
>ovs_version: "1.9.0"
> /etc/cloudstack/agent/agent.properties: 
> #Storage
> #Tue Jul 23 16:57:16 MDT 2013
> guest.network.device=vswitch0
> workers=5
> private.network.device=vif8
> network.bridge.type=openvswitch
> port=8250
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> pod=1
> libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
> zone=1
> guid=98a9c232-b852-38bb-aec6-9617750429f5
> public.network.device=vswitch0
> cluster=1
> local.storage.uuid=d755f2e8-53ed-40f6-b7bb-923bf3693f09
> domr.scripts.dir=scripts/network/domr/kvm
> LibvirtComputingResource.id=1
>Reporter: Dinu Vlad
>
> When trying to add a VPC, the VR's public interface IP address is not 
> assigned correctly, nor the source nat or the default route. Cloudstack 
> reports the VPC is created successfully, however the VR is left in an 
> "incomplete" state. 
> Relevant agent.log extract: 
> 2013-07-19 16:39:20,961 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.PlugNicCommand
> 2013-07-19 16:39:20,970 DEBUG [kvm.resource.OvsVifDriver] 
> (agentRequest-Handler-2:null) plugging nic=[Nic:Public-192.168.1.68-vlan://32]
> 2013-07-19 16:39:20,970 DEBUG [kvm.resource.OvsVifDriver] 
> (agentRequest-Handler-2:null) creating a vlan dev and bridge for public 
> traffic per traffic label vswitch0
> 2013-07-19 16:39:21,116 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.routing.IpAssocVpcCommand
> 2013-07-19 16:39:21,126 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Executing: 
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh 
> vpc_ipassoc.sh 169.254.2.23  -A  -l 192.168.1.68 -c ethnull -g 192.168.1.1 -m 
> 24 -n 192.168.1.0
> 2013-07-19 16:39:29,107 DEBUG [kvm.resource.LibvirtComputingResource] 
> (UgentTask-5:null) Executing: 
> /usr/share/cloudstack-common/scripts/vm/network/security_group.py 
> get_rule_logs_for_vms
> 2013-07-19 16:39:29,233 DEBUG [kvm.resource.LibvirtComputingResource] 
> (UgentTask-5:null) Execution is successful.
> 2013-07-19 16:39:29,235 DEBUG [cloud.agent.Agent] (UgentTask-5:null) Sending 
> ping: Seq 7-103:  { Cmd , MgmtId: -1, via: 7, Ver: v1, Flags: 11, 
> [{"PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"newStates":{},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType":"Routing","hostId":7,"wait":0}}]
>  }
> 2013-07-19 16:39:29,243 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null) 
> Received response: Seq 7-103:  { Ans: , MgmtId: 112938636298, via: 7, Ver: 
> v1, Flags: 100010, 
> [{"PingAnswer":{"_command":{"hostType":"Routing","hostId":7,"wait":0},"result":true,"wait":0}}]
>  }
> 2013-07-19 16:39:38,707 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Execution is successful.
> 2013-07-19 16:39:38,708 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource] 
> (agentRequest-Handler-2:null) Device "ethnull" does not exist.
> Cannot find device "ethnull"
> Error: argument "Table_ethnull" is wrong: "table" value is invalid

[jira] [Closed] (CLOUDSTACK-9729) Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to com.amazonaws.util.json.JSONException

2017-01-27 Thread Kris Sterckx (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kris Sterckx closed CLOUDSTACK-9729.


> Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to 
> com.amazonaws.util.json.JSONException
> ---
>
> Key: CLOUDSTACK-9729
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9729
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Kris Sterckx
>Assignee: Kris Sterckx
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> https://github.com/apache/cloudstack/pull/1638 has moved from
> {noformat}
> 1.10.64
> {noformat}
> to
> {noformat}
> 1.11.61
> {noformat}
> which breaks the use of com.amazonaws.util.json.JSONException.
> This breaks the Nuage VSP network plugin because of its dependency on 
> {noformat}
>   net.nuagenetworks.vsp
>   nuage-vsp-acs-client
>   1.0.0
> {noformat} 
>  
> We need to fix that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CLOUDSTACK-9729) Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to com.amazonaws.util.json.JSONException

2017-01-27 Thread Raf Smeets (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842794#comment-15842794
 ] 

Raf Smeets edited comment on CLOUDSTACK-9729 at 1/27/17 12:57 PM:
--

In total 181 marvin tests have passed, 1 test has failed, 7 tests are skipped.

Here are the details:
test_nuage_internal_dns.py 6 tests passed
test_nuage_non_public_sharednetwork_ip_range.py 5 tests PASSED 
test_nuage_password_reset.py 1 test PASSED 
test_nuage_public_sharednetwork_ip_range.py 8 tests PASSED 
test_nuage_publicsharednetwork.py 51 tests PASSED 
test_nuage_public_sharednetwork_userdata.py 14 tests PASSED
test_nuage_sharednetwork_deployVM.py 51 tests PASSED 
test_nuage_sharednetwork_vpc_vm_monitor.py 10 tests PASSED 
test_nuage_source_nat.py 2 tests PASSED, 6 are SKIPPED Configured Nuage VSP SDN 
platform infrastructure does not support underlay networking 
test_nuage_static_nat.py 10 tests PASSED 
test_nuage_vpc_internal_lb.py 7 tests PASSED, 1 test FAILED 
test_05_nuage_internallb_traffic: lb is not in correct state on VSD (line1237); 
known issue https://issues.apache.org/jira/browse/CLOUDSTACK-9749, fix needs to 
be implemented upstream.
test_nuage_vpc_network.py 1 test PASSED, 1 SKIPPED There is only one Zone 
configured: skipping test 
test_nuage_vsp.py 2 tests PASSED.



was (Author: smeetsr):
In total 180 marvin tests have passed, 2 tests have failed, 7 tests are skipped.

Here are the details:
test_nuage_internal_dns.py 6 tests passed
test_nuage_non_public_sharednetwork_ip_range.py 5 tests PASSED 
test_nuage_password_reset.py 1 test PASSED 
test_nuage_public_sharednetwork_ip_range.py 8 tests PASSED 
test_nuage_publicsharednetwork.py 51 tests PASSED 
test_nuage_public_sharednetwork_userdata.py 13 tests PASSED, 1 test FAILED
test_05 sometimes router vm state is not updated in VSD which can be fixed by 
increasing the nbr of retries to 20 in verify_vsd_object_status
test_nuage_sharednetwork_deployVM.py 51 tests PASSED 
test_nuage_sharednetwork_vpc_vm_monitor.py 10 tests PASSED 
test_nuage_source_nat.py 2 tests PASSED, 6 are SKIPPED Configured Nuage VSP SDN 
platform infrastructure does not support underlay networking 
test_nuage_static_nat.py 10 tests PASSED 
test_nuage_vpc_internal_lb.py 7 PASSED, 1 failed 
test_05_nuage_internallb_traffic lb is not in correct state on VSD line1237 
known issue CLOUD-972 fix needs to be implemented upstream.
test_nuage_vpc_network.py 1 test PASSED, 1 SKIPPED There is only one Zone 
configured: skipping test 
test_nuage_vsp.py 2 tests PASSED.


> Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to 
> com.amazonaws.util.json.JSONException
> ---
>
> Key: CLOUDSTACK-9729
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9729
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Kris Sterckx
>Assignee: Kris Sterckx
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> https://github.com/apache/cloudstack/pull/1638 has moved from
> {noformat}
> 1.10.64
> {noformat}
> to
> {noformat}
> 1.11.61
> {noformat}
> which breaks the use of com.amazonaws.util.json.JSONException.
> This breaks the Nuage VSP network plugin because of its dependency on 
> {noformat}
>   net.nuagenetworks.vsp
>   nuage-vsp-acs-client
>   1.0.0
> {noformat} 
>  
> We need to fix that.





[jira] [Commented] (CLOUDSTACK-9729) Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to com.amazonaws.util.json.JSONException

2017-01-27 Thread Raf Smeets (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842794#comment-15842794
 ] 

Raf Smeets commented on CLOUDSTACK-9729:


In total 180 marvin tests have passed, 2 tests have failed, 7 tests are skipped.

Here are the details:
test_nuage_internal_dns.py 6 tests passed
test_nuage_non_public_sharednetwork_ip_range.py 5 tests PASSED 
test_nuage_password_reset.py 1 test PASSED 
test_nuage_public_sharednetwork_ip_range.py 8 tests PASSED 
test_nuage_publicsharednetwork.py 51 tests PASSED 
test_nuage_public_sharednetwork_userdata.py 13 tests PASSED, 1 test FAILED
test_05: sometimes the router VM state is not updated in VSD, which can be fixed 
by increasing the number of retries to 20 in verify_vsd_object_status
test_nuage_sharednetwork_deployVM.py 51 tests PASSED 
test_nuage_sharednetwork_vpc_vm_monitor.py 10 tests PASSED 
test_nuage_source_nat.py 2 tests PASSED, 6 are SKIPPED Configured Nuage VSP SDN 
platform infrastructure does not support underlay networking 
test_nuage_static_nat.py 10 tests PASSED 
test_nuage_vpc_internal_lb.py 7 PASSED, 1 failed 
test_05_nuage_internallb_traffic lb is not in correct state on VSD line1237 
known issue CLOUD-972 fix needs to be implemented upstream.
test_nuage_vpc_network.py 1 test PASSED, 1 SKIPPED There is only one Zone 
configured: skipping test 
test_nuage_vsp.py 2 tests PASSED.


> Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to 
> com.amazonaws.util.json.JSONException
> ---
>
> Key: CLOUDSTACK-9729
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9729
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Kris Sterckx
>Assignee: Kris Sterckx
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> https://github.com/apache/cloudstack/pull/1638 has moved from
> {noformat}
> 1.10.64
> {noformat}
> to
> {noformat}
> 1.11.61
> {noformat}
> which breaks the use of com.amazonaws.util.json.JSONException.
> This breaks the Nuage VSP network plugin because of its dependency on 
> {noformat}
>   net.nuagenetworks.vsp
>   nuage-vsp-acs-client
>   1.0.0
> {noformat} 
>  
> We need to fix that.





[jira] [Resolved] (CLOUDSTACK-9729) Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to com.amazonaws.util.json.JSONException

2017-01-27 Thread Frank Maximus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Maximus resolved CLOUDSTACK-9729.
---
Resolution: Fixed
  Assignee: Kris Sterckx  (was: Frank Maximus)

> Spring 4.x support PR-1638 broke Nuage VSP plugin as of dependency to 
> com.amazonaws.util.json.JSONException
> ---
>
> Key: CLOUDSTACK-9729
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9729
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Kris Sterckx
>Assignee: Kris Sterckx
>Priority: Blocker
> Fix For: 4.10.0.0
>
>
> https://github.com/apache/cloudstack/pull/1638 has moved from
> {noformat}
> 1.10.64
> {noformat}
> to
> {noformat}
> 1.11.61
> {noformat}
> which breaks the use of com.amazonaws.util.json.JSONException.
> This breaks the Nuage VSP network plugin because of its dependency on 
> {noformat}
>   net.nuagenetworks.vsp
>   nuage-vsp-acs-client
>   1.0.0
> {noformat} 
>  
> We need to fix that.





[jira] [Commented] (CLOUDSTACK-9738) Optimize vm expunge process for instances with vm snapshots

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842687#comment-15842687
 ] 

rashmidixit commented on CLOUDSTACK-9738:
-

Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1905
  
@rhtyd @karuturi Can we run tests on this PR and merge on success ?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


> Optimize vm expunge process for instances with vm snapshots
> ---
>
> Key: CLOUDSTACK-9738
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9738
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> h2. Description
> It was noticed that expunging instances with many VM snapshots took a lot of 
> time, as the hypervisor received as many tasks as the instance had VM 
> snapshots, in addition to the delete-VM task. We propose optimizing this 
> process for instances with VM snapshots by sending only one delete task to 
> the hypervisor, which deletes the VM together with its snapshots.
> h2. Use cases
> # deleteVMSnapshot -> no change to current behavior
> # destroyVM with expunge=false -> no action is performed on the VM snapshots 
> at that moment. When the VM cleanup thread is executed it performs the same 
> sequence as #3. If the instance is recovered before being expunged by the 
> cleanup thread it remains intact with its VM snapshot chain present
> # destroyVM with expunge=true:
> #* The VM snapshot is marked with a removed timestamp and state = Expunging 
> in the DB
> #* The VM is deleted in HW
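The single-delete-task idea described above can be pictured with a small sketch. All names here are hypothetical illustrations, not the actual CloudStack classes: instead of issuing one hypervisor task per VM snapshot plus a final delete-VM task, the optimized expunge path issues one combined task.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HypervisorQueue:
    """Stand-in for the hypervisor's task queue."""
    tasks: List[str] = field(default_factory=list)

    def submit(self, task: str) -> None:
        self.tasks.append(task)

def expunge_naive(q: HypervisorQueue, vm: str, snapshots: List[str]) -> None:
    # Old behavior: one task per VM snapshot, then the delete-VM task.
    for s in snapshots:
        q.submit(f"delete-snapshot:{s}")
    q.submit(f"delete-vm:{vm}")

def expunge_optimized(q: HypervisorQueue, vm: str, snapshots: List[str]) -> None:
    # Proposed behavior: a single delete task; the hypervisor removes
    # the VM together with its whole snapshot chain.
    q.submit(f"delete-vm-with-snapshots:{vm}")
```

For an instance with N snapshots this reduces the hypervisor round trips from N+1 to 1, which is where the observed speedup comes from.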





[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842691#comment-15842691
 ] 

rashmidixit commented on CLOUDSTACK-9752:
-

Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
@rhtyd @karuturi Can we run vmware tests on this PR and merge on success ?




> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume attach slowness caused by searching 
> the datastore for vmdk files before creating the volume (searching for 
> {{.vmdk}}, {{-flat.vmdk}} and {{-delta.vmdk}} files to delete them if they 
> exist). This search is not necessary when attaching a volume in Allocated 
> state, because the volume's files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment very slow, as 
> these log lines show:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}
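The optimization can be pictured as a guard on the attach path. A minimal sketch with hypothetical names (`needs_stale_vmdk_cleanup`, `attach_volume`); the real logic lives in CloudStack's VMware storage resource code:

```python
def needs_stale_vmdk_cleanup(volume_state: str) -> bool:
    # A volume still in Allocated state has never been created on the
    # datastore, so the (potentially very slow) search for leftover
    # .vmdk / -flat.vmdk / -delta.vmdk files can be skipped entirely.
    return volume_state != "Allocated"

def attach_volume(volume_state, search_datastore):
    """Sketch of the attach path: search for stale files only when needed.

    `search_datastore` stands in for the expensive per-suffix datastore
    search seen in the log excerpt above; returns the suffixes searched.
    """
    searched = []
    if needs_stale_vmdk_cleanup(volume_state):
        for suffix in (".vmdk", "-flat.vmdk", "-delta.vmdk"):
            searched.append(search_datastore(suffix))
    return searched
```

In the log excerpt each of the three searches took minutes on a large datastore, so skipping all of them for Allocated volumes removes the dominant cost of the attach.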





[jira] [Commented] (CLOUDSTACK-9752) [Vmware] Optimization of volume attachness to vm

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842686#comment-15842686
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9752:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1913
  
@rhtyd @karuturi Can we run vmware tests on this PR and merge on success ?


> [Vmware] Optimization of volume attachness to vm
> 
>
> Key: CLOUDSTACK-9752
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9752
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: VMware
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> This optimization aims to reduce volume attach slowness caused by searching 
> the datastore for vmdk files before creating the volume (searching for 
> {{.vmdk}}, {{-flat.vmdk}} and {{-delta.vmdk}} files to delete them if they 
> exist). This search is not necessary when attaching a volume in Allocated 
> state, because the volume's files do not yet exist on the datastore.
> On large datastores, this search can make volume attachment very slow, as 
> these log lines show:
> {code}
> 13-mgmt.log:2016-11-02 10:16:33,136 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk in [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:19:42,567 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1.vmdk
> 13-mgmt.log:2016-11-02 10:19:42,719 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> …
> 13-mgmt.log:2016-11-02 10:19:44,399 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:22:07,581 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-flat.vmdk
> 13-mgmt.log:2016-11-02 10:22:07,731 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Search file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk on 
> [b5ebda046d613e079b5874b169cd848f] 
> 13-mgmt.log:2016-11-02 10:22:09,745 INFO  [vmware.mo.DatastoreMO] 
> (DirectAgent-931:ctx-5687d68e uscrlpdcsesx240.ads.autodesk.com, 
> job-8675314/job-8675315, cmd: CreateObjectCommand) Searching file 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk in 
> [b5ebda046d613e079b5874b169cd848f]
> 13-mgmt.log:2016-11-02 10:25:06,362 WARN  
> [storage.resource.VmwareStorageLayoutHelper] (DirectAgent-931:ctx-5687d68e 
> uscrlpdcsesx240.ads.autodesk.com, job-8675314/job-8675315, cmd: 
> CreateObjectCommand) Unable to locate VMDK file: 
> 9ce7731fd38b4045afbb7ce9754abbc1-delta.vmdk
> {code}
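The state check behind this optimization can be sketched as follows. This is an illustrative sketch only; `VolumeState`, `VolumeAttachHelper` and `canSkipVmdkSearch` are hypothetical names, not the actual CloudStack API:

```java
// Minimal sketch of the proposed check (hypothetical names, not the real
// CloudStack code): the pre-attach VMDK cleanup search is only worthwhile
// when the volume may already have files on the datastore.
enum VolumeState { ALLOCATED, READY, EXPUNGING }

class VolumeAttachHelper {
    /** True when the datastore-wide VMDK search can be skipped entirely. */
    static boolean canSkipVmdkSearch(VolumeState state) {
        // An Allocated volume has never been created on the datastore, so
        // no .vmdk, -flat.vmdk or -delta.vmdk leftovers can exist for it.
        return state == VolumeState.ALLOCATED;
    }
}
```

With this check in place, the three searches shown in the log above would simply not run for Allocated volumes.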





[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842681#comment-15842681
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9457:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
@karuturi Can we  merge this one ?


> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
> Attachments: TemplateDetails1.JPG, VMDetails1.JPG, 
> VMDetailsRunning.JPG, VMDetailsStopped.JPG
>
>
> h2. Introduction
> As suggested in [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379],
> it would be nice to be able to customize vm details through the API.





[jira] [Commented] (CLOUDSTACK-9457) Allow retrieval and modification of VM and template details via API and UI

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842683#comment-15842683
 ] 

rashmidixit commented on CLOUDSTACK-9457:
-

Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1767
  
@karuturi Can we  merge this one ?




> Allow retrieval and modification of VM and template details via API and UI
> --
>
> Key: CLOUDSTACK-9457
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9457
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>Priority: Minor
> Attachments: TemplateDetails1.JPG, VMDetails1.JPG, 
> VMDetailsRunning.JPG, VMDetailsStopped.JPG
>
>
> h2. Introduction
> As suggested in [9379|https://issues.apache.org/jira/browse/CLOUDSTACK-9379],
> it would be nice to be able to customize vm details through the API.





[jira] [Commented] (CLOUDSTACK-9738) Optimize vm expunge process for instances with vm snapshots

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842684#comment-15842684
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9738:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1905
  
@rhtyd @karuturi Can we run tests on this PR and merge on success ?


> Optimize vm expunge process for instances with vm snapshots
> ---
>
> Key: CLOUDSTACK-9738
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9738
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.10.0.0
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Fix For: 4.10.0.0
>
>
> h2. Description
> It was noticed that expunging instances with many vm snapshots took a long
> time, as the hypervisor received as many tasks as the instance had vm
> snapshots, in addition to the delete vm task. We propose optimizing this
> process for instances with vm snapshots by sending only one delete task to
> the hypervisor, which deletes the vm and its snapshots.
> h2. Use cases
> # deleteVMSnapshot -> no changes to current behavior
> # destroyVM with expunge=false -> no action on the VMSnapshot is performed
> at the moment. When the VM cleanup thread is executed it will perform the
> same sequence as #3. If the instance is recovered before being expunged by
> the cleanup thread it remains intact, with the VMSnapshot chain present
> # destroyVM with expunge=true:
> #*   VMSnapshot is marked with a removed timestamp and state = Expunging
> in the DB
> #*   VM is deleted on the hypervisor
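The saving in hypervisor round-trips can be sketched as follows; `ExpungeOptimizer` and `hypervisorTaskCount` are illustrative names, not the actual CloudStack code:

```java
// Illustrative sketch of the task-count saving (hypothetical names): in the
// unoptimized path the hypervisor gets one task per vm snapshot plus the vm
// delete itself; in the optimized path the snapshots are only marked
// Expunging in the DB and a single delete task removes the vm together with
// its snapshots.
class ExpungeOptimizer {
    static int hypervisorTaskCount(int vmSnapshotCount, boolean optimized) {
        return optimized ? 1 : vmSnapshotCount + 1;
    }
}
```

For an instance with five vm snapshots this cuts six hypervisor tasks down to one.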





[jira] [Commented] (CLOUDSTACK-9574) Redesign storage views

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842678#comment-15842678
 ] 

rashmidixit commented on CLOUDSTACK-9574:
-

Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1747
  
@rhtyd @karuturi Can we run tests on this PR and merge on success ?




> Redesign storage views
> --
>
> Key: CLOUDSTACK-9574
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9574
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, UI
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Attachments: PS-DETAILS.PNG, PS.PNG
>
>
> h2. Part 1: Redesign storage tags
> h3. Actual behavior
> Primary storage tags are being saved as an entry on {{storage_pool_details}} 
> with:
> * name = TAG_NAME
> * value = "true"
> When a boolean property is defined in {{storage_pool_details}} and has value 
> = "true", it is displayed as a tag.
> !https://issues.apache.org/jira/secure/attachment/12836196/PS-DETAILS.PNG!
> !https://issues.apache.org/jira/secure/attachment/12836195/PS.PNG!
> h3. Goal
> Redesign {{Storage Tags}} for Primary Storage view, to list only tags, as it 
> is done in Host Tags (Hosts view).
> h2. Part 2: Remove details from listImageStores API call response and UI
> h3. Description
> In the Secondary Storage view we propose removing the Details field, as the
> Settings tab lists the details for a given image store. We also remove
> details from the listImageStores API response.
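The Part 1 behavior above (any `storage_pool_details` row with value `"true"` rendered as a tag) can be sketched roughly like this; `StorageTagHelper` is an illustrative name, not the actual CloudStack code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of the current behavior described above (illustrative only): a
// storage_pool_details entry whose value is the string "true" is displayed
// as a storage tag, which is why ordinary boolean settings can leak into
// the tag list.
class StorageTagHelper {
    static List<String> tagsFromDetails(Map<String, String> details) {
        List<String> tags = new ArrayList<>();
        for (Map.Entry<String, String> e : details.entrySet()) {
            if ("true".equals(e.getValue())) {
                tags.add(e.getKey());
            }
        }
        Collections.sort(tags);
        return tags;
    }
}
```

The redesign stores tags explicitly instead of inferring them from detail values, as is already done for Host Tags.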





[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-01-27 Thread rashmidixit (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842671#comment-15842671
 ] 

rashmidixit commented on CLOUDSTACK-9539:
-

Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@rhtyd @karuturi Can we run tests on this PR and merge on success ?




> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for vm instances
> that have vm snapshots; the snapshots must be removed before the service
> offering can be changed.
> h3. Goal
> Extend the current behaviour to support changing the service offering for
> vms that have vm snapshots. In that case, previously taken snapshots (if
> reverted) should use the previous service offering, and future snapshots
> should use the new one.
> h3. Proposed solution:
> 1. Adding {{service_offering_id}} column on {{vm_snapshots}} table: This way 
> snapshot can be reverted to original state even though service offering can 
> be changed for vm instance.
> NOTE: Existing vm snapshots are populated on update script by {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will use instance vm service offering id as 
> {{service_offering_id}}
> 3. Revert to vm snapshots should use vm snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> It is expected that the vm has service offering A after the last step
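The use case above can be sketched as follows; `VmSnapshotOffering`, `takeSnapshot` and `revert` are hypothetical names illustrating the proposal, not the real CloudStack code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposal (hypothetical names): each vm snapshot
// records the service_offering_id active when it was taken, and revert
// restores that offering on the instance.
class VmSnapshotOffering {
    final Map<String, Long> vmOffering = new HashMap<>();

    /** A new snapshot captures the instance's current offering id. */
    long takeSnapshot(String vmId) {
        return vmOffering.get(vmId);
    }

    /** Revert applies the offering stored on the snapshot, even if the
        instance's offering was changed after the snapshot was taken. */
    void revert(String vmId, long snapshotOfferingId) {
        vmOffering.put(vmId, snapshotOfferingId);
    }
}
```

This mirrors the proposed {{service_offering_id}} column on {{vm_snapshots}}: the snapshot, not the instance, is the source of truth on revert.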





[jira] [Commented] (CLOUDSTACK-9539) Support changing Service offering for instance with VM Snapshots

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842669#comment-15842669
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9539:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1727
  
@rhtyd @karuturi Can we run tests on this PR and merge on success ?


> Support changing Service offering for instance with VM Snapshots
> 
>
> Key: CLOUDSTACK-9539
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9539
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
>
> h3. Actual behaviour
> CloudStack doesn't support changing the service offering for vm instances
> that have vm snapshots; the snapshots must be removed before the service
> offering can be changed.
> h3. Goal
> Extend the current behaviour to support changing the service offering for
> vms that have vm snapshots. In that case, previously taken snapshots (if
> reverted) should use the previous service offering, and future snapshots
> should use the new one.
> h3. Proposed solution:
> 1. Adding {{service_offering_id}} column on {{vm_snapshots}} table: This way 
> snapshot can be reverted to original state even though service offering can 
> be changed for vm instance.
> NOTE: Existing vm snapshots are populated on update script by {{UPDATE 
> vm_snapshots s JOIN vm_instance v ON v.id = s.vm_id SET s.service_offering_id 
> = v.service_offering_id;}}
> 2. New vm snapshots will use instance vm service offering id as 
> {{service_offering_id}}
> 3. Revert to vm snapshots should use vm snapshot's {{service_offering_id}} 
> value.
> h3. Example use case:
> - Deploy vm using service offering A
> - Take vm snapshot -> snap1 (service offering A)
> - Stop vm
> - Change vm service offering to B
> - Revert to VM snapshot snap 1
> - Start vm
> It is expected that the vm has service offering A after the last step





[jira] [Commented] (CLOUDSTACK-9574) Redesign storage views

2017-01-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842673#comment-15842673
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9574:


Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1747
  
@rhtyd @karuturi Can we run tests on this PR and merge on success ?


> Redesign storage views
> --
>
> Key: CLOUDSTACK-9574
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9574
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API, UI
>Reporter: Nicolas Vazquez
>Assignee: Nicolas Vazquez
> Attachments: PS-DETAILS.PNG, PS.PNG
>
>
> h2. Part 1: Redesign storage tags
> h3. Actual behavior
> Primary storage tags are being saved as an entry on {{storage_pool_details}} 
> with:
> * name = TAG_NAME
> * value = "true"
> When a boolean property is defined in {{storage_pool_details}} and has value 
> = "true", it is displayed as a tag.
> !https://issues.apache.org/jira/secure/attachment/12836196/PS-DETAILS.PNG!
> !https://issues.apache.org/jira/secure/attachment/12836195/PS.PNG!
> h3. Goal
> Redesign {{Storage Tags}} for Primary Storage view, to list only tags, as it 
> is done in Host Tags (Hosts view).
> h2. Part 2: Remove details from listImageStores API call response and UI
> h3. Description
> In the Secondary Storage view we propose removing the Details field, as the
> Settings tab lists the details for a given image store. We also remove
> details from the listImageStores API response.





[jira] [Created] (CLOUDSTACK-9761) Custom NW offering with Default Egress policy as "Allow": new ICMP rule is created as "ACCEPT" instead of "DROP"

2017-01-27 Thread DeepthiMachiraju (JIRA)
DeepthiMachiraju created CLOUDSTACK-9761:


 Summary: Custom NW offering with Default Egress policy as "Allow":
new ICMP rule is created as "ACCEPT" instead of "DROP"
 Key: CLOUDSTACK-9761
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9761
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Virtual Router
Affects Versions: 4.9.0.1
Reporter: DeepthiMachiraju
 Fix For: 4.10.0.0


- Create a new network offering, say 'nw1', with Default Egress policy set to
"Allow".
- Deploy a network with the above offering.

Chain FW_EGRESS_RULES (1 references)
target prot opt source   destination
ACCEPT all  --  0.0.0.0/00.0.0.0/0

- On the UI, select the ICMP protocol and add the rule. The rule is created as
ACCEPT instead of the expected DROP:

Chain FW_EGRESS_RULES (1 references)
target prot opt source   destination
ACCEPT icmp --  10.1.1.0/24  0.0.0.0/0icmptype 255
ACCEPT all  --  0.0.0.0/00.0.0.0/0


- TCP/UDP rules, by contrast, are added correctly as DROP:



Chain FW_EGRESS_RULES (1 references)
target prot opt source   destination
DROP   udp  --  10.1.1.0/24  0.0.0.0/0udp dpts:250:360
DROP   tcp  --  10.1.1.0/24  0.0.0.0/0tcp dpts:1:1000
ACCEPT icmp --  10.1.1.0/24  0.0.0.0/0icmptype 255
ACCEPT all  --  0.0.0.0/00.0.0.0/0
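The expected rule target can be sketched as a simple function of the offering's default egress policy; `EgressRuleTarget` is an illustrative name, not the actual virtual-router code:

```java
// Sketch of the expected behavior (illustrative only): when the network
// offering's default egress policy is Allow, user-added egress rules are
// exceptions and must be rendered as DROP; with a default-Deny policy they
// are rendered as ACCEPT. The iptables output above shows the ICMP rule
// violating this for the default-Allow case.
class EgressRuleTarget {
    static String targetFor(boolean defaultEgressAllow) {
        return defaultEgressAllow ? "DROP" : "ACCEPT";
    }
}
```

Under this rule the ICMP entry in the FW_EGRESS_RULES chain above should have been DROP, matching the TCP/UDP entries.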


 


