[GitHub] cloudstack issue #1916: CLOUDSTACK-9462: Build packages on Ubuntu 12.04/16.0...

2017-01-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you 
posted as I make progress.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1916: CLOUDSTACK-9462: Build packages on Ubuntu 12.04/16.0...

2017-01-28 Thread rhtyd
Github user rhtyd commented on the issue:

https://github.com/apache/cloudstack/pull/1916
  
@ustcweizhou can you check/fix the failures?
@blueorangutan package




[GitHub] cloudstack issue #1711: XenServer 7 Support

2017-01-28 Thread PaulAngus
Github user PaulAngus commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
@ciroiriarte CloudStack orchestrates XenServer hosts through XAPI, so how VLANs 
are created is transparent to CloudStack. Are you using Basic or Advanced 
networking? Advanced networks are generally the way to go to isolate networks 
through VLANs.
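
To make the XAPI point concrete, here is a rough dry-run sketch of the kind of 
xe CLI operations that correspond to what CloudStack drives through XAPI for a 
VLAN-backed network. The UUIDs and name-label are placeholders, and this is an 
illustration of the mechanism, not an exact trace of CloudStack's calls:

```shell
# Dry-run wrapper: print each command instead of executing it.
# Drop the 'echo' to actually run against a XenServer pool.
run() { echo "+ $*"; }

# 1. Create a pool-wide network object for the VLAN.
run xe network-create name-label=cs-guest-vlan100

# 2. Bind that network to a physical interface (PIF) with the VLAN tag;
#    XAPI then plumbs the tagged interface on whichever backend the host uses.
run xe vlan-create network-uuid=NETWORK_UUID pif-uuid=PIF_UUID vlan=100
```

Because the request goes through XAPI, the same calls work whether the host's 
network backend is Open vSwitch or Linux bridge.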




[GitHub] cloudstack issue #1711: XenServer 7 Support

2017-01-28 Thread ciroiriarte
Github user ciroiriarte commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
Also, is there any roadmap to add OVS support? It's the current standard 
on XS7, and it seems I'll have to go back to Linux bridge if I want to move to 
CloudStack.

I'm moving away from a setup where I couldn't send tagged VLANs to a guest 
with regular bridges, but OVS did the trick.
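
For reference, the OVS trick mentioned above is typically done by configuring 
the guest's vif as a trunk port. A dry-run sketch, with example port and bridge 
names (vif1.0, xenbr0) that would differ on a real host:

```shell
# Dry-run wrapper: print each command instead of executing it.
# Drop the 'echo' to actually run against a host with OVS.
run() { echo "+ $*"; }

# Let tagged frames for VLANs 100 and 200 pass through to the guest's vif.
run ovs-vsctl set port vif1.0 trunks=100,200

# Sanity check: list ports attached to the bridge.
run ovs-vsctl list-ports xenbr0
```

With a regular Linux bridge there is no equivalent per-port trunk filter, which 
is why tagged traffic into a guest is harder to arrange there.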




Re: [PROPOSAL] add native container orchestration service

2017-01-28 Thread Will Stevens
I agree that we need to be careful about what we take on and own inside
CloudStack.  I feel like some of the plugins or integrations we have been
"maintaining" might serve us better abandoned, but that is a whole discussion
on its own.

In this case, I feel there is a minimum viable solution that puts CloudStack
in a pretty good place to enable container orchestration.  For example, one of
the biggest challenges with K8S is the fact that it is single-tenant.
CloudStack has good multi-tenancy support and can orchestrate the underlying
infra quite well.  We will have to be very careful not to reach too deep into
the K8S world, though, in my opinion.  We only want to be responsible for
providing the infra (and, ideally, a way to bootstrap K8S) and for scaling
it; everything else should be owned by the K8S layer on top.  That is the way
I see it anyway, but please add your input.

I think it is a liability to go too deep, for the same reasons Wido and Erik
have mentioned.  But I also think we need to take it seriously, because that
train is moving and this may be a good opportunity to stay relevant in a
rapidly changing market.
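
The "keep a group of N VMs running" behaviour being discussed in this thread 
could be sketched as a simple reconciliation pass. This is a hypothetical 
illustration with made-up names and states, not actual ACS code:

```python
# One reconciliation pass over a VM group: compare observed VM states to the
# desired count, and decide what to spawn and what to replace. The application
# running on top (e.g. K8S) is expected to rediscover any replacement VMs.

def reconcile(desired_count, vm_states):
    """Return (num_vms_to_spawn, vms_to_replace) for one pass.

    vm_states maps vm_id -> state; only 'Running' and 'Starting' VMs count
    toward the desired total, and 'Error' VMs are flagged for replacement.
    """
    healthy = [vm for vm, state in vm_states.items()
               if state in ("Running", "Starting")]
    failed = [vm for vm, state in vm_states.items() if state == "Error"]
    # Spawn enough fresh VMs to bring the healthy set back up to the target.
    to_spawn = max(0, desired_count - len(healthy))
    return to_spawn, failed

# Two healthy VMs, one failed, target of three:
spawn, replace = reconcile(3, {"vm-1": "Running",
                               "vm-2": "Error",
                               "vm-3": "Running"})
# spawn == 1, replace == ["vm-2"]
```

A real implementation would plug into the existing deploy/expunge orchestration 
and run periodically; the point is only that this grouping logic sits above the 
VM layer ACS already manages, not inside the container orchestrator.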

*Will STEVENS*
Lead Developer



On Sat, Jan 28, 2017 at 1:13 PM, Wido den Hollander  wrote:

> [quoted text snipped]

[GitHub] cloudstack issue #1711: XenServer 7 Support

2017-01-28 Thread ciroiriarte
Github user ciroiriarte commented on the issue:

https://github.com/apache/cloudstack/pull/1711
  
Are these messages expected?

2017-01-29 00:02:24,476 WARN  [c.c.a.d.ParamGenericValidationWorker] (catalina-exec-10:ctx-6776e3dd ctx-a85fb6b1) (logid:a578d2b0) **Received unknown parameters for command addHost. Unknown parameters : clustertype**
2017-01-29 00:02:24,478 INFO  [c.c.r.ResourceManagerImpl] (catalina-exec-10:ctx-6776e3dd ctx-a85fb6b1) (logid:a578d2b0) Trying to add a new host at http://brick04 in data center 1
2017-01-29 00:02:24,527 DEBUG [c.c.h.x.d.XcpServerDiscoverer] (catalina-exec-10:ctx-6776e3dd ctx-a85fb6b1) (logid:a578d2b0) host X.X.X.X doesn't have 996dd2e7-ad95-49cc-a0be-2c9adc4dfb0b Hotfix
2017-01-29 00:02:24,533 DEBUG [c.c.h.x.d.XcpServerDiscoverer] (catalina-exec-10:ctx-6776e3dd ctx-a85fb6b1) (logid:a578d2b0) host X.X.X.X doesn't have 0850b186-4d47-11e3-a720-001b2151a503 Hotfix
2017-01-29 00:02:24,540 WARN  [c.c.h.x.d.XcpServerDiscoverer] (catalina-exec-10:ctx-6776e3dd ctx-a85fb6b1) (logid:a578d2b0) **defaulting to xenserver650 resource** for product brand: XenServer with product version: 7.0.0
2017-01-29 00:02:24,540 INFO  [c.c.h.x.d.XcpServerDiscoverer] (catalina-exec-10:ctx-6776e3dd ctx-a85fb6b1) (logid:a578d2b0) Found host brick04 ip=X.X.X.X product version=7.0.0





RE: [GitHub] cloudstack issue #977: [4.10] CLOUDSTACK-8746: VM Snapshotting implementatio...

2017-01-28 Thread Simon Weller
Yes, it's a bug, due to the number of comments attached to this PR.

Simon Weller/615-312-6068

-Original Message-
From: DaanHoogland [g...@git.apache.org]
Received: Saturday, 28 Jan 2017, 12:17PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: [GitHub] cloudstack issue #977: [4.10] CLOUDSTACK-8746: VM 
Snapshotting implementatio...

Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/977

hey @kiwiflyer (not related to the PR at hand) I see that this PR is not 
reported on acspr.enu.net as ci complete. Is that a bug?




[GitHub] cloudstack issue #977: [4.10] CLOUDSTACK-8746: VM Snapshotting implementatio...

2017-01-28 Thread DaanHoogland
Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
hey @kiwiflyer (not related to the PR at hand) I see that this PR is not 
reported on acspr.enu.net as ci complete. Is that a bug?




Re: [PROPOSAL] add native container orchestration service

2017-01-28 Thread Wido den Hollander

> On 27 January 2017 at 16:08, Will Stevens wrote:
> 
> 
> Hey Murali,
> How different is this proposal from what ShapeBlue already built?  It looks
> pretty consistent with the functionality that you guys open sourced in
> Seville.
> 
> I have not yet used this functionality, but I have reports that it works
> quite well.
> 
> I believe the premise here is to only orchestrate the VM layer and
> basically expose a "group" of running VMs to the user.  The user is
> responsible for configuring K8S or whatever other container orchestrator on
> top.  I saw mention of the "cloud-config" scripts in the FS, how are those
> exposed to the cluster?  Maybe the FS can expand on that a bit?
> 
> I believe the core feature that is being requested to be added is the
> ability to create a group of VMs which will be kept active as a group if at
> all possible.  ACS would be responsible for making sure that the number of
> VMs specified for the group are in running state and it would spin up new
> VMs as needed in order to satisfy the group settings.  In general, it is
> understood that any application running on this group would have to be
> fault tolerant enough to be able to rediscover a new VM if one fails and is
> replaced by a fresh copy.  Is that fair to say?  How is it expected that
> this service discovery is done, just by VMs being present on the network?
> 
> As for some of the other people's concerns in this thread.
> 
> - Regarding Wido's remarks.  I understand that there is some added
> complexity, but I don't feel like the scope of the addition is
> unrealistic.  I think the LXC integration was a lot farther out of the
> scope of what ACS does than this is.  This does not change the "things"
> which ACS orchestrates, it just adds the concept of a grouping of things
> which ACS already manages.  I think this is the right approach since it is
> not trying to be a container orchestrator.  We will never compete with K8S,
> for example, and we should not try, but K8S is here and the market wants
> it.  I do think we should be keeping our head up about that fact because
> being able to provide the underlay for K8S is very valuable in the
> current marketplace.  I see this functionality as a way to enable K8S
> adoption on top of ACS without changing our core values.
> 
> - Regarding Erik's remarks.  The container space is moving fast, but so is
> the industry.  If we want to remain relevant, we need to be able to adapt a
> bit.  I don't think this is a big shift in what we do, but it is one that
> enables people to be able to start running with something like K8S on top
> of their existing ACS.  This is something we are interested in doing and so
> are our customers.  If we can have a thin layer in ACS which helps enable
> the use of K8S (or other container orchestrators) by orchestrating
> infrastructure, as we already do, and making it easier to adopt a container
> orchestrator running on top of ACS, I think that gives us a nice foothold
> in the market.  I don't really feel it is fair to compare containers to
> IPv6.  IPv6 has been out forever and it has taken almost a decade to get
> anyone to adopt it.  Containers have really only been here for like 2 years
> and they are changing the market landscape in a very real way.
> 
> Kind of on topic and kind of off topic.  I think understanding our approach
> to containers is going to be important for the ACS community as a whole.
> If we don't offer that market anything, then we will not be considered and
> we will lose market share we can't afford to lose.  If we try to hitch our
> horse to that cart too much, we will not be able to be agile enough and
> will fail.  I feel like the right approach is for us to know that it is a
> thriving market and continue to do what we do, but to extend an olive
> branch to that market.  I think this sort of implementation is the right
> approach because we are not trying to do too much.  We are simply giving a
> foundation on which the next big thing in the container orchestration world
> can adopt without us having to compete directly in that space.  I think we
> have to focus on what we do best, but at the same time, think about what we
> can do to enable that huge market of users to adopt ACS as their
> foundation.  The ability to offer VMs and containers in the same data plane
> is something we have the ability to do, especially with this approach, and
> is something that most other software stacks cannot do.  The adoption of
> containers by bigger organizations will be only part of their workload,
> they will still be running VMs for the foreseeable future. Being able to
> appeal to that market is going to be important for us.
> 
> Hopefully I don't have too many strong opinions here, but I do think we
> need to be thinking about how we move forward in a world which is adopting
> containers in a very real way.
> 

Understood. I just want to avoid adding more features to CloudStack which are 

[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2017-01-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
Trillian test result (tid-789)
Environment: vmware-55u3 (x2), Advanced Networking with Mgmt server 7
Total time taken: 45208 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1836-t789-vmware-55u3.zip
Intermittent failure detected: /marvin/tests/smoke/test_deploy_vgpu_enabled_vm.py
Intermittent failure detected: /marvin/tests/smoke/test_internal_lb.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 861.53 | test_privategw_acl.py
test_03_vpc_privategw_restart_vpc_cleanup | `Failure` | 554.03 | test_privategw_acl.py
test_06_download_detached_volume | `Error` | 50.36 | test_volumes.py
test_02_vpc_privategw_static_routes | `Error` | 635.76 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 365.23 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 191.38 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 582.95 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 339.03 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 739.91 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 705.73 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1526.91 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 727.68 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 658.60 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1356.90 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 30.68 | test_volumes.py
test_05_detach_volume | Success | 100.23 | test_volumes.py
test_04_delete_attached_volume | Success | 15.21 | test_volumes.py
test_03_download_attached_volume | Success | 15.19 | test_volumes.py
test_02_attach_volume | Success | 58.69 | test_volumes.py
test_01_create_volume | Success | 507.82 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.16 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 232.19 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 221.27 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.60 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 292.12 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 438.44 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.16 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 80.81 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.07 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.10 | test_vm_life_cycle.py
test_02_start_vm | Success | 20.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 10.11 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 216.42 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.02 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 10.16 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.14 | test_templates.py
test_01_create_template | Success | 110.67 | test_templates.py
test_10_destroy_cpvm | Success | 266.52 | test_ssvm.py
test_09_destroy_ssvm | Success | 263.55 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.30 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.13 | test_ssvm.py
test_06_stop_cpvm | Success | 171.52 | test_ssvm.py
test_05_stop_ssvm | Success | 213.37 | test_ssvm.py
test_04_cpvm_internals | Success | 1.03 | test_ssvm.py
test_03_ssvm_internals | Success | 3.07 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.00 | test_snapshots.py
test_04_change_offering_small | Success | 91.74 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | 

[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-01-28 Thread pdion891
Github user pdion891 commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Quickly tested with 4.10 pre-JDK8, and it works: creating a VM and resizing the 
root volume on XenServer 6.5. Tested using the lvmiscsi and SolidFire storage 
plugins.





[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2017-01-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
Trillian test result (tid-787)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 6
Total time taken: 42886 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1836-t787-xenserver-65sp1.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_redundant_vpc_site2site_vpn | `Failure` | 207.12 | test_vpc_vpn.py
test_05_rvpc_multi_tiers | `Failure` | 532.92 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1351.55 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 545.22 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 744.39 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 351.92 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 146.75 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 305.36 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 631.59 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 845.50 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1088.58 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.72 | test_volumes.py
test_08_resize_volume | Success | 96.20 | test_volumes.py
test_07_resize_fail | Success | 101.13 | test_volumes.py
test_06_download_detached_volume | Success | 20.36 | test_volumes.py
test_05_detach_volume | Success | 100.32 | test_volumes.py
test_04_delete_attached_volume | Success | 10.23 | test_volumes.py
test_03_download_attached_volume | Success | 15.32 | test_volumes.py
test_02_attach_volume | Success | 10.71 | test_volumes.py
test_01_create_volume | Success | 392.71 | test_volumes.py
test_03_delete_vm_snapshots | Success | 280.27 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 181.26 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 131.80 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 252.85 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.77 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.19 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 61.12 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.17 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 20.27 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.28 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.36 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 85.97 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.05 | test_templates.py
test_04_extract_template | Success | 5.13 | test_templates.py
test_03_delete_template | Success | 5.13 | test_templates.py
test_02_edit_template | Success | 90.13 | test_templates.py
test_01_create_template | Success | 50.51 | test_templates.py
test_10_destroy_cpvm | Success | 201.74 | test_ssvm.py
test_09_destroy_ssvm | Success | 198.81 | test_ssvm.py
test_08_reboot_cpvm | Success | 152.16 | test_ssvm.py
test_07_reboot_ssvm | Success | 143.81 | test_ssvm.py
test_06_stop_cpvm | Success | 141.77 | test_ssvm.py
test_05_stop_ssvm | Success | 168.93 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.45 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 26.59 | test_snapshots.py
test_04_change_offering_small | Success | 116.10 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.10 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_01_scale_vm | Success | 5.21 | 

[GitHub] cloudstack-www pull request #33: Updated the 2017 conference details

2017-01-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack-www/pull/33




[GitHub] cloudstack issue #1836: [4.10/master] Smoketest Health

2017-01-28 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1836
  
Trillian test result (tid-788)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35834 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1836-t788-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_router_nics.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 863.91 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 335.43 | test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 166.19 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.17 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.90 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 274.86 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 527.60 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 516.68 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1409.58 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 554.23 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1286.69 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.49 | test_volumes.py
test_08_resize_volume | Success | 156.42 | test_volumes.py
test_07_resize_fail | Success | 161.50 | test_volumes.py
test_06_download_detached_volume | Success | 156.32 | test_volumes.py
test_05_detach_volume | Success | 155.90 | test_volumes.py
test_04_delete_attached_volume | Success | 151.19 | test_volumes.py
test_03_download_attached_volume | Success | 156.34 | test_volumes.py
test_02_attach_volume | Success | 89.84 | test_volumes.py
test_01_create_volume | Success | 711.15 | test_volumes.py
test_deploy_vm_multiple | Success | 252.59 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.65 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.20 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 41.14 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.15 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.86 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.84 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.20 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.38 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.47 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.13 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.14 | test_templates.py
test_01_create_template | Success | 30.35 | test_templates.py
test_10_destroy_cpvm | Success | 161.58 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.98 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.64 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.63 | test_ssvm.py
test_06_stop_cpvm | Success | 131.79 | test_ssvm.py
test_05_stop_ssvm | Success | 133.73 | test_ssvm.py
test_04_cpvm_internals | Success | 1.23 | test_ssvm.py
test_03_ssvm_internals | Success | 3.35 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.22 | test_snapshots.py
test_04_change_offering_small | Success | 234.70 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.14 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 35.33 | test_routers.py
test_08_start_router | Success | 30.27 | test_routers.py
test_07_stop_router | Success | 10.18 | test_routers.py
test_06_router_advanced | Success | 0.06 | test_routers.py

[GitHub] cloudstack issue #977: [4.10] CLOUDSTACK-8746: VM Snapshotting implementatio...

2017-01-28 Thread kiwiflyer
Github user kiwiflyer commented on the issue:

https://github.com/apache/cloudstack/pull/977
  
@karuturi please see testing above. This PR now has working tests and is 
ready for merge.

