Re: [DISCUSS] VR upgrade downtime reduction

2018-02-06 Thread Wei ZHOU
Hi Remi,

Actually, in our fork there are more changes than restartNetwork and
restartVpc, similar to your changes:
(1) Editing a network from an offering with a single VR to an offering with
RVR will hack the VR (set a new guest IP, start keepalived and conntrackd,
and so on).
(2) Restarting a VPC from a single VR to RVR: similar changes are made.
The downtime is around 5s. However, these changes are based on 4.7.1; we are
not sure whether they still work in 4.11.

We have lots of changes; we will port them to 4.11 LTS and create
PRs in the coming months.

-Wei


2018-02-06 14:47 GMT+01:00 Remi Bergsma :

> Hi Daan,
>
> In my opinion the biggest issue is the fact that there are a lot of
> different code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc.
> That's why you cannot simply switch from a single VPC to a redundant VPC
> for example.
>
> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a VPC
> with a single tier and made sure all features are supported. Next we merged
> the single and redundant VPC code paths. The idea here is that redundancy
> or not should only be a difference in the number of routers. Code should be
> the same. A single router is also "master"; there just is no "backup".
>
> That simplifies things A LOT, as keepalived is now the master of the whole
> thing. No more assigning ip addresses in Python, but leave that to
> keepalived instead. Lots of code deleted. Easier to maintain, way more
> stable. We just released Cosmic 6 that has this feature and are now rolling
> it out in production. Looking good so far. This change unlocks a lot of
> possibilities, like live upgrading from a single VPC to a redundant one
> (and back). In the end, if the redundant VPC is rock solid, you most likely
> don't even want single VPCs any more. But that will come.
>
> As I said, we're rolling this out as we speak. In a few weeks when
> everything is upgraded I can share what we learned and how well it works.
> CloudStack could use a similar approach.
>
> Kind Regards,
> Remi
>
>
>
> On 05/02/2018, 16:44, "Daan Hoogland"  wrote:
>
> Hi devs,
>
> I have recently (re-)submitted two PRs, one by Wei [1] and one by Remi
> [2],
> that reduce downtime for redundant routers and redundant VPCs
> respectively.
> (please review those)
> Now from customers we hear that they also want to reduce downtime for
> regular VRs so as we discussed this we came to two possible solutions
> that
> we want to implement one of:
>
> 1. start and configure a new router before destroying the old one and
> then
> as a last minute action stop the old one.
> 2. make all routers start up redundancy services but for regular
> routers
> start only one until an upgrade is required at which time a new, second
> router can be started before killing the old one.
>
> Obviously both solutions have their merits, so I want to have your
> input
> to make the broadest supported implementation.
> -1 means there will be an overlap or a small delay and interruption of
> service.
> +1 It can be argued, "they got what they paid for".
> -2 means an overhead in memory usage by the router by the extra services
> running on it.
> +2 the number of router-varieties will be further reduced.
>
> -1&-2 We have to deal with potentially large upgrade steps from way
> before
> the cloudstack era even and might be stuck to 1 because of that,
> needing to
> hack around it. Any dealing with older VRs, pre 4.5 and especially pre
> 4.0
> will be hard.
>
> I am not cross posting though this might be one of these occasions
> where it
> is appropriate to include users@. Just my puristic inhibitions.
>
> Of course I have preferences but can you share your thoughts, please?
>
> And don't forget to review Wei's [1] and Remi's [2] work please.
>
> [1] https://github.com/apache/cloudstack/pull/2435
> [2] https://github.com/apache/cloudstack/pull/2436
>
> --
> Daan
>
>
>


Re: Copy Volume Failed in CloudStack 4.5 (XenServer 6.5)

2018-02-06 Thread anillakieni
Dear All,

Is somebody available here to assist me in fixing this issue?

Thanks,
Anil.

On Tue, Feb 6, 2018 at 9:00 PM, anillakieni  wrote:

> Hi All,
>
> I'm facing an issue when copying larger volumes from Secondary Storage
> to Primary Storage (i.e., attaching a DATA volume to a VM); the copy fails
> after a certain time, around 37670 seconds.
>
> Version of:
> - CloudStack is 4.5.0
> - XenServer 6.5.0
> - MySQL 5.1.73
>
>
> The error and log are provided below. Could someone please advise which
> steps I have to take to fix this issue? Also, is there a way to update the
> failed status to success through the database tables? Otherwise I have to
> upload the whole disk to secondary storage again and then attach it to the
> VM, which takes more time. My environment has very slow network transfers
> (I have only a 1 Gig switch). Please let me know if we can tweak the DB to
> update the status of the disk, or whether there is a setting to allow more
> time (wait time) for updating the status.
>
> 2018-02-06 03:20:42,385 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-31:ctx-c1c78a5a
> job-106186/job-106187 ctx-ea1ef3e6) (logid:c59b2359) Seq
> 38-367887794560851961: Received:  { Ans: , MgmtId: 47019105324719, via: 38,
> Ver: v1, Flags: 110, { CopyCmdAnswer } }
> 2018-02-06 03:20:42,389 DEBUG [o.a.c.s.v.VolumeObject]
> (Work-Job-Executor-31:ctx-c1c78a5a job-106186/job-106187 ctx-ea1ef3e6)
> (logid:c59b2359) *Failed to update state*
> *com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> com.mysql.jdbc.JDBC4PreparedStatement@54bd3a25: SELECT volume_store_ref.id
> , volume_store_ref.store_id,
> volume_store_ref.volume_id, volume_store_ref.zone_id,
> volume_store_ref.created, volume_store_ref.last_updated,
> volume_store_ref.download_pct, volume_store_ref.size,
> volume_store_ref.physical_size, volume_store_ref.download_state,
> volume_store_ref.checksum, volume_store_ref.local_path,
> volume_store_ref.error_str, volume_store_ref.job_id,
> volume_store_ref.install_path, volume_store_ref.url,
> volume_store_ref.download_url, volume_store_ref.download_url_created,
> volume_store_ref.destroyed, volume_store_ref.update_count,
> volume_store_ref.updated, volume_store_ref.state, volume_store_ref.ref_cnt
> FROM volume_store_ref WHERE volume_store_ref.store_id = 1  AND
> volume_store_ref.volume_id = 1178  AND volume_store_ref.destroyed = 0
> ORDER BY RAND() LIMIT 1*
> at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(
> GenericDaoBase.java:425)
> at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(
> GenericDaoBase.java:361)
> at com.cloud.utils.db.GenericDaoBase.findOneIncludingRemovedBy(
> GenericDaoBase.java:889)
> at com.cloud.utils.db.GenericDaoBase.findOneBy(
> GenericDaoBase.java:900)
> at org.apache.cloudstack.storage.image.db.VolumeDataStoreDaoImpl.
> findByStoreVolume(VolumeDataStoreDaoImpl.java:209)
> at sun.reflect.GeneratedMethodAccessor306.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.springframework.aop.support.AopUtils.
> invokeJoinpointUsingReflection(AopUtils.java:317)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> invokeJoinpoint(ReflectiveMethodInvocation.java:183)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> proceed(ReflectiveMethodInvocation.java:150)
> at com.cloud.utils.db.TransactionContextInterceptor.invoke(
> TransactionContextInterceptor.java:34)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> proceed(ReflectiveMethodInvocation.java:161)
> at org.springframework.aop.interceptor.
> ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.
> proceed(ReflectiveMethodInvocation.java:172)
> at org.springframework.aop.framework.JdkDynamicAopProxy.
> invoke(JdkDynamicAopProxy.java:204)
> at com.sun.proxy.$Proxy173.findByStoreVolume(Unknown Source)
> at org.apache.cloudstack.storage.datastore.
> ObjectInDataStoreManagerImpl.findObject(ObjectInDataStoreManagerImpl.
> java:353)
> at org.apache.cloudstack.storage.datastore.
> ObjectInDataStoreManagerImpl.findObject(ObjectInDataStoreManagerImpl.
> java:338)
> at org.apache.cloudstack.storage.datastore.
> ObjectInDataStoreManagerImpl.update(ObjectInDataStoreManagerImpl.java:289)
> at org.apache.cloudstack.storage.volume.VolumeObject.
> processEvent(VolumeObject.java:294)
> at org.apache.cloudstack.storage.volume.VolumeServiceImpl.
> copyVolumeFromImageToPrimaryCallback(VolumeServiceImpl.java:901)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 

Copy Volume Failed in CloudStack 4.5 (XenServer 6.5)

2018-02-06 Thread anillakieni
Hi All,

I'm facing an issue when copying larger volumes from Secondary Storage
to Primary Storage (i.e., attaching a DATA volume to a VM); the copy fails
after a certain time, around 37670 seconds.

Version of:
- CloudStack is 4.5.0
- XenServer 6.5.0
- MySQL 5.1.73


The error and log are provided below. Could someone please advise which
steps I have to take to fix this issue? Also, is there a way to update the
failed status to success through the database tables? Otherwise I have to
upload the whole disk to secondary storage again and then attach it to the
VM, which takes more time. My environment has very slow network transfers
(I have only a 1 Gig switch). Please let me know if we can tweak the DB to
update the status of the disk, or whether there is a setting to allow more
time (wait time) for updating the status.

2018-02-06 03:20:42,385 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-31:ctx-c1c78a5a job-106186/job-106187 ctx-ea1ef3e6)
(logid:c59b2359) Seq 38-367887794560851961: Received:  { Ans: , MgmtId:
47019105324719, via: 38, Ver: v1, Flags: 110, { CopyCmdAnswer } }
2018-02-06 03:20:42,389 DEBUG [o.a.c.s.v.VolumeObject]
(Work-Job-Executor-31:ctx-c1c78a5a job-106186/job-106187 ctx-ea1ef3e6)
(logid:c59b2359) *Failed to update state*
*com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
com.mysql.jdbc.JDBC4PreparedStatement@54bd3a25: SELECT volume_store_ref.id
, volume_store_ref.store_id,
volume_store_ref.volume_id, volume_store_ref.zone_id,
volume_store_ref.created, volume_store_ref.last_updated,
volume_store_ref.download_pct, volume_store_ref.size,
volume_store_ref.physical_size, volume_store_ref.download_state,
volume_store_ref.checksum, volume_store_ref.local_path,
volume_store_ref.error_str, volume_store_ref.job_id,
volume_store_ref.install_path, volume_store_ref.url,
volume_store_ref.download_url, volume_store_ref.download_url_created,
volume_store_ref.destroyed, volume_store_ref.update_count,
volume_store_ref.updated, volume_store_ref.state, volume_store_ref.ref_cnt
FROM volume_store_ref WHERE volume_store_ref.store_id = 1  AND
volume_store_ref.volume_id = 1178  AND volume_store_ref.destroyed = 0
ORDER BY RAND() LIMIT 1*
at
com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(GenericDaoBase.java:425)
at
com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(GenericDaoBase.java:361)
at
com.cloud.utils.db.GenericDaoBase.findOneIncludingRemovedBy(GenericDaoBase.java:889)
at
com.cloud.utils.db.GenericDaoBase.findOneBy(GenericDaoBase.java:900)
at
org.apache.cloudstack.storage.image.db.VolumeDataStoreDaoImpl.findByStoreVolume(VolumeDataStoreDaoImpl.java:209)
at sun.reflect.GeneratedMethodAccessor306.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy173.findByStoreVolume(Unknown Source)
at
org.apache.cloudstack.storage.datastore.ObjectInDataStoreManagerImpl.findObject(ObjectInDataStoreManagerImpl.java:353)
at
org.apache.cloudstack.storage.datastore.ObjectInDataStoreManagerImpl.findObject(ObjectInDataStoreManagerImpl.java:338)
at
org.apache.cloudstack.storage.datastore.ObjectInDataStoreManagerImpl.update(ObjectInDataStoreManagerImpl.java:289)
at
org.apache.cloudstack.storage.volume.VolumeObject.processEvent(VolumeObject.java:294)
at
org.apache.cloudstack.storage.volume.VolumeServiceImpl.copyVolumeFromImageToPrimaryCallback(VolumeServiceImpl.java:901)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.cloudstack.framework.async.AsyncCallbackDispatcher.dispatch(AsyncCallbackDispatcher.java:148)
at
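
Since the failing query above names the table and columns involved
(volume_store_ref.state, download_state, destroyed), a cautious way to
explore the DB-tweak idea is to build the statements first and run them by
hand: inspect the row and compare it against a healthy volume's row before
any update. This is only a hedged sketch: the state values below are
placeholders, NOT verified CloudStack 4.5 state names, and backing up the
database first is strongly advised.

```python
# Builds the two statements one would run by hand against the cloud DB:
# first inspect the row, and only then (after comparing with a healthy
# volume's row) update it. "Ready" / "DOWNLOADED" are placeholder values,
# not verified CloudStack 4.5 state names.

def inspect_sql(store_id, volume_id):
    """Read-only look at the row the failed copy left behind."""
    return ("SELECT state, download_state, destroyed FROM volume_store_ref "
            "WHERE store_id = %d AND volume_id = %d" % (store_id, volume_id))

def update_sql(store_id, volume_id, state="Ready", download_state="DOWNLOADED"):
    """UPDATE to flip the row to a healthy-looking state (placeholder values)."""
    return ("UPDATE volume_store_ref SET state = '%s', download_state = '%s' "
            "WHERE store_id = %d AND volume_id = %d AND destroyed = 0"
            % (state, download_state, store_id, volume_id))
```

With the identifiers from the log (store_id = 1, volume_id = 1178), print
`inspect_sql(1, 1178)` and run it in the MySQL client first; only issue the
UPDATE once the target values have been confirmed against a good row.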

Re: [DISCUSS] VR upgrade downtime reduction

2018-02-06 Thread Daan Hoogland
Looking forward to your blog(s), Remi. Sounds like you guys are still having
fun.

PS: did you review your PR, which I submitted for you? ;)

On Tue, Feb 6, 2018 at 2:47 PM, Remi Bergsma 
wrote:

> Hi Daan,
>
> In my opinion the biggest issue is the fact that there are a lot of
> different code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc.
> That's why you cannot simply switch from a single VPC to a redundant VPC
> for example.
>
> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a VPC
> with a single tier and made sure all features are supported. Next we merged
> the single and redundant VPC code paths. The idea here is that redundancy
> or not should only be a difference in the number of routers. Code should be
> the same. A single router is also "master"; there just is no "backup".
>
> That simplifies things A LOT, as keepalived is now the master of the whole
> thing. No more assigning ip addresses in Python, but leave that to
> keepalived instead. Lots of code deleted. Easier to maintain, way more
> stable. We just released Cosmic 6 that has this feature and are now rolling
> it out in production. Looking good so far. This change unlocks a lot of
> possibilities, like live upgrading from a single VPC to a redundant one
> (and back). In the end, if the redundant VPC is rock solid, you most likely
> don't even want single VPCs any more. But that will come.
>
> As I said, we're rolling this out as we speak. In a few weeks when
> everything is upgraded I can share what we learned and how well it works.
> CloudStack could use a similar approach.
>
> Kind Regards,
> Remi
>
>
>
> On 05/02/2018, 16:44, "Daan Hoogland"  wrote:
>
> Hi devs,
>
> I have recently (re-)submitted two PRs, one by Wei [1] and one by Remi
> [2],
> that reduce downtime for redundant routers and redundant VPCs
> respectively.
> (please review those)
> Now from customers we hear that they also want to reduce downtime for
> regular VRs so as we discussed this we came to two possible solutions
> that
> we want to implement one of:
>
> 1. start and configure a new router before destroying the old one and
> then
> as a last minute action stop the old one.
> 2. make all routers start up redundancy services but for regular
> routers
> start only one until an upgrade is required at which time a new, second
> router can be started before killing the old one.
>
> Obviously both solutions have their merits, so I want to have your
> input
> to make the broadest supported implementation.
> -1 means there will be an overlap or a small delay and interruption of
> service.
> +1 It can be argued, "they got what they paid for".
> -2 means an overhead in memory usage by the router by the extra services
> running on it.
> +2 the number of router-varieties will be further reduced.
>
> -1&-2 We have to deal with potentially large upgrade steps from way
> before
> the cloudstack era even and might be stuck to 1 because of that,
> needing to
> hack around it. Any dealing with older VRs, pre 4.5 and especially pre
> 4.0
> will be hard.
>
> I am not cross posting though this might be one of these occasions
> where it
> is appropriate to include users@. Just my puristic inhibitions.
>
> Of course I have preferences but can you share your thoughts, please?
>
> And don't forget to review Wei's [1] and Remi's [2] work please.
>
> [1] https://github.com/apache/cloudstack/pull/2435
> [2] https://github.com/apache/cloudstack/pull/2436
>
> --
> Daan
>
>
>


-- 
Daan


Re: [DISCUSS] VR upgrade downtime reduction

2018-02-06 Thread Remi Bergsma
Hi Daan,

In my opinion the biggest issue is the fact that there are a lot of different 
code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc. That's why you 
cannot simply switch from a single VPC to a redundant VPC for example. 

For SBP, we mitigated that in Cosmic by converting all non-VPCs to a VPC with a 
single tier and made sure all features are supported. Next we merged the single 
and redundant VPC code paths. The idea here is that redundancy or not should 
only be a difference in the number of routers. Code should be the same. A 
single router is also "master"; there just is no "backup".

That simplifies things A LOT, as keepalived is now the master of the whole 
thing. No more assigning ip addresses in Python, but leave that to keepalived 
instead. Lots of code deleted. Easier to maintain, way more stable. We just 
released Cosmic 6 that has this feature and are now rolling it out in 
production. Looking good so far. This change unlocks a lot of possibilities, 
like live upgrading from a single VPC to a redundant one (and back). In the 
end, if the redundant VPC is rock solid, you most likely don't even want single 
VPCs any more. But that will come.
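
The keepalived-as-master design Remi describes can be pictured with a
minimal VRRP instance. This is a hedged illustration, not taken from Cosmic
or CloudStack; the interface name, virtual_router_id, and addresses are
made up:

```
vrrp_instance guest_net {
    state BACKUP              # every router starts as BACKUP; the election picks the master
    interface eth2            # hypothetical guest-facing interface
    virtual_router_id 51      # hypothetical VRID
    priority 100              # a peer, if one exists, gets a different priority
    nopreempt
    virtual_ipaddress {
        10.1.1.1/24 dev eth2  # guest gateway IP, owned by whichever router is master
    }
}
```

With only one router deployed, this instance simply always wins the
election, which matches the "a single router is also master, there just is
no backup" idea; starting a second router with the same instance adds
redundancy without a separate code path.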

As I said, we're rolling this out as we speak. In a few weeks when everything 
is upgraded I can share what we learned and how well it works. CloudStack could 
use a similar approach.
 
Kind Regards,
Remi



On 05/02/2018, 16:44, "Daan Hoogland"  wrote:

Hi devs,

I have recently (re-)submitted two PRs, one by Wei [1] and one by Remi [2],
that reduce downtime for redundant routers and redundant VPCs respectively.
(please review those)
Now from customers we hear that they also want to reduce downtime for
regular VRs so as we discussed this we came to two possible solutions that
we want to implement one of:

1. start and configure a new router before destroying the old one and then
as a last minute action stop the old one.
2. make all routers start up redundancy services but for regular routers
start only one until an upgrade is required at which time a new, second
router can be started before killing the old one.
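
Option 1 can be sketched as a small orchestration routine: bring up and
configure the replacement router while the old one still serves traffic,
switch over, then stop the old router as the very last step. All names
below are hypothetical, not the CloudStack API:

```python
# Hedged sketch of option 1: the orchestrator starts and fully configures a
# replacement router first, and only then stops the old one, so the service
# gap shrinks to the final switch-over. The start/configure/stop callables
# stand in for whatever the orchestrator would really invoke.

def upgrade_router(network, start, configure, stop):
    """Replace network['router'] with a freshly started, pre-configured one."""
    old = network["router"]
    new = start(network)      # boot the new VR from the new template
    configure(new, network)   # push rules while the old VR still serves traffic
    network["router"] = new   # switch over: new VR takes the guest traffic
    stop(old)                 # last-minute action: stop the old VR
    return new
```

The ordering is the whole point: `stop` runs only after `configure` has
finished, so downtime is bounded by the switch-over, not the full
provisioning time.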

Obviously both solutions have their merits, so I want to have your input
to make the broadest supported implementation.
-1 means there will be an overlap or a small delay and interruption of
service.
+1 It can be argued, "they got what they paid for".
-2 means an overhead in memory usage by the router by the extra services
running on it.
+2 the number of router-varieties will be further reduced.

-1&-2 We have to deal with potentially large upgrade steps, from even before
the CloudStack era, and might be stuck with option 1 because of that, needing
to hack around it. Dealing with older VRs, pre-4.5 and especially pre-4.0,
will be hard.

I am not cross-posting, though this might be one of those occasions where it
is appropriate to include users@. Just my puristic inhibitions.

Of course I have preferences but can you share your thoughts, please?

And don't forget to review Wei's [1] and Remi's [2] work please.

[1] https://github.com/apache/cloudstack/pull/2435
[2] https://github.com/apache/cloudstack/pull/2436

-- 
Daan




Re: Refusing to design this network, the physical isolation type is not BCF_SEGMENT

2018-02-06 Thread Nux!
Thanks Nicolas, much appreciated.
Once you have a patch, feel free to ping me so I can test.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Nicolas Vazquez" 
> To: "dev" 
> Sent: Tuesday, 6 February, 2018 13:23:54
> Subject: Re: Refusing to design this network, the physical isolation type is 
> not BCF_SEGMENT

> Hi Lucian,
> 
> 
> Thanks for posting this issue. I have checked the canHandle() method on
> VxlanGuestNetworkGuru and it is not considering L2 network offerings, only
> Isolated, so it refuses to design the network. I'll make sure to include a fix
> for it on 4.11.1.
> 
> 
> Thanks,
> 
> Nicolas
> 
> 
> From: Nux! 
> Sent: Tuesday, February 6, 2018 8:30:03 AM
> To: dev
> Subject: [L2 network] [VXLAN] Refusing to design this network, the physical
> isolation type is not BCF_SEGMENT
> 
> Hi,
> 
> I'm trying to add an L2 network based on a VXLAN physical network and I am
> getting the error in the subject.
> 
> If I use a VLAN based physical network all completes successfully and I end up
> with an L2 network in green "Setup" state.
> 
> Here are some more logs:
> 
> 2018-02-06 11:20:27,748 DEBUG [c.c.n.NetworkServiceImpl]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Found physical
> network id=201 based on requested tags mellanoxvxlan
> 2018-02-06 11:20:27,749 DEBUG [c.c.n.NetworkServiceImpl]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Found physical
> network id=201 based on requested tags mellanoxvxlan
> 2018-02-06 11:20:27,766 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design this network, the physical isolation type is not BCF_SEGMENT
> 2018-02-06 11:20:27,766 DEBUG [o.a.c.n.c.m.ContrailGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design this network
> 2018-02-06 11:20:27,767 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design this network
> 2018-02-06 11:20:27,767 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design this network
> 2018-02-06 11:20:27,767 DEBUG [c.c.n.g.OvsGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design this network
> 2018-02-06 11:20:27,769 DEBUG [o.a.c.n.g.SspGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) SSP not
> configured to be active
> 2018-02-06 11:20:27,769 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design this network
> 2018-02-06 11:20:27,769 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to
> design network using network offering 19 on physical network 201
> 2018-02-06 11:20:27,770 DEBUG [o.a.c.e.o.NetworkOrchestrator]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Releasing lock
> for Acct[6af2875b-04fc-11e8-923e-002590474525-admin]
> 2018-02-06 11:20:27,789 DEBUG [c.c.u.d.T.Transaction]
> (qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Rolling back the
> transaction: Time = 38 Name =  qtp788117692-390; called by
> -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-Transaction.execute:43-Transaction.execute:47-NetworkOrchestrator.createGuestNetwork:2315-NetworkServiceImpl$4.doInTransaction:1383-NetworkServiceImpl$4.doInTransaction:1331-Transaction.execute:40-NetworkServiceImpl.commitNetwork:1331-NetworkServiceImpl.createGuestNetwork:1294-NativeMethodAccessorImpl.invoke0:-2
> 2018-02-06 11:20:27,798 ERROR [c.c.a.ApiServer] (qtp788117692-390:ctx-f1a980be
> ctx-61be30e8) (logid:0ca0c866) unhandled exception executing api command:
> [Ljava.lang.String;@43b9df02
> com.cloud.utils.exception.CloudRuntimeException: Unable to convert network
> offering with specified id to network profile
>at
>
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.setupNetwork(NetworkOrchestrator.java:726)
>at
>
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$10.doInTransaction(NetworkOrchestrator.java:2364)
>at
>
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$10.doInTransaction(NetworkOrchestrator.java:2315)
>at 
> com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:50)
>at com.cloud.utils.db.Transaction.execute(Transaction.java:40)
>at com.cloud.utils.db.Transaction.execute(Transaction.java:47)
> 
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> nicolas.vazq...@shapeblue.com
> www.shapeblue.com
> @shapeblue


Re: Refusing to design this network, the physical isolation type is not BCF_SEGMENT

2018-02-06 Thread Nicolas Vazquez
Hi Lucian,


Thanks for posting this issue. I have checked the canHandle() method on 
VxlanGuestNetworkGuru and it is not considering L2 network offerings, only 
Isolated, so it refuses to design the network. I'll make sure to include a fix 
for it on 4.11.1.
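
The fix Nicolas describes lives in the Java canHandle() method of
VxlanGuestNetworkGuru. As a hedged sketch of just the decision that needs
to change (written in Python with illustrative names, not the real Java
signatures):

```python
# Sketch of the canHandle() decision Nicolas describes: the VXLAN guru
# refused to design networks whose offering guest type was anything but
# Isolated, so L2 offerings fell through to the other gurus (hence the
# "not BCF_SEGMENT" refusals in the log). The fix is to accept both
# guest types when the physical network's isolation method matches.

SUPPORTED_ISOLATION = "VXLAN"

def can_handle(offering_guest_type, physical_isolation):
    """Return True if this (sketched) guru should design the network."""
    if physical_isolation != SUPPORTED_ISOLATION:
        return False
    # Before the fix only "Isolated" was accepted here; "L2" was rejected.
    return offering_guest_type in ("Isolated", "L2")
```

On a VLAN physical network the L2 offering is handled by a different guru,
which is why that path already worked in Lucian's test.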


Thanks,

Nicolas


From: Nux! 
Sent: Tuesday, February 6, 2018 8:30:03 AM
To: dev
Subject: [L2 network] [VXLAN] Refusing to design this network, the physical 
isolation type is not BCF_SEGMENT

Hi,

I'm trying to add an L2 network based on a VXLAN physical network and I am 
getting the error in the subject.

If I use a VLAN based physical network all completes successfully and I end up 
with an L2 network in green "Setup" state.

Here are some more logs:

2018-02-06 11:20:27,748 DEBUG [c.c.n.NetworkServiceImpl] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Found physical 
network id=201 based on requested tags mellanoxvxlan
2018-02-06 11:20:27,749 DEBUG [c.c.n.NetworkServiceImpl] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Found physical 
network id=201 based on requested tags mellanoxvxlan
2018-02-06 11:20:27,766 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network, the physical isolation type is not BCF_SEGMENT
2018-02-06 11:20:27,766 DEBUG [o.a.c.n.c.m.ContrailGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,767 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,767 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,767 DEBUG [c.c.n.g.OvsGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,769 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) SSP not 
configured to be active
2018-02-06 11:20:27,769 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,769 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design network using network offering 19 on physical network 201
2018-02-06 11:20:27,770 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Releasing lock 
for Acct[6af2875b-04fc-11e8-923e-002590474525-admin]
2018-02-06 11:20:27,789 DEBUG [c.c.u.d.T.Transaction] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Rolling back the 
transaction: Time = 38 Name =  qtp788117692-390; called by 
-TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-Transaction.execute:43-Transaction.execute:47-NetworkOrchestrator.createGuestNetwork:2315-NetworkServiceImpl$4.doInTransaction:1383-NetworkServiceImpl$4.doInTransaction:1331-Transaction.execute:40-NetworkServiceImpl.commitNetwork:1331-NetworkServiceImpl.createGuestNetwork:1294-NativeMethodAccessorImpl.invoke0:-2
2018-02-06 11:20:27,798 ERROR [c.c.a.ApiServer] (qtp788117692-390:ctx-f1a980be 
ctx-61be30e8) (logid:0ca0c866) unhandled exception executing api command: 
[Ljava.lang.String;@43b9df02
com.cloud.utils.exception.CloudRuntimeException: Unable to convert network 
offering with specified id to network profile
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.setupNetwork(NetworkOrchestrator.java:726)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$10.doInTransaction(NetworkOrchestrator.java:2364)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$10.doInTransaction(NetworkOrchestrator.java:2315)
at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:50)
at com.cloud.utils.db.Transaction.execute(Transaction.java:40)
at com.cloud.utils.db.Transaction.execute(Transaction.java:47)


--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro




Re: [DISCUSS] VR upgrade downtime reduction

2018-02-06 Thread Wido den Hollander



On 02/06/2018 12:28 PM, Daan Hoogland wrote:

I'm afraid I don't agree with some of your comments, Wido.

On Tue, Feb 6, 2018 at 12:03 PM, Wido den Hollander  wrote:




On 02/05/2018 04:44 PM, Daan Hoogland wrote:


Hi devs,

I have recently (re-)submitted two PRs, one by Wei [1] and one by Remi
[2],
that reduce downtime for redundant routers and redundant VPCs
respectively.
(please review those)
Now from customers we hear that they also want to reduce downtime for
regular VRs so as we discussed this we came to two possible solutions that
we want to implement one of:

1. start and configure a new router before destroying the old one and then
as a last minute action stop the old one.



Seems like a simple solution to me, this wouldn't require a lot of changes
in the VR.


Except we'd add in a stop moment just before activating; that doesn't exist yet.


Ah, yes. But it would mean additional tests and parameters. Not that 
it's impossible though.


The VR is already fragile imho and could use a lot more love. Adding 
more features might break things which we currently have. That's my fear 
of working on them.







2. make all routers start up redundancy services but for regular routers

start only one until an upgrade is required at which time a new, second
router can be started before killing the old one.



True, but that would be a problem as you would need to script a lot in the
VR.


All the scripts for RVR are already on the systemvm.


Ah, yes, for the VPC, I forgot that.









Obviously both solutions have their merits, so I want to have your input
to make the broadest supported implementation.
-1 means there will be an overlap or a small delay and interruption of
service.
+1 It can be argued, "they got what they paid for".
-2 means an overhead in memory usage by the router by the extra services
running on it.
+2 the number of router-varieties will be further reduced.

-1&-2 We have to deal with potentially large upgrade steps from way before
the cloudstack era even and might be stuck to 1 because of that, needing
to
hack around it. Any dealing with older VRs, pre 4.5 and especially pre 4.0
will be hard.



I don't like hacking. The VRs already are 'hacky' imho.


Yes, it is.




We (PCextreme) are only using Basic Networking so for us the VR only does
DHCP and Cloud-init, so we don't care about this that much ;)


thanks for the input anyway, Wido


I think however that it's a valid point. The Redundant Virtual Router is 
mostly important when you have traffic flowing through it.


So for Basic Networking it's less important or for a setup where traffic 
isn't going through the VR and it only does DHCP, am I correct?


Wido






Wido


I am not cross posting, though this might be one of those occasions where it
is appropriate to include users@. Just my puristic inhibitions.

Of course I have preferences but can you share your thoughts, please?
And don't forget to review Wei's [1] and Remi's [2] work please.

[1] https://github.com/apache/cloudstack/pull/2435
[2] https://github.com/apache/cloudstack/pull/2436







Re: 4.11 Release announcement

2018-02-06 Thread Kris Sterckx
Hi Giles,


Impressive!


For completeness, the following features are missing from your list:

* Extra DHCP options support (Nuage Networks) (CLOUDSTACK-9776)

* Physical network migration (CLOUDSTACK-10024); better to take this
separately as it is generic development

* Nuage VSP 5.0 support and caching of NuageVsp IDs (CLOUDSTACK-10053)


Kris


On 6 February 2018 at 10:36, Giles Sirett 
wrote:

> Hi all
>
> Rohit and I are wording the announcement for the 4.11 release
>
> I'm trying to get a few quotes for the announcements from ACS  users
>
>
> Something along the lines of "we're excited about this new version of
> Cloudstack because of"
>
>
> If anybody here is able to provide a quote, can you please ping something
> over to me by Thursday 12:00 GMT
>
>
> List of what's new below
>
>
> New Features and Improvements
> *Support for XenServer 7.1 and 7.2, and improved support for
> VMware 6.5.
> *Host-HA framework and HA-provider for KVM hosts with NFS as
> primary storage, and a new background polling task manager.
> *Secure agents communication: new certificate authority framework<
> http://www.shapeblue.com/cloudstack-ca-framework/> and a default built-in
> root CA provider.
> *New network type - L2.
> *CloudStack metrics exporter for Prometheus.
> *Cloudian Hyperstore
> Connector for CloudStack.
> *Annotation feature for CloudStack entities such as hosts.
> *Separation of volume snapshot creation on primary storage and
> backing operation on secondary storage.
> *Limit admin access from specified CIDRs.
> *Expansion of Management IP Range.
> *Dedication of public IPs to SSVM and CPVM.
> *Support for separate subnet for SSVM and CPVM.
> *Bypass secondary storage template copy/transfer for KVM.
> *Support for multi-disk OVA template for VMware.
> *Storage overprovisioning for local storage.
> *LDAP mapping with domain scope, and mapping of LDAP group to an
> account.
> *Move user across accounts.
> *Support for "VSD managed" networks with Nuage Networks.
> *Extend config drive support for user data, metadata, and password
> (Nuage networks).
> *Nuage domain template selection per VPC and support for network
> migration.
> *Managed storage enhancements.
> *Support for watchdog timer to KVM Instances.
> *Support for Secondary IPv6 Addresses and Subnets.
> *IPv6 Prefix Delegation support in Basic Networking.
> *Ability to specify MAC address while deploying a VM or adding a
> NIC to a VM.
> *VMware dvswitch security policies configuration in network
> offering
> *Allow more than 7 nics to be added to a VMware VM.
> *Network rate usage for guest offerings for VRs.
> *Usage metrics for VM snapshot on primary storage
> *Enable netscaler inline mode.
> *NCC integration in CloudStack.
> *The retirement of Midonet network plugin.
> UI Improvements
> *High precision of metrics in the dashboard.
> *Event timeline - filter related events.
> *Navigation improvements:
> * VRs to account, network, instances
> * Network and VRs to instances.
> *List view improvements:
> * As applicable, account, zone, network columns in list views.
> * States and related columns with icons in various infrastructure
> entity views.
> * Additional columns in several list views.
> *New columns for additional information.
> *Bulk operation support for stopping and destroying VMs (known
> issue: manual refresh required).
> Structural Improvements
> *Embedded Jetty and improved CloudStack management server
> configuration.
> *Improved support for Java 8 in built artifacts/modules,
> packaging, and systemvm template.
> *Debian 9 based systemvm template:
> * Patches system VM without reboot, reduces VR/systemvm startup
> time to a few tens of seconds.
> * Faster console proxy startup and service availability.
> * Improved support for redundant virtual routers, conntrackd and
> keepalived.
> * Improved strongswan provided VPN (s2s and remote access).
> * Packer based systemvm template generation and reduced disk size.
> * Several optimizations and improvements.
>
>
>
>
>
>
>
> Kind regards
> Giles
>
>
> giles.sir...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


[L2 network] [VXLAN] Refusing to design this network, the physical isolation type is not BCF_SEGMENT

2018-02-06 Thread Nux!
Hi,

I'm trying to add an L2 network based on a VXLAN physical network and I am 
getting the error in the subject.

If I use a VLAN-based physical network, everything completes successfully and I
end up with an L2 network in a green "Setup" state.

Here are some more logs:

2018-02-06 11:20:27,748 DEBUG [c.c.n.NetworkServiceImpl] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Found physical 
network id=201 based on requested tags mellanoxvxlan
2018-02-06 11:20:27,749 DEBUG [c.c.n.NetworkServiceImpl] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Found physical 
network id=201 based on requested tags mellanoxvxlan
2018-02-06 11:20:27,766 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network, the physical isolation type is not BCF_SEGMENT
2018-02-06 11:20:27,766 DEBUG [o.a.c.n.c.m.ContrailGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,767 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,767 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,767 DEBUG [c.c.n.g.OvsGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,769 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) SSP not 
configured to be active
2018-02-06 11:20:27,769 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design this network
2018-02-06 11:20:27,769 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Refusing to 
design network using network offering 19 on physical network 201
2018-02-06 11:20:27,770 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Releasing lock 
for Acct[6af2875b-04fc-11e8-923e-002590474525-admin]
2018-02-06 11:20:27,789 DEBUG [c.c.u.d.T.Transaction] 
(qtp788117692-390:ctx-f1a980be ctx-61be30e8) (logid:0ca0c866) Rolling back the 
transaction: Time = 38 Name =  qtp788117692-390; called by 
-TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-Transaction.execute:43-Transaction.execute:47-NetworkOrchestrator.createGuestNetwork:2315-NetworkServiceImpl$4.doInTransaction:1383-NetworkServiceImpl$4.doInTransaction:1331-Transaction.execute:40-NetworkServiceImpl.commitNetwork:1331-NetworkServiceImpl.createGuestNetwork:1294-NativeMethodAccessorImpl.invoke0:-2
2018-02-06 11:20:27,798 ERROR [c.c.a.ApiServer] (qtp788117692-390:ctx-f1a980be 
ctx-61be30e8) (logid:0ca0c866) unhandled exception executing api command: 
[Ljava.lang.String;@43b9df02
com.cloud.utils.exception.CloudRuntimeException: Unable to convert network 
offering with specified id to network profile
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.setupNetwork(NetworkOrchestrator.java:726)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$10.doInTransaction(NetworkOrchestrator.java:2364)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$10.doInTransaction(NetworkOrchestrator.java:2315)
at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:50)
at com.cloud.utils.db.Transaction.execute(Transaction.java:40)
at com.cloud.utils.db.Transaction.execute(Transaction.java:47)
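For what it's worth, the log above reflects a first-match pattern: NetworkOrchestrator.setupNetwork offers the network to each registered guest network guru in turn, and when every guru declines (each "Refusing to design" line), it fails with the exception shown. A loose sketch of that pattern, with hypothetical names rather than the real CloudStack classes:

```python
# Loose sketch of the guru-selection pattern visible in the log above.
# Names and structure are illustrative, not CloudStack's real classes.

class GuestNetworkGuru:
    def __init__(self, name, supported_isolation):
        self.name = name
        self.supported_isolation = supported_isolation

    def design(self, isolation):
        # A guru only designs networks whose isolation type it handles.
        if isolation not in self.supported_isolation:
            print(f"{self.name}: Refusing to design this network")
            return None
        return {"designed_by": self.name, "isolation": isolation}

def setup_network(gurus, isolation):
    for guru in gurus:
        plan = guru.design(isolation)
        if plan is not None:
            return plan
    # No guru accepted: the orchestrator raises, as in the stack trace.
    raise RuntimeError("Unable to convert network offering to network profile")

gurus = [
    GuestNetworkGuru("BigSwitchBcfGuestNetworkGuru", {"BCF_SEGMENT"}),
    GuestNetworkGuru("GuestNetworkGuru", {"VLAN"}),
]
setup_network(gurus, "VLAN")     # succeeds: handled by the VLAN guru
# setup_network(gurus, "VXLAN")  # would raise, matching the reported error
```

So the practical question becomes which guru is supposed to accept an L2 network on a VXLAN-isolated physical network; judging by the log, none of the loaded gurus does in this setup.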


--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: [DISCUSS] VR upgrade downtime reduction

2018-02-06 Thread Daan Hoogland
I'm afraid I don't agree with some of your comments, Wido.

On Tue, Feb 6, 2018 at 12:03 PM, Wido den Hollander  wrote:

>
>
> On 02/05/2018 04:44 PM, Daan Hoogland wrote:
>
>> Hi devs,
>>
>> I have recently (re-)submitted two PRs, one by Wei [1] and one by Remi
>> [2],
>> that reduce downtime for redundant routers and redundant VPCs
>> respectively.
>> (please review those)
>> Now from customers we hear that they also want to reduce downtime for
>> regular VRs, so we discussed this and came to two possible solutions, of
>> which we want to implement one:
>>
>> 1. start and configure a new router before destroying the old one, and
>> then, as a last-minute action, stop the old one.
>>
>
> Seems like a simple solution to me, this wouldn't require a lot of changes
> in the VR.
>
except adding a stop moment just before activating; that doesn't exist yet.


>
>> 2. make all routers start up redundancy services, but for regular routers
>> start only one, until an upgrade is required, at which time a new, second
>> router can be started before killing the old one.
>>
>
> True, but that would be a problem as you would need to script a lot in the
> VR.

all the scripts for RVR are already on the systemvm


>
>
>
>> Obviously both solutions have their merits, so I want to have your input
>> to make the most broadly supported implementation.
>> -1: solution 1 means there will be an overlap, or a small delay and
>> interruption of service.
>> +1: it can be argued "they got what they paid for".
>> -2: solution 2 means memory overhead in the router from the extra services
>> running on it.
>> +2: the number of router varieties will be further reduced.
>>
>> -1&-2: we have to deal with potentially large upgrade steps, even from
>> before the CloudStack era, and might be stuck with solution 1 because of
>> that, needing to hack around it. Dealing with older VRs, pre-4.5 and
>> especially pre-4.0, will be hard.
>>
>>
> I don't like hacking. The VRs already are 'hacky' imho.
>
yes, it is.


>
> We (PCextreme) are only using Basic Networking so for us the VR only does
> DHCP and Cloud-init, so we don't care about this that much ;)
>
thanks for the input anyway, Wido


>
> Wido
>
>
>> I am not cross posting, though this might be one of those occasions where it
>> is appropriate to include users@. Just my puristic inhibitions.
>>
>> Of course I have preferences but can you share your thoughts, please?
>>
>> And don't forget to review Wei's [1] and Remi's [2] work please.
>>
>> [1] https://github.com/apache/cloudstack/pull/2435
>> [2] https://github.com/apache/cloudstack/pull/2436
>>
>>


-- 
Daan


Re: [DISCUSS] VR upgrade downtime reduction

2018-02-06 Thread Wido den Hollander



On 02/05/2018 04:44 PM, Daan Hoogland wrote:

Hi devs,

I have recently (re-)submitted two PRs, one by Wei [1] and one by Remi [2],
that reduce downtime for redundant routers and redundant VPCs respectively.
(please review those)
Now from customers we hear that they also want to reduce downtime for
regular VRs, so we discussed this and came to two possible solutions, of
which we want to implement one:

1. start and configure a new router before destroying the old one, and then,
as a last-minute action, stop the old one.


Seems like a simple solution to me, this wouldn't require a lot of 
changes in the VR.



2. make all routers start up redundancy services, but for regular routers
start only one, until an upgrade is required, at which time a new, second
router can be started before killing the old one.


True, but that would be a problem as you would need to script a lot in 
the VR.




Obviously both solutions have their merits, so I want to have your input
to make the most broadly supported implementation.
-1: solution 1 means there will be an overlap, or a small delay and
interruption of service.
+1: it can be argued "they got what they paid for".
-2: solution 2 means memory overhead in the router from the extra services
running on it.
+2: the number of router varieties will be further reduced.

-1&-2: we have to deal with potentially large upgrade steps, even from
before the CloudStack era, and might be stuck with solution 1 because of
that, needing to hack around it. Dealing with older VRs, pre-4.5 and
especially pre-4.0, will be hard.



I don't like hacking. The VRs already are 'hacky' imho.

We (PCextreme) are only using Basic Networking so for us the VR only 
does DHCP and Cloud-init, so we don't care about this that much ;)


Wido


I am not cross posting, though this might be one of those occasions where it
is appropriate to include users@. Just my puristic inhibitions.

Of course I have preferences but can you share your thoughts, please?
And don't forget to review Wei's [1] and Remi's [2] work please.

[1] https://github.com/apache/cloudstack/pull/2435
[2] https://github.com/apache/cloudstack/pull/2436



4.11 Release announcement

2018-02-06 Thread Giles Sirett
Hi all

Rohit and I are wording the announcement for the 4.11 release

I'm trying to get a few quotes for the announcement from ACS users


Something along the lines of "we're excited about this new version of 
Cloudstack because of"


If anybody here is able to provide a quote, can you please ping something over 
to me by Thursday 12:00 GMT


List of what's new below


New Features and Improvements
*Support for XenServer 7.1 and 7.2, and improved support for VMware 6.5.
*Host-HA framework and HA-provider for KVM hosts with NFS as primary
storage, and a new background polling task manager.
*Secure agents communication: new certificate authority 
framework and a default 
built-in root CA provider.
*New network type - 
L2.
*CloudStack metrics exporter for Prometheus.
*Cloudian Hyperstore 
Connector for CloudStack.
*Annotation feature for CloudStack entities such as hosts.
*Separation of volume snapshot creation on primary storage and backing
operation on secondary storage.
*Limit admin access from specified CIDRs.
*Expansion of Management IP Range.
*Dedication of public IPs to SSVM and CPVM.
*Support for separate subnet for SSVM and CPVM.
*Bypass secondary storage template copy/transfer for KVM.
*Support for multi-disk OVA template for VMware.
*Storage overprovisioning for local storage.
*LDAP mapping with domain scope, and mapping of LDAP group to an 
account.
*Move user across accounts.
*Support for "VSD managed" networks with Nuage Networks.
*Extend config drive support for user data, metadata, and password 
(Nuage networks).
*Nuage domain template selection per VPC and support for network 
migration.
*Managed storage enhancements.
*Support for watchdog timer to KVM Instances.
*Support for Secondary IPv6 Addresses and Subnets.
*IPv6 Prefix Delegation support in Basic Networking.
*Ability to specify MAC address while deploying a VM or adding a NIC to
a VM.
*VMware dvswitch security policies configuration in network offering
*Allow more than 7 nics to be added to a VMware VM.
*Network rate usage for guest offerings for VRs.
*Usage metrics for VM snapshot on primary storage
*Enable netscaler inline mode.
*NCC integration in CloudStack.
*The retirement of Midonet network plugin.
UI Improvements
*High precision of metrics in the dashboard.
*Event timeline - filter related events.
*Navigation improvements:
* VRs to account, network, instances
* Network and VRs to instances.
*List view improvements:
* As applicable, account, zone, network columns in list views.
* States and related columns with icons in various infrastructure 
entity views.
* Additional columns in several list views.
*New columns for additional information.
*Bulk operation support for stopping and destroying VMs (known issue:
manual refresh required).
Structural Improvements
*Embedded Jetty and improved CloudStack management server configuration.
*Improved support for Java 8 in built artifacts/modules, packaging, and 
systemvm template.
*Debian 9 based systemvm template:
* Patches system VM without reboot, reduces VR/systemvm startup time to
a few tens of seconds.
* Faster console proxy startup and service availability.
* Improved support for redundant virtual routers, conntrackd and 
keepalived.
* Improved strongswan provided VPN (s2s and remote access).
* Packer based systemvm template generation and reduced disk size.
* Several optimizations and improvements.







Kind regards
Giles


giles.sir...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue