Re: kvm: /var/log/cloudstack/agent/agent.log is a binary file

2019-09-03 Thread Wido den Hollander



On 9/3/19 9:57 AM, Daan Hoogland wrote:
> Can you find/look at the line before in the log. It is probably the one
> containing the hindering data. Or otherwise it *might* be a clue where in
> the flow it happens.
> 

Do you have any idea what the easiest way to find that line might be?

I checked agent.log.1.gz and after I decompress it, that file is not
binary: it is text only.

The binary data only seems to be present in the current file.
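A sketch with GNU grep that should locate the offending lines:

  # show line numbers of the first few lines containing control bytes,
  # which is what makes grep treat the file as binary
  grep -anP '[\x00-\x08\x0e-\x1f]' /var/log/cloudstack/agent/agent.log | head -n 3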

Wido

> On Tue, Sep 3, 2019 at 8:22 AM Wido den Hollander  wrote:
> 
>>
>>
>> On 9/2/19 10:18 PM, Wei ZHOU wrote:
>>> Hi Wido,
>>>
>>> I had a similar issue (not with agent.log). It is probably caused by one
>>> or a few lines with special characters.
>>
>> And I'm trying to figure out what causes it :-)
>>
>>> "grep -a" should work.
>>
>> I know, but other tools which analyze the logfile might also have
>> trouble reading the file due to binary characters.
>>
>> Wido
>>
>>>
>>> -Wei
>>>
>>> On Mon, 2 Sep 2019 at 19:35, Wido den Hollander  wrote:
>>>
>>>> Hi,
>>>>
>>>> I've seen this on multiple occasions with Ubuntu 18.04 (and maybe
>>>> 16.04?) hypervisors where according to 'grep' the agent.log is a binary
>>>> file:
>>>>
>>>> root@n06:~# grep security_group /var/log/cloudstack/agent/agent.log
>>>> Binary file /var/log/cloudstack/agent/agent.log matches
>>>> root@n06:~#
>>>>
>>>> If I open the file with 'less' I indeed see binary data.
>>>>
>>>> Tailing the file works just fine.
>>>>
>>>> Does anybody know where this is coming from? What in the CloudStack
>>>> agent causes it to write binary data to the logfile?
>>>>
>>>> Wido
>>>>
>>>
>>
>>
>>
> 


kvm: /var/log/cloudstack/agent/agent.log is a binary file

2019-09-03 Thread Wido den Hollander



On 9/2/19 10:18 PM, Wei ZHOU wrote:
> Hi Wido,
> 
> I had a similar issue (not with agent.log). It is probably caused by one or
> a few lines with special characters.

And I'm trying to figure out what causes it :-)

> "grep -a" should work.

I know, but other tools which analyze the logfile might also have
trouble reading the file due to binary characters.

Wido

> 
> -Wei
> 
> On Mon, 2 Sep 2019 at 19:35, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> I've seen this on multiple occasions with Ubuntu 18.04 (and maybe
>> 16.04?) hypervisors where according to 'grep' the agent.log is a binary
>> file:
>>
>> root@n06:~# grep security_group /var/log/cloudstack/agent/agent.log
>> Binary file /var/log/cloudstack/agent/agent.log matches
>> root@n06:~#
>>
>> If I open the file with 'less' I indeed see binary data.
>>
>> Tailing the file works just fine.
>>
>> Does anybody know where this is coming from? What in the CloudStack
>> agent causes it to write binary data to the logfile?
>>
>> Wido
>>
> 





kvm: /var/log/cloudstack/agent/agent.log is a binary file

2019-09-02 Thread Wido den Hollander
Hi,

I've seen this on multiple occasions with Ubuntu 18.04 (and maybe
16.04?) hypervisors where according to 'grep' the agent.log is a binary
file:

root@n06:~# grep security_group /var/log/cloudstack/agent/agent.log
Binary file /var/log/cloudstack/agent/agent.log matches
root@n06:~#

If I open the file with 'less' I indeed see binary data.

Tailing the file works just fine.

Does anybody know where this is coming from? What in the CloudStack
agent causes it to write binary data to the logfile?

Wido


Re: VXLAN and KVM experiences

2019-07-18 Thread Wido den Hollander



On 10/23/18 2:54 PM, Ivan Kudryavtsev wrote:
> Doesn't solution like this works seamlessly for large VXLAN networks?
> 
> https://vincent.bernat.ch/en/blog/2017-vxlan-bgp-evpn

We are using that with CloudStack right now. We have a modified version
of 'modifyvxlan.sh':
https://github.com/PCextreme/cloudstack/blob/vxlan-bgp-evpn/scripts/vm/network/vnet/modifyvxlan.sh

Your 'tunnelip' needs to be set on 'lo'; in our case this is
10.255.255.X

We have the script in /usr/share/modifyvxlan.sh so that it's found by
the Agent and we don't overwrite the existing script (which might break
after an upgrade).
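The core of the EVPN variant boils down to a few iproute2 calls; 'nolearning'
because BGP/EVPN distributes the MAC/VTEP mappings instead of relying on
multicast flood-and-learn. A sketch, not the linked script verbatim (VNI and
device names are illustrative):

  ip link add vxlan1001 type vxlan id 1001 local 10.255.255.9 dstport 4789 nolearning
  ip link add brvxlan1001 type bridge
  ip link set vxlan1001 master brvxlan1001
  ip link set vxlan1001 up
  ip link set brvxlan1001 up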

Our frr conf on the hypervisor:

frr version 7.1
frr defaults traditional
hostname myfirsthypervisor
log syslog informational
no ipv6 forwarding
service integrated-vtysh-config
!
interface enp81s0f0
 no ipv6 nd suppress-ra
!
interface enp81s0f1
 no ipv6 nd suppress-ra
!
interface lo
 ip address 10.255.255.9/32
 ipv6 address 2001:db8:100::9/128
!
router bgp 4200100123
 bgp router-id 10.255.255.9
 no bgp default ipv4-unicast
 neighbor uplinks peer-group
 neighbor uplinks remote-as external
 neighbor uplinks ebgp-multihop 255
 neighbor enp81s0f0 interface peer-group uplinks
 neighbor enp81s0f1 interface peer-group uplinks
 !
 address-family ipv4 unicast
  network 10.255.255.9/32
  neighbor uplinks activate
  neighbor uplinks next-hop-self
 exit-address-family
 !
 address-family ipv6 unicast
  network 2001:db8:100::9/128
  neighbor uplinks activate
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor uplinks activate
  advertise-all-vni
 exit-address-family
!
line vty
!

Both enp81s0f0 and enp81s0f1 are 100G interfaces connected to Cumulus
Linux routers/switches and they use BGP Unnumbered (IPv6 Link Local) for
their BGP sessions.

Hope this helps!

Wido

> 
> Tue, 23 Oct 2018, 8:34 Simon Weller :
> 
>> Linux native VXLAN uses multicast and each host has to participate in
>> multicast in order to see the VXLAN networks. We haven't tried using PIM
>> across a L3 boundary with ACS, although it will probably work fine.
>>
>> Another option is to use a L3 VTEP, but right now there is no native
>> support for that in CloudStack's VXLAN implementation, although we've
>> thought about proposing it as feature.
>>
>>
>> 
>> From: Wido den Hollander 
>> Sent: Tuesday, October 23, 2018 7:17 AM
>> To: dev@cloudstack.apache.org; Simon Weller
>> Subject: Re: VXLAN and KVm experiences
>>
>>
>>
>> On 10/23/18 1:51 PM, Simon Weller wrote:
>>> We've also been using VXLAN on KVM for all of our isolated VPC guest
>> networks for quite a long time now. As Andrija pointed out, make sure you
>> increase the max_igmp_memberships param and also put an IP address on each
>> host's VXLAN interface, in the same subnet for all hosts that will
>> share networking, or multicast won't work.
>>>
>>
>> Thanks! So you are saying that all hypervisors need to be in the same L2
>> network or are you routing the multicast?
>>
>> My idea was that each POD would be an isolated Layer 3 domain and that a
>> VNI would span over the different Layer 3 networks.
>>
>> I don't like STP and other Layer 2 loop-prevention systems.
>>
>> Wido
>>
>>>
>>> - Si
>>>
>>>
>>> 
>>> From: Wido den Hollander 
>>> Sent: Tuesday, October 23, 2018 5:21 AM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: VXLAN and KVm experiences
>>>
>>>
>>>
>>> On 10/23/18 11:21 AM, Andrija Panic wrote:
>>>> Hi Wido,
>>>>
>>>> I have "pioneered" this one in production for last 3 years (and
>> suffered a
>>>> nasty pain of silent drop of packages on kernel 3.X back in the days
>>>> because of being unaware of max_igmp_memberships kernel parameters, so I
>>>> have updated the manual long time ago).
>>>>
>>>> I never had any issues (beside above nasty one...) and it works very
>> well.
>>>
>>> That's what I want to hear!
>>>
>>>> To avoid above issue that I described - you should increase
>>>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships) - otherwise
>>>> with more than 20 vxlan interfaces, some of them will stay in down state
>>>> and have a hard traffic drop (with proper message in agent.log) with
>>>> kernel > 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and
>>>> also pay attention to MTU size as well - anyway everything is in the manual
>>>> (I updated everythi
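For reference, raising that limit is straightforward (a sketch; pick a value
above the number of VXLAN interfaces you expect):

  sysctl -w net.ipv4.igmp_max_memberships=200
  # persist across reboots
  echo 'net.ipv4.igmp_max_memberships = 200' >> /etc/sysctl.conf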

Re: 4.13 Heads up

2019-06-12 Thread Wido den Hollander
Sounds like a plan!

Wido

On 6/10/19 5:19 PM, Paul Angus wrote:
> In order to keep up the momentum on the release cadence, some of us are now 
> planning to get 4.13 moving
> 
> 
> 
> We've come up with a plan which I hope everyone will find acceptable (see 
> below).
> 
> 
> 
> 
> 
> ==  4.13.0.0: (next LTS release) 
> ==
> 
> 
> 
>   *   1-5 July 2019: Freeze master and stabilize
>   *   8-12 July 2019: After ~100% smoketest pass and only accept blocker 
> fixes, cut RC1
>   *   14 July - 5 Aug 2019: Release 4.13.0.0 (next LTS)
>   *   Milestone: https://github.com/apache/cloudstack/milestone/8
> 
> 
> 
> Cheers
> 
> 
> 
> Paul Angus
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: [DISCUSS] Deprecate older hypervisors in 4.14

2019-06-06 Thread Wido den Hollander



On 6/5/19 7:56 PM, Rohit Yadav wrote:
> All,
> 
> Based on some conversation on a Github issue on moving to Python3 today, I 
> would like to propose a PR after 4.13 is cut on deprecating the following 
> hypervisors in the next major 4.14 release:
> 
> - XenServer 6.2, 6.5 and older
> - VMware 5.x
> - KVM/CentOS6/RHEL6 (though we've already voted and agreed to deprecate el6 
> packages in 4.14)
> 

I would also opt for Ubuntu 14.04 as that is EOL since April 2019 and we
should/could drop that support as well.

> Note that it was mentioned in recent release notes as well, but I wanted to 
> kick off a discussion thread if esp. our users have any objections or 
> concerns: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Hypervisor+and+Management+Server+OS+EOL+Dates
> 
> Thoughts? Anything we should add, remove? Thanks.
> 
> Regards,
> Rohit Yadav
> 
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: [DISCUSS] Deprecate older hypervisors in 4.14

2019-06-06 Thread Wido den Hollander
Sounds like a good idea!

People can still stay on an older version of CloudStack if they need to.

We can't support older versions forever.

Wido

On 6/5/19 7:56 PM, Rohit Yadav wrote:
> All,
> 
> Based on some conversation on a Github issue on moving to Python3 today, I 
> would like to propose a PR after 4.13 is cut on deprecating the following 
> hypervisors in the next major 4.14 release:
> 
> - XenServer 6.2, 6.5 and older
> - VMware 5.x
> - KVM/CentOS6/RHEL6 (though we've already voted and agreed to deprecate el6 
> packages in 4.14)
> 
> Note that it was mentioned in recent release notes as well, but I wanted to 
> kick off a discussion thread if esp. our users have any objections or 
> concerns: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Hypervisor+and+Management+Server+OS+EOL+Dates
> 
> Thoughts? Anything we should add, remove? Thanks.
> 
> Regards,
> Rohit Yadav
> 
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: 答复: RBD primary storage VM encounters Exclusive Lock after triggering HA

2019-05-28 Thread Wido den Hollander



On 5/28/19 1:48 PM, li jerry wrote:
> Hi Wido
> 
>  
> 
> The key I filled in for CloudStack is the following:
> 
>  
> 
> [root@cn01-nodeb ~]# ceph auth get client.cloudstack
> 
> exported keyring for client.cloudstack
> 
> [client.cloudstack]
> 
>   key = AQDTh7pcIJjNIhAAwk8jtxilJWXQR7osJRFMLw==
> 
>   caps mon = "allow r"
> 
>   caps osd = "allow rwx pool=rbd"
> 
>  

That's the problem :-) Your user needs to be updated.

The caps should be:

[client.cloudstack]
 key = AQDTh7pcIJjNIhAAwk8jtxilJWXQR7osJRFMLw==
 caps mon = "profile rbd"
 caps osd = "profile rbd pool=rbd"

See: http://docs.ceph.com/docs/master/rbd/rbd-cloudstack/

This will allow the client to blacklist the other and take over the
exclusive-lock.
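For reference, the caps can be updated in place without recreating the user,
roughly:

  ceph auth caps client.cloudstack mon 'profile rbd' osd 'profile rbd pool=rbd'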

Wido

> 
> *From:* Wido den Hollander <mailto:w...@widodh.nl>
> *Sent:* May 28, 2019, 19:42
> *To:* dev@cloudstack.apache.org <mailto:dev@cloudstack.apache.org>;
> li jerry <mailto:div...@hotmail.com>; us...@cloudstack.apache.org
> <mailto:us...@cloudstack.apache.org>
> *Subject:* Re: RBD primary storage VM encounters Exclusive Lock after
> triggering HA
> 
>  
> 
> 
> 
> On 5/28/19 6:16 AM, li jerry wrote:
>> Hello guys
>> 
>> we’ve deployed an environment with CloudStack 4.11.2 and KVM(CentOS7.6), and 
>> Ceph 13.2.5 is deployed as the primary storage.
>> We found some issues with the HA solution, and we are here to ask for your 
>> suggestions.
>> 
>> We’ve both enabled VM HA and Host HA feature in CloudStack, and the compute 
>> offering is tagged as ha.
>> When we try to perform a power failure test (unplug 1 node of 4), the 
>> running VMs on the removed node is automatically rescheduled to the other 
>> living nodes after 5 minutes, but all of them can not boot into the OS. We 
>> found the booting procedure is stuck by the IO read/write failure.
>> 
>> 
>> 
>> The following information is prompted after VM starts:
>> 
>> Generating "/run/initramfs/rdsosreport.txt"
>> 
>> Entering emergency mode. Exit the shell to continue.
>> Type "journalctl" to view system logs.
>> You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or 
>> /boot
>> after mounting them and attach it to a bug report
>> 
>> :/#
>> 
>> 
>> 
>> We found this is caused by the lock on the image:
>> [root@cn01-nodea ~]# rbd lock list a93010b0-2be2-49bd-b25e-ec89b3a98b4b
>> There is 1 exclusive lock on this image.
>> Locker ID  Address
>> client.1164351 auto 94464726847232 10.226.16.128:0/3002249644
>> 
>> If we remove the lock from the image, and restart the VM under CloudStack, 
>> this VM will boot successfully.
>> 
>> We know that if we disable the Exclusive Lock feature (by setting 
>> rbd_default_features = 3) for Ceph would solve this problem. But we don’t 
>> think it’s the best solution for the HA, so could you please give us some 
>> ideas about how you are doing and what is the best practice for this feature?
>> 
> 
> exclusive-lock is something to prevent a split-brain and having two
> clients write to it at the same time.
> 
> The lock should be released to the other client if this is requested,
> but I have the feeling that you might have a cephx problem there.
> 
> Can you post the output of:
> 
> $ ceph auth get client.X
> 
> Where you replace X by the user you are using for CloudStack? Also
> remove they 'key', I don't need that.
> 
> I want to look at the caps of the user.
> 
> Wido
> 
>> Thanks.
>> 
>> 
> 
>  
> 


Re: RBD primary storage VM encounters Exclusive Lock after triggering HA

2019-05-28 Thread Wido den Hollander



On 5/28/19 6:16 AM, li jerry wrote:
> Hello guys
> 
> we’ve deployed an environment with CloudStack 4.11.2 and KVM(CentOS7.6), and 
> Ceph 13.2.5 is deployed as the primary storage.
> We found some issues with the HA solution, and we are here to ask for your 
> suggestions.
> 
> We’ve both enabled VM HA and Host HA feature in CloudStack, and the compute 
> offering is tagged as ha.
> When we try to perform a power failure test (unplug 1 node of 4), the running 
> VMs on the removed node is automatically rescheduled to the other living 
> nodes after 5 minutes, but all of them can not boot into the OS. We found the 
> booting procedure is stuck by the IO read/write failure.
> 
> 
> 
> The following information is prompted after VM starts:
> 
> Generating "/run/initramfs/rdsosreport.txt"
> 
> Entering emergency mode. Exit the shell to continue.
> Type "journalctl" to view system logs.
> You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or 
> /boot
> after mounting them and attach it to a bug report
> 
> :/#
> 
> 
> 
> We found this is caused by the lock on the image:
> [root@cn01-nodea ~]# rbd lock list a93010b0-2be2-49bd-b25e-ec89b3a98b4b
> There is 1 exclusive lock on this image.
> Locker ID  Address
> client.1164351 auto 94464726847232 10.226.16.128:0/3002249644
> 
> If we remove the lock from the image, and restart the VM under CloudStack, 
> this VM will boot successfully.
> 
> We know that if we disable the Exclusive Lock feature (by setting 
> rbd_default_features = 3) for Ceph would solve this problem. But we don’t 
> think it’s the best solution for the HA, so could you please give us some 
> ideas about how you are doing and what is the best practice for this feature?
> 

exclusive-lock is something to prevent a split-brain and having two
clients write to it at the same time.

The lock should be released to the other client if this is requested,
but I have the feeling that you might have a cephx problem there.

Can you post the output of:

$ ceph auth get client.X

Where you replace X by the user you are using for CloudStack? Also
remove they 'key', I don't need that.

I want to look at the caps of the user.

Wido

> Thanks.
> 
> 


Re: Getting Cracking on a 4.11.3

2019-05-23 Thread Wido den Hollander
Nevertheless, looks good!

So the 4.11.3 release in July? Sounds good!

Wido

On 5/23/19 6:13 PM, Paul Angus wrote:
> Correction: "1 June 2019: Cut RC1" ** SHOULD READ --> 10 July
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 
> -Original Message-
> From: Paul Angus  
> Sent: 23 May 2019 17:05
> To: dev ; us...@cloudstack.apache.org
> Subject: Getting Cracking on a
> 
> Hi everyone,
> 
> This a little 'heads up'.  In order to try to get our release cadence back on 
> track, some of us are planning to get cracking on a 4.11.3
> 
> We've come up with a plan which I hope everyone will find acceptable (see 
> below).
> 
> Let the fun commence!!
> 
> 4.11.3.0: (next current LTS minor release)
> ==
> 
> Until 3 June 2019: Accept any minor/major/critical/blocker fixes
> 3 June - 10 June 2019: Stabilise 4.11 branch
> 1 June 2019: Cut RC1**
> 11-24 June 2019: Release 4.11.3.0
> Milestone: https://github.com/apache/cloudstack/milestone/7
> 
> 
> ** With 4.11 branch already stable, we hope the effort towards stabilizing be 
> minimal
> 
> 
> 
> paul.an...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK @shapeblue
>   
>  
> 


Re: [VOTE] Remove el6 support in future CloudStack versions (was Re: [DISCUSS] Remove support for el6 packaging in 4.13/4.14)

2019-05-21 Thread Wido den Hollander
+1

On 5/21/19 11:40 AM, Rohit Yadav wrote:
> All,
> 
> 
> Thank you for your feedback and discussions. From what we've discussed so 
> far, we've lazy consensus that nobody wants to use el6 or are limited to 
> upgrade to el7/el8 due to potential risks.
> 
> 
> Moving forward I put forth the following for voting:
> 
> 
> - Next minor/major releases (such as 4.11.3.0, 4.13.0.0) will be last ones to 
> support el6 packaging both for the management server and KVM host, but users 
> are discouraged from using them
> 
> - Next major release (4.13.0.0) will document in its release notes that we'll 
> stop supporting centos6/rhel6 packaging in future versions, i.e. 4.14 and 
> onwards
> 
> - After 4.13.0.0 is released, we will remove el6 related specs, packaging 
> scripts etc. from the codebase in the master branch
> 
> 
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
> 
> 
> ** PMCs kindly add binding to your votes, thanks.
> 
> 
> Thanks.
> 
> 
> 
> Regards,
> 
> Rohit Yadav
> 
> Software Architect, ShapeBlue
> 
> https://www.shapeblue.com
> 
> 
> 
> From: Erik Weber 
> Sent: Wednesday, April 24, 2019 19:32
> To: dev
> Subject: Re: [DISCUSS] Remove support for el6 packaging in 4.13/4.14
> 
> CentOS7 was released 5 years ago, upgrading is long overdue anyway.
> Realistically the next CloudStack release won't be out the door for
> another ~4-6 months either.
> 
> --
> Erik
> 
> On Wed, Apr 24, 2019 at 3:27 PM Ron Wheeler
>  wrote:
>>
>> According to https://en.wikipedia.org/wiki/CentOS
>>
>> CentOS 6 EOL is 2020
>> CentOS 7 EOL is 2024
>>
>>
>> +1 for removing support for CentOS 6.
>>
>> As Erik pointed out the sites running CentOS6 will have to move soon in
>> any event and it is probably better to do it now when there is still a
>> lot of current expertise and information available about how to do it
>> and how to make any changes to applications.
>>
>> Upgrading in a project that is under your control is usually easier than
>> one forced on you by a security issue or an operational failure.
>>
>> Ron
>>
>> On 4/24/19 3:24 AM, Erik Weber wrote:
>>> As an operations guy I can understand the want for future updates and
>>> not upgrading, but with the release plan of RHEL/CentOS I don't find
>>> it feasible.
>>>
>>> RHEL6 is 8 years old (and is still running kernel 2.6!) and isn't
>>> scheduled to be fully EOL until 2024.
>>>
>>> It is true that upgrading requires some effort (and risk) from
>>> operators, but this is work they eventually have to do anyway, so it's
>>> not a matter of /if/ they have to do it, but rather when.
>>>
>>> It is also true that current CloudStack releases should continue to
>>> work, it's also possible that someone might back port future fixes to
>>> a RHEL6 compatible fork (you're more than welcome to).
>>>
>>> I'd vote +1 to remove support for el6 packaging.
>>>
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 


Re: Poor NVMe Performance with KVM

2019-05-18 Thread Wido den Hollander
You might also want to set the allocation mode to something other than
'shared'. That causes the qcow2 metadata to be pre-allocated, which will
improve performance.
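At the qemu-img level that roughly corresponds to (a sketch; path and size
are illustrative):

  # metadata preallocation writes the qcow2 L1/L2 tables up front, which
  # avoids allocation overhead on the first write to each cluster
  qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/test.qcow2 100G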

Wido

On 5/17/19 3:04 PM, Ivan Kudryavtsev wrote:
> Well, just FYI, I changed cache_mode from NULL (none), to writethrough
> directly in DB and the performance boosted greatly. It may be an important
> feature for NVME drives.
> 
> Currently, on 4.11, the user can set cache-mode for disk offerings, but
> cannot for service offerings, which are translated to cache=none
> corresponding disk offerings.
> 
> The only way is to use SQL to change that for root disk disk offerings.
> CreateServiceOffering API doesn't support cache mode. It can be a serious
> limitation for NVME users, because by default they could meet poor
> read/write performance.
> 
> Fri, 17 May 2019, 19:30 Ivan Kudryavtsev :
> 
>> Darius, thanks for your participation,
>>
>> first, I used 4.14 kernel which is the default one for my cluster. Next,
>> switched to 4.15 with dist-upgrade.
>>
>> Do you have an idea how to turn on amount of queues for virtio-scsi with
>> Cloudstack?
>>
>> Fri, 17 May 2019, 19:26 Darius Kasparavičius :
>>
>>> Hi,
>>>
>>> I can see a few issues with your xml file. You can try using "queues"
>>> inside your disk definitions. This should help a little, not sure by
>>> how much for your case, but for my specific it went up by almost the
>>> number of queues. Also try cache directsync or writethrough. You
>>> should switch kernel if bugs are still there with 4.15 kernel.
>>>
>>> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev
>>>  wrote:
>>>>
>>>> Hello, colleagues.
>>>>
>>>> Hope, someone could help me. I just deployed a new VM host with Intel
>>> P4500
>>>> local storage NVMe drive.
>>>>
>>>> From Hypervisor host I can get expected performance, 200K RIOPS, 3GBs
>>> with
>>>> FIO, write performance is also high as expected.
>>>>
>>>> I've created a new KVM VM Service offering with virtio-scsi controller
>>>> (tried virtio as well) and VM is deployed. Now I try to benchmark it
>>> with
>>>> FIO. Results are very strange:
>>>>
>>>> 1. Read/Write with large blocks (1M) shows expected performance (my
>>> limits
>>>> are R=1000/W=500 MBs).
>>>>
>>>> 2. Write with direct=0 leads to expected 50K IOPS, while write with
>>>> direct=1 leads to very moderate 2-3K IOPS.
>>>>
>>>> 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
>>>>
>>>> During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0% which
>>> is
>>>> strange.
>>>>
>>>> So, basically, from inside VM my NVMe works very slow when small IOPS
>>> are
>>>> executed. From the host, it works great.
>>>>
>>>> I tried to mount the volume with NBD to /dev/nbd0 and benchmark. Read
>>>> performance is nice. Maybe someone managed to use NVME with KVM with
>>> small
>>>> IOPS?
>>>>
>>>> The filesystem is XFS, previously tried with EXT4 - results are the
>>> same.
>>>>
>>>> This is the part of VM XML definition generated by CloudStack:
>>>>
>>>> [XML tags stripped by the archive; the snippet defined the emulator
>>>> /usr/bin/kvm-spice and a qcow2 disk using
>>>> /var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934 with
>>>> backing file
>>>> /var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef, an
>>>> iotune block with the limits 1048576000, 524288000, 10 and 5, the
>>>> serial 6809dbd04a1540149322 and a PCI address element]
>>>>
>>>> So, what I see now, is that it works slower than a couple of Samsung
>>>> 960 PROs, which is extremely strange.
>>>>
>>>> Thanks in advance.
>>>>
>>>>
>>>> --
>>>> With best regards, Ivan Kudryavtsev
>>>> Bitworks LLC
>>>> Cell RU: +7-923-414-1515
>>>> Cell USA: +1-201-257-1512
>>>> WWW: http://bitworks.software/ <http://bw-sw.com/>
>>>
>>
> 


Re: dynamically scalable of kvm

2019-05-16 Thread Wido den Hollander
Hi,

No, CloudStack does not support this at the moment afaik. You will need
to Stop and Start the VM to scale it.

Wido

On 5/10/19 6:40 AM, li jerry wrote:
> Hi All
> 
> Does CloudStack now support dynamic scaling of KVM? I found that KVM was 
> already supported.
> 
> 


Re: [DISCUSS] Remove support for el6 packaging in 4.13/4.14

2019-04-15 Thread Wido den Hollander



On 4/15/19 9:44 AM, Rohit Yadav wrote:
> All,
> 
> 
> With CentOS8 around the corner to be released sometime around the summer, I 
> would like to propose to deprecate CentOS6 as support management server host 
> distro and KVM host distro. Non-systemd enabled Ubuntu releases have been 
> already deprecated [1].
> 
> 
> The older CentOS6 version would hold us back as we try to adapt, use and 
> support newer JRE version, kvm/libvirt version, the Linux kernel, and several 
> other older dependencies. Both CentOS6 and RHEL6 have reached EOL on May 
> 10th, 2017 wrt full updates [1].
> 
> 
> If we don't have any disagreements, I propose we remove el6 packaging support 
> in the next major release - 4.13. But, if there are users and organisations 
> that will be badly impacted, let 4.13 be the last of releases to support el6 
> and we definitely remove el6 support in 4.14.
> 
> What are your thoughts?

I agree! EL6 is just no longer suited for development. Good suggestion.

Wido

> 
> 
> [1] EOL date wiki reference: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Hypervisor+and+Management+Server+OS+EOL+Dates
> 
> 
> 
> Regards,
> 
> Rohit Yadav
> 
> Software Architect, ShapeBlue
> 
> https://www.shapeblue.com
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 


Re: Latest Qemu KVM EV appears to be broken with ACS

2019-04-13 Thread Wido den Hollander



On 4/12/19 9:33 PM, Rohit Yadav wrote:
> Thanks, I was already exploring a solution using qemu guest agent since 
> morning today. It just so happened that you also thought of the approach, and 
> I could validate my script to work with qemu ev 2.12 by the end of my day.
> 

That would be great actually. The Qemu Guest Agent is a lot better to
use. We might want to explore that indeed. Not for now, but it is a
better option to talk to VMs imho.

Wido

> A proper fix might require some additional changes in cloud-early-config and 
> therefore a new systemvmtemplate for 4.13.0.0/4.11.3.0, I'll start a PR on 
> that in the following week(s).
> 
> Regards.
> 
> Regards,
> Rohit Yadav
> 
> 
> From: Marcus 
> Sent: Saturday, April 13, 2019 12:31:33 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
> 
> Wow, that was fast. Good work.
> 
> The script seems to work for me. There was one case where I rebooted the
> router and got the old link local IP somehow. I'm not sure if that was a
> timing issue in seeing the existing /var/cache/cloud/cmdline before the new
> one was written or what, but if it was a timing issue it would seem like we
> should already have that problem with the existing cloud-early-config.
> 
> On Fri, Apr 12, 2019 at 12:24 PM Rohit Yadav 
> wrote:
> 
>> Hi Marcus, Simon,
>>
>>
>> I explore two of the short term solutions and I've a working (work in
>> progress) script that replaces the patchviasocket script to use the qemu
>> guest agent (that is installed in 4.11+ sytemvmtemplate). This was part of
>> a scoping exercise for solving the patching problem for qemu 2.12+ (Ubuntu
>> 19.04 has 3.x version).
>>
>>
>> This is what I've so far, however, further testing is needed:
>>
>> https://gist.github.com/rhtyd/ddb42c4c7581c4129ca04fbb829f16cf
>>
>>
>> The logic is completely written in bash as:
>>
>> - Try if we're able to contact the guest agent
>>
>> - Once we're able to connect, confirm that the I/O is not error prone
>>
>> - Then write the payload as file (the ssh public key and cmdline string)
>>
>> - Then fix file permissions
>> - Hope that internally cloud-early-config would detect the cmdline we had
>> saved and patching would work
>>
>>
>> While this may work, for the long term a proper fix is needed that should
>> be a standard patching mechanism across all hypervisors.
>>
>>
>> Regards,
>>
>> Rohit Yadav
>>
>> Software Architect, ShapeBlue
>>
>> https://www.shapeblue.com
>>
>> 
>> From: Marcus 
>> Sent: Friday, April 12, 2019 11:30:46 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
>>
>> Long ago it was a disk. The problem was that these disks had to go
>> somewhere, a place where they could survive migrations, which didn't work
>> well for block based primary storage... at least for the code base at the
>> time. Using virtio socket was seen as a fairly standard way to communicate
>> temporary information to the guest, and didn't require managing the
>> lifecycle of a special disk.
>>
>> I believe the current problem is that the sender needs to remain connected
>> until the receiver has read. Maybe socat does this, but if so we need to
>> ensure that it is available and applied as a new RPM dependency. In my
>> testing, waiting on the sender side didn't 100% fix things, or sometimes
>> took a very long time due to the backoff algorithm on the
>> cloud-early-config receiver. Some tweaks to that made it more robust, but
>> it is still a game of trying to coordinate timing of two services on either
>> end. If it works though, I'm all for it.
>>
>> Just to throw another idea out there... If we want to fix this without
>> involving storage, I might suggest switching to the qemu-guest-agent that
>> now exists, with a socket and listening client already in the system vm.
>> This would be far more robust, I think, than our scripting reading unix
>> sockets without any sort of protocol or buffer control considerations, and
>> would likely be more robust to changes in qemu as the guest agent is the
>> primary target for the feature.
>>
>> We can directly write our /var/cache/cloud/cmdline from the host like so
>> (I'm using virsh but we could perhaps communicate with the guest agent
>> socket directly or via socat):
>>
>> virsh qemu-agent-command 19 '{"execute":"guest-file-open&q
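A sketch of that guest-agent sequence (the domain name r-123-VM, the handle
value and the payload are illustrative):

  # open the target file inside the guest; returns a handle, e.g. {"return":1000}
  virsh qemu-agent-command r-123-VM \
    '{"execute":"guest-file-open","arguments":{"path":"/var/cache/cloud/cmdline","mode":"w"}}'
  # write a base64-encoded payload through the returned handle
  virsh qemu-agent-command r-123-VM \
    '{"execute":"guest-file-write","arguments":{"handle":1000,"buf-b64":"'$(echo -n 'template=domP type=router' | base64 -w0)'"}}'
  # close the handle, flushing the file
  virsh qemu-agent-command r-123-VM \
    '{"execute":"guest-file-close","arguments":{"handle":1000}}'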

Re: IPv6 in SG zones

2019-04-03 Thread Wido den Hollander



On 4/3/19 1:55 PM, Nux! wrote:
> Wido,
> 
> In my research I found a video of yours where you show 4.10 working with IPv6 
> and SGs.
> Wouldn't my 4.11.2 work, too?
> 

It only works with Basic Networking in 4.11, but not in Advanced zones.
And only with KVM afaik.

You probably need to populate the 'vlan' table manually with the right
information.
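A sketch for inspecting that table first (the network id is hypothetical and
the ip6_* column names are assumptions; check your schema):

  mysql cloud -e "SELECT id, vlan_id, vlan_gateway, ip6_gateway, ip6_cidr, ip6_range FROM vlan WHERE network_id = 204"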

Wido

> Also, is this still current then?
> http://docs.cloudstack.apache.org/projects/archived-cloudstack-getting-started/en/latest/networking/ipv6.html
> 
> For example it is advised there to use a special systemvm, but I can't find 
> any non-regular ones.
> 
> Any other gotchas?
> 
> Regards,
> Lucian
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Wido den Hollander" 
>> To: "Nux!" , "dev" 
>> Sent: Wednesday, 3 April, 2019 12:04:21
>> Subject: Re: IPv6 in SG zones
> 
>> On 4/3/19 11:59 AM, Nux! wrote:
>>> Thanks, I'm on 4.11.2.0 though.
>>>
>>
>> 4.12 does IPv6 with SG on Advanced zones with Shared Networks.
>>
>> I tested this with V(X)LAN and works for us.
>>
>> Wido
>>
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>>
>>> Nux!
>>> www.nux.ro
>>>
>>> - Original Message -
>>>> From: "Andrija Panic" 
>>>> To: "dev" 
>>>> Cc: "Wido den Hollander" 
>>>> Sent: Tuesday, 2 April, 2019 23:34:11
>>>> Subject: Re: IPv6 in SG zones
>>>
>>>> https://github.com/apache/cloudstack/pull/3053
>>>>
>>>> https://github.com/apache/cloudstack/pull/3070
>>>>
>>>> ^^^ comes to my mind, in 4.12 only...
>>>>
>>>> On Tue, 2 Apr 2019 at 22:01, Nux!  wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I'm looking to deploy a new Adv+SG zone and want to go the extra mile and
>>>>> have IPv6 as well for VMs, however according to what docs I could find 
>>>>> IPv6
>>>>> support is still not complete (missing firewall/security groups for one),
>>>>> whereas I thought it was not the case.
>>>>> Can someone clarify for me if security groups are available for IPv6
>>>>> addresses and maybe point me into a good direction with regards to RTFM?
>>>>>
>>>>> Regards,
>>>>> Lucian
>>>>>
>>>>>
>>>>> --
>>>>> Sent from the Delta quadrant using Borg technology!
>>>>>
>>>>> Nux!
>>>>> www.nux.ro
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Andrija Panić


Re: IPv6 in SG zones

2019-04-03 Thread Wido den Hollander



On 4/3/19 11:59 AM, Nux! wrote:
> Thanks, I'm on 4.11.2.0 though.
> 

4.12 does IPv6 with SG on Advanced zones with Shared Networks.

I tested this with V(X)LAN and works for us.

Wido

> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Andrija Panic" 
>> To: "dev" 
>> Cc: "Wido den Hollander" 
>> Sent: Tuesday, 2 April, 2019 23:34:11
>> Subject: Re: IPv6 in SG zones
> 
>> https://github.com/apache/cloudstack/pull/3053
>>
>> https://github.com/apache/cloudstack/pull/3070
>>
>> ^^^ comes to my mind, in 4.12 only...
>>
>> On Tue, 2 Apr 2019 at 22:01, Nux!  wrote:
>>
>>> Hello,
>>>
>>> I'm looking to deploy a new Adv+SG zone and want to go the extra mile and
>>> have IPv6 as well for VMs, however according to what docs I could find IPv6
>>> support is still not complete (missing firewall/security groups for one),
>>> whereas I thought it was not the case.
>>> Can someone clarify for me if security groups are available for IPv6
>>> addresses and maybe point me into a good direction with regards to RTFM?
>>>
>>> Regards,
>>> Lucian
>>>
>>>
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>>
>>> Nux!
>>> www.nux.ro
>>>
>>
>>
>> --
>>
>> Andrija Panić


GPG issues download.cloudstack.org

2019-03-22 Thread Wido den Hollander
Hi,

This has been fixed, but the GPG key had to be replaced.

Please download the new key from:
https://download.cloudstack.org/release.asc

$ wget -O - https://download.cloudstack.org/release.asc|apt-key add -

That should be sufficient.

Wido

On 3/22/19 7:53 PM, Wido den Hollander wrote:
> Hi,
> 
> I'm seeing some GPG issues with signing the DEB packages on
> download.cloudstack.org
> 
> It seems like a new key will be required as the old one was/is weak anyway.
> 
> Working on it and will update as soon as it's fixed.
> 
> Wido
> 
> 




New GPG key for Ubuntu packages on download.cloudstack.org

2019-03-22 Thread Wido den Hollander
Hi,

The DEB packages on download.cloudstack.org will be signed using a new key:

https://download.cloudstack.org/release.asc

You can add this new key to your Apt using this one-liner:

$ wget -O - https://download.cloudstack.org/release.asc|apt-key add -

This new key replaces the old weak key.

Wido



GPG issues download.cloudstack.org

2019-03-22 Thread Wido den Hollander
Hi,

I'm seeing some GPG issues with signing the DEB packages on
download.cloudstack.org

It seems like a new key will be required as the old one was/is weak anyway.

Working on it and will update as soon as it's fixed.

Wido





Re: [RESULT][VOTE] Apache CloudStack 4.12.0.0

2019-03-20 Thread Wido den Hollander
Good news! Awesome :-)

On 3/19/19 10:36 PM, Gabriel Beims Bräscher wrote:
> Hi all,
> 
> After 3 business days, the vote for CloudStack 4.12.0.0 *passes* with 4 PMC
> + 2 non-PMC votes.
> 
> +1 (PMC / binding)
> * Wido den Hollander
> * Simon Weller
> * Rafael Weingärtner
> * Rohit Yadav
> 
> +1 (nonbinding)
> * Gabriel Bräscher
> * Nicolas Vazquez
> 
> 0
> none
> 
> -1
> none
> 
> Thanks to everyone participating.
> 
> I will now prepare the release announcement to go out after 24 hours to
> give the mirrors time to catch up.
> 
> Best regards,
> Gabriel
> 


Re: Any plan for 4.11 LTS extension ?

2019-03-15 Thread Wido den Hollander



On 3/15/19 10:20 AM, Riepl, Gregor (SWISS TXT) wrote:
> Hi Giles,
> 
>> I would *expect*  4.13.0 (LTS) to be released in Q2, which  will
>> supersede the 4.11 branch as the current LTS branch
> 
> Are you confident that this schedule can be kept?
> 4.12 is still in RC right now, and I don't think it's a good idea to
> rush another major release in just 3 months...
> 
4.11.3 will be released first with some bugfixes to keep 4.11 a proper
release.

4.12 needs to go out now so that we can test and prepare for 4.13. I'm
confident we can have a stable and proper 4.13 release as long as we
don't keep the window open for too long.

The major problem is having the master branch open for a long time,
features going in and people not testing it sufficiently.

By having a relatively short period between 4.12 and 4.13 we can catch
most bugs and stabilize for a proper LTS.

Wido


Re: [VOTE] Apache CloudStack 4.12.0.0 [RC5]

2019-03-15 Thread Wido den Hollander
+1 (binding)

Tested:

- Building DEB packages
- Run on Ubuntu 18.04
- Tested live storage migration
- Tested Advanced Networking with VXLAN
- Tested IPv6 deployment in Advanced Networking
- Tested destroy and re-create of Virtual Routers

On 3/14/19 10:58 PM, Gabriel Beims Bräscher wrote:
> Hi All,
> 
> I've created a 4.12.0.0 release (RC5), with the following artifacts up for
> a vote:
> The changes since RC4 are listed at the end of this email.
> 
> Git Branch: 4.12.0.0-RC20190314T1011
> https://github.com/apache/cloudstack/tree/4.12.0.0-RC20190314T1011
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.12.0.0-RC20190314T1011
> 
> Commit: a137398bf106028d2fd5344d599fcd2b560d2944
> https://github.com/apache/cloudstack/commit/a137398bf106028d2fd5344d599fcd2b560d2944
> 
> Source release for 4.12.0.0-RC20190314T1011:
> https://dist.apache.org/repos/dist/dev/cloudstack/4.12.0.0/
> 
> PGP release keys (signed using 25908455):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> The vote will be open for 3 business days (until 19th March).
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> Additional information:
> 
> For users' convenience, packages are available in
> http://cloudstack.apt-get.eu/
> The 4.12.0.0 RC5 is available for the following distros:
> - Ubuntu 14.04, 16.04, and 18.04:
> http://cloudstack.apt-get.eu/ubuntu/dists/trusty/4.12/
> http://cloudstack.apt-get.eu/ubuntu/dists/xenial/4.12/
> http://cloudstack.apt-get.eu/ubuntu/dists/bionic/4.12/
> 
> - CentOS6 and CentOS7:
> http://cloudstack.apt-get.eu/centos/6/4.12/
> http://cloudstack.apt-get.eu/centos/7/4.12/
> 
> Please, use the template 4.11.2 (located in [1]) when testing the RC5.
> The release notes [2] still need to be updated.
> 
> Changes Since RC4:
> Merged #3210 systemd: Fix -Dpid arg passing to systemd usage service [3]
> 
> [1] http://download.cloudstack.org/systemvm/4.11/
> [2]
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html
> [3] https://github.com/apache/cloudstack/pull/3210
> 


Re: Proposal for EFI firmware support on KVM

2019-03-14 Thread Wido den Hollander



On 3/13/19 3:29 PM, Nathan Johnson wrote:
> I've put together an approach for EFI support that we would like to get some 
> feedback on before I create a PR.  Constructive criticism would be 
> appreciated.
> 
> I've added the following properties to be configured in the agent.properties:
> 
> guest.loader.efi - boolean to switch efi on.  This must be true before it 
> will inject any loader entries into the domain xml
> guest.loader.image - this would be the path to the bios/efi image
> guest.loader.nvram - this optionally points to an nvram image
> 
> 
> Even when a host is configured so that it can use EFI, it will only actually 
> create a virtual machine when both of the following conditions are met:
> 
> 1) the host has guest.loader.efi set to true in its agent.properties

Can't we detect if the host is EFI capable without the need of adding a
new flag to the agent.properties?

It advertises the capability of EFI to the Mgmt server and only then can
efi=true Instances be started on that host.
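As a sketch, libvirt already exposes this per host (firmware paths and
output vary per distro):

  # lists the UEFI loader firmware libvirt knows about for this host/emulator
  virsh domcapabilities | grep -A3 "<loader"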

> 2) the vm has the vm details parameter efi=true
>
> At present there is no automatic way for the management server to know
> in advance which hosts have EFI enabled.  I suppose this could be
> approximated using tags.  It might be nice to make this more automatic,
> and have the resource planner aware of the efi toggle on the VM, but I'm
> not sure how best to implement that or if it's even worth it.
> 

As others already mentioned. The Agent/Host capabilities should be
sufficient?

Wido

> Thanks in advance!
> 
> 
> Nathan Johnson
> Senior R&D Engineer
> Education Networks of America
> 
> 
> 


Re: New VP of CloudStack: Paul Angus

2019-03-11 Thread Wido den Hollander
Congratz Paul!

On 3/11/19 4:16 PM, Tutkowski, Mike wrote:
> Hi everyone,
> 
> As you may know, the role of VP of CloudStack (Chair of the CloudStack PMC) 
> has a one-year term. My term has now come and gone.
> 
> I’m happy to announce that the CloudStack PMC has elected Paul Angus as our 
> new VP of CloudStack.
> 
> As many already know, Paul has been an active member of the CloudStack 
> Community for over six years now. I’ve worked with Paul on and off throughout 
> much of that time and I believe he’ll be a great fit for this role.
> 
> Please join me in welcoming Paul as the new VP of Apache CloudStack!
> 
> Thanks,
> Mike
> 


Re: [VOTE] Release Apache CloudStack CloudMonkey 6.0.0

2019-03-11 Thread Wido den Hollander
+1

Tested it locally on my laptop and does the things I do with it.

I couldn't find anything which didn't work.

Wido

On 3/11/19 8:07 AM, Rohit Yadav wrote:
> All,
> 
> 
> Due to the lack of any testing/voting, the RC1 voting window will stay open 
> indefinitely until lazy consensus/majority is met or a bug is reported 
> requiring RC2. Thanks.
> 
> 
> 
> Regards,
> 
> Rohit Yadav
> 
> Software Architect, ShapeBlue
> 
> https://www.shapeblue.com
> 
> 
> From: Rohit Yadav 
> Sent: Tuesday, March 5, 2019 5:45:01 PM
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: [VOTE] Release Apache CloudStack CloudMonkey 6.0.0
> 
> Hi All,
> 
> I've created a 6.0.0 release of CloudMonkey, with the following artifacts
> up for a vote:
> 
> Git Branch and Commit SHA:
> https://github.com/apache/cloudstack-cloudmonkey/commit/74ff37cffc1d8bb3652f6887faa770d933ffe768
> 
> Commit: 74ff37cffc1d8bb3652f6887faa770d933ffe768
> 
> Github pre-release (for testing, contains changelog, artifacts/binaries to
> test, checksums/usage details):
> https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.0.0
> 
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/cloudmonkey-6.0.0
> 
> PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> The vote will be open for 72 hours.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and the reason why)
> 
> Regards,
> Rohit Yadav
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: Release schedule for CloudStack CloudMonkey v6.0.0

2019-02-25 Thread Wido den Hollander
I'm using a build from 29-01-2019, is that still the latest version?

Works fine for me btw!

Wido

On 2/12/19 10:33 AM, Rohit Yadav wrote:
> All,
> 
> 
> I would like to invite everyone for a final round of testing before 
> cloudmonkey v6.0.0 RC1 can be cut for voting.
> 
> Proposed timeline:
> 
> 
> - Test, fix bugs, update documentation and stabilize: Until 24 Feb 2019
> 
> - Start RC1: Monday 25 Feb 2019
> 
> 
> Kindly test the latest cloudmonkey (testing) v6.0.0-beta3: 
> https://github.com/apache/cloudstack-cloudmonkey/releases
> 
> And help report issues: 
> https://github.com/apache/cloudstack-cloudmonkey/issues
> 
> 
> Recent blog: https://www.shapeblue.com/whats-coming-in-cloudmonkey-6-0/
> 
> 
> Thanks and regards,
> 
> Rohit Yadav
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 


Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]

2019-02-15 Thread Wido den Hollander
+1 (binding)

Tested:

- Building DEB packages
- Run on Ubuntu 18.04
- Tested live storage migration
- Tested Advanced Networking with VXLAN
- Tested IPv6 deployment in Advanced Networking
- Tested destroy and re-create of Virtual Routers

Wido

On 2/13/19 2:23 AM, Gabriel Beims Bräscher wrote:
> Hi All,
> 
> The issue in RC1 (4.12.0.0-RC20190206T2333) have been addressed and we are
> ready to go with RC2.
> I've created the 4.12 RC2 (4.12.0.0-RC20190212T2301) release candidate,
> with the following artifacts up for a vote:
> 
> Git Branch and Commit SH:
> https://github.com/apache/cloudstack/tree/master
> https://github.com/apache/cloudstack/commit/709845f4a333ad2ace0183706433a0653ba159c6
> Commit: 709845f4a333ad2ace0183706433a0653ba159c6
> 
> Source release for 4.12.0.0-RC20190212T2301:
> https://dist.apache.org/repos/dist/dev/cloudstack/4.12.0.0/
> 
> PGP release keys (signed using 25908455):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> The vote will be open for 3 business days (until 15th January).
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> Additional information:
> 
> For users' convenience, packages are available in
> http://cloudstack.apt-get.eu/
> RC1 has been built for the following distros:
> - Ubuntu 14.04, 16.04, and 18.04;
> - CentOS6 and CentOS7.
> 
> The system VM template from 4.11.2 [1] works for RC2. The release notes [2]
> still need to be updated.
> 
> Best Regards,
> Gabriel.
> 
> [1] http://download.cloudstack.org/systemvm/4.11/
> [2]
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html
> 


Re: SSL - LetsEncrypt the Console Proxy

2019-01-18 Thread Wido den Hollander
Hi,

On 1/18/19 4:41 AM, asen...@testlabs.com.au wrote:
> Hi Guys,
> 
> Many people are using letsencrypt. This could replace the old retired
> realhostip.com DNS resolver . Noone would need to muck around with certs
> again on the console proxy.
> 
> I think it would be a fairly easy code change.
> 
> Thoughts?
> 

Will this work in all cases? It doesn't seem like a very easy change to
me. Yes, it would work with the HTTP ACME client, but only if you are
connected to the Internet with the CP.

The change might be easy, I don't know actually. Sounds like more work
as the Java code will need to talk to LE and request the certificate and
install it.
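For comparison, the flow such a change would need to automate looks roughly
like this (the hostname is illustrative):

  # HTTP-01 challenge: the console proxy must be reachable from the
  # internet on port 80
  certbot certonly --standalone -d cp.example.com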

Wido

> Adrian Sender


Re: Introduction

2019-01-11 Thread Wido den Hollander



On 1/11/19 11:48 AM, Andrija Panic wrote:
> Hi all,
> 
> I would like to take this opportunity to (re)introduce myself - some of you 
> already know me from mailing list as Andrija Panic from HIAG/Safe Swiss Cloud.
> 
> I have moved forward and joined a great team in ShapeBlue as a Cloud 
> Architect and looking forward to further endeavors with CloudStack.
> FTR - I'm based in Belgrade, Serbia and been playing with CloudStack for last 
> 5 years in production.
> 

Welcome back again! Good to have you around :-)

Wido

> Cheers,
> Andrija Panić
> 
> andrija.pa...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 


Re: VXLAN and KVM experiences

2019-01-02 Thread Wido den Hollander
Hi,

On 12/28/18 5:43 PM, Ivan Kudryavtsev wrote:
> Wido, that's interesting. 
> 
> Do you think that the Cumulus-based switches with BGP inside have
> advantage over classic OSPF-based routing switches and separate multihop
> MP BGP route-servers for VNI propagation? 
> 

I don't know. We do not use OSPF anywhere in our network. We are an
(i)BGP network only.

We want to use as much open software as possible: buy switches we like
and then add ONIE-based software like Cumulus.

> I'm thinking about pure L3 OSPF-based backend networks for management
> and storage where cloudstack uses bridges on dummy interfaces with IP
> assigned while real NICS use utility IP-addresses in several OSPF
> networks and all those target IPs are distributed with OSPF. 
> 
> Next, VNI-s are created over bridges and their information is
> distributed over BGP. 
> 
> This approach helps to implement fault tolerance and multi-path routes
> with standard L3 stack without xSTP, VCS, etc, decrease broadcast domains.
> 
> Any thoughts?
> 

I wouldn't know for sure, we haven't looked into this yet.

Again, our plan, but not set in stone is:

- Unnumbered BGP (IPv6 Link Local) to all Hypervisors
- Link balancing using ECMP
- BGP+EVPN for VXLAN VNI distribution
- Use a static VNI for CloudStack POD IPv4
- Adapt the *modifyvxlan.sh* script to suit our needs

This way the transport of traffic will be all be done in a IPv6 only
fashion.

IPv4 to the hypervisors (POD Traffic and NFS SS) is all done by a VXLAN
device we create manually on them.
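Roughly like this (a sketch; VNI, device name and addresses are illustrative):

  # IPv4 POD/storage traffic rides a VXLAN device over the IPv6 underlay
  ip link add vxpod type vxlan id 999 dstport 4789 local 2001:db8:100::9 nolearning
  ip addr add 10.100.0.9/24 dev vxpod
  ip link set vxpod up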

Wido

> 
> Fri, 28 Dec 2018 at 05:34, Wido den Hollander <mailto:w...@widodh.nl>>:
> 
> 
> 
> On 10/23/18 2:54 PM, Ivan Kudryavtsev wrote:
> > Doesn't solution like this works seamlessly for large VXLAN networks?
> >
> > https://vincent.bernat.ch/en/blog/2017-vxlan-bgp-evpn
> >
> 
> This is what we are looking into right now.
> 
> As CloudStack executes *modifyvxlan.sh* prior to starting an Instance it
> would be just a matter of replacing this script with a version which
> does the EVPN for us.
> 
> Our routers will probably be 36x100G SuperMicro bare metal switches
> running Cumulus.
> 
> Using unnumbered BGP over IPv6 we'll provide network connectivity to the
> Hypervisors.
> 
> Using FRR and EVPN we'll be able to enable VXLAN on the hypervisors and
> route traffic.
> 
> As these things seem to be very use-case specific I don't see how we can
> integrate this into CloudStack in a generic way.
> 
> The *modifyvxlan.sh* script gets the VNI as a argument, so anybody can
> adapt it to their own needs for their specific environment.
> 
> Wido
> 
> > Tue, 23 Oct 2018, 8:34 Simon Weller :
> >
> >> Linux native VXLAN uses multicast and each host has to participate in
> >> multicast in order to see the VXLAN networks. We haven't tried
> using PIM
> >> across a L3 boundary with ACS, although it will probably work fine.
> >>
> >> Another option is to use a L3 VTEP, but right now there is no native
> >> support for that in CloudStack's VXLAN implementation, although we've
> >> thought about proposing it as feature.
> >>
> >>
> >> 
> >> From: Wido den Hollander mailto:w...@widodh.nl>>
> >> Sent: Tuesday, October 23, 2018 7:17 AM
> >> To: dev@cloudstack.apache.org <mailto:dev@cloudstack.apache.org>;
> Simon Weller
> >> Subject: Re: VXLAN and KVm experiences
> >>
> >>
> >>
> >> On 10/23/18 1:51 PM, Simon Weller wrote:
> >>> We've also been using VXLAN on KVM for all of our isolated VPC guest
> >> networks for quite a long time now. As Andrija pointed out, make
> sure you
> >> increase the max_igmp_memberships param and also put an ip
> address on each
> >> host's VXLAN interface, in the same subnet for all hosts
> that will
> >> share networking, or multicast won't work.
> >>>
> >>
> >> Thanks! So you are saying that all hypervisors need to be in the
> same L2
> >> network or are you routing the multicast?
> >>
> >> My idea was that each POD would be an isolated Layer 3 domain and
> that a
> >> VNI would span over the different Layer 3 networks.
> >>
> >> I don't like STP and other Layer 2 loop-prevention systems.
> >>
> >> Wido
> >>
> >>>
> >>> - 

Re: KVM wrong CPU speed reported

2018-12-28 Thread Wido den Hollander



On 12/28/18 9:23 AM, Andrija Panic wrote:
> Hi guys,
> 
> https://github.com/apache/cloudstack/commit/29e389eb872ffa816be99aa66ff20bdec56d3187
> 
> the commit above gets the CPU frequency with, effectively,
> "cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq", which gives the
> TurboBoost CPU frequency (in case of Intel CPU) and similar for AMD.
> 
> Problem is that TurboBoost can only boost frequency of a very small number
> of CPU cores for a period of time - not by any means all cores.
> 
> CPU frequency is wrongly reported here as i.e. 3.4 GHz instead of 2.6 GHZ
> (example of Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz)
> 
> We here thus have effective over-provisioning of 30% by default.
> 
> Can we fix this somehow to actually read the base CPU frequency ?
> i.e. Xen reports correct CPU frequency (base/nominal one)
> 

Probably fixable by reading a different file on the KVM hypervisor. But
I always have the feeling that GHz are a soft factor. Memory and Storage
are hard limits, but CPUs are different.
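A quick comparison on the hypervisor, as a sketch (the base_frequency node
is only exposed by some cpufreq drivers, e.g. intel_pstate on newer kernels):

  # what the agent reads today: the turbo ceiling
  cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
  # nominal clock, where available
  cat /sys/devices/system/cpu/cpu0/cpufreq/base_frequency
  lscpu | grep -i mhz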

We have an over-provision of 2x or 3x on many systems as the CPUs are
pretty much idle and the defaults of CloudStack are very conservative.

What does a GHz really tell us? A GHz from 5 years ago isn't the same as
one from today.

Wido

> Best,
> 


Re: VXLAN and KVM experiences

2018-12-28 Thread Wido den Hollander



On 10/23/18 2:54 PM, Ivan Kudryavtsev wrote:
> Doesn't solution like this works seamlessly for large VXLAN networks?
> 
> https://vincent.bernat.ch/en/blog/2017-vxlan-bgp-evpn
> 

This is what we are looking into right now.

As CloudStack executes *modifyvxlan.sh* prior to starting an Instance it
would be just a matter of replacing this script with a version which
does the EVPN for us.

Our routers will probably be 36x100G SuperMicro bare metal switches
running Cumulus.

Using unnumbered BGP over IPv6 we'll provide network connectivity to the
Hypervisors.

Using FRR and EVPN we'll be able to enable VXLAN on the hypervisors and
route traffic.

As these things seem to be very use-case specific I don't see how we can
integrate this into CloudStack in a generic way.

The *modifyvxlan.sh* script gets the VNI as a argument, so anybody can
adapt it to their own needs for their specific environment.

Wido

> Tue, 23 Oct 2018, 8:34 Simon Weller :
> 
>> Linux native VXLAN uses multicast and each host has to participate in
>> multicast in order to see the VXLAN networks. We haven't tried using PIM
>> across a L3 boundary with ACS, although it will probably work fine.
>>
>> Another option is to use a L3 VTEP, but right now there is no native
>> support for that in CloudStack's VXLAN implementation, although we've
>> thought about proposing it as feature.
>>
>>
>> 
>> From: Wido den Hollander 
>> Sent: Tuesday, October 23, 2018 7:17 AM
>> To: dev@cloudstack.apache.org; Simon Weller
>> Subject: Re: VXLAN and KVm experiences
>>
>>
>>
>> On 10/23/18 1:51 PM, Simon Weller wrote:
>>> We've also been using VXLAN on KVM for all of our isolated VPC guest
>> networks for quite a long time now. As Andrija pointed out, make sure you
>> increase the max_igmp_memberships param and also put an IP address on each
>> host's VXLAN interface, in the same subnet for all hosts that will
>> share networking, or multicast won't work.
>>>
>>
>> Thanks! So you are saying that all hypervisors need to be in the same L2
>> network or are you routing the multicast?
>>
>> My idea was that each POD would be an isolated Layer 3 domain and that a
>> VNI would span over the different Layer 3 networks.
>>
>> I don't like STP and other Layer 2 loop-prevention systems.
>>
>> Wido
>>
>>>
>>> - Si
>>>
>>>
>>> 
>>> From: Wido den Hollander 
>>> Sent: Tuesday, October 23, 2018 5:21 AM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: VXLAN and KVm experiences
>>>
>>>
>>>
>>> On 10/23/18 11:21 AM, Andrija Panic wrote:
>>>> Hi Wido,
>>>>
>>>> I have "pioneered" this one in production for last 3 years (and
>> suffered a
>>>> nasty pain of silent drop of packets on kernel 3.X back in the days
>>>> because of being unaware of max_igmp_memberships kernel parameters, so I
>>>> have updated the manual a long time ago).
>>>>
>>>> I never had any issues (beside above nasty one...) and it works very
>> well.
>>>
>>> That's what I want to hear!
>>>
>>>> To avoid above issue that I described - you should increase
>>>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  -
>> otherwise
>>>> with more than 20 vxlan interfaces, some of them will stay in down state
>>>> and have a hard traffic drop (with proper message in agent.log) with
>> kernel
>>>>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and
>> also
>>>> pay attention to MTU size as well - anyway everything is in the manual
>> (I
>>>> updated everything I thought was missing) - so please check it.
>>>>
>>>
>>> Yes, the underlying network will all be 9000 bytes MTU.
>>>
>>>> Our example setup:
>>>>
>>>> We have i.e. bond.950 as the main VLAN which will carry all vxlan
>> "tunnels"
>>>> - so this is defined as KVM traffic label. In our case it didn't make
>> sense
>>>> to use bridge on top of this bond0.950 (as the traffic label) - you can
>>>> test it on your own - since this bridge is used only to extract child
>>>> bond0.950 interface name, then based on vxlan ID, ACS will provision
>>>> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge
>> created
>>>> (and then of course vNIC goes to this new bridge), so original bridge
>

Re: Ubuntu DEBs not updated

2018-12-10 Thread Wido den Hollander
This has been fixed!

On 12/8/18 8:39 PM, Rohit Yadav wrote:
> + Wido - can you kick a build, thanks.
> 
> 
> - Rohit
> 
> <https://cloudstack.apache.org>
> 
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> @shapeblue
>   
> 
>   
> 
> 
> *From:* Andrija Panic 
> *Sent:* Friday, December 7, 2018 4:24:07 PM
> *To:* dev
> *Subject:* Ubuntu DEBs not updated
>  
> Hi guys,
> 
> just a heads up - last "convenience" packages for Ubuntu are 4.11.1.0 -
> i.e. 4.11.2.0 is not present - not sure if this needs a fix or not...
> 
> http://download.cloudstack.org/ubuntu/dists/trusty/4.11/pool/
> http://download.cloudstack.org/ubuntu/dists/xenial/4.11/pool/
> 
> For Centos7 - I see 4.11.0.0, 4.11.1.0, 4.11.2.0 (all fine)
> For Centos6 - I see 4.11.1.0 and 4.11.2.0 ( all fine)
> For Ubuntu (^^^) I see 4.11.1.0 (trusty) and 4.11.0.0 and 4.11.1.0 (Xenial)
> 
> Best,
> 
> -- 
> 
> Andrija Panić


Re: does cloudstack agent run on alpine?

2018-12-10 Thread Wido den Hollander



On 12/10/18 6:50 AM, Ron Wheeler wrote:
> Does the cloudstack agent run on Alpine? It seems to support kvm
> 

I suspect it will.

What you need is:

- KVM
- Libvirt
- Java 8
- JNA / JNI for Java+libvirt
- iproute2
- bridge-utils

That should be sufficient.
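
As a starting point, something along these lines might work (the package
names are my guesses at the Alpine equivalents and would need verifying,
in particular a Java 8 runtime and the libvirt Java bindings):

apk add qemu-system-x86_64 libvirt libvirt-daemon openjdk8-jre iproute2 bridge-utils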

Wido


Dummy SecurityGroup Provider for VXLAN/VLAN in Advanced Networking

2018-12-07 Thread Wido den Hollander
Hi,

I'm looking into this setup:

Advanced zone with VXLAN

- Guest Network 1: Network Offering with SG
- Guest Network 2: Network Offering WITHOUT SG

This doesn't work as the zone has SG enabled and thus all guest networks
require SG.

I wonder why each Guest Network needs to have SG enabled. For KVM, for
example, it shouldn't be a technical requirement. As VXLAN (or even
VLANs) provide the isolation between different networks you should be
able to have one network with SG and the other without SG.

Does anybody know why each Guest network needs SG?

Now, I was thinking about creating 'DummySecurityGroupProvider' which
says 'true' to everything you ask it, but in reality doesn't do
anything. This way you could use that provider in a network offering and
work around this.

Would that make sense to people?

Wido


Re: NullPointerException on network GC on master

2018-11-28 Thread Wido den Hollander
On 11/29/18 8:42 AM, Wido den Hollander wrote:
> Hi,
> 
> While running with master I see the mgmt server throwing out this error
> in its log (see below).
> 
> It doesn't hurt me at the moment, but the '** NOT SPECIFIED **' seems
> odd to me, as something goes wrong in the ORM towards the database.
> 
> Anybody else seeing this?

I needed to hit F5 on Github first. We already have an issue for this:
https://github.com/apache/cloudstack/issues/3064

> 
> Wido
> 
> 2018-11-29 08:40:56,677 WARN  [o.a.c.e.o.NetworkOrchestrator]
> (Network-Scavenger-1:ctx-a3a4da5d) (logid:278277a6) Caught exception
> while running network gc:
> com.cloud.utils.exception.CloudRuntimeException: Caught:
> com.mysql.jdbc.JDBC42PreparedStatement@8005a05: SELECT networks.id FROM
> networks  INNER JOIN network_offerings ON
> networks.network_offering_id=network_offerings.id  INNER JOIN
> op_networks ON networks.id=op_networks.id WHERE networks.removed IS NULL
>  AND  (op_networks.nics_count = ** NOT SPECIFIED **  AND op_networks.gc
> = ** NOT SPECIFIED **  AND op_networks.check_for_gc = ** NOT SPECIFIED ** )
>   at
> com.cloud.utils.db.GenericDaoBase.customSearchIncludingRemoved(GenericDaoBase.java:507)
>   at 
> com.cloud.utils.db.GenericDaoBase.customSearch(GenericDaoBase.java:518)
>   at
> com.cloud.network.dao.NetworkDaoImpl.findNetworksToGarbageCollect(NetworkDaoImpl.java:461)
>   at sun.reflect.GeneratedMethodAccessor242.invoke(Unknown Source)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
>   at
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
>   at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
>   at
> com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
>   at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
>   at
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
>   at
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
>   at com.sun.proxy.$Proxy64.findNetworksToGarbageCollect(Unknown Source)
>   at
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.reallyRun(NetworkOrchestrator.java:2758)
>   at
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.runInContext(NetworkOrchestrator.java:2742)
>   at
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>   at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>   at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>   at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>   at
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> 
> 


NullPointerException on network GC on master

2018-11-28 Thread Wido den Hollander
Hi,

While running with master I see the mgmt server throwing out this error
in its log (see below).

It doesn't hurt me at the moment, but the '** NOT SPECIFIED **' seems
odd to me, as something goes wrong in the ORM towards the database.

Anybody else seeing this?

Wido

2018-11-29 08:40:56,677 WARN  [o.a.c.e.o.NetworkOrchestrator]
(Network-Scavenger-1:ctx-a3a4da5d) (logid:278277a6) Caught exception
while running network gc:
com.cloud.utils.exception.CloudRuntimeException: Caught:
com.mysql.jdbc.JDBC42PreparedStatement@8005a05: SELECT networks.id FROM
networks  INNER JOIN network_offerings ON
networks.network_offering_id=network_offerings.id  INNER JOIN
op_networks ON networks.id=op_networks.id WHERE networks.removed IS NULL
 AND  (op_networks.nics_count = ** NOT SPECIFIED **  AND op_networks.gc
= ** NOT SPECIFIED **  AND op_networks.check_for_gc = ** NOT SPECIFIED ** )
at
com.cloud.utils.db.GenericDaoBase.customSearchIncludingRemoved(GenericDaoBase.java:507)
at 
com.cloud.utils.db.GenericDaoBase.customSearch(GenericDaoBase.java:518)
at
com.cloud.network.dao.NetworkDaoImpl.findNetworksToGarbageCollect(NetworkDaoImpl.java:461)
at sun.reflect.GeneratedMethodAccessor242.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
at
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
at
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at com.sun.proxy.$Proxy64.findNetworksToGarbageCollect(Unknown Source)
at
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.reallyRun(NetworkOrchestrator.java:2758)
at
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator$NetworkGarbageCollector.runInContext(NetworkOrchestrator.java:2742)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException




Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-22 Thread Wido den Hollander
Hi Paul,

I'll build the DEB files, no problem! The RPMs are something I have no
infra for. Could you upload them?

Wido

On 11/22/18 9:16 AM, Paul Angus wrote:
> Hi Wido,
> 
> We're in a position to upload the 4.11.2.0 binaries to 
> download.cloudstack.org - could you build the RPMs and DEBs please?
> If it helps we can build the RPMs and put them up for you to sign.
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 
> -Original Message-
> From: Rohit Yadav  
> Sent: 21 November 2018 16:26
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5
> 
> Hi Andrija,
> 
> In 4.11.2 VR we've restricted the maximum size of systemd/journald files so 
> you should not see any significant memory increase beyond, say, 25-50MB. In my 
> local testing with kvm, xenserver and vmware, I was never able to reproduce 
> the memory issue on VRs.
> 
> Regards,
> Rohit Yadav
> 
> 
> From: Andrija Panic 
> Sent: Wednesday, November 21, 2018 6:24:30 PM
> To: dev
> Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5
> 
> FYI, I also tested this on KVM (ssh into the VR many times with while true..do 
> ...as Rene  suggested) and also observed small increase in memory, after 
> 10min of script running, it went up by 10-20MB...but not sure how significant 
> this is...
> 
> Andrija
> 
> On Wed, Nov 21, 2018, 13:27 Zehnder, Samuel  
>> Hi Rohit
>>
>> I think I've found something regarding memory issues with vmware:
>> Schema-update only updates default system-vm, but not newly registered
>> ones:
>>
>>
>> https://github.com/apache/cloudstack/blob/master/engine/schema/src/mai
>> n/resources/META-INF/db/schema-41000to41100.sql
>> :
>> 448: -- Use 'Other Linux 64-bit' as guest os for the default 
>> systemvmtemplate for VMware
>> 449: -- This fixes a memory allocation issue to systemvms on 
>> VMware/ESXi
>> 450: UPDATE `cloud`.`vm_template` SET guest_os_id=99 WHERE id=8;
>>
>> When I registered the new templates I selected Debian something as OS 
>> type. I now changed this to "Other Linux (64bit)", which is what above 
>> update is doing, and I can see significantly less memory used by VRs. 
>> I do not understand the reasons behind this behavior, I tried also 
>> other settings (Debian 9 64-bit, Other 3.x Linux), neither seem to 
>> handle memory well...
>>
>> As for the VPN part, you suggested
>>> you can build a custom systemvm.iso file with those settings.
>> Is it possible to simply replace the systemvm.iso file on mgmt-server, 
>> remove it from secondary and restart mgmt-server? Maybe you can point 
>> me here in the right direction.
>>
>> Thanks,
>> Sam
>>
>>
>>> -Original Message-
>>> From: Rohit Yadav 
>>> Sent: Dienstag, 20. November 2018 12:55
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC5
>>>
>>> Hi Samuel,
>>>
>>>
>>> Thanks for your email. I've opened this ticket for your first issue:
>>> https://github.com/apache/cloudstack/issues/3039
>>>
>>> Please follow René's advice to (a) try increasing the VR memory and 
>>> see if
>> it
>>> helps, (b) have a script for reducing memory over time. We'll also 
>>> work
>> with
>>> the systemd project to see if they can fix and backport this for 
>>> Debian
>> 9.6+.
>>>
>>>
>>> For your second issue, in 4.9 which used a Debian7 based VR and 
>>> openswan for VPN, we've moved to strongswan. If your external Cisco 
>>> endpoint/integration can work with strongswan, please create a VPC 
>>> VR and manipulate the strongswan configs in that VR and share your 
>>> results or
>> send
>>> a PR, the changes need to be in one of the python files such as
>> configure.py.
>>> The #2 issue is very specific to your environment and is not a 
>>> general
>> error, if
>>> you're able to optimize the configuration for a VR, you can build a
>> custom
>>> systemvm.iso file with those settings. In addition, you can send a 
>>> PR or submit a Github issue with details, logs, configurations etc:
>>> https://github.com/apache/cloudstack/issues
>>>
>>>
>>> I think both the issues are not general blockers and should not void
>> 4.11.2.0
>>> voting.
>>>
>

Re: CloudStack master and 4.11.2 RC5 build issues

2018-11-19 Thread Wido den Hollander



On 11/19/18 10:03 AM, Paul Angus wrote:
> Wido, are you planning on building and testing RC5 today?  I'll hold off on a 
> result with my vote if you are.
> 

I am, but I can't seem to build RC5 and test it, so that would make it
hard to verify the release.
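
For reference, one way to at least get packages out - assuming the
failing surefire forks are the only blocker - would be to run the Maven
build by hand with tests skipped:

mvn clean install -P developer,systemvm -DskipTests=true

But that only works around the fork crash, it doesn't verify anything.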

Wido

> 
> Kind regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 
> -Original Message-
> From: Rohit Yadav  
> Sent: 19 November 2018 00:51
> To: dev@cloudstack.apache.org
> Subject: Re: CloudStack master and 4.11.2 RC5 build issues
> 
> Hi Wido,
> 
> Can you share the output using jdk8 by running the mvn command with -e and -X 
> (you'll need to add that in some file in the debian directory or simply do a 
> mvn based build with those flags)? Alternatively, you may also skip tests 
> using -DskipTests to get around the issue. Looks like it started happening 
> recently in 18.04 (I'm able to reproduce as well).
> 
> Regards,
> Rohit Yadav
> 
> 
> From: Wido den Hollander 
> Sent: Monday, November 19, 2018 12:26:47 AM
> To: dev@cloudstack.apache.org
> Subject: CloudStack master and 4.11.2 RC5 build issues
> 
> Hi,
> 
> Trying to build master using this command it fails:
> 
> $ dpkg-buildpackage -us -uc -b
> 
> I get this error:
> 
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 7.282 s
> [INFO] Finished at: 2018-11-18T19:55:10+01:00 [INFO] Final Memory: 50M/174M 
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test
> (default-test) on project cloud-framework-managed-context: Execution 
> default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test failed: The forked 
> VM terminated without properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context
> && /usr/lib/jvm/java-11-openjdk-amd64/bin/java
> -Djava.security.egd=file:/dev/./urandom -noverify -jar 
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefirebooter4935588302032993476.jar
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefire7506593539939542941tmp
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefire_010482278773157415685tmp
> [ERROR] -> [Help 1]
> 
> I tried both Java 8 and 11 JDK from Ubuntu 18.04, but in both cases it fails.
> 
> wido@wido-laptop:~$ java -version
> openjdk version "10.0.2" 2018-07-17
> OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3)
> OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3, mixed
> mode)
> wido@wido-laptop:~$
> 
> Have other people seen this?
> 
> Wido
> 
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK @shapeblue
>   
>  
> 


Re: CloudStack master and 4.11.2 RC5 build issues

2018-11-19 Thread Wido den Hollander



On 11/19/18 1:51 AM, Rohit Yadav wrote:
> Hi Wido,
> 
> Can you share the output using jdk8 by running the mvn command with -e and -X 
> (you'll need to add that in some file in the debian directory or simply do a 
> mvn based build with those flags)? Alternatively, you may also skip tests 
> using -DskipTests to get around the issue. Looks like it started happening 
> recently in 18.04 (I'm able to reproduce as well).
> 

Sure! Here is the output of such a build:

https://widodh.o.auroraobjects.eu/cloudstack/tmp/4.11.2-build.log.gz

Unzipped, it's a 265M file, so be aware: that's a lot of data. I'm not sure
what's happening here.

Wido

> Regards,
> Rohit Yadav
> 
> ____
> From: Wido den Hollander 
> Sent: Monday, November 19, 2018 12:26:47 AM
> To: dev@cloudstack.apache.org
> Subject: CloudStack master and 4.11.2 RC5 build issues
> 
> Hi,
> 
> Trying to build master using this command it fails:
> 
> $ dpkg-buildpackage -us -uc -b
> 
> I get this error:
> 
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 7.282 s
> [INFO] Finished at: 2018-11-18T19:55:10+01:00
> [INFO] Final Memory: 50M/174M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test
> (default-test) on project cloud-framework-managed-context: Execution
> default-test of goal
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test failed: The
> forked VM terminated without properly saying goodbye. VM crash or
> System.exit called?
> [ERROR] Command was /bin/sh -c cd
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context
> && /usr/lib/jvm/java-11-openjdk-amd64/bin/java
> -Djava.security.egd=file:/dev/./urandom -noverify -jar
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefirebooter4935588302032993476.jar
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefire7506593539939542941tmp
> /home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefire_010482278773157415685tmp
> [ERROR] -> [Help 1]
> 
> I tried both Java 8 and 11 JDK from Ubuntu 18.04, but in both cases it
> fails.
> 
> wido@wido-laptop:~$ java -version
> openjdk version "10.0.2" 2018-07-17
> OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3)
> OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3, mixed
> mode)
> wido@wido-laptop:~$
> 
> Have other people seen this?
> 
> Wido
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: [ANNOUNCE] New committer: Andrija Panić

2018-11-19 Thread Wido den Hollander
Welcome Andrija!

On 11/19/18 5:27 AM, Tutkowski, Mike wrote:
> Hi everyone,
> 
> The Project Management Committee (PMC) for Apache CloudStack
> has invited Andrija Panić to become a committer and I am pleased
> to announce that he has accepted.
> 
> Please join me in congratulating Andrija on this accomplishment.
> 
> Thanks!
> Mike
> 


CloudStack master and 4.11.2 RC5 build issues

2018-11-18 Thread Wido den Hollander
Hi,

Trying to build master using this command it fails:

$ dpkg-buildpackage -us -uc -b

I get this error:

[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 7.282 s
[INFO] Finished at: 2018-11-18T19:55:10+01:00
[INFO] Final Memory: 50M/174M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test
(default-test) on project cloud-framework-managed-context: Execution
default-test of goal
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test failed: The
forked VM terminated without properly saying goodbye. VM crash or
System.exit called?
[ERROR] Command was /bin/sh -c cd
/home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context
&& /usr/lib/jvm/java-11-openjdk-amd64/bin/java
-Djava.security.egd=file:/dev/./urandom -noverify -jar
/home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefirebooter4935588302032993476.jar
/home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefire7506593539939542941tmp
/home/wido/Desktop/apache-cloudstack-4.11.2.0-src/framework/managed-context/target/surefire/surefire_010482278773157415685tmp
[ERROR] -> [Help 1]

I tried both Java 8 and 11 JDK from Ubuntu 18.04, but in both cases it
fails.

wido@wido-laptop:~$ java -version
openjdk version "10.0.2" 2018-07-17
OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3)
OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3, mixed
mode)
wido@wido-laptop:~$

Have other people seen this?

Wido


Re: VXLAN and KVm experiences

2018-11-15 Thread Wido den Hollander



On 11/14/18 6:25 PM, Simon Weller wrote:
> Wido,
> 
> 
> Here is the original document on the implemention for VXLAN in ACS
> - 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Linux+native+VXLAN+support+on+KVM+hypervisor
> 
> It may shed some light on the reasons for the different multicast groups.
> 

Yes, I see now. It is to prevent a single multicast group being flooded
with traffic from all VNIs.

Thanks!

Wido

>  
> - Si
> 
> 
> *From:* Wido den Hollander 
> *Sent:* Tuesday, November 13, 2018 4:40 AM
> *To:* dev@cloudstack.apache.org; Simon Weller
> *Subject:* Re: VXLAN and KVm experiences
>  
> 
> 
> On 10/23/18 2:34 PM, Simon Weller wrote:
>> Linux native VXLAN uses multicast and each host has to participate in 
>> multicast in order to see the VXLAN networks. We haven't tried using PIM 
>> across a L3 boundary with ACS, although it will probably work fine.
>> 
>> Another option is to use a L3 VTEP, but right now there is no native support 
>> for that in CloudStack's VXLAN implementation, although we've thought about 
>> proposing it as feature.
>> 
> 
> Getting back to this I see CloudStack does this:
> 
> local mcastGrp="239.$(( ($vxlanId >> 16) % 256 )).$(( ($vxlanId >> 8) %
> 256 )).$(( $vxlanId % 256 ))"
> 
> VNI 1000 would use group 239.0.3.232 and VNI 1001 uses 239.0.3.233.
> 
> Why are we using a different mcast group for every VNI? As the VNI is
> encoded in the packet this should just work in one group, right?
> 
> Because this way you need to configure all those groups on your
> Router(s) as each VNI will use a different Multicast Group.
> 
> I'm just looking for the reason why we have these different multicast groups.
> 
> I was thinking that we might want to add an option to agent.properties
> where we allow users to set a fixed Multicast group for all traffic.
> 
> Wido
> 
> [0]:
> https://github.com/apache/cloudstack/blob/master/scripts/vm/network/vnet/modifyvxlan.sh#L33
> 
> 
> 
>> 
>> 
>> From: Wido den Hollander 
>> Sent: Tuesday, October 23, 2018 7:17 AM
>> To: dev@cloudstack.apache.org; Simon Weller
>> Subject: Re: VXLAN and KVm experiences
>> 
>> 
>> 
>> On 10/23/18 1:51 PM, Simon Weller wrote:
>>> We've also been using VXLAN on KVM for all of our isolated VPC guest 
>>> networks for quite a long time now. As Andrija pointed out, make sure you 
>>> increase the max_igmp_memberships param and also put an ip address on each 
>> host's VXLAN interface in the same subnet for all hosts that will 
>>> share networking, or multicast
> won't work.
>>>
>> 
>> Thanks! So you are saying that all hypervisors need to be in the same L2
>> network or are you routing the multicast?
>> 
>> My idea was that each POD would be an isolated Layer 3 domain and that a
>> VNI would span over the different Layer 3 networks.
>> 
>> I don't like STP and other Layer 2 loop-prevention systems.
>> 
>> Wido
>> 
>>>
>>> - Si
>>>
>>>
>>> 
>>> From: Wido den Hollander 
>>> Sent: Tuesday, October 23, 2018 5:21 AM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: VXLAN and KVm experiences
>>>
>>>
>>>
>>> On 10/23/18 11:21 AM, Andrija Panic wrote:
>>>> Hi Wido,
>>>>
>>>> I have "pioneered" this one in production for last 3 years (and suffered a
>>>> nasty pain of silent drop of packets on kernel 3.X back in the days
>>>> because of being unaware of max_igmp_memberships kernel parameters, so I
>>>> have updated the manual a long time ago).
>>>>
>>>> I never had any issues (beside above nasty one...) and it works very well.
>>>
>>> That's what I want to hear!
>>>
>>>> To avoid above issue that I described - you should increase
>>>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
>>>> with more than 20 vxlan interfaces, some of them will stay in down state
>>>> and have a hard traffic drop (with proper message in agent.log) with kernel
>>>>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
>>>> pay attention to MTU size as well - anyway everything is in the manual (I
>>>> updated everything I thought was missing) - so please check it.
>>>>
>>>
>>> Yes, the underly

Re: VXLAN and KVm experiences

2018-11-13 Thread Wido den Hollander



On 10/23/18 2:34 PM, Simon Weller wrote:
> Linux native VXLAN uses multicast and each host has to participate in 
> multicast in order to see the VXLAN networks. We haven't tried using PIM 
> across a L3 boundary with ACS, although it will probably work fine.
> 
> Another option is to use a L3 VTEP, but right now there is no native support 
> for that in CloudStack's VXLAN implementation, although we've thought about 
> proposing it as feature.
> 

Getting back to this, I see CloudStack does this [0]:

local mcastGrp="239.$(( ($vxlanId >> 16) % 256 )).$(( ($vxlanId >> 8) %
256 )).$(( $vxlanId % 256 ))"

VNI 1000 would use group 239.0.3.232 and VNI 1001 uses 239.0.3.233.
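
(Worked out for VNI 1000: 1000 >> 16 = 0, (1000 >> 8) % 256 = 3 and
1000 % 256 = 232, giving 239.0.3.232; VNI 1001 only changes the last
octet to 233.)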

Why are we using a different mcast group for every VNI? As the VNI is
encoded in the packet this should just work in one group, right?

Because this way you need to configure all those groups on your
Router(s) as each VNI will use a different Multicast Group.

I'm just looking for the reason why we have these different multicast groups.

I was thinking that we might want to add an option to agent.properties
where we allow users to set a fixed Multicast group for all traffic.
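
Something like this is what I have in mind (the key name
vxlan.multicast.group is made up for this example, it does not exist yet):

# agent.properties
vxlan.multicast.group=239.0.0.100

modifyvxlan.sh would then pass that single group to every 'ip link add
... type vxlan' call instead of deriving one group per VNI.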

Wido

[0]:
https://github.com/apache/cloudstack/blob/master/scripts/vm/network/vnet/modifyvxlan.sh#L33



> 
> 
> From: Wido den Hollander 
> Sent: Tuesday, October 23, 2018 7:17 AM
> To: dev@cloudstack.apache.org; Simon Weller
> Subject: Re: VXLAN and KVm experiences
> 
> 
> 
> On 10/23/18 1:51 PM, Simon Weller wrote:
>> We've also been using VXLAN on KVM for all of our isolated VPC guest 
>> networks for quite a long time now. As Andrija pointed out, make sure you 
>> increase the max_igmp_memberships param and also put an ip address on each 
>> host's VXLAN interface in the same subnet for all hosts that will
>> share networking, or multicast won't work.
>>
> 
> Thanks! So you are saying that all hypervisors need to be in the same L2
> network or are you routing the multicast?
> 
> My idea was that each POD would be an isolated Layer 3 domain and that a
> VNI would span over the different Layer 3 networks.
> 
> I don't like STP and other Layer 2 loop-prevention systems.
> 
> Wido
> 
>>
>> - Si
>>
>>
>> 
>> From: Wido den Hollander 
>> Sent: Tuesday, October 23, 2018 5:21 AM
>> To: dev@cloudstack.apache.org
>> Subject: Re: VXLAN and KVm experiences
>>
>>
>>
>> On 10/23/18 11:21 AM, Andrija Panic wrote:
>>> Hi Wido,
>>>
>>> I have "pioneered" this one in production for last 3 years (and suffered a
>>> nasty pain of silent drop of packets on kernel 3.X back in the days
>>> because of being unaware of max_igmp_memberships kernel parameters, so I
>>> have updated the manual a long time ago).
>>>
>>> I never had any issues (beside above nasty one...) and it works very well.
>>
>> That's what I want to hear!
>>
>>> To avoid above issue that I described - you should increase
>>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
>>> with more than 20 vxlan interfaces, some of them will stay in down state
>>> and have a hard traffic drop (with proper message in agent.log) with kernel
>>>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
>>> pay attention to MTU size as well - anyway everything is in the manual (I
>>> updated everything I thought was missing) - so please check it.
>>>
>>
>> Yes, the underlying network will all be 9000 bytes MTU.
>>
>>> Our example setup:
>>>
>>> We have i.e. bond.950 as the main VLAN which will carry all vxlan "tunnels"
>>> - so this is defined as KVM traffic label. In our case it didn't make sense
>>> to use bridge on top of this bond0.950 (as the traffic label) - you can
>>> test it on your own - since this bridge is used only to extract child
>>> bond0.950 interface name, then based on vxlan ID, ACS will provision
>>> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge created
>>> (and then of course vNIC goes to this new bridge), so original bridge (to
>>> which bond0.xxx belonged) is not used for anything.
>>>
>>
>> Clear, I indeed thought something like that would happen.
>>
>>> Here is sample from above for vxlan 867 used for tenant isolation:
>>>
>>> root@hostname:~# brctl show brvx-867
>>>
>>> bridge name bridge id   STP enabled interfaces
>>> brvx-867        8000.2215cfce99ce   n

Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/9/18 1:33 PM, Andrija Panic wrote:
> Thanks Wido - though I don't seem to be able to find any related setting
> (there is host.overcommit.mem.mb but that is not it - unless you can
> define a negative value for it)?
> https://github.com/apache/cloudstack/blob/master/agent/conf/agent.properties
> 

host.reserved.mem.mb=32768

That one should be the setting you might want to look at.

In this case 32G is reserved and not available to CS.

Wido

> 
> thx
> 
> On Fri, 9 Nov 2018 at 13:03, Wido den Hollander wrote:
> 
> 
> 
> On 11/9/18 12:56 PM, Andrija Panic wrote:
> > afaik not - but I did run once or twice into a perhaps loosely
> connected
> > issue - ACS reports 100% of host RAM (makes sense) as available for VM
> > deployment to ACS - so in 1-2 cases I did run into out of memory
> killer,
> > crashing my VMs.
> >
> > It would be great to have some amount of "reserve RAM" for host OS
> - or
> > simply have a PER HOST RAM disableThreshold setting, similar to
> cluster level
> > "cluster.memory.allocated.capacity.disablethreshold", just on host
> level...
> >
> 
> 
> You can do that already, in agent.properties you can set reserved
> memory.
> 
> But I doubt indeed that we need such a limit in ACS at all. Why do we
> need to limit the amount of Instances on a hypervisor?
> 
> Or at least set it to a very high number by default.
> 
> Wido
> 
> > On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner wrote:
> >
> >> Do we need these logical constraints in ACS at all?
> >>
> >> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander wrote:
> >>
> >>>
> >>>
> >>> On 11/8/18 11:20 PM, Simon Weller wrote:
> >>>> I think this is legacy and a guess back in the day. It was 50
> at one
> >>> point and it was lifted higher a few releases ago.
> >>>>
> >>>
> >>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
> >>> spawning a lot of 128M VMs. Trying to see where the limit might
> be and
> >>> also stress the VR a bit by loading a lot of DHCP entries.
> >>>
>     >>> Wido
> >>>
> >>>>
> >>>>
> >>>>
> >>>> 
> >>>> From: Ivan Kudryavtsev
> >>>> Sent: Thursday, November 8, 2018 3:58 PM
> >>>> To: dev
> >>>> Subject: Re: KVM Max Guests Limit
> >>>>
> >>>> Hi all, +1 for higher numbers.
> >>>>
> >>>> Thu, 8 Nov 2018 at 16:32, Wido den Hollander:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> I see that for KVM we set the limit to 144 guests by default, can
> >>>>> anybody tell me why we have this limit set to 144?
> >>>>>
> >>>>> Searching a bit I found this:
> >>>>> https://access.redhat.com/articles/rhel-kvm-limits
> >>>>>
> >>>>> "This guest limit does not apply to Red Hat Enterprise Linux with
> >>>>> Unlimited Guests. There is no guest limit for Red Hat Enterprise
> >>>>> Virtualization"
> >>>>>
> >>>>> There is always a limit somewhere, but why do we set it to 144?
> >>>>>
> >>>>> I would personally vote for increasing this to 500 or something so
> >> that
> >>>>> users don't run into it that easily.
> >>>>>
> >>>>> Also, the log line is printed in DEBUG mode only when a host
> reaches
> >>>>> this limit, so I created a PR to set this to INFO:
> >>>>> https://github.com/apache/cloudstack/pull/3013
> >>>>>
> >>>>> Any input?
> >>>>>
> >>>>> Wido
> >>>>>
> >>>>
> >>>>
> >>>> --
> >>>> With best regards, Ivan Kudryavtsev
> >>>> Bitworks LLC
> >>>> Cell RU: +7-923-414-1515
> >>>> Cell USA: +1-201-257-1512
> >>>> WWW: http://bitworks.software/ <http://bw-sw.com/>
> >>>>
> >>>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
> 
> 
> 
> -- 
> 
> Andrija Panić


Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/9/18 1:08 PM, Rafael Weingärtner wrote:
> For me, that seems like some restriction from paid products. “you are client
> type X, then you can start only Y VMs”, and this has been a legacy around
> our code base.  We could very much remove this limit (on instance numbers);
> I expect operators to know what they are doing, and to monitor closely the
> platforms/systems they run. The management of other resources such as RAM,
> CPU, and others, I still consider them necessary though.
> 

That might be the case indeed. I think this can be removed completely,
but there's a lot of code around this. In the end it is a boolean which
tells if the hypervisor has reached the limit or not.

I will look into it.

Wido

> On Fri, Nov 9, 2018 at 10:03 AM Wido den Hollander  wrote:
> 
>>
>>
>> On 11/9/18 12:56 PM, Andrija Panic wrote:
>>> afaik not - but I did run once or twice into a perhaps loosely connected
>>> issue - ACS reports 100% of host RAM (makes sense) as available for VM
>>> deployment to ACS - so in 1-2 cases I did run into out of memory killer,
>>> crashing my VMs.
>>>
>>> It would be great to have some amount of "reserve RAM" for host OS - or
>>> simply have a PER HOST RAM disableThreshold setting, similar to cluster
>> level
>>> "cluster.memory.allocated.capacity.disablethreshold", just on host
>> level...
>>>
>>
>>
>> You can do that already, in agent.properties you can set reserved memory.
>>
>> But I doubt indeed that we need such a limit in ACS at all. Why do we
>> need to limit the amount of Instances on a hypervisor?
>>
>> Or at least set it to a very high number by default.
>>
>> Wido
>>
>>> On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner <
>> rafaelweingart...@gmail.com>
>>> wrote:
>>>
>>>> Do we need these logical constraints in ACS at all?
>>>>
>>>> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander 
>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 11/8/18 11:20 PM, Simon Weller wrote:
>>>>>> I think this is legacy and a guess back in the day. It was 50 at one
>>>>> point and it was lifted higher a few releases ago.
>>>>>>
>>>>>
>>>>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
>>>>> spawning a lot of 128M VMs. Trying to see where the limit might be and
>>>>> also stress the VR a bit by loading a lot of DHCP entries.
>>>>>
>>>>> Wido
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> 
>>>>>> From: Ivan Kudryavtsev 
>>>>>> Sent: Thursday, November 8, 2018 3:58 PM
>>>>>> To: dev
>>>>>> Subject: Re: KVM Max Guests Limit
>>>>>>
>>>>>> Hi all, +1 for higher numbers.
>>>>>>
>>>>>> Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I see that for KVM we set the limit to 144 guests by default, can
>>>>>>> anybody tell me why we have this limit set to 144?
>>>>>>>
>>>>>>> Searching a bit I found this:
>>>>>>> https://access.redhat.com/articles/rhel-kvm-limits
>>>>>>>
>>>>>>> "This guest limit does not apply to Red Hat Enterprise Linux with
>>>>>>> Unlimited Guests. There is no guest limit for Red Hat Enterprise
>>>>>>> Virtualization"
>>>>>>>
>>>>>>> There is always a limit somewhere, but why do we set it to 144?
>>>>>>>
>>>>>>> I would personally vote for increasing this to 500 or something so
>>>> that
>>>>>>> users don't run into it that easily.
>>>>>>>
>>>>>>> Also, the log line is printed in DEBUG mode only when a host reaches
>>>>>>> this limit, so I created a PR to set this to INFO:
>>>>>>> https://github.com/apache/cloudstack/pull/3013
>>>>>>>
>>>>>>> Any input?
>>>>>>>
>>>>>>> Wido
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> With best regards, Ivan Kudryavtsev
>>>>>> Bitworks LLC
>>>>>> Cell RU: +7-923-414-1515
>>>>>> Cell USA: +1-201-257-1512
>>>>>> WWW: http://bitworks.software/ <http://bw-sw.com/>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Rafael Weingärtner
>>>>
>>>
>>>
>>
> 
> 


Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/9/18 12:56 PM, Andrija Panic wrote:
> afaik not - but I did run once or twice into a perhaps loosely connected
> issue - ACS reports 100% of host RAM (makes sense) as available for VM
> deployment to ACS - so in 1-2 cases I did run into out of memory killer,
> crashing my VMs.
> 
> It would be great to have some amount of "reserve RAM" for host OS - or
> simply have a PER HOST RAM disableThreshold setting, similar to cluster level
> "cluster.memory.allocated.capacity.disablethreshold", just on host level...
> 


You can do that already, in agent.properties you can set reserved memory.

But I doubt indeed that we need such a limit in ACS at all. Why do we
need to limit the amount of Instances on a hypervisor?

Or at least set it to a very high number by default.

Wido

> On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner 
> wrote:
> 
>> Do we need these logical constraints in ACS at all?
>>
>> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander  wrote:
>>
>>>
>>>
>>> On 11/8/18 11:20 PM, Simon Weller wrote:
>>>> I think this is legacy and a guess back in the day. It was 50 at one
>>> point and it was lifted higher a few releases ago.
>>>>
>>>
>>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
>>> spawning a lot of 128M VMs. Trying to see where the limit might be and
>>> also stress the VR a bit by loading a lot of DHCP entries.
>>>
>>> Wido
>>>
>>>>
>>>>
>>>>
>>>> 
>>>> From: Ivan Kudryavtsev 
>>>> Sent: Thursday, November 8, 2018 3:58 PM
>>>> To: dev
>>>> Subject: Re: KVM Max Guests Limit
>>>>
>>>> Hi all, +1 for higher numbers.
>>>>
>>>> Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
>>>>
>>>>> Hi,
>>>>>
>>>>> I see that for KVM we set the limit to 144 guests by default, can
>>>>> anybody tell me why we have this limit set to 144?
>>>>>
>>>>> Searching a bit I found this:
>>>>> https://access.redhat.com/articles/rhel-kvm-limits
>>>>>
>>>>> "This guest limit does not apply to Red Hat Enterprise Linux with
>>>>> Unlimited Guests. There is no guest limit for Red Hat Enterprise
>>>>> Virtualization"
>>>>>
>>>>> There is always a limit somewhere, but why do we set it to 144?
>>>>>
>>>>> I would personally vote for increasing this to 500 or something so
>> that
>>>>> users don't run into it that easily.
>>>>>
>>>>> Also, the log line is printed in DEBUG mode only when a host reaches
>>>>> this limit, so I created a PR to set this to INFO:
>>>>> https://github.com/apache/cloudstack/pull/3013
>>>>>
>>>>> Any input?
>>>>>
>>>>> Wido
>>>>>
>>>>
>>>>
>>>> --
>>>> With best regards, Ivan Kudryavtsev
>>>> Bitworks LLC
>>>> Cell RU: +7-923-414-1515
>>>> Cell USA: +1-201-257-1512
>>>> WWW: http://bitworks.software/ <http://bw-sw.com/>
>>>>
>>>
>>
>>
>> --
>> Rafael Weingärtner
>>
> 
> 


Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/8/18 11:20 PM, Simon Weller wrote:
> I think this is legacy and a guess back in the day. It was 50 at one point 
> and it was lifted higher a few releases ago.
> 

I see. I'm about to do a test with a bunch of 128GB hypervisors and
spawning a lot of 128M VMs. Trying to see where the limit might be and
also stress the VR a bit by loading a lot of DHCP entries.

Wido

> 
> 
> 
> 
> From: Ivan Kudryavtsev 
> Sent: Thursday, November 8, 2018 3:58 PM
> To: dev
> Subject: Re: KVM Max Guests Limit
> 
> Hi all, +1 for higher numbers.
> 
Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
> 
>> Hi,
>>
>> I see that for KVM we set the limit to 144 guests by default, can
>> anybody tell me why we have this limit set to 144?
>>
>> Searching a bit I found this:
>> https://access.redhat.com/articles/rhel-kvm-limits
>>
>> "This guest limit does not apply to Red Hat Enterprise Linux with
>> Unlimited Guests. There is no guest limit for Red Hat Enterprise
>> Virtualization"
>>
>> There is always a limit somewhere, but why do we set it to 144?
>>
>> I would personally vote for increasing this to 500 or something so that
>> users don't run into it that easily.
>>
>> Also, the log line is printed in DEBUG mode only when a host reaches
>> this limit, so I created a PR to set this to INFO:
>> https://github.com/apache/cloudstack/pull/3013
>>
>> Any input?
>>
>> Wido
>>
> 
> 
> --
> With best regards, Ivan Kudryavtsev
> Bitworks LLC
> Cell RU: +7-923-414-1515
> Cell USA: +1-201-257-1512
> WWW: http://bitworks.software/ <http://bw-sw.com/>
> 


KVM Max Guests Limit

2018-11-08 Thread Wido den Hollander
Hi,

I see that for KVM we set the limit to 144 guests by default. Can
anybody tell me why we have this limit set to 144?

Searching a bit I found this:
https://access.redhat.com/articles/rhel-kvm-limits

"This guest limit does not apply to Red Hat Enterprise Linux with
Unlimited Guests. There is no guest limit for Red Hat Enterprise
Virtualization"

There is always a limit somewhere, but why do we set it to 144?

I would personally vote for increasing this to 500 or something so that
users don't run into it that easily.

Also, the log line is printed in DEBUG mode only when a host reaches
this limit, so I created a PR to set this to INFO:
https://github.com/apache/cloudstack/pull/3013

Any input?

Wido


Re: [VOTE] Apache CloudStack 4.11.2.0 RC4

2018-11-02 Thread Wido den Hollander
+1 (binding)

I've tested:

- Building DEB packages for Ubuntu
- Install DEB packages
- Upgrade from 4.11.1 to 4.11.2

Wido

On 10/30/18 5:10 PM, Paul Angus wrote:
> Hi All,
> 
> By popular demand, I've created a 4.11.2.0 release (RC4), with the following 
> artefacts up for testing and a vote:
> 
> Git Branch and Commit SH:
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181030T1040
> Commit: 840ad40017612e169665fa799a6d31a23ecad347
> 
> Source release (checksums and signatures are available at the same location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/
> 
> PGP release keys (signed using 8B309F7251EE0BC8):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> The vote will be open until Sunday 4th November.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate 
> "(binding)" with their vote?
> 
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
> 
> Additional information:
> 
> For users' convenience, I've built packages from 
> 840ad40017612e169665fa799a6d31a23ecad347 and published RC4 repository here:
> http://packages.shapeblue.com/testing/41120rc4/
> 
> The release notes are still work-in-progress, but the systemvm template 
> upgrade section has been updated. You may refer to the following for systemvm 
> template upgrade testing:
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html
> 
> 4.11.2 systemvm templates are as before and available from here:
> http://packages.shapeblue.com/testing/systemvm/4112rc3
> 
> 
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: VXLAN and KVm experiences

2018-10-23 Thread Wido den Hollander



On 10/23/18 1:51 PM, Simon Weller wrote:
> We've also been using VXLAN on KVM for all of our isolated VPC guest networks 
> for quite a long time now. As Andrija pointed out, make sure you increase the 
> max_igmp_memberships param and also put an ip address on each host's 
> VXLAN interface in the same subnet for all hosts that will share networking, 
> or multicast won't work.
> 

Thanks! So you are saying that all hypervisors need to be in the same L2
network or are you routing the multicast?

My idea was that each POD would be an isolated Layer 3 domain and that a
VNI would span over the different Layer 3 networks.

I don't like STP and other Layer 2 loop-prevention systems.

Wido

> 
> - Si
> 
> 
> 
> From: Wido den Hollander 
> Sent: Tuesday, October 23, 2018 5:21 AM
> To: dev@cloudstack.apache.org
> Subject: Re: VXLAN and KVm experiences
> 
> 
> 
> On 10/23/18 11:21 AM, Andrija Panic wrote:
>> Hi Wido,
>>
>> I have "pioneered" this one in production for last 3 years (and suffered a
>> nasty pain of silent drop of packets on kernel 3.X back in the days
>> because of being unaware of max_igmp_memberships kernel parameters, so I
>> have updated the manual a long time ago).
>>
>> I never had any issues (beside above nasty one...) and it works very well.
> 
> That's what I want to hear!
> 
>> To avoid above issue that I described - you should increase
>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
>> with more than 20 vxlan interfaces, some of them will stay in down state
>> and have a hard traffic drop (with proper message in agent.log) with kernel
>>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
>> pay attention to MTU size as well - anyway everything is in the manual (I
>> updated everything I thought was missing) - so please check it.
>>
> 
> Yes, the underlying network will all be 9000 bytes MTU.
> 
>> Our example setup:
>>
>> We have i.e. bond.950 as the main VLAN which will carry all vxlan "tunnels"
>> - so this is defined as KVM traffic label. In our case it didn't make sense
>> to use bridge on top of this bond0.950 (as the traffic label) - you can
>> test it on your own - since this bridge is used only to extract child
>> bond0.950 interface name, then based on vxlan ID, ACS will provision
>> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge created
>> (and then of course vNIC goes to this new bridge), so original bridge (to
>> which bond0.xxx belonged) is not used for anything.
>>
> 
> Clear, I indeed thought something like that would happen.
> 
>> Here is sample from above for vxlan 867 used for tenant isolation:
>>
>> root@hostname:~# brctl show brvx-867
>>
>> bridge name bridge id   STP enabled interfaces
>> brvx-867        8000.2215cfce99ce   no  vnet6
>>
>>  vxlan867
>>
>> root@hostname:~# ip -d link show vxlan867
>>
>> 297: vxlan867: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8142 qdisc noqueue
>> master brvx-867 state UNKNOWN mode DEFAULT group default qlen 1000
>> link/ether 22:15:cf:ce:99:ce brd ff:ff:ff:ff:ff:ff promiscuity 1
>> vxlan id 867 group 239.0.3.99 dev bond0.950 port 0 0 ttl 10 ageing 300
>>
>> root@ix1-c7-2:~# ifconfig bond0.950 | grep MTU
>>   UP BROADCAST RUNNING MULTICAST  MTU:8192  Metric:1
>>
>> So note how the vxlan interface has a 50 bytes smaller MTU than the
>> bond0.950 parent interface (which could affect traffic inside the VM) - so
>> jumbo frames are needed anyway on the parent interface (bond.950 in example
>> above with minimum of 1550 MTU)
>>
> 
> Yes, thanks! We will be using 1500 MTU inside the VMs, so all the
> networks underneath will be ~9k.
> 
>> Ping me if more details needed, happy to help.
>>
> 
> Awesome! We'll be doing a PoC rather soon. I'll come back with our
> experiences later.
> 
> Wido
> 
>> Cheers
>> Andrija
>>
>> On Tue, 23 Oct 2018 at 08:23, Wido den Hollander  wrote:
>>
>>> Hi,
>>>
>>> I just wanted to know if there are people out there using KVM with
>>> Advanced Networking and using VXLAN for different networks.
>>>
>>> Our main goal would be to spawn a VM and based on the network the NIC is
>>> in attach it to a different VXLAN bridge on the KVM host.
>>>
>>> It seems to me that this should work, but I just wanted to check and see
>>> if people have experience with it.
>>>
>>> Wido
>>>
>>
>>
> 


Re: VXLAN and KVm experiences

2018-10-23 Thread Wido den Hollander



On 10/23/18 11:21 AM, Andrija Panic wrote:
> Hi Wido,
> 
> I have "pioneered" this one in production for last 3 years (and suffered a
> nasty pain of silent drop of packets on kernel 3.X back in the days
> because of being unaware of max_igmp_memberships kernel parameters, so I
> have updated the manual a long time ago).
> 
> I never had any issues (beside above nasty one...) and it works very well.

That's what I want to hear!

> To avoid above issue that I described - you should increase
> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
> with more than 20 vxlan interfaces, some of them will stay in down state
> and have a hard traffic drop (with proper message in agent.log) with kernel
>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
> pay attention to MTU size as well - anyway everything is in the manual (I
> updated everything I thought was missing) - so please check it.
> 

Yes, the underlying network will all be 9000 bytes MTU.
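
For reference, persisting that bump looks something like this (the value
200 is just an example, size it to the number of VXLAN interfaces you
expect per host):

echo 'net.ipv4.igmp_max_memberships = 200' > /etc/sysctl.d/99-vxlan.conf
sysctl -p /etc/sysctl.d/99-vxlan.conf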

> Our example setup:
> 
> We have i.e. bond.950 as the main VLAN which will carry all vxlan "tunnels"
> - so this is defined as KVM traffic label. In our case it didn't make sense
> to use bridge on top of this bond0.950 (as the traffic label) - you can
> test it on your own - since this bridge is used only to extract child
> bond0.950 interface name, then based on vxlan ID, ACS will provision
> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge created
> (and then of course vNIC goes to this new bridge), so original bridge (to
> which bond0.xxx belonged) is not used for anything.
> 

Clear, I indeed thought something like that would happen.

> Here is sample from above for vxlan 867 used for tenant isolation:
> 
> root@hostname:~# brctl show brvx-867
> 
> bridge name bridge id   STP enabled interfaces
> brvx-867        8000.2215cfce99ce   no  vnet6
> 
>  vxlan867
> 
> root@hostname:~# ip -d link show vxlan867
> 
> 297: vxlan867: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8142 qdisc noqueue
> master brvx-867 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether 22:15:cf:ce:99:ce brd ff:ff:ff:ff:ff:ff promiscuity 1
> vxlan id 867 group 239.0.3.99 dev bond0.950 port 0 0 ttl 10 ageing 300
> 
> root@ix1-c7-2:~# ifconfig bond0.950 | grep MTU
>   UP BROADCAST RUNNING MULTICAST  MTU:8192  Metric:1
> 
> So note how the vxlan interface has a 50 bytes smaller MTU than the
> bond0.950 parent interface (which could affect traffic inside the VM) - so
> jumbo frames are needed anyway on the parent interface (bond.950 in example
> above with minimum of 1550 MTU)
> 

Yes, thanks! We will be using 1500 MTU inside the VMs, so all the
networks underneath will be ~9k.
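
(Those 50 bytes are the VXLAN encapsulation overhead: 14 bytes outer
Ethernet + 20 bytes IPv4 + 8 bytes UDP + 8 bytes VXLAN header, so a
1500-byte guest MTU needs at least 1550 on the parent interface.)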

> Ping me if more details needed, happy to help.
> 

Awesome! We'll be doing a PoC rather soon. I'll come back with our
experiences later.

Wido

> Cheers
> Andrija
> 
> On Tue, 23 Oct 2018 at 08:23, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> I just wanted to know if there are people out there using KVM with
>> Advanced Networking and using VXLAN for different networks.
>>
>> Our main goal would be to spawn a VM and based on the network the NIC is
>> in attach it to a different VXLAN bridge on the KVM host.
>>
>> It seems to me that this should work, but I just wanted to check and see
>> if people have experience with it.
>>
>> Wido
>>
> 
> 


Re: KVM CloudStack Agent Hacking proposal

2018-10-23 Thread Wido den Hollander



On 10/22/18 8:02 PM, Ivan Kudryavtsev wrote:
> Hello, Devs.
> 
> I would like to introduce a feature and decided to consult with you about
> its design before implementation. The feature is connected with KVM
> CloudStack agent. We have found it beneficial to be able to launch custom
> scripts upon VM start/stop. It can be done using the Qemu hook, but it has
> several drawbacks:
> - the hook is deployed by CS and adding additional lines into it leads to
> extra effort when the ACS package is updated.
> - it leads to deadlocks, as you cannot effectively and easily communicate
> with libvirt from the hook even with "fork & exec", because security_groups.py
> and the agent also participate and as a result it causes deadlocks.
> 
> Now, in the code, we have a call for "security_groups.py":
> 
> Start:
> https://github.com/apache/cloudstack/blob/65f31f1a9fbc1c20cd752d80a7e1117efc0248a5/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtStartCommandWrapper.java#L103
> 
> Stop:
> https://github.com/apache/cloudstack/blob/65f31f1a9fbc1c20cd752d80a7e1117efc0248a5/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtStopCommandWrapper.java#L88
> 
> What I would like is to introduce a more generic approach, so the administrator
> can specify additional scripts in the agent.properties, which will be
> called the same way "security_groups.py" called.
> 
> custom.vm.start=/path/to/script1,path/to.script2
> custom.vm.stop=/path/to/script3,path/to.script4
> 
> So, this feature will help users to do custom hotplug mechanisms. E.g. we
> have such implementation which adds per-account VXLAN as a hotplug ethernet
> device. So, even for a Basic Zone, every VM gets automatic second NIC which
> helps to build a private network for an account.
> 
> Currently, we do the job by adding lines into security_groups.py, which
> is not a good approach, especially for end users who don't want to hack the
> system.
> 
> Also, I'm thinking about changing /etc/libvirt/hooks/qemu the same way, so
> it is just an entry point to the scripts located in /etc/libvirt/hooks/qemu.d/*.
> 
> Let me know about this feature proposal; if its design is good, we'll start
> developing it.
> 

Seems like a good thing! It adds flexibility to the VM.

How are you planning on getting things like the VM name and other
details to the scripts?
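
For instance, the libvirt qemu hook already receives the domain name and
the operation as positional arguments. Assuming the agent would pass the
same arguments on to the configured scripts (an assumption on my side;
all paths and names below are made up), the qemu.d idea could look
roughly like this:

#!/bin/sh
# /etc/libvirt/hooks/qemu - entry point that fans out to qemu.d/*
for hook in /etc/libvirt/hooks/qemu.d/*; do
    [ -x "$hook" ] && "$hook" "$@"
done

#!/bin/sh
# /etc/libvirt/hooks/qemu.d/50-vxlan-hotplug - hypothetical example
vm_name="$1"; operation="$2"
if [ "$operation" = "started" ]; then
    logger "custom hook: $vm_name started, attaching extra NIC here"
fi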

Wido

> Have a good day.
> 


VXLAN and KVm experiences

2018-10-23 Thread Wido den Hollander
Hi,

I just wanted to know if there are people out there using KVM with
Advanced Networking and using VXLAN for different networks.

Our main goal would be to spawn a VM and based on the network the NIC is
in attach it to a different VXLAN bridge on the KVM host.

It seems to me that this should work, but I just wanted to check and see
if people have experience with it.

Wido


Re: Using ConfigDrive in a shared network

2018-10-15 Thread Wido den Hollander



On 10/11/2018 09:39 PM, Rohit Yadav wrote:
> Create a new network offering or use the default one with config drive. While 
> creating a new one you would select user data provided by config drive and 
> also select config drive feature. Try the latest 4.11 or master via UI.
> 

I eventually ended up re-creating the network offering and my Guest
Network and I see the ISO being generated and attached now.

Thanks!

Wido

> Regards.
> ____
> From: Wido den Hollander 
> Sent: Wednesday, October 10, 2018 7:46:21 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Using ConfigDrive in a shared network
> 
> Hi,
> 
> On 10/10/2018 12:09 PM, Rohit Yadav wrote:
>> Hi Wido,
>>
>>
>> I've tested/used the feature in advanced zone/networking but without 
>> security groups, so there could be potentially some limitation around that 
>> (please check?). Here are few things you can try:
>>
> 
> Ok, thanks for the hint!
> 
>>
>> - genisoimage is available on your mgmt server or local development platform
> 
> Yes, it is.
> 
>>
>> - after updating any existing network offering see if restarting mgmt server 
>> helps?
> 
> Tried it multiple times, no change.
> 
>>
>> - use an IDE, attach debugger on the config drive element class and see if 
>> it participates at all during deployment?
>>
>> - in the global setting check if config drive on primary storage is enabled 
>> or not? Config drive by default requires secondary storage and will create 
>> isos on it. Also check agent logs? (the config drive current is created 
>> on/by the mgmt server and then the iso data is transferred via cmd-answer 
>> pattern to either secondary or primary storage).
>>
>> - Also dumpxml for the guest domain and see if any iso is attached?
>>
> 
> I checked all the Agent logs and Management Server logs; no trace of
> ConfigDrive being used anywhere. The VM has an empty CD-ROM, but that's
> the usual CD-ROM which is attached to a VM.
> 
> It seems that the Mgmt server isn't using ConfigDrive at the moment, so
> I'm not sure where this is going wrong.
> 
> Setting it for UserData in the Network Offering should be sufficient, right?
> 
> Wido
> 
>>
>> - Rohit
>>
>>
>>
>>
>> 
>> From: Wido den Hollander 
>> Sent: Tuesday, October 9, 2018 2:36:01 PM
>> To: dev@cloudstack.apache.org
>> Subject: Using ConfigDrive in a shared network
>>
>> Hi,
>>
>> I can't get ConfigDrive for UserData to work on my Advanced Zone and I
>> can't figure out why.
>>
>> I updated 'DefaultSharedNetworkOfferingWithSGService' and it shows:
>>
>>   "service": [
>> {
>>   "name": "UserData",
>>   "provider": [
>> {
>>   "name": "ConfigDrive"
>> }
>>   ]
>> },
>> {
>>   "name": "Dns",
>>   "provider": [
>> {
>>   "name": "VirtualRouter"
>> }
>>   ]
>> },
>> {
>>   "name": "SecurityGroup",
>>   "provider": [
>> {
>>   "name": "SecurityGroupProvider"
>> }
>>   ]
>> },
>> {
>>   "name": "Dhcp",
>>   "provider": [
>> {
>>   "name": "VirtualRouter"
>> }
>>   ]
>> }
>>   ],
>>
>> As you can see, my UserData should be provided by 'ConfigDrive'.
>>
>> It's state is also 'Enabled', so that's good.
>>
>> If I deploy a VM however it doesn't get a ISO attached nor do I see any
>> trace of ConfigDrive in the logs.
>>
>> Does anybody have an idea?
>>
>> Thanks!
>>
>> Wido
>>
> 
> 


Re: Using ConfigDrive in a shared network

2018-10-10 Thread Wido den Hollander
Hi,

On 10/10/2018 12:09 PM, Rohit Yadav wrote:
> Hi Wido,
> 
> 
> I've tested/used the feature in advanced zone/networking but without security 
> groups, so there could be potentially some limitation around that (please 
> check?). Here are few things you can try:
> 

Ok, thanks for the hint!

> 
> - genisoimage is available on your mgmt server or local development platform

Yes, it is.

> 
> - after updating any existing network offering see if restarting mgmt server 
> helps?

Tried it multiple times, no change.

> 
> - use an IDE, attach debugger on the config drive element class and see if it 
> participates at all during deployment?
> 
> - in the global setting check if config drive on primary storage is enabled 
> or not? Config drive by default requires secondary storage and will create 
> isos on it. Also check agent logs? (the config drive current is created on/by 
> the mgmt server and then the iso data is transferred via cmd-answer pattern 
> to either secondary or primary storage).
> 
> - Also dumpxml for the guest domain and see if any iso is attached?
> 

I checked all the Agent logs and Management Server logs; no trace of
ConfigDrive being used anywhere. The VM has an empty CD-ROM, but that's
the usual CD-ROM which is attached to a VM.

It seems that the Mgmt server isn't using ConfigDrive at the moment, so
I'm not sure where this is going wrong.

Setting it for UserData in the Network Offering should be sufficient, right?
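
To verify whether a config drive ISO was attached at all, something like
this should do it (a sketch; the instance name is just an example):

root@hv01:~# virsh dumpxml i-2-15-VM | grep -B2 -A4 cdrom

An attached config drive shows up as a CD-ROM disk backed by an ISO, and
inside the guest it can be mounted and inspected, roughly:

root@vm:~# mount -o ro /dev/sr0 /mnt
root@vm:~# cat /mnt/cloudstack/userdata/user_data.txt

(the exact layout on the ISO is from memory, so double-check the paths)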

Wido

> 
> - Rohit
> 
> 
> 
> 
> 
> From: Wido den Hollander 
> Sent: Tuesday, October 9, 2018 2:36:01 PM
> To: dev@cloudstack.apache.org
> Subject: Using ConfigDrive in a shared network
> 
> Hi,
> 
> I can't get ConfigDrive for UserData to work on my Advanced Zone and I
> can't figure out why.
> 
> I updated 'DefaultSharedNetworkOfferingWithSGService' and it shows:
> 
>   "service": [
> {
>   "name": "UserData",
>   "provider": [
> {
>   "name": "ConfigDrive"
> }
>   ]
> },
> {
>   "name": "Dns",
>   "provider": [
> {
>   "name": "VirtualRouter"
> }
>   ]
> },
> {
>   "name": "SecurityGroup",
>   "provider": [
> {
>   "name": "SecurityGroupProvider"
> }
>   ]
> },
> {
>   "name": "Dhcp",
>   "provider": [
> {
>   "name": "VirtualRouter"
> }
>   ]
> }
>   ],
> 
> As you can see, my UserData should be provided by 'ConfigDrive'.
> 
> It's state is also 'Enabled', so that's good.
> 
> If I deploy a VM however it doesn't get a ISO attached nor do I see any
> trace of ConfigDrive in the logs.
> 
> Does anybody have an idea?
> 
> Thanks!
> 
> Wido
> 
> 


Using ConfigDrive in a shared network

2018-10-09 Thread Wido den Hollander
Hi,

I can't get ConfigDrive for UserData to work on my Advanced Zone and I
can't figure out why.

I updated 'DefaultSharedNetworkOfferingWithSGService' and it shows:

  "service": [
{
  "name": "UserData",
  "provider": [
{
  "name": "ConfigDrive"
}
  ]
},
{
  "name": "Dns",
  "provider": [
{
  "name": "VirtualRouter"
}
  ]
},
{
  "name": "SecurityGroup",
  "provider": [
{
  "name": "SecurityGroupProvider"
}
  ]
},
{
  "name": "Dhcp",
  "provider": [
{
  "name": "VirtualRouter"
}
  ]
}
  ],

As you can see, my UserData should be provided by 'ConfigDrive'.

It's state is also 'Enabled', so that's good.

If I deploy a VM however it doesn't get a ISO attached nor do I see any
trace of ConfigDrive in the logs.

Does anybody have an idea?

Thanks!

Wido


Removing Ubuntu 14.04 LTS support in master (4.12 release)

2018-08-30 Thread Wido den Hollander
Hi,

I've just opened a Pull Requests [0] to remove support for Ubuntu 14.04.

Ubuntu 14.04 will be EOL in 2019 and the current LTS versions are 16.04
(2021) and 18.04 (2023).

Ubuntu 14.04 lacks Java 8 and also has older Qemu and Libvirt versions.

Yet to be implemented/merged functions like Live Storage Migration and
Burst I/O limitation for KVM require more recent versions of Qemu and
Libvirt not provided by Ubuntu 14.04.

Therefore I opened the Pull Request to drop support for 14.04 in the
master branch.

With the 4.12 release happening later this year or early next year we
would be trying to support a Ubuntu release which will go EOL in ~6 months.

The current 4.11 release will support Ubuntu 14.04, so users running
that CloudStack version can keep running their infrastructure.

If there are objections, please post a comment on the Pull Request or if
you agree with dropping the support, please post a comment as well.

Wido

[0]: https://github.com/apache/cloudstack/pull/2828


rdpclient.MockServerTest failing

2018-08-29 Thread Wido den Hollander
Hi,

When building DEB packages I keep running into this test which is failing:

"Error in mock server: Received fatal alert: handshake_failure
javax.net.ssl.SSLHandshakeException: Received fatal alert:
handshake_failure"

Is anybody else seeing this as well? For now I keep removing the tests
from the DEB build when packaging, but that doesn't seem right.

Any ideas?
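
My guess is that the mock server tries an old protocol (SSLv3/TLSv1)
which newer JDK 8 builds disable via jdk.tls.disabledAlgorithms in
java.security - that would explain the 'protocol is disabled or cipher
suites are inappropriate' part. Two untested workaround sketches (the
test exclusion syntax needs surefire 2.19+):

# skip just this test class while packaging
mvn -Dtest='!MockServerTest' ...

# or point the build JVM at a relaxed security properties file which
# overrides jdk.tls.disabledAlgorithms (for testing only, of course)
mvn -Djava.security.properties=./relaxed.security ...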

Wido

Running rdpclient.MockServerTest
Error in mock server: Received fatal alert: handshake_failure
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2038)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1135)
at
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at streamer.debug.MockServer.run(MockServer.java:122)
at java.lang.Thread.run(Thread.java:748)
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.313
sec <<< FAILURE! - in rdpclient.MockServerTest
testIsMockServerCanUpgradeConnectionToSsl(rdpclient.MockServerTest)
Time elapsed: 0.276 sec  <<< ERROR!
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol
is disabled or cipher suites are inappropriate)
at sun.security.ssl.Handshaker.activate(Handshaker.java:529)
at
sun.security.ssl.SSLSocketImpl.kickstartHandshake(SSLSocketImpl.java:1492)
at
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1361)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at
rdpclient.MockServerTest.testIsMockServerCanUpgradeConnectionToSsl(MockServerTest.java:166)


Re: CloudStack DNS names to PowerDNS export

2018-08-23 Thread Wido den Hollander
On 08/21/2018 07:45 AM, Ivan Kudryavtsev wrote:
> Hello, users, devs.
> 
> We developed a small service for the creation of DNS records (A, AAAA, PTR)
> in the PowerDNS and published it in the GitHub:
> 
> https://github.com/bwsw/cs-powerdns-integration
> 

That seems very nice. Nicely done. I like PowerDNS :-)

Wido

> Licensed under Apache 2 License.
> 
> *Rationale*
> 
> CloudStack VR maintains DNS A records for VMs, but since the VR is an ephemeral
> entity which can be removed and recreated, and whose IP addresses can be
> changed, it's inconvenient to use it for zone delegation. Also, it's
> difficult to pair a second DNS server with it as it requires VR hacking. So,
> to overcome those difficulties and provide external users with FQDN access
> to VMs we implemented the solution.
> 
> 


Re: CEPH / CloudStack features

2018-07-27 Thread Wido den Hollander
Hi,

On 07/27/2018 12:18 PM, Dag Sonstebo wrote:
> Hi all,
> 
> I’m trying to find out more about CEPH compatibility with CloudStack / KVM – 
> i.e. trying to put together a feature matrix of what works  and what doesn’t 
> compared to NFS (or other block storage platforms).
> There’s not a lot of up to date information on this – the configuration guide 
> on [1] is all I’ve located so far apart from a couple of one-liners in the 
> official documentation.
> 
> Could I get some feedback from the Ceph users in the community?
> 

Yes! So, first of all: Ceph is KVM-only. Other hypervisors do not support
RBD (RADOS Block Device) from Ceph.

What is supported:

- Thin provisioning
- Discard / fstrim (Requires VirtIO-SCSI)
- Volume cloning
- Snapshots
- Disk I/O throttling (done by libvirt)

Meaning, when a template is deployed for the first time in a Primary
Storage it's written to Ceph and all other Instances afterwards are a
clone of that primary image.
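
You can actually see that cloning on the Ceph side. A quick sketch (the
pool and image names are examples; if I remember correctly the base
snapshot is called 'cloudstack-base-snap'):

root@mon:~# rbd -p cloudstack info <volume-uuid> | grep parent
        parent: cloudstack/<template-uuid>@cloudstack-base-snap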

You can snapshot an RBD image and then have it copied to Secondary
Storage. Now, I'm not sure if keeping the snapshot on Primary Storage
and reverting works yet; I haven't looked at that recently.

The snapshotting part on Primary Storage is probably something that
needs some love and attention, but otherwise I think all other features
are supported.

I would recommend a CentOS 7 or Ubuntu 16.04/18.04 hypervisor; both work
just fine with Ceph.

Wido

> Regards,
> Dag Sonstebo
> 
> [1] http://docs.ceph.com/docs/master/rbd/rbd-cloudstack/
> 
> 


Replace Basic Zone creation in UI by 'Easy Mode' (Advanced Zone)

2018-07-12 Thread Wido den Hollander
Hi,

I've brought this up before and even opened a PR [0] to disable Basic
Zone creation, but after some discussion and good feedback I think we
can do it differently.

In the UI we now have a wizard popping up when starting for the first
time where you can then create a new zone.

I think we can remove the Basic Zone here as it is right now and add an
'Easy Mode' where an Advanced Zone is created with vlan://untagged as
isolation and with Security Grouping enabled, just as it would be with a
Basic Zone.

Now, I'm not a UI nor JavaScript expert at all, so I wouldn't know how
to fix this, but is there somebody who wants to jump in and help with this?

Wido

[0]: https://github.com/apache/cloudstack/pull/2720


Re: 4.11.1.0 Packages on download.cloudstack.org

2018-07-02 Thread Wido den Hollander



On 07/02/2018 02:19 PM, Paul Angus wrote:
> @Wido den Hollander - are you happy for me to upload the centos RPMs?  Could 
> you send me access creds pls.
> 

Yes! Send me your public SSH key and I'll provide you the details on how
to uploads the RPMs.

> 
> Kind regards,
> 
> Paul Angus
> 
> 
> 
> -Original Message-
> From: Wido den Hollander  
> Sent: 30 June 2018 21:01
> To: Paul Angus ; dev@cloudstack.apache.org
> Cc: Pierre-Luc Dion 
> Subject: Re: 4.11.1.0 Packages on download.cloudstack.org
> 
> 
> 
> On 06/29/2018 03:25 PM, Paul Angus wrote:
>> Hi Pierre-Luc / Wido
>>
>>  
>>
>> Could you guys create the 4.11.1.0 repos for CentOS6, CentOS7 and 
>> Debian on download.cloudstack.org please?
>>
> 
> I will add the DEB packages on Monday!
> 
> Wido
> 
>>  
>>
>> Thanks chaps.
>>
>>  
>>
>> Paul Angus
>>
>>  
>>
>>
> 


Re: 4.11.1.0 Packages on download.cloudstack.org

2018-06-30 Thread Wido den Hollander



On 06/29/2018 03:25 PM, Paul Angus wrote:
> Hi Pierre-Luc / Wido
> 
>  
> 
> Could you guys create the 4.11.1.0 repos for CentOS6, CentOS7 and Debian
> on download.cloudstack.org please?
> 

I will add the DEB packages on Monday!

Wido

>  
> 
> Thanks chaps.
> 
>  
> 
> Paul Angus
> 
>  
> 
> 


Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-28 Thread Wido den Hollander



On 06/22/2018 08:04 AM, Rohit Yadav wrote:
> Good idea, but a lot of things supported in advanced zone with KVM may not be 
> supported in case of VMware, XenServer etc. The larger refactoring will need 
> to account for how in various places checks exists how behaviors are enforced 
> when the zone is basic or not, and what kind of impact will it have on 
> non-KVM users using basic zone (if any) along with having an upgrade path for 
> such users.
> 

I have no experience with VMware or XenServer, but doesn't Advanced
Networking with a Shared VLAN work the same on those?

> 
> If the overall functionality is retained and a seamless upgrade path can be 
> created a new major release 5.0 is not be necessary (that should be a 
> different thread with inputs from various stakeholders on various topics).
> 

An upgrade converting a Basic Network to Advanced will be very difficult,
as bridges on the HV need to be changed and this can vary per deployment.

> 
> Wrt support of the next Java version, we'll need to consider the distro 
> provided Java version for a long time Java8 will be supported [1] but newer 
> versions Java 9/10 onwards are short-term non-LTS releases, debian 
> testing/next don't even have openjdk-9/10 packages yet.
> 
> 
> [1] http://www.oracle.com/technetwork/java/javase/eol-135779.html
> 
> 
> - Rohit
> 
> 
> 
> 
> 
> From: Wido den Hollander 
> Sent: Thursday, June 21, 2018 8:26:40 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] Blocking the creation of new Basic Networking zones
> 
> For now I've created a Pull Request so we can have a discussion about it
> there: https://github.com/apache/cloudstack/pull/2720
> 
> Wido
> 
> On 06/21/2018 02:34 PM, Gabriel Beims Bräscher wrote:
>> +1
>>
>> We have an empty page regarding 5.0 [1] in the Design documents section
>> [2]. It might be a good spot to sort out CloudStack 5.0 plans.
>>
>> [1]
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/5.0+Design+Documents
>> [2] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
>>
>> 2018-06-21 5:58 GMT-03:00 Daan Hoogland :
>>
>>> well, that one is a good one to update, but there was a dedicated 5.0 page
>>> at some time. I think we can just use this from here on in and merge
>>> anything else in it when we find it ;)
>>>
>>> On Thu, Jun 21, 2018 at 8:49 AM, Rafael Weingärtner <
>>> rafaelweingart...@gmail.com> wrote:
>>>
>>>> This one [1]?
>>>>
>>>> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Roadmap
>>>>
>>>> On Thu, Jun 21, 2018 at 10:46 AM, Daan Hoogland >>>
>>>> wrote:
>>>>
>>>>> Wido, there used to be a page on cwiki with plans for 5.0, I can not
>>> find
>>>>> it anymore but this should be added to it.
>>>>>
>>>>> On Wed, Jun 20, 2018 at 6:42 PM, ilya musayev <
>>>>> ilya.mailing.li...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I think the simplicity of Basic Zone was - you can get away with 1
>>> VLAN
>>>>>> for everything (great for POC setup) where as Advanced Shared with
>>> VLAN
>>>>>> isolation requires several VLANs to get going.
>>>>>>
>>>>>> How would we cover this use case?
>>>>>>
>>>>>> On Wed, Jun 20, 2018 at 11:34 AM Tutkowski, Mike <
>>>>>> mike.tutkow...@netapp.com> wrote:
>>>>>>
>>>>>>> Also, yes, I agree with the list you provided, Wido. We might have
>>> to
>>>>>>> break “other fancy stuff” into more detail, though. ;)
>>>>>>>
>>>>>>> On 6/20/18, 12:32 PM, "Tutkowski, Mike" 
>>>>>>> wrote:
>>>>>>>
>>>>>>> Sorry, Wido :) I missed that part.
>>>>>>>
>>>>>>> On 6/20/18, 5:03 AM, "Wido den Hollander" 
>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 06/20/2018 12:31 AM, Tutkowski, Mike wrote:
>>>>>>> > If this initiative goes through, perhaps that’s a good
>>> time
>>>> to
>>>>>>> bump CloudStack’s release number to 5.0.0?
>>>>>>> >
>>>>>>>
>>>>>>>

Re: Working with a 'PendingReleaseNotes' file

2018-06-28 Thread Wido den Hollander



On 06/22/2018 07:52 AM, Rohit Yadav wrote:
> Looks like a good idea, +1.
> 

I've opened a PR for this: https://github.com/apache/cloudstack/pull/2723

If we want to go through with this we should merge the PR and then
update the Github template for a PR as well to make this a new checkbox.

Are there other places in the documentation which we need to update?

Wido

> 
> - Rohit
> 
> 
> 
> 
> ____
> From: Wido den Hollander 
> Sent: Friday, June 22, 2018 12:13:49 AM
> To: dev@cloudstack.apache.org; Daan Hoogland
> Cc: Rafael Weingärtner
> Subject: Re: Working with a 'PendingReleaseNotes' file
> 
> 
> 
> On 06/21/2018 05:13 PM, Daan Hoogland wrote:
>> Wido, I like the idea that we can generate from git what has happened since
>> last time. I also recognise that we don't do that properly all the time.
> 
True. But a bunch of commits don't always tell the exact story. Well,
they do, but not for the end-user.
> 
>> What this comes down to is discipline, however we implement it. Deciding if
>> a change is big enough would be up to both the author and the reviewer and
>> we could ask for a comment tagged "Release Note:" as well. Just saying many
>> ways to do this. going with your idea from the ceph community. Where do we
>> move the release notes when the release is out? and when? (release
>> candidate/first thing after approval/part of the release commit)
>>
> 
> You would move it to a file called "ReleaseNotes" afterwards, that is
> done by the Release Manager.
> 
> Wido
> 
> 
> 
>> On Wed, Jun 20, 2018 at 11:05 AM, Wido den Hollander  wrote:
>>
>>>
>>>
>>> On 06/20/2018 07:53 AM, Rafael Weingärtner wrote:
>>>> It seems so. It can then be a part of the PR. I mean, in the PR we could
>>>> require a commit that updates this file.
>>>
>>> Yes, that would be the thing. When you send a PR with a change that is
>>> 'big enough' you also include updating the Pending Release Notes file.
>>>
>>>> Of course, we need to discuss if all PRs should update it, or only
>>>> important things should go in.
>>>>
>>>
>>> I think only important things and that it's up to the developer to guess
>>> if it's important enough to go into the PendingReleaseNotes file.
>>>
>>> Wido
>>>
>>>> +1
>>>>
>>>> On Tue, Jun 19, 2018 at 11:00 PM, Wido den Hollander 
>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> At the Ceph project we work with a Pending Release Notes [0].
>>>>>
>>>>> The idea is that if a developer writes a new feature or fixes something
>>>>> that changes functionality (or adds one) she/he also updates the
>>>>> PendingReleaseNotes file.
>>>>>
>>>>> That way when a new version is released it's easier for the Release
>>>>> Manager to know what to highlight.
>>>>>
>>>>> Although we can try to get everything from Jira and Github Issues it
>>>>> might be difficult to use the proper wording.
>>>>>
>>>>> On every release the file is cleared and people start to add again.
>>>>>
>>>>> Would this be something which could benefit CloudStack and make the
>>>>> release notes easier and more complete?
>>>>>
>>>>> Wido
>>>>>
>>>>> [0]: https://github.com/ceph/ceph/blob/master/PendingReleaseNotes
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>>
> 


Re: Working with a 'PendingReleaseNotes' file

2018-06-21 Thread Wido den Hollander



On 06/21/2018 05:13 PM, Daan Hoogland wrote:
> Wido, I like the idea that we can generate from git what has happened since
> last time. I also recognise that we don't do that properly all the time.

True. But a bunch of commits don't always tell the exact story. Well,
they do, but not for the end-user.

> What this comes down to is discipline, however we implement it. Deciding if
> a change is big enough would be up to both the author and the reviewer and
> we could ask for a comment tagged "Release Note:" as well. Just saying many
> ways to do this. going with your idea from the ceph community. Where do we
> move the release notes when the release is out? and when? (release
> candidate/first thing after approval/part of the release commit)
> 

You would move it to a file called "ReleaseNotes" afterwards, that is
done by the Release Manager.

Wido

> On Wed, Jun 20, 2018 at 11:05 AM, Wido den Hollander  wrote:
> 
>>
>>
>> On 06/20/2018 07:53 AM, Rafael Weingärtner wrote:
>>> It seems so. It can then be a part of the PR. I mean, in the PR we could
>>> require a commit that updates this file.
>>
>> Yes, that would be the thing. When you send a PR with a change that is
>> 'big enough' you also include updating the Pending Release Notes file.
>>
>>> Of course, we need to discuss if all PRs should update it, or only
>>> important things should go in.
>>>
>>
>> I think only important things and that it's up to the developer to guess
>> if it's important enough to go into the PendingReleaseNotes file.
>>
>> Wido
>>
>>> +1
>>>
>>> On Tue, Jun 19, 2018 at 11:00 PM, Wido den Hollander 
>> wrote:
>>>
>>>> Hi,
>>>>
>>>> At the Ceph project we work with a Pending Release Notes [0].
>>>>
>>>> The idea is that if a developer writes a new feature or fixes something
>>>> that changes functionality (or adds one) she/he also updates the
>>>> PendingReleaseNotes file.
>>>>
>>>> That way when a new version is released it's easier for the Release
>>>> Manager to know what to highlight.
>>>>
>>>> Although we can try to get everything from Jira and Github Issues it
>>>> might be difficult to use the proper wording.
>>>>
>>>> On every release the file is cleared and people start to add again.
>>>>
>>>> Would this be something which could benefit CloudStack and make the
>>>> release notes easier and more complete?
>>>>
>>>> Wido
>>>>
>>>> [0]: https://github.com/ceph/ceph/blob/master/PendingReleaseNotes
>>>>
>>>
>>>
>>>
>>
> 
> 
> 


Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-21 Thread Wido den Hollander
For now I've created a Pull Request so we can have a discussion about it
there: https://github.com/apache/cloudstack/pull/2720

Wido

On 06/21/2018 02:34 PM, Gabriel Beims Bräscher wrote:
> +1
> 
> We have an empty page regarding 5.0 [1] in the Design documents section
> [2]. It might be a good spot to sort out CloudStack 5.0 plans.
> 
> [1]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/5.0+Design+Documents
> [2] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
> 
> 2018-06-21 5:58 GMT-03:00 Daan Hoogland :
> 
>> well, that one is a good one to update, but there was a dedicated 5.0 page
>> at some time. I think we can just use this from here on in and merge
>> anything else in it when we find it ;)
>>
>> On Thu, Jun 21, 2018 at 8:49 AM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>
>>> This one [1]?
>>>
>>> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Roadmap
>>>
>>> On Thu, Jun 21, 2018 at 10:46 AM, Daan Hoogland >>
>>> wrote:
>>>
>>>> Wido, there used to be a page on cwiki with plans for 5.0, I can not
>> find
>>>> it anymore but this should be added to it.
>>>>
>>>> On Wed, Jun 20, 2018 at 6:42 PM, ilya musayev <
>>>> ilya.mailing.li...@gmail.com>
>>>> wrote:
>>>>
>>>>> I think the simplicity of Basic Zone was - you can get away with 1
>> VLAN
>>>>> for everything (great for POC setup) where as Advanced Shared with
>> VLAN
>>>>> isolation requires several VLANs to get going.
>>>>>
>>>>> How would we cover this use case?
>>>>>
>>>>> On Wed, Jun 20, 2018 at 11:34 AM Tutkowski, Mike <
>>>>> mike.tutkow...@netapp.com> wrote:
>>>>>
>>>>>> Also, yes, I agree with the list you provided, Wido. We might have
>> to
>>>>>> break “other fancy stuff” into more detail, though. ;)
>>>>>>
>>>>>> On 6/20/18, 12:32 PM, "Tutkowski, Mike" 
>>>>>> wrote:
>>>>>>
>>>>>> Sorry, Wido :) I missed that part.
>>>>>>
>>>>>> On 6/20/18, 5:03 AM, "Wido den Hollander" 
>> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 06/20/2018 12:31 AM, Tutkowski, Mike wrote:
>>>>>> > If this initiative goes through, perhaps that’s a good
>> time
>>> to
>>>>>> bump CloudStack’s release number to 5.0.0?
>>>>>> >
>>>>>>
>>>>>> That's what I said in my e-mail :-) But yes, I agree with
>> you,
>>>>>> this
>>>>>> might be a good time to bump it to 5.0
>>>>>>
>>>>>> With that we would:
>>>>>>
>>>>>> - Drop creation of new Basic Networking Zones
>>>>>> - Support IPv6 in shared IPv6 networks
>>>>>> - Java 9?
>>>>>> - Drop support for Ubuntu 12.04
>>>>>> - Other fancy stuff?
>>>>>> - Support ConfigDrive in all scenarios properly
>>>>>>
>>>>>> How would that sound?
>>>>>>
>>>>>> Wido
>>>>>>
>>>>>> >> On Jun 19, 2018, at 3:17 PM, Wido den Hollander <
>>>>>> w...@widodh.nl> wrote:
>>>>>> >>
>>>>>> >>
>>>>>> >>
>>>>>> >>> On 06/19/2018 11:07 PM, Daan Hoogland wrote:
>>>>>> >>> I like this initiative, and here comes the big but even
>>>>>> though I myself
>>>>>> >>> might think it is not valid; Basic zones are there to
>>> give a
>>>>>> simple start
>>>>>> >>> for new users. If we can give a one-knob start/one page
>>>>>> wizard for creating
>>>>>> >>> a shared network in advanced zone with security groups
>> and
>>>>>> userdata, great.
>>>>>> >>
>>>>>> >> That would be a UI thing, but it would be a matter of
>> using
>>>>>> VLAN
>>>>>>   

Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-20 Thread Wido den Hollander



On 06/20/2018 08:42 PM, ilya musayev wrote:
> I think the simplicity of Basic Zone was - you can get away with 1 VLAN
> for everything (great for POC setup) where as Advanced Shared with VLAN
> isolation requires several VLANs to get going.
> 

Does it? I have Advanced Networking running with just vlan://untagged as
a broadcast domain.

Using VLAN 0 would be exactly the same since that's the default VLAN.

> How would we cover this use case?
> 

By using 'untagged' as the VLAN number, don't we?
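
With CloudMonkey that would be something along these lines (a sketch;
the IDs are placeholders and the offering needs to be a shared offering
with Security Groups enabled):

create network zoneid=<zone-id> networkofferingid=<offering-id> \
  name=POCNet displaytext=POCNet vlan=untagged \
  gateway=192.168.0.1 netmask=255.255.255.0 \
  startip=192.168.0.100 endip=192.168.0.200

That gives you the single untagged network a Basic Zone PoC would have
used.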

Wido

> On Wed, Jun 20, 2018 at 11:34 AM Tutkowski, Mike
> <mike.tutkow...@netapp.com> wrote:
> 
> Also, yes, I agree with the list you provided, Wido. We might have
> to break “other fancy stuff” into more detail, though. ;)
> 
>     On 6/20/18, 12:32 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
> 
>     Sorry, Wido :) I missed that part.
> 
>     On 6/20/18, 5:03 AM, "Wido den Hollander" <w...@widodh.nl> wrote:
> 
> 
> 
>         On 06/20/2018 12:31 AM, Tutkowski, Mike wrote:
>         > If this initiative goes through, perhaps that’s a good
> time to bump CloudStack’s release number to 5.0.0?
>         >
> 
>         That's what I said in my e-mail :-) But yes, I agree with
> you, this
>         might be a good time to bump it to 5.0
> 
>         With that we would:
> 
>         - Drop creation of new Basic Networking Zones
>         - Support IPv6 in shared IPv6 networks
>         - Java 9?
>         - Drop support for Ubuntu 12.04
>         - Other fancy stuff?
>         - Support ConfigDrive in all scenarios properly
> 
>         How would that sound?
> 
>         Wido
> 
>         >> On Jun 19, 2018, at 3:17 PM, Wido den Hollander
> <w...@widodh.nl> wrote:
>         >>
>         >>
>         >>
>         >>> On 06/19/2018 11:07 PM, Daan Hoogland wrote:
>         >>> I like this initiative, and here comes the big but even
> though I myself
>         >>> might think it is not valid; Basic zones are there to
> give a simple start
>         >>> for new users. If we can give a one-knob start/one page
> wizard for creating
>         >>> a shared network in advanced zone with security groups
> and userdata, great.
>         >>
>         >> That would be a UI thing, but it would be a matter of
> using VLAN
>         >> isolation and giving in VLAN 0 or 'untagged', because
> that's basically
>         >> what Basic Networking does.
>         >>
>         >> It plugs the VM on top of usually cloudbr0 (KVM).
>         >>
>         >> If you use vlan://untagged for the broadcast_uri in
> Advanced Networking
>         >> you get exactly the same result.
>         >>
>         >>> And I really fancy this idea. let's make ACS more simple
> by throwing at as
>         >>> much code as we can in a gradual and controlled way :+1:
>         >>
>         >> I would love to. But I'm a real novice when it comes to
> the UI though.
>         >> So that would be something I wouldn't be good at doing.
>         >>
>         >> Blocking Basic Networking creation is a few if-statements
> at the right
>         >> location and you're done.
>         >>
>         >> Wido
>         >>
>         >>>
>         >>>> On Tue, Jun 19, 2018 at 10:57 PM, Wido den Hollander
> <w...@widodh.nl> wrote:
>         >>>>
>         >>>> Hi,
>         >>>>
>         >>>> We (PCextreme) are a big-time user of Basic Networking
> and recently
>         >>>> started to look into Advanced Networking with VLAN
> isolation and a
>         >>>> shared network.
>         >>>>
>         >>>> This provides (from what we can see) all the features
> Basic Networking
>         >>>> provides, like the VR just doing DHCP and UserData
> while the Hypervisor
>         >>>> does the Security Grouping.
>         >>>>
>         >>>> That made me wonder why we still have Basic Networking.
>         >>>>
>         >>>> Dropping all the code would be a big problem for users as you
>         >>>> can't simply migrate from Basic to Advanced.

Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-20 Thread Wido den Hollander
I never meant for this thread to derail into what should be CloudStack
5.0 :-)

I haven't heard any objections against prohibiting the creation of new
Basic Networking Zones, but hey, it's been <24 hours since I sent the
first mail.

Wido

On 06/20/2018 04:51 PM, Stephan Seitz wrote:
> Hi!
> 
> 
>>> With that we would:
>>>
>>> - Drop creation of new Basic Networking Zones
>>> - Support IPv6 in shared IPv6 networks
>>> - Java 9?
>>> - Drop support for Ubuntu 12.04
>>> - Other fancy stuff?
>> - Versioned API: keep v1 API (< v5.0.0)  and create a v2 API >= v5.0.0
>> where we fix all inconsistencies (ACL API generally, paging does not
>> always work, returned keys sometime camel case (crossZone), a.s.o.)
> 
> - Usable Error Messages (including a reason why things failed). Nothing fancy
> I think, following the respective Stacktrace in the Logfile, the top most 
> exception
> shows everything (in most cases), but looks like the last "generic" exception 
> is
> reported. 
> 
> 
>>>
>>> - Support ConfigDrive in all scenarios properly
> 
> 





Re: Working with a 'PendingReleaseNotes' file

2018-06-20 Thread Wido den Hollander



On 06/20/2018 07:53 AM, Rafael Weingärtner wrote:
> It seems so. It can then be a part of the PR. I mean, in the PR we could
> require a commit that updates this file.

Yes, that would be the thing. When you send a PR with a change that is
'big enough' you also include updating the Pending Release Notes file.

> Of course, we need to discuss if all PRs should update it, or only
> important things should go in.
> 

I think only important things and that it's up to the developer to guess
if it's important enough to go into the PendingReleaseNotes file.

Wido

> +1
> 
> On Tue, Jun 19, 2018 at 11:00 PM, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> At the Ceph project we work with a Pending Release Notes [0].
>>
>> The idea is that if a developer writes a new feature or fixes something
>> that changes functionality (or adds one) she/he also updates the
>> PendingReleaseNotes file.
>>
>> That way when a new version is released it's easier for the Release
>> Manager to know what to highlight.
>>
>> Although we can try to get everything from Jira and Github Issues it
>> might be difficult to use the proper wording.
>>
>> On every release the file is cleared and people start to add again.
>>
>> Would this be something which could benefit CloudStack and make the
>> release notes easier and more complete?
>>
>> Wido
>>
>> [0]: https://github.com/ceph/ceph/blob/master/PendingReleaseNotes
>>
> 
> 
> 


Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-20 Thread Wido den Hollander



On 06/20/2018 12:31 AM, Tutkowski, Mike wrote:
> If this initiative goes through, perhaps that’s a good time to bump 
> CloudStack’s release number to 5.0.0?
> 

That's what I said in my e-mail :-) But yes, I agree with you, this
might be a good time to bump it to 5.0

With that we would:

- Drop creation of new Basic Networking Zones
- Support IPv6 in shared IPv6 networks
- Java 9?
- Drop support for Ubuntu 12.04
- Other fancy stuff?
- Support ConfigDrive in all scenarios properly

How would that sound?

Wido

>> On Jun 19, 2018, at 3:17 PM, Wido den Hollander  wrote:
>>
>>
>>
>>> On 06/19/2018 11:07 PM, Daan Hoogland wrote:
>>> I like this initiative, and here comes the big but even though I myself
>>> might think it is not valid; Basic zones are there to give a simple start
>>> for new users. If we can give a one-knob start/one page wizard for creating
>>> a shared network in advanced zone with security groups and userdata, great.
>>
>> That would be a UI thing, but it would be a matter of using VLAN
>> isolation and giving in VLAN 0 or 'untagged', because that's basically
>> what Basic Networking does.
>>
>> It plugs the VM on top of usually cloudbr0 (KVM).
>>
>> If you use vlan://untagged for the broadcast_uri in Advanced Networking
>> you get exactly the same result.
>>
>>> And I really fancy this idea. let's make ACS more simple by throwing at as
>>> much code as we can in a gradual and controlled way :+1:
>>
>> I would love to. But I'm a real novice when it comes to the UI though.
>> So that would be something I wouldn't be good at doing.
>>
>> Blocking Basic Networking creation is a few if-statements at the right
>> location and you're done.
>>
>> Wido
>>
>>>
>>>> On Tue, Jun 19, 2018 at 10:57 PM, Wido den Hollander  
>>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> We (PCextreme) are a big-time user of Basic Networking and recently
>>>> started to look into Advanced Networking with VLAN isolation and a
>>>> shared network.
>>>>
>>>> This provides (from what we can see) all the features Basic Networking
>>>> provides, like the VR just doing DHCP and UserData while the Hypervisor
>>>> does the Security Grouping.
>>>>
>>>> That made me wonder why we still have Basic Networking.
>>>>
>>>> Dropping all the code would be a big problem for users as you can't
>>>> simply migrate from Basic to Advanced. In theory we found out that it's
>>>> possible by changing the database, but I wouldn't guarantee it works in
>>>> every use-case. So doing this automatically during a upgrade would be
>>>> difficult.
>>>>
>>>> To prevent us from having to maintain the Basic Networking code for ever
>>>> I would like to propose and discuss the matter of preventing the
>>>> creation of new Basic Networking zones.
>>>>
>>>> In the future this can get rid of a lot of if-else statements in the
>>>> code, and it would also make testing easier as we have fewer things to test.
>>>>
>>>> Most of the development also seems to go in the Advanced Networking
>>>> direction.
>>>>
>>>> We are currently also working on IPv6 in Advanced Shared Networks and
>>>> that's progressing very well.
>>>>
>>>> Would this be something to call the 5.0 release where we simplify the
>>>> networking and in the UI/API get rid of Basic Networking while keeping
>>>> it alive for existing users?
>>>>
>>>> Wido
>>>>
>>>
>>>
>>>


Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-19 Thread Wido den Hollander



On 06/19/2018 11:07 PM, Daan Hoogland wrote:
> I like this initiative, and here comes the big but even though I myself
> might think it is not valid; Basic zones are there to give a simple start
> for new users. If we can give a one-knob start/one page wizard for creating
> a shared network in advanced zone with security groups and userdata, great.

That would be a UI thing, but it would be a matter of using VLAN
isolation and giving in VLAN 0 or 'untagged', because that's basically
what Basic Networking does.

It plugs the VM on top of usually cloudbr0 (KVM).

If you use vlan://untagged for the broadcast_uri in Advanced Networking
you get exactly the same result.

> And I really fancy this idea. let's make ACS more simple by throwing at as
> much code as we can in a gradual and controlled way :+1:

I would love to. But I'm a real novice when it comes to the UI though.
So that would be something I wouldn't be good at doing.

Blocking Basic Networking creation is a few if-statements at the right
location and you're done.

Wido

> 
> On Tue, Jun 19, 2018 at 10:57 PM, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> We (PCextreme) are a big-time user of Basic Networking and recently
>> started to look into Advanced Networking with VLAN isolation and a
>> shared network.
>>
>> This provides (from what we can see) all the features Basic Networking
>> provides, like the VR just doing DHCP and UserData while the Hypervisor
>> does the Security Grouping.
>>
>> That made me wonder why we still have Basic Networking.
>>
>> Dropping all the code would be a big problem for users as you can't
>> simply migrate from Basic to Advanced. In theory we found out that it's
>> possible by changing the database, but I wouldn't guarantee it works in
>> every use-case. So doing this automatically during a upgrade would be
>> difficult.
>>
>> To prevent us from having to maintain the Basic Networking code for ever
>> I would like to propose and discuss the matter of preventing the
>> creation of new Basic Networking zones.
>>
>> In the future this can get rid of a lot of if-else statements in the
>> code, and it would also make testing easier as we have fewer things to test.
>>
>> Most of the development also seems to go in the Advanced Networking
>> direction.
>>
>> We are currently also working on IPv6 in Advanced Shared Networks and
>> that's progressing very well.
>>
>> Would this be something to call the 5.0 release where we simplify the
>> networking and in the UI/API get rid of Basic Networking while keeping
>> it alive for existing users?
>>
>> Wido
>>
> 
> 
> 


Re: Convert KVM Instance to CloudStack

2018-06-19 Thread Wido den Hollander



On 06/14/2018 01:32 AM, ilya musayev wrote:
> Hi Users and Dev
> 
> I apologize for cross posting.. 
> 
> I have a bunch of VMs that were deployed by CloudStack - however - the
> management server along with the DB is no longer available.
> 
> This is a POC environment - but I would love not to lose and recreate the
> VMs if possible.
> 
> Hence I'm thinking of writing a re-ingestion process for existing running KVM
> instances back into a new CloudStack - without doing template imports and such.
> 
> Has anyone created tooling for this endeavour by any chance? If not - I might
> have to create one :(

Not really, but all I can think of is to just re-create the VM in
CloudStack and then copy the old data from the old Primary Storage to the
right location on the new Primary Storage.

In case of NFS just copying the right files and then starting the VM.
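
As a rough sketch for the NFS case (paths and names are made up; the new
volume UUID comes from listVolumes after deploying the replacement VM):

# stop the freshly deployed VM first, then on the primary storage:
cp /mnt/old-primary/<old-volume-uuid> /mnt/new-primary/<new-volume-uuid>

and start the VM again. Just make sure the disk sizes and offerings
match.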

Wido

> 
> 
> Thanks
> ilya
> 


Working with a 'PendingReleaseNotes' file

2018-06-19 Thread Wido den Hollander
Hi,

At the Ceph project we work with a Pending Release Notes [0].

The idea is that if a developer writes a new feature or fixes something
that changes functionality (or adds one) she/he also updates the
PendingReleaseNotes file.

That way when a new version is released it's easier for the Release
Manager to know what to highlight.

Although we can try to get everything from Jira and Github Issues it
might be difficult to use the proper wording.

On every release the file is cleared and people start to add again.
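
To give an idea, an entry in such a file could be as simple as this (the
format is borrowed from the Ceph file; the content is a made-up example):

>=4.12.0
--------

* KVM: Ubuntu 14.04 is no longer supported as a hypervisor OS.
* <one or two lines per change that users should know about>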

Would this be something which could benefit CloudStack and make the
release notes easier and more complete?

Wido

[0]: https://github.com/ceph/ceph/blob/master/PendingReleaseNotes


[DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-19 Thread Wido den Hollander
Hi,

We (PCextreme) are a big-time user of Basic Networking and recently
started to look into Advanced Networking with VLAN isolation and a
shared network.

This provides (from what we can see) all the features Basic Networking
provides, like the VR just doing DHCP and UserData while the Hypervisor
does the Security Grouping.

That made me wonder why we still have Basic Networking.

Dropping all the code would be a big problem for users as you can't
simply migrate from Basic to Advanced. In theory we found out that it's
possible by changing the database, but I wouldn't guarantee it works in
every use-case. So doing this automatically during a upgrade would be
difficult.

To prevent us from having to maintain the Basic Networking code for ever
I would like to propose and discuss the matter of preventing the
creation of new Basic Networking zones.

In the future this can get rid of a lot of if-else statements in the
code, and it would also make testing easier as we have fewer things to test.

Most of the development also seems to go in the Advanced Networking
direction.

We are currently also working on IPv6 in Advanced Shared Networks and
that's progressing very well.

Would this be something to call the 5.0 release where we simplify the
networking and in the UI/API get rid of Basic Networking while keeping
it alive for existing users?

Wido


Re: [VOTE] Apache CloudStack 4.11.1.0 LTS [RC2]

2018-06-15 Thread Wido den Hollander
+1 (binding)

Tested:
- Upgrade from 4.11 to 4.11.1 (with new SSVM)
- Deployment of new System VMs
- Deployment of new Instance

Wido

On 06/11/2018 05:49 PM, Paul Angus wrote:
> Hi All,
> 
> 
> 
> I've created a 4.11.1.0 release (RC2), with the following artefacts up for 
> testing and a vote:
> 
> 
> 
> Git Branch and Commit SH:
> 
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.1.0-RC20180611T1504
> 
> Commit: bcf602c7cd4ab662a7c4f208dee32fb8513e26c8
> 
> 
> 
> Source release (checksums and signatures are available at the same
> 
> location):
> 
> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.1.0/
> 
> 
> 
> PGP release keys (signed using 8B309F7251EE0BC8):
> 
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> 
> 
> The vote will be open until the end of the week, 15th June 2018.
> 
> 
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate 
> "(binding)" with their vote?
> 
> 
> 
> [ ] +1  approve
> 
> [ ] +0  no opinion
> 
> [ ] -1  disapprove (and reason why)
> 
> 
> 
> Additional information:
> 
> 
> 
> For users' convenience, I've built packages from 
> bcf602c7cd4ab662a7c4f208dee32fb8513e26c8 and published RC2 repository here:
> 
> http://packages.shapeblue.com/testing/4111rc2/
> 
> 
> 
> The release notes are still work-in-progress, but the systemvm template 
> upgrade section has been updated. You may refer the following for systemvm 
> template upgrade testing:
> 
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html
> 
> 
> 
> 4.11.1 systemvm templates are available from here:
> http://packages.shapeblue.com/systemvmtemplate/4.11.1-rc1/
> 
> 
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> 
> 


Re: Why does a VLAN and Network have IP information?

2018-06-13 Thread Wido den Hollander



On 06/12/2018 11:32 AM, Daan Hoogland wrote:
> Wido, I think we can remove ip data from the vlan table, though it is going
> to require some hacking. Removing the vlan table seems not prudent to me,
> especially since we now have l2 networks (without ip provisioned).
> 

It is only used in Basic Networking it seems. However, Advanced
Networking with VLAN separation provides the same.

So right now we can't remove it, but it should be doable if we think of
a way to migrate from Basic to Advanced Networking with shared networks.

Wido

> On Tue, Jun 12, 2018 at 11:12 AM, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> Looking at our design and tables in the database I'm wondering why both
>> a VLAN and a Network has IP information.
>>
>> A VLAN is a Layer 2 domain and shouldn't have any IP(4/6) information
>> and we also seem to store redundant information in there.
>>
>> Below is some information I have in a test database and I'm just trying
>> to understand why both have IP information.
>>
>> Imho this information should not be stored in the VLAN table as it's
>> redundant anyway. But still, why is it there? And why do we actually use
>> the VLAN table? Because even the VLAN tag is stored in the *networks*
>> table.
>>
>> Wido
>>
>> mysql> select * from vlan limit 1 \G
>> *** 1. row ***
>>  id: 1
>>uuid: d14f30ab-072e-41b7-bfcf-0aadd156e01d
>> vlan_id: 0
>>vlan_gateway: 192.168.200.1
>>vlan_netmask: 255.255.255.0
>> description: 192.168.200.100-192.168.200.200
>>   vlan_type: DirectAttached
>>  data_center_id: 1
>>  network_id: 203
>> physical_network_id: 200
>> ip6_gateway: 2001:db8:100::1
>>ip6_cidr: 2001:db8:100::/64
>>   ip6_range: NULL
>> removed: NULL
>> created: 2018-06-09 18:53:26
>> 1 row in set (0.00 sec)
>>
>> mysql>
>>
>> mysql> select * from networks where id = 203 \G
>> *** 1. row ***
>>id: 203
>>  name: GuestNetwork1
>>  uuid: f1f7281d-bedd-422c-bd44-eae9be172157
>>  display_text: GuestNetwork1
>>  traffic_type: Guest
>> broadcast_domain_type: Vlan
>> broadcast_uri: vlan://untagged
>>   gateway: 192.168.200.1
>>  cidr: 192.168.200.0/24
>>  mode: Dhcp
>>   network_offering_id: 6
>>   physical_network_id: 200
>>data_center_id: 1
>> guru_name: DirectNetworkGuru
>> state: Setup
>>   related: 203
>> domain_id: 1
>>account_id: 1
>>  dns1: NULL
>>  dns2: NULL
>> guru_data: NULL
>>set_fields: 0
>>  acl_type: Domain
>>network_domain: cs1cloud.internal
>>reservation_id: NULL
>>guest_type: Shared
>>  restart_required: 0
>>   created: 2018-06-09 18:53:26
>>   removed: NULL
>> specify_ip_ranges: 1
>>vpc_id: NULL
>>   ip6_gateway: NULL
>>  ip6_cidr: NULL
>>  network_cidr: NULL
>>   display_network: 1
>>network_acl_id: NULL
>>   streched_l2: 0
>> redundant: 0
>>   external_id: NULL
>> 1 row in set (0.01 sec)
>>
>> mysql>
>>
> 
> 
> 


Re: Why does a VLAN and Network have IP information?

2018-06-12 Thread Wido den Hollander



On 06/12/2018 12:11 PM, Rafael Weingärtner wrote:
> In theory, the object (either in Java or a DB table) that represents a VLAN
> should not have IP information. However, it seems that someone “reused” the
> object. We would need to check if the IP data stored there is not really
> used before removing it.
> 

Indeed. It seems redundant to me. We actually have a lot of redundant
database entries, but this one is rather obvious to me.

Wido

> 
> On Tue, Jun 12, 2018 at 11:32 AM, Daan Hoogland 
> wrote:
> 
>> Wido, I think we can remove ip data from the vlan table, though it is going
>> to require some hacking. Removing the vlan table seems not prudent to me,
>> especially since we now have l2 networks (without ip provisioned).
>>
>> On Tue, Jun 12, 2018 at 11:12 AM, Wido den Hollander 
>> wrote:
>>
>>> Hi,
>>>
>>> Looking at our design and tables in the database I'm wondering why both
>>> a VLAN and a Network has IP information.
>>>
>>> A VLAN is a Layer 2 domain and shouldn't have any IP(4/6) information
>>> and we also seem to store redundant information in there.
>>>
>>> Below is some information I have in a test database and I'm just trying
>>> to understand why both have IP information.
>>>
>>> Imho this information should not be stored in the VLAN table as it's
>>> redundant anyway. But still, why is it there? And why do we actually use
>>> the VLAN table? Because even the VLAN tag is stored in the *networks*
>>> table.
>>>
>>> Wido
>>>
>>> mysql> select * from vlan limit 1 \G
>>> *** 1. row ***
>>>  id: 1
>>>uuid: d14f30ab-072e-41b7-bfcf-0aadd156e01d
>>> vlan_id: 0
>>>vlan_gateway: 192.168.200.1
>>>vlan_netmask: 255.255.255.0
>>> description: 192.168.200.100-192.168.200.200
>>>   vlan_type: DirectAttached
>>>  data_center_id: 1
>>>  network_id: 203
>>> physical_network_id: 200
>>> ip6_gateway: 2001:db8:100::1
>>>ip6_cidr: 2001:db8:100::/64
>>>   ip6_range: NULL
>>> removed: NULL
>>> created: 2018-06-09 18:53:26
>>> 1 row in set (0.00 sec)
>>>
>>> mysql>
>>>
>>> mysql> select * from networks where id = 203 \G
>>> *** 1. row ***
>>>id: 203
>>>  name: GuestNetwork1
>>>  uuid: f1f7281d-bedd-422c-bd44-eae9be172157
>>>  display_text: GuestNetwork1
>>>  traffic_type: Guest
>>> broadcast_domain_type: Vlan
>>> broadcast_uri: vlan://untagged
>>>   gateway: 192.168.200.1
>>>  cidr: 192.168.200.0/24
>>>  mode: Dhcp
>>>   network_offering_id: 6
>>>   physical_network_id: 200
>>>data_center_id: 1
>>> guru_name: DirectNetworkGuru
>>> state: Setup
>>>   related: 203
>>> domain_id: 1
>>>account_id: 1
>>>  dns1: NULL
>>>  dns2: NULL
>>> guru_data: NULL
>>>set_fields: 0
>>>  acl_type: Domain
>>>network_domain: cs1cloud.internal
>>>reservation_id: NULL
>>>guest_type: Shared
>>>  restart_required: 0
>>>   created: 2018-06-09 18:53:26
>>>   removed: NULL
>>> specify_ip_ranges: 1
>>>vpc_id: NULL
>>>   ip6_gateway: NULL
>>>  ip6_cidr: NULL
>>>  network_cidr: NULL
>>>   display_network: 1
>>>network_acl_id: NULL
>>>   streched_l2: 0
>>> redundant: 0
>>>   external_id: NULL
>>> 1 row in set (0.01 sec)
>>>
>>> mysql>
>>>
>>
>>
>>
>> --
>> Daan
>>
> 
> 
> 


Re: Why does a VLAN and Network have IP information?

2018-06-12 Thread Wido den Hollander



On 06/12/2018 12:23 PM, Ivan Kudryavtsev wrote:
> Hi, Devs, ipv6 in vlan table is used. Without the information in that
> table, ipv6 wouldn't work with basic zone.
> 

Yes, I'm aware of that. Would be a rather simple fix. Just a few lines
of code. I know where to find it.

Wido

> вт, 12 июн. 2018 г., 13:11 Rafael Weingärtner :
> 
>> In theory, the object (either in Java or a DB table) that represents a VLAN
>> should not have IP information. However, it seems that someone “reused” the
>> object. We would need to check if the IP data stored there is not really
>> used before removing it.
>>
>>
>> On Tue, Jun 12, 2018 at 11:32 AM, Daan Hoogland 
>> wrote:
>>
>>> Wido, I think we can remove ip data from the vlan table, though it is
>> going
>>> to require some hacking. Removing the vlan table seems not prudent to me,
>>> especially since we now have l2 networks (without ip provisioned).
>>>
>>> On Tue, Jun 12, 2018 at 11:12 AM, Wido den Hollander 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Looking at our design and tables in the database I'm wondering why both
>>>> a VLAN and a Network has IP information.
>>>>
>>>> A VLAN is a Layer 2 domain and shouldn't have any IP(4/6) information
>>>> and we also seem to store redundant information in there.
>>>>
>>>> Below is some information I have in a test database and I'm just trying
>>>> to understand why both have IP information.
>>>>
>>>> Imho this information should not be stored in the VLAN table as it's
>>>> redundant anyway. But still, why is it there? And why do we actually
>> use
>>>> the VLAN table? Because even the VLAN tag is stored in the *networks*
>>>> table.
>>>>
>>>> Wido
>>>>
>>>> mysql> select * from vlan limit 1 \G
>>>> *** 1. row ***
>>>>  id: 1
>>>>uuid: d14f30ab-072e-41b7-bfcf-0aadd156e01d
>>>> vlan_id: 0
>>>>vlan_gateway: 192.168.200.1
>>>>vlan_netmask: 255.255.255.0
>>>> description: 192.168.200.100-192.168.200.200
>>>>   vlan_type: DirectAttached
>>>>  data_center_id: 1
>>>>  network_id: 203
>>>> physical_network_id: 200
>>>> ip6_gateway: 2001:db8:100::1
>>>>ip6_cidr: 2001:db8:100::/64
>>>>   ip6_range: NULL
>>>> removed: NULL
>>>> created: 2018-06-09 18:53:26
>>>> 1 row in set (0.00 sec)
>>>>
>>>> mysql>
>>>>
>>>> mysql> select * from networks where id = 203 \G
>>>> *** 1. row ***
>>>>id: 203
>>>>  name: GuestNetwork1
>>>>  uuid: f1f7281d-bedd-422c-bd44-eae9be172157
>>>>  display_text: GuestNetwork1
>>>>  traffic_type: Guest
>>>> broadcast_domain_type: Vlan
>>>> broadcast_uri: vlan://untagged
>>>>   gateway: 192.168.200.1
>>>>  cidr: 192.168.200.0/24
>>>>  mode: Dhcp
>>>>   network_offering_id: 6
>>>>   physical_network_id: 200
>>>>data_center_id: 1
>>>> guru_name: DirectNetworkGuru
>>>> state: Setup
>>>>   related: 203
>>>> domain_id: 1
>>>>account_id: 1
>>>>  dns1: NULL
>>>>  dns2: NULL
>>>> guru_data: NULL
>>>>set_fields: 0
>>>>  acl_type: Domain
>>>>network_domain: cs1cloud.internal
>>>>reservation_id: NULL
>>>>guest_type: Shared
>>>>  restart_required: 0
>>>>   created: 2018-06-09 18:53:26
>>>>   removed: NULL
>>>> specify_ip_ranges: 1
>>>>vpc_id: NULL
>>>>   ip6_gateway: NULL
>>>>  ip6_cidr: NULL
>>>>  network_cidr: NULL
>>>>   display_network: 1
>>>>network_acl_id: NULL
>>>>   streched_l2: 0
>>>> redundant: 0
>>>>   external_id: NULL
>>>> 1 row in set (0.01 sec)
>>>>
>>>> mysql>
>>>>
>>>
>>>
>>>
>>> --
>>> Daan
>>>
>>
>>
>>
>> --
>> Rafael Weingärtner
>>
> 


Why does a VLAN and Network have IP information?

2018-06-12 Thread Wido den Hollander
Hi,

Looking at our design and tables in the database I'm wondering why both
a VLAN and a Network have IP information.

A VLAN is a Layer 2 domain and shouldn't have any IP(4/6) information
and we also seem to store redundant information in there.

Below is some information I have in a test database and I'm just trying
to understand why both have IP information.

Imho this information should not be stored in the VLAN table as it's
redundant anyway. But still, why is it there? And why do we actually use
the VLAN table? Because even the VLAN tag is stored in the *networks* table.
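
Just to make the point concrete: if the IP columns really are redundant,
the object behind the vlan table could shrink to roughly the following.
This is a hypothetical sketch keeping only the non-IP columns from the
dump below; the class name is illustrative and not the actual CloudStack
entity:

import java.util.Date;

/* Hypothetical slimmed-down VLAN range entity: the IP details would
 * live only in the networks table, referenced via networkId. */
public class VlanRange {
    private long id;
    private String uuid;
    private String vlanTag;         // maps to the vlan_id column
    private long dataCenterId;      // data_center_id
    private long networkId;         // network_id -> networks table
    private long physicalNetworkId; // physical_network_id
    private Date created;
    private Date removed;
}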

Wido

mysql> select * from vlan limit 1 \G
*** 1. row ***
 id: 1
   uuid: d14f30ab-072e-41b7-bfcf-0aadd156e01d
vlan_id: 0
   vlan_gateway: 192.168.200.1
   vlan_netmask: 255.255.255.0
description: 192.168.200.100-192.168.200.200
  vlan_type: DirectAttached
 data_center_id: 1
 network_id: 203
physical_network_id: 200
ip6_gateway: 2001:db8:100::1
   ip6_cidr: 2001:db8:100::/64
  ip6_range: NULL
removed: NULL
created: 2018-06-09 18:53:26
1 row in set (0.00 sec)

mysql>

mysql> select * from networks where id = 203 \G
*** 1. row ***
   id: 203
 name: GuestNetwork1
 uuid: f1f7281d-bedd-422c-bd44-eae9be172157
 display_text: GuestNetwork1
 traffic_type: Guest
broadcast_domain_type: Vlan
broadcast_uri: vlan://untagged
  gateway: 192.168.200.1
 cidr: 192.168.200.0/24
 mode: Dhcp
  network_offering_id: 6
  physical_network_id: 200
   data_center_id: 1
guru_name: DirectNetworkGuru
state: Setup
  related: 203
domain_id: 1
   account_id: 1
 dns1: NULL
 dns2: NULL
guru_data: NULL
   set_fields: 0
 acl_type: Domain
   network_domain: cs1cloud.internal
   reservation_id: NULL
   guest_type: Shared
 restart_required: 0
  created: 2018-06-09 18:53:26
  removed: NULL
specify_ip_ranges: 1
   vpc_id: NULL
  ip6_gateway: NULL
 ip6_cidr: NULL
 network_cidr: NULL
  display_network: 1
   network_acl_id: NULL
  streched_l2: 0
redundant: 0
  external_id: NULL
1 row in set (0.01 sec)

mysql>


Re: Multiple Physical Networks in Basic Networking (KVM)

2018-06-09 Thread Wido den Hollander



On 06/08/2018 03:54 PM, Dag Sonstebo wrote:
> Ivan – not sure how you deal with per-network VM bandwidth (or what your use 
> case is) so probably worth testing in the lab.
> 

Isn't that done by libvirt in the XML? In Basic Zone at least that
works. It is part of the service offering.

> Wido – agree, I don’t see why our current “basic zone” can’t be deprecated in 
> the long run for “advanced zone with security groups” since they serve the 
> same purpose and the latter gives more flexibility. There may be use cases 
> where they don’t behave the same – but personally I’ve not come across any 
> issues.
> 

I wouldn't know those cases. I'll test and see how it works out. Give me
some time and I'll get back to this topic.

Might even be possible to convert a Basic Zone to an Advanced Zone by
doing some database mutations.

Wido

> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> On 08/06/2018, 14:44, "Wido den Hollander"  wrote:
> 
> 
> 
> On 06/08/2018 03:32 PM, Dag Sonstebo wrote:
> > Hi Ivan,
> > 
> > Not quite – “advanced zone with security group” allows you to have 
> multiple “basic” type networks isolated within their own VLANs and with 
> security groups isolation between VMs / accounts. The VR only does DNS/DHCP, 
> not GW/NAT.
> > 
> 
> Hmm, yes, that is actually what we are looking for. The main
> reason for Basic Networking is the shared services we offer on a public
> cloud.
> 
> A VR dies as soon as there is any flood, so that's why we have our
> physical routers do the work.
> 
> I thought that what you mentioned is "DirectAttached" networking.
> 
> But that brings me to the question of why we still have Basic Networking
> :-) From earlier conversations I had with people, I think that in the
> long run Basic Networking can be dropped/merged in favor of Advanced
> Networking with Security Groups, right?
> 
> Accounts/VMs are deployed inside the same VLAN and isolation is done by
> Security Groups.
> 
> Sounds right, let me dig into that!
> 
> Wido
> 
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> > 
> > On 08/06/2018, 14:26, "Ivan Kudryavtsev"  
> wrote:
> > 
> > Hi, Dag. Not exactly. Advanced zone uses VR as a GW with SNAT/DNAT
> > which is not quite good for public cloud in my case. Despite that it
> > really solves the problem. But I would like to have it as simple as
> > possible, without VR as a GW and xNAT.
> > 
> > Fri, 8 Jun 2018, 15:21 Dag Sonstebo :
> > 
> > > Wido / Ivan – I’m probably missing something – but is the feature
> > > you are looking for not the same functionality we currently have in
> > > “advanced zones with security groups”?
> > >
> > > Regards,
> > > Dag Sonstebo
> > > Cloud Architect
> > > ShapeBlue
> > >
> > > On 08/06/2018, 14:14, "Ivan Kudryavtsev"  wrote:
> > >
> > > Hi Wido, I am also very interested in a similar deployment, especially
> > > combined with the capability of setting different network bandwidth
> > > for different networks, like 10.0.0.0/8 intra-DC with 1G bandwidth per
> > > VM and public ("white") IPv4/IPv6 with regular bandwidth management.
> > > But it seems it would take a very big redesign of VM settings, and a
> > > VR redesign is also required.
> > >
> > > When I tried to investigate whether it is possible with ACS basic
> > > network, I didn't succeed in finding any relevant information.
> > >
> > >
> > > Fri, 8 Jun 2018, 14:56 Wido den Hollander :
> > >
> > > > Hi,
> > > >
> > > > I am looking into supporting multiple Physical Networks inside our
> > > > Basic Networking zone.
> > > >
> > > > First: The reason we use Basic Networking is the simplicity and the
> > > > fact that our (Juniper) routers can do the routing and not the VR.
> > 

Re: Multiple Physical Networks in Basic Networking (KVM)

2018-06-08 Thread Wido den Hollander



On 06/08/2018 03:32 PM, Dag Sonstebo wrote:
> Hi Ivan,
> 
> Not quite – “advanced zone with security group” allows you to have multiple 
> “basic” type networks isolated within their own VLANs and with security 
> groups isolation between VMs / accounts. The VR only does DNS/DHCP, not 
> GW/NAT.
> 

Hmm, yes, that is actually what we are looking for. The main
reason for Basic Networking is the shared services we offer on a public
cloud.

A VR dies as soon as there is any flood, so that's why we have our
physical routers do the work.

I thought that what you mentioned is "DirectAttached" networking.

But that brings me to the question of why we still have Basic Networking
:-) From earlier conversations I had with people, I think that in the
long run Basic Networking can be dropped/merged in favor of Advanced
Networking with Security Groups, right?

Accounts/VMs are deployed inside the same VLAN and isolation is done by
Security Groups.

Sounds right, let me dig into that!

Wido

> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> On 08/06/2018, 14:26, "Ivan Kudryavtsev"  wrote:
> 
> Hi, Dag. Not exactly. Advanced zone uses VR as a GW with SNAT/DNAT which
> is not quite good for public cloud in my case. Despite that it really solves
> the problem. But I would like to have it as simple as possible, without VR
> as a GW and xNAT.
> 
> Fri, 8 Jun 2018, 15:21 Dag Sonstebo :
> 
> > Wido / Ivan – I’m probably missing something – but is the feature you
> > are looking for not the same functionality we currently have in
> > “advanced zones with security groups”?
> >
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 08/06/2018, 14:14, "Ivan Kudryavtsev"  wrote:
> >
> > Hi Wido, I am also very interested in a similar deployment, especially
> > combined with the capability of setting different network bandwidth
> > for different networks, like 10.0.0.0/8 intra-DC with 1G bandwidth per
> > VM and public ("white") IPv4/IPv6 with regular bandwidth management.
> > But it seems it would take a very big redesign of VM settings, and a
> > VR redesign is also required.
> >
> > When I tried to investigate whether it is possible with ACS basic
> > network, I didn't succeed in finding any relevant information.
> >
> >
> > Fri, 8 Jun 2018, 14:56 Wido den Hollander :
> >
> > > Hi,
> > >
> > > I am looking into supporting multiple Physical Networks inside our
> > > Basic Networking zone.
> > >
> > > First: The reason we use Basic Networking is the simplicity and the
> > > fact that our (Juniper) routers can do the routing and not the VR.
> > >
> > > ALL our VMs have external IPv4/IPv6 addresses and we do not use NAT
> > > anywhere.
> > >
> > > But right now a Hypervisor has a single VLAN/POD going to it
> > > terminated on 'cloudbr0' using vlan://untagged.
> > >
> > > But to better utilize our physical hardware it would be great if
> > > Basic Networking would support multiple physical networks using VLAN
> > > separation.
> > >
> > > For example:
> > >
> > > - PhysicalNetwork1: VLAN 100
> > > - PhysicalNetwork2: VLAN 101
> > > - PhysicalNetwork3: VLAN 102
> > >
> > > I've been looking into DirectAttached with Advanced Networking, but I
> > > couldn't find any reference to how that exactly works.
> > >
> > > Right now for our use-case Basic Networking with multiple Physical
> > > Networks would work best for us.
> > >
> > > Has anybody looked at this or has any insight into the problems we
> > > might run into?
> > >
> > > Wido
> > >
> >
> >
> >
> > dag.sonst...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
> 
> 
> 
> dag.sonst...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 


Multiple Physical Networks in Basic Networking (KVM)

2018-06-08 Thread Wido den Hollander
Hi,

I am looking into supporting multiple Physical Networks inside our
Basic Networking zone.

First: The reason we use Basic Networking is the simplicity and the fact
that our (Juniper) routers can do the routing and not the VR.

ALL our VMs have external IPv4/IPv6 addresses and we do not use NAT
anywhere.

But right now a Hypervisor has a single VLAN/POD going to it terminated
on 'cloudbr0' using vlan://untagged.

But to better utilize our physical hardware it would be great if Basic
Networking would support multiple physical networks using VLAN separation.

For example:

- PhysicalNetwork1: VLAN 100
- PhysicalNetwork2: VLAN 101
- PhysicalNetwork3: VLAN 102

I've been looking into DirectAttached with Advanced Networking, but I
couldn't find any reference to how that exactly works.

Right now for our use-case Basic Networking with multiple Physical
Networks would work best for us.

Has anybody looked at this or has any insight into the problems we might
run into?

Wido


Re: [VOTE] Apache CloudStack 4.11.1.0 LTS [RC1]

2018-06-04 Thread Wido den Hollander



On 06/04/2018 12:17 PM, Frank Maximus wrote:
> -1.
> 
> Reset password with ConfigDrive seems to be broken.
> The old password stays in use.

Did you check if the password has been updated on the ISO?

If you *just* reboot the VM from within the guest, the ISO isn't re-read
by KVM/Qemu, so you will see stale data.

You will need to stop the VM from CloudStack and initiate a start again
so that it will rebuild the ISO.

Wido

> Will make a ticket soon.
> Also the example scripts in setup/bindir need to be changed.
> 
> 
> On Mon, May 28, 2018 at 10:26 AM Daan Hoogland 
> wrote:
> 
>> I checked three verification files, unpacked and build the code, and am
>> trusting the process otherwise:
>> 0 (binding)
>> The reason I am not giving a +1 is because the code presently does not
>> build on anything but linux, due to the configdrive test not building on
>> macosx (and I presume windows). If we add that to the release notes I am fine
>> with it.
>>
>> On Sat, May 26, 2018 at 5:27 AM, Tutkowski, Mike <
>> mike.tutkow...@netapp.com>
>> wrote:
>>
>>> +1 (binding)
>>>
>>> I created a new cloud using commit
>> 5f48487dc62fd1decaabc4ab2a10f549d6c82400
>>> (RC1). I ran the automated regression tests for managed storage. All
>> tests
>>> passed.
>>>
>>> On 5/24/18, 9:56 AM, "Paul Angus"  wrote:
>>>
>>> Hi All,
>>>
>>>
>>>
>>> I've created a 4.11.1.0 release (RC1), with the following artefacts
>> up
>>> for testing and a vote:
>>>
>>> [NB we know there are issues for Nuage to sort in this RC, but they
>>> will be well contained, so let’s test everything else  ]
>>>
>>>
>>>
>>> Git Branch and Commit SH:
>>>
>>> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=
>>> shortlog;h=refs/heads/4.11.1.0-RC20180524T1000
>>>
>>> Commit: 5f48487dc62fd1decaabc4ab2a10f549d6c82400
>>>
>>>
>>>
>>> Source release (checksums and signatures are available at the same
>>>
>>> location):
>>>
>>> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.1.0/
>>>
>>>
>>>
>>> PGP release keys (signed using 8B309F7251EE0BC8):
>>>
>>> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>>>
>>>
>>>
>>> The vote will be open till end of next week, 1st June 2018.
>>>
>>>
>>>
>>> For sanity in tallying the vote, can PMC members please be sure to
>>> indicate "(binding)" with their vote?
>>>
>>>
>>>
>>> [ ] +1  approve
>>>
>>> [ ] +0  no opinion
>>>
>>> [ ] -1  disapprove (and reason why)
>>>
>>>
>>>
>>> Additional information:
>>>
>>>
>>>
>>> For users' convenience, I've built packages from
>>> 5f48487dc62fd1decaabc4ab2a10f549d6c82400 and published RC1 repository
>>> here:
>>>
>>> http://packages.shapeblue.com/testing/4111rc1/
>>>
>>>
>>>
>>> The release notes are still work-in-progress, but the systemvm
>>> template upgrade section has been updated. You may refer the following
>> for
>>> systemvm template upgrade testing:
>>>
>>> http://docs.cloudstack.apache.org/projects/cloudstack-
>>> release-notes/en/latest/index.html
>>>
>>>
>>>
>>> 4.11.1 systemvm templates are available from here:
>>> http://packages.shapeblue.com/systemvmtemplate/4.11.1-rc1/
>>>
>>>
>>> Kind regards,
>>>
>>> Paul Angus
>>>
>>>
>>> paul.an...@shapeblue.com
>>> www.shapeblue.com
>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>> @shapeblue
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Daan
>>
> 


Re: Can't build master

2018-05-23 Thread Wido den Hollander


On 05/22/2018 08:58 PM, Tutkowski, Mike wrote:
> Hi Rohit,
> 
> I’ve tried a few things so far, but none seem to install genisoimage in 
> /usr/bin as the test indicates is required.
> 

genisoimage isn't a binary generated by CloudStack; you have to
install it yourself.

Under Ubuntu Linux this would be:

$ apt install mkisofs

But I don't know how this works under MacOS, maybe using brew?
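
As for the unit test itself: until this is sorted out, the test could
skip itself on hosts that lack the binary, as Rohit suggests below. A
minimal sketch, assuming JUnit 4's Assume is on the test classpath
(class and method names here are illustrative):

import java.io.File;

import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class ConfigDriveIsoTestGuard {

    @Before
    public void requireGenisoimage() {
        // Skip (rather than fail) the ISO-building tests on hosts where
        // /usr/bin/genisoimage does not exist, e.g. on MacOS.
        Assume.assumeTrue(new File("/usr/bin/genisoimage").exists());
    }

    @Test
    public void testBuildIso() {
        // the actual ISO-building assertions would go here
    }
}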

Wido

> From 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Setting+Up+a+CloudStack+Development+Environment+on+Mac+OS+X,
>  I’ve tried these steps:
> 
> • sudo port install cdrtools; or using brew: brew install cdrtools (could 
> take a long time)
> 'brew install cdrtools' did not work for me on OSX 10.9.  However, 'brew 
> install dvdrtools' did work for me...
> • NOTE - If after the above steps, for any reason, mkisofs is still not 
> installed, download it from the net. One good link to get mkisofs for mac is 
> - http://www.helios.de/viewart.html?id=1000-en#download . Follow the 
> instructions in the section "Download HELIOS “mkisofs” tested binary 
> versions". Use the macosx86 binary if you're running mac os x on an intel 
> platform. After downloading the mkisofs binary, copy it over to 
> /usr/local/bin/.
> 
> I only use Mac OS X to build the code locally. I don’t actually run the 
> management server from this machine (I run it on Ubuntu).
> 
> For the time being at least, I can just use –DskipTests=true when building on 
> Mac OS X.
> 
> Talk to you later,
> Mike
> 
> On 5/22/18, 12:19 AM, "Rohit Yadav" <rohit.ya...@shapeblue.com> wrote:
> 
> Hi Mike,
> 
> 
> Is genisoimage or mkisofs available on osx? This is usually installed at 
> /usr/bin/ on CentOS6/CentOS7/Ubuntu Linux. Can you try brew or something else 
> to install it?
> 
> They are also used by injectkeys.sh/.py when the management server 
> starts. The change is part of a recent PR I did and added a unit test for it 
> where it tries to build a config drive ISO file. If genisoimage is not 
> availabe on OSX, we can add some environment check to the unit test to skip 
> on non-Linux environments.
> 
> 
> - Rohit
> 
> <https://cloudstack.apache.org>
> 
> 
> 
> 
> From: Tutkowski, Mike <mike.tutkow...@netapp.com>
> Sent: Tuesday, May 22, 2018 2:13:23 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Can't build master
> 
> Just an FYI that this is on OS X Version 10.11.6.
> 
> From: "Tutkowski, Mike" <mike.tutkow...@netapp.com>
> Date: Monday, May 21, 2018 at 2:42 PM
> To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> Subject: Can't build master
> 
> Hi,
> 
> Did I miss an e-mail or something? I’m having trouble building master 
> (below).
> 
> Thanks!
> Mike
> 
> Running org.apache.cloudstack.storage.configdrive.ConfigDriveBuilderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.cloudstack.storage.configdrive.ConfigDriveBuilder).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
> more info.
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.296 sec 
> <<< FAILURE! - in 
> org.apache.cloudstack.storage.configdrive.ConfigDriveBuilderTest
> 
> testConfigDriveBuild(org.apache.cloudstack.storage.configdrive.ConfigDriveBuilderTest)
>   Time elapsed: 0.278 sec  <<< ERROR!
> com.cloud.utils.exception.CloudRuntimeException: Unable to create iso 
> file: i-x-y.iso due to java.io.IOException: Cannot run program 
> "/usr/bin/genisoimage": error=2, No such file or directory
> at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> at com.cloud.utils.script.Script.execute(Script.java:215)
> at com.cloud.utils.script.Script.execute(Script.java:183)
> at 
> org.apache.cloudstack.storage.configdrive.ConfigDriveBuilder.buildConfigDrive(ConfigDriveBuilder.java:152)
> at 
> org.apache.cloudstack.storage.configdrive.ConfigDriveBuilderTest.testConfigDriveBuild(ConfigDriveBuilderTest.java:56)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>

Re: Ceph RBD issues in 4.11

2018-05-17 Thread Wido den Hollander


On 05/17/2018 04:32 PM, Glen Baars wrote:
> Hello Wido,
> 
> Thanks for the reply.
> 
> I used the RBD command line tool directly without specifying any features. 
> It’s the default since Ceph Jewel.
> 
> The features that I require are for these use cases:
> 
> 1. Using 'rbd du' to get the real disk usage of a volume without fast-diff is
> very slow. We need to schedule this after hours on large images.
> 2. We are starting to use rbd mirroring across to our secondary DC and this
> requires exclusive-lock.
> 3. Deep-flatten cannot be turned on after image creation. I am not sure if I
> require this feature at this time.
> 

I see. Could you create an issue for this and assign it to me on Github?
I'll take a look at it later.

The way it works right now is as intended. You can try to patch this
locally on your system by building a JAR file with a patch in there.

Will try to get this into 4.12

Wido

> Kind regards,
> Glen Baars
> 
> -Original Message-
> From: Wido den Hollander <w...@widodh.nl>
> Sent: Thursday, 17 May 2018 10:18 PM
> To: dev@cloudstack.apache.org; Glen Baars <g...@onsitecomputers.com.au>
> Subject: Re: Ceph RBD issues in 4.11
> 
> 
> 
> On 05/17/2018 01:50 PM, Glen Baars wrote:
>> Hello Dev,
>>
>> I have recently upgraded our cloudstack environment to 4.11. Mostly
>> all has been smooth. ( this environment is legacy from cloud.com days!
>> )
>>
>> There are some issues that I have run into:
>>
>> 1. Can't install any VMs from ISO (I have seen this in the list previously
>> but can't find a bug report for it). If further reports or debug will help I
>> can assist. It is easy to reproduce.
>> 2. When a VM is created from a template, the RBD features are lost. More info
>> below.
>>
>> Example of VM volume from template: -
>>
>> user@NAS-AUBUN-RK3-CEPH01:~# rbd info
>> AUBUN-KVM-CLUSTER01-SSD/feeb52ec-f111-4a0d-9785-23aadd7650a5
>>
>> rbd image 'feeb52ec-f111-4a0d-9785-23aadd7650a5':
>> size 150 GB in 38400 objects
>> order 22 (4096 kB objects)
>> block_name_prefix: rbd_data.142926a5ee64
>> format: 2
>> features: layering
>> flags:
>> create_timestamp: Fri Apr 27 12:46:21 2018
>> parent: 
>> AUBUN-KVM-CLUSTER01-SSD/d7dcd9e4-ed55-44ae-9a71-52c9307e53b4@cloudstack-base-snap
>> overlap: 150 GB
>>
>> Note the features are not the same as the parent : -
>>
>> user@NAS-AUBUN-RK3-CEPH01:~# rbd info
>> AUBUN-KVM-CLUSTER01-SSD/d7dcd9e4-ed55-44ae-9a71-52c9307e53b4
>> rbd image 'd7dcd9e4-ed55-44ae-9a71-52c9307e53b4':
>> size 150 GB in 38400 objects
>> order 22 (4096 kB objects)
>> block_name_prefix: rbd_data.141d274b0dc51
>> format: 2
>> features: layering, exclusive-lock, object-map, fast-diff, 
>> deep-flatten
>> flags:
>> create_timestamp: Fri Apr 27 12:37:05 2018
>>
>>
>> If you manually clone the volume the expected features are retained. We are 
>> running the latest Ceph version, KVM hosts on Ubuntu 16.04 with the latest 
>> Luminous qemu-img.
>>
> 
> How do you clone the volume manually? I assume with the rbd tool?
> 
> Because this is where Java/CloudStack clones the image:
> 
> https://github.com/apache/cloudstack/blob/master/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java#L957
> 
> 
> private int rbdFeatures = (1 << 0); /* Feature 1<<0 means layering in RBD 
> format 2 */
> 
> rbd.clone(template.getName(), rbdTemplateSnapName, io, disk.getName(), 
> rbdFeatures, rbdOrder);
> 
> 
> So it's on purpose and this has a historical reason which I can't think of
> anymore.
> 
> We can probably update this to include exclusive-lock, object-map, fast-diff 
> and deep-flatten. Or completely skip it and have RBD figure it out. Don't 
> know anymore why this code is in there.
> 
> But is it a real problem that you don't have those features?
> 
> Wido
> 
>> Kind regards,
>> Glen Baars
>>
>> This e-mail is intended solely for the benefit of the addressee(s) and any 
>> other named recipient. It is confidential and may contain legally privileged 
>> or confidential information. If you are not the recipient, any use, 
>> distribution, disclosure or copying of this e-mail is prohibited. The 
>> confidentiality and legal privilege attached to this communication is not 
>> waived or lost by reason of the mistaken transmission or delivery to you. If 
>> you have received this e-m

Re: Ceph RBD issues in 4.11

2018-05-17 Thread Wido den Hollander


On 05/17/2018 01:50 PM, Glen Baars wrote:
> Hello Dev,
> 
> I have recently upgraded our cloudstack environment to 4.11. Mostly all has 
> been smooth. ( this environment is legacy from cloud.com days! )
> 
> There are some issues that I have run into:
> 
> 1. Can't install any VMs from ISO (I have seen this in the list previously
> but can't find a bug report for it). If further reports or debug will help I
> can assist. It is easy to reproduce.
> 2. When a VM is created from a template, the RBD features are lost. More info
> below.
> 
> Example of VM volume from template: -
> 
> user@NAS-AUBUN-RK3-CEPH01:~# rbd info 
> AUBUN-KVM-CLUSTER01-SSD/feeb52ec-f111-4a0d-9785-23aadd7650a5
> 
> rbd image 'feeb52ec-f111-4a0d-9785-23aadd7650a5':
> size 150 GB in 38400 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.142926a5ee64
> format: 2
> features: layering
> flags:
> create_timestamp: Fri Apr 27 12:46:21 2018
> parent: 
> AUBUN-KVM-CLUSTER01-SSD/d7dcd9e4-ed55-44ae-9a71-52c9307e53b4@cloudstack-base-snap
> overlap: 150 GB
> 
> Note the features are not the same as the parent : -
> 
> user@NAS-AUBUN-RK3-CEPH01:~# rbd info 
> AUBUN-KVM-CLUSTER01-SSD/d7dcd9e4-ed55-44ae-9a71-52c9307e53b4
> rbd image 'd7dcd9e4-ed55-44ae-9a71-52c9307e53b4':
> size 150 GB in 38400 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.141d274b0dc51
> format: 2
> features: layering, exclusive-lock, object-map, fast-diff, 
> deep-flatten
> flags:
> create_timestamp: Fri Apr 27 12:37:05 2018
> 
> 
> If you manually clone the volume the expected features are retained. We are 
> running the latest Ceph version, KVM hosts on Ubuntu 16.04 with the latest 
> Luminous qemu-img.
> 

How do you clone the volume manually? I assume with the rbd tool?

Because this is where Java/CloudStack clones the image:

https://github.com/apache/cloudstack/blob/master/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java#L957


private int rbdFeatures = (1 << 0); /* Feature 1<<0 means layering in
RBD format 2 */

rbd.clone(template.getName(), rbdTemplateSnapName, io, disk.getName(),
rbdFeatures, rbdOrder);


So it's on purpose and this has a historical reason which I can't think
of anymore.

We can probably update this to include exclusive-lock, object-map,
fast-diff and deep-flatten. Or completely skip it and have RBD figure it
out. Don't know anymore why this code is in there.
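
Something along these lines could do it (just a sketch, untested; the
bit values follow librbd's RBD_FEATURE_* flags, where layering is 1<<0,
exclusive-lock 1<<2, object-map 1<<3, fast-diff 1<<4 and deep-flatten
1<<5):

/* Enable more image features on cloned RBD volumes; each bit maps to
 * a librbd RBD_FEATURE_* flag. */
private int rbdFeatures = (1 << 0)   /* layering */
                        | (1 << 2)   /* exclusive-lock */
                        | (1 << 3)   /* object-map */
                        | (1 << 4)   /* fast-diff */
                        | (1 << 5);  /* deep-flatten */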

But is it a real problem that you don't have those features?

Wido

> Kind regards,
> Glen Baars
> 
> This e-mail is intended solely for the benefit of the addressee(s) and any 
> other named recipient. It is confidential and may contain legally privileged 
> or confidential information. If you are not the recipient, any use, 
> distribution, disclosure or copying of this e-mail is prohibited. The 
> confidentiality and legal privilege attached to this communication is not 
> waived or lost by reason of the mistaken transmission or delivery to you. If 
> you have received this e-mail in error, please notify us immediately.
> 


Re: John Kinsella and Wido den Hollander now ASF members

2018-05-03 Thread Wido den Hollander
Thanks everybody! Much appreciated! :-)

Wido

On 05/03/2018 10:12 AM, Marc-Aurèle Brothier wrote:
> Congratulations to both of you!
> 
> On Thu, May 3, 2018 at 10:08 AM, Rohit Yadav <rohit.ya...@shapeblue.com>
> wrote:
> 
>> Congratulations John and Wido.
>>
>>
>>
>> - Rohit
>>
>> <https://cloudstack.apache.org>
>>
>>
>>
>> 
>> From: David Nalley <da...@gnsa.us>
>> Sent: Wednesday, May 2, 2018 9:27:37 PM
>> To: dev@cloudstack.apache.org; priv...@cloudstack.apache.org
>> Subject: John Kinsella and Wido den Hollander now ASF members
>>
>> Hi folks,
>>
>> As noted in the press release[1] John Kinsella and Wido den Hollander
>> have been elected to the ASF's membership.
>>
>> Members are the 'shareholders' of the foundation, elect the board of
>> directors, and help guide the future of the ASF.
>>
>> Congrats to both of you, very well deserved.
>>
>> --David
>>
>> [1] https://s.apache.org/ysxx
>>
>> rohit.ya...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
>>
> 


Re: [DISCUSS] 4.11.1.0 release

2018-04-23 Thread Wido den Hollander


On 04/20/2018 03:03 PM, Rohit Yadav wrote:
> All,
> 
> 
> I would like to kick a discussion around the next 4.11 LTS minor/bugfix 
> release - 4.11.1.0.
> 
> 
> Many of us have tried to discuss, triage and fix issues that you've reported 
> both on users and dev MLs. It is possible that some of those discussions have 
> not met a conclusion/solution yet. Therefore, first check if your reported 
> issue has been reported or fixed towards the 4.11.1.0 milestone from here:
> 
> https://github.com/apache/cloudstack/milestone/5
> 
> 
> In case it is not reported/fixed, please report your issues with details here 
> (perhaps giving link to an existing email thread):
> 
> https://github.com/apache/cloudstack/issues
> 
> 
> When you do report an issue (or send a PR), especially upgrade or regression 
> related, please do use the milestone 4.11.1.0 along with suitable labels 
> which will make it easier to triage them.
> 
> 
> I still don't have an exact timeline to share yet but I'll try to get that 
> shared as soon as next week. Thanks.
> 

Thank you for your effort! Gabriel and I will spend some time on
helping to get 4.11.1 out of the door!

Wido

> 
> - Rohit
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 
> 


Re: ConfigDrive status

2018-03-28 Thread Wido den Hollander


On 03/27/2018 08:07 PM, Kris Sterckx wrote:
> Guys if we are talking about the 4.11 implementation, this is documented
> here
> 
> 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Using+ConfigDrive+for+Metadata%2C+Userdata+and+Password
> 
> 
> It builds further on Jayapal's work, but the support has been significantly
> generalized.
> 
> - generalized to support Isolated networks and VPCs as well (note: no
> support *yet* for Basic Zone)
> 

Thanks! It's on my wish-list to test this feature and some other things
like local storage migration.

Could take a while before I get to it, but then we can test this for
Basic Networking as well.

Wido

> - KVM support and ESXi (no lab for testing XEN)
> 
> - it comes with its own User Data provider now ("ConfigDrive"), so any
> network offering can or cannot select Config Drive to be the provider of
> User Data.
>   The use case where the VR is no longer in any of the providers list is
> an interesting one obviously, but it's fully flexible.
> 
> - OpenStack cloud-init aligned (cf. the Miami CCC conclusion last year)
> 
> - CoreOS ignition support has been provided
> 
> 
> The ISOs are stored today on the secondary storage.
> 
> 
> This is the first time I see the issues reported above. In which context
> were they filed?
> 
> 
> Kris
> 
> 
> On 27 March 2018 at 16:55, Wido den Hollander <w...@widodh.nl> wrote:
> 
>>
>>
>> On 03/27/2018 10:29 AM, Jayapal Uradi wrote:
>>> Yes.
>>> In the case of a shared network there are only DHCP, DNS and user data. If
>>> the DHCP and DNS are external then I don’t want to launch a VR only for
>>> user data.
>>> Initially it was started to address only a specific requirement.
>>>
>>> We can have a config drive provider and it can be extended to all
>>> networks/zones.
>>>
>>
>> Like Lucian/Nux said this can be very helpful in other situations as
>> well. ConfigDrive allows for metadata to reach the Instance without
>> networking function or DHCP.
>>
>> In IPv6-only environments this can also work very well, so yes, it would
>> be great if this works in other offerings.
>>
>> When I have the time I'll dig into this and see if I can get it working.
>>
>> Wido
>>
>>> Thanks,
>>> Jayapal
>>>
>>>> On Mar 27, 2018, at 1:48 PM, Nux! <n...@li.nux.ro> wrote:
>>>>
>>>> Hi Jayapal,
>>>>
>>>> This has probably already been discussed and I am missing a lot of
>>>> context, but can you summarise why this is not applied in all zones?
>>>> A network independent userdata provider would seem like the lowest
>>>> common denominator, right?
>>>>
>>>> --
>>>> Sent from the Delta quadrant using Borg technology!
>>>>
>>>> Nux!
>>>> www.nux.ro
>>>>
>>>> - Original Message -
>>>>> From: "Jayapal Uradi" <jayapal.ur...@accelerite.com>
>>>>> To: "dev" <dev@cloudstack.apache.org>
>>>>> Sent: Tuesday, 27 March, 2018 05:34:44
>>>>> Subject: Re: ConfigDrive status
>>>>
>>>>> Hi Nux,
>>>>>
>>>>> It is developed only for advanced zone shared network (offering
>>>>> without any services).
>>>>> This feature is developed on xenserver, kvm and vmware hypervisor.
>>>>>
>>>>> It can be extended to other networks.
>>>>>
>>>>> -Jayapal
>>>>>> On Mar 26, 2018, at 9:56 PM, Nux! <n...@li.nux.ro> wrote:
>>>>>>
>>>>>> Thanks for all the feedback, I'll do some reading then.
>>>>>>
>>>>>> Lucian
>>>>>>
>>>>>> --
>>>>>> Sent from the Delta quadrant using Borg technology!
>>>>>>
>>>>>> Nux!
>>>>>> www.nux.ro
>>>>>>
>>>>>> - Original Message -
>>>>>>> From: "ilya musayev" <ilya.mailing.li...@gmail.com>
>>>>>>> To: "dev" <dev@cloudstack.apache.org>
>>>>>>> Sent: Monday, 26 March, 2018 17:03:30
>>>>>>> Subject: Re: ConfigDrive status
>>>>>>
>>>>>>> Lucian
>>>>>>>
>>>>>>> We reported 3-4 issues with config drive. It’s being worked on. It
>> does
>>>>>>> work - 

Re: ConfigDrive status

2018-03-27 Thread Wido den Hollander


On 03/27/2018 10:29 AM, Jayapal Uradi wrote:
> Yes. 
> In the case of a shared network there are only DHCP, DNS and user data. If the
> DHCP and DNS are external then I don’t want to launch a VR only for user data.
> Initially it was started to address only a specific requirement.
> 
> We can have a config drive provider and it can be extended to all 
> networks/zones.
> 

Like Lucian/Nux said this can be very helpful in other situations as
well. ConfigDrive allows for metadata to reach the Instance without
networking function or DHCP.

In IPv6-only environments this can also work very well, so yes, it would
be great if this works in other offerings.

When I have the time I'll dig into this and see if I can get it working.

Wido

> Thanks,
> Jayapal
> 
>> On Mar 27, 2018, at 1:48 PM, Nux! <n...@li.nux.ro> wrote:
>>
>> Hi Jayapal,
>>
>> This has probably already been discussed and I am missing a lot of context,
>> but can you summarise why this is not applied in all zones?
>> A network independent userdata provider would seem like the lowest common
>> denominator, right?
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> - Original Message -
>>> From: "Jayapal Uradi" <jayapal.ur...@accelerite.com>
>>> To: "dev" <dev@cloudstack.apache.org>
>>> Sent: Tuesday, 27 March, 2018 05:34:44
>>> Subject: Re: ConfigDrive status
>>
>>> Hi Nux,
>>>
>>> It is developed only for advanced zone shared network (offering without any
>>> services).
>>> This feature is developed on xenserver, kvm and vmware hypervisor.
>>>
>>> It can be extended to other networks.
>>>
>>> -Jayapal
>>>> On Mar 26, 2018, at 9:56 PM, Nux! <n...@li.nux.ro> wrote:
>>>>
>>>> Thanks for all the feedback, I'll do some reading then.
>>>>
>>>> Lucian
>>>>
>>>> --
>>>> Sent from the Delta quadrant using Borg technology!
>>>>
>>>> Nux!
>>>> www.nux.ro
>>>>
>>>> - Original Message -
>>>>> From: "ilya musayev" <ilya.mailing.li...@gmail.com>
>>>>> To: "dev" <dev@cloudstack.apache.org>
>>>>> Sent: Monday, 26 March, 2018 17:03:30
>>>>> Subject: Re: ConfigDrive status
>>>>
>>>>> Lucian
>>>>>
>>>>> We reported 3-4 issues with config drive. It’s being worked on. It does
>>>>> work - but not per agreed upon specifications.
>>>>>
>>>>> See below
>>>>>
>>>>>
>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-10287
>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-10288
>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-10289
>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-10290
>>>>>
>>>>> Regards
>>>>> Ilya
>>>>>
>>>>> On Mon, Mar 26, 2018 at 8:58 AM Dag Sonstebo <dag.sonst...@shapeblue.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Lucian,
>>>>>>
>>>>>> I’m maybe not the right person to answer this – but my understanding is 
>>>>>> it
>>>>>> only kicks in when you have a network offering without a VR, at which 
>>>>>> point
>>>>>> the metadata etc. is presented as the config drive. Happy to be corrected
>>>>>> on this.
>>>>>>
>>>>>> We did however have a really major gotcha 18 months ago – when a customer
>>>>>> did a CloudStack upgrade and ended up with new unexpected config drives
>>>>>> causing changes to all the VMware disk controller addressing – meaning 
>>>>>> VMs
>>>>>> wouldn’t boot, couldn’t see disks, etc. If you use VMware I would test
>>>>>> beforehand.
>>>>>>
>>>>>> Regards,
>>>>>> Dag Sonstebo
>>>>>> Cloud Architect
>>>>>> ShapeBlue
>>>>>>
>>>>>> On 26/03/2018, 14:50, "Nux!" <n...@li.nux.ro> wrote:
>>>>>>
>>>>>>   Hi,
>>>>>>
>>>>>>   I am interested in the ConfigDrive feature.
>>>>>>   Before I potentially waste time on it, is anyone around here using it
>>

Re: Dockerizing Cloudstack package builders

2018-03-27 Thread Wido den Hollander


On 03/26/2018 11:38 PM, Khosrow Moossavi wrote:
> Thanks for the feedback Wido.
> 
> I, personally, wouldn't put them in *cloudstack* repo itself, since these
> are complementary tools. To me it would
> make more sense to put them under apache/ namespace on GitHub as we already
> have bunch of apache/cloudstack-*
> repos. And under apache/ or cloudstackl/ namespace on Docker Hub (whichever
> makes more sense).
> 

Yes, probably best to do that!

> If community agrees with going forward with this proposal, I would be more
> than happy to transfer ownership of
> the GH and DH repos I created.
> 

Ok, good! I don't know how that works with ASF though.

> That's actually a great point. I was assuming these are going to be used by
> only developers/maintainers who have
> 

Probably, but it's always easy to use those containers and quickly get
packages built.

> the source code at their disposal. I'm gonna wait for others'
> feedback/suggestions before enhancing this though.
> 

Yes, it is. But looking good so far. Thanks again!

Wido

> On Mon, Mar 26, 2018 at 3:28 PM, Wido den Hollander <w...@widodh.nl> wrote:
> 
>>
>>
>> On 03/26/2018 06:21 PM, Khosrow Moossavi wrote:
>>> Hi community
>>>
>>> To make building CloudStack's RPM or DEB packages easier and more
>>> portable, and to reduce the need to have the *correct* hardware to build
>>> them, I've Dockerized them. This is essentially a follow-up of what
>>> Wido had done (only for deb packages though).
>>>
>>
>> Very nice! I didn't have the time to work on this, but it kept popping
>> up. I'd love to have an automated builder on download.cloudstack.org
>> which does this for us automatically.
>>
>>> The proposed Builders can be found here[1] and here[2] on Github and
>>> here[3] and here[4] on Docker Hub.
>>>
>>
>> Cool!
>>
>>> I would appreciate it if you can give your feedback on them and if need
>>> be, promote them on the official wiki/doc/repo.
>>>
>>
>> Getting them on an official repo should be possible I think; we could put
>> them in the /packaging directory of our main repo. Or maybe request a
>> second repo called 'cloudstack-docker' or something.
>>
>> Right now I see that your builder wants to have the cloudstack source
>> ready to be used (just as I did), but how about adding the option where
>> it downloads a tarball from the website if no source and just a version
>> is provided? Or it can do a git clone from Github and check out a
>> specific tag/version.
>>
>> Again, very much appreciated!
>>
>> Wido
>>
>>> [1]: https://github.com/khos2ow/cloudstack-rpm-builder
>>> [2]: https://github.com/khos2ow/cloudstack-deb-builder
>>> [3]: https://hub.docker.com/r/khos2ow/cloudstack-deb-builder/
>>> [4]: https://hub.docker.com/r/khos2ow/cloudstack-rpm-builder/
>>>
>>> Thanks
>>>
>>> Khosrow Moossavi
>>>
>>> Cloud Infrastructure Developer
>>>
>>> t 514.447.3456
>>>
>>> <https://goo.gl/NYZ8KK>
>>>
>>
> 


Re: Dockerizing Cloudstack package builders

2018-03-26 Thread Wido den Hollander


On 03/26/2018 06:21 PM, Khosrow Moossavi wrote:
> Hi community
> 
> To make building CloudStack's RPM or DEB packages easier and more portable,
> and to reduce the need to have the *correct* hardware to build them, I've
> Dockerized them. This is essentially a follow-up of what Wido had done
> (only for deb packages though).
> 

Very nice! I didn't have the time to work on this, but it kept popping
up. I'd love to have an automated builder on download.cloudstack.org
which does this for us automatically.

> The proposed Builders can be found here[1] and here[2] on Github and
> here[3] and here[4] on Docker Hub.
> 

Cool!

> I would appreciate it if you can give your feedback on them and if need be,
> promote them on the official wiki/doc/repo.
> 

Getting them on an official repo should be possible I think; we could put
them in the /packaging directory of our main repo. Or maybe request a
second repo called 'cloudstack-docker' or something.

Right now I see that your builder wants to have the cloudstack source
ready to be used (just as I did), but how about adding the option where
it downloads a tarball from the website if no source and just a version
is provided? Or it can do a git clone from Github and check out a
specific tag/version.

Again, very much appreciated!

Wido

> [1]: https://github.com/khos2ow/cloudstack-rpm-builder
> [2]: https://github.com/khos2ow/cloudstack-deb-builder
> [3]: https://hub.docker.com/r/khos2ow/cloudstack-deb-builder/
> [4]: https://hub.docker.com/r/khos2ow/cloudstack-rpm-builder/
> 
> Thanks
> 
> Khosrow Moossavi
> 
> Cloud Infrastructure Developer
> 
> t 514.447.3456
> 
> <https://goo.gl/NYZ8KK>
> 


Welcoming Mike as the new Apache CloudStack VP

2018-03-26 Thread Wido den Hollander
Hi all,

It's been a great pleasure working with the CloudStack project as the
ACS VP over the past year.

A big thank you from my side to everybody involved with the project in
the last year.

Hereby I would like to announce that Mike Tutkowski has been elected to
replace me as the Apache CloudStack VP in our annual VP rotation.

Mike has a long history with the project and I am happy to welcome him
as the new VP for CloudStack.

Welcome Mike!

Thanks,

Wido


Re: [VOTE] Move to Github issues

2018-03-26 Thread Wido den Hollander
+1

On 03/26/2018 08:33 AM, Rohit Yadav wrote:
> All,
> 
> Based on the discussion last week [1], I would like to start a vote to put
> the proposal into effect:
> 
> - Enable Github issues, wiki features in CloudStack repositories.
> - Both user and developers can use Github issues for tracking issues.
> - Developers can use #id references while fixing an existing/open issue in
> a PR [2]. PRs can be sent without requiring to open/create an issue.
> - Use Github milestone to track both issues and pull requests towards a
> CloudStack release, and generate release notes.
> - Relax requirement for JIRA IDs, JIRA still to be used for historical
> reference and security issues. Use of JIRA will be discouraged.
> - The current requirement of two(+) non-author LGTMs will continue for PR
> acceptance. The two(+) PR non-authors can advise resolution to any issue
> that we've not already discussed/agreed upon.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> Vote will be open for 120 hours. If the vote passes the following actions
> will be taken:
> - Get Github features enabled from ASF INFRA
> - Update CONTRIBUTING.md and other relevant cwiki pages.
> - Update project website
> 
> [1] https://markmail.org/message/llodbwsmzgx5hod6
> [2] https://blog.github.com/2013-05-14-closing-issues-via-pull-requests/
> 
> Regards,
> Rohit Yadav
> 

