Ansible 2.8: CloudStack related changes

2019-05-16 Thread Rene Moser
Hi all

As announced previously in autumn 2018, I am ending my active
maintenance of the CloudStack Ansible integration with the 2.8 release.

It started as a PoC during a weekend at a Swiss Linux hackers event,
"Turrican Days", in autumn 2014 and turned into a "thing" I have spent
many nights with. Take care of it.

The modules are in the best of shape: CloudStack is one of the few
Ansible integrations without any failing sanity checks. Special thanks
go to David Passante, who brought all the docs into shape!

We have automated integration tests based on a simulator Docker setup
[1], currently running CloudStack 4.11.2. The integration test code
coverage [2] is at >85%.

There are currently (only) 2 other members on the CloudStack team [3]
in Ansible.

Thanks again for all the support and appreciation I have received over
the years.

Ansible v2.8.0 is going to be released with the following
CloudStack-related changes. Thanks to all the contributors:

David Passante (18):
  Cloudstack: fix support for some VPC service capabilities (#45727)
  cs_account: Implement role parameter support (#46166)
  cs_account: add ability to bind accounts to LDAP (#46219)
  Cloudstack: New module cs_vlan_ip_range (#51597)
  cloudstack: streamline modules doc (#52509)
  cloudstack: streamline modules doc (part 2) (#52730)
  cloudstack: streamline modules doc (part 3) (#53412)
  cs_iso: fix missing param "is_public" (#53740)
  cs_network_offering: Add choice list for supported_services in
arg_spec (#53901)
  cloudstack: streamline modules doc (part 4) (#53874)
  cs_volume: add volumes extraction and upload features (#54111)
  cs_instance_facts: add a "nic" fact to return VM networking
information (#54337)
  cs_service_offering: update params in arg spec and documentation
(#54511)
  cs_network_offering: add a for_vpc parameter (#54551)
  cloudstack: streamline modules doc (part 5) (#54523)
  cs_service_offering: Implement customizable compute offers (#54597)
  cloudstack: streamline modules doc (part 6) (#54641)
  cs_vlan_ip_range: Update return values documentation (#54677)

Gregor Riepl (1):
  Cloudstack: Add password reset module (#47931)

Patryk D. Cichy (5):
  Add new Cloudstack module cs_image_store (#53617)
  Add new CloudStack module cs_physical_network (#54098)
  Add a new CloudStack module - cs_traffic_type (#54451)
  Enable adding VLAN IP ranges for Physical Networks (#54576)
  Proper handling of lower case name for InternalLbVm Service
Provider (#55087)

Rene Moser (13):
  cs_loadbalancer_rule_member: fix error handling (#46012)
  cs_instance: fix host migration without volume (#46115)
  cs_instance: doc: fix typo in examples (#46035)
  cs_staticnat: fix sanity (#46037)
  cs_ip_address: use query_api, fixes error handling (#46034)
  cs_resourcelimit: use query_api for error handling (#46036)
  cs_ip_address: fix vpc and network mutually exclusive (#47846)
  cs_network_acl_rule: fix doc and sanity (#47835)
  cs_template: fix KeyError on state=extracted (#48675)
  cs_instance: fix typos in defaults for ip/ip6_ipaddress (#49064)
  cs_physical_network: use name as param for network (#54602)
  cloudstack: fix E326 (#54657)

This will be my last announcement, and I will most probably be leaving
the CloudStack mailing lists in the next couple of days.

Best wishes
René

[1] https://github.com/ansible/cloudstack-test-container
[2]
https://codecov.io/gh/ansible/ansible/tree/devel/lib/ansible/modules/cloud/cloudstack
[3]
https://github.com/ansible/ansible/blob/0e0735f10ecb64634a4a1c9ac78a36743295417d/.github/BOTMETA.yml#L1471


Re: Any plan for 4.11 LTS extension ?

2019-03-16 Thread Rene Moser



On 3/14/19 10:48 AM, Giles Sirett wrote:
> Hi Haijiao
> 
> I would *expect*  4.13.0 (LTS) to be released in Q2, which  will supersede 
> the 4.11 branch as the current LTS branch

btw...

https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS:

"LTS releases starting with the first major release will be supported
for a minimum of 6 months after next LTS release or a total period of 24
months, whichever is greater"

--> next LTS Q2 2019 + 6 months = 4.11 LTS ends Q4 2019

Just my 2 cents

René


Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]

2019-02-13 Thread Rene Moser
Hi

I am seeing a failing Ansible integration test for 4.12.0.0 RC2
which passes on 4.11.2:

API says:
Network with UUID:19f750cf-de98-43ab-b565-1935a7c23f6e is in allocated
and needs to be implemented first before acquiring an IP address

https://gist.github.com/resmo/0c3a89055962456f68b27567bb5794f6

Tests
https://github.com/ansible/ansible/blob/devel/test/integration/targets/cs_firewall/tasks/main.yml#L14

I see a network in Allocated state, which gets into Implemented state
after a VM deploy, as expected. After this, the test passes. But it
seems the API doesn't allow acquiring IP addresses while the network is
in Allocated state. Is this intentional?

Regards
René


Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-21 Thread Rene Moser
We used "Debian GNU/Linux 8 (64-bit)", and in the release notes I see
"OS Type: Other Linux 64-bit (or Debian 8.0 or 9.0 64-bit)" for VMware [1].

However, let me change this to "Other Linux 64-bit" and verify whether
it makes any difference here.

René

[1]
http://docs.cloudstack.apache.org/en/4.11.1.0/upgrading/upgrade/upgrade-4.5.html#update-system-vm-templates



Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-19 Thread Rene Moser
Hi

On 11/19/18 4:43 PM, Zehnder, Samuel wrote:

> 1) Memory Consumption on vSphere
> 
> vRouter are starting to swap with low memory available, this also starts
> happening after increasing memory size to 512m. Interestingly, there's
> no process nor cache using the memory as far as "top", "ps", or other
> tools report.

We are seeing the same; it looks like a memory leak to us, but we
assumed it might be related to the way we monitor the routers.

This is on vSphere 5.5. We also increased the memory to 1 GB, but it
only "takes more time" to fill up.

To us, it seems to be related to SSH connections or logins, and we
found a potentially related issue on GitHub indicating a memory leak in
systemd-logind:

https://github.com/systemd/systemd/issues/8015

We were able to force memory usage down very quickly (200 MB within
minutes) by running 2 parallel executions of the following command,
SSHing into a VR from the management host (sometimes a Ctrl-C and
restart is required to trigger it):

while true; do ssh  'free'; done

with the following config

# cat .ssh/config
IdentityFile /var/cloudstack/management/.ssh/id_rsa
Port 3922
ControlPath ~/.ssh/master-%l-%r@%h:%p
ControlMaster auto

Host 10.100.10.*
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null

Regards
René


Re: [VOTE] Apache CloudStack 4.11.2.0 RC5

2018-11-17 Thread Rene Moser
+0. I won't have a chance to test it, but RC4 already looked not too bad
to me.

Thanks for all the work!


On 11/13/18 2:59 PM, Paul Angus wrote:
> Hi All,
> 
> I've created a 4.11.2.0 release (RC5), with the following artefacts up for 
> testing and a vote:
> 
> 
> Git Branch and Commit SH:
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181113T0924
> Commit: 5aae410dfce2bef5cc21a0892370cb5d0628f681
> 
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/
> 
> PGP release keys (signed using 51EE0BC8):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> The vote will be open for 72 hours - until 14:00 GMT on Friday 16th Nov.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate 
> "(binding)" with their vote?
> 
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
> 
> Additional information:
> 
> For users' convenience, I've built packages from 
> 5aae410dfce2bef5cc21a0892370cb5d0628f681 and published RC5 repository here:
> http://packages.shapeblue.com/testing/41120rc5/
> 
> The release notes are still work-in-progress, but the systemvm template 
> upgrade section has been updated.
> 
> 4.11.2.0 systemvm templates are available from here:
> http://packages.shapeblue.com/testing/systemvm/41120rc5/
> 
> Only the following changes have been added to RC5:
> 
> | Version  | Github   | Description
> | 4.11.2.0 | `#3018`_ | Prevent error on GroupAnswers on VR creation
> | 4.11.2.0 | `#3007`_ | Add missing ConfigDrive entries on existing zones after upgrade
> | 4.11.2.0 | `#2980`_ | [4.11] Fix set initial reservation on public IP ranges
> | 4.11.2.0 | `#3010`_ | Fix DirectNetworkGuru canHandle checks for lowercase isolation methods
> 
> .. _`#3012`: https://github.com/apache/cloudstack/pull/3012
> .. _`#3018`: https://github.com/apache/cloudstack/pull/3018
> .. _`#3007`: https://github.com/apache/cloudstack/pull/3007
> .. _`#2980`: https://github.com/apache/cloudstack/pull/2980
> .. _`#3010`: https://github.com/apache/cloudstack/pull/3010
> 
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 


Re: [VOTE] Apache CloudStack 4.11.2.0 RC4

2018-11-02 Thread Rene Moser

On 10/30/2018 05:10 PM, Paul Angus wrote:
> Git Branch and Commit SH:
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181030T1040
>
> Commit: 840ad40017612e169665fa799a6d31a23ecad347

+1 (binding): Good job!

I've tested:
- Upgraded from 4.11.1 to 4.11.2 (VMware 5.5)
- Router / SystemVM upgrades
- Account, User, Project handling
- Advanced networking (firewall, static NAT)
- VPC and ACL rules
- User VMs create/update/destroy


Re: [VOTE] Apache CloudStack 4.11.2.0 RC3

2018-10-18 Thread Rene Moser
On 10/18/2018 12:54 PM, Paul Angus wrote:
> The vote will be open until the middle of next week, 26th September 2018.
s/September/October/ right?


Re: Ansible 2.7: CloudStack related changes and future

2018-10-17 Thread Rene Moser
...and thanks for all the fish... and for the kind words, and a special
thanks to Exoscale for their continuous support.

It has been a great time, with many opportunities I was able to master.
I have followed the vision to make CloudStack a thing in Ansible.

However, the reason I started the development was to make my daily job
easier. It makes me incredibly happy to see that it has gone way beyond
this goal; CloudStack has become a better solution for users and for
your customers.

Even though it was a hobby, I took it seriously: to develop things the
best I could and to commit to improving constantly as I learned better.
To be there and to work on it. To be honest, my life was not always
balanced during the last 3 years.

Leaving my job also means leaving CloudStack. Leaving the community and
my work behind still makes me a bit sad, and it took a while to take
this step, but it makes room for new challenges to solve.

The future is bright.

René


Re: Ansible 2.7: CloudStack related changes and future

2018-10-08 Thread Rene Moser
Sorry, I forgot to mention that David Passante is already part of the
Ansible CloudStack team, but at least one other person would be helpful.

Thanks again
René


Ansible 2.7: CloudStack related changes and future

2018-10-08 Thread Rene Moser
Hi all

First, please note that I am leaving my current job by the end of
November, and I don't see CloudStack playing any role in my professional
future.

As a result, I officially announce the end of my maintenance of the
Ansible CloudStack modules with the release of Ansible v2.8.0 in spring
2019.

If anyone is interested in taking over, please let me know so I can
officially introduce him/her to the Ansible community.

Thanks for all the support and joy I have had with CloudStack and the
community!

Ansible v2.7.0 is released with the following CloudStack-related changes:

David Passante (1):
  cloudstack: new module cs_disk_offering (#41795)

Rene Moser (4):
  cs_firewall: fix idempotence and tests for cloudstack v4.11 (#42458)
  cs_vpc: fix disabled or wrong vpc offering taken (#42465)
  cs_pod: workaround for 4.11 API break (#43944)
  cs_template: implement update and revamp (#37015)

Yoan Blanc (1):
  cs instance root_disk size update resizes the root volume (#43817)

nishiokay (2):
  [cloudstack] fix cs_host example (#42419)
  Update cs_storage_pool.py (#42454)


Best wishes
René


Re: [cloudstack] branch master updated (1fa4f10 -> fcc87d9)

2018-10-08 Thread Rene Moser
On 10/08/2018 09:24 AM, Rohit Yadav wrote:
> +1 for protecting master and recent 4.{11,10,9...} branches from force pushes.

One of those Mondays... ;)

+1


Re: [4.11.1] VR memory leak?

2018-10-01 Thread Rene Moser
On 10/01/2018 01:41 PM, Simon Weller wrote:
> Any obvious processes using memory?

No, unfortunately not. Everything looks calm, but real memory usage
increases slowly.

We are still trying to find the source issue.

René


[4.11.1] VR memory leak?

2018-10-01 Thread Rene Moser
Hi

We observe a suspicious pattern in memory usage (see the graph of free
memory: https://photos.app.goo.gl/sffEmBEoZ1gbRd18A).

We restarted the VR last Friday; today, on Monday, we have less than
20% of 1 GB memory left.

The memory is used memory, not cached (also see
https://photos.app.goo.gl/b9eAd3xoETvDVKzH9).

Does anyone see a similar pattern? Does anyone have a chance to test
4.11.2 system VMs against this issue?

Regards
René


Re: [VOTE] Apache CloudStack 4.11.2.0 RC2

2018-09-28 Thread Rene Moser
Hi

On 09/28/2018 05:21 PM, Boris Stoyanov wrote:
> Hi guys,
> 
> I’ve did some upgrade testing of RC2. I did upgraded database successfully 
> from 4.5.2.2, 4.9.3 and 4.11.1, but unfortunately I’ve run into a 
> connectivity issue between vmware 4.5u3 environments. 
> 
> Looks like TLS1.2 is not supported at first glance.
> 
>   Caused by: javax.net.ssl.SSLHandshakeException: Server chose TLSv1, but 
> that protocol version is not enabled or not supported by the client.

> I’m guessing we’ll need an RC3. 

This is a known issue and also exists in 4.11 (upgrade from 4.5 to
4.11.1).

It probably only needs some docs:

In /etc/cloudstack/management/java.security.ciphers, changing the line

jdk.tls.disabledAlgorithms=SSLv2Hello, SSLv3, TLSv1, TLSv1.1, DH keySize < 128, RSA keySize < 128, DES keySize < 128, SHA1 keySize < 128, MD5 keySize < 128, RC4

to

jdk.tls.disabledAlgorithms=SSLv2Hello, SSLv3, DH keySize < 128, RSA keySize < 128, DES keySize < 128, SHA1 keySize < 128, MD5 keySize < 128, RC4

solves it.
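
For illustration, the edit can be scripted. This is only a sketch (the
helper name `allow_legacy_tls` is ours, not a CloudStack or JDK tool):
it drops the two TLS entries from a `jdk.tls.disabledAlgorithms`
property line, leaving everything else intact.

```python
def allow_legacy_tls(line: str) -> str:
    """Drop the TLSv1/TLSv1.1 entries from a jdk.tls.disabledAlgorithms line."""
    key, _, value = line.partition("=")
    kept = [algo.strip() for algo in value.split(",")
            if algo.strip() not in ("TLSv1", "TLSv1.1")]
    return key + "=" + ", ".join(kept)

before = ("jdk.tls.disabledAlgorithms=SSLv2Hello, SSLv3, TLSv1, TLSv1.1, "
          "DH keySize < 128, RSA keySize < 128, DES keySize < 128, "
          "SHA1 keySize < 128, MD5 keySize < 128, RC4")
after = allow_legacy_tls(before)
print(after)  # TLSv1 and TLSv1.1 are gone; all other entries are kept
```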

Regards
René


Re: VRs swapping with 256 MB RAM

2018-09-21 Thread Rene Moser
Hi

Just to clarify: we had 256 MB in our lab only.

The prod VRs have 1 GB RAM and 4 CPUs: 1 CPU per network interface for
optimal performance, plus 1 for the OS.
Regards
René

On 09/20/2018 02:35 PM, Rakesh Venkatesh wrote:
> Hello Rene
> 
> Even for VR's running on KVM, 256MB is really less. Thats why we offer
> extra VR with 2 cores and 1GB RAM as another option so that customers can
> use it instead of the default 256MB and 1 core.
> 
> On Tue, Sep 18, 2018 at 5:56 PM Rene Moser  wrote:
> 
>> Hi
>>
>> While running test for a 4.11.1 (VMware) upgrade in our lab, we run into
>> low memory / swapping of VRs having 256 MB RAM. After 2-3 days it became
>> critical because the management server connections to VRs took very
>> long, minutes, this resulted in many more problems all over.
>>
>> Make sure your VRs have enough RAM.
>>
>> Regards
>> René
>>
> 
> 


VRs swapping with 256 MB RAM

2018-09-18 Thread Rene Moser
Hi

While running tests for a 4.11.1 (VMware) upgrade in our lab, we ran
into low memory / swapping on VRs having 256 MB RAM. After 2-3 days it
became critical: the management server connections to VRs took very
long (minutes), which resulted in many more problems all over.

Make sure your VRs have enough RAM.

Regards
René


Re: API Break in pods API

2018-08-16 Thread Rene Moser
Thanks, Nicolas, for the pointer.

Breaking the API is bad; even worse, the API docs still suggest
updating start/end IPs via the updatePod API AND no error is returned
(silent failure).

We left users behind here.


On 08/17/2018 01:31 AM, Nicolas Vazquez wrote:
> Hi Rene,
> 
> 
> As you mentioned it was part of the work introduced by PR 2048. I've been 
> taking a look at its FS on this link: 
> 
>  
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Expansion+of+Management+IP+Range
> 
> 
> Under
>  'Architecture and Design Description' I found out these points:
> 
>   *   Now, we can only update the name, gateway and netmask of a Pod using 
> updatePod API.
>   *   We cannot update StartIp and EndIp of the pod. Because we cannot 
> replace them with just one range.
>   *   Even if we pass startIp and endIp parameters in updatePod API. It is 
> considered as NULL thereafter.

This should not be handled as NULL, because as a user you can't see
that updating the start/end IP did not work. Instead, an error must be
returned explaining how to update the IPs.

Regards
René



API Break in pods API

2018-08-10 Thread Rene Moser
Hi

In https://cloudstack.apache.org/api/apidocs-4.11/apis/updatePod.html,
the updatePod API shows that startip and endip can be updated.
However, since 4.11, I can't do this anymore.

Can anyone confirm? Did I miss something?

This (possible) API break in 4.11 for pods seems to have been
introduced by https://github.com/apache/cloudstack/pull/2048

See issues:
- https://github.com/apache/cloudstack/issues/2797
- https://github.com/apache/cloudstack/issues/2735

I haven't received any answer from the author. How do we proceed to
restore the API?

Regards
René




Re: Github Issues

2018-07-18 Thread Rene Moser
Hi

On 07/17/2018 02:01 PM, Marc-Aurèle Brothier wrote:
> Hi Paul,
> 
> My 2 cents on the topic.
> 
> people are commenting on issues when it should by the PR and vice-versa
>>
> 
> I think this is simply due to the fact that with one login you can do both,
> versus before you had to have a JIRA login which people might have tried to
> avoid, preferring using github directly, ensuring the conversation will
> only be on the PR. Most of the issues in Jira didn't have any conversation
> at all.
> 
> But I do feel also the pain of searching the issues on github as it's more
> free-hand than a jira system. At the same time it's easier and quicker to
> navigate, so it ease the pain at the same time ;-)
> I would say that the current labels isn't well organized to be able to
> search like in jira but it could. For example any label has a prefix
> describing the jira attribute type (component, version, ...) Then a bot
> scanning the issue content could set some of them as other open source
> project are doing. The bad thing here is that you might end up with too
> many labels. Maybe @resmo can give his point of view on how things are
> managed in Ansible (https://github.com/ansible/ansible/pulls - lots of
> labels, lots of issues and PRs). I don't know if that's a solution but
> labels seem the only way to organize things.

Personally, I don't care much whether it's JIRA or GitHub issues.
GitHub issues have worked pretty well for me so far.

However, we don't use all the things that make work easier with GitHub
issues. I assume we invested much more effort in shaping JIRA the way
we wanted it; now we assume that GitHub just works?

The benefit of GitHub issues is that they have an extensive API which
lets you automate. There are many helpful tools making our life easier.

Let a bot do the issue labeling, workflow handling and user guiding,
and even merge a PR after CI has passed and 2 comments have a LGTM.

Look at https://github.com/kubernetes/kubernetes e.g.

In short: if we want to automate things and evolve, GitHub may be the
better platform; if we want to keep things manual, then JIRA is
probably more suitable.

Regards
René


Re: [Proposal] new API for recover routers

2018-07-11 Thread Rene Moser
Hi Dag

On 07/11/2018 12:08 PM, Dag Sonstebo wrote:
> Hi Rene,
> 
> So if you have upgraded 4.5 to 4.11 (or any version upgrade where new system 
> VM templates are in play) then your 4.5 VRs are by definition destroyed and 
> you are running with 4.11 VRs. As a result there is nothing to recover – the 
> rollback mechanism in this case is to roll back DB, destroy all 4.11 VRs (and 
> anything else unknown to the original DB) – then start up the 4.5 management 
> servers. At this point CloudStack management will either automatically build 
> “new” 4.5 VRs – or worst case you do a network restart which recreates the VR.

Yes, we know the downgrade procedure. The nice thing about the upgrade
is that the VR names stay as they are, and we thought it would be nice
if this were also possible while "downgrading", or more precisely:
"your VRs are not in a version the running ACS supports --> recover
them".

The thing is, "destroyed" VRs are not coming back automatically, are
they? They are only recreated on e.g. a network restart or an API call
related to a VR (user VM stop/start, etc.), right?

Regards
René



[Proposal] new API for recover routers

2018-07-11 Thread Rene Moser
Hi

While testing v4.11.1 in the lab and upgrading the routers, we came
across the need to "downgrade" the routers after rolling back ACS to
v4.5 (testing the upgrade process).

So my proposal is to have a new API to recover the routers to the
system VM template, AFAICS similar to
https://cloudstack.apache.org/api/apidocs-4.11/apis/recoverVirtualMachine.html
but for routers.

Any thoughts?

Regards
René



Re: [DISCUSS] Blocking the creation of new Basic Networking zones

2018-06-20 Thread Rene Moser



On 06/20/2018 01:03 PM, Wido den Hollander wrote:
> 
> 
> On 06/20/2018 12:31 AM, Tutkowski, Mike wrote:
>> If this initiative goes through, perhaps that’s a good time to bump 
>> CloudStack’s release number to 5.0.0?
>>
> 
> That's what I said in my e-mail :-) But yes, I agree with you, this
> might be a good time to bump it to 5.0
> 
> With that we would:
> 
> - Drop creation of new Basic Networking Zones
> - Support IPv6 in shared IPv6 networks
> - Java 9?
> - Drop support for Ubuntu 12.04
> - Other fancy stuff?

- Versioned API: keep the v1 API (< v5.0.0) and create a v2 API (>=
v5.0.0) where we fix all inconsistencies (the ACL API generally, paging
does not always work, returned keys are sometimes camel case
(crossZone), and so on)

> - Support ConfigDrive in all scenarios properly


Re: 4.11.1 install feedback

2018-05-23 Thread Rene Moser
Hi again

Regarding the router: the router looks more stable (Rohit's lab
version). However, we still need to reboot it manually after the first
provisioning, otherwise the management server does not get access via
SSH.

With a lot of firewall rules and many VMs in an advanced network, it
still takes a "hell of a time" to get the VR fully configured.

This is on VMware 6.5. I think there is no automated testing for this
env, right?
env right?

Regards
René


Re: 4.11.1 install feedback

2018-05-22 Thread Rene Moser

On 05/22/2018 06:08 PM, Paul Angus wrote:
> Had you 'upgraded' to dynamic roles in your 4.9 environment Rene?

Right, it was for 4.9. Yes, I did that.

The "session expired" issue seems only related to the UI. API keys
still work.





Re: 4.11.1 install feedback

2018-05-22 Thread Rene Moser
Appending some logs:

2018-05-22 17:45:49,929 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-11:ctx-0a395356) (logid:33635259) ===START===
10.184.2.226 -- GET  command=listZones=json&_=1527003949904
2018-05-22 17:45:49,932 DEBUG [c.c.a.ApiServer]
(qtp1386767190-11:ctx-0a395356 ctx-7a746d15) (logid:33635259) CIDRs from
which account 'Acct[2-admin]' is allowed to perform API calls:
0.0.0.0/0,::/0
2018-05-22 17:45:49,937 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-11:ctx-0a395356 ctx-7a746d15) (logid:33635259) ===END===
10.184.2.226 -- GET  command=listZones=json&_=1527003949904
2018-05-22 17:45:50,025 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-16:ctx-8f4234ef) (logid:53421488) ===START===
10.184.2.226 -- GET  command=cloudianIsEnabled=json&_=1527003949976
2018-05-22 17:45:50,028 DEBUG [c.c.a.ApiServer]
(qtp1386767190-16:ctx-8f4234ef ctx-1c7598cc) (logid:53421488) CIDRs from
which account 'Acct[2-admin]' is allowed to perform API calls:
0.0.0.0/0,::/0
2018-05-22 17:45:50,032 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-16:ctx-8f4234ef ctx-1c7598cc) (logid:53421488) ===END===
10.184.2.226 -- GET  command=cloudianIsEnabled=json&_=1527003949976
2018-05-22 17:45:50,093 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-17:ctx-16bff054) (logid:1abefa91) ===START===
10.184.2.226 -- GET  command=quotaIsEnabled=json&_=1527003950074
2018-05-22 17:45:50,097 DEBUG [c.c.a.ApiServer]
(qtp1386767190-17:ctx-16bff054 ctx-b0e8c230) (logid:1abefa91) CIDRs from
which account 'Acct[2-admin]' is allowed to perform API calls:
0.0.0.0/0,::/0
2018-05-22 17:45:50,098 DEBUG [c.c.a.ApiServer]
(qtp1386767190-17:ctx-16bff054 ctx-b0e8c230) (logid:1abefa91) The given
command 'quotaIsEnabled' either does not exist, is not available for
user, or not available from ip address '/10.184.2.226'.
2018-05-22 17:45:50,098 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-17:ctx-16bff054 ctx-b0e8c230) (logid:1abefa91) ===END===
10.184.2.226 -- GET  command=quotaIsEnabled=json&_=1527003950074
2018-05-22 17:45:50,232 DEBUG [c.c.a.ApiServlet]
(qtp1386767190-20:ctx-5a7f0d91) (logid:9a3f5b56) ===START===
10.184.2.226 -- GET  command=listZones=json&_=1527003950156
2018-05-22 17:45:50,232 DEBUG [c.c.a.ApiServer]
(qtp1386767190-20:ctx-5a7f0d91 ctx-382a670b) (logid:9a3f5b56) Expired
session, missing signature, or missing apiKey -- ignoring request.
Signature: null, apiKey: null


On 05/22/2018 05:39 PM, Rene Moser wrote:
> Hi
> 
> I ran the update from 4.9.3 to 4.11.1 (rohit lab) and got into the issue
> where I can still not login with admin after upgrade. I immediately get
> a "session expired" in the UI. I remember an issue related to roles but
> can not find the "workaround" and thought it were fixed for 4.11.1.
> 
> Any help is appreciated.
> 
> René
> 


4.11.1 install feedback

2018-05-22 Thread Rene Moser
Hi

I ran the update from 4.9.3 to 4.11.1 (Rohit's lab) and ran into the
issue where I still cannot log in as admin after the upgrade. I
immediately get a "session expired" in the UI. I remember an issue
related to roles but cannot find the "workaround", and I thought it was
fixed for 4.11.1.

Any help is appreciated.

René


Re: Default API response type: XML -> JSON

2018-04-24 Thread Rene Moser
I am +1, but with a versioned API (v2), because this would most likely
break many API implementations.

René

On 04/23/2018 03:34 PM, Marc-Aurèle Brothier wrote:
> Hi everyone,
> 
> I thought it would be good to move from XML to JSON by default in the
> response of the API if no response type is sent to the server along with
> the request. I'm wondering that's the opinion of people on the mailing list.
> 
> Moreover, if anyone knows a tool working with the API in XML can you list
> them, so I could check the code and see if the change can be done without
> breaking it.
> 
> PR to change default response type: (
> https://github.com/apache/cloudstack/pull/2593).
> If this change would cause more trouble, or is not needed in your opinion,
> I don't mind to close the PR.
> 
> Kind regards,
> Marc-Aurèle
> 


[DISCUSS] VR upgrading workflow thoughts

2018-04-02 Thread Rene Moser
Hi

One of the biggest challenges in CloudStack is upgrading VRs in an
advanced networking setup.

Even though, with the latest efforts made by ShapeBlue and Rohit (nice
work), the replacement of a VR no longer disconnects the services
behind the router, there is still room for improvement.

Currently, the issue we still face for clouds in production using
advanced networking is a valid rollback path.

Today the upgrade path works like this (correct me if I am wrong):

1. upload new template
2. upgrade management service
3. rolling out new VRs

The issue is that the VRs cannot be fully used until upgraded (new
instances, new firewall rules, etc. are not possible).

Our vision is that a new VR template would also be compatible with the
previous version of the CloudStack management service. This would allow
rolling out new VRs _before_ upgrading the management service:

1. upload new template
2. rolling out new VRs
3. upgrade management service

What are the benefits of this?

It would allow testing the VRs before the management service upgrade
and rolling back to the previous template (or uploading a fixed
template) in case of issues.

A rollback of the management service would not necessarily result in a
redeployment of VRs, as they would still be compatible.

Any thoughts?


Re: [VOTE] Move to Github issues

2018-03-26 Thread Rene Moser
+1


Re: Cloudstack metrics, usage collection and reporting?

2018-03-08 Thread Rene Moser
Only have
https://github.com/resmo/awesome-cloudstack#montitoring-and-graphs




Re: [4.11] Management to VR connection issues

2018-02-26 Thread Rene Moser


On 02/26/2018 12:41 PM, Rohit Yadav wrote:

> - If waiting for ssh and apache2 as part of post-init solves the issue, this 
> would require a new systemvmtemplate as the systemd scripts cannot be changed 
> or make effect during first boot.

The waiting for SSH was not the issue; it was a result.

The hang of cloud-postinit, caused by p.wait() when having a ton of
iptables rules, was the issue. But this is addressed already; it should
be fine.

"systemctl list-jobs" shows "no pending jobs" anymore, so the boot has
completed.

After that, the VR should be accessible via SSH (3922) by management,
right? But it is not.

Did you see the changes after a reboot (please compare the screenshots
of the "ip addr" output I sent)? After that reboot/network change, SSH
works...


> - I think the additional nics always used to show up for vmware, there is a 
> global setting to configure this (extra nics for vmware, probably because 
> older versions did not support dynamic nic addition on vmware vrs).

On 4.5.2 we only see 4 NICs; in 4.11 we see 5 of them. We were just
wondering if this could result in an issue. Which global setting would
that be?


> - For VR timeouts, see logs and check if from management server host you're 
> able to SSH into the VR using the private IP and port 3922. See the 
> troubleshooting wiki: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSVM%2C+templates%2C+Secondary+storage+troubleshooting

Yes, after a manual reboot of the VR we can SSH in, as I wrote. Without
a reboot of the VR, we get a "no route to host". So it seems not even
an ARP ping is working.


> - Can you share/check which processes are consuming the RAM, 256MB ram is 
> usually enough for non-redundant VRs. (share output of top or check using 
> htop?). Make sure to use a latest Linux version (any Debian variant such as 
> Debian 8, 9 or Ubuntu 16.04+ may also work). The issue is vCenter/ESXi 6.5 
> for some reason, gives lower RAM compared to 6.0 and 5.5 and has poor support 
> for legacy os. I had faced/found this issue while testing redundant VRs which 
> take more RAM usually than normal VRs.

We are using the ShapeBlue VR template (your template ;)).

So the man page says:
https://manpages.debian.org/stretch/initscripts/tmpfs.5.en.html

Unfortunately, only an fstab entry worked for me; setting
/etc/default/tmpfs didn't.

https://github.com/apache/cloudstack/pull/2468/commits/bd882a8f80763595a89a3b74330500e1965bfda3
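For reference, the kind of fstab entry that worked was along these
lines (illustrative only; the size= value is an assumption, not
necessarily the one used in the PR above):

```
# Illustrative /etc/fstab entry; the size= value is an assumption.
tmpfs  /run  tmpfs  nosuid,noexec,mode=755,size=64M  0  0
```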








Re: [4.11] Management to VR connection issues

2018-02-26 Thread Rene Moser
Hi Paul

On 02/26/2018 11:35 AM, Paul Angus wrote:
> Rene,
> Have you checked the OS getting applied on vCenter?

It's "Other 3.x or later Linux (64-bit)" also see
https://photos.google.com/share/AF1QipPNnFnP8xMIHgYCQ0rZtDsyeVGoJIyHjPqWP8BP-hiNRnd0CxHuc8xn5GetIrpocQ/photo/AF1QipPIeKUnVU4hQ8_AuNT8galxvDyPsMqjkMkIqKPV?key=WmlJRUhMNnh3cXZheFp0WDZuMFZtTmpOQlo2Y2NR





Re: [4.11] Management to VR connection issues

2018-02-26 Thread Rene Moser
Hi again

We found the main problem.

== cloud-postinit hang

Having many iptables rules caused cloud-postinit to hang for 10 minutes
until it was killed by systemd. As a result, the SSH daemon was not
started for those 10 minutes, because it is configured to start after
cloud-postinit.

It seems the issue was already fixed by
https://github.com/apache/cloudstack/commit/ce67726c6d3db6e7db537e76da6217c5d5f4b10e

== VR still needs manual reboot

However, we still notice adapter changes after a reboot: see the
before/after screenshots of "ip addr" in
https://photos.app.goo.gl/9XsjOJjLqQ9SRjYV2. We still need to manually
reboot the VR to make the network actually work.

== VR has too many adapters?

The next thing we noticed is that there are many network adapters
(NICs) on this non-VPC router (see the vCenter screenshot in
https://photos.app.goo.gl/9XsjOJjLqQ9SRjYV2). Adapters 4 and 5 seem
unnecessary. Any comments on that?

== VR with 256 MB RAM does not work

The next issue we found is that the VR must have more than 256 MB RAM.
Otherwise systemd complains that the daemon cannot be reloaded because
the RAM disk on /run has too little space.

Feb 23 16:24:36 r-413-VM postinit.sh[1089]: Failed to reload daemon:
Refusing to reload, not enough space available on /run/systemd.
Currently, 8.6M are free, but a safety buffer of 16.0M is enforced.
root@r-413-VM:~# df -h /run/
Filesystem  Size  Used Avail Use% Mounted on
tmpfs16M  7.2M  8.7M  46% /run

Increasing the RAM to 512 MB helped:

root@r-413-VM:~# df -h /run/
Filesystem  Size  Used Avail Use% Mounted on
tmpfs41M  7.8M   34M  19% /run

We are unsure whether this can be tuned at the systemd level; we
haven't found a way yet.

== VR API Command timeouts

When executing commands related to the VR, e.g. restart network or
start/stop router, the command does not reach the vCenter API and times
out. We are not yet sure why.

== VR minor fixes

We also fixed 2 minor things along the way:

* rsyslogd config syntax issue
* IMHO we should also start apache2 after cloud-postinit

Also see https://github.com/apache/cloudstack/pull/2468

Regards
René


Re: [4.11] Management to VR connection issues

2018-02-22 Thread Rene Moser

On 02/20/2018 08:04 PM, Rohit Yadav wrote:
> Hi Rene,
> 
> 
> Thanks for sharing - I've not seen this in test/production environment yet. 
> Does it help to destroy the VR and check if the issue persists? Also, is this 
> behaviour system-wide for every VR, or VRs of specific networks or topologies 
> such as VPCs? Are these VRs redundant in nature?

We have non-redundant VRs, and we haven't looked at VPC routers yet.

The current analysis shows the following:

1. We started the process to upgrade an existing router.
2. The router gets destroyed and re-deployed with the new 4.11
template, as expected.
3. The router OS has started, but the ACS router state stays
"starting". When we log in via the console, we see some actions in
cloud.log. At this point the router is left in this state and gets
destroyed after the job timeout.
4. We reboot manually at the OS level; the VR gets rebooted.
5. After the OS has booted, the ACS router state switches to "Running".
6. We can log in by SSH. However, ACS still shows the router as
"requires upgrade" (even though the OS has already booted with the
4.11 template).
7. When we upgrade, the same process (points 1-3) happens again. Feels
like a deadlock.


Logs:
https://transfer.sh/DdTtH/management-server.log.gz

We continue our investigations.

Regards
René



[4.11] Management to VR connection issues

2018-02-20 Thread Rene Moser
Hi

We upgraded from 4.9 to 4.11. VMware 6.5.0. (Testing environment).

The VR upgrade went through, but we noticed that the communication
between the management server and the VR is not working properly.

We do not yet fully understand the issue; one thing we noted is that
the network configs do not seem to be bound to the same interfaces
after every reboot. As a result, after one reboot you may be able to
connect to the VR by SSH, and after another reboot you can't anymore.

For example, the network name eth0 switched from NIC id 3 to NIC id 4
after a reboot.
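One generic mitigation sketch for unstable interface naming (purely
hypothetical; the MAC address below is made up, and I have not verified
this approach inside the VR template) is a persistent-net udev rule
that pins interface names to MAC addresses:

```
# Hypothetical /etc/udev/rules.d/70-persistent-net.rules entry;
# the MAC address is made up for illustration.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="02:00:4c:11:22:33", NAME="eth0"
```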

The VR is stuck in the "starting" state, and as a consequence we get
many related issues: no VM deployments (VMs stuck in the starting
state), VM expunging failures (cleanup fails), and so on.

Has anyone experienced similar issues?

Regards
René


Re: [DISCUSS] DB upgrade issue workaround for 4.10.0.0 users upgrading to 4.11.0.0

2018-02-14 Thread Rene Moser
On 02/14/2018 06:21 PM, Daan Hoogland wrote:
> the -x would only add it to the comment, making it harder to find. As for
> multiple stable branches: merging forward always follows all branches
> forward, so a fix on 4.9 would be merged forward to 4.10, and then 4.10 would
> be merged forward again to 4.11 and finally to master. Of course there is
> always work to do in terms of solving merge conflicts, but these are
> generally less than the porting work, as the order of any commits to the
> intermediates is always preserved.

I'm not saying this workflow is "bad" or does not work "technically".

To me, it looks "hard" to decide which branch a fix should go to in the
first place. And in this workflow, you basically have to decide it
_before_ the merge: to 4.11? Or even 4.10? And if I should have merged
to 4.10 but merged to 4.11, now what?

In contrast, with the cherry-pick workflow you decide _after_ the fact
which versions the fix should be backported to.

To me, this seems much more convenient. But I can live with that.

René


Re: [DISCUSS] DB upgrade issue workaround for 4.10.0.0 users upgrading to 4.11.0.0

2018-02-14 Thread Rene Moser
Hi Daan

On 02/14/2018 05:26 PM, Daan Hoogland wrote:
> Rene,
> 
> The issue is certainly not due the git workflow but to upgrade schemes we
> have.
> 
> The result of this workflow for us is that it is easier to find to which
> branches a particular commit is added as by merging forward the commit id
> of the actual fix doesn't change. so instead of looking in each branch for
> a bit of code you can look for a commit id on a branches log.

Ah I see.

However, the same can be achieved by adding -x to cherry-picks (which
records the origin commit id), without the downside that a fix can go
into "only" one stable branch.

Keep in mind, we certainly do have more than one stable branch at a
time (4.11-lts, 4.12). A fix should be applicable to any stable branch.
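As a sketch of what -x records (a throwaway repo; file and branch names
are made up for illustration):

```shell
# Illustrative throwaway repo showing that `git cherry-pick -x` records
# the origin commit id in the backported commit's message body.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name Dev
echo base > f.txt && git add f.txt && git commit -qm "base"
git branch stable-4.11                    # stable branch forks at "base"
echo fix >> f.txt && git commit -qam "fix: important bugfix"
fix=$(git rev-parse HEAD)
git checkout -q stable-4.11
git cherry-pick -x "$fix" >/dev/null      # -x appends the origin commit id
git log -1 --format=%b                    # "(cherry picked from commit ...)"
```

The origin commit id stays greppable in every stable branch's log, so
finding where a fix landed no longer depends on the commit id itself
being identical across branches.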

Or how would this work with the current workflow?

René







Re: [DISCUSS] DB upgrade issue workaround for 4.10.0.0 users upgrading to 4.11.0.0

2018-02-14 Thread Rene Moser
Hi all

Thanks for taking care of this issue.

However, I wonder how we can prevent such issues in the future, and
whether this "incident" has its origins in our current git workflow.

I find the workflow "commit to stable branch --> merge back to master"
inconvenient, because at some point the branches diverge too much
(think of refactorings on the master branch) and a merge back to
master would not work, or would break things.

In most of the projects I work with, there are stable branches as well,
but any fix (and feature, of course) has to land on master first. Any
serious fix can be cherry-picked to the stable branches (there may be
more than one) if applicable, or an alternative fix can be made. Stable
branches never receive merges from or to the master branch.

To be honest, I haven't seen any other project adopting our current
workflow, so I wondered why and how we came to have it.

Thanks for clarification.

René



On 02/12/2018 02:29 PM, Rohit Yadav wrote:
> All,
> 
> 
> Some of us have discussed and found an upgrade path issue that only affects 
> the 4.10.0.0 users who may see missing columns in certain tables post 
> upgrading to 4.11.0.0 version.
> 
> 
> The issue is/was that at the time 4.10.0.0 was released, PRs were merged to 
> the 'then' master branch that made changes to the 4.9.3.0->4.10.0.0 upgrade 
> path instead of the 4.10.0.0->4.11.0.0 upgrade path. One of such change was 
> an ALTER statement that added a new column `service_package_id`, and if 
> 4.10.0.0 version is upgraded to 4.11.0.0 such environments may report sql 
> related errors related to this column.
> 
> 
> (My colleagues Bobby and Ernie may chip in their findings and test results as 
> well)
> 
> 
> Pull request: https://github.com/apache/cloudstack/pull/2452
> 
> Jira ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-10285
> 
> 
> The proposed workaround is to move the changes to the expected upgrade path 
> of 4.10.0.0->4.11.0.0. This is *not ideal* given 4.11.0.0 is released but at 
> least 4.10.0.0 users who may want to upgrade to 4.11.1.0 or later won't face 
> the issue.
> 
> 
> The proposed workaround for current 4.10.0.0 users who may want to upgrade to 
> 4.11.0.0 is to run the following sql statements from the PR above before 
> upgrading to 4.11.0.0: (we may discuss and update the release notes website 
> as well)
> 
> 
> ### start sql ###
> 
> 
> use cloud;
> 
> 
> CREATE TABLE IF NOT EXISTS `cloud`.`netscaler_servicepackages` (
>   `id` bigint unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
>   `uuid` varchar(255) UNIQUE,
>   `name` varchar(255) UNIQUE COMMENT 'name of the service package',
>   `description` varchar(255) COMMENT 'description of the service package',
>   PRIMARY KEY  (`id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
> 
> CREATE TABLE IF NOT EXISTS `cloud`.`external_netscaler_controlcenter` (
>   `id` bigint unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
>   `uuid` varchar(255) UNIQUE,
>   `username` varchar(255) COMMENT 'username of the NCC',
>   `password` varchar(255) COMMENT 'password of NCC',
>   `ncc_ip` varchar(255) COMMENT 'IP of NCC Manager',
>   `num_retries` bigint unsigned NOT NULL default 2 COMMENT 'Number of retries 
> in ncc for command failure',
>   PRIMARY KEY  (`id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
> 
> ALTER TABLE `cloud`.`sslcerts` ADD COLUMN `name` varchar(255) NULL default 
> NULL COMMENT 'Name of the Certificate';
> ALTER TABLE `cloud`.`network_offerings` ADD COLUMN `service_package_id` 
> varchar(255) NULL default NULL COMMENT 'Netscaler ControlCenter Service 
> Package';
> 
> 
> DROP VIEW IF EXISTS `cloud`.`user_view`;
> CREATE VIEW `cloud`.`user_view` AS
> select
> user.id,
> user.uuid,
> user.username,
> user.password,
> user.firstname,
> user.lastname,
> user.email,
> user.state,
> user.api_key,
> user.secret_key,
> user.created,
> user.removed,
> user.timezone,
> user.registration_token,
> user.is_registered,
> user.incorrect_login_attempts,
> user.source,
> user.default,
> account.id account_id,
> account.uuid account_uuid,
> account.account_name account_name,
> account.type account_type,
> account.role_id account_role_id,
> domain.id domain_id,
> domain.uuid domain_uuid,
> domain.name domain_name,
> domain.path domain_path,
> async_job.id job_id,
> async_job.uuid job_uuid,
> async_job.job_status job_status,
> async_job.account_id job_account_id
> from
> `cloud`.`user`
> inner join
> `cloud`.`account` ON user.account_id = account.id
> inner join
> `cloud`.`domain` ON account.domain_id = domain.id
> left join
> `cloud`.`async_job` ON async_job.instance_id = user.id
> and async_job.instance_type = 'User'
> and 

Re: [DISCUSS] VR upgrade downtime reduction

2018-02-07 Thread Rene Moser
On 02/06/2018 02:47 PM, Remi Bergsma wrote:
> Hi Daan,
> 
> In my opinion the biggest issue is the fact that there are a lot of different 
> code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc. That's why you 
> cannot simply switch from a single VPC to a redundant VPC for example. 
> 
> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a VPC with 
> a single tier and made sure all features are supported. Next we merged the 
> single and redundant VPC code paths. The idea here is that redundancy or not 
> should only be a difference in the number of routers. Code should be the 
> same. A single router, is also "master" but there just is no "backup".
> 
> That simplifies things A LOT, as keepalived is now the master of the whole 
> thing. No more assigning ip addresses in Python, but leave that to keepalived 
> instead. Lots of code deleted. Easier to maintain, way more stable. We just 
> released Cosmic 6 that has this feature and are now rolling it out in 
> production. Looking good so far. This change unlocks a lot of possibilities, 
> like live upgrading from a single VPC to a redundant one (and back). In the 
> end, if the redundant VPC is rock solid, you most likely don't even want 
> single VPCs any more. But that will come.
> 
> As I said, we're rolling this out as we speak. In a few weeks when everything 
> is upgraded I can share what we learned and how well it works. CloudStack 
> could use a similar approach.

+1 Pretty much this.

René


Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

2018-01-17 Thread Rene Moser
On 01/17/2018 03:34 PM, Daan Hoogland wrote:
> People, People,
> 
> a lot of us are busy with meltdown fixes and a full component test takes 
> about the 72 hours that we have for our voting, I propose to extend the vote 
> period until at least Monday.

+1

I wonder where this 72-hour window comes from... Is it just me, or,
based on the amount of changes and "things" to test, should we expect a
window on the order of 7-14 days...?

René


Re: [DISCUSS] new way of github working

2018-01-15 Thread Rene Moser


On 01/15/2018 09:06 PM, Rafael Weingärtner wrote:
> Daan,
> 
> Now that master is open for merges again, can we get a feedback here? It
> might be interesting to find a consensus and a standardize way of working
> for everybody before we start merging things in master again …

+1 to allowing merge commits on the master branch, to keep the history
of a series of patches when it helps to understand the change.

René




Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Rene Moser
On 01/08/2018 01:30 PM, Rohit Yadav wrote:
> Does this look agreeable? Thanks.

+1


Re: [DISCUSS] new way of github working

2018-01-08 Thread Rene Moser
Hi

On 01/08/2018 10:45 AM, Daan Hoogland wrote:
> Devs,
> 
> I see a lot of merge master to branch commits appearing on PRs. This is
> against prior (non-hard) agreements on how we work. It is getting to be the
> daily practice so;
> How do we feel about
> 1. not using merge commits anymore?
> 2. merging back as a way of solving conflicts?
> and
> Do we need to make a policy of it or do we let it evolve, at the risk of
> having more hard to track feature/version matrices?

I am +1 on merge commits (when they make sense).

They keep the history and make it possible to identify which commits
belong to which PR.

They are also helpful when there is more than one author, and when the
PR contains commits with fixes that can be cherry-picked and backported
(the systemvm PR, for example).

Merge commits are not bad if we use them correctly. However, merge
commits inside PRs -> bad.

I am -1 on "always squash commits". Source code is written once and
read many times; splitting a change into multiple commits can help
readers understand the code.


René


Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Rene Moser
On 01/08/2018 09:47 AM, Daan Hoogland wrote:
> Rohit, Ivan,
> 
> I think we can argue that the five open PRs on the milestone can still go
> in as long as active work on them continues. I have not looked at Ivan's
> PRs yet but can see they were entered in december and he is actively
> working on it so why not include those in the milestone. A bigger concern
> is that some of the remaining PRs in that milestone are potentially
> conflicting. So we feature freeze now and work only to get the set list in
> (and blockers).

+1 Daan


Re: Any pythonista willing to help backport some cloud-init changes?

2017-12-28 Thread Rene Moser
On 12/28/2017 05:57 PM, Nux! wrote:
> 1 out of 2 hunks FAILED -- saving rejects to file 
> tests/unittests/test_datasource/test_cloudstack.py.rej

No python magic needed here, only git magic :)
Only the tests part did not apply, so I would just "exclude" it...

git am --exclude=tests/unittests/test_datasource/test_cloudstack.py
da1db792b2721d94ef85df8c136e78012c49c6e5.patch

I amended the commit message and added the source commit id; here is
the patch for 0.7.9:

https://github.com/resmo/cloud-init/commit/6a89a904ce2d12047f6d0c1944706e63a950ae83.patch
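The --exclude trick generalizes; here is a throwaway-repo sketch (file
names are made up) of applying a formatted patch while skipping the
part that does not apply, as done above for the cloud-init tests file:

```shell
# Illustrative throwaway repo: apply a mailbox patch with `git am` while
# excluding one file, mirroring the cloud-init backport above.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo a > code.py && echo t > tests.py
git add . && git commit -qm "init"
echo a2 >> code.py && echo t2 >> tests.py
git commit -qam "fix: dhcp lease parsing"
git format-patch -1 --stdout > fix.patch
git checkout -q -b backport HEAD~1           # branch without the fix
git am --exclude=tests.py fix.patch          # apply, skipping tests.py
grep -q a2 code.py && ! grep -q t2 tests.py && echo "applied without tests"
```

The resulting commit carries only the code.py hunk; the excluded file
is left untouched on the backport branch.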

Regards
René


Re: Any pythonista willing to help backport some cloud-init changes?

2017-12-28 Thread Rene Moser
Jo Nux

On 12/28/2017 10:04 AM, Nux! wrote:
> Hi,
> 
> I've been made aware by a user that there is an issue (regression) with the 
> latest cloud-init in CentOS, in that the Cloudstack data source does not 
> parse dhcp lease files with "-" in them.
> There's a fix, but it does not apply cleanly. Can someone have a go at it?
> https://bugs.launchpad.net/cloud-init/+bug/1717147
> https://github.com/cloud-init/cloud-init/commit/da1db792b2721d94ef85df8c136e78012c49c6e5

I didn't get what you mean by "does not apply cleanly". I need more
info and context. What is the problem with the patch
https://github.com/cloud-init/cloud-init/commit/da1db792b2721d94ef85df8c136e78012c49c6e5
?

Regards
René


Re: [DISCUSS] Changing events to include UUIDs, could it break your integration

2017-12-28 Thread Rene Moser
Hi

On 12/28/2017 10:52 AM, Rohit Yadav wrote:
> All,
> 
> 
> We've come across a pull request which changes the event description to 
> use/export UUIDs instead of the numeric internal ID of a resource. I'm not 
> sure if this could potentially break any external integration such as 
> billing, crms etc. so wanted to get your feedback on this. My understanding 
> is external billing/intergrations would consume from the usage related tables 
> for data than events table.
> 
> 
> The PR is https://github.com/apache/cloudstack/pull/1940
> 
> 
> Comments, thoughts? Thanks.

Even though I am +1 on this change, we should work towards versioning
the API to prevent breaking anything out there.

René


Re: [DISCUSS] Redundant Virtual Routers on VMware?

2017-12-14 Thread Rene Moser
Hi

On 12/08/2017 11:56 AM, Rohit Yadav wrote:
> Is anyone using redundant virtual routers with VMware, either in VPCs or 
> isolated networks (with recent or older versions of ACS)?

No, not currently. We ran rVRs once, but that was quite a while ago. We
migrated away because of issues that, in the end, turned out not to be
related to rVR.

Regards
René





Re: Call for participation: Issue triaging and PR review/testing

2017-12-13 Thread Rene Moser
Hi all

On 12/13/2017 05:04 AM, Ivan Kudryavtsev wrote:
> Hello, devs, users, Rohit. Have a good day.
> 
> Rohit, you intend to freeze 4.11 on 8 january and, frankly speaking, I see
> risks here. A major risk is that 4.10 is too buggy and it seems nobody uses
> it actually right now in production because it's unusable, unfortunately,
> so we are planning to freeze 4.11 which stands on untested 4.10 with a lot
> of lacks still undiscovered and not reported. I believe it's a very
> dangerous way to release one more release with bad quality. Actually,
> marvin and units don't cover regressions I meet in 4.10. Ok, let's take a
> look at new one our engineers found today in 4.10:

So, the point is, how do we (users, devs, all) improve quality?

Marvin is great for smoke testing, but CloudStack deals with many
infrastructure vendor components which are not covered by the tests.
How can we detect flaws not covered by Marvin?

For me, I decided (independently of this discussion) to write
integration tests in a way one would not expect, not following the
"happy path":

Try to break CloudStack, to make a better CloudStack.

Put a chaos monkey in your test infra: shut down storage, kill a host,
put latency on storage, disable networking on hosts, generate load on a
host, make a cluster-wide primary FS read-only, shut down a VR, remove
a VR.

Things that can happen!

Not surprisingly, I use Ansible. It has an extensive set of modules
which can be used to battle-test any part of your infra. Ansible
playbooks are fairly easy to write, even when you are not used to
writing code.
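A chaos-style play along those lines might look like this (purely a
sketch: the inventory group names are made up, and stress-ng and the
eth1 storage NIC are assumptions about the hosts):

```yaml
# Hypothetical chaos playbook sketch; "kvm_hosts" is a made-up
# inventory group and stress-ng is assumed to be installed.
- hosts: kvm_hosts
  become: true
  tasks:
    - name: Generate CPU load on the hypervisor for 5 minutes
      command: stress-ng --cpu 0 --timeout 300s

    - name: Briefly drop the (assumed) storage NIC to simulate a network issue
      shell: ip link set dev eth1 down && sleep 30 && ip link set dev eth1 up
```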

I will share my work when it's ready.

René







Re: [PROPOSE] RM for 4.11

2017-11-29 Thread Rene Moser
Perfect. Thanks Rohit, Daan and Paul!

On 11/29/2017 11:14 AM, Rohit Yadav wrote:
> Hi All,
> 
> I’d like to put myself forward as release manager for 4.11. The 4.11
> releases will be the next major version LTS release since 4.9 and will be
> supported for 20 months per the LTS manifesto [2] until 1 July 2019.
> 
> Daan Hoogland and Paul Angus will assist during the process and all of us
> will be the gatekeepers for reviewing/testing/merging the PRs, others will
> be welcome to support as well.
> 
> As a community member, I will try to help get PRs reviewed, tested and
> merged (as would everyone else I hope) but with an RM hat on I would like
> to see if we can make that role less inherently life-consuming and put the
> onus back on the community to get stuff done.
> 
> Here the plan:
> 1. As RM I put forward the freeze date of the 8th of January 2018, hoping
> for community approval.
> 2. After the freeze date (8th Jan) until GA release, features will not be
> allowed and fixes only as long as there are blocker issues outstanding.
> Fixes for other issues will be individually judged on their merit and risk.
> 3. RM will triage/report critical and blocker bugs for 4.11 [4] and
> encourage people to get them fixed.
> 4. RM will create RCs and start voting once blocker bugs are cleared and
> baseline smoke test results are on par with previous 4.9.3.0/4.10.0.0 smoke
> test results.
> 5. RM will allocate at least a week for branch stabilization and testing.
> At the earliest, on 15th January, RM will put 4.11.0.0-rc1 for voting from
> the 4.11 branch, and master will be open to accepting new features.
> 6. RM will repeat 3-5 as required. Voting/testing of -rc2, -rc3 and so on
> will be created as required.
> 7. Once vote passes - RM will continue with the release procedures [1].
> 
> In conjunction with that, I also propose and put forward the date of 4.12
> cut-off as 4 months [3] after GA release of 4.11 (so everyone knows when
> the next one is coming hopefully giving peace of mind to those who have
> features which would not make the proposed 4.11 cut off).
> 
> I’d like the community (including myself and colleagues) to:
> - Up to 8th January, community members try to review, test and merge as
> many fixes as possible, while super-diligent to not de-stabilize the master
> branch.
> - Engage with gatekeepers to get your PRs reviewed, tested and merged
> (currently myself, Daan and Paul, others are welcome to engage as well). Do
> not merge the PRs
> - A pull request may be reverted where the author(s) are not responding and
> authors may be asked to re-submit their changes after taking suitable
> remedies.
> - Find automated method to show (at a glance) statuses of PRs with respect
> to:
>   · Number of LGTMs
>   · Smoke tests
>   · Functional tests
>   · Travis tests passing
>   · Mergeability
> - Perform a weekly run of a component-test matrix against the master branch
> before Jan 8th cut off (based on current hypervisors including basic (KVM)
> and advanced networking).
> - Continue to fix broken tests.
> 
> Thoughts, feedback, comments?
> 
> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+Procedure
> [2] https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS
> [3] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Releases
> [4] The current list of blocker and critical bugs currently stands as per
> the following list:
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20CLOUDSTACK%20AND%20issuetype%20%3D%20Bug%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20affectedVersion%20in%20(4.10.0.0%2C%204.10.1.0%2C%204.11.0.0%2C%20Future)%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC
> 
> Regards,
> Rohit Yadav
> 


Re: CloudStack LTS EOL date?

2017-11-27 Thread Rene Moser
Hi Paul

On 11/27/2017 06:27 PM, Paul Angus wrote:
> Hi Rene,
> 
> note: I'm only stating what the original intent was when LTS was originally 
> proposed.  I'm not trying to dictate what we must do now or in the future.
> 
> ... The LTS scheme was designed when there was a release coming out every one 
> or two months, and these releases were effectively only receiving fixes for a 
> month or two.
> 
> To answer your questions (taking into account the note above)- 4.9 was an LTS 
> version, which meant that there would be a 4.9.1, 4.9.2..4.9.6... for a 
> period of 2 years.
> 4.9.x was the latest 'version' at the beginning of 2017.
> Unfortunately, it was also the latest version in the middle of 2017. So in 
> mid-2017 we took the latest version (4.9 again) and said that we would 
> continue backporting fix for that version for 2 years from mid 2017.

This is absolutely fine, we are on the same page.

> Some variant of your proposal "6 months after next LTS release (minimum 18 
> months)." would also work.
> I think something like "supported for a minimum of 6 months after next LTS 
> release or a total period of 24 months, whichever is greater" suits what you 
> are saying?

That would definitely make things more clear to users.

Regards
René


Re: CloudStack LTS EOL date?

2017-11-27 Thread Rene Moser
Hi Paul

On 11/22/2017 05:39 PM, Paul Angus wrote:
> HI All, 
> 
> The current LTS cycle is based on having an LTS release twice a year (at the 
> time of design, ACS releases were coming out monthly).
> 
> So, twice a year (nominally, January and July) we take the then current 
> version of CloudStack, and declare that an LTS version, for which would we 
> would then backport fixes for a period of up to 2 years.  Thereby giving end 
> users a version of CloudStack which would receive bug fixes for an extended 
> period.  
> 
> This year however, the current version in January was the same as the current 
> version in July, therefore 4.9 became the 'July' LTS as well as January LTS 
> and therefore 4.9 will be supported until summer 2019 (hence the 4.9.3 
> release)
> 
> I and a number of my colleagues remain committed to continue to support 'LTS' 
> releases in this fashion (there just wasn't anything really to 'announce' in 
> July), which may be why people think that nothing is happening.
> 
> With 2 LTS releases a year (6 months apart), 'next LTS +6 months' would only 
> be 12 months from release.  Which I think is really too short a period for 
> the majority of enterprises.  Although we haven't written it this way, the 
> current scheme gives a EOL of 'next LTS + 18 months'.
> 
> So, I'm in favour of leaving things as they are.   The wiki page looks like 
> it needs updating to be clearer (I'm happy to do that) 

Does an LTS release include minor releases, i.e. is a 4.9.x release an
"LTS" release? Or is 4.x the LTS release? My understanding was that 4.x
versions are new "releases".

My concern is that we cannot guarantee 2 LTS releases per year, can we?

Predicting the future is hard, and we should have a more "relative"
sentence defining how long we support a release.

In my opinion, there should be an overlap of support time of at least
6 months (that is why I added the +6 months).

What would it cost to change the support time of an LTS like:

"6 months after next LTS release (minimum 18 months)."

Regards
René


Re: [DISCUSS] Move to Debian9 systemvmtemplate

2017-11-21 Thread Rene Moser
Hi Rohit

First, thank you very much for the effort!

On 11/21/2017 01:07 PM, Rohit Yadav wrote:

> Please advise if you can collaborate with me on this, especially around 
> testing. Thanks!

I have 2 "nice to have"s; AFAICS these haven't been addressed yet:

- SNMP read-only, listening on the "linklocalip"
- HAProxy stats (or even the exporter,
https://github.com/prometheus/haproxy_exporter), listening on the
linklocalip

Regards
René



CloudStack LTS EOL date?

2017-11-21 Thread Rene Moser
Hi all

The current LTS release is 4.9 which is EOL in June 2018 according to
https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS

AFAIK there is no work planned for a new LTS. The release pace has
slowed down (the high pace, which left users behind on fixes, was the
reason for the LTS in the first place).

I am still pro-LTS, but in my opinion we should define the EOL relative
to the successor LTS release date: "The EOL of the current LTS is 6
months after the next LTS release."

Small example:

Current LTS: 4.9
Next LTS 4.1x released on 01.04. --> LTS 4.9 EOL is 01.10.

Does this make sense? Other suggestions?

Regards
René


Re: POLL: ACL default egress policy rule in VPC

2017-11-20 Thread Rene Moser
Looks like the winner is 3 for devel.

Thanks for the participation.

Regards
René

On 11/13/2017 06:47 PM, Rene Moser wrote:
> Hi Devs
> 
> The last days I fought with the ACL egress rule behaviour and I would
> like to make a poll in which direction the fix should go.
> 
> Short Version:
> 
> We need to define a better default behaviour for acl default egress
> rule. I see 3 different options:
> 
> 1. always add a default deny all egress rule.
> 
> This would be super easy to do (and should probably also be the intermediate
> fix for 4.9, see https://github.com/apache/cloudstack/pull/2323)
> 
> 
> 2. add a deny all egress rule in case if have at least one egress allow
> rule.
> 
> A bit intransparent to the user, but doable. This seems to be the
> behaviour how it was designed and should have been implemented.
> 
> 
> 3. use the default setting in the network offering "egressdefaultpolicy"
> to specify the default behavior.
> 
> There is already a setting which specifies this behaviour but is not
> used in VPC. Why not use it?
> 
> As a consequence when using this setting, the user should get more infos
> about the policy of the network offering while choosing it for the tier.
> 
> 
> Poll:
> 
> 1. []
> 2. []
> 3. []
> 4. [] Other? What?
> 
> 
> Long Version:
> 
> First, let's have a look of the issue:
> 
> In version 4.5, creating a new acl with no egress (ACL_OUTBOUND) rule
> would result in a "accept egress all":
> 
> -A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state
> NEW -j ACL_OUTBOUND_eth2
> -A ACL_OUTBOUND_eth2 -j ACCEPT
> 
> When an egress rule (here: deny port 25 egress) is added (no matter if it is
> deny or allow), the result is a "deny all" appended:
> 
> -A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state
> NEW -j ACL_OUTBOUND_eth2
> -A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 25 -j DROP
> -A ACL_OUTBOUND_eth2 -j DROP
> 
> This does not make any sense and is a bug IMHO.
> 
> 
> In 4.9 the behaviour is different:
> 
> (note there is a bug in the ordering of egress rules which is fixed by
> https://github.com/apache/cloudstack/pull/2313)
> 
> The default policy is kept accept egress all.
> 
> -A PREROUTING -s 10.11.1.0/24 ! -d 10.11.1.1/32 -i eth2 -m state --state
> NEW -j ACL_OUTBOUND_eth2
> -A ACL_OUTBOUND_eth2 -d 224.0.0.18/32 -j ACCEPT
> -A ACL_OUTBOUND_eth2 -d 225.0.0.50/32 -j ACCEPT
> -A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 80 -j ACCEPT
> 
> 
> To me it looks like the intended behavior was "egress all by default; if
> we have allow rules, append deny all". This would make sense but is
> quite intransparent.
> 
> But let's poll
> 
> 


Re: POLL: ACL default egress policy rule in VPC

2017-11-17 Thread Rene Moser
Hi

On 11/16/2017 11:14 AM, Nux! wrote:
> 4. I think Jayapal's reply deserves more attention.
> 
> See below.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Jayapal Uradi" 
>> To: "dev" 
>> Sent: Tuesday, 14 November, 2017 05:12:52
>> Subject: Re: POLL: ACL default egress policy rule in VPC
> 
>> Hi Rene,
>>
>> Please look at my inline comments.
>> Let me add some context for the VPC egress/ingress rules behavior.
>>
>> Pre 4.5 (subject to correction) the behavior of VPC acl is as follows.
>>
>> 1. Default egress is ALLOW and ingress is DROP.
>>   a.  When a rule is added to egress then that particular rule traffic is 
>> allowed
>>   and rest is blocked in egress.

This works as designed in 4.5.

However, from a user perspective: with an "allow all" egress default, a
user adds a drop for a single port, and then anything else is also
blocked. The current implementation does not distinguish whether the
added rule is an allow or a deny.

>>   b.  When a rule is added to ingress then that particular rule traffic is 
>> allowed
>>   and rest is blocked in egress.
>>
>> After 4.5 ACL lists and ACL items feature is introduced there we have 
>> ‘default
>> allow’ and ‘default deny’ ACLs. User can also
>> create a custom acl. In ACL feature we can add mix of allow and deny rules 
>> and
>> the ordering of rules is maintained.

"default allow" and "default deny" already exist in 4.5 as well.

>> 1.  when ‘default allow’ is selected while creating the vpc tier
>>By default traffic is ALLOWED and rules can be added to ALLOW/DENY the 
>> traffic
>>   After adding the rules there will be ACCEPT at the end
>> 2.  when ‘default deny’ is selected while creating the vpc tier
>>By default traffic is DENY and rules can be added to DENY/ALLOW the 
>> traffic.
>>  After adding the rules there will be DROP at the end

Makes sense.

>> 3. If no ACL selected for the ACL then Pre 4.5 behavior will be there.
>> 4. With custom acl default ingress is DROP and egress is ALLOW. User can add
>> rules for allow/deny rules.

This works as designed.

However, the issue I see here is that the behaviour for existing rules
changed. This can potentially lead to a security hole. In 4.5 we had an
implicit "block all egress when one rule exists"; after 4.5 we have no
such rule.

One can argue for or against this behaviour. The choice should be up to
the CloudStack admin, and it would be best if this could be configured.
That is why using the egressdefaultpolicy makes sense.

>>
>> If you see behavior other than above then there will be bug.
>>
>> Currently in VPC egress behavior is controlled from the ACLs. If include
>> ‘egressdefaultpolicy’ then there will be confusion.

I don't see why there should be any confusion at all. If you would like
to keep the current 4.9 behaviour, you just set egressdefaultpolicy=true.
Voila.

>> What I feel is that current VPC ACLs are flexible enough  to configure the
>> required behavior.

It is flexible, but the default has changed. A CloudStack admin should
have the choice.

René


Re: New committer: Gabriel Beims Bräscher

2017-11-15 Thread Rene Moser
Welcome Gabriel and congrats!

On 11/15/2017 06:22 PM, Gabriel Beims Bräscher wrote:
> Thank you guys!
> 
> 2017-11-15 12:11 GMT-02:00 Tutkowski, Mike :
> 
>> Congratulations, Gabriel!
>>
>> On Nov 15, 2017, at 3:36 AM, Sigert GOEMINNE wrote:
>>
>> Congratulations Gabriel!
>>
>> *Sigert Goeminne*
>> Software Development Engineer
>>
>> *nuage*networks.net 
>> Copernicuslaan 50
>> 2018 Antwerp
>> Belgium
>>
>>
>>
>>
>> On Wed, Nov 15, 2017 at 11:32 AM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>
>> The Project Management Committee (PMC) for Apache CloudStack has invited
>> Gabriel Beims Bräscher to become committer and we are pleased to announce
>> that he has accepted.
>>
>> Gabriel has shown commitment to Apache CloudStack community, contributing
>> with PRs in a constant fashion. Moreover, he has also proved great
>> abilities to interact with the community quite often in our mailing lists
>> and Slack channel trying to help people.
>>
>> Let´s congratulate and welcome Apache CloudStack’s newest committer.
>>
>> --
>> Rafael Weingärtner
>>
>>
> 


Re: POLL: ACL default egress policy rule in VPC

2017-11-13 Thread Rene Moser
Note the typo in the user mailing list email address, don't use reply
all... sry


POLL: ACL default egress policy rule in VPC

2017-11-13 Thread Rene Moser
Hi Devs

Over the last days I have fought with the ACL egress rule behaviour,
and I would like to run a poll on which direction the fix should go.

Short Version:

We need to define a better default behaviour for the ACL default egress
rule. I see 3 different options:

1. always add a default deny all egress rule.

This would be super easy to do (and should probably also be the
intermediate fix for 4.9, see https://github.com/apache/cloudstack/pull/2323).


2. add a deny all egress rule in case we have at least one egress allow
rule.

A bit intransparent to the user, but doable. This seems to be the
behaviour as it was designed and should have been implemented.


3. use the default setting in the network offering "egressdefaultpolicy"
to specify the default behavior.

There is already a setting which specifies this behaviour, but it is
not used in VPCs. Why not use it?

As a consequence, when using this setting, the user should get more
info about the policy of the network offering while choosing it for the
tier.


Poll:

1. []
2. []
3. []
4. [] Other? What?


Long Version:

First, let's have a look at the issue:

In version 4.5, creating a new ACL with no egress (ACL_OUTBOUND) rule
would result in an "accept all egress":

-A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state
NEW -j ACL_OUTBOUND_eth2
-A ACL_OUTBOUND_eth2 -j ACCEPT

When an egress rule (here: deny port 25 egress; no matter if deny or
allow) gets added, the result is a "deny all" appended:

-A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state
NEW -j ACL_OUTBOUND_eth2
-A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 25 -j DROP
-A ACL_OUTBOUND_eth2 -j DROP

This does not make any sense and is a bug IMHO.


In 4.9 the behaviour is different:

(note there is a bug in the ordering of egress rules which is fixed by
https://github.com/apache/cloudstack/pull/2313)

The default policy is kept accept egress all.

-A PREROUTING -s 10.11.1.0/24 ! -d 10.11.1.1/32 -i eth2 -m state --state
NEW -j ACL_OUTBOUND_eth2
-A ACL_OUTBOUND_eth2 -d 224.0.0.18/32 -j ACCEPT
-A ACL_OUTBOUND_eth2 -d 225.0.0.50/32 -j ACCEPT
-A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 80 -j ACCEPT


To me it looks like the intended behaviour was "allow all egress by
default; if we have allow rules, append a deny all". This would make
sense but is quite intransparent.
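
For comparison, option 1 from the short version would make the chain
always end in an explicit drop, whether or not any user rules exist. A
sketch of the resulting chain (not actual CloudStack output), assuming
a single allow rule for port 80:

```
-A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state NEW -j ACL_OUTBOUND_eth2
-A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 80 -j ACCEPT
-A ACL_OUTBOUND_eth2 -j DROP
```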

But let's poll




Re: New committers: Nathan Johnson and Marc-Aurèle Brothier

2017-09-22 Thread Rene Moser
Congrats Nathan and Marc-Aurèle!

René

On 09/22/2017 06:31 PM, Rafael Weingärtner wrote:
> The Project Management Committee (PMC) for Apache CloudStack has invited
> Nathan Johnson and Marc-Aurèle Brothier to become committers and we are
> pleased to announce that they have accepted.
> 
> They have shown commitment to Apache CloudStack community, contributing
> with PRs in a constant fashion. Moreover, they have proved great abilities
> to interact with the community quite often in our mailing lists and Slack
> channel trying to help people.
> 
> Let´s congratulate and welcome Apache CloudStack’s two newest committers.
> 
> --
> Rafael Weingärtner
> 


Re: [RESULT][VOTE] Apache CloudStack 4.9.3.0 (RC1)

2017-09-07 Thread Rene Moser
Thanks Rohit for the efforts!

On 09/07/2017 04:22 PM, Rohit Yadav wrote:
> All,
> 
> The vote for CloudStack 4.9.3.0 [1] *passes* with
> 3 PMC + 3 non-PMC votes.
> 
> +1 (PMC / binding)
> 3 persons (Milamber, Daan, Rohit)
> 
> +1 (non binding)
> 3 person (Glenn, Boris, René)
> 
> 0
> none
> 
> -1
> none
> 
> Thanks to everyone participating.
> 
> I will now prepare the release announcement to go out after 24 hours to
> give the mirrors time to catch up.
> 
> [1] http://markmail.org/message/puekeyeamkl34yzy
> 
> Regards.
> 
> On Mon, Aug 28, 2017 at 6:44 PM, Rohit Yadav  wrote:
>>
>> Hi All,
>>
>> I've created a 4.9.3.0 RC1 release, with the following artifacts up for a
> vote:
>>
>> Git Branch and Commit SH:
>>
>>
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.9.3.0-RC20170828T1452
>> https://github.com/apache/cloudstack/tree/4.9.3.0-RC20170828T1452
>> Commit: d145944be0d04724802ff132399514bf71c3e7b0
>>
>> 4.9 branch smoke test PR:
>> https://github.com/apache/cloudstack/pull/2217
>>
>> List of commits/changes since 4.9.2.0 release:
>>
> https://github.com/apache/cloudstack/compare/4.9.2.0...4.9.3.0-RC20170828T1452
>>
>> Source release (checksums and signatures are available at the same
> location):
>> https://dist.apache.org/repos/dist/dev/cloudstack/4.9.3.0/
>>
>> PGP release keys (signed using 0EE3D884):
>> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>>
>> Vote will be open for 72 hours.
>>
>> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>>
>> [ ] +1  approve
>> [ ] +0  no opinion
>> [ ] -1  disapprove (and reason why)
>>
>> Regards,
>> Rohit Yadav
> 


Re: [VOTE] Apache Cloudstack 4.9.3.0 RC1

2017-09-04 Thread Rene Moser
On 09/04/2017 05:46 AM, Rohit Yadav wrote:
> Thank you all for voting, we've enough binding votes to pass the RC, however, 
> some PMC members have expressed interest in testing/voting so I'll extend the 
> voting window by another 72 hours and we'll conclude the voting after EOD 
> Wed, 6th August 2017.

I assume you meant 6th September 2017. ;)


Re: [VOTE] Apache Cloudstack 4.9.3.0 RC1

2017-09-01 Thread Rene Moser
+1

Ran the Ansible CloudStack simulator test suite with tests for:
  account
  affinitygroup
  cluster
  configuration
  domain
  firewall
  host
  instance
  instancegroup
  nic
  iso
  LB
  network ACL
  network ACL rules
  pod
  portforward
  project
  region
  resourcelimit
  role
  router
  securitygroup
  securitygroup rule
  sshkeypair
  storage pool
  user
  vmsnapshot
  volume
  vpc
  network
  vpn gateway
  zone

+ Basic Testing on VMware 6.5 in Advanced Zone upgraded from 4.9.2

Thanks!
René

On 08/28/2017 03:14 PM, Rohit Yadav wrote:
> Hi All,
> 
> I've created a 4.9.3.0 RC1 release, with the following artifacts up for a
> vote:
> 
> Git Branch and Commit SH:
> 
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.9.3.0-RC20170828T1452
> https://github.com/apache/cloudstack/tree/4.9.3.0-RC20170828T1452
> Commit: d145944be0d04724802ff132399514bf71c3e7b0
> 
> 4.9 branch smoke test PR:
> https://github.com/apache/cloudstack/pull/2217
> 
> List of commits/changes since 4.9.2.0 release:
> https://github.com/apache/cloudstack/compare/4.9.2.0...4.9.3.0-RC20170828T1452
> 
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.9.3.0/
> 
> PGP release keys (signed using 0EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> Vote will be open for 72 hours.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> Regards,
> Rohit Yadav
> 


Re: [discuss] log4j2

2017-08-04 Thread Rene Moser
Hi

I would consider https://logback.qos.ch/ as a possible replacement as well.

Regards
René

On 08/04/2017 08:56 PM, Paul Angus wrote:
> Hi All,
> 
> My meanderings through various corners of CloudStack caused me to be look at 
> our logging.
> 
> Log4j v1 has been end of life since May 2015 [1]
> 
> And was replaced by Log4j 2. The improvements over v1 are listed here [1]
> 
> Is there an appetite to move to log4j v2 ??
> 
> 
> [1] 
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> [2] https://logging.apache.org/log4j/2.x/
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 


Re: [DISCUSS] Metadata server IP improvement

2017-07-26 Thread Rene Moser
On 07/26/2017 09:00 AM, Wido den Hollander wrote:
> This has been discussed before and right now there is a PR for using Config 
> Drive: https://github.com/apache/cloudstack/pull/2116
> 
> The problem with 169.254.169.254 is:
> 
> - It doesn't work with IPv6
> - It doesn't work with Basic Networking
> - You need to do iptables intercepting on the VR
> 
> Config Drive is a IP-protocol independent solution for getting metadata into 
> the Instance without the need for IP connectivity.
> 
> Imho that's a much better solution.

Perfect, makes sense! Thanks for the quick reply.

René


[DISCUSS] Metadata server IP improvement

2017-07-25 Thread Rene Moser
Hi

Speaking about VR improvements: I would like to change the way we need
to find the metadata API.

Currently we do something like "cat
/var/lib/dhclient/dhclient-eth0.leases | grep dhcp-server-identifier |
tail -1" to find the IP of the service.

However, parsing a DHCP lease file is not the best option, and it is
not consistent across OS levels.
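
The lease-file hack can be sketched end to end against a sample lease
file (the real path, e.g. /var/lib/dhclient/dhclient-eth0.leases,
varies per distro, which is exactly the problem):

```shell
# build a small sample lease file standing in for the real one
lease=$(mktemp)
cat > "$lease" <<'EOF'
lease {
  interface "eth0";
  fixed-address 10.10.0.15;
  option dhcp-server-identifier 10.10.0.1;
}
EOF

# extract the DHCP server (= VR) IP, as done on the guest today
vr_ip=$(grep dhcp-server-identifier "$lease" | tail -1 | awk '{print $3}' | tr -d ';')
echo "$vr_ip"

# the metadata would then be fetched from the VR, e.g.:
# curl -s "http://$vr_ip/latest/meta-data/instance-id"
```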

OpenStack and AWS EC2 use 169.254.169.254. Would it make sense to adopt
this?

René


Re: [DISCUSS] Move to Debian9 systemvmtemplate

2017-07-24 Thread Rene Moser
Hi Rohit


On 07/23/2017 06:08 PM, Rohit Yadav wrote:
> All,
> 
> 
> Just want to kick an initial discussion around migration to Debian9 based 
> systemvmtemplate, and get your feedback on the same.
> 
> Here's a work-in-progress PR: https://github.com/apache/cloudstack/pull/2198

Have you considered replacing veewee with Packer?

Our friends from schubergphilis have already done some work here
https://github.com/MissionCriticalCloud/systemvm-packer.

There is also an official way to convert the definitions:
https://www.packer.io/guides/veewee-to-packer.html

Regards René


Re: [DISCUSS] Move to Debian9 systemvmtemplate

2017-07-24 Thread Rene Moser
Hi

On 07/23/2017 06:21 PM, Paul Angus wrote:
> 
> I think that we should look at lighter Distros like Arch Linux in order to 
> get the boot times of the system VMs down.
> That in conjunction with improving the configuration will gives as a much 
> leaner, meaner System VM.

I tend to -1 for Arch.

Debian is a proven, mature distro. It is used as the base for many
critical systems, e.g. VyOS or Cumulus Linux, not to mention Ubuntu.

Arch Linux was not designed as a server OS. It has a rolling release
policy, which is great if you want the latest releases, but it means
you have to update often.

Because we usually do not update the VR OS that often, Arch is IMHO not
a good candidate. Debian, on the other hand, is.

I would also doubt that Arch "boots" faster than Debian. IMHO the way
_we_ configure the VR is still the performance killer.

Let's stick with Debian.


Re: next major release

2017-07-10 Thread Rene Moser
Hi Rajani, Wei

On 07/10/2017 08:40 AM, Wei ZHOU wrote:
> It depends on how many new futures or improvements will be merged.
> 
> Here are some features we are interested (and in progress)
> - CloudStack Container Service (by shapeblue)
> - NoVNC console (GSoC project)
> - Config Drive (by wido)
> - Network/VPC VR bug fixes
> 
> I vote for 5.0.0.0 if all above are done.
> 
> -Wei

I would also go for 4.11 until we have a good reason to call the next
version 5.0.

One point mentioned a few weeks ago is to freeze the current API into v1
and use v2 (client/api/v2), which fixes and cleans up the current API. I
am currently collecting all issues and improvement suggestions.

René


Re: JIRA - PLEASE READ

2017-06-29 Thread Rene Moser
Hi Paul

On 06/29/2017 11:06 AM, Paul Angus wrote:
> Hi,   Mr Grumpy here!
> 
> I was looking the commits and I'm seeing commits going in with no Jira Issue 
> assigned.
> My understanding is that there must be a Jira ticket for EVERY 
> fix/enhancement/feature, so that we have a way to search and track these 
> things.  
> + Release notes will be impossible to create without a proper Jira history.
> And no one will know what has gone into CloudStack.
> 
> I know rules and procedures are a PITA, but we can't let this turn into the 
> wild west!

I would say it depends (swiss neutral mindset) :).

Sometimes you'd like to have a full description (high level view) of
what changes are related to which commits.

Sometimes it would be overkill to open a jira ticket for every smaller
enhancement.

E.g. for
https://github.com/apache/cloudstack/commit/d53dd0a671e635edcefcae332e1b7d428ac7600b
there is IMHO no value in a Jira ticket.

However, I agree that for everything "changelog" relevant, I would like
to have a Jira ticket.

René


[DISCUSS] API versioning

2017-06-04 Thread Rene Moser
Hi

I recently developed Ansible modules for the ACL API and ... found that
it has really inconsistent API naming. E.g.

createNetworkACL <<-- this creates an ACL rule
createNetworkACLList <<-- this creates the ACL

updateNetworkACLItem <<-- this updates an ACL rule
updateNetworkACLList <<-- this updates the ACL

My first thought was: someone has to fix this, like

createNetworkAclRule <<-- this creates the ACL rule
createNetworkAcl <<-- this creates an ACL

updateNetworkAclRule <<-- this updates the ACL rule
updateNetworkAcl <<-- this updates an ACL

But how, without breaking backwards compatibility? I know a few other
places where the API has inconsistent naming. How do we fix the API in
a controlled way? What about adding a version to the API?

I would like to introduce API versioning to CloudStack: the current API
would be frozen as version v1; the new API would be v2. The versioned
API has the URL scheme:

/client/api/

The current API would be /client/api/v1, and /client/api would be an
alias for v1. This ensures backwards compatibility.

This would allow us to deprecate and change APIs.
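
Under the proposed scheme, clients would address the API like this
(host and port are made up; the command names follow the renaming
example above):

```shell
# base endpoint; on its own it would stay an alias for v1
base="http://mgmt.example.com:8080/client/api"
v1="$base/v1"    # frozen, current API
v2="$base/v2"    # cleaned-up, consistent names

echo "$v1?command=createNetworkACL"      # old name keeps working on v1
echo "$v2?command=createNetworkAclRule"  # new name lives on v2
```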

Any thoughts?





Re: [apache/cloudstack] Vyos router plugin (#2105)

2017-06-03 Thread Rene Moser
Hi Matthew

On 06/03/2017 01:50 PM, Matthew Smart wrote:
> This didn't go through the first time for some reason...
> 
> Hey guys,
> 
> I know this is going to take some time to merge. In the meantime, I am
> preparing my design for the next iteration of the plugin which will
> tentatively add DNS, DHCP, VPN, and maybe VRRP (redundant routing)
> support. I am not very familiar with GIT or the PR process. Assuming my
> design phase completes before my PR is merged how should I go about
> starting work on the next iteration of the plugin? Can I create a new
> branch from the branch that is waiting for the PR to be merged and make
> my changes there?

This is possible, just branch off from that branch:

# change to the vyos branch
git checkout <vyos-branch>

# branch off
git checkout -b <my-new-branch>
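
To make the workflow concrete, here is a self-contained sketch in a
throwaway repository (all branch and file names are made up):

```shell
# create a scratch repo standing in for your cloudstack fork
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo plugin > vyos.txt
git add vyos.txt
git commit -qm "vyos router plugin"

git checkout -qb vyos-plugin       # the branch waiting on the PR
git checkout -qb vyos-plugin-dns   # next iteration, branched off it
current=$(git symbolic-ref --short HEAD)
echo "$current"
```

If the first branch later changes (e.g. after review fixes), running
`git rebase vyos-plugin` from the new branch replays your work on top
of it.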

But be aware that if your origin branch changes, it can cause merge
conflicts. While conflicts are generally not a problem, they can cause
some headache if you are not familiar with them.

In case that happens or if you have questions, don't hesitate to ask
here again.

Regards
René




Re: Miami CCC '17 Roundtable/Hackathon Summary

2017-05-29 Thread Rene Moser
Hi

On 05/23/2017 02:16 PM, Simon Weller wrote:

> We floated some ideas related to short term VR fixes in order to make it more 
> modular, as well as API driven, rather than the currently SSH JSON injections.

Speaking about endless possibilities... ;)

I support the initiative (+1) to make the routing more API driven and
modular; the issue I see with a "too hard-baked" appliance is the
integration into the existing environment.

One big benefit of the VR is that we can customize it relatively easily.

I have had some thoughts about how to integrate a standardized "custom
configuration" mechanism into the VR.

I like the idea of having "user data" or "cloud-init" for the VR at the
network offering level. This would allow any virtual appliance "vendor"
to implement a simple interface (e.g. static YAML/JSON data) which
allows the CloudStack admin to customize the virtual appliance via the
network offerings API.

E.g. for our VR, the "cloud init" interface would allow

* to install and configure custom monitoring solution
* configure the automated update mechanism
* add web hooks to trigger what so ever
* install and run cfgmgmt like puppet/ansible-pull
* etc.

So for any virtual appliance the interface would be the same, but the
config options would differ based on the features they provide.

Regards
René


Re: Awesome CloudStack

2017-05-10 Thread Rene Moser
Hi Will

On 05/10/2017 07:49 PM, Will Stevens wrote:
> Awesome!!!  Great initiative Rene.  You going to be at ApacheCon in Miami?

Sad to say but no, I can not make it.

> Maybe we can give this list a bit of airtime to give it some more
> visibility.  :)

That would be awesome! ;)



Re: Awesome CloudStack

2017-05-10 Thread Rene Moser


On 05/10/2017 08:50 PM, Gabriel Beims Bräscher wrote:
> I work in the development of the Autonomiccs plugin; a plugin that
> autonomously manages environments orchestrated by Apache CloudStack (more
> details at [1]). Do you believe that it would be interesting adding plugins
> such as Autonomiccs to your project?

And yes! This sounds pretty much like an awesome project! ;)


Re: Awesome CloudStack

2017-05-10 Thread Rene Moser
Hi Gabriel

On 05/10/2017 08:50 PM, Gabriel Beims Bräscher wrote:
> Hi Rene,
> 
> That is a great idea!
> 
> Is the content related to any project that is linked somehow with Apache
> CloudStack?

As simple as that: anything "awesome", or let's say useful, that is
related to the CloudStack ecosystem. Whatever it may be!

The idea of awesome lists is not invented here, see
https://github.com/sindresorhus/awesome.


Awesome CloudStack

2017-05-10 Thread Rene Moser
Hi

I started "A curated list of bookmarks, packages, tutorials, videos and
other cool resources from the CloudStack ecosystem" on
https://github.com/resmo/awesome-cloudstack

Feel free to extend it by sending PRs ;)

René


Re: [PROPOSAL] branch on first RC and open up master for features

2017-05-08 Thread Rene Moser
I am +1.

Even though git is distributed, the GitHub workflow (PRs) has some scale
limitations. Having lots of small PRs can lead into a maintenance hell
of keeping them mergeable.

One way around it is to either have a "next" branch or branch off
master. Either way, branching is required. Since we have a lot of
integration running on master, it is far less expensive to branch off
master than to have a "next" branch.

I would rather see devs working on bug fixes and features than on merge
conflicts.

René


Re: Very slow Virtual Router provisioning with 4.9.2.0

2017-05-03 Thread Rene Moser
Thanks Remi for the hint and Daan for picking it up! That is why I like
open source software development and this project ;)

On 05/03/2017 02:49 PM, Daan Hoogland wrote:
> Happy to pick this up, Remi. I'm travelling now but will look at both on
> Friday.




Re: Github messages on dev list

2017-04-28 Thread Rene Moser
Hi Wido

There have been messages from GitHub in the past (you probably had a
filter?). I already wrote that I don't like it and suggested using
comm...@cloudstack.apache.org for automated and bot messages.

But a few devs insisted they would like to keep them here, and that
filtering would be an option (for all others). So I created a filter
(although I still think my suggestion would be the cleaner solution).
The filter is not matching anymore, so I also have to adjust it.

I am still +1 for moving them to commits@; whoever needs them can
subscribe to that list. Opt-in.

René

On 04/28/2017 08:28 AM, Wido den Hollander wrote:
> Hi,
> 
> See: https://issues.apache.org/jira/browse/INFRA-13929
> 
> We are now getting all these messages from Github on dev@cloudstack.apache.org
> 
> We didn't have this before. Do we want this?
> 
> Personally I find it very annoying that my inbox now gets so many messages 
> from Github.
> 
> Anybody else feeling that way?
> 
> Wido
> 


Re: IMPORTANT: Moving to Gitbox/Github

2017-04-15 Thread Rene Moser
Thanks Wido, worked like a charm. ;)


Re: represent the object in the logs

2017-04-12 Thread Rene Moser
Hi Rafael

On 04/12/2017 08:27 PM, Rafael Weingärtner wrote:
> Well, I think the missing point to understand why we have these different
> situations is the understanding of the developer’s mind. It is so diverse
> and unique from people to people, and because we do not have a policy on
> that, each developer is doing the best they know and think.

I should not have asked "why". I know why, nobody is perfect :) That is
fine.

I just wondered if there is something I've been missing (I am also not
perfect and I know :P).

> For instance, at first sight, the idea of calling the “toString” of an
> object to append its information in a log entry might be interesting.
> However, objects like the “VmInstance” are quite big and would probably
> clutter the log entry.

So, logs are getting big, we must handle logs anyway. I have no problem
with big logs. I like big logs, big logs are detailed, bigger logs are
more detailed.

> In my opinion, for most log entries at the DEBUG/INFO/WARN levels the ID of
> the logged element should be more than enough to the
> operators/administrators to track down events or problems in the cloud.

I kind of disagree here, for 2 technical reasons:

The ID of the record (the ID, NOT the UUID) is not unique (think about
upgrade/rollback scenarios: the ID will probably be identical, but the
resource would not be the same).

The ID is only used "internally" for relation mapping; there is no
reason to show this ID to a user/admin, as a user cannot query a VM by
its ID, only by its UUID.

I would like to see at least (only?) the UUID of a resource being
logged, as it is the external, unique identifier for a resource.

> Others probably think the opposite, meaning that, only the ID is not enough
> and more details about the object in an event are required. In the absence
> of a policy to regulate this, both are valid arguments under different
> perspectives.

Other thoughts anyone?

Kind Regards
René


represent the object in the logs

2017-04-12 Thread Rene Moser
Hi

In the logs we see all kinds of variations of how an object is printed
to the user. E.g. for VMs we have seen the DB ID, UUID, internal name,
name etc. in different debug log messages. This makes debugging harder
than it should be.

I wondered why we don't just use "toString()" in the VO classes.

E.g in this PR
https://github.com/apache/cloudstack/pull/1252/files#diff-ab243638abf88707693d464b3b1836feR1503


We see a:

s_logger.debug("Host is in PrepareForMaintenance state - Stop VM
operation on the VM id: " + vm.getId() + " is not allowed");


With a toString in the VO class of the VM this would turn into

s_logger.debug("Host is in PrepareForMaintenance state - Stop VM
operation on the VM: " + vm + " is not allowed");

Did I miss anything?

René


CloudStack related changes in Ansible 2.3

2017-04-12 Thread Rene Moser
Hi CloudStack users

Ansible 2.3 is about to be released; I would like to summarize the
CloudStack related features and changes in this release.


New modules
---

- cs_host
- cs_nic
- cs_region
- cs_role
- cs_vpc

Examples and usage for these modules can be found in the docs,
http://docs.ansible.com/ansible/list_of_cloud_modules.html#cloudstack as
usual.


Docs


The CloudStack guide
http://docs.ansible.com/ansible/guide_cloudstack.html has been updated,
note the new feature "Environment Variables"
http://docs.ansible.com/ansible/guide_cloudstack.html#environment-variables


VPC
---

The VPC support has been improved in the related modules, but there is
still some work to do.


Integration tests
-

Soon, new CloudStack related Ansible PRs will be automatically tested
(~1,000 tasks) on a CI against a CloudStack simulator running 4.9.x.


Future Module Development
-

Due to some other side projects of mine (writing books takes more time
than one might think), development of new modules is lagging a bit. One
module (cs_serviceoffer) is currently WIP:
https://github.com/ansible/ansible/pull/19041

But no worries, new modules are planned:
- cs_diskoffer
- modules for VPN setup


Cloud Role
--

At SWISS TXT, we created an Ansible role for setting up VMs in a
CloudStack cloud with advanced networking for different customer
projects. The role is open source (BSD) and can be found on GitHub:
https://github.com/swisstxt/ansible-role-cloud-infra

Feel free to fork and improve it.


Goal of my Ansible CloudStack Project
-

I often get asked why I am doing this.

My goal is not only to install and upgrade CloudStack with Ansible
(that is relatively easy... and can even be done without much CloudStack
API interaction) but to configure _and_ maintain a cloud (basic or
advanced networking) in a reliable way!

It will install the OS and the CloudStack management server, install
the OS on the hosts, set up hypervisors, create zones, pods, clusters,
accounts and users, and add the configured hosts to CloudStack, all in
a single run. And the best of it: you can safely re-run it again and
again, without fear of breaking anything.

Have to add a new host? No problem: put the hardware in the rack and
connect it to the net; Ansible will take care of it on the next run. It
can be that simple.

Also note that Ansible can manage your network switches, routers and
firewalls too: http://docs.ansible.com/ansible/list_of_network_modules.html

The possibilities are endless...

Thanks
René



Re: [VOTE] Apache Cloudstack should join the gitbox experiment.

2017-04-10 Thread Rene Moser
On 04/10/2017 06:22 PM, Daan Hoogland wrote:
> In the Apache foundation an experiment has been going on to host
> mirrors of Apache project on github with more write access then just
> to the mirror-bot. For those projects committers can merge on github
> and put labels on PRs.
> 
> I move to have the project added to the gitbox experiment
> please cast your votes
> 
> +1 CloudStack should be added to the gitbox experiment
> +-0 I don't care
> -1 CloudStack shouldn't be added to the gitbox experiment and give your 
> reasons

+1


Re: MySQL 5.7 and SQL Mode

2017-04-10 Thread Rene Moser
Hi Wido

On 04/10/2017 05:00 PM, Wido den Hollander wrote:
> Hi,
> 
> While testing with Ubuntu 16.04 and CloudStack 4.10 (from master) I've ran 
> into this error on the management server:
> 
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Expression #1 of 
> SELECT list is not in GROUP BY clause and contains nonaggregated column 
> 'cloud.i.id' which is not functionally dependent on columns in GROUP BY 
> clause; this is incompatible with sql_mode=only_full_group_by
>   at sun.reflect.GeneratedConstructorAccessor50.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
>   at com.mysql.jdbc.Util.getInstance(Util.java:387)
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:939)
>   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878)
>   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814)
> 
> I was able to fix this to add this to my my.cnf:
> 
> [mysqld]  
> sql_mode = 
> "STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
> 
> Should we maybe set the SQL Mode as a connection parameter when connecting to 
> the DB? This prevents users from having to set this manually in their MySQL 
> configuration.
> 
> Did somebody else run into this with MySQL 5.7?

Yes, I also ran into this while testing and switched back to MySQL 5.6.
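
Regarding the connection-parameter idea: MySQL Connector/J can set
session variables via the JDBC URL, so the management server could pin
a compatible sql_mode per connection without touching my.cnf. A sketch
(host and database name are made up, and the mode list is shortened):

```
jdbc:mysql://localhost:3306/cloud?sessionVariables=sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'
```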

Regards
René
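Wido's suggestion of setting the SQL mode per connection rather than in my.cnf can be sketched as follows. This is an illustrative sketch, not CloudStack code: it only shows how to strip the `ONLY_FULL_GROUP_BY` flag from the MySQL 5.7 default mode string so the result can be applied per session (for example via `SET SESSION sql_mode = ...`, or via a JDBC `sessionVariables` connection parameter).

```python
# Sketch: remove ONLY_FULL_GROUP_BY from a MySQL 5.7 sql_mode string so it
# can be applied per session instead of editing my.cnf. The mode list below
# is the MySQL 5.7 default; the helper name is ours, not CloudStack's.
DEFAULT_57_MODES = (
    "ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,"
    "ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
)

def strip_mode(sql_mode: str, unwanted: str = "ONLY_FULL_GROUP_BY") -> str:
    """Return sql_mode with the unwanted flag removed, order preserved."""
    return ",".join(m for m in sql_mode.split(",") if m.strip() != unwanted)

compatible = strip_mode(DEFAULT_57_MODES)
# The resulting string matches the value Wido put into my.cnf.
set_statement = f"SET SESSION sql_mode = '{compatible}'"
```

The same string would work as a MySQL Connector/J `sessionVariables` URL parameter, which is what "set the SQL Mode as a connection parameter" amounts to.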


Re: Welcoming Wido as the new ACS VP

2017-03-17 Thread Rene Moser
Thanks Will for all the efforts!

Congrats Wido and I have no doubts you keep up the good work ;)

-Rene


Re: Return VPC ID of VM

2017-03-11 Thread Rene Moser
Now I see my mistake. Thanks again for the clarification. Never mind.
René


On 03/11/2017 10:06 AM, Remi Bergsma wrote:
> Hi René,
> 
> From UI, you navigate to any VM, click Nics, then use “Add Network to VM” 
> button. 
> 
> You’re basically sending the addNicToVirtualMachine API call.
> 
> Regards, Remi
> 
> 
> On 11/03/2017, 09:57, "Rene Moser" <m...@renemoser.net> wrote:
> 
> Hi Remi
> 
> Perfect! Thanks for the clarification. I still wonder how to do it but
> that is another story :)
> 
> René
> 
> On 03/11/2017 08:40 AM, Remi Bergsma wrote:
> > Hi René,
> > 
> > I just posted some screenshots on the PR that show a VM can be part of 
> more than one VPC (mail wouldn’t allow screenshots) so please have a look at 
> Github.
> > 
> > Regards, Remi
> > 
> > On 10/03/2017, 17:52, "Rene Moser" <m...@renemoser.net> wrote:
> > 
> > Hi
> > 
> > I created https://github.com/apache/cloudstack/pull/1999 to return 
> the
> > vpcid the VM belongs to in listVirtualMachines-
> > 
> > I need this for the cloudstack ansible modules to fully determine 
> VMs
> > uniquely as listVirtualMachines returns all VMs (non-vpc and vpc) 
> and
> > they can be named identically.
> > 
> > (I have currently implemented a workaround to go though 
> listnetworks for
> > every VM, but this is a very expensive operation, and that is why I
> > wanted to solve it by returning the VPC id on the VM level.)
> > 
> > I had a discussion with ustcweizhou which says that a VM can be 
> part of
> > multiple VPCs and therefore it would not make sense to add vpcid to 
> the
> > response. I disagree, a VM can not be in different VPCs.
> > 
> > Can anyone join the discussion? Is it possible a VM can be in 
> different
> > VPCs?
> > 
> > Thanks for the clarification
> > 
> > René
> > 
> > 
> 
> 


Re: Return VPC ID of VM

2017-03-11 Thread Rene Moser
Hi Remi

Perfect! Thanks for the clarification. I still wonder how to do it but
that is another story :)

René

On 03/11/2017 08:40 AM, Remi Bergsma wrote:
> Hi René,
> 
> I just posted some screenshots on the PR that show a VM can be part of more 
> than one VPC (mail wouldn’t allow screenshots) so please have a look at 
> Github.
> 
> Regards, Remi
> 
> On 10/03/2017, 17:52, "Rene Moser" <m...@renemoser.net> wrote:
> 
> Hi
> 
> I created https://github.com/apache/cloudstack/pull/1999 to return the
> vpcid the VM belongs to in listVirtualMachines-
> 
> I need this for the cloudstack ansible modules to fully determine VMs
> uniquely as listVirtualMachines returns all VMs (non-vpc and vpc) and
> they can be named identically.
> 
> (I have currently implemented a workaround to go though listnetworks for
> every VM, but this is a very expensive operation, and that is why I
> wanted to solve it by returning the VPC id on the VM level.)
> 
> I had a discussion with ustcweizhou which says that a VM can be part of
> multiple VPCs and therefore it would not make sense to add vpcid to the
> response. I disagree, a VM can not be in different VPCs.
> 
> Can anyone join the discussion? Is it possible a VM can be in different
> VPCs?
> 
> Thanks for the clarification
> 
> René
> 
> 


Return VPC ID of VM

2017-03-10 Thread Rene Moser
Hi

I created https://github.com/apache/cloudstack/pull/1999 to return the
vpcid the VM belongs to in listVirtualMachines.

I need this for the cloudstack ansible modules to fully determine VMs
uniquely as listVirtualMachines returns all VMs (non-vpc and vpc) and
they can be named identically.

(I have currently implemented a workaround to go through listNetworks for
every VM, but this is a very expensive operation, and that is why I
wanted to solve it by returning the VPC id on the VM level.)

I had a discussion with ustcweizhou, who says that a VM can be part of
multiple VPCs and therefore it would not make sense to add vpcid to the
response. I disagree; a VM cannot be in different VPCs.

Can anyone join the discussion? Is it possible a VM can be in different
VPCs?

Thanks for the clarification

René
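The motivation above can be sketched in a few lines. The response dicts and the helper below are hypothetical, but they show why a `vpcid` field in the listVirtualMachines response lets a client uniquely match a VM by (name, vpcid) in one pass, instead of issuing an expensive listNetworks call per VM:

```python
# Sketch (hypothetical response dicts): with a "vpcid" field in the
# listVirtualMachines response, identically named VMs can be disambiguated
# locally, without one listNetworks API call per VM.
def find_vm(vms, name, vpc_id=None):
    """Return the single VM matching name and (optionally) VPC id."""
    matches = [vm for vm in vms
               if vm["name"] == name and vm.get("vpcid") == vpc_id]
    if len(matches) > 1:
        raise ValueError(f"VM name {name!r} is ambiguous without a VPC id")
    return matches[0] if matches else None

vms = [
    {"name": "web-01", "vpcid": "vpc-a"},
    {"name": "web-01", "vpcid": None},  # same name, non-VPC network
]
```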


Re: :[VOTE] Apache Cloudstack 4.10.0.0

2017-03-01 Thread Rene Moser
Hi

While not directly related to the CloudStack Java source code, any
RPM created using the specs from the repo (e.g. from packages/centos7)
will hit CLOUDSTACK-9765 when proceeding with an upgrade; PR
https://github.com/apache/cloudstack/pull/1923 fixes the issue.

Regards
René


On 03/01/2017 02:12 AM, Rajani Karuturi wrote:
> Hi All,
> 
> I've created a 4.10.0.0 release, with the following artifacts up for a vote:
> 
> Git Branch and Commit
> SH:https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.10.0.0-RC20170301T0634
> Commit:7c1d003b5269b375d87f4f6cfff8a144f0608b67
> 
> 
> Source release (checksums and signatures are available at the same
> location):https://dist.apache.org/repos/dist/dev/cloudstack/4.10.0.0/
> 
> PGP release keys (signed using
> CBB44821):https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> Vote will be open for 72 hours.
> 
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> 
> 
> ~Rajani
> http://cloudplatform.accelerite.com/
> 


Re: [PROPOSAL] add native vm-cluster orchestration service (was: [PROPOSAL] add native container orchestration service)

2017-02-28 Thread Rene Moser
On 02/20/2017 02:56 PM, Daan Hoogland wrote:
> So, being very late in the discussion but having read the whole thread before 
> editing the title of this thread,
> 
> Can we agree that we want a generic vm-cluster service and leave the 
> container bits to containers? Kishan can you share your design? Shapeblue 
> wants to rebase their k8 service on top of this and I would like yours and 
> Murali's work to not conflict.

+1


Re: Can't create a zone using master

2017-02-14 Thread Rene Moser
Probably related to https://github.com/apache/cloudstack/pull/1927



On 02/14/2017 10:44 PM, Tutkowski, Mike wrote:
> Hi,
> 
> I’m getting a NullPointerException when trying to create a zone using master.
> 
> Below is the relevant code in ConfigurationManagerImpl.
> 
> In the else block, network.getCidr() returns null and NetUtil.getCidrNetmask 
> then throws a NullPointerException.
> 
> I noticed that network.getGateway() also returns null (which seems odd).
> 
> Thoughts on this?
> 
> Thanks!
> Mike
> 
> public Pair<Boolean, Pair<String, String>> validateIpRange(final String startIP, final String endIP,
> final String newVlanGateway, final String newVlanNetmask, final List<VlanVO> vlans, final boolean ipv4,
> final boolean ipv6, String ip6Gateway, String ip6Cidr, final String startIPv6, final String endIPv6,
> final Network network) {
> String vlanGateway = null;
> String vlanNetmask = null;
> boolean sameSubnet = false;
> if (CollectionUtils.isNotEmpty(vlans)) {
> for (final VlanVO vlan : vlans) {
> vlanGateway = vlan.getVlanGateway();
> vlanNetmask = vlan.getVlanNetmask();
> sameSubnet = hasSameSubnet(ipv4, vlanGateway, vlanNetmask, newVlanGateway, newVlanNetmask,
> startIP, endIP, ipv6, ip6Gateway, ip6Cidr, startIPv6, endIPv6, network);
> if (sameSubnet) break;
> }
> } else {
> vlanGateway = network.getGateway();
> vlanNetmask = NetUtils.getCidrNetmask(network.getCidr());
> 

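The failure mode Mike describes is `NetUtils.getCidrNetmask` being handed a null CIDR. A defensive check before the conversion turns the opaque NullPointerException into a clear error. The sketch below illustrates the idea in Python using the standard `ipaddress` module; the function name is ours, not CloudStack's:

```python
# Sketch of the missing defensive check: converting a CIDR to a netmask
# should fail with a descriptive error when the network has no CIDR,
# instead of blowing up with a NullPointerException deeper in the stack.
import ipaddress

def cidr_netmask(cidr):
    """Return the dotted-decimal netmask for a CIDR string."""
    if cidr is None:
        raise ValueError("network has no CIDR; cannot derive a netmask")
    return str(ipaddress.ip_network(cidr, strict=False).netmask)
```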

Re: GSoC projects

2017-02-08 Thread Rene Moser


On 02/08/2017 02:46 PM, Syed Ahmed wrote:
> I want to propose another topic to replace our old and crummy console with
> a NoVNC console. Long time ago I developed a prototype and the results were
> very promising [2]. I have opened a JIRA ticket for this as well [1]

That would be AWESOME!


Re: re-introduction

2017-02-01 Thread Rene Moser
Glad to have you back!

I wish I also had more time to work on CloudStack; I have a few
things on my list. :)

Regards
René

On 02/01/2017 09:26 AM, Daan Hoogland wrote:
> Hello,
> 
> 
> My name is Daan Hoogland. I've been mostly out of the community since May 
> last year. I am now back through the generous sponsorship of my new employer 
> and will be working (mostly) as developer on cloudstack.
> 
> For those who remember me and are curious, I've been learning some scala and 
> some rust in the meanwhile and have been working on financial middleware in 
> between.
> 
> 
> I expect to have good times back in here :)
> 
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht 3531 VE, Netherlands
> @shapeblue
> 
> 


Re: Dedicated IP range for SSVM/CPVM

2017-01-19 Thread Rene Moser
https://issues.apache.org/jira/browse/CLOUDSTACK-9750


Re: Dedicated IP range for SSVM/CPVM

2017-01-18 Thread Rene Moser
Hi

Ok, wrong wording here: not user VMs of course, but the public IPs for
isolated networks. But you got the idea :)

Good to know I am not the only one with this use case. I am filing a
feature request in JIRA. Thanks to all for your inputs!

Regards
René


Re: Dedicated IP range for SSVM/CPVM

2017-01-18 Thread Rene Moser
Hi Will

On 01/17/2017 06:13 AM, Will Stevens wrote:
> Rene, this is probably not going to solve your problem, but I use this
> trick for other use cases.  You can setup more than one range.  ACS seems
> to always exhaust one range before moving on to the next range.  If it is a
> new install, then you can do a range with only 2 IPs in it and make it
> first.  Since the first two IPs which will be provisioned when ACS is setup
> is the SSVM and CPVM, they will automatically take the two IPs from that
> special range.
> 
> I am pretty sure I have tested this.  Later when other IPs have been used
> from the other range, if you destroy the SSVM or CPVM, they will come back
> up on one of the two IPs that they were on before because they will be free
> again and they will be used first again.  If your system is really active,
> then you will be in a race condition while the SSVM and CPVM get bounced to
> get the same IPs back.
> 
> Anyway, I figured I would mention it because it may be a workaround you can
> make use of.  I do this in dev/staging environments which need real public
> IPs, but I don't need the SSVM and CPVM to have real public IPs.  This lets
> me preserve two real public IPs by using private IPs for that first range
> for the SSVM and CPVM.

Thanks for the hint ;) But it is an existing production setup, so it
won't help in my case.

René
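The range-exhaustion behaviour Will describes can be sketched as a tiny allocator. This is a deliberate simplification of what ACS does, not its actual implementation: ranges are walked in order and the first is exhausted before the next is touched, which is why a tiny first range is effectively reserved for the first consumers (SSVM and CPVM):

```python
# Sketch of "exhaust the first range before moving on": a two-IP private
# first range is claimed by the first two consumers (SSVM/CPVM), preserving
# the real public IPs in the later range. Addresses are illustrative.
def next_free_ip(ranges, used):
    """Pick the first unused IP, walking the ranges in order."""
    for ip_range in ranges:
        for ip in ip_range:
            if ip not in used:
                return ip
    return None  # all ranges exhausted

ranges = [["192.168.0.1", "192.168.0.2"],    # tiny private first range
          ["203.0.113.10", "203.0.113.11"]]  # real public range
used = set()
ssvm = next_free_ip(ranges, used); used.add(ssvm)
cpvm = next_free_ip(ranges, used); used.add(cpvm)
```

As Will notes, the trick only holds for fresh installs; on a busy existing system the SSVM/CPVM race other consumers for those two IPs when they are recreated.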


Re: Welcoming Simon Weller & Paul Angus to the PMC

2017-01-16 Thread Rene Moser
Congrats guys!

René

