Dedicated IP range for SSVM/CPVM

2017-01-16 Thread Rene Moser
Hi

We would like to make a change proposal for SSVM/CPVM.

Currently, the SSVM/CPVM get an IP from the "default" pool of the
vlaniprange, which is owned by the account "system":


  "vlaniprange": [
{
  "account": "system",
  "domain": "ROOT",
  "endip": "10.101.0.250",
  "forvirtualnetwork": true,
  "gateway": "10.101.0.1",
  "netmask": "255.255.255.0",
  "startip": "10.101.0.11",
  ...

},


  "systemvm": [
{
  "activeviewersessions": 0,
  "gateway": "10.101.0.1",
  "hypervisor": "VMware",
  "id": "d9a8abe5-b1e0-47d6-8f39-01b48ff1e0fa",
  "name": "v-5877-VM",
  "privatenetmask": "255.255.255.0",
  "publicip": "10.101.0.113",
  "publicnetmask": "255.255.255.0",
  "state": "Running",
  ...
},


For security reasons we would like to define a dedicated IP range for
SSVM/CPVM which, preferably, has no relation to the default pool range.

The default pool range should be used for user VMs only. To indicate the
intended use, I propose 2 new flags, which are only considered for
"account=system" and indicate whether the range can be used for user VMs
and/or system VMs.

For backwards compatibility, this would be the default:

"foruservms": true,
"forsystemvms": true,


To have separate ranges for user VMs and system VMs, it would look like:


  "vlaniprange": [
{
  "account": "system",
  "domain": "ROOT",
  "foruservms": true,
  "forsystemvms": false,
  "endip": "192.160.123.250",
  "forvirtualnetwork": true,
  "gateway": "192.160.123.1",
  "netmask": "255.255.255.0",
  "startip": "192.160.123.11",
  ...

},

  "vlaniprange": [
{
  "account": "system",
  "domain": "ROOT",
  "foruservms": false,
  "forsystemvms": true,
  "endip": "10.101.0.250",
  "forvirtualnetwork": true,
  "gateway": "10.101.0.1",
  "netmask": "255.255.255.0",
  "startip": "10.101.0.11",
  ...

},
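
To illustrate, a hypothetical CloudMonkey call creating such a dedicated
system VM range could look like the following (note: the foruservms and
forsystemvms flags are part of this proposal and do not exist yet; the
zone UUID is a placeholder):

  create vlaniprange zoneid=<zone-uuid> forvirtualnetwork=true gateway=10.101.0.1 netmask=255.255.255.0 startip=10.101.0.11 endip=10.101.0.250 foruservms=false forsystemvms=true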


Does anyone see any conflicts with this proposal?

Regards
René



Re: [RESULT][VOTE] Apache Cloudstack 4.9.2.0 (RC2)

2017-01-06 Thread Rene Moser
Thanks Rohit!

On 01/06/2017 06:03 AM, Rohit Yadav wrote:
> Hi all,
> 
> 
> 
> After 72 hours, the vote for CloudStack 4.9.2.0 *passes* with
> 
> 4 PMC + 0 non-PMC votes.
> 
> 
> 
> +1 (PMC / binding)
> 
> 4 person (Wido, Bruno, Rajani, Rohit)
> 
> 
> 
> +1 (non binding)
> 
> none
> 
> 
> 0
> 
> none
> 
> 
> 
> -1
> 
> none
> 
> 
> 
> Thanks to everyone participating.
> 
> 
> 
> I will now prepare the release announcement to go out after 24 hours to
> give the mirrors time to catch up.
> 
> Regards.
> 


Re: Back to the list

2016-11-30 Thread Rene Moser
LOL

On 11/30/2016 11:46 AM, Wilder Rodrigues wrote:
> I might need that filter, Rene! :D

As I run my own mail server and read my mails on mobile devices, I put
the filter on my mail server with a header check (postfix). This might
not work for you :)


/^Subject: \[GitHub\] cloudstack .*/DISCARD GitHub CloudStack flood
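
For completeness, this entry lives in a header checks file that postfix
has to be pointed at; assuming the usual file layout, roughly:

  # /etc/postfix/main.cf
  header_checks = regexp:/etc/postfix/header_checks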


Ansible 2.2: CloudStack Modules News

2016-11-30 Thread Rene Moser
Hi List

Since I know there are a few Ansible users here who use the CloudStack
modules, let me give you an update:

New Modules in 2.2
- cs_router
- cs_snapshot_policy

In the upcoming 2.2.1, the modules will also work with Python 3.


Roadmap for 2.3
===

New modules planned
---
- cs_host
- cs_vpc (done)
- cs_nic (done)
- cs_serviceoffer (currently WIP
https://github.com/ansible/ansible-modules-extras/pull/3396, testing and
feedback would be welcome!)
- and more

Diff Support
---
In 2.3, if you set --diff, you will get a line diff of the changes for
many of the CloudStack modules. This also works in --check mode.
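
A typical invocation would then look like this (the playbook name is
made up):

  ansible-playbook cloud.yml --check --diff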


VPC Support
---
I am about to extend VPC support in the modules and working on new
modules related to VPC.


Integration Testing
---
I am working on fully automated integration tests for ansbile cloudstack
modules PRs against a dockerized simulator.


ENV VAR Support
---
I already implemented a way to set ENV variables for domain, account,
project, zone and vpc in 2.3. It allows you to keep playbooks DRY with
the help of the Ansible block feature. See the CloudStack guide docs for
more info:
http://docs.ansible.com/ansible/guide_cloudstack.html#environment-variables
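
A minimal sketch of what this looks like with the block feature (names
and values are made up; see the guide above for the supported
variables):

  - hosts: localhost
    tasks:
      - block:
          - name: ensure a VM is present
            cs_instance:
              name: web-01
              template: Linux Debian 8 64-bit
              service_offering: Small
        environment:
          CLOUDSTACK_DOMAIN: customers/example
          CLOUDSTACK_ZONE: zone01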


Support
---
Good tooling is essential for CloudStack. Ansible is one of the most
widely used configuration management tools around.

Thanks for all the support I received in 2016
https://renemoser.net/blog/2015/11/26/crowdfunding-ansible-cloudstack-modules/.

I still need your support in 2017 to continue my work. I make no
commercial use of these modules and develop them in my free time (one
day per week). If you use them and/or like my work, a small donation
would be much appreciated. Please contact me off list for details.

Thanks
René


Re: Back to the list

2016-11-30 Thread Rene Moser
Welcome back Wilder!

I am also back, as I managed to set up decent filtering of all the
GitHub mails :)

René

On 11/30/2016 09:58 AM, Wilder Rodrigues wrote:
> Hi there,
> 
> I have been away for a while, but would like to let you know that I will try
> to follow the development around CloudStack a bit more closely. :)
> 
> After I left Schuberg Philis I forgot to change my email address on the
> dev-list. It means that I might have missed some messages. Sorry for that.
> 
> See you around.
> 
> Cheers,
> Wilder
> 


Re: [DISCUSS] Replacing the VR

2016-09-12 Thread Rene Moser
Hi

On 09/12/2016 10:20 PM, Will Stevens wrote:
> *Disclaimer:* This is a thought experiment and should be treated as such.
> Please weigh in with the good and bad of this idea...
> 
> A couple of us have been discussing the idea of potentially replacing the
> ACS VR with the VyOS [1] (Open Source Vyatta VM).  There may be a license
> issue because I think it is licensed under GPL, but for the sake of
> discussion, let's assume we can overcome any license issues.

VyOS is Debian-based, much like the current VR. As long as it is not
shipped with CloudStack, that's all fine.

> I have spent some time recently with the VyOS and I have to admit, I was
> pretty impressed.  It is simple and intuitive and it gives you a lot more
> options for auditing the configuration etc...

I had the same "crazy" thoughts when I heard about VyOS for the first time.

When I looked at VyOS, the release cycle was not very frequent, and the
current stable release is still based on Debian 6 (EOL [1] since
February 2016).

However, to me it doesn't matter whether it's VyOS, CloudLinux, or
another solution.

The question is rather: what is wrong with the current VR, and how can
we make the VR great again? Things I would like to see:

- VR must have a "clean", programmable, documented API, supporting
batch processing.
- VR must be rock solid (minimal shell), state of the art, up to date,
but small (only containing what CloudStack needs, nothing more).
- VR must scale well (...) and support stateful HA.
- VR must be easy to upgrade (security) without downtime.

Christmas is soon... ;)

René

[1] https://www.debian.org/News/2016/20160212



Re: Virtual Router - Zabbix integration

2016-09-07 Thread Rene Moser
Hi

On 09/07/2016 09:34 AM, Artjoms Petrovs wrote:
> Hello, All!
> 
> A while back I’ve found a need to monitor Virtual Router performance (
> CPU, Network peaks, etc ), which is not provided by VR Service
> Monitoring Tool.
> 
>  
> 
> Does anyone have experience with adding Zabbix ( or Nagios ) agent to
> Virtual Router default template? How update-safe is this approach? Maybe
> someone has used Chef or Ansible to provide custom configuration of
> Virtual Routers after install/reinstall?

Monitoring is a huge topic and we did some work on it.

IMHO the easiest way to get some basic monitoring is via SNMP. This
would be easy to set up in Zabbix without the need to install a Zabbix
agent on the VR. However, I am not sure whether SNMP is already
installed and configured out of the box on the system template.

We are using Ansible (and I never get tired of mentioning it) with a
dynamic inventory that gets all routers from the API, periodically
ensuring they have the desired configuration (updates, log shipping,
some tuning). That is why we wouldn't benefit much from a
pre-configured, customized system template, but that would of course
also be an option.
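
As a rough sketch of the idea: the inventory essentially boils down to
an API query like this one in CloudMonkey (the filter parameter just
trims the output):

  list routers listall=true filter=name,linklocalip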

Besides that, a colleague of mine created tools for monitoring
CloudStack: https://github.com/swisstxt/cloudstack-nagios. These use SSH
to gather all the information from the VRs without requiring any
configuration on them.

Hope that helps

René


Re: Meet BlueOrangutan

2016-08-07 Thread Rene Moser
Hi Rohit


On 08/06/2016 11:14 AM, Rohit Yadav wrote:

> Meet blueorangutan [1], a Github bot account that will help us automate 
> CloudStack (PR) testing [2][3] among other things.

Pretty cool. I like this progress. Well done!

> It works by polling Github notifications for the apache/cloudstack repository 
> every minute and then reacts to comments. We can post comments on a 
> apache/cloudstack PR and ask @blueorangutan to perform certain build jobs 
> such as building packages, then running Trillian [2] tests (across a set of 
> hypervisors) using those packages, and finally report us the results.
> 
> 
> Since the task of building packages and testing them is expensive. A 
> typical packaging job may take up to 30 minutes, a typical Trillian [2][3] 
> environment can take about 30 minutes to build/deploy a zone, and a Trillian 
> (smoke) test run may take hours while an exhaustive Trillian 
> (component+smoke) test run may take 3-4 days. Due to these reasons, for now 
> the '@blueorangutan test' task is restricted to a selected Github users (my 
> colleagues at ShapeBlue). Running Trillian test for each PR may be expensive, 
> we may consider batching smaller thoroughly reviewed PRs, then create 
> packages for a set of PRs and test them all at once as well.

You mentioned "expensive"; are there any plans to distribute the
workloads across whoever would like to provide some capacity?

Since Trillian works on top of CloudStack and VMware, the only
requirement would be a similar environment and providing user account
access, right?

However, to make it even more flexible, I would prefer a Jenkins
master/slave setup, in which you would configure slaves as fully working
Trillian workers in local environments.

Thinking a bit further about how to make it as easy as possible for
users, a Docker image containing the Jenkins setup with the job
configured and all necessary dependencies (ansible, cs, cloudmonkey),
independent of the Docker host OS, would be an option.

Any thoughts?

Regards
Rene







Re: [VOTE] Split Marvin to its own repository

2016-07-18 Thread Rene Moser
Hi

I used to use Marvin to set up simulator environments as integration
test environments (4.5 to latest) for the Ansible CloudStack
modules.

It's been a while, and I cannot really remember exactly what caused it
to fail, but for a few weeks now I have not been able to set up such an
environment. I think it was related to some "crypto" dependencies which
didn't fit anymore.

Would splitting Marvin out also help to make the setup more reliable for
older CloudStack versions (e.g. >= 4.5)?

Thanks for clarification.

Regards
René


On 07/18/2016 11:44 AM, Rohit Yadav wrote:
> All,
> 
> Based on a recent discussion thread [1], I want to start a voting thread to
> gather consensus around splitting Marvin from the CloudStack repository.
> 
> On successful voting, we would extract and maintain Marvin as a separate
> library in a separate repository (example repository [2]) and various
> build/test systems such as Travis [3] can install it directly for usage
> with pip+git etc.
> 
> Background: During the build process, a commands.xml generated to build
> apidocs is also used to auto-generate the CloudStack Cmd and Request
> classes, which is the only dependency why we needed Marvin and
> CloudStack together. The auto-generated cloudstackAPI module can also be
> generated against a live running CloudStack mgmt server which has api
> discovery (listApis) enabled. The integration tests will still be tied to a
> branch and will remain within the repository. A PR [3] was sent to show
> that we can still execute tests using this approach, and this would finally
> allow us to build, release and use Marvin as an independent library.
> 
> Vote will be open for 72 hours.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> [1] http://markmail.org/thread/kiezqhjpz44hvrau
> [2] https://github.com/rhtyd/marvin
> [3] https://github.com/apache/cloudstack/pull/1599
> 
> Regards,
> Rohit Yadav
> 


Re: [VOTE] Move 'apache/cloudstack' -> 'apache-cloudstack/cloudstack'

2016-03-19 Thread Rene Moser
On 03/18/2016 11:44 PM, Will Stevens wrote:

> *Proposal:*
> Transfer ownership of the 'apache/cloudstack' mirrored repository out of
> the 'apache' github organization into the 'apache-cloudstack' github
> organization (which I have already setup and started inviting users to).
> Both members of the ACS community and the ASF board will have 'owner'
> permissions on this new organization.  This will allow for permissions to
> be applied specifically to the 'apache-cloudstack' organization and not
> have to be applied to the entire 'apache' organization.
> 
> By transferring ownership, all of the PRs will be copied to the new
> repository and redirects will be created on github from 'apache/cloudstack'
> to 'apache-cloudstack/cloudstack'.

We might also have to involve GitHub support here.

The Apache top-level projects' GitHub repositories have a special
setting, made by GitHub internally, so that these projects are mirrored
from git://git.apache.org/cloudstack.git.

I am not sure how this will behave after the technical organization move.

Maybe they can disable this before the organization move and create a
new mirrored repo at apache/cloudstack. That would also be great for
consistency.





Re: LDAP auth failures

2016-03-09 Thread Rene Moser
Hi

On 03/09/2016 06:33 AM, Abhinandan Prateek wrote:
> In 4.5 there is a timeout param that was added ‘ldap.read.timeout’ that
> defaults to 1000,
> It should be set to about 6000 and that should resolve the read timeout
> that you guys see.

We already set it to 6000 (and more), as you can see here:

2016-02-29 10:05:42,382 DEBUG [cloudstack.ldap.LdapManagerImpl]
(catalina-exec-4:ctx-9ffa7c60) ldap Exception:
javax.naming.NamingException: LDAP response read timed out, timeout
used:6000ms.; remaining name 'dc=foo,dc=bar'


The only thing that "solved" the problem was

-Dcom.sun.jndi.ldap.connect.pool.timeout=6

We suspect that the issue is somehow related to our LDAP setup (TCP or
OpenLDAP config) in combination with the connection pooling in
CloudStack.

René




LDAP auth failures

2016-03-03 Thread Rene Moser
We have been experiencing authentication issues with LDAP since
upgrading to 4.5.1.

After some time (...), users cannot authenticate anymore; however,
authentication in other services using LDAP keeps working during this
time. The issue seems to be related to the CloudStack login only.

We haven't found the root cause yet; a network setup issue or OpenLDAP
config issue cannot be excluded.

Stacktrace:

2016-02-29 10:05:36,375 DEBUG [cloudstack.ldap.LdapContextFactory]
(catalina-exec-4:ctx-9ffa7c60) initializing ldap with provider url:
ldap://ldap.example.com:389
2016-02-29 10:05:42,382 DEBUG [cloudstack.ldap.LdapManagerImpl]
(catalina-exec-4:ctx-9ffa7c60) ldap Exception:
javax.naming.NamingException: LDAP response read timed out, timeout
used:6000ms.; remaining name 'dc=foo,dc=bar'
at com.sun.jndi.ldap.Connection.readReply(Connection.java:485)
at com.sun.jndi.ldap.LdapClient.getSearchReply(LdapClient.java:639)
at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:562)
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1985)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1847)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at
org.apache.cloudstack.ldap.LdapUserManager.searchUsers(LdapUserManager.java:206)
at
org.apache.cloudstack.ldap.LdapUserManager.getUser(LdapUserManager.java:122)
at
org.apache.cloudstack.ldap.LdapManagerImpl.getUser(LdapManagerImpl.java:173)
at
org.apache.cloudstack.ldap.LdapManagerImpl.canAuthenticate(LdapManagerImpl.java:97)
at
org.apache.cloudstack.ldap.LdapAuthenticator.authenticate(LdapAuthenticator.java:61)
2016-02-29 10:05:42,383 DEBUG [cloudstack.ldap.LdapManagerImpl]
(catalina-exec-4:ctx-9ffa7c60) Exception while doing an LDAP bind for
user  johndoe
org.apache.cloudstack.ldap.NoLdapUserMatchingQueryException: No users
matching: No Ldap User found for username: johndoe

As I understand it, there is a username lookup (a bind with the reader
credentials) to see whether a user exists in LDAP; if found, a new
connection is established for the actual authentication. In the above
stack trace it seems that the username lookup fails.

What we further see on the ACS management server, however, is that LDAP
connections are never closed.

For _every_ successful auth, the TCP connection remains established forever.

In my understanding of
http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/config.html
these connections should become idle after successful authentication and
be reused for new authentications.

However, the reuse for auth doesn't seem to work. _Every_ new successful
auth of a user _creates_ a new LDAP connection. We don't know whether
this is related to our problem, but at least it doesn't look like
intended behavior.

In the docs we read: "By default, idle connections remain in the pool
indefinitely until they are garbage-collected"

But as said, they never seem to be GC-ed. We then added
-Dcom.sun.jndi.ldap.connect.pool.timeout=6 to
/etc/cloudstack/management/tomcat6.conf, which resulted in the
connections being GC-ed, and we haven't had any reports about failing
logins since then.
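
For reference, a sketch of how this looks, assuming the flag is simply
appended to the JAVA_OPTS line:

  # /etc/cloudstack/management/tomcat6.conf
  JAVA_OPTS="... -Dcom.sun.jndi.ldap.connect.pool.timeout=6"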

Has anyone else seen such an issue? Any thoughts?

René



Re: [DISCUSS] Keeping system vms up to date

2016-02-22 Thread Rene Moser
Erik

On 02/22/2016 01:50 PM, Erik Weber wrote:
> Adding a boilerplate in the install/admin docs that says "If you have no
> other tools in place to handle system vm updates, consider enabling this
> option: x.y.z" is good enough for me.
> This is supposed to be a way for all those who don't have any other/better
> means of doing this, not a mandatory/forced way of doing it for everyone
> else.

Sounds good. :)


>> I would like to see an api for download and update latest system-vm
>> template. AFAIK this is still not solved (without touching DB) to update
>> system-vm templates having same version.

What do you think about security updates for the system VM templates?
Up-to-date system VM templates would be needed anyway.

>> This way it would be up to the user to handle the upgrade and to think a
>> bit further we could also define a rollback scenario (use previous
>> template).
>>
>>
> This thread is ment to discuss varies ways to achieve the goal, I did not
> mean to propose a single way of doing it.
> Pushing an ansible inventory script (that works with all major ACS
> hypervisors) and a playbook is another option.

It is a shame, but I have no experience with "other hypervisors" in ACS,
just VMware. :( How are KVM/Xen VRs different from VMware ones; isn't
there a VR accessible by SSH (via the so-called "linklocal" IP) from the
ACS management node?

René



Re: [DISCUSS] Keeping system vms up to date

2016-02-22 Thread Rene Moser
Hi

I don't like the idea of the CloudStack management server handling the
"apt-get update && apt-get upgrade" (I am -1 on this solution), or at
least I would like to be able to disable it by configuration if we go in
this direction.

We use Ansible (what a surprise) to update the VRs and also to add some
custom patches to them. We have a dynamic inventory that gets all the
VRs with the linklocal IP as SSH host, and we regularly run playbooks
against these VRs via a Jenkins job.

This may sound a bit like a hack at first, but it has the advantage that
we are able to run the very same playbooks against our test and staging
clouds as well, which gives a good feeling.

I would like to see an API to download and activate the latest system VM
template. AFAIK it is still not solved (without touching the DB) how to
update system VM templates having the same version.

This way it would be up to the user to handle the upgrade and, thinking
a bit further, we could also define a rollback scenario (use the
previous template).

Regards
René



On 02/22/2016 09:53 AM, Erik Weber wrote:
> As of 4.6 or so, we don't really need to distribute new system vm templates
> all that often, and that is great for upgrades, but less so from a security
> perspective.
> 
> With the current approach we ship old system vm templates, with out of date
> packages, and there is currently no good out of the box way to handle that.
> 
> There is a few ways to handle it, including, but not limited to:
> 
> 1) Introduce a configuration value that specifies if you want to run
> apt-get update && apt-get upgrade on boot. This slows down deployments and
> will only get worse as times passes and there are more packages to update.
> An alternative is to specify a list of packages we _HAVE_ to keep updated
> and only update those.
> 
> 2) Package new system vms for all releases, but not bump the version number
> (or introduce a patch version number). This is ment to ensure that new
> cloud deployments are somewhat up to date, but won't update existing ones
> nor ensure that the deployment is kept up to date.
> 
> 3) Add an optional? cronjob that does apt-get update && apt-get upgrade,
> the downside is that you risk having some downtime for certain services.


Re: [PROPOSAL] LTS Release Cycle

2016-02-08 Thread Rene Moser
John,

Something is not clear to me about the frequency of new LTS releases and
the support time range.

You wrote in the proposal that we branch off a new LTS version twice a
year, but only 2 LTS versions will be actively maintained at any time,
while each is supported for 20 months.

This conflicts in my mind.

Does this mean we do not branch off twice _every_ year? Otherwise we
would have 3 releases within 12 months (1 Jan/1 Jul/1 Jan), and the
support would only be ~13 months at most, as we do not maintain 3
releases.

What am I missing?

René


On Tue, 02 Feb 2016 16:40:42 GMT, John wrote:
> All,
> 
> Based on the feedback from Ilya, Erik, and Daan, I have updated my
> original LTS proposal to clarify that LTS releases are official project
> deliverables, commit traceability across branches, and RM approval of PRs:
> 
> ## START ##
> 
> Motivation
> ==
> 
> The current monthly release cycle addresses the needs of users focused
> on deploying new functionality as quickly as possible.  It does not
> address the needs of users oriented towards stability rather than new
> functionality.  These users typically employ QA processes to comply with
> corporate policy and/or regulatory requirements.  To maintain a growing,
> thriving community, we must address the needs of both user types.
> Therefore, I propose that we overlay a LTS release cycle onto the
> monthly release cycle to address the needs of stability-oriented users
> with minimal to no impact on the monthly release cycle.  This proposed
> LTS release cycle has the following goals:
> 
>   * Prefer Stability to New Functionality: Deliver releases that only
> address defects and CVEs.  This narrowly focused change scope greatly
> reduces the upgrade risk/operational impact and shorter internal QA cycles.
>   * Reliable Release Lifetimes: Embracing a time-based release strategy,
> the LTS release cycle will provide users with reliable support time
> frames. These time frames give users a 20 month window in which to
> plan upgrades.
>   * Support Sustainability: With a defined end of support for LTS
> releases and a maximum of two (2) LTS releases under active maintenance
> at any given time, community members can better plan their commitments
> to release support activities.  We also have an agreed upon policy for
> release end-of-life (EOL) to avoid debates about continuing work on old releases.
> 
> LTS releases would be official project releases.  Therefore, they would
> be subject to same release voting requirements and available from the
> project downloads page.
> 
> Proposed Process
> 
> 
> LTS release branches will be cut twice a year on 1 Jan and 1 July, based on
> the tag of the most recent monthly release.  The branch will be named
> _LTS and each LTS release will be versioned in the form of
> _.  For example, if we cut an LTS
> branch based on 4.7.0, the branch would be named 4.7.0_LTS and the
> version of the first LTS release would be 4.7.0_0, the second would be
> 4.7.0_1, etc.  This release naming convention differentiates LTS and
> monthly releases, communicates the version on which the LTS release is
> based, and allows the maintenance releases for monthly releases without
> version number contention/conflict.  Finally, like master, an LTS branch
> would be always deployable following its initial release.  While it is
> unlikely that LTS users would deploy from the branch, the quality
> discipline of this requirement will benefit the long term stability of
> LTS releases.  Like master, all PRs targeting an LTS branch would
> require two LGTMs (one code review and one independent test), as well
> as, an LGTM from the branch RM.  A combined code review/test LGTM and an
> RM LGTM would be acceptable.
> 
> The following are the types of changes that would permitted and
> guarantees provided to users:
> 
>   * No features or enhancements would be backported to LTS release branches.
>   * Database changes would be limited to those required to address the
> backported defect fixes.
>   * Support for the release/version of the following components from the
> release on which the LTS is based throughout the entire release cycle:
> * MySQL/MariaDB
> * JDK/JRE
> * Linux distributions
>   * API compatibility for between all LTS revisions.  API changes would
> be limited to those required to fix defects or address security issues.
> 
> An LTS release would have a twenty (20) month lifetime from the date the
> release branch is cut.  This support period allows up to two (2) months
> of branch stabilization before initial release with a minimum of
> eighteen (18) months of availability for deployment.  The twenty (20)
> month LTS lifecycle would be divided into following support periods:
> 
>   * 0-2 months (average): Stablization of the LTS branch with fixes
> based on defects discovered from functional, regression, endurance, and
> scalability testing.
>   * 2-14 months: backport blocker and critical priority defect fixe

Re: LTS release or not

2016-01-11 Thread Rene Moser
Hi Remi

On 01/11/2016 04:16 PM, Remi Bergsma wrote:
> Maintaining LTS is harder than it seems. For example with upgrading. You can 
> only upgrade to versions that are released _after_ the specific LTS version. 
> This due to the way upgrades work. If you release 4.7.7 when we’re on say 
> 4.10, you cannot upgrade to 4.8 or 4.9. The same for 4.5: 4.5.4 cannot 
> upgrade to any 4.6, 4.7 or 4.8 because it simply didn’t exist when these 
> versions were released. (4.5.3 has been accounted for so that does work this 
> time). If you want to keep doing 4.5 releases 18 months from now, that’s 
> going to be a real issue. Users probably won’t understand and expect it to 
> work. And yes, we will change the upgrading procedures but it’s not there yet.

Out of curiosity: I thought about patch releases for LTS using a scheme
like 4.5.2.x. That would work, right?

Regards
René


Summary: -1 LTS

2016-01-11 Thread Rene Moser
LTS by the community is not an option for now:

Most participants in the threads (users and devs) had concerns or are
skeptical about how it can be done in practice.

As we recently changed the release process, it seems too "early" to
change it again or to add new processes to it.

I still think CloudStack needs some kind of LTS to serve business needs,
but I am unsure whether _we_ as a community should do it.

Thanks for participating.

Regards
René





Re: LTS release or not

2016-01-11 Thread Rene Moser
On 01/11/2016 02:28 PM, Nux! wrote:

> What lifetime are we talking rougly for an LTS release? 6 months, 12 months?

I thought about 18 months, with a new LTS defined after 12 months.

So users have a 6-month time frame to switch from LTS to LTS.


Re: [DISCUSS] Move to Github

2016-01-11 Thread Rene Moser

On 01/11/2016 10:56 AM, sebgoa wrote:

> this is exactly what "moving to github" would mean.
> if we agree to do this, we then need to work with infra and the board to make 
> sure everything is ok in terms of provenance and that it does not "break" our 
> ASF "commitment"

I see. Thanks for info.




Re: [DISCUSS] Move to Github

2016-01-11 Thread Rene Moser
Hi Sebastien

On 01/11/2016 09:53 AM, Sebastien Goasguen wrote:
> Part 3:
> 
> 
> To me the main issue for us is that our current privileges on GitHub prevent 
> us from building more productive CI workflow and makes the life of the RM 
> more difficult (cannot use labels, cannot use issues, cannot configured 
> triggers/hooks etc). ...

Out of curiosity: it seems the main problem is that we are using the
apache GitHub organization.

Why don't we create our own one?

e.g. github.com/cloudstackdev

--> PRs get merged on github.com/cloudstackdev
--> a hook triggers a Jenkins job pushing to http://git.apache.org
--> which is mirrored to https://github.com/apache/cloudstack

Did I miss anything that would make this impossible?

The only drawback would be that we could not push to http://git.apache.org
directly anymore.

Regards
René


Re: LTS release or not

2016-01-10 Thread Rene Moser


On 01/10/2016 11:46 PM, Erik Weber wrote:

> What if the fix is part of a refactorization or a new feature?
> Providing a LTS is not 'easy as pie' with a product like CloudStack where a
> lot of code changes over time.

Didn't say it's easy :)

Yes, refactoring is one of the unsolved "problems". That is why I wrote
that squashing is evil for bug fixes.

It is important that devs fix the bug first, commit, and only then do
the refactoring commits.

If the devs fixed it by "mistake" somewhere in a past refactoring, then
we can decide whether we can re-implement the fix without the
refactoring (so that it would be "obvious and small"). LTS would not
mean we fix all the bugs.

> For instance, /if/ the strongswan feature is merged to 4.8, how to you
> handle /ANY/ VPN fixes in 4.5 since they don't even use the same software?
> And the whole VR process was refactored in 4.6 -- meaning you won't be
> using the same scripts, or even the same language.

Yes, we must decide from fix to fix how, and whether, to backport.

> Even if a bugfix is included in master and tested, it is impossible to say
> how it will react with an older/different solution.

It depends; for sure, testing must also be done.

> The same can be said for library updates etc.
> 
> Don't get me wrong, I'm not opposing a LTS version -- actually I would
> rather like to have one. I just want to be clear about the fact that it
> won't be always be easy, and not all fixes might be possible to backport --
> depending on how strict you'll be with 3rd party stuff.

That is fine, I am aware of that.

>> * Fix must be important.
>>
> 
> Who defines what 'important' is?

"must be important" means we do not backport trivial things like typos
in docs and so forth, only important things. And I would say important
in a common sense. But it doesn't mean that all important fixes will be
backportable, because they may not be necessary "obvious and small".


> On a last note, doing LTS -- atleast in my opinion -- requires commitment.
> Anything called LTS is usually getting a lot of user focus/traction and
> have to be rock solid and maintained.

That is why I started this thread: I would like to see a commitment that
_we_ as a community provide an LTS for those who need it.


Re: LTS release or not

2016-01-10 Thread Rene Moser

On 01/10/2016 10:07 PM, Wido den Hollander wrote:
> Ok, understood. However, it will be up to users on their own to pick
> this LTS maintainment up.

It would be up to the devs to keep fixes small (so no squashing of
fixes) and to notify whoever maintains the LTS version if they feel the
fix is important enough to be in LTS. That wouldn't be that much work.

> That means testing, releasing, testing, backporting, testing, releasing.
> 
> Certain users will focus on getting new releases out and others on doing
> LTS work.

The process of backporting is not defined yet, but I would like to adopt
the Linux kernel long term policy:

* Fix must be already in mainline
* Fix must be important.
* Fix must be obvious and small.

Which means, we only fix stuff in LTS which is already fixed in
mainline. Important stuff only.

We can even require that a mainline version containing the fix must be
released before it gets into LTS. That way the LTS releases would lag
behind the mainline releases, and the fix would already have been tested
in mainline.


Re: LTS release or not

2016-01-10 Thread Rene Moser
Daan

I have not yet decided which version, but fixes will be backported into
LTS, not the other way around.

But I see what you mean. The code base may have diverged a lot before
4.7, right?

It is not really a problem. It only means more work (argh...). Sooner or
later this will happen for every release we choose.

I would like to use 4.5 for several reasons; one obvious one is that we
know it is in production on several clouds, including ours.


On 01/10/2016 12:40 PM, Daan Hoogland wrote:
> Rene, I would advice to support 4.7 as LTS. It adheres to the new
> development/release process unlike 4.5 and any bugfixes there can
> automatically be merged forward to newer releases to reduce the chance of
> regression.
> 
> I am in favour of the general concept.
> 
> On Sun, Jan 10, 2016 at 12:12 AM, Rubens Malheiro wrote:
> 
>> +1
>> On Jan 9, 2016, 8:55 PM, "Rene Moser" wrote:
>>
>>> Hi
>>>
>>> I recently started a discussion about the current release process. You
>>> may have noticed that CloudStack had a few releases in the last 2 months.
>>>
>>> My concerns were that many CloudStack users will be confused about these
>>> many releases (which one to take? Are fixes backported? How long will it
>>> receive fixes? Do I have to upgrade?).
>>>
>>> Which leads me to the question: does CloudStack need an LTS version? To me
>>> it would make sense in many ways:
>>>
>>> * Users in restrictive cloud environments can choose LTS for getting
>>> backwards compatible bug fixes only.
>>>
>>> * Users in agile cloud environments can choose latest stable and getting
>>> new features fast.
>>>
>>> * CloudStack developers must only maintain the latest stable (mainline)
>>> and the LTS version.
>>>
>>> * CloudStack developers and mainline users can accept, that mainline may
>>> break environments but will receive fast forward fixes.
>>>
>>> To me this would make a lot of sense. I am actually thinking about
>>> maintaining 4.5 as a LTS by myself.
>>>
>>> Any thoughts? +1/-1?
>>>
>>> Regards
>>> René
>>>
>>
> 
> 
> 


Re: LTS release or not

2016-01-10 Thread Rene Moser
Hi Wido

On 01/10/2016 08:23 PM, Wido den Hollander wrote:
> I personally am against LTS versions. If we keep the release cycle short
> enough each .1 increment in version will only include a very small set
> of features and bug fixes.
> 
> In the old days it took months for a release, if we bring that back to
> weeks the amount of changes are minimal.

The current release process is fine! We don't want to change that! No way!

It fits the needs of those CloudStack users who want features fast with
a minimum of risk.

> You can then decide to always stay behind 3 months on the releases or
> suddenly make a jump if you want to.

No, unfortunately some users cannot do that (yet). Staying 3 months
behind on fixes (security!) is not an option. In my case it would be
even longer, 6-12 months. For those users, an _additional_ LTS version
would be the way to go.

> In my perspective clouds are agile and they should be developed that way.

I fully agree regarding development: development could become even more
agile, and releases can be agile as well. The problems come when the
application goes into operation.

If operations are not ready for agile deployment, those users will be
left behind.

The only thing we need to do is backport fixes to a separate LTS branch.
That's it.

Only fixes, obvious fixes.

> We should however simplify the upgrade even more:
> - Separate database changes from code changes (proposed already)
> - Put the VR in a separate project

Yes, all fine with that.

Again. I do not want to slow down development and releases.

An additional LTS version could even help agile development, because you
can always rely on the argument:

If you want the most stable CloudStack, use LTS. If you want to get
features fast, use mainline. That is the trade-off.



LTS release or not

2016-01-09 Thread Rene Moser
Hi

I recently started a discussion about the current release process. You
may have noticed that CloudStack had a few releases in the last 2 months.

My concern is that many CloudStack users will be confused by these many
releases (which one to take? Are fixes backported? How long will it
receive fixes? Do I have to upgrade?).

Which leads me to the question: does CloudStack need an LTS version? To me
it would make sense in many ways:

* Users in restrictive cloud environments can choose LTS for getting
backwards compatible bug fixes only.

* Users in agile cloud environments can choose latest stable and getting
new features fast.

* CloudStack developers must only maintain the latest stable (mainline)
and the LTS version.

* CloudStack developers and mainline users can accept that mainline may
break environments but will receive fixes fast.

To me this would make a lot of sense. I am actually thinking about
maintaining 4.5 as a LTS by myself.

Any thoughts? +1/-1?

Regards
René


Re: Minor releases!

2016-01-07 Thread Rene Moser
On 01/07/2016 05:04 PM, sebgoa wrote:

> Yet folks (like Rene) may like a pattern of just minor and very infrequent 
> major. While folks like Remi want continuous deployment.
> 
> So at the cost of sounding a bit "fatherly" we indeed need to discuss this a 
> bit. I mentioned it after 4.6, and communicate clearly to everyone how this 
> works.
> 
> Keeping in mind that we don't want to abandon anyone and we want everybody to 
> be able to upgrade easily.

Hmm, no, that is not quite right.

I am not against frequent majors, but my use case needs releases to be
maintained for longer than until the next major release.

So I think the best solution would be an LTS release every 12 months,
maintained for 18 months, for users having such conditions.

I'll get in touch with Rohit.

Regards
René


Re: Minor releases!

2016-01-07 Thread Rene Moser
On 01/07/2016 05:27 PM, Remi Bergsma wrote:

> Anyway, I cannot and don’t want to convince you. We want something different 
> and that is fine. What I do want to know is what others want. Because if the 
> majority wants what you are asking for, we should do that. 

It is not my decision, it is a condition.

In a perfect world we could support both use cases.

The more I think about it, the more I think a CloudStack LTS version
would be what I need.

Thanks again, and don't feel offended. That was not my intention.

Kind Regards
René


Re: [DISCUSS] Move to Github

2015-12-19 Thread Rene Moser


On 12/19/2015 07:57 PM, Remi Bergsma wrote:

> I disagree with testing based on complexity. You simply cannot know the 
> implications upfront, as that is why you run the tests. What seems small, can 
> break it all.
> 
> Example:
> This commit seems an easy_fix, right? Just a findbugs issue resolved.
> https://github.com/apache/cloudstack/commit/6a4927f660f776bcbd12ae45f4e63ae2c2e96774

Ok, the definition of easy for _me_ is: fixing a typo in a debug log
output, adding license headers, fixing a comment, adding documentation.

So we just have minor and major; fine for me. But would you run
integration tests for adding a missing license header?

> It was just merged indeed, exactly as you propose. But it did cause a major 
> outage. And I don’t even use HyperV.
> Details: https://github.com/apache/cloudstack/pull/761
> 
> This is why all PRs to 4.6 and 4.7 (200+) that we merged over the last couple 
> of months were tested against a real cloud. No easy fixes and minor changes. 
> We always need to run the full integration tests.

This would be the best case (and good job, btw!), but for how long can
you keep this up without automation? We need automated tests and, if
possible, full-blown automated integration testing. I agree.

> When new functionality is proposed, there aren’t many people willing to write 
> unit and integration tests to cover it. Until that changes, testing can only 
> guard whatever the tests cover. And when we merge new stuff without tests, 
> the total coverage goes down making the tests less relevant. In fact, when we 
> resolve a bug we should write a tests along with it. I know of one guy that 
> does that on a regular basis.

It is ~2016. No excuse. I know we cannot test everything; we are dealing
with hardware. But I would rather say "merge it, it is covered by tests
and they passed" than "merge it, it has 2 LGTMs".

> It’s not so simple as it seems unfortunately.

It has never been simple, and I didn't say so, but it should be our goal
to get there. Right?

Regards
René


Re: [DISCUSS] Move to Github

2015-12-19 Thread Rene Moser
Hi Seb

On 12/19/2015 10:12 AM, sebgoa wrote:

> Late October I started thread [1] about moving our repo to GitHub, I would 
> like to re-open this discussion.
> 
> Now that we have stabilized master and release 4.6.0, 4.6.1, 4.6.2 and 4.7.0 
> we need to think about the next steps.
> 
> To me Git and GitHub has become an essential tool to any software 
> development, not using it to its full potential is hurting us.
> 
> Just as an example I would like to point you to [2], this a PR I made to 
> Kubernetes (a container orchestrator), it literally added 14 characters in a 
> json file.
> This was really a very minor change.
> 
> The PR automatically triggered 3 bots which created 7 labels, it ran end to 
> end testss, Jenkins jobs and triggered third part builds.
> It was automatically merged.

I am fine with moving to GitHub.

But IMHO the git hosting is not the problem; the problem is how far we
trust the current tests and how we can improve them.

Moving to GitHub doesn't improve testing. Doing manual tests is okay but
hard work, and it does not speed things up.

We need fully automated unit _and_ integration tests that we trust. I do
not trust mocking and simulated infrastructure.

We discovered most of the major problems running CloudStack on real
hardware in real-world scenarios: race conditions, unexpected VR
reboots, VMs not getting IPs from DHCP, etc.

Rating the complexity of changes: easy_fix, minor_change, major_change.

Running tests according to complexity:

- easy_fix: just merge it.
- minor_change: unit and simulator tests passed.
- major_change: the full-blown integration testing.

IMHO we should work on testing so solid that development is fun, merging
is a click, and releasing is a breeze.

Just my 2 cents.

Regards
René







Re: [DISCUSS] ACS 4.5.3 release

2015-11-20 Thread Rene Moser
Hi Rohit

We at SWISS TXT would appreciate a 4.5.3 release asap; it already
contains the fix https://github.com/apache/cloudstack/pull/922, which
finally lets us activate DRS in our VMware setup. A huge gain in
performance and reliability.

Another one related to the VR has _not_ yet been merged:

https://github.com/apache/cloudstack/pull/1045

Furthermore, the language files should be updated; I worked on the
German translation lately. If you could take care of that, that would be
great. Thanks!

Yours
René


On 11/20/2015 07:27 AM, Rohit Yadav wrote:
> Hi all,
> 
> I want to ask how happy people are with the last 4.5.2 release and if
> there are any issues they want to report or want to be fixed in a future
> minor release. If we’ve enough demand, we can work towards a last 4.5
> minor release. Thanks.
> 
> Rohit Yadav


cloudstack vulnerable by COLLECTIONS-580?

2015-11-10 Thread Rene Moser
Hi

This security issue came to my attention:
https://issues.apache.org/jira/browse/COLLECTIONS-580

See
http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
for more background information.

I am not sure whether CloudStack is affected; at least we have a
dependency on the vulnerable lib:

 $ grep -Rl InvokerTransformer .
./plugins/hypervisors/kvm/target/dependencies/commons-collections-3.2.1.jar
./client/target/cloud-client-ui-4.5.2.war
./client/target/cloud-client-ui-4.5.2/WEB-INF/lib/commons-collections-3.2.1.jar
./usage/target/dependencies/commons-collections-3.2.1.jar
./agent/target/dependencies/commons-collections-3.2.jar
./engine/service/target/engine/WEB-INF/lib/commons-collections-3.2.jar

Thanks for clarification.

Yours
René


Re: git tags on cloudstack repo

2015-11-05 Thread Rene Moser
Hi

On 11/05/2015 09:52 AM, Remi Bergsma wrote:
> Guys,
> 
> I accidentally pushed the 4.6.0 tag and I shouldn't have done that before it 
> is final. Sorry! 

Ok then! It was not meant as criticism at all, Remi, don't get me wrong.
Removing it would be fine with me.

Yours
René




Re: git tags on cloudstack repo

2015-11-05 Thread Rene Moser
On 11/05/2015 09:45 AM, Erik Weber wrote:

> Unless I'm seeing wrong, we have a branch, and it's called

I am not seeing it on GitHub yet, but I do see it here:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=tags



git tags on cloudstack repo

2015-11-05 Thread Rene Moser
Hi

I have a question about the tagging policy in the CloudStack git repo:

We now have the tag 4.6.0, but we call it an RC. This raises two problems:

1st: From the public perspective, this is a released version. How can
anyone not involved in the project see whether this is an RC or not?

2nd: Tags do not seem to be persistent. If we decide to do another RC,
the tag will be moved to another commit. If we test "4.6.0", we cannot
say which version has been tested.

Proposal: we adopt the well-known versioning model "semantic
versioning", http://semver.org/.

This means the RC would have been named 4.6.0-RC20151104T1522 or 4.6.0-rc.1.
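
For example, creating and publishing such a tag would be the usual git
workflow (tag names as proposed above):

  $ git tag -a 4.6.0-rc.1 -m "CloudStack 4.6.0 RC1"
  $ git push origin 4.6.0-rc.1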

Did I miss anything? Any thoughts?

Regards
René


Re: [DISCUSS] Move to GitHub

2015-10-26 Thread Rene Moser
Hi

On 10/26/2015 10:59 AM, Wido den Hollander wrote:
>> But, I thinks that would be better to keep the control of the source
>> code repo. This is the core of your work.
>>
> 
> Agree. I love Github and it works great, but make it THE primary source
> for the source code and go away from ASF? Well, I'm not sure.
> 
> True, Git is distributed and that's cool, but I think we should make
> sure that as a project we aren't depended on Github.

Same here, love GitHub but go away from ASF...? Not sure.


[BLOCKER] CLOUDSTACK-8848

2015-09-26 Thread Rene Moser
I discovered the race condition bug related to CLOUDSTACK-8848 while
testing in our lab, and Daan started the PR
https://github.com/apache/cloudstack/pull/829 for discussion.


But it turned out to be a dead-end discussion. Daan and I started a
debug session on Friday a week ago and discovered the real problem, but
it was unclear how it could be solved. Daan was off from the next day on.


After another discussion with @anshul1886 started at 
https://github.com/apache/cloudstack/pull/829#issuecomment-141613687 he 
brought me to the solution I created in 
https://github.com/apache/cloudstack/pull/885.


The related comment from anshul:

>From code it seems to be getting updated and DB also suggests that.
>It will not be updated if there is no power change for 
>MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT. But that is to reduce DB 
>transactions and will not create issues as it is updated if there is 
>change in power state.


This means all the calculation of how to handle a missing power state is
based on an outdated DB record, due to a DB transaction optimization.


My change makes sure that if we detect an outdated record, we reset the
counter so that we get new state updates.


In the worst case (if the VM is really missing), the handling of missing
state updates is postponed to the next missingStateReport. So to me,
this is a really safe way to fix this issue.


I patched our lab environment, where we discovered the race condition in
the first place, and we didn't see the bug happen again.


You can find the logs here https://github.com/apache/cloudstack/pull/885 
attached to the PR.


It isn't easy to test; I had to learn when to start a VR migration to
hit the race condition. That is why I am writing this message: to show
you that I tested it under real-world conditions.


Yours
resmo



Re: [DISCUSS][PROPOSAL]missing power state reports from hypervisors on VMs ([BLOCKER]?)

2015-09-16 Thread Rene Moser

On 09/16/2015 11:46 AM, Anshul Gangwar wrote:
> It’s not difficult to find a good grace period. It will simply depend on your 
> Hypervisor settings how it is configured for HA. You can easily figure out 
> for how much time there will be no VM on any Host from your settings and 
> simply put 2-3 times of that period as grace period.

I still think it would not be as easy as it seems in every case, but to
stay constructive, I would like to give it a shot.

René




Re: [DISCUSS][PROPOSAL]missing power state reports from hypervisors on VMs ([BLOCKER]?)

2015-09-16 Thread Rene Moser

Hi Anshul

On 09/16/2015 10:17 AM, Anshul Gangwar wrote:
> Currently we report only PowerOn VMs and do not report PowerOff VMs that's 
> why we consider Missing and PowerOff as same And that's how most of the code 
> is written for VM sync and each Hypervisor resource has same understanding. 
> This will effect HA and many more unknown places. So please do not even 
> consider to merge this change.
> 
> So Now coming to bug we can fix that by changing global setting pingInterval 
> to appropriate value according to hypervisor settings which takes care of 
> these transitional period of missing report here or can be handled by 
> introducing gracePeriod global setting.

This is interesting; I also wrote in the bug report that the gracePeriod
calculation might be related:
https://github.com/apache/cloudstack/blob/4.5.2/engine/orchestration/src/com/cloud/vm/VirtualMachinePowerStateSyncImpl.java#L110

IMHO making this value configurable might solve it, but it is hard to
"guess" what a good grace period would be.

In the case of VMware, it depends on the number of ESX hosts in the
clusters, and they can differ.

But another question is why there should be one _global_ grace period
for every hypervisor. Think about it: users can have mixed hypervisor
setups.

So to me, a global grace period setting might not be the best solution;
instead we should take hypervisor functionality into account. In this
case, VMware handles HA by itself.

I know a VR in 4.5 would be broken after a VMware HA event, but there is
another global setting which can be enabled, if you like, to restart
routers after out-of-band migrations.

So, for 4.5, I am +1: Daan's patch makes sense if the hypervisor is
VMware.

Yours
René



Re: [DISCUSS] Let's fix CloudStack Upgrades and DB migrations with CloudStack Chimp

2015-08-31 Thread Rene Moser
+1!

On 08/28/2015 08:51 AM, Rohit Yadav wrote:
> Hi all,
> 
> Some of us have discussed in the past on fixing CloudStack’s upgrade and
> DB migration, I’m trying to explore if we can really fix this. Please
> review, advise changes, add suggestions along with your upgrade
> experience so we can improve this in near future starting with ACS 4.7
> or above:
> 
> http://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+Chimp
> (note: this wiki will keep changing depending on the discussions and
> feedback)


[Discuss] separate ML for PR and build notification?

2015-07-13 Thread Rene Moser
Hey

Since we "rebuild" our communication stack (slack, irc, ML, ...) I would
bring up some discussion about the "noise" in the dev ML.

I like to be in dev ML but I am not that interested in notifications
about builds on PR, PR comments, and Jenkins builds.

I am suggest to make a seprate ML for those who really want it.

Any thoughts?

Yours
René



Re: [DISCUSS] LTS releases?

2015-07-03 Thread Rene Moser
Sebastien,

So wouldn't it be nice to make clear which release is still supported
and which release is not?

On 03.07.2015 09:20, sebgoa wrote:

> I think we got in a situation with 4.4 that called for us to keep maintaining 
> 4.3….and even after 4.5 was released. Because 4.3 was seen as a good release.

You are saying 4.3 is a good release; shouldn't it be maintained a bit
"longer"?

So currently we have:

main: 4.5.x
stable: 4.4.x
legacy: 4.3.x
deprecated: 4.2.0

When 4.6 is released, which release should be dropped? 4.4.x?

main: 4.6.0
stable: 4.5.x
legacy: 4.3.x

What is your plan for this?

Yours
René



[DISCUSS] LTS releases?

2015-07-02 Thread Rene Moser
Maybe a little bit off topic for the new release process, therefore a
new thread...

Speaking about releases, I just thought about supporting LTS releases.

This would mean "someone" or "we" make a commitment to add bug fixes
(only) for a specified time, e.g. 2 years per release, or until the next
LTS release?

Would this be something anyone is interested in?

Yours
René


Re: [DISCUSS] Release principles for Apache CloudStack

2015-07-02 Thread Rene Moser
Hi Remi

On 02.07.2015 13:46, Remi Bergsma wrote:
> I talked to several people over the past weeks and wrote this wiki article:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack
>  
> 
>  
> 
> If you like this way of working, I volunteer to be your RM :-)

It is always good to see this release process, which looks more or less
identical to the successful Linux kernel release process. :)

The only thing I am curious about, if I get it right:

Why do you branch off for an RC? Why not just "tag" a commit on the
master branch as the RC, and branch off from the release tag vx.y once
it is released?

Because it is unlikely you will make any commits on that branch anyway.
Or did I miss anything?

Yours
René





Re: IRC and Slack

2015-07-02 Thread Rene Moser
Hi

On 02.07.2015 10:39, Sebastien Goasguen wrote:
> We need to take a decision here. Shall we officially abandon IRC and put a
> notice there that points towards Slack?

-1 for abandoning IRC.

* IRC is simple and easy, well known and distributed. Not every question
fits IRC, and that's ok; if it gets too complicated, the ML is preferred.

* Freenode IRC is the conventional way to get in direct contact with the
devs of any open source project without needing an invitation or email
registration, and it has saved my ass several times...

Yours
René


RFH: CLOUDSTACK-6276: project support in affinitygroups

2015-06-30 Thread Rene Moser
Hi

I started to work on adding project support to affinity groups.

You can see the commits in the PR
https://github.com/apache/cloudstack/pull/508.

There is still some work to do, like fixing the current unit tests,
adding new unit tests, and testing it in general.

I don't think I can work on it in the next 2-3 months, but I would
appreciate it if project support worked.

So I would like to ask for help to get it done. Any help is welcome!

Thanks
René


Re: [PROPOSAL] Commit to master through PR only

2015-06-26 Thread Rene Moser
Hi

On 25.06.2015 16:38, Sebastien Goasguen wrote:
> - Only commit through PR will land on master (after a minimum of 2 LGTM and 
> green Travis results)
> - Direct commit will be reverted
> - Any committer can merge the PR.

That's the way I'm used to working. That's fine! :)

One technical benefit is that master should always be "stable" and
"release ready", because PRs will be built and tested before being merged.

Yours
René



Re: CloudStack Ansible Role

2015-06-17 Thread Rene Moser
On 06/17/2015 03:01 PM, Rene Moser wrote:
> https://github.com/resmg/ansible-role-cloudstack.

Should be > https://github.com/resmo/ansible-role-cloudstack


CloudStack Ansible Role

2015-06-17 Thread Rene Moser

Hi

Paul Angus already made an effort to cover the installation of
CloudStack using Ansible in the docs:
http://docs.cloudstack.apache.org/en/master/ansible.html Thanks!

But there are some styling issues, some parts of it are deprecated, and
the docs might not be the best place for pasting a "playbook", because
users have to copy and paste it.

I thought it would make perfect sense to create a complete,
best-practice, fully tested Ansible role for installing CloudStack.

I created the skeleton in my GitHub account
https://github.com/resmg/ansible-role-cloudstack.

Role features:
--
- No hard-coupled dependency on other roles (DB installation will be an
optional opt-out, to let users use their special roles for Galera
clusters and so forth)
- Install and upgrade CloudStack environments (ACS management, DB, KVM
hosts, XEN hosts, ...)
- Fully tested
- Debian, Ubuntu and CentOS
- Apache License (of course)

Goals:
--
The role can be used in production for managing CloudStack installations
as well as for testing installations and upgrades.

You will also be able to create Docker boxes with the help of Ansible's
docker module and this role.

Further, in Ansible 2.0 there are already 16 Ansible CloudStack modules
for accessing the API. So a fully configured CloudStack environment using
just Ansible is not far away. This would put deep integration testing
just a command and a few playbooks away.


Further:
--
It would be nice to have this role under the "apache" GitHub namespace,
to be able to also put it under the "apache" namespace on
https://galaxy.ansible.com/ (the role index). But we will look into this
when it is ready.

Yours
René


Re: 4.6

2015-06-08 Thread Rene Moser
Hi Seb

On 08.06.2015 21:43, Sebastien Goasguen wrote:

> We need to freeze 4.6 asap.

Oh boy, you are way too fast :)

How many days do I have to finish
https://github.com/apache/cloudstack/pull/264 ?

And a more serious issue: the changes made in
https://issues.apache.org/jira/browse/CLOUDSTACK-7994 are IMHO still in
the master branch; this would be a blocker for us. Also see
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201506.mbox/%3C2D22E663-AFC2-4292-B16A-F6C2FCA0C217%40shapeblue.com%3E

I created a ticket https://issues.apache.org/jira/browse/CLOUDSTACK-8545

Yours
René


Re: [DISCUSS] Out of Band VR migration, should we reboot VR or not?

2015-06-04 Thread Rene Moser
Hi

On 04.06.2015 14:29, Rohit Yadav wrote:
> Hi,
> 
>> On 04-Jun-2015, at 11:05 am, Remi Bergsma  
>> wrote:
>>
>> To summarise:
>> #1. rebooting VR is needed for hypervisors that have their own DR (like 
>> VMware and Hyperv) as a restart outside of CloudStack makes it lose its 
>> config hence the VR is unavailable
>> #2. rebooting is NOT needed for successful live migrations on _any_ 
>> hypervisor (since there was no restart everything still works)
>> #3. CloudStack 4.6 has persistent config in VR, so rebooting is never needed

> On 4.4 and 4.5 branches, if we revert it than it will break #1 and if we keep 
> it - it breaks #2. We need to fix this in a hypervisor agnostic way.

#1 is something we are already aware of. So I vote for a revert.

We can currently live with it, because

1. Live migration in VMware rarely fails for such a small VM with so
little I/O and such a low amount of RAM.

2. We monitor router reboots (uptime), and we restart the router manually
in CloudStack if this happens.

Just a silly question: why not implement detection of an out-of-band OS
reboot on the router in 4.4 and 4.5, looking for

if uptime < last_uptime ? restartRouter

as in the sketch below?
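
To make the idea concrete, a minimal sketch, assuming the check runs
periodically and the last seen uptime is persisted between polls; the
function names are illustrative, not CloudStack code:

    # Illustrative sketch only, not CloudStack code: detect an
    # out-of-band reboot by watching the VR's uptime counter go
    # backwards between polls.

    def read_uptime_seconds(path="/proc/uptime"):
        # The first field of /proc/uptime is seconds since boot.
        with open(path) as f:
            return float(f.read().split()[0])

    def rebooted_out_of_band(last_uptime, current_uptime):
        # Uptime going backwards means the VR rebooted outside of
        # CloudStack's control since the last poll.
        return current_uptime < last_uptime

    if __name__ == "__main__":
        last_uptime = 86400.0  # assumed: value persisted from the previous poll
        current_uptime = read_uptime_seconds()
        if rebooted_out_of_band(last_uptime, current_uptime):
            print("uptime went backwards: trigger restartRouter")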


Re: [DISCUSS] Out of Band VR migration, should we reboot VR or not?

2015-06-03 Thread Rene Moser
Hi

On 03.06.2015 17:06, Ian Southam wrote:
> If the machine crashes and/or rebooted during the oob migration by a party 
> that is not the orchestrator, (read vCenter) then the rules will be lost. 

I fully agree that a failing live migration would cause a reboot. So
what? Then we blame VMware, the orchestration reboots the VR, and
everything is fine. This would cause only seconds of outage.

But then the missing persistence of the iptables rules would be the
problem, not the live migration task, right?

We should fix the persistence of the rules across reboots and not try to
be more clever than the hypervisor cluster orchestration.

Just my 2 cents.
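
To make that concrete, a minimal sketch of what persisting the rules
could look like, assuming a Debian-like VR with the standard
iptables-save/iptables-restore tools; this is not the actual CloudStack
implementation, and the rules file path is an assumption:

    import subprocess

    RULES_FILE = "/etc/iptables/rules.v4"  # assumed path, not CloudStack's

    def save_rules():
        # Dump the current rule set to a file that survives a reboot.
        rules = subprocess.run(["iptables-save"], check=True,
                               capture_output=True, text=True).stdout
        with open(RULES_FILE, "w") as f:
            f.write(rules)

    def restore_rules():
        # Re-apply the saved rule set, e.g. from a boot script.
        with open(RULES_FILE) as f:
            subprocess.run(["iptables-restore"], check=True, stdin=f)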






RE: [DISCUSS] Out of Band VR migration, should we reboot VR or not?

2015-06-03 Thread Rene Moser
Sorry for not answering in the thread; I was not on the dev ML, so I
could not reply.

I reported this current behavior as an issue on the user ML and wanted
to ask Koushik Das about his experiences.

I would not agree: in a VMware environment, live migrations, e.g. via
VMware DRS, do not normally break iptables. In my opinion, that would
make DRS pointless. And if it happens, it would be uncommon.

We do live migrations daily with VMs that have iptables rules, and I
have not seen such behaviour on any of these VMs.

Could you share more information about your experiences, Koushik Das?
Under what conditions did this happen?

In any case, I would love to test DRS live migrations of the VR without
the current behavior. Either way, with the current behaviour we would
have a lot of downtime.

The other "solution" would be reapplying the rules without reboot, I am
not fully aware of the new behaviour af aggration but wouldn't this also
cause a network outage?

Yours
René





Re: [UPDATE] 2015 progress

2015-05-19 Thread Rene Moser
Hi Seb!

Great to see this progress! One little note from the ansible side:

On 19.05.2015 15:38, sebgoa wrote:
> 4-Ansible
> The ansible cloudstack module from Rene Moser is being merged in as a core 
> Ansible module.
> Expect it in Ansible 2.0

There are currently 11 Ansible modules [1] for CloudStack, and still a
few more in the pipeline; hopefully they will all get into the Ansible
2.0 release. Most of the modules are integration tested [2].

BTW, my goal is to be able to set up and configure a full CloudStack
cloud environment from zero to hero, incl. users, accounts, networks,
hosts, zones, routers, etc. with Ansible.

[1]
https://github.com/ansible/ansible-modules-extras/tree/devel/cloud/cloudstack

[2]
https://github.com/ansible/ansible/blob/devel/test/integration/cloudstack.yml.






Re: Preparing for 4.6

2015-05-18 Thread Rene Moser
Hi Stephen

On 18.05.2015 10:39, Stephen Turner wrote:
> In my XenCenter dev team at Citrix, we have the policy of requiring a ticket 
> number on every commit. If we find a bug and there isn't already a ticket, we 
> create a ticket before committing the fix. I guess I've just dug through 
> history too many times to understand why something that appears wrong was 
> done, only to find an inadequate description at the end of the trail.

IMHO, understanding why commit x changed something is more a problem of
the commit message or description.

If I pick a random fix commit in the Linux kernel,
https://github.com/torvalds/linux/commit/5c1ac56b51b9d222ab202dec1ac2f4215346129d,
you see a small fix and a detailed description of why it was made.

Personally, I do not like depending on a network-bound, centralized
ticket tracking system to understand the code changes of a distributed SCM.

But I do see the advantage of seeing what has been reported as broken
and which commit fixes the bug. Still, the commit description should
explain in fair detail why a commit changed something.

However, since the user-facing changelog is not actually generated from
the git log, it makes total sense to open a ticket before fixing bugs,
to get all fixes covered in the changelog.

Yours
René





Re: Preparing for 4.6

2015-05-18 Thread Rene Moser
Hi

On 15.05.2015 11:27, Sebastien Goasguen wrote:
> Folks,
> 
> As we prepare to try a new process for 4.6 release it would be nice to start 
> paying attention to master.
> 
> - Good commit messages

The question is, what makes a commit message good? Maybe this helps:

http://chris.beams.io/posts/git-commit/

> - Reference to a JIRA bug

Must there be a JIRA bug? I made some commits without JIRA bugs in the
past, but I noticed that those are not "tracked" in the changelog of the
new release. So should there be a policy (is there one?) that there must
be a JIRA bug for fixes?

> - Squashing commits ( cc/ wilder :))

This really depends; I would not prefer squashing commits as a general rule.

The example at
https://github.com/apache/cloudstack/commits/master?page=2 is more an
example of "bad" commit messages.

If you look at the commits, they make sense, but the commit messages
suggest they cover similar work on different aspects, which they
actually don't.

But if you look at this example,
https://github.com/ansible/ansible-modules-extras/commits/devel?author=gregdek,
where you can see dozens of similar commits, those should be squashed.

Yours
René





Re: [VOTE] Apache Cloudstack 4.5.1

2015-05-07 Thread Rene Moser
Hi Rohit

On 07.05.2015 15:09, Rohit Yadav wrote:
> Hi Rene,
> 
> The issue you had reported is reproducible when you have a 4.2.1 or 4.3.2 
> cloud database but a 4.5.1 or 4.5.0 cloud_usage database, and you try to 
> upgrade to ACS 4.5.1 that fails for cloud_usage database (since it’s already 
> 4.5.1). This could probably be the case in your environment as well that 
> caused those failures for you. I tested db upgrades today from ACS and CCP 
> 4.2 and 4.3 versions to ACS 4.5.1 on MySQL 5.5 and 5.6, and I think they work 
> out of the box with no changes needed.

Thanks for your investigations!

Since I could not reproduce it a second time, your analysis makes total
sense; as you mentioned before (and I had overlooked), the tables were
different. So in any case, my error report can be ignored and closed as
a false positive. I can't wait to see 4.5.1 in action! Thanks!

René


Re: [VOTE] Apache Cloudstack 4.5.1

2015-05-05 Thread Rene Moser
Hi

I am not able to reproduce it anymore.

Vote 0 from me!

Yours
René

On 5 May 2015 at 17:02:47 CEST, Marcus wrote:
>This is the sort of thing that I'd personally not -1, unless we can
>prove
>that it's a regression. If the files were released in 4.5.0 and haven't
>been modified, I'd prefer to ship some bugfixes rather than trying to
>fix
>all known bugs before shipping.
>
>On Tue, May 5, 2015 at 6:47 AM, Daan Hoogland 
>wrote:
>
>> Rene, but it is a problem in the release. It sound very strange that
>it
>> would not have been caught in any of the 4.4 releases or in 4.5.0. I
>am
>> hesitant but a -1 is on the surface of my keyboard.
>>
>> Op di 5 mei 2015 om 13:03 schreef Rene Moser :
>>
>> > Hi
>> >
>> > Tested an update from 4.2.1 to 4.5.1 which failed because of 2
>identical
>> > ALTER TABLE statements for cloud_usage in schema-421to430.sql and
>> > schema-430to440.sql
>> >
>> >
>> >
>>
>https://github.com/apache/cloudstack/blob/4.5-RC20150504T1217/setup/db/db/schema-421to430.sql#L787
>> >
>> >
>> >
>>
>https://github.com/apache/cloudstack/blob/4.5-RC20150504T1217/setup/db/db/schema-430to440.sql#L464
>> >
>> > Commenting it out in schema-430to440.sql fixed it the update. Not
>really
>> > sure if this would brake anything in other conditions.
>> >
>> > It is not really a problem of 4.5.1, so not vote against it.
>> >
>> > Yours
>> > René
>> >
>> > On 04.05.2015 13:20, Rohit Yadav wrote:
>> > > Hi All,
>> > >
>> > > I've created a 4.5.1 release, with the following artifacts up for
>a
>> vote:
>> > >
>> > > Git Branch and Commit SH:
>> > >
>> >
>>
>https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.5
>> > > Commit: 0eb4eb23701f0c6fec8bd5461cd9aa9f92c9576d
>> > >
>> > > List of changes:
>> > >
>> >
>>
>https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES.md;hb=4.5
>> > > https://github.com/apache/cloudstack/commits/4.5-RC20150504T1217
>> > >
>> > > Source release (checksums and signatures are available at the
>same
>> > > location):
>> > > https://dist.apache.org/repos/dist/dev/cloudstack/4.5.1/
>> > >
>> > > PGP release keys (signed using 0EE3D884):
>> > > https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>> > >
>> > > Vote will be open for 72 hours.
>> > >
>> > > For sanity in tallying the vote, can PMC members please be sure
>to
>> > > indicate "(binding)" with their vote?
>> > >
>> > > [ ] +1  approve
>> > > [ ] +0  no opinion
>> > > [ ] -1  disapprove (and reason why)
>> > >
>> > > For convenience of testing, you may use the following
>repositories and
>> > > location to download systemvm templates:
>> > >
>> > > http://packages.shapeblue.com/cloudstack/testing/
>> > > http://packages.shapeblue.com/systemvmtemplate/4.5/
>> > >
>> > > Regards.
>> > >
>> >
>> >
>>



Re: [VOTE] Apache Cloudstack 4.5.1

2015-05-05 Thread Rene Moser
Hi

I tested an upgrade from 4.2.1 to 4.5.1, which failed because of two
identical ALTER TABLE statements for cloud_usage in schema-421to430.sql
and schema-430to440.sql:

https://github.com/apache/cloudstack/blob/4.5-RC20150504T1217/setup/db/db/schema-421to430.sql#L787

https://github.com/apache/cloudstack/blob/4.5-RC20150504T1217/setup/db/db/schema-430to440.sql#L464

Commenting it out in schema-430to440.sql fixed the update. I am not
really sure whether this would break anything under other conditions.

It is not really a problem of 4.5.1 itself, so I do not vote against it.

Yours
René
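
As a side note on the duplicate statements above: a hedged sketch of how
such an ALTER TABLE could be made idempotent by checking
information_schema first, so running the same migration twice cannot
fail. This is not how the CloudStack upgrade code works, and the
table/column names are illustrative:

    import pymysql  # assumption: MySQL, as used in the reported upgrade

    def column_exists(conn, schema, table, column):
        # True if the column is already present, i.e. the ALTER already ran.
        with conn.cursor() as cur:
            cur.execute(
                "SELECT 1 FROM information_schema.COLUMNS"
                " WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s"
                " AND COLUMN_NAME = %s",
                (schema, table, column),
            )
            return cur.fetchone() is not None

    def add_column_if_missing(conn, schema, table, column, ddl):
        # Apply the ALTER TABLE only once, no matter how often it is run.
        if not column_exists(conn, schema, table, column):
            with conn.cursor() as cur:
                cur.execute(ddl)
            conn.commit()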

On 04.05.2015 13:20, Rohit Yadav wrote:
> Hi All,
> 
> I've created a 4.5.1 release, with the following artifacts up for a vote:
> 
> Git Branch and Commit SH:
> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.5
> Commit: 0eb4eb23701f0c6fec8bd5461cd9aa9f92c9576d
> 
> List of changes:
> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES.md;hb=4.5
> https://github.com/apache/cloudstack/commits/4.5-RC20150504T1217
> 
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.5.1/
> 
> PGP release keys (signed using 0EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> Vote will be open for 72 hours.
> 
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> For convenience of testing, you may use the following repositories and
> location to download systemvm templates:
> 
> http://packages.shapeblue.com/cloudstack/testing/
> http://packages.shapeblue.com/systemvmtemplate/4.5/
> 
> Regards.
> 





Re: [VOTE] Apache CloudStack 4.5.0 RC4

2015-03-03 Thread Rene Moser


On 03.03.2015 10:41, Rohit Yadav wrote:

> I’m lazy :) — No reason actually, I’ll sign them once we’ve a release. I’ll 
> perhaps fix my build scripts soon to automatically sign them once they are 
> built.

Yes, me too :)

That would be great, so I can stay lazy and install signed Debian packages. ;)

Regards
René





Re: [VOTE] Apache CloudStack 4.5.0 RC4

2015-03-03 Thread Rene Moser
Hi Rohit

On 03.03.2015 08:27, Rohit Yadav wrote:
> If you're unable to build from source, you may use the following
> (unsigned) repository for testing Apache CloudStack 4.5.0 RC4:

Thanks!

I am curious: why do you not sign these packages? It makes life a bit
harder for automated testing if Debian packages are not signed.

Regards
René





API response change command=deleteIso

2015-01-13 Thread Rene Moser
Hi dev

I found an inconsistency, call it a typo, in the API response and would
like to make an API change [1]. I saw that there has been a similar
change in the past [2].

Thanks for your response.

Regards
René

[1] https://github.com/apache/cloudstack/pull/63
[2]
https://github.com/resmo/cloudstack/commit/e7134b82f90cfa9482cd08c3cf4633c64d621117





<    1   2