Re: new contributor guide

2019-03-22 Thread Marc-Aurèle Brothier
Hi Pierre-Luc,

You can have a look at those links for the install and development setup:
- https://github.com/apache/cloudstack/blob/master/INSTALL.md

- https://github.com/apache/cloudstack/blob/master/CONTRIBUTING.md

Those should get you started. If those aren’t enough and/or you get stuck, come 
back to the community so we can update those with the missing stuff.

A good first contribution candidate could be changes to those docs, since a
newbie to the project is best placed to spot the missing details.

Marco

> On 22 Mar 2019, at 13:48, Pierre-Luc Dion  wrote:
> 
> Hi team,
> 
> I was wondering if someone would have an idea of which doc a new
> contributor should look at when someone wants to start contributing to
> cloudstack? I haven't contributed code for some time and I'm lost, and
> while working with a university to help us on contributing to cloudstack I
> realise we have no easy doc for a new dev member.
> 
> things like,
> * how to setup a dev station, (devcloud? which version?)
> * how to contribute bug fix
> * how to contribute new feature.
> 
> If we have such info, sweet; otherwise, I'm willing to update this part.


Re: CII best practices & cncf

2019-03-21 Thread Marc-Aurèle Brothier
I thought I saw openstack in the list but I must be mixing it up with something 
else since I can’t find it there anymore.

> On 21 Mar 2019, at 13:31, Sven Vogel  wrote:
> 
> Hi Marco,
> 
> I think the first one is a good thing. As for the second one, CNCF, I
> thought it’s only for cloud-native projects, as the name implies?
> 
> Greetings
> 
> Sven
> 
> Sent from my iPhone
> 
> 
> __
> 
> Sven Vogel
> Teamlead Platform
> 
> EWERK RZ GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 11
> F +49 341 42649 - 18
> s.vo...@ewerk.com
> www.ewerk.com
> 
> Managing Directors:
> Dr. Erik Wende, Hendrik Schubert, Frank Richter, Gerhard Hoyer
> Commercial register: Leipzig HRB 17023
> 
> Certified to:
> ISO/IEC 27001:2013
> DIN EN ISO 9001:2015
> DIN ISO/IEC 2-1:2011
> 
> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
> 
> Information and offers sent by e-mail are non-binding and subject to change.
> 
> Disclaimer Privacy:
> 
> The contents of this e-mail (including any attachments) are confidential and 
> may be legally privileged. If you are not the intended recipient of this 
> e-mail, any disclosure, copying, distribution or use of its contents is 
> strictly prohibited, and you should please notify the sender immediately and 
> then delete it (including any attachments) from your system. Thank you.
>> Am 21.03.2019 um 13:14 schrieb Marc-Aurèle Brothier :
>> 
>> Hi all,
>> 
>> I recently found the Core Infrastructure Initiative website and saw other
>> projects listed there, but not CloudStack. So I thought it would be a good
>> thing to add this visibility. I'm in the process of filling out the form,
>> which you can find here:
>> 
>> https://bestpractices.coreinfrastructure.org/en/projects/2620
>> 
>> I hope you don’t mind that I started it without asking the community first.
>> 
>> A next step I see would be to get listed with the CNCF (cncf.io) to give
>> the project even more visibility.
>> 
>> Marco



CII best practices & cncf

2019-03-21 Thread Marc-Aurèle Brothier
Hi all,

I recently found the Core Infrastructure Initiative website and saw other
projects listed there, but not CloudStack. So I thought it would be a good
thing to add this visibility. I'm in the process of filling out the form,
which you can find here:

https://bestpractices.coreinfrastructure.org/en/projects/2620

I hope you don’t mind that I started it without asking the community first.

A next step I see would be to get listed with the CNCF (cncf.io) to give the
project even more visibility.

Marco

Re: [DISCUSS] Move to jdk11 and use jlink

2019-03-20 Thread Marc-Aurèle Brothier
Hi Rohit,

I think it’s a good move. In some recent testing I found incompatibilities
between JDK 11 and older versions of Spring. I don’t have the links at hand,
but you have to upgrade to a specific Spring version to make it work.

Marco

> On 20 Mar 2019, at 05:59, Rohit Yadav  wrote:
> 
> All,
> 
> JDK8 has reached EOL w.r.t. public updates from Oracle and JDK11 is the most
> recent LTS. Should we discuss and plan for the next release to move to JDK11?
> The effort may be as minimal as changing the JDK requirements in the Maven
> config files.
> 
> W.r.t. consumption on centos6 (was there an argument to drop centos6
> support?), centos7 and ubuntu 16.04+: with project Jigsaw and JDK11's jlink
> we can ship a stripped-down JRE along with the CloudStack artifacts. This
> would mean we may no longer depend on a distribution-provided JRE; much like
> shipping a single uberjar, we can bundle a stripped JRE in it as well,
> including for the usage server, KVM agent and systemvm agents. This can be
> beneficial for the project's control w.r.t. security, and CloudStack can
> still run on platforms that don't have openjdk11 packages. The effort here I
> guess would be to create build assemblies or use a maven plugin to export
> such a bundle. Thoughts?
> 
> Regards,
> Rohit Yadav
> 
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
> 
> 
> 



Re: SSL offload in the VR

2018-11-20 Thread Marc-Aurèle Brothier
Hi Paul,

What made you think so? I haven’t pushed any change related to the VR. At
Exoscale, we instead removed the VR entirely and moved the services down to
the hypervisor level.

Kind regards,
Marc-Aurèle

> On 20 Nov 2018, at 09:04, Paul Angus  wrote:
> 
> Hi Marc-Aurèle,
>  
> As far as I can tell, I believe that you added the ability to upload an SSL
> certificate to the VR in 4.11.
> I can’t find any user documentation for it in our wiki or in read-the-docs.
> I guess it’s something that you use at Exoscale. Could you do a pull request
> against cloudstack-documentation or forward me some documentation which I
> can then add to the user documentation?
>  
>  
> Kind regards,
>  
> Paul Angus
>  
> paul.an...@shapeblue.com 
> www.shapeblue.com
> @shapeblue
>   
> 
>   


API rate limiting

2018-11-01 Thread Marc-Aurèle Brothier
Hi,

Are people using the API rate limiting feature?

I'm working on removing the ehcache and was wondering if I need to find an 
alternate solution for that plugin or if it could be removed too.

Marc-Aurèle




Re: Caching / Ehcache

2018-11-01 Thread Marc-Aurèle Brothier
I wasn't in Montreal, so I wasn't aware of the will to change the JPA layer.
That's definitely a good thing.

> On 30 Oct 2018, at 17:05, Rafael Weingärtner  
> wrote:
> 
> Hello Marc,
> 
> It is sad to hear that you might not be as active as you used to be. However,
> I do wish you success in your new path.
> 
> Now, regarding the ehCache: +1 to remove it. Were you in Montreal? This
> year in Montreal, one of the things we discussed was moving away from this
> ad-hoc JPA solution that we have to a standardized and proven one (e.g.
> Spring Data + some commonly used JPA implementation such as EclipseLink,
> OpenJPA, and others). As a first step towards that, we could disable and
> remove the use of ehCache.
> 
> 
> 
> P.S. Sorry for the late reply, but I have been busy with other topics not
> related to ACS.
> 
> On Sun, Oct 28, 2018 at 2:17 PM Daan Hoogland 
> wrote:
> 
>> Sorry to hear you are reducing your activity on CloudStack, Marc-Aurele.
>> Hope you fare well.
>> 
>> On Sat, Oct 27, 2018 at 10:10 PM Marc-Aurèle Brothier 
>> wrote:
>> 
>>> Hi everyone,
>>> 
>>> (Again as the email formatting has been removed and was hard to read -
>>> I hope it will be better this time).
>>> 
>>> While trying to lower the DB load for CloudStack I did some long
>>> testing and here are my outcomes for the current cache mechanism in
>>> CloudStack.
>>> 
>>> I would be interested to hear from people who try to customize the
>>> ehcache configuration in CS. A PR (
>>> https://github.com/apache/cloudstack/pull/2913) is also open to
>>> deactivate (before deleting) ehcache in CS, read below to understand
>>> why.
>>> 
>>> 
>>> # Problems
>>> 
>>> The code in CS does not fit well with any caching mechanism, especially
>>> due to the homemade DAO code. The three main flaws are the following:
>>> 
>>> ## Entities are not expected to be shared
>>> 
>>> There is quite a lot of code where method calls pass entity IDs as long
>>> values and then fetch the corresponding objects. Without caching, this
>>> behavior creates distinct objects each time an entity with the same
>>> ID is fetched. With the cache enabled, the same object is shared
>>> among those methods. This has been seen to generate side
>>> effects where code still expects unchanged entity attributes after
>>> calling different methods, thus generating exceptions/bugs.
>>> 
>>> ## DAO update operations are using search queries
>>> 
>>> Some parts of the code update entities based on a search query,
>>> therefore the whole cache must be invalidated (see GenericDaoBase:
>>> public int update(UpdateBuilder ub, final SearchCriteria sc, Integer
>>> rows);).
>>> 
>>> ## Entities based on views joining multiple tables
>>> 
>>> There are quite a lot of entities based on SQL views that join multiple
>>> entities into a single object. Enabling caching on those would require a
>>> mechanism to link and cross-remove related objects whenever one of the
>>> sub-entities changes.
>>> 
>>> 
>>> # Final word
>>> 
>>> Based on the points discussed above, the best approach IMHO would
>>> be to move away from the custom DAO framework in CS and use a well-known
>>> one. It would handle caching properly, as well as the joins made by the
>>> views in the code. It's not an easy change, but along the way it would
>>> fix a lot of issues and add a proven, robust framework to an important
>>> part of the code.
>>> 
>>> The work to change the DAO layer is a huge task; I don't know how / by
>>> whom it will be performed.
>>> 
>>> What are the proposals for a new DAO framework?
>>> 
>>> 
>>> FYI I will stop working for Exoscale at the end of the month, so I
>>> won't be able to tackle such challenge as I won't be working with CS
>>> anymore. I'll try my best to continue looking at the project to give my
>>> insights and share the experience I have with CS.
>>> 
>>> 
>>> Marc-Aurèle
>>> 
>>> 
>> 
>> --
>> Daan
>> 
> 
> 
> --
> Rafael Weingärtner





Caching / Ehcache

2018-10-27 Thread Marc-Aurèle Brothier
Hi everyone,

(Sending this again, as the email formatting had been stripped and it was
hard to read - I hope it will be better this time.)

While trying to lower the DB load for CloudStack I did some long-running
testing, and here are my findings on the current cache mechanism in
CloudStack.

I would be interested to hear from people who have tried to customize the
ehcache configuration in CS. A PR (
https://github.com/apache/cloudstack/pull/2913) is also open to
deactivate (before deleting) ehcache in CS; read below to understand
why.


# Problems

The code in CS does not fit well with any caching mechanism, especially
due to the homemade DAO code. The three main flaws are the following:

## Entities are not expected to be shared

There is quite a lot of code where method calls pass entity IDs as long
values and then fetch the corresponding objects. Without caching, this
behavior creates distinct objects each time an entity with the same
ID is fetched. With the cache enabled, the same object is shared
among those methods. This has been seen to generate side
effects where code still expects unchanged entity attributes after
calling different methods, thus generating exceptions/bugs.

## DAO update operations are using search queries

Some parts of the code update entities based on a search query,
therefore the whole cache must be invalidated (see GenericDaoBase:
public int update(UpdateBuilder ub, final SearchCriteria sc, Integer
rows);).

## Entities based on views joining multiple tables

There are quite a lot of entities based on SQL views that join multiple
entities into a single object. Enabling caching on those would require a
mechanism to link and cross-remove related objects whenever one of the
sub-entities changes.


# Final word

Based on the points discussed above, the best approach IMHO would
be to move away from the custom DAO framework in CS and use a well-known
one. It would handle caching properly, as well as the joins made by the
views in the code. It's not an easy change, but along the way it would fix
a lot of issues and add a proven, robust framework to an important part of
the code.

The work to change the DAO layer is a huge task; I don't know how / by whom
it will be performed.

What are the proposals for a new DAO framework?
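
To make this concrete, here is a minimal sketch of what such a repository
could look like with Spring Data JPA and a second-level cache. This is only
an illustration of the kind of framework I mean - the entity, its fields and
the repository are hypothetical, not existing CloudStack classes, and the two
types would live in separate files:

// Illustration only: hypothetical entity, not an existing CloudStack class.
import java.util.List;
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
@Cacheable // let the JPA provider's second-level cache manage this entity
public class HostRecord {
    @Id
    private Long id;
    private String name;
    private Long clusterId;
    // getters/setters omitted for brevity
}

// CRUD, paging and derived finder queries come for free from the framework,
// and cache consistency on save/delete is the JPA provider's job, not ours.
public interface HostRecordRepository extends JpaRepository<HostRecord, Long> {
    List<HostRecord> findByClusterId(Long clusterId);
}

The point is that entity lifecycle and cache coherency become the framework's
problem instead of being re-implemented in our DAO layer.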


FYI, I will stop working for Exoscale at the end of the month, so I
won't be able to tackle such a challenge as I won't be working with CS
anymore. I'll try my best to keep following the project to give my
insights and share the experience I have with CS.


Marc-Aurèle



Caching - Ehcache

2018-10-23 Thread Marc-Aurèle Brothier
Hi everyone,

While trying to lower the DB load for CloudStack I did some long-running
testing, and here are my findings on the current cache mechanism in
CloudStack.

I would be interested to hear from people who have tried to customize the
ehcache configuration in CS.
A PR (https://github.com/apache/cloudstack/pull/2913) is also open to
deactivate (before deleting) ehcache in CS; read below to understand why.

# Problems

The code in CS does not fit well with any caching mechanism, especially due
to the homemade DAO code. The three main flaws are the following:

## Entities are not expected to be shared

There is quite a lot of code where method calls pass entity IDs as long
values and then fetch the corresponding objects. Without caching, this
behavior creates distinct objects each time an entity with the same ID is
fetched. With the cache enabled, the same object is shared among those
methods. This has been seen to generate side effects where code still expects
unchanged entity attributes after calling different methods, thus generating
exceptions/bugs.

## DAO update operations are using search queries

Some parts of the code update entities based on a search query, therefore the
whole cache must be invalidated (see GenericDaoBase: public int
update(UpdateBuilder ub, final SearchCriteria sc, Integer rows);).

## Entities based on views joining multiple tables

There are quite a lot of entities based on SQL views that join multiple
entities into a single object. Enabling caching on those would require a
mechanism to link and cross-remove related objects whenever one of the
sub-entities changes.

# Final word

Based on the points discussed above, the best approach IMHO would be to move
away from the custom DAO framework in CS and use a well-known one. It would
handle caching properly, as well as the joins made by the views in the code.
It's not an easy change, but along the way it would fix a lot of issues and
add a proven, robust framework to an important part of the code.

The work to change the DAO layer is a huge task; I don't know how / by whom
it will be performed.

What are the proposals for a new DAO framework?

FYI, I will stop working for Exoscale at the end of the month, so I won't be
able to tackle such a challenge as I won't be working with CS anymore. I'll
try my best to keep following the project to give my insights and share the
experience I have with CS.

Marc-Aurèle


Re: Github Issues

2018-07-17 Thread Marc-Aurèle Brothier
Hi Paul,

My 2 cents on the topic.

people are commenting on issues when it should be on the PR and vice-versa
>

I think this is simply due to the fact that with one login you can do both,
whereas before you had to have a JIRA login, which people might have tried to
avoid, preferring to use github directly and keeping the conversation
only on the PR. Most of the issues in Jira didn't have any conversation
at all.

But I also feel the pain of searching the issues on github, as it's more
free-form than a Jira system. At the same time it's easier and quicker to
navigate, which eases the pain a bit ;-)
I would say that the current set of labels isn't organized well enough to
allow searching like in Jira, but it could be. For example, each label could
carry a prefix describing the Jira attribute type (component, version, ...).
Then a bot scanning the issue content could set some of them, as other open
source projects are doing. The downside here is that you might end up with
too many labels. Maybe @resmo can give his point of view on how things are
managed in Ansible (https://github.com/ansible/ansible/pulls - lots of
labels, lots of issues and PRs). I don't know if that's the solution, but
labels seem to be the only way to organize things.

Marc-Aurèle

On Tue, Jul 17, 2018 at 10:53 AM, Paul Angus 
wrote:

> Hi All,
>
> We have been trialling replacing Jira with Github Issues.   I think that
> we should have a conversation about it before it becomes the new standard by
> default.
>
> From my perspective, I don't like it.  Searching has become far more
> difficult, and so has categorising. When there is a bug fix it can only be
> targeted for a single version, which makes them easy to lose track of, and
> when looking at milestones, issues and PRs get jumbled up, and people are
> commenting on issues when it should be on the PR and vice-versa (yes I've
> done it too).
> In summary, from an administrative point of view it causes a lot more
> problems than it solves.
>
> I yield the floor to hear other people's opinions...
>
>
> Kind regards,
>
> Paul Angus
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt server) disconnection?

2018-05-16 Thread Marc-Aurèle Brothier
Hi Suresh,

As long as the TCP link isn't closed, you can have network hiccups without
any issue. If the link is closed, the event is propagated on both the
management server and the agent side, and there isn't much that can be done
to address this easily with the current code base.

Marc-Aurèle

On Wed, May 16, 2018 at 1:25 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
wrote:

> Hi Suresh,
>
>
> As explained earlier and advised to look at code on the PR, perhaps you
> did not get time so have a look here:
>
> https://github.com/apache/cloudstack/blob/4.11/agent/src/com/cloud/agent/Agent.java#L488
>
>
> The reconnect() historically sets the link to null. Therefore, any answer
> from pending tasks end up failing here:
>
> https://github.com/apache/cloudstack/blob/4.11/agent/src/com/cloud/agent/Agent.java#L868
>
> and,
>
> https://github.com/apache/cloudstack/blob/4.11/agent/src/com/cloud/agent/Agent.java#L893
>
>
> Do note that reconnect() only cancels watch tasks but does not
> cancel/shutdown any running task. Also, in case of a network error, the mgmt
> server will fail at the thread/context where it has done an agent.send() and
> is expecting an answer.
>
>
> You can also perform a small test by putting a while loop or sleep around
> this code to see how getLink().send() behaves when the agent reconnects.
> When it does not reconnect, i.e. the agent is blocked waiting for pending
> tasks to complete, such tasks always fail.
>
>
> - Rohit
>
> <https://cloudstack.apache.org>
>
>
>
> 
> From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
> Sent: Wednesday, May 16, 2018 4:27:36 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt
> server) disconnection?
>
> Hi Rohit,
>
> When the Management Server and Agent are up and running and there is a
> network failure, I think it is better to wait for some time for the pending
> tasks to complete, instead of failing them and trying to reconnect. If the
> network delay is minimal, there can be a valid thread/context in the
> management server to handle the answers.
>
> It would be great if there are no major side-effects with this PR changes.
>
> Thanks,
> Suresh
>
> On Wed, May 16, 2018 at 3:40 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
> wrote:
>
> > All,
> >
> >
> > Based on testing against KVM, XenServer and VMware and this discussion,
> > I've merged the PR based on code reviews and tests. I investigated both
> > code-wise and against live environment for possible side-effects of
> letting
> > agent connect without being blocked on pending tasks and I found no new
> > fault behaviour.
> >
> >
> > If there are any objections or bugs, please share in which case we'll
> > revert the change to continue legacy/historic behaviour. Thanks.
> >
> >
> > - Rohit
> >
> > <https://cloudstack.apache.org>
> >
> >
> >
> > 
> > From: Rohit Yadav <rohit.ya...@shapeblue.com>
> > Sent: Tuesday, May 15, 2018 2:37:58 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt
> > server) disconnection?
> >
> > Hi Suresh,
> >
> >
> > I've replied to your comment on the PR. In addition, when (i) the
> > management server is restarted, any pending operation on the KVM/SSVM
> > agent side will fail to be communicated back in the correct
> > thread/context, and it depends on the specific feature whether it supports
> > a sync or cleanup mechanism; in most cases, the async/job timeout may kick
> > in or cause the queue/concurrent failure seen in logs. When (ii) the agent
> > is reconnected, it reconnects only after any pending job finishes;
> > therefore such jobs finish but fail to be communicated back to the mgmt
> > server (the answer instance fails to be sent on the link, as the link is
> > no longer valid and causes an exception).
> >
> >
> > - Rohit
> >
> > <https://cloudstack.apache.org>
> >
> >
> >
> > 
> > From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
> > Sent: Tuesday, May 15, 2018 12:06:14 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt
> > server) disconnection?
> >
> > Hi,
> >
> > @rhtyd, I checked the PR changes. Good that the agent is not waiting for
> > the pending jobs before retrying the connection to the management server. This might
> > have impact on s

Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt server) disconnection?

2018-05-14 Thread Marc-Aurèle Brothier
Correct about the thread context; so if the answer comes into a
management server that doesn't have the context and drops it, it should be
fine then. The PR is then already a good improvement, letting the agent
reconnect even when it's handling a long processing request, so it can keep
on completing other jobs too.

Regarding the restart/shutdown operation, yes, I now have to push the changes
that make it possible to stop some processing tasks (mainly fetching new
async jobs) on a management server to ensure a cleaner shutdown. My solution,
as said, is based on the content of a file that is compatible with HAProxy,
thus not the LB mechanism added recently in CS. It could be changed to an API
call to put a management server into, or take it out of, maintenance. The
listManagementServers API call has been merged and it was a requirement for
that.

About Zookeeper, it's not used for the rolling shutdown/restart for now. We
are using it as an efficient and true locking mechanism between multiple
management servers. We are slowly moving the lock code towards ZK and have
added one lock during the allocation phase to ensure no host would be
over-allocated. I will take this discussion to another email thread since I
have a few questions regarding ZK and also wish to talk about the connection
between the agent & management servers.

On Mon, May 14, 2018 at 2:39 PM, Rohit Yadav <rohit.ya...@shapeblue.com>
wrote:

> Thanks Marc and Rafael for replying.
>
>
> In my experimentation, when the agent disconnects it will wait for the
> pending jobs/tasks to complete, and on completion it creates an Answer
> instance and tries to send it using a `link` which no longer exists, and
> fails. This is
> current behaviour, on the mgmt server side the resource/task will be left
> hanging and may not be automatically marked failed right away (may be after
> the configured timeout). My best guess is that the application of the
> change should likely not have any side-effects, other than the
> exceptions/faults we already observe.
>
>
> In my test, the failed async job did not get retried and I hit the famous
> 'concurrency limit 1' issue. At this point, I had to manually clean up the
> snapshot row, the rows from sync_queue, sync_queue_item and async_job.  The
> current implementation we have on the agent side where mgmt server send a
> cmd and agent returns an answer after processing it -- we don't have the
> same for mgmt server where an agent sends a cmd's answer and mgmt server
> processes it irrespective of the context. Therefore, unless the answer
> receiving mgmt server is not in the right thread/context/state those
> answers are dropped.
>
>
> I think we need to solve for (1) claim and ownership management of a
> resource (how to manage when the owner/mgmt server shuts down or dies), (2)
> task handover - executing tasks (in-flight) when mgmt server is shutdown to
> other mgmt server, (3) central locking-service for this and other uses. The
> bigger change ties with the other things we've seen in the discussion
> around mgmt server restart/shutdown. Till the time we get to solving the
> bigger issue,  perhaps we can provide some API/visual/UI ways to show the
> root admin the async jobs in flight for a management server or alert him,
> perhaps an API to do cleaner mgmt server shutdown that waits for all
> pending async jobs on a mgmt server to complete and does not take any new
> async/job API requests (say like Jenkins does with jobs)?
>
>
> Marc - weren't you working on a zookeeper based rolling shutdown/restart?
> Did that handle some of the failure cases?
>
>
> - Rohit
>
> <https://cloudstack.apache.org>
>
>
>
> 
> From: Marc-Aurèle Brothier <ma...@exoscale.ch>
> Sent: Monday, May 14, 2018 4:06:56 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt
> server) disconnection?
>
> Hi,
>
> I'm also for a bigger change but this PR already moves forward to a better
> agent <-> management connection handling.
>
> @rhtyd did you test your PR manually by, for example, requesting a long
> snapshot operation and disconnecting the agent.
>
> I have one concern here: when an async job is taken from the DB by a
> management server (in a cluster configuration), the mgmgt ID is put in the
> row to tell which mgmt is managing the job. On disconnection from an agent,
> the event is propagated and the job is mark as failed in the database, and
> an error is return in the API for that command. Here we are only resolving
> the fact to let the agent reconnect quickly but I'm unsure of what will
> happen in the mgmt when the job response is received by a mgmt (which might
> be another one than the one registered in the job db row). I know it's here
> it's becomi

Re: [DISCUSS][ASK] Should agent wait for pending tasks on (mgmt server) disconnection?

2018-05-14 Thread Marc-Aurèle Brothier
Hi,

I'm also in favor of a bigger change, but this PR already moves us towards a
better agent <-> management connection handling.

@rhtyd did you test your PR manually by, for example, requesting a long
snapshot operation and disconnecting the agent?

I have one concern here: when an async job is taken from the DB by a
management server (in a cluster configuration), the mgmt ID is put in the row
to tell which mgmt server is managing the job. On disconnection from an
agent, the event is propagated and the job is marked as failed in the
database, and an error is returned in the API for that command. Here we are
only resolving the fact of letting the agent reconnect quickly, but I'm
unsure of what will happen in the mgmt server when the job response is
received by a mgmt server (which might be another one than the one registered
in the job DB row). I know that this is where it becomes complicated, because
one async job might be only one part of a bigger scenario for a command (like
a live migration). I just want to ensure it won't propagate further
inconsistency.

Marco

On Sat, May 12, 2018 at 7:26 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Would prefer “A bigger design fix would be to make management server
> asynchronous of agent side answer/response handling”. However, I understand
> the volume of changes that requires.
>
> I looked at the PR, and I think that everything is ok there. Of course, I
> think we might need some more time to review and think about the possible
> outcomes of such changes.
>
> On Fri, May 11, 2018 at 7:55 AM, Rohit Yadav 
> wrote:
>
> > All,
> >
> >
> > Historically, when the agent (kvm, ssvm, cpvm) is disconnected from the
> > management server (say due to mgmt server restart etc), the reconnection
> > logic waits for any pending tasks/commands to complete before
> reconnection
> > attempts are made. I tried to search git history but could not find a
> > reason, can anyone share why we may need this?
> >
> >
> > Based on the reported issue:
> >
> > https://github.com/apache/cloudstack/issues/2633
> >
> >
> > I've a working patch which removes this limitation:
> >
> > https://github.com/apache/cloudstack/pull/2638
> >
> >
> > From testing with various combinations of tasks, I found that when that
> > happens even if the pending task succeeds it fails to send an Answer to
> the
> > mgmt server, therefore from the control plane's perspective that task is
> > still pending/on-going.
> >
> >
> > When the mgmt server comes back online, and the agent finally reconnects
> > (pending on how long the pending task took) the executed operation is
> still
> > pending in mgmt server's view and may sometimes require manual cleanups
> in
> > database. By removing the limitation in above PR, at least the agent
> > reconnects faster while of the failure/fault behaviours remain the same.
> A
> > bigger design fix would be to make management server asynchronous of
> agent
> > side answer/response handling.
> >
> >
> > - Rohit
> >
> > 
> >
> >
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
>
>
> --
> Rafael Weingärtner
>


Re: John Kinsella and Wido den Hollander now ASF members

2018-05-03 Thread Marc-Aurèle Brothier
Congratulations to both of you!

On Thu, May 3, 2018 at 10:08 AM, Rohit Yadav 
wrote:

> Congratulations John and Wido.
>
>
>
> - Rohit
>
> 
>
>
>
> 
> From: David Nalley 
> Sent: Wednesday, May 2, 2018 9:27:37 PM
> To: dev@cloudstack.apache.org; priv...@cloudstack.apache.org
> Subject: John Kinsella and Wido den Hollander now ASF members
>
> Hi folks,
>
> As noted in the press release[1] John Kinsella and Wido den Hollander
> have been elected to the ASF's membership.
>
> Members are the 'shareholders' of the foundation, elect the board of
> directors, and help guide the future of the ASF.
>
> Congrats to both of you, very well deserved.
>
> --David
>
> [1] https://s.apache.org/ysxx
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Default API response type: XML -> JSON

2018-04-24 Thread Marc-Aurèle Brothier
@rafael - I think it's overkill to have this as a configuration option. We
should have one default response type, or maybe no default at all and require
clients to specify the response type they want to receive.

On Mon, Apr 23, 2018 at 3:39 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I do think it is an interesting proposal. I have been thinking, and what if
> we do something different; what about a global parameter where the root
> admin can define the default serialization mechanism (XML, JSON, RDF,
> others...)? The default value could be XML to maintain backward
> compatibility. Then, it is up to the root admin to define this behavior.
>
>
> On Mon, Apr 23, 2018 at 10:34 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > Hi everyone,
> >
> > I thought it would be good to move from XML to JSON by default in the
> > response of the API if no response type is sent to the server along with
> > the request. I'm wondering what the opinion of people on the mailing
> > list is.
> >
> > Moreover, if anyone knows of tools working with the API in XML, can you
> > list them, so I can check the code and see if the change can be done
> > without breaking them.
> >
> > PR to change default response type: (
> > https://github.com/apache/cloudstack/pull/2593).
> > If this change would cause more trouble, or is not needed in your
> opinion,
> > I don't mind to close the PR.
> >
> > Kind regards,
> > Marc-Aurèle
> >
>
>
>
> --
> Rafael Weingärtner
>


Default API response type: XML -> JSON

2018-04-23 Thread Marc-Aurèle Brothier
Hi everyone,

I thought it would be good to move from XML to JSON as the default API
response format when no response type is sent to the server along with the
request. I'm wondering what the opinion of people on the mailing list is.
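
For reference, whatever the default ends up being, a client can always pin
the response type explicitly on each call with the response parameter; for
example (host and credentials are placeholders):

http://mgmt-server:8080/client/api?command=listVirtualMachines&response=json&apiKey=...&signature=...

So any tool that depends on XML could keep working by adding response=xml to
its requests.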

Moreover, if anyone knows of tools working with the API in XML, can you list
them, so I can check the code and see if the change can be done without
breaking them.

PR to change default response type: (
https://github.com/apache/cloudstack/pull/2593).
If this change would cause more trouble, or is not needed in your opinion,
I don't mind closing the PR.

Kind regards,
Marc-Aurèle


Re: [DISCUSS] CloudStack graceful shutdown

2018-04-18 Thread Marc-Aurèle Brothier
ait
> >> until they expire and ACS expunges them. Of course, it is possible to
> >> develop a forceful maintenance method as well. Then, when the “prepare
> for
> >> maintenance” takes longer than a parameter, we could kill hanging jobs.
> >>
> >> All of this would allow the MS to be kept up and receiving requests
> until
> >> it can be safely shutdown. What do you guys about this approach?
> >>
> >> On Tue, Apr 10, 2018 at 6:52 PM, Yiping Zhang <yzh...@marketo.com>
> wrote:
> >>
> >> As a cloud admin, I would love to have this feature.
> >>>
> >>> It so happens that I just accidentally restarted my ACS management
> server
> >>> while two instances are migrating to another Xen cluster (via storage
> >>> migration, not live migration).  As results, both instances
> >>> ends up with corrupted data disk which can't be reattached or migrated.
> >>>
> >>> Any feature which prevents this from happening would be great.  A low
> >>> hanging fruit is simply checking for
> >>> if there are any async jobs running, especially any kind of migration
> >>> jobs
> >>> or other known long running type of
> >>> jobs and warn the operator  so that he has a chance to abort server
> >>> shutdowns.
> >>>
> >>> Yiping
> >>>
> >>> On 4/5/18, 3:13 PM, "ilya musayev" <ilya.mailing.li...@gmail.com>
> >>> wrote:
> >>>
> >>>  Andrija
> >>>
> >>>  This is a tough scenario.
> >>>
> >>>  As an admin, they way i would have handled this situation, is to
> >>> advertise
> >>>  the upcoming outage and then take away specific API commands from
> a
> >>> user a
> >>>  day before - so he does not cause any long running async jobs.
> Once
> >>>  maintenance completes - enable the API commands back to the user.
> >>> However -
> >>>  i dont know who your user base is and if this would be an
> acceptable
> >>>  solution.
> >>>
> >>>  Perhaps also investigate what can be done to speed up your long
> >>> running
> >>>  tasks...
> >>>
> >>>  As a side node, we will be working on a feature that would allow
> >>> for a
> >>>  graceful termination of the process/job, meaning if agent noticed
> a
> >>>  disconnect or termination request - it will abort the command in
> >>> flight. We
> >>>  can also consider restarting this tasks again or what not - but it
> >>> would
> >>>  not be part of this enhancement.
> >>>
> >>>  Regards
> >>>  ilya
> >>>
> >>>  On Thu, Apr 5, 2018 at 6:47 AM, Andrija Panic <
> >>> andrija.pa...@gmail.com
> >>>  wrote:
> >>>
> >>>  > Hi Ilya,
> >>>  >
> >>>  > thanks for the feedback - but in "real world", you need to
> >>> "understand"
> >>>  > that 60min is next to useless timeout for some jobs (if I
> >>> understand
> >>> this
> >>>  > specific parameter correctly ?? - job is really canceled, not
> only
> >>> job
> >>>  > monitoring is canceled ???) -
> >>>  >
> >>>  > My value for the  "job.cancel.threshold.minutes" is 2880 minutes
> >>> (2
> >>> days?)
> >>>  >
> >>>  > I can tell you when you have CEPH/NFS (CEPH even "worse" case,
> >>> since
> >>> slower
> >>>  > read durign qemu-img convert process...) of 500GB, then imagine
> >>> snapshot
> >>>  > job will take many hours. Should I mention 1TB volumes (yes, we
> >>> had
> >>>  > client's like that...)
> >>>  > Than attaching 1TB volume, that was uploaded to ACS (lives
> >>> originally on
> >>>  > Secondary Storage, and takes time to be copied over to NFS/CEPH)
> >>> will take
> >>>  > up to few hours.
> >>>  > Then migrating 1TB volume from NFS to CEPH, or CEPH to NFS, also
> >>> takes
> >>>  > time...etc.
> >>>  >
> >>>  > I'm just giving you feedback as "u

Re: [DISCUSS] CloudStack graceful shutdown

2018-04-05 Thread Marc-Aurèle Brothier
Hi all,

Good point Ilya, but as stated by Sergey there are more things to consider
before being able to do a proper shutdown. I augmented the script I gave you
originally and changed some code in CS. What we're doing for our environment
is as follows:

1. The mgmt server looks for a change in the file /etc/lb-agent, which
contains keywords for HAProxy[2] (ready, maint), so that HAProxy can disable
the mgmt server on the keyword "maint", and the mgmt server stops a couple of
threads[1] so that it stops processing async jobs from the queue.
2. It looks for async jobs and waits until there are none, to ensure the
reconnect commands can be sent (if jobs are running, a reconnect will result
in a failed job since the result will never reach the management server - the
agent waits for the current job to be done before reconnecting, and discards
the result... room for improvement here!).
3. It issues a reconnectHost command to all the hosts connected to the mgmt
server so that they reconnect to another one; otherwise the mgmt server must
stay up, since it is used to forward commands to its agents.
4. When all agents are reconnected, we can shut down the management server
and perform the maintenance.

One issue remains for me: during the reconnect, the commands that arrive at
the same time should be kept in a queue until the agents have finished any
current jobs and have reconnected. Today, the small time window during which
the reconnect happens can lead to failed jobs due to the agent not being
connected at the right moment.

I could push a PR for the change that stops some processing threads based on
the content of a file. It's also possible to cancel the drain of the
management server by simply changing the content of the file back to "ready"
again, instead of "maint" [2].

[1] AsyncJobMgr-Heartbeat, CapacityChecker, StatsCollector
[2] HAProxy documentation on the agent check:
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#5.2-agent-check
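
As an illustration of how this fits together (server names, addresses and the
agent port are made up; only /etc/lb-agent is the file mentioned above), the
HAProxy side could look roughly like this, with the file content exposed on a
small TCP port, e.g. via socat:

backend cloudstack-mgmt
    balance roundrobin
    # agent-check reads "ready" or "maint" from TCP port 9777 on each server
    server mgmt1 10.0.0.11:8080 check agent-check agent-port 9777 agent-inter 5s
    server mgmt2 10.0.0.12:8080 check agent-check agent-port 9777 agent-inter 5s

# on each management server, serve the file content on that port:
#   socat TCP-LISTEN:9777,reuseaddr,fork SYSTEM:"cat /etc/lb-agent"
# draining:             echo maint > /etc/lb-agent
# cancelling the drain: echo ready > /etc/lb-agent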

Regarding your issue with the port blocking, I think it's fair to consider
that if you want to shut down your server at some point, you have to stop
serving (some) requests. Here the only way is to stop serving everything.
If the API had a REST design, we could reject any POST/PUT/DELETE operations
and allow the GET ones. I don't know how hard it would be today to only allow
listBaseCmd operations, to be more friendly to the users.

Marco


On Thu, Apr 5, 2018 at 2:22 AM, Sergey Levitskiy 
wrote:

> Now without spellchecking :)
>
> This is not simple e.g. for VMware. Each management server also acts as an
> agent proxy so tasks against a particular ESX host will be always
> forwarded. That right answer will be to support a native “maintenance mode”
> for management server. When entered to such mode the management server
> should release all agents including SSVM, block/redirect API calls and
> login request and finish all async job it originated.
>
>
>
> On Apr 4, 2018, at 5:15 PM, Sergey Levitskiy > wrote:
>
> This is not simple e.g. for VMware. Each management server also acts as an
> agent proxy so tasks against a particular ESX host will be always
> forwarded. That right answer will be to a native support for “maintenance
> mode” for management server. When entered to such mode the management
> server should release all agents including save, block/redirect API calls
> and login request and finish all a sync job it originated.
>
> Sent from my iPhone
>
> On Apr 4, 2018, at 3:31 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> Ilya, still regarding the management server that is being shut down issue;
> if other MSs/or maybe system VMs (I am not sure to know if they are able to
> do such tasks) can direct/redirect/send new jobs to this management server
> (the one being shut down), the process might never end because new tasks
> are always being created for the management server that we want to shut
> down. Is this scenario possible?
>
> That is why I mentioned blocking the port 8250 for the “graceful-shutdown”.
>
> If this scenario is not possible, then everything s fine.
>
>
> On Wed, Apr 4, 2018 at 7:14 PM, ilya musayev  >
> wrote:
>
> I'm thinking of using a configuration from "job.cancel.threshold.minutes" -
> it will be the longest
>
> "category": "Advanced",
>
> "description": "Time (in minutes) for async-jobs to be forcely
> cancelled if it has been in process for long",
>
> "name": "job.cancel.threshold.minutes",
>
> "value": "60"
>
>
>
>
> On Wed, Apr 4, 2018 at 1:36 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> Big +1 for this feature; I only have a few doubts.
>
> * Regarding the tasks/jobs that management servers (MSs) execute; do
> these
> tasks originate from requests that come to the MS, or is it possible that
> requests received by one management server 

Re: Welcoming Mike as the new Apache CloudStack VP

2018-03-26 Thread Marc-Aurèle Brothier
Thanks for the work Wido, and congrats Mike

On Tue, Mar 27, 2018 at 6:31 AM, Jayapal Uradi  wrote:

> Congratulations, Milke!
>
> -Jayapal
>
> > On Mar 27, 2018, at 2:38 AM, Nitin Kumar Maharana  accelerite.com> wrote:
> >
> > Congratulations, Mike!!
> >
> > -
> > Nitin
> > On 26-Mar-2018, at 7:41 PM, Wido den Hollander > wrote:
> >
> > Hi all,
> >
> > It's been a great pleasure working with the CloudStack project as the
> > ACS VP over the past year.
> >
> > A big thank you from my side for everybody involved with the project in
> > the last year.
> >
> > Hereby I would like to announce that Mike Tutkowski has been elected to
> > replace me as the Apache Cloudstack VP in our annual VP rotation.
> >
> > Mike has a long history with the project and I am are happy welcome him
> > as the new VP for CloudStack.
> >
> > Welcome Mike!
> >
> > Thanks,
> >
> > Wido
> >
>
>


Re: [VOTE] Move to Github issues

2018-03-26 Thread Marc-Aurèle Brothier
+1

On Mon, Mar 26, 2018 at 3:05 PM, Will Stevens  wrote:

> +1
>
> On Mon, Mar 26, 2018, 5:51 AM Nicolas Vazquez, <
> nicolas.vazq...@shapeblue.com> wrote:
>
> > +1
> >
> > 
> > From: Dag Sonstebo 
> > Sent: Monday, March 26, 2018 5:06:29 AM
> > To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> > Subject: Re: [VOTE] Move to Github issues
> >
> > +1
> >
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 26/03/2018, 07:33, "Rohit Yadav"  wrote:
> >
> > All,
> >
> > Based on the discussion last week [1], I would like to start a vote
> to
> > put
> > the proposal into effect:
> >
> > - Enable Github issues, wiki features in CloudStack repositories.
> > - Both user and developers can use Github issues for tracking issues.
> > - Developers can use #id references while fixing an existing/open
> > issue in
> > a PR [2]. PRs can be sent without requiring to open/create an issue.
> > - Use Github milestone to track both issues and pull requests
> towards a
> > CloudStack release, and generate release notes.
> > - Relax requirement for JIRA IDs, JIRA still to be used for
> historical
> > reference and security issues. Use of JIRA will be discouraged.
> > - The current requirement of two(+) non-author LGTMs will continue
> for
> > PR
> > acceptance. The two(+) PR non-authors can advise resolution to any
> > issue
> > that we've not already discussed/agreed upon.
> >
> > For sanity in tallying the vote, can PMC members please be sure to
> > indicate
> > "(binding)" with their vote?
> >
> > [ ] +1  approve
> > [ ] +0  no opinion
> > [ ] -1  disapprove (and reason why)
> >
> > Vote will be open for 120 hours. If the vote passes the following
> > actions
> > will be taken:
> > - Get Github features enabled from ASF INFRA
> > - Update CONTRIBUTING.md and other relevant cwiki pages.
> > - Update project website
> >
> > [1] https://markmail.org/message/llodbwsmzgx5hod6
> > [2]
> > https://blog.github.com/2013-05-14-closing-issues-via-pull-requests/
> >
> > Regards,
> > Rohit Yadav
> >
> >
> >
> > dag.sonst...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
> > nicolas.vazq...@shapeblue.com
> > www.shapeblue.com
> > ,
> > @shapeblue
> >
> >
> >
> >
>


Re: [DISCUSS] Relax strict requirement of JIRA ID for PRs

2018-03-13 Thread Marc-Aurèle Brothier
@rafael, you said: "they all required Jira tickets to track the discussion
and facilitate the management".

I can see the discussion happening in the PR on github, but the Jira ticket
by itself doesn't do much except duplicate the github discussion. Then it
comes down to "facilitate the management", which as far as I know only means
listing the changes for a release. But this can be achieved on github too.

As Daan mentioned, there are things that are not code related which should
still have a way of being tracked. But what's the difference between tracking
them as a Jira issue vs a Github issue (they can't be a PR)? Those are
exchanges of points of view with messages & links, with a final status, most
of the time without a strong link to a release number. If they do have one,
they can be added to a milestone.

So far I don't see anything done with Jira that cannot be achieved on Github.
It's just a matter of changing habits to simplify the workflow for newcomers
(and old-timers too ;-) ).

On Tue, Mar 13, 2018 at 1:02 PM, Daan Hoogland <daan.hoogl...@gmail.com>
wrote:

> Will, you are speaking my mind; any external registration tool should be
> based on the source. The only reason for having an external tool without
> relation to the code is to keep track of what is *not* (or not fully)
> implemented.
>
> On Tue, Mar 13, 2018 at 12:58 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > I meant a way of describing them (changes/proposals) further. Sometimes
> we
> > have commits only with title, and then the Jira ticket would be a way of
> > documenting that commit. I do prefer the idea of inserting the whole
> > description in the commit body though. [for me] it looks easier to work
> > directly with commits and PRs; as you said, we can generate release notes
> > based on commits directly [and issues on GH]. However, for that, we need
> to
> > fine-tune our workflow.
> >
> >
> > On Tue, Mar 13, 2018 at 8:40 AM, Will Stevens <wstev...@cloudops.com>
> > wrote:
> >
> > > I am +1 to relaxing the requirement of Jira ticket.
> > >
> > > Rafael, what do you mean when you say "Jira tickets are used to
> register
> > > changes"?
> > >
> > > I think ever since 4.9 the actual PRs included in the code are the
> source
> > > of truth for the changes in the actual code (at least from a release
> > notes
> > > perspective).  This is why the release notes can show changes that only
> > > have PRs and no Jira ticket.  At least my release notes generator is
> > built
> > > that way.  I think Rohit has built a similar release notes generator,
> so
> > I
> > > can't speak to his version...
> > >
> > > *Will Stevens*
> > > Chief Technology Officer
> > > c 514.826.0190
> > >
> > > <https://goo.gl/NYZ8KK>
> > >
> > > On Tue, Mar 13, 2018 at 6:42 AM, Rafael Weingärtner <
> > > rafaelweingart...@gmail.com> wrote:
> > >
> > > > Marc, yes Jira tickets are used to register changes. However, what
> > Rohit
> > > > and others (including me) are noticing is that there are certain
> types
> > of
> > > > changes (minor/bureaucracy) that do not require Jira tickets. The
> issue
> > > is
> > > > the wording “change”. What consist of a change that is worth
> mentioning
> > > in
> > > > the release notes? Everything we do in a branch is a change towards a
> > > > release, but not everything is useful for operators/administrators to
> > > see.
> > > >
> > > > I would say that to fix bugs, introduce new features, extend existing
> > > > features, introduce a major change in the code such as that standard
> > > maven
> > > > thing that you did, they all required Jira tickets to track the
> > > discussion
> > > > and facilitate the management. On the other side of the spectrum, we
> > have
> > > > things such as removing dead/unused code, opening a new version
> > (creating
> > > > the upgrade path that we still use for the DB), fix a description in
> an
> > > API
> > > > method, and so on. Moreover, the excessive use of Jira tickets leads
> to
> > > > hundreds of Jira tickets that we do not know that status of. We have
> > > quite
> > > > a big number of tickets opened that could be closed. This has been
> > worse;
> > > > we are improving as time goes by.
> > > >
> > > > I would say that to make this more transparent to others (especially
> > > &

Re: [DISCUSS] Relax strict requirement of JIRA ID for PRs

2018-03-13 Thread Marc-Aurèle Brothier
That's a good idea, because people are more and more used to only creating
PRs on github, and it would be helpful to be more explicit about the way we
work to push changes. I still think we should encourage the use of github
milestones as Rohit did with 4.11.0 (
https://github.com/apache/cloudstack/milestone/3?closed=1) to list the
changes in the release notes, with the help of labels to tag the PRs, instead
of relying on the Jira ticket (which requires another login).

As far as I can remember, the JIRA tickets are only used to list the changes
of a release, nothing else. Or am I missing something?

Marc-Aurèle

On Tue, Mar 13, 2018 at 9:57 AM, Rohit Yadav 
wrote:

> All,
>
>
> To make it easier for people to contribute changes and encourage
> PR/contributions, should we relax the requirement of a JIRA ticket for pull
> requests that solve trivial/minor issues such as doc bugs, build fixes etc?
> A JIRA ticket may still be necessary for new features and non-trivial
> bugfixes or changes.
>
>
> Another alternative could be to introduce a CONTRIBUTING.md [1] that
> explains the list of expected things to contributors when they send a PR
> (for example, a jira id, links to fs or other resources, a short summary
> and long description, test results etc).
>
>
> Thoughts?
>
>
> [1] https://help.github.com/articles/setting-guidelines-
> for-repository-contributors/
>
>
> - Rohit
>
> 
>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: I'd like to introduce you to Khosrow

2018-02-23 Thread Marc-Aurèle Brothier
Welcome Khosrow!

On Fri, Feb 23, 2018 at 3:21 AM, Syed Ahmed  wrote:

> Welcome Khosrow
> On Thu, Feb 22, 2018 at 8:09 PM Simon Weller 
> wrote:
>
> > Welcome Khosrow.
> >
> > Simon Weller/615-312-6068
> >
> > -Original Message-
> > From: Khosrow Moossavi [kmooss...@cloudops.com]
> > Received: Thursday, 22 Feb 2018, 7:00PM
> > To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
> > Subject: Re: I'd like to introduce you to Khosrow
> >
> > Thank you Pierre-Luc,
> > I'm super excited to be part of the community.
> >
> > On Feb 22, 2018 18:42, "Rafael Weingärtner"  >
> > wrote:
> >
> > > Welcome!
> > > Congratualations for the great job done so far...
> > >
> > > On Thu, Feb 22, 2018 at 8:40 PM, Pierre-Luc Dion 
> > > wrote:
> > >
> > > > Hi fellow colleagues,
> > > >
> > > > I might be a bit late with this email...
> > > >
> > > > I'd like to introduce Khosrow Moossavi, who recently join our team
> and
> > > his
> > > > focus is currently exclusively on dev for Cloudstack with cloud.ca.
> > > >
> > > > Our 2 current priorities are:
> > > > -fixing VRs,SVMs to run has HVM VMs in xenserver.
> > > > - redesign, or rewrite, the remote management vpn for vpc, poc in
> > > progress
> > > > for IKEv2...
> > > >
> > > >
> > > >
> > > > Some of you might have interact with him already.
> > > >
> > > >
> > > > Also, we are going to be more active for the upcomming 4.12 release.
> > > >
> > > >
> > > > Cheers!
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>


Re: SystemVM documentation

2018-02-16 Thread Marc-Aurèle Brothier
There's a readme in the directory for the systemVM:
https://github.com/apache/cloudstack/tree/master/tools/appliance

On Fri, Feb 16, 2018 at 2:11 PM, Raja Pullela 
wrote:

> Hi Rohit,
>
> Can you please point to the documentation you may have for creating the
> 4.11 SystemVMs ?
>
> Best,
> Raja
>
>


Re: Experimental - Direct download for templates

2018-02-07 Thread Marc-Aurèle Brothier
That's a great feature I think, thanks Nicolas for pushing it!

Marc-Aurèle

On Wed, Feb 7, 2018 at 3:19 PM, Nicolas Vazquez <
nicolas.vazq...@shapeblue.com> wrote:

> Hi all,
>
>
> A feature has been introduced on 4.11.0 allowing to register templates
> bypassing secondary storage using a new option 'Direct Download'. It allows
> templates to be directly downloaded into primary storage at VM deployment
> time. It is an experimental feature and it is currently supported on KVM
> hypervisor only. PR: https://github.com/apache/cloudstack/pull/2379
>
>
> A brief description on the current implementation:
>
> - CloudStack allows registering Direct Download/Bypass Secondary Storage
> templates for KVM hypervisor by setting the direct_download flag to true on
> registerTemplate.
>
> - Templates are not downloaded to secondary storage after they are
> registered on CloudStack, are marked as Bypass Secondary Storage and as
> Ready for deployment.
>
> - When Bypassed templates are selected for VM deployment, download is
> delegated to the agents, which would store the templates on primary storage
> instead of copying them from secondary storage
>
> - Metalinks are supported, but aria2 dependency has to be manually
> installed on the agents.
>
>
> There are currently some PRs in progress for 4.11.1 with some improvements
> for this functionality.
>
>
> Any comments/ideas?
>
>
> Thanks,
>
> Nicolas
>
> nicolas.vazq...@shapeblue.com
> www.shapeblue.com
> ,
> @shapeblue
>
>
>
>


Maven standard: PR to rebase & where is gone the file history?

2018-01-22 Thread Marc-Aurèle Brothier
Hi everyone,

The PR which moved all the files to follow the Maven standard directory
structure has been merged on master; therefore all files are now located at:
- src/main/java for the java code
- src/main/resources for the resources
- src/main/webapp for the webapp
- src/main/scripts for any scripts
- src/test/java for the java test code
- src/test/resources for test resources
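
As an illustration (the class here is just an example), a file that used to
live at server/src/com/cloud/vm/UserVmManagerImpl.java now lives at
server/src/main/java/com/cloud/vm/UserVmManagerImpl.java.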


*Github file history*
For those who are used to looking at the file history on GitHub, you will
have the impression that it's gone due to the move. It's a limitation of
GitHub, not of the repository. From the command line, you simply have to add
the --follow parameter to have the file history follow the move; GitHub
hasn't implemented it in its web view. Most IDEs will add --follow to your
file history automatically.
git log --follow <file>

If you wish to have --follow implemented on GitHub, you should send them a
ticket so they add some +1 on their features todo list.

*PR to rebase*
Due to the file move, most of the open PRs need a rebase. It should be
straightforward since git will naturally move the changes to the new file
locations. If you have created a new file, you will have to move it to the
correct directory yourself. After pulling master, to perform the rebase,
given that XX is the number of commits in your PR, you can run:
git rebase --onto master HEAD~XX 
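
As a concrete example (the branch name is made up), for a PR branch carrying
3 commits, after updating your local master:
git checkout my-feature-branch
git rebase --onto master HEAD~3
git push --force-with-lease origin my-feature-branch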

Sorry for that, but there was no way around to make it smoother.

Cheers,
Marc-Aurèle


[cloud-init] change on how DHCP leases are scanned to fetch VR address

2018-01-16 Thread Marc-Aurèle Brothier
Hi all,

I pushed a PR for cloud-init to change the way the DHCP leases are scanned
to retrieve the VR IP address. Currently it scans the lease files in reverse
time order (newest file first) to get the VR address and, if that address
does not work, it falls back on the default gateway address.

My change is to scan the DHCP leases in alphabetical order of the interface
names, retrieving all potential addresses from those files and iterating
over them in the same order to find the VR address, with the default gateway
address as the last entry.
This overcomes an issue when additional interfaces are added to the VM which
are also configured by a DHCP server that is not part of CloudStack. In the
current situation, cloud-init would try to use the address from that newest
interface.
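
To illustrate the ordering only (the paths and commands below are just an
example, not the actual cloud-init code), the idea is equivalent to:
# lease files are visited by interface name, not by modification time
for lease in $(ls /var/lib/dhcp/dhclient.*.leases | sort); do
    grep 'dhcp-server-identifier' "$lease"
done
so the lease of eth0 is considered before the one of eth1, whatever their
timestamps are.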

Does that sound like a good change to everyone? Or would it break, or slow
down, your current VM boot process?

https://code.launchpad.net/~ma-brothier/cloud-init/+git/cloud-init/+merge/336145

Cheers,
Marc-Aurèle


Re: [NOTICE] Master can accept PRs now

2018-01-15 Thread Marc-Aurèle Brothier
Hi!
Ok, as my PR for the maven standard structure does a lot of file moves, I'm
putting it as a first PR to accept if people agree to:
https://github.com/apache/cloudstack/pull/2283

I still have to rebase and fix the conflict, which will be done shortly.

On Mon, Jan 15, 2018 at 1:19 PM, Rohit Yadav 
wrote:

> All,
>
>
> I've updated master to 4.12.0.0-SNAPSHOT, it may accept feature PRs now.
> They should use the 4.12 milestone: https://github.com/apache/
> cloudstack/milestone/4
>
>
> If you've sent a bugfix (and acceptable enhancement [1]) PR, kindly change
> the base branch to 4.11 and milestone to 4.11.1:
> https://github.com/apache/cloudstack/milestone/5
>
>
> Please find time to test and vote on the 4.11.0.0-rc1 and discuss/share
> any issues especially blockers in the RC1 voting thread.
>
>
> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS
>
>
> - Rohit
>
> 
>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: How to achieve SUM (N * M) with ACS DAO

2018-01-10 Thread Marc-Aurèle Brothier
Oops, my bad, very bad actually. I had in mind that you had the same speed
(MHz) everywhere.

You can always run a raw SQL query; there are a lot of them in CapacityDao
due to the limitation on mixing functions with GROUP BY.
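
For reference, a rough sketch of what such a raw query could look like in a
DAO (the table/column names and helper methods are from memory, so treat it
as illustrative only and check CapacityDaoImpl for the real pattern):

// inside a GenericDaoBase subclass; needs java.sql.*,
// com.cloud.utils.db.TransactionLegacy and
// com.cloud.utils.exception.CloudRuntimeException imports
private static final String SUM_CORES_TIMES_SPEED =
        "SELECT SUM(cpus * speed) FROM host WHERE cluster_id = ?";

public long sumCoresTimesSpeed(long clusterId) {
    TransactionLegacy txn = TransactionLegacy.currentTxn();
    try {
        PreparedStatement pstmt = txn.prepareAutoCloseStatement(SUM_CORES_TIMES_SPEED);
        pstmt.setLong(1, clusterId);
        ResultSet rs = pstmt.executeQuery();
        if (rs.next()) {
            return rs.getLong(1);
        }
    } catch (SQLException e) {
        throw new CloudRuntimeException("Unable to compute SUM(cpus * speed)", e);
    }
    return 0;
}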

On Wed, Jan 10, 2018 at 1:45 PM, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
wrote:

> Hmmm, I don't think the math works that way, multiplication and sum are not
> same order operations. Right now the only way I found is to fetch all rows
> with required fields from db and do computation in the code, but it doesn't
> look efficient though.
>
> 10 янв. 2018 г. 19:41 пользователь "Marc-Aurèle Brothier" <
> ma...@exoscale.ch>
> написал:
>
> Hmm, yeah that's what I realized after sending the email. With the DAO, you
> can only apply function on a single column / entity attribute, therefore it
> won't be possible as you want. Another way to achieve this would be to do
> SELECT SUM(A), SUM(B) FROM T with the DAO, and on the Java side do the
> multiplication. Would that work for your use-case?
>
>
> On Wed, Jan 10, 2018 at 1:27 PM, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> wrote:
>
> > Hello, Marc-Aurele, thank you for the snippet, but it looks like the sum
> > usage everywhere in the codebase. It's aggregation over single field.
> What
> > I need is to pass multiplication of two fields (cores x speed) inside the
> > sum, but I don't understand how to do it with dao... Thank you.
> >
> > 10 янв. 2018 г. 19:12 пользователь "Marc-Aurèle Brothier" <
> > ma...@exoscale.ch>
> > написал:
> >
> > > Have a look here:
> > > https://github.com/apache/cloudstack/blob/master/engine/
> > > schema/src/com/cloud/storage/dao/VolumeDaoImpl.java#L347
> > >
> > > Or you can search other example in the code base with this string
> > > "Func.SUM": thttps://
> > > github.com/apache/cloudstack/blob/master/engine/schema/src/
> > > com/cloud/storage/dao/VolumeDaoImpl.java#L347
> > >
> > > On Wed, Jan 10, 2018 at 9:42 AM, Ivan Kudryavtsev <
> > > kudryavtsev...@bw-sw.com>
> > > wrote:
> > >
> > > > Hello, colleagues, please could anyone guide me how to implement
> > > > aggregation SUM over two fields multiplication? Many thanks in
> advance.
> > > >
> > > > I'm trying to achieve:
> > > > SELECT SUM(A * B) FROM T with DAO
> > > >
> > > >
> > > >
> > > > --
> > > > With best regards, Ivan Kudryavtsev
> > > > Bitworks Software, Ltd.
> > > > Cell: +7-923-414-1515
> > > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > > >
> > >
> >
>


Re: How to achieve SUM (N * M) with ACS DAO

2018-01-10 Thread Marc-Aurèle Brothier
Hmm, yeah that's what I realized after sending the email. With the DAO, you
can only apply a function on a single column / entity attribute, therefore
it won't be possible as you want. Another way to achieve this would be to do
SELECT SUM(A), SUM(B) FROM T with the DAO, and on the Java side do the
multiplication. Would that work for your use-case?


On Wed, Jan 10, 2018 at 1:27 PM, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
wrote:

> Hello, Marc-Aurele, thank you for the snippet, but it looks like the sum
> usage everywhere in the codebase. It's aggregation over single field. What
> I need is to pass multiplication of two fields (cores x speed) inside the
> sum, but I don't understand how to do it with dao... Thank you.
>
> 10 янв. 2018 г. 19:12 пользователь "Marc-Aurèle Brothier" <
> ma...@exoscale.ch>
> написал:
>
> > Have a look here:
> > https://github.com/apache/cloudstack/blob/master/engine/
> > schema/src/com/cloud/storage/dao/VolumeDaoImpl.java#L347
> >
> > Or you can search other example in the code base with this string
> > "Func.SUM": thttps://
> > github.com/apache/cloudstack/blob/master/engine/schema/src/
> > com/cloud/storage/dao/VolumeDaoImpl.java#L347
> >
> > On Wed, Jan 10, 2018 at 9:42 AM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com>
> > wrote:
> >
> > > Hello, colleagues, please could anyone guide me how to implement
> > > aggregation SUM over two fields multiplication? Many thanks in advance.
> > >
> > > I'm trying to achieve:
> > > SELECT SUM(A * B) FROM T with DAO
> > >
> > >
> > >
> > > --
> > > With best regards, Ivan Kudryavtsev
> > > Bitworks Software, Ltd.
> > > Cell: +7-923-414-1515
> > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > >
> >
>


Re: How to achieve SUM (N * M) with ACS DAO

2018-01-10 Thread Marc-Aurèle Brothier
Have a look here:
https://github.com/apache/cloudstack/blob/master/engine/schema/src/com/cloud/storage/dao/VolumeDaoImpl.java#L347

Or you can search for other examples in the code base with the string
"Func.SUM": https://
github.com/apache/cloudstack/blob/master/engine/schema/src/com/cloud/storage/dao/VolumeDaoImpl.java#L347

On Wed, Jan 10, 2018 at 9:42 AM, Ivan Kudryavtsev 
wrote:

> Hello, colleagues, please could anyone guide me how to implement
> aggregation SUM over two fields multiplication? Many thanks in advance.
>
> I'm trying to achieve:
> SELECT SUM(A * B) FROM T with DAO
>
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 
>


Re: [DISCUSS] new way of github working

2018-01-08 Thread Marc-Aurèle Brothier
Same opinion as Rafael described.

On Mon, Jan 8, 2018 at 11:30 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I did not fully understand what you meant.
>
> Are you talking about the merge commit that can be created when a PR is
> merged? Or, are you talking about a merge commit that is added to a PR when
> a conflict is solved by its author(s)?
>
>
> I do not have problems with the first type of merge commits. However, I
> think we should not allow the second type to get into our code base.
>
> On Mon, Jan 8, 2018 at 7:45 AM, Daan Hoogland 
> wrote:
>
> > Devs,
> >
> > I see a lot of merge master to branch commits appearing on PRs. This is
> > against prior (non-hard) agreements on how we work. It is getting to be
> the
> > daily practice so;
> > How do we feel about
> > 1. not using merge commits anymore?
> > 2. merging back as a way of solving conflicts?
> > and
> > Do we need to make a policy of it or do we let it evolve, at the risk of
> > having more hard to track feature/version matrices?
> >
> > --
> > Daan
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: [VOTE] Clean up old and obsolete branches.

2018-01-03 Thread Marc-Aurèle Brothier
+1

On Wed, Jan 3, 2018 at 9:56 AM, Wido den Hollander  wrote:

> +1
>
> > Op 3 jan. 2018 om 09:02 heeft Rohit Yadav 
> het volgende geschreven:
> >
> > +0
> >
> >
> > - Rohit
> >
> > 
> >
> >
> >
> > 
> > From: Rafael Weingärtner 
> > Sent: Tuesday, January 2, 2018 5:16:24 PM
> > To: dev@cloudstack.apache.org
> > Subject: [VOTE] Clean up old and obsolete branches.
> >
> > Hope you guys had great holidays!
> >
> > Resuming the discussion we started last year in [1]. It is time to vote
> and
> > then to push (if the vote is successful) the protocol defined to our
> wiki.
> > Later we can start enforcing it.
> > I will summarize the protocol for branches in the official repository.
> >
> >   1. We only maintain the master and major release branches. We currently
> >   have a system of X.Y.Z.S. I define major release here as a release that
> >   changes either ((X or Y) or (X and Y));
> >   2. We will use tags for versioning. Therefore, all versions we release
> >   are tagged accordingly, including minor and security releases;
> >   3. When releasing the “SNAPSHOT” is removed and the branch of the
> >   version is created (if the version is being cut from master). Rule (1)
> one
> >   is applied here; therefore, only major releases will receive branches.
> >   Every release must have a tag according to the format X.Y.Z.S. After
> >   releasing, we bump the POM of the version to next available SNAPSHOT;
> >   4. If there's a need to fix an old version, we work on HEAD of
> >   corresponding release branch. For instance, if we want to fix
> something in
> >   release 4.1.1.0, we will work on branch 4.1, which will have the POM
> set to
> >   4.1.2.0-SNAPSHOT;
> >   5. People should avoid (it is not forbidden though) using the official
> >   apache repository to store working branches. If we want to work
> together on
> >   some issues, we can set up a fork and give permission to interested
> parties
> >   (the official repository is restricted to committers). If one uses the
> >   official repository, the branch used must be cleaned right after
> merging;
> >   6. Branches not following these rules will be removed if they have not
> >   received attention (commits) for over 6 (six) months;
> >   7. Before the removal of a branch in the official repository it is
> >   mandatory to create a Jira ticket and send a notification email to
> >   CloudStack’s dev mailing list. If there are no objections, the branch
> can
> >   be deleted seven (7) business days after the notification email is
> sent;
> >   8. After the branch removal, the Jira ticket must be closed.
> >
> > Let’s go to the poll:
> > (+1) – I want to work using this protocol
> > (0) – Indifferent to me
> > (-1) – I prefer the way it is now, without any protocol/guidelines
> >
> >
> > [1]
> > http://mail-archives.apache.org/mod_mbox/cloudstack-dev/
> 201711.mbox/%3CCAHGRR8ozDBX%3DJJewLz_cu-YP9vA3TEmesvxGArTDBPerAOj8Cw%
> 40mail.gmail.com%3E
> >
> > --
> > Rafael Weingärtner
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
>


Known trillian test failures

2017-12-20 Thread Marc-Aurèle Brothier
@rhtyd

Could something be done to avoid confusing people pushing PRs with Trillian
test failures from tests which are apparently known to fail all the time, or
often? I know it's hard to keep the tests in good shape and make them run
smoothly, but I find it very disturbing and therefore I have to admit I'm
not paying attention to those outputs, sadly.

Skipping them adds the high risk of them never getting fixed... I would hope
that someone having full access to the management & agent logs could fix
them, since AFAIK those logs aren't available.

Cheers


Re: [DISCUSS] Management server (pre-)shutdown to avoid killing jobs

2017-12-18 Thread Marc-Aurèle Brothier
It's definitely a great direction to take and much more robust. ZK would be
a great fit to monitor the state of management servers and agents with the
help of ephemeral nodes. On the other side, it's not encouraged to use it as
a messaging queue, and Kafka would be a much better fit for that purpose,
having partitions/topics. Doing a quick overview of the architecture, I
would see ZK used as an inter-JVM lock, holding mgmt & agent status nodes
along with their capacities, using a direct connection from each of them to
ZK. Kafka would be used to exchange the current command messages between
management servers, and between management servers & agents. With those 2
kinds of brokers in the middle it would make the system super resilient. For
example, if a management server sends a command to stop a VM on a host, but
that host's agent is stopping to perform an upgrade, when it connects back
to the Kafka topic its "stop" message would still be there if it hadn't
expired, and the command could be processed. Of course it would have taken
more time, but still, it would not return an error message. This would
remove the need to create and manage threads in the management server to
handle all the async tasks & checks and move it to an event-driven approach.
At the same time it adds 2 dependencies that require setup & configuration,
moving away from the goal of having an easy, almost all-included,
installable solution... Trade-offs to be discussed.

On Mon, Dec 18, 2017 at 8:06 PM, ilya musayev <ilya.mailing.li...@gmail.com>
wrote:

> I very much agree with Paul, we should consider moving into resilient model
> with least dependence I.e ha-proxy..
>
> Send a notification to partner MS to take over the job management would be
> ideal.
>
> On Mon, Dec 18, 2017 at 9:28 AM Paul Angus <paul.an...@shapeblue.com>
> wrote:
>
> > Hi Marc-Aurèle,
> >
> > Personally, my utopia would be to be able to pass async jobs between
> mgmt.
> > servers.
> > So rather than waiting in indeterminate time for a snapshot to complete,
> > monitoring the job is passed to another management server.
> >
> > I would LOVE that something like Zookeeper monitored the state of the
> > mgmt. servers, so that 'other' management servers could take over the
> async
> > jobs in the (unlikely) event that a management server becomes
> unavailable.
> >
> >
> >
> > Kind regards,
> >
> > Paul Angus
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> > Sent: 18 December 2017 13:56
> > To: dev@cloudstack.apache.org
> > Subject: [DISCUSS] Management server (pre-)shutdown to avoid killing jobs
> >
> > Hi everyone,
> >
> > Another point, another thread. Currently when shutting down a management
> > server, despite all the "stop()" method not being called as far as I
> know,
> > the server could be in the middle of processing an async job task. It
> will
> > lead to a failed job since the response won't be delivered to the correct
> > management server even though the job might have succeed on the agent. To
> > overcome this limitation due to our weekly production upgrades, we added
> a
> > pre-shutdown mechanism which works along side HA-proxy. The management
> > server keeps a eye onto a file "lb-agent" in which some keywords can be
> > written following the HA proxy guide (
> > https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.2-agent-
> check
> > ).
> > When it finds "maint", "stopped" or "drain", it stops those threads:
> >  - AsyncJobManager._heartbeatScheduler: responsible to fetch and start
> > execution of AsyncJobs
> >  - AlertManagerImpl._timer: responsible to send capacity check commands
> >  - StatsCollector._executor: responsible to schedule stats command
> >
> > Then the management server stops most of its scheduled tasks. The correct
> > thing to do before shutting down the server would be to send
> > "rebalance/reconnect" commands to all agents connected on that management
> > server to ensure that commands won't go through this server at all.
> >
> > Here, HA-proxy is responsible to stop sending API requests to the
> > corresponding server with the help of this local agent check.
> >
> > In case you want to cancel the maintenance shutdown, you could write
> > "up/ready" in the file and the different schedulers will be restarted.
> >
> > This is really more a change for operation around CS for people doing
> live
> > upgrade on a regular basis, so I'm unsure if the community would want
> such
> > a change in the code base. It goes a bit in the opposite direction of the
> > change for removing the need of HA-proxy
> > https://github.com/apache/cloudstack/pull/2309
> >
> > If there is enough positive feedback for such a change, I will port them
> > to match with the upstream branch in a PR.
> >
> > Kind regards,
> > Marc-Aurèle
> >
>


[DISCUSS] Management server (pre-)shutdown to avoid killing jobs

2017-12-18 Thread Marc-Aurèle Brothier
Hi everyone,

Another point, another thread. Currently when shutting down a management
server, despite all the "stop()" methods not being called as far as I know,
the server could be in the middle of processing an async job task. It will
lead to a failed job since the response won't be delivered to the correct
management server even though the job might have succeeded on the agent. To
overcome this limitation due to our weekly production upgrades, we added a
pre-shutdown mechanism which works alongside HAProxy. The management server
keeps an eye on a file "lb-agent" in which some keywords can be written
following the HAProxy agent-check guide (
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.2-agent-check).
When it finds "maint", "stopped" or "drain", it stops those threads:
 - AsyncJobManager._heartbeatScheduler: responsible for fetching and
starting execution of AsyncJobs
 - AlertManagerImpl._timer: responsible for sending capacity check commands
 - StatsCollector._executor: responsible for scheduling stats commands

Then the management server stops most of its scheduled tasks. The correct
thing to do before shutting down the server would be to send
"rebalance/reconnect" commands to all agents connected to that management
server to ensure that commands won't go through this server at all.

Here, HAProxy is responsible for no longer sending API requests to the
corresponding server, with the help of this local agent check.

In case you want to cancel the maintenance shutdown, you can write
"up/ready" in the file and the different schedulers will be restarted.

This is really more a change for operation around CS for people doing live
upgrade on a regular basis, so I'm unsure if the community would want such
a change in the code base. It goes a bit in the opposite direction of the
change for removing the need of HA-proxy
https://github.com/apache/cloudstack/pull/2309

If there is enough positive feedback for such a change, I will port them to
match with the upstream branch in a PR.

Kind regards,
Marc-Aurèle


Re: Clean up old and obsolete branches

2017-12-18 Thread Marc-Aurèle Brothier
+1 for me

On point 5, since people can work together on forks, I would simply state
that no branches other than the official ones can be in the project
repository, removing: "If one uses the official repository, the branch used
must be cleaned right after merging;"

On Mon, Dec 18, 2017 at 2:05 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Guys, this is the moment to give your opinion here. Since nobody has
> commented anything on the protocol. I will just add some more steps before
> deletion.
>
>1. Only maintain the master and major release branches. We currently
>have a system of X.Y.Z.S. I define major release here as a release that
>changes either ((X or Y) or (X and Y));
>2. We will use tags for versioning. Therefore, all versions we release
>are tagged accordingly, including minor and security releases;
>3. When releasing the “SNAPSHOT” is removed and the branch of the
>version is created (if the version is being cut from master). Rule (1)
> one
>is applied here; therefore, only major releases will receive branches.
>Every release must have a tag in the format X.Y.Z.S. After releasing, we
>bump the pom of the version to next available SNAPSHOT;
>4. If there's a need to fix an old version, we work on HEAD of
>corresponding release branch;
>5. People should avoid using the official apache repository to store
>working branches. If we want to work together on some issues, we can
> set up
>a fork and give permission to interested parties (the official
> repository
>is restricted to committers). If one uses the official repository, the
>branch used must be cleaned right after merging;
>6. Branches not following these rules will be removed if they have not
>received attention (commits) for over 6 (six) months;
>7. Before the removal of a branch in the official repository it is
>mandatory to create a Jira ticket and send a notification email to
>CloudStack’s dev mailing list. If there are no objections, the branch
> can
>be deleted seven (7) business days after the notification email is sent;
>8. After the branch removal, the Jira ticket must be closed.
>
>
>  I will wait more two days. If we do not get comments here anymore, I will
> call for a vote, and then if there are no objections I will write the
> protocol on our Wiki. Afterwards, we can start removing branches (following
> the defined protocol).
>
> On Thu, Dec 14, 2017 at 5:08 PM, Daan Hoogland 
> wrote:
>
> > sounds lime a lazy consensus vote; +1 from me
> >
> > On Thu, Dec 14, 2017 at 7:07 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Guys,
> > >
> > > Khosrow has done a great job here, but we still need to move this
> forward
> > > and create a standard/guidelines on how to use the official repo.
> Looking
> > > at the list in [1] we can clearly see that things are messy.  This is a
> > > minor discussion, but in my opinion, we should finish it.
> > >
> > > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10169
> > >
> > > I will propose the following regarding the use of the official
> > repository.
> > > I will be waiting for your feedback, and then proceed with a vote.
> > >
> > >1. Only maintain the master and major release branches. We currently
> > >have a system of X.Y.Z.S. I define major release here as a release
> > that
> > >changes either ((X or Y) or (X and Y));
> > >2. We will use tags for versioning. Therefore, all versions we
> release
> > >are tagged accordingly, including minor and security releases;
> > >3. When releasing the “SNAPSHOT” is removed and the branch of the
> > >version is created (if the version is being cut from master). Rule
> (1)
> > > one
> > >is applied here; therefore, only major releases will receive
> branches.
> > >Every release must have a tag in the format X.Y.Z.S. After
> releasing,
> > we
> > >bump the pom of the version to next available SNAPSHOT;
> > >4. If there's a need to fix an old version, we work on HEAD of
> > >corresponding release branch;
> > >5. People should avoid using the official apache repository to store
> > >working branches. If we want to work together on some issues, we can
> > > set up
> > >a fork and give permission to interested parties (the official
> > > repository
> > >is restricted to committers). If one uses the official repository,
> the
> > >branch used must be cleaned right after merging;
> > >6. Branches not following these rules will be removed if they have
> not
> > >received attention (commits) for over 6 (six) months.
> > >
> > > I think that is all. Do you guys have additions/removals/proposals so
> we
> > > can move this forward?
> > >
> > > On Mon, Dec 4, 2017 at 7:20 PM, Khosrow Moossavi <
> kmooss...@cloudops.com
> > >
> > > wrote:
> > >
> > > > I agree Erik. I updated the list in CLOUDSTACK-10169 

Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
Sorry about the confusion. It's not going to replace the DB transactions in
the DAO way. Today we can say that there are 2 types of locks in CS: either
a pure transaction one, with the SELECT FOR UPDATE which locks a row against
any operation by other threads, or a more programmatic one with the op_lock
table holding entries for a pure locking mechanism used by the Merovingian
class. Zookeeper could be used to replace the latter, and wouldn't be a good
candidate for the former.

To give a more precise example of the replacement, it could be used to
replace the lock on VM operations, when only one operation at a time must be
performed on a VM. It should not be used to replace locks in DAOs which lock
a VO entry to update some of its fields.

Rafael, does that clarify your thoughts and concerns about transactions and
connections?

On Mon, Dec 18, 2017 at 1:10 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> So, we would need to change every piece of code that opens and uses
> connections and transactions to change to ZK model? I mean, to direct the
> flow to ZK.
>
> On Mon, Dec 18, 2017 at 8:55 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > I understand your point, but there isn't any "transaction" in ZK. The
> > transaction and commit stuff are really for DB and not part of ZK. All
> > entries (if you start writing data in some nodes) are versioned. For
> > example you could enforce that to overwrite a node value you must submit
> > the node data having the same last version id to ensure you were
> > overwriting from the latest value/state of that node. Bear in mind that
> you
> > should not put too much data into your ZK, it's not a database
> replacement,
> > neither a nosql db.
> >
> > The ZK client (CuratorFramework object) is started on the server startup,
> > and you only need to pass it along your calls so that the connection is
> > reused, or retried, depending on the state. Nothing manual has to be
> done,
> > it's all in this curator library.
> >
> > On Mon, Dec 18, 2017 at 11:44 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > I did not check the link before. Sorry about that.
> > >
> > > Reading some of the pages there, I see curator more like a client
> library
> > > such as MySQL JDBC client.
> > >
> > > When I mentioned framework, I was looking for something like
> Spring-data.
> > > So, we could simply rely on the framework to manage connections and
> > > transactions. For instance, we could define a pattern that would open
> > > connection with a read-only transaction. And then, we could annotate
> > > methods that would write in the database something with
> > > @Transactional(readonly = false). If we are going to a change like this
> > we
> > > need to remove manually open connections and transactions. Also, we
> have
> > to
> > > remove the transaction management code from our code base.
> > >
> > > I would like to see something like this [1] in our future. No manually
> > > written transaction code, and no transaction management in our code
> base.
> > > Just simple annotation usage or transaction pattern in Spring XML
> files.
> > >
> > > [1]
> > > https://github.com/rafaelweingartner/daily-tasks/
> > > blob/master/src/main/java/br/com/supero/desafio/services/
> > TaskService.java
> > >
> > > On Mon, Dec 18, 2017 at 8:32 AM, Marc-Aurèle Brothier <
> ma...@exoscale.ch
> > >
> > > wrote:
> > >
> > > > @rafael, yes there is a framework (curator), it's the link I posted
> in
> > my
> > > > first message: https://curator.apache.org/
> curator-recipes/shared-lock.
> > > html
> > > > This framework helps handling all the complexity of ZK.
> > > >
> > > > The ZK client stays connected all the time (as the DB connection
> pool),
> > > and
> > > > only one connection (ZKClient) is needed to communicate with the ZK
> > > server.
> > > > The framework handles reconnection as well.
> > > >
> > > > Have a look at the curator website to understand its goal:
> > > > https://curator.apache.org/
> > > >
> > > > On Mon, Dec 18, 2017 at 11:01 AM, Rafael Weingärtner <
> > > > rafaelweingart...@gmail.com> wrote:
> > > >
> > > > > Do we have framework to do this kind of looking in ZK?
> > > > > I mean, you said " create a new InterProcessSemaphoreMutex which
> > > handles
> >

Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
I understand your point, but there isn't any "transaction" in ZK. The
transaction and commit stuff is really for the DB and not part of ZK. All
entries (if you start writing data in some nodes) are versioned. For example
you could enforce that, to overwrite a node value, you must submit the node
data with the same last version id, to ensure you are overwriting from the
latest value/state of that node. Bear in mind that you should not put too
much data into ZK: it's not a database replacement, nor a NoSQL DB.

The ZK client (CuratorFramework object) is started on server startup, and
you only need to pass it along your calls so that the connection is reused,
or retried, depending on the state. Nothing manual has to be done; it's all
in the curator library.

On Mon, Dec 18, 2017 at 11:44 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I did not check the link before. Sorry about that.
>
> Reading some of the pages there, I see curator more like a client library
> such as MySQL JDBC client.
>
> When I mentioned framework, I was looking for something like Spring-data.
> So, we could simply rely on the framework to manage connections and
> transactions. For instance, we could define a pattern that would open
> connection with a read-only transaction. And then, we could annotate
> methods that would write in the database something with
> @Transactional(readonly = false). If we are going to a change like this we
> need to remove manually open connections and transactions. Also, we have to
> remove the transaction management code from our code base.
>
> I would like to see something like this [1] in our future. No manually
> written transaction code, and no transaction management in our code base.
> Just simple annotation usage or transaction pattern in Spring XML files.
>
> [1]
> https://github.com/rafaelweingartner/daily-tasks/
> blob/master/src/main/java/br/com/supero/desafio/services/TaskService.java
>
> On Mon, Dec 18, 2017 at 8:32 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > @rafael, yes there is a framework (curator), it's the link I posted in my
> > first message: https://curator.apache.org/curator-recipes/shared-lock.
> html
> > This framework helps handling all the complexity of ZK.
> >
> > The ZK client stays connected all the time (as the DB connection pool),
> and
> > only one connection (ZKClient) is needed to communicate with the ZK
> server.
> > The framework handles reconnection as well.
> >
> > Have a look at the curator website to understand its goal:
> > https://curator.apache.org/
> >
> > On Mon, Dec 18, 2017 at 11:01 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Do we have framework to do this kind of looking in ZK?
> > > I mean, you said " create a new InterProcessSemaphoreMutex which
> handles
> > > the locking mechanism.". This feels that we would have to continue
> > opening
> > > and closing this transaction manually, which is what causes a lot of
> our
> > > headaches with transactions (it is not MySQL locks fault entirely, but
> > our
> > > code structure).
> > >
> > > On Mon, Dec 18, 2017 at 7:47 AM, Marc-Aurèle Brothier <
> ma...@exoscale.ch
> > >
> > > wrote:
> > >
> > > > We added ZK lock for fix this issue but we will remove all current
> > locks
> > > in
> > > > ZK in favor of ZK one. The ZK lock is already encapsulated in a
> project
> > > > with an interface, but more work should be done to have a proper
> > > interface
> > > > for locks which could be implemented with the "tool" you want,
> either a
> > > DB
> > > > lock for simplicity, or ZK for more advanced scenarios.
> > > >
> > > > @Daan you will need to add the ZK libraries in CS and have a running
> ZK
> > > > server somewhere. The configuration value is read from the
> > > > server.properties. If the line is empty, the ZK client is not created
> > and
> > > > any lock request will immediately return (not holding any lock).
> > > >
> > > > @Rafael: ZK is pretty easy to setup and have running, as long as you
> > > don't
> > > > put too much data in it. Regarding our scenario here, with only
> locks,
> > > it's
> > > > easy. ZK would be only the gatekeeper to locks in the code, ensuring
> > that
> > > > multi JVM can request a true lock.
> > > > For the code point of view, you're opening a connection to a ZK node
> > (any
> > > > of a 

Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
@rafael, yes there is a framework (curator); it's the link I posted in my
first message: https://curator.apache.org/curator-recipes/shared-lock.html
This framework helps handle all the complexity of ZK.

The ZK client stays connected all the time (like the DB connection pool),
and only one connection (ZKClient) is needed to communicate with the ZK
server. The framework handles reconnection as well.

Have a look at the curator website to understand its goal:
https://curator.apache.org/

On Mon, Dec 18, 2017 at 11:01 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Do we have framework to do this kind of looking in ZK?
> I mean, you said " create a new InterProcessSemaphoreMutex which handles
> the locking mechanism.". This feels that we would have to continue opening
> and closing this transaction manually, which is what causes a lot of our
> headaches with transactions (it is not MySQL locks fault entirely, but our
> code structure).
>
> On Mon, Dec 18, 2017 at 7:47 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > We added ZK lock for fix this issue but we will remove all current locks
> in
> > ZK in favor of ZK one. The ZK lock is already encapsulated in a project
> > with an interface, but more work should be done to have a proper
> interface
> > for locks which could be implemented with the "tool" you want, either a
> DB
> > lock for simplicity, or ZK for more advanced scenarios.
> >
> > @Daan you will need to add the ZK libraries in CS and have a running ZK
> > server somewhere. The configuration value is read from the
> > server.properties. If the line is empty, the ZK client is not created and
> > any lock request will immediately return (not holding any lock).
> >
> > @Rafael: ZK is pretty easy to setup and have running, as long as you
> don't
> > put too much data in it. Regarding our scenario here, with only locks,
> it's
> > easy. ZK would be only the gatekeeper to locks in the code, ensuring that
> > multi JVM can request a true lock.
> > For the code point of view, you're opening a connection to a ZK node (any
> > of a cluster) and you create a new InterProcessSemaphoreMutex which
> handles
> > the locking mechanism.
> >
> > On Mon, Dec 18, 2017 at 10:24 AM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com
> > > wrote:
> >
> > > Rafael,
> > >
> > > - It's easy to configure and run ZK either in single node or cluster
> > > - zookeeper should replace mysql locking mechanism used inside ACS code
> > > (places where ACS locks tables or rows).
> > >
> > > I don't think from the other size, that moving from MySQL locks to ZK
> > locks
> > > is easy and light and (even implemetable) way.
> > >
> > > 2017-12-18 16:20 GMT+07:00 Rafael Weingärtner <
> > rafaelweingart...@gmail.com
> > > >:
> > >
> > > > How hard is it to configure Zookeeper and get everything up and
> > running?
> > > > BTW: what zookeeper would be managing? CloudStack management servers
> or
> > > > MySQL nodes?
> > > >
> > > > On Mon, Dec 18, 2017 at 7:13 AM, Ivan Kudryavtsev <
> > > > kudryavtsev...@bw-sw.com>
> > > > wrote:
> > > >
> > > > > Hello, Marc-Aurele, I strongly believe that all mysql locks should
> be
> > > > > removed in favour of truly DLM solution like Zookeeper. The
> > performance
> > > > of
> > > > > 3node ZK ensemble should be enough to hold up to 1000-2000 locks
> per
> > > > second
> > > > > and it helps to move to truly clustered MySQL like galera without
> > > single
> > > > > master server.
> > > > >
> > > > > 2017-12-18 15:33 GMT+07:00 Marc-Aurèle Brothier <ma...@exoscale.ch
> >:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > I was wondering how many of you are running CloudStack with a
> > cluster
> > > > of
> > > > > > management servers. I would think most of you, but it would be
> nice
> > > to
> > > > > hear
> > > > > > everyone voices. And do you get hosts going over their capacity
> > > limits?
> > > > > >
> > > > > > We discovered that during the VM allocation, if you get a lot of
> > > > parallel
> > > > > > requests to create new VMs, most notably with large profiles, the
> > > > > capacity
> > > > > >

Re: [Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
We added the ZK lock to fix this issue, but we will not replace all the
current locks in CS with ZK ones. The ZK lock is already encapsulated in a
project with an interface, but more work should be done to have a proper
interface for locks which could be implemented with the "tool" you want,
either a DB lock for simplicity, or ZK for more advanced scenarios.

@Daan you will need to add the ZK libraries in CS and have a running ZK
server somewhere. The configuration value is read from the
server.properties. If the line is empty, the ZK client is not created and
any lock request will immediately return (not holding any lock).
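
For illustration, the line in server.properties looks something like this
(the property name below is made up, the one in our patch may differ):
# empty value = no ZK client, lock requests become no-ops
zookeeper.connection.string=zk1:2181,zk2:2181,zk3:2181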

@Rafael: ZK is pretty easy to set up and keep running, as long as you don't
put too much data in it. Regarding our scenario here, with only locks, it's
easy. ZK would only be the gatekeeper to locks in the code, ensuring that
multiple JVMs can request a true lock.
From the code point of view, you open a connection to a ZK node (any node of
a cluster) and you create a new InterProcessSemaphoreMutex which handles the
locking mechanism.

On Mon, Dec 18, 2017 at 10:24 AM, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com
> wrote:

> Rafael,
>
> - It's easy to configure and run ZK either in single node or cluster
> - zookeeper should replace mysql locking mechanism used inside ACS code
> (places where ACS locks tables or rows).
>
> I don't think from the other size, that moving from MySQL locks to ZK locks
> is easy and light and (even implemetable) way.
>
> 2017-12-18 16:20 GMT+07:00 Rafael Weingärtner <rafaelweingart...@gmail.com
> >:
>
> > How hard is it to configure Zookeeper and get everything up and running?
> > BTW: what zookeeper would be managing? CloudStack management servers or
> > MySQL nodes?
> >
> > On Mon, Dec 18, 2017 at 7:13 AM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com>
> > wrote:
> >
> > > Hello, Marc-Aurele, I strongly believe that all mysql locks should be
> > > removed in favour of truly DLM solution like Zookeeper. The performance
> > of
> > > 3node ZK ensemble should be enough to hold up to 1000-2000 locks per
> > second
> > > and it helps to move to truly clustered MySQL like galera without
> single
> > > master server.
> > >
> > > 2017-12-18 15:33 GMT+07:00 Marc-Aurèle Brothier <ma...@exoscale.ch>:
> > >
> > > > Hi everyone,
> > > >
> > > > I was wondering how many of you are running CloudStack with a cluster
> > of
> > > > management servers. I would think most of you, but it would be nice
> to
> > > hear
> > > > everyone voices. And do you get hosts going over their capacity
> limits?
> > > >
> > > > We discovered that during the VM allocation, if you get a lot of
> > parallel
> > > > requests to create new VMs, most notably with large profiles, the
> > > capacity
> > > > increase is done too far after the host capacity checks and results
> in
> > > > hosts going over their capacity limits. To detail the steps: the
> > > deployment
> > > > planner checks for cluster/host capacity and pick up one deployment
> > plan
> > > > (zone, cluster, host). The plan is stored in the database under a
> > VMwork
> > > > job and another thread picks that entry and starts the deployment,
> > > > increasing the host capacity and sending the commands. Here there's a
> > > time
> > > > gap between the host being picked up and the capacity increase for
> that
> > > > host of a couple of seconds, which is well enough to go over the
> > capacity
> > > > on one or more hosts. A few VMwork job can be added in the DB queue
> > > > targeting the same host before one gets picked up.
> > > >
> > > > To fix this issue, we're using Zookeeper to act as the multi JVM lock
> > > > manager thanks to their curator library (
> > > > https://curator.apache.org/curator-recipes/shared-lock.html). We
> also
> > > > changed the time when the capacity is increased, which occurs now
> > pretty
> > > > much after the deployment plan is found and inside the zookeeper
> lock.
> > > This
> > > > ensure we don't go over the capacity of any host, and it has been
> > proven
> > > > efficient since a month in our management server cluster.
> > > >
> > > > This adds another potential requirement which should be discuss
> before
> > > > proposing a PR. Today the code works seamlessly without ZK too, to
> > ensure
> > > > it's not a hard requirement, for example in a lab.
> > > >
> > > > Comments?
> > > >
> > > > Kind regards,
> > > > Marc-Aurèle
> > > >
> > >
> > >
> > >
> > > --
> > > With best regards, Ivan Kudryavtsev
> > > Bitworks Software, Ltd.
> > > Cell: +7-923-414-1515
> > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ <http://bw-sw.com/>
>


[Discuss] Management cluster / Zookeeper holding locks

2017-12-18 Thread Marc-Aurèle Brothier
Hi everyone,

I was wondering how many of you are running CloudStack with a cluster of
management servers. I would think most of you, but it would be nice to hear
everyone's voice. And do you get hosts going over their capacity limits?

We discovered that during VM allocation, if you get a lot of parallel
requests to create new VMs, most notably with large profiles, the capacity
increase is done too long after the host capacity checks and results in
hosts going over their capacity limits. To detail the steps: the deployment
planner checks for cluster/host capacity and picks one deployment plan
(zone, cluster, host). The plan is stored in the database under a VMwork job
and another thread picks that entry and starts the deployment, increasing
the host capacity and sending the commands. Here there's a time gap of a
couple of seconds between the host being picked and the capacity increase
for that host, which is well enough to go over the capacity on one or more
hosts. A few VMwork jobs can be added in the DB queue targeting the same
host before one gets picked up.

To fix this issue, we're using Zookeeper to act as the multi-JVM lock
manager thanks to their curator library (
https://curator.apache.org/curator-recipes/shared-lock.html). We also
changed the time when the capacity is increased, which now occurs pretty
much right after the deployment plan is found, and inside the zookeeper
lock. This ensures we don't go over the capacity of any host, and it has
proven efficient for a month now in our management server cluster.
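
To give an idea of what the locking part looks like (a simplified sketch,
not the exact code we run; the connection string, ZK path and timeout below
are made up, and hostId stands for the host being reserved):
// acquire()/release() throw Exception, handling omitted for brevity
CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
zk.start();
InterProcessSemaphoreMutex lock =
        new InterProcessSemaphoreMutex(zk, "/cloudstack/capacity/host-" + hostId);
if (lock.acquire(30, TimeUnit.SECONDS)) {
    try {
        // re-check and increase the host capacity here, before the
        // VMwork job is queued
    } finally {
        lock.release();
    }
} else {
    // could not get the lock in time: pick another host or fail the
    // allocation instead of over-committing
}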

This adds another potential requirement which should be discussed before
proposing a PR. Today the code works seamlessly without ZK too, to ensure
it's not a hard requirement, for example in a lab.

Comments?

Kind regards,
Marc-Aurèle


Re: [PROPOSE] RM for 4.11

2017-11-29 Thread Marc-Aurèle Brothier - Exoscale
Great Rohit!

I see you started to tag the PRs with the 4.11 milestone on GitHub,
that's great! I wish we used more of the little features on GitHub to
help organize the releases and reviews.

Marc-Aurèle

On Wed, 2017-11-29 at 15:44 +0530, Rohit Yadav wrote:
> Hi All,
> 
> I’d like to put myself forward as release manager for 4.11. The 4.11
> releases will be the next major version LTS release since 4.9 and
> will be
> supported for 20 months per the LTS manifesto [2] until 1 July 2019.
> 
> Daan Hoogland and Paul Angus will assist during the process and all
> of us
> will be the gatekeepers for reviewing/testing/merging the PRs, others
> will
> be welcome to support as well.
> 
> As a community member, I will try to help get PRs reviewed, tested
> and
> merged (as would everyone else I hope) but with an RM hat on I would
> like
> to see if we can make that role less inherently life-consuming and
> put the
> onus back on the community to get stuff done.
> 
> Here the plan:
> 1. As RM I put forward the freeze date of the 8th of January 2018,
> hoping
> for community approval.
> 2. After the freeze date (8th Jan) until GA release, features will
> not be
> allowed and fixes only as long as there are blocker issues
> outstanding.
> Fixes for other issues will be individually judged on their merit and
> risk.
> 3. RM will triage/report critical and blocker bugs for 4.11 [4] and
> encourage people to get them fixed.
> 4. RM will create RCs and start voting once blocker bugs are cleared
> and
> baseline smoke test results are on par with previous 4.9.3.0/4.10.0.0
> smoke
> test results.
> 5. RM will allocate at least a week for branch stabilization and
> testing.
> At the earliest, on 15th January, RM will put 4.11.0.0-rc1 for voting
> from
> the 4.11 branch, and master will be open to accepting new features.
> 6. RM will repeat 3-5 as required. Voting/testing of -rc2, -rc3 and
> so on
> will be created as required.
> 7. Once vote passes - RM will continue with the release procedures
> [1].
> 
> In conjunction with that, I also propose and put forward the date of
> 4.12
> cut-off as 4 months [3] after GA release of 4.11 (so everyone knows
> when
> the next one is coming hopefully giving peace of mind to those who
> have
> features which would not make the proposed 4.11 cut off).
> 
> I’d like the community (including myself and colleagues) to:
> - Up to 8th January, community members try to review, test and merge
> as
> many fixes as possible, while super-diligent to not de-stabilize the
> master
> branch.
> - Engage with gatekeepers to get your PRs reviewed, tested and merged
> (currently myself, Daan and Paul, others are welcome to engage as
> well). Do
> not merge the PRs
> - A pull request may be reverted where the author(s) are not
> responding and
> authors may be asked to re-submit their changes after taking suitable
> remedies.
> - Find automated method to show (at a glance) statuses of PRs with
> respect
> to:
>   · Number of LGTMs
>   · Smoke tests
>   · Functional tests
>   · Travis tests passing
>   · Mergeability
> - Perform a weekly run of a component-test matrix against the master
> branch
> before Jan 8th cut off (based on current hypervisors including basic
> (KVM)
> and advanced networking).
> - Continue to fix broken tests.
> 
> Thoughts, feedback, comments?
> 
> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+Pr
> ocedure
> [2] https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS
> [3] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Releases
> [4] The current list of blocker and critical bugs currently stands as
> per
> the following list:
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20CLOUDSTACK
> %20AND%20issuetype%20%3D%20Bug%20AND%20status%20in%20(Open%2C%20%22In
> %20Progress%22%2C%20Reopened)%20AND%20priority%20in%20(Blocker%2C%20C
> ritical)%20AND%20affectedVersion%20in%20(4.10.0.0%2C%204.10.1.0%2C%20
> 4.11.0.0%2C%20Future)%20ORDER%20BY%20priority%20DESC%2C%20updated%20D
> ESC
> 
> Regards,
> Rohit Yadav


Re: CloudStack LTS EOL date?

2017-11-21 Thread Marc-Aurèle Brothier - Exoscale
It makes more sense to me too.


On Tue, 2017-11-21 at 12:04 +0100, Rene Moser wrote:
> Hi all
> 
> The current LTS release is 4.9 which is EOL in June 2018 according to
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS
> 
> AFAIK there is no work planned for a new LTS. The release pace has
> slowed down (the high pace and leaving users behind on fixes was the
> reason
> for the LTS).
> 
> I am still pro LTS but in my opinion we should have defined the EOL
> in
> relation of the successor LTS release date: "The EOL of the current
> LTS
> is +6 months after the next LTS release."
> 
> Small example:
> 
> Current LTS 4.9
> Next LTS 4.1x release on 01.04. --> LTS 4.9 is 01.10.
> 
> Does this make sense? Other suggestions?
> 
> Regards
> René


Re: [FS] Request for comments: Secure VM Live Migration for KVM

2017-11-17 Thread Marc-Aurèle Brothier - Exoscale
Working, thanks!


On Fri, 2017-11-17 at 11:27 -0200, Rafael Weingärtner wrote:
> Marc I added permission to you; can you test if you can make comments
> now?
> 
> On Fri, Nov 17, 2017 at 11:20 AM, Marc-Aurèle Brothier - Exoscale <
> ma...@exoscale.ch> wrote:
> 
> > I'm not able to post comments on the wiki even when logged in so I
> > post
> > to the mailing list. I guess I'm not in any special wiki group to
> > edit
> > CS pages.
> > 
> > Good news you made the live migration working (right?) on master.
> > Is it
> > really something we want to control under CS on the agent
> > installation
> > all this libvirt TLS setup? Maybe the installation could write
> > libvirtd
> > configuration file for TLS and non-TLS setup in CS and/or libvirt
> > /etc
> > directory but without overriding the normal one. I have to admit
> > I'm
> > not familiar with how things are usually done in CS for external
> > components.
> > 
> > You can also add to cloudstack configuration the libvirt flags used
> > for
> > the live migration, which should be customizable in some way. On my
> > PR
> > it's in agent.properties, but it could be sent along with the
> > migration
> > command.
> > 
> > I would welcome if you could setup a wiki page that I could edit on
> > the
> > KVM live migration so I could add my remark on my experience and
> > things
> > to config/consider.
> > 
> > On your question: +1 on having the configuration value for TLS or
> > plain
> > tcp.
> > 
> > Marc-Aurèle
> > 
> > On Thu, 2017-11-16 at 10:32 +, Rohit Yadav wrote:
> > > All,
> > > 
> > > 
> > > Kindly review and share your thoughts and comments for a new
> > > feature
> > > - Secure VM live migration for KVM, this feature builds on top of
> > > the
> > > previous feature that brought in a new CA framework [1] for
> > > CloudStack.
> > > 
> > > 
> > > Here is a rough first draft for your review:
> > > 
> > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure+KVM
> > > +VM+
> > > Live+Migration
> > > 
> > > 
> > > [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure
> > > +Age
> > > nt+Communications
> > > 
> > > 
> > > Regards.
> > > 
> > > rohit.ya...@shapeblue.com
> > > www.shapeblue.com
> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > > 
> > > 
> > > 
> 
> 
> 


Re: [FS] Request for comments: Secure VM Live Migration for KVM

2017-11-17 Thread Marc-Aurèle Brothier - Exoscale
I'm not able to post comments on the wiki even when logged in so I post
to the mailing list. I guess I'm not in any special wiki group to edit
CS pages.

Good news that you made live migration work (right?) on master. Is all
this libvirt TLS setup really something we want to control from CS during
the agent installation? Maybe the installation could write libvirtd
configuration files for the TLS and non-TLS setups in the CS and/or
libvirt /etc directory, but without overriding the normal one. I have to
admit I'm not familiar with how things are usually done in CS for
external components.
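
For reference, this is the kind of libvirtd.conf settings being discussed
(the values below are the libvirt defaults as I remember them, your
distro may differ, so double-check before relying on them):
listen_tls = 1
tls_port = "16514"
ca_file = "/etc/pki/CA/cacert.pem"
cert_file = "/etc/pki/libvirt/servercert.pem"
key_file = "/etc/pki/libvirt/private/serverkey.pem"
plus the matching client certificates on each host, which is the part the
installation should not override blindly.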

You could also add the libvirt flags used for the live migration to the
CloudStack configuration; they should be customizable in some way. In my
PR they are in agent.properties, but they could be sent along with the
migration command.

I would welcome it if you could set up a wiki page on KVM live migration
that I could edit, so I could add remarks on my experience and things to
configure/consider.

On your question: +1 on having the configuration value for TLS or plain
tcp.

Marc-Aurèle

On Thu, 2017-11-16 at 10:32 +, Rohit Yadav wrote:
> All,
> 
> 
> Kindly review and share your thoughts and comments for a new feature
> - Secure VM live migration for KVM, this feature builds on top of the
> previous feature that brought in a new CA framework [1] for
> CloudStack.
> 
> 
> Here is a rough first draft for your review:
> 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure+KVM+VM+
> Live+Migration
> 
> 
> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Secure+Age
> nt+Communications
> 
> 
> Regards.
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 


Re: New committer: Gabriel Beims Bräscher

2017-11-15 Thread Marc-Aurèle Brothier - Exoscale
Welcome Gabriel & congrats!

Marc-Aurèle

On Wed, 2017-11-15 at 08:32 -0200, Rafael Weingärtner wrote:
> The Project Management Committee (PMC) for Apache CloudStack has
> invited
> Gabriel Beims Bräscher to become committer and we are pleased to
> announce
> that he has accepted.
> 
> Gabriel has shown commitment to Apache CloudStack community,
> contributing
> with PRs in a constant fashion. Moreover, he has also proved great
> abilities to interact with the community quite often in our mailing
> lists
> and Slack channel trying to help people.
> 
> Let´s congratulate and welcome Apache CloudStack’s newest committer.
> 
> --
> Rafael Weingärtner


Re: Remote debugging not working anymore

2017-11-13 Thread Marc-Aurèle Brothier - Exoscale
@rhtyd - the java-opts file content (+ read in the startup script)
should be put back to allow startup customization.



On Mon, 2017-11-13 at 15:18 +0100, Sigert GOEMINNE wrote:
> Hi all,
> 
> Starting from the changes in PR2226
> , what do i need to
> specify
> to enable remote debugging on a system? Before I added
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 in
> /etc/default/cloudstack-management and restarted the cloudstack mgmt
> server
> but apparently this doesn't work anymore. If i run netstat -tlpn, It
> is not
> showing port 5005 in my case. Do I need to change something in our
> test
> infrastructure?
> 
> Thanks.
> Kind regards,
> Sigert


Re: [ANNOUNCE] Syed Mushtaq Ahmed has joined the PMC

2017-10-10 Thread Marc-Aurèle Brothier
Congrats Syed 

On Mon, Oct 9, 2017 at 7:49 PM, Syed Ahmed  wrote:

> Thank you all for the kind words. It's been a pleasure working with you
> guys. I hope we continue the good work!
>
> -Syed
>
> On Mon, Oct 9, 2017 at 12:32 PM, Gabriel Beims Bräscher <
> gabrasc...@gmail.com> wrote:
>
> > Congrats, Syed!!!
> > Well deserved!
> >
> > 2017-10-09 13:26 GMT-03:00 Nitin Kumar Maharana <
> > nitinkumar.mahar...@accelerite.com>:
> >
> > > Congratulations, Syed!!!
> > > On 09-Oct-2017, at 4:56 PM, Paul Angus  mailto:
> > > paul.an...@shapeblue.com>> wrote:
> > >
> > > Fellow CloudStackers,
> > >
> > > It gives me great pleasure to say that Syed has be invited to join the
> > PMC
> > > and has gracefully accepted.
> > > Please joining me in congratulating Syed!
> > >
> > >
> > > Kind regards,
> > >
> > > Paul Angus
> > >
> > >
> > > paul.an...@shapeblue.com
> > > www.shapeblue.com
> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > >
> > >
> > >
> > >
> > > DISCLAIMER
> > > ==
> > > This e-mail may contain privileged and confidential information which
> is
> > > the property of Accelerite, a Persistent Systems business. It is
> intended
> > > only for the use of the individual or entity to which it is addressed.
> If
> > > you are not the intended recipient, you are not authorized to read,
> > retain,
> > > copy, print, distribute or use this message. If you have received this
> > > communication in error, please notify the sender and delete all copies
> of
> > > this message. Accelerite, a Persistent Systems business does not accept
> > any
> > > liability for virus infected mails.
> > >
> >
>


Re: Checkstyle / code style / reformatting

2017-09-29 Thread Marc-Aurèle Brothier
The checkstyle rules can be imported in pretty much any IDE to help during
coding and to display the errors immediately.
To speed up the detection of checkstyle errors, we could edit the
post-commit hooks to run an mvn command only for the checkstyle rules, and
then do the usual stuff.
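
Something along these lines could run in such a hook (assuming the standard
maven-checkstyle-plugin goal, pointed at the project's existing rules file):

    mvn -q checkstyle:check \
        -Dcheckstyle.config.location=tools/checkstyle/src/main/resources/cloud-style.xml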

I'm not in favor of altering the code after it has been approved. The PRs
should be "correct" on all points: code style, functionality, tests, docs.

On Fri, Sep 29, 2017 at 10:19 AM, Daan Hoogland <daan.hoogl...@gmail.com>
wrote:

> I am a fan of convention and think everyone should have some. Strict
> enforcing of non-functionals, I'm not to big on. I see what you want to
> achieve here but am reluctant not to -1 any strictness. If we do it with a
> post-commit-hook (or if such a thing exists a post-merge-hook) we will
> allow for code to be altered after being approved, won't we?
>
> On Fri, Sep 29, 2017 at 9:44 AM, Marc-Aurèle Brothier <ma...@exoscale.ch>
> wrote:
>
> > Hi everyone,
> >
> > Would you think it's worth tightening the checkstyle rules and run a
> > reformatting pass on the code base to align everything slowly? I know it
> > will hide changes using the blame functionality, and would force people
> to
> > edit some PR. Is the trade-off worth it for you?
> >
> > The idea would be to add one by one those extra checkstyle rules, and do
> > the code change/reformat accordingly each time with one rule in one PR. A
> > new rule should first be accepted by the community before starting on the
> > PR to do the changes (otherwise it might be a lot of wasted time). When
> > merged, a new rule can be added.
> >
> > The code style that bothers me most right now for example is blocks
> without
> > braces, so my first proposal would be:
> >
> > 
> > 
> > 
> >
> > I have a lot more to add, but as said, one at a time ;-)
> >
> > The list of rules: http://checkstyle.sourceforge.net/checks.html
> > The current rules:
> > https://github.com/apache/cloudstack/blob/master/tools/
> > checkstyle/src/main/resources/cloud-style.xml
> >
> > What do you think?
> >
> > Marc-Aurèle
> >
>
>
>
> --
> Daan
>


Checkstyle / code style / reformatting

2017-09-29 Thread Marc-Aurèle Brothier
Hi everyone,

Do you think it's worth tightening the checkstyle rules and running a
reformatting pass on the code base to align everything slowly? I know it
will hide changes from the blame functionality, and would force people to
edit some PRs. Is the trade-off worth it for you?

The idea would be to add those extra checkstyle rules one by one, and do
the code change/reformat accordingly, each time with one rule in one PR. A
new rule should first be accepted by the community before starting on the
PR to do the changes (otherwise it might be a lot of wasted time). Once one
is merged, the next rule can be added.

The code style that bothers me most right now, for example, is blocks
without braces, so my first proposal would be the corresponding checkstyle
rule (NeedBraces):

    <module name="NeedBraces"/>

I have a lot more to add, but as said, one at a time ;-)

The list of rules: http://checkstyle.sourceforge.net/checks.html
The current rules:
https://github.com/apache/cloudstack/blob/master/tools/checkstyle/src/main/resources/cloud-style.xml

What do you think?

Marc-Aurèle


Re: Clean up of unused constants

2017-09-26 Thread Marc-Aurèle Brothier
+1 to sort them too

On Tue, Sep 26, 2017 at 4:13 PM, Daan Hoogland 
wrote:

> +1 Sigert, be my guest. We will run integration tests on your changes
> anyway, so low risk ;)
>
> On 2017/09/26 15:55, "Rafael Weingärtner" 
> wrote:
>
> IMO, if something is not used or if something does not work, it has to
> be removed or fixed.
>
> I am +1 for the removal of unused constants. Did you check if the value
> of these unused constants were being used somewhere? I mean, using the
> value without referencing the constant.
>
>
> On 9/26/2017 9:55 AM, Sigert GOEMINNE wrote:
> > Hi all,
> >
> > Am I allowed to remove all unused constants in ApiConstants.java?
> >
> > Kind regards,
> >
> > *Sigert Goeminne*
> > Software Development Engineer
> >
>
> --
> Rafael Weingärtner
>
>
>
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: New committers: Nathan Johnson and Marc-Aurèle Brothier

2017-09-22 Thread Marc-Aurèle Brothier
Thank you all! 
I'm happy to contribute and help push forward the project!

Cheers

On Fri, Sep 22, 2017 at 11:26 PM, Rene Moser <m...@renemoser.net> wrote:

> Congrats Nathan and Marc-Aurèle!
>
> René
>
> On 09/22/2017 06:31 PM, Rafael Weingärtner wrote:
> > The Project Management Committee (PMC) for Apache CloudStack has invited
> > Nathan Johnson and Marc-Aurèle Brothier to become committers and we are
> > pleased to announce that they have accepted.
> >
> > They have shown commitment to Apache CloudStack community, contributing
> > with PRs in a constant fashion. Moreover, they have proved great
> abilities
> > to interact with the community quite often in our mailing lists and Slack
> > channel trying to help people.
> >
> > Let´s congratulate and welcome Apache CloudStack’s two newest committers.
> >
> > --
> > Rafael Weingärtner
> >
>


Re: Need to ask for help again (Migration in cloudstack)

2017-09-05 Thread Marc-Aurèle Brothier
Hi Dmitriy,

I wrote the PR for live migration in CloudStack (PR 1709). We're using an
older version than upstream, so it's hard for me to fix the integration
test errors. All I can tell you is that you should first configure libvirt
correctly for migration. You can play with it by manually running virsh
commands to initiate the migration. Note that the networking part will not
work on the destination host if the migration is done manually, outside of
CloudStack.
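
As an illustration, a manual live migration with non-shared storage over
plain TCP looks roughly like this (the VM name and hostname are
placeholders):

    virsh migrate --live --persistent --copy-storage-all \
        i-2-42-VM qemu+tcp://destination-host/system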

Marc-Aurèle

On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
dmitriy.kaluzh...@gmail.com> wrote:

> Hello,
> That's what I want, thank you!
> I want to have Live migration on KVM with non-shared storages.
> As I understood, migration is performed by LibVirt.
>
> 2017-09-01 17:04 GMT+03:00 Simon Weller :
>
> > Dmitriy,
> >
> > Can you give us a bit more information about what you're trying to do?
> > If you're looking for live migration on non shared storage with KVM,
> there
> > is an outstanding PR  in the works to support that:
> >
> > https://github.com/apache/cloudstack/pull/1709
> >
> > - Si
> >
> >
> > 
> > From: Rajani Karuturi 
> > Sent: Friday, September 1, 2017 4:07 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Need to ask for help again (Migration in cloudstack)
> >
> > You might start with this commit
> > https://github.com/apache/cloudstack/commit/
> 21ce3befc8ea9e1a6de449a21499a5
> > 0ff141a183
> >
> >
> > and storage_motion_supported column in hypervisor_capabilities
> > table.
> >
> > Thanks,
> >
> > ~ Rajani
> >
> > http://cloudplatform.accelerite.com/
> >
> > On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
> > (dmitriy.kaluzh...@gmail.com) wrote:
> >
> > Hello!
> > I contacted this mail before, but I wasn't subscribed to mailing
> > list.
> > The reason I'm contacting you - I need advise.
> > During last week I was learning cloudstack code to find where is
> > implemented logic of this statements I found in cloudstack
> > documentation:
> > "(KVM) The VM must not be using local disk storage. (On
> > XenServer and
> > VMware, VM live migration with local disk is enabled by
> > CloudStack support
> > for XenMotion and vMotion.)
> >
> > (KVM) The destination host must be in the same cluster as the
> > original
> > host. (On XenServer and VMware, VM live migration from one
> > cluster to
> > another is enabled by CloudStack support for XenMotion and
> > vMotion.)"
> >
> > I made up a long road through source code but still can't see
> > it. If you
> > can give me any advise - it will be amazing.
> > Anyway, thank you.
> >
> > --
> >
> > *Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
> >
>
>
>
> --
>
>
>
> *--С уважением,Дмитрий Калюжный+38 (073) 101 14 73*
>


Re: [DISCUSS] Closing old Pull Requests on Github

2017-07-24 Thread Marc-Aurèle Brothier
Hi Wido,

I have one comment on this topic. Some of those PRs are lying there because
no one took the time to merge them (I have a couple like that), since they
were not very important (I think that's the reason), fixing only a small
glitch or improving an output. If we start to close PRs because there isn't
activity on them, we should be sure to treat all PRs equally in terms of
timeline from when they arrive. Using labels to sort them and make
filtering easier would also be important, IMO. Today there are 200+ PRs but
we cannot filter them and have little idea of their status, except by
checking whether they are "mergeable". This should not conflict with the
Jira tickets & discussions that happened previously.

Marco

On Mon, Jul 24, 2017 at 10:22 AM, Wido den Hollander  wrote:

> Hi,
>
> While writing this e-mail we have 191 Open Pull requests [0] on Github and
> that number keeps hovering around ~200.
>
> We have a great number of PRs being merged, but a lot of code is old and
> doesn't even merge anymore.
>
> My proposal would be that we close all PRs which didn't see any activity
> in the last 3 months (Jun, July and May 2017) with the following message:
>
> "This Pull Request is being closed for not seeing any activity since May
> 2017.
>
> The CloudStack project is in a transition from the Apache Foundation's Git
> infrastructure to Github and due to that not all PRs we able to be tested
> and/or merged.
>
> It's not our intention to say that we don't value the PR, but it's a way
> to get a better overview of what needs to be merged.
>
> If you think closing this PR is a mistake, please add a comment and
> re-open the PR! If you do that, could you please make sure that the PR
> merges against the branch it was submitted against?
>
> Thank you very much for your understanding and cooperation!"
>
> How does that sound?
>
> Wido
>
>
> [0]: https://github.com/apache/cloudstack/pulls
>


Re: ***Cloudstack EUUG - CFP***

2017-06-27 Thread Marc-Aurèle Brothier
Hi Giles,

I could give a talk on live migration with KVM, which we have had running
here for a year. That might help get the feature fixed and merged into CS.
Let me know.
https://github.com/apache/cloudstack/pull/1709

Marc-Aurèle

On Thu, Jun 22, 2017 at 1:41 PM, Wido den Hollander  wrote:

> Hi,
>
> Registered and arranged travel!
>
> I can give a short IPv6 talk (again?). Aug 17th we should be running the
> 4.10 code in production so I can share some details en demos?
>
> Wido
>
> > Op 22 juni 2017 om 11:42 schreef Giles Sirett <
> giles.sir...@shapeblue.com>:
> >
> >
> > All
> > The next Cloudstack European User Group will be in 17 August in London
> > I'm just putting the agenda together at the moment. If anybodys got a
> cloudstack-related talk they'd like to present then ping me.
> >
> > For those that don't fancy speaking, it would be great to see as many of
> you there as possible
> >
> > https://www.eventbrite.co.uk/e/cloudstack-european-user-
> group-tickets-35565783215?aff=es2
> >
> >
> > Kind regards
> > Giles
> >
> >
> > giles.sir...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
>


Re: nvidia tesla grid card implementation (kvm)

2017-05-24 Thread Marc-Aurèle Brothier
Hi Sven,

We implemented a PCI manager for KVM to pass through any PCI device to a
VM. We use that to provision VMs with Tesla P100 GPU cards, 1 to 4 cards
per VM. My code is based on CS 4.4.2 and not the upstream version, as we
are no longer following upstream. But I completely got rid of the Xen
implementation because the design could only fit the XenServer way of
dealing with GPU cards. It's not something that I can push upstream. Here
is an example of the XML definition of a VM which has 4 GPU cards; it
contains one <hostdev> entry per card, along these lines (the PCI addresses
below are illustrative):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</hostdev>
<!-- ... three more <hostdev> entries follow, one per additional GPU card ... -->
...


You have to map the exact source pci bus of the Nvidia cards from the host
in the XML.

It's a pass-through, meaning that the full card is given to the VM;
therefore it cannot be shared across multiple VMs.
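
If the <hostdev> entry is not marked managed='yes', the card also has to be
detached from the host driver first, e.g. (the device name is a
placeholder):

    virsh nodedev-detach pci_0000_04_00_0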

Hope that helps

Marco

On Mon, May 22, 2017 at 4:22 PM, Vogel, Sven  wrote:

> Hi Nitin,
>
> thanks for your answer. I mean the xml file for the VM which will always
> be defined from cloudstack for the kvm hyervisor when we start a machine.
>
> https://access.redhat.com/documentation/en-US/Red_Hat_
> Enterprise_Linux/7/html/Virtualization_Deployment_and_
> Administration_Guide/chap-Guest_virtual_machine_device_
> configuration.html#sect-device-GPU
>
> https://griddownloads.nvidia.com/flex/GRID_4_2_Support_Matrix.pdf
>
> e.g.
>
> <device>
>   <name>pci_0000_02_00_0</name>
>   <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path>
>   <parent>pci_0000_00_03_0</parent>
>   <driver>
>     <name>pci-stub</name>
>   </driver>
>   <capability type='pci'>
>     <domain>0</domain>
>     <bus>2</bus>
>     <slot>0</slot>
>     <function>0</function>
>     <product>GK106GL [Quadro K4000]</product>
>     <vendor>NVIDIA Corporation</vendor>
>   </capability>
> </device>
>
> I think xen should be supported. What do you think?
>
> Sven Vogel
>
> -Ursprüngliche Nachricht-
> Von: Nitin Kumar Maharana [mailto:nitinkumar.mahar...@accelerite.com]
> Gesendet: Montag, 22. Mai 2017 16:03
> An: dev@cloudstack.apache.org
> Betreff: Re: nvidia tesla grid card implementation (kvm)
>
> Hi Sven,
>
> Currently the K1 and K2 cards are only supported in XenServer.
> For other cards we have to add support even for XenServer and other
> hypervisors.
>
> I didn’t understand what is xml defined files you are talking about. Can
> you please elaborate a little bit?
>
>
> Thanks,
> Nitin
> > On 22-May-2017, at 5:46 PM, Vogel, Sven 
> wrote:
> >
> > Hi,
> >
> > i saw that in cloudstack nvidia k1 and k2 are implemented. Now there are
> new cards on market, Tesla M60, Tesla M10 and Tesla M6.
> >
> > It there anybody who can implement it in the xml defined files? How can
> we help?
> >
> > Thanks for help
> >
> > Sven Vogel
> > Kupper Computer GmbH
>
>
>
>
> DISCLAIMER
> ==
> This e-mail may contain privileged and confidential information which is
> the property of Accelerite, a Persistent Systems business. It is intended
> only for the use of the individual or entity to which it is addressed. If
> you are not the intended recipient, you are not authorized to read, retain,
> copy, print, distribute or use this message. If you have received this
> communication in error, please notify the sender and delete all copies of
> this message. Accelerite, a Persistent Systems business does not accept any
> liability for virus infected mails.
>


Re: [DISCUSS] Config Drive: Using the OpenStack format?

2017-05-21 Thread Marc-Aurèle Brothier
For me too, I don't see the point of duplicating everything just to put
"cloudstack" in the name. To keep a way to tell that it's based on CloudStack,
I would add a file "cloud_provider.txt" or "cloud_version.txt" containing
"CloudStack X.Y.Z".

That way, in case things change at some point in time, we can make cloud-init
parse this particular file to know how to handle the other files.

Marco

> On 20 May 2017, at 20:54, Simon Weller <swel...@ena.com.INVALID> wrote:
> 
> Yes, I don't see any point in reinventing the wheel.
> 
> Simon Weller/615-312-6068
> 
> -Original Message-
> From: Wido den Hollander [w...@widodh.nl]
> Received: Saturday, 20 May 2017, 8:45AM
> To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
> Subject: Re: [DISCUSS] Config Drive: Using the OpenStack format?
> 
> Just to check all the +1's, they are about using the OpenStack format. Right?
> 
> Config Drive will be there no matter what.
> 
> Wido
> 
>> Op 19 mei 2017 om 19:45 heeft Kris Sterckx <kris.ster...@nuagenetworks.net> 
>> het volgende geschreven:
>> 
>> FYI
>> 
>> Slides are here :
>> https://www.slideshare.net/2000monkeys/apache-cloudstack-collab-miami-user-data-alternatives-to-the-vr
>> 
>> Thanks
>> 
>> - Kris
>> 
>>> On 19 May 2017 at 12:58, Wei ZHOU <ustcweiz...@gmail.com> wrote:
>>> 
>>> gd idea
>>> 
>>> 2017-05-19 15:33 GMT+02:00 Marc-Aurèle Brothier <ma...@exoscale.ch>:
>>> 
>>>> Hi Widoo,
>>>> 
>>>> That sounds like a pretty good idea in my opinion. +1 for adding it
>>>> 
>>>> Marco
>>>> 
>>>> 
>>>>> On 19 May 2017, at 15:15, Wido den Hollander <w...@widodh.nl> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> Yesterday at ApacheCon Kris from Nuage networks gave a great
>>>> presentation about alternatives for userdata from the VR: Config Drive
>>>>> 
>>>>> In short, a CD-ROM/ISO attached to the Instance containing the
>>>> meta/userdata instead of having the VR serve it.
>>>>> 
>>>>> The outstanding PR [0] uses it's own format on the ISO while cloud-init
>>>> already has support for config drive [1].
>>>>> 
>>>>> This format uses 'openstack' in the name, but it seems to be in
>>>> cloud-init natively and well supported.
>>>>> 
>>>>> I started the discussion yesterday during the talk and thought to take
>>>> it to the list.
>>>>> 
>>>>> My opinion is that we should use the OpenStack format for the config
>>>> drive:
>>>>> 
>>>>> - It's already in cloud-init
>>>>> - Easier to templates to be used on CloudStack
>>>>> - Easier adoption
>>>>> 
>>>>> We can always write a file like "GENERATED_BY_APACHE_CLOUDSTACK" or
>>>> something on the ISO.
>>>>> 
>>>>> We can also symlink the 'openstack' directory to a directory called
>>>> 'cloudstack' on the ISO.
>>>>> 
>>>>> Does anybody else have a opinion on this one?
>>>>> 
>>>>> Wido
>>>>> 
>>>>> [0]: https://github.com/apache/cloudstack/pull/2097
>>>>> [1]: http://cloudinit.readthedocs.io/en/latest/topics/
>>>> datasources/configdrive.html#version-2
>>>> 
>>>> 
>>> 



Re: [DISCUSS] Config Drive: Using the OpenStack format?

2017-05-19 Thread Marc-Aurèle Brothier
Hi Wido,

That sounds like a pretty good idea in my opinion. +1 for adding it

Marco


> On 19 May 2017, at 15:15, Wido den Hollander  wrote:
> 
> Hi,
> 
> Yesterday at ApacheCon Kris from Nuage networks gave a great presentation 
> about alternatives for userdata from the VR: Config Drive
> 
> In short, a CD-ROM/ISO attached to the Instance containing the meta/userdata 
> instead of having the VR serve it.
> 
> The outstanding PR [0] uses it's own format on the ISO while cloud-init 
> already has support for config drive [1].
> 
> This format uses 'openstack' in the name, but it seems to be in cloud-init 
> natively and well supported.
> 
> I started the discussion yesterday during the talk and thought to take it to 
> the list.
> 
> My opinion is that we should use the OpenStack format for the config drive:
> 
> - It's already in cloud-init
> - Easier to templates to be used on CloudStack
> - Easier adoption
> 
> We can always write a file like "GENERATED_BY_APACHE_CLOUDSTACK" or something 
> on the ISO.
> 
> We can also symlink the 'openstack' directory to a directory called 
> 'cloudstack' on the ISO.
> 
> Does anybody else have a opinion on this one?
> 
> Wido
> 
> [0]: https://github.com/apache/cloudstack/pull/2097
> [1]: 
> http://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html#version-2



Re: Very slow Virtual Router provisioning with 4.9.2.0

2017-05-03 Thread Marc-Aurèle Brothier
Hi Wido,

Well, for us it's not a version problem, it's simply a design problem. The
VR is very problematic during any upgrade of CloudStack (which I perform
almost every week on our platform); the same goes for the secondary storage
VMs, which scan all templates. We've planned on our roadmap to get rid of
the system VMs. The VR is really a SPoF.

On Tue, May 2, 2017 at 7:57 PM, Wido den Hollander  wrote:

> Hi,
>
> Last night I upgraded a CloudStack 4.5.2 setup to 4.9.2.0. All went well,
> but the VR provisioning is terribly slow which causes all kinds of problems.
>
> The vr_cfg.sh and update_config.py scripts start to run. Restart dnsmasq,
> add metadata, etc.
>
> But for just 1800 hosts this can take up to 2 hours and that causes
> timeouts in the management server and other problems.
>
> 2 hours is just very, very slow. So I am starting to wonder if something
> is wrong here.
>
> Did anybody else see this?
>
> Running Basic Networking with CloudStack 4.9.2.0
>
> Wido
>


Re: [VOTE] Apache Cloudstack should join the gitbox experiment.

2017-04-11 Thread Marc-Aurèle Brothier
+1

On Tue, Apr 11, 2017 at 7:38 AM, Wido den Hollander  wrote:

> +1
>
> > Op 10 april 2017 om 18:22 schreef Daan Hoogland  >:
> >
> >
> > In the Apache foundation an experiment has been going on to host
> > mirrors of Apache project on github with more write access then just
> > to the mirror-bot. For those projects committers can merge on github
> > and put labels on PRs.
> >
> > I move to have the project added to the gitbox experiment
> > please cast your votes
> >
> > +1 CloudStack should be added to the gitbox experiment
> > +-0 I don't care
> > -1 CloudStack shouldn't be added to the gitbox experiment and give your
> reasons
> >
> > thanks,
> > --
> > Daan
>


Re: Handling of DB migrations on forks

2017-02-21 Thread Marc-Aurèle Brothier
Jeff, I do wonder the same thing about the ORM... I have hit the ORM
limitations in many places now, not being able to do joins on the same
table, specifically for the capacity checks, and you can see many hand-made
SQL queries in that part of the code. I think the views came into the
picture for the same reason.

Daan, the project maintainers should enforce that. I also posted another
finding that the upgrade paths are not identical due to the order in which
upgrade files are executed, see
(https://github.com/apache/cloudstack/pull/1768)

On Tue, Feb 21, 2017 at 10:31 AM, Jeff Hair <j...@greenqloud.com> wrote:

> Something like Liquibase would be nice. Hibernate might be even better.
> Even idempotent migrations would be a step in the right direction. Perhaps
> reject any migration changes that aren't INSERT IGNORE, DROP IF EXISTS,
> etc?
>
> I'm not sure why the DAO was originally hand-rolled. Perhaps the original
> developers didn't think any ORM on the market met their needs. I would love
> to leave DB migrations almost entirely behind. I believe Hibernate is smart
> enough to construct dynamic migrations when fields are added and removed
> from tables, right?
>
> *Jeff Hair*
> Technical Lead and Software Developer
>
> Tel: (+354) 415 0200
> j...@greenqloud.com
> www.greenqloud.com
>
> On Tue, Feb 21, 2017 at 9:27 AM, Daan Hoogland <
> daan.hoogl...@shapeblue.com>
> wrote:
>
> > Marc-Aurele, you are totally right and people agree with you but no one
> > seems to give this priority
> >
> > daan.hoogl...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> > Sent: 21 February 2017 10:04
> > To: dev@cloudstack.apache.org
> > Subject: Re: Handling of DB migrations on forks
> >
> > IMO the database changes should be idempotent as much as possible with
> > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS". For other things like
> > altering a table, it's more complicated to achieve that in pure SQL.
> > A good call would be to integrate http://www.liquibase.org/ to manage
> the
> > schema and changes in a more descriptive way which allows
> branches/merges.
> >
> > On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland <daan.hoogl...@gmail.com>
> > wrote:
> >
> > > Good strategy and I would make that not a warning but a fatal, as the
> > > resulting ACS version will probably not work.
> > >
> > > On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU <ustcweiz...@gmail.com>
> > wrote:
> > > > Then you have to create your own branch forked from 4.10.0
> > > >
> > > > In our branch, I moved some table changes (eg ALTER TABLE, CREATE
> > > > TABLE) from schema-.sql to
> > > > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > > > If SQLException is throwed, then show a warning message instead
> > > > upgrade interruption..
> > > > By this way, the database will not be broken in the upgrade or fresh
> > > > installation.
> > > >
> > > > -Wei
> > > >
> > > >
> > > > 2017-02-14 11:52 GMT+01:00 Jeff Hair <j...@greenqloud.com>:
> > > >
> > > >> Hi all,
> > > >>
> > > >> Many people in the CS community maintain forks of CloudStack, and
> > > >> might have implemented features or bug fixes long before they get
> > > >> into
> > > mainline.
> > > >> I'm curious as to how people handle database migrations with their
> > > forks.
> > > >> To make a DB migration, the CS version must be updated. If a
> > > >> developer
> > > adds
> > > >> a migration to their fork on say, version 4.8.5. Later, they decide
> > > >> to upgrade to 4.10.0 which has their migration in the schema
> > > >> upgrade to 4.10.0.
> > > >>
> > > >> How do people handle this? As far as I know, CS will crash on the
> > > >> DB upgrade due to SQL errors. Do people just sanitize migrations
> > > >> when they pull from downstream or somehting?
> > > >>
> > > >> Jeff
> > > >>
> > >
> > >
> > >
> > > --
> > > Daan
> > >
> >
>


Re: Handling of DB migrations on forks

2017-02-21 Thread Marc-Aurèle Brothier
IMO the database changes should be as idempotent as possible, using
"CREATE OR REPLACE VIEW ..." and "DROP ... IF EXISTS". For other things,
like altering a table, it's more complicated to achieve that in pure SQL.
A good option would be to integrate http://www.liquibase.org/ to manage the
schema and changes in a more descriptive way that allows branches/merges.
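
As an illustration (the names are made up, not taken from the CloudStack
schema), this style of statement can safely be re-run during an upgrade:

    CREATE OR REPLACE VIEW `capacity_summary_view` AS
        SELECT host_id, SUM(used_capacity) AS used
        FROM host_capacity_example
        GROUP BY host_id;

    DROP TABLE IF EXISTS `obsolete_example_table`;

    -- whereas a plain ALTER TABLE fails the second time it runs:
    -- ALTER TABLE `vm_instance_example` ADD COLUMN `new_flag` INT;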

On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
wrote:

> Good strategy and I would make that not a warning but a fatal, as the
> resulting ACS version will probably not work.
>
> On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU  wrote:
> > Then you have to create your own branch forked from 4.10.0
> >
> > In our branch, I moved some table changes (eg ALTER TABLE, CREATE TABLE)
> > from schema-.sql
> > to engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > If SQLException is throwed, then show a warning message instead upgrade
> > interruption..
> > By this way, the database will not be broken in the upgrade or fresh
> > installation.
> >
> > -Wei
> >
> >
> > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> >
> >> Hi all,
> >>
> >> Many people in the CS community maintain forks of CloudStack, and might
> >> have implemented features or bug fixes long before they get into
> mainline.
> >> I'm curious as to how people handle database migrations with their
> forks.
> >> To make a DB migration, the CS version must be updated. If a developer
> adds
> >> a migration to their fork on say, version 4.8.5. Later, they decide to
> >> upgrade to 4.10.0 which has their migration in the schema upgrade to
> >> 4.10.0.
> >>
> >> How do people handle this? As far as I know, CS will crash on the DB
> >> upgrade due to SQL errors. Do people just sanitize migrations when they
> >> pull from downstream or somehting?
> >>
> >> Jeff
> >>
>
>
>
> --
> Daan
>


Re: [QUESTION] Upgrade path to JDK8

2017-02-21 Thread Marc-Aurèle Brothier
No, there isn't any issue apart from getting the bugs & fixes of the JDK
you're running. You can compile with a JDK 1.8 as long as you keep the
target bytecode version at 1.7.
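
For instance, something along these lines keeps the bytecode at the 1.7
level even when building with JDK 8 (an illustrative command, not the
official build recipe):

    mvn clean install -Dmaven.compiler.source=1.7 -Dmaven.compiler.target=1.7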

On Tue, Feb 21, 2017 at 8:15 AM, Wei ZHOU <ustcweiz...@gmail.com> wrote:

> Marco,
>
> Good point. Is there any issue if we compile code with jdk8 but run it on
> jdk7 (systemvm) ?
>
> -Wei
>
> 2017-02-21 7:43 GMT+01:00 Marc-Aurèle Brothier <ma...@exoscale.ch>:
>
> > There's a list of compatibility issues between Java 7 & Java 8 here
> > http://www.oracle.com/technetwork/java/javase/8-
> > compatibility-guide-2156366.
> > html
> >
> > The main problem I would see in two system communicating while running
> > different Java version is the way they handle serialization and
> > de-serialization of objects which had been a problem in the past between
> > some Java versions. AFAIK we're using JSON for that now, so if the code
> > already compiles with Java8, it should not be a problem.
> >
> > On Mon, Feb 20, 2017 at 10:36 PM, Wei ZHOU <ustcweiz...@gmail.com>
> wrote:
> >
> > > We tested 4.7.1+systemd patches as well, it also works fine.
> > >
> > > -Wei
> > >
> > > 2017-02-20 22:34 GMT+01:00 Wei ZHOU <ustcweiz...@gmail.com>:
> > >
> > > > @Will and @Syed, I build the packages of 4.9.2+systemd patches on
> > ubuntu
> > > > 16.04 (openjdk 8).
> > > > Then install the packages to management server and kvm hosts (all are
> > > > ubuntu 16.04 with openjdk8).
> > > > The systemvm template is 4.6 with openjdk7.
> > > >
> > > > cpvm and ssvm work fine.
> > > >
> > > > As there is no java process in VR, so I did not check, VR should not
> be
> > > > impacted.
> > > >
> > > > -Wei
> > > >
> > > > 2017-02-20 22:16 GMT+01:00 Pierre-Luc Dion <pd...@cloudops.com>:
> > > >
> > > >> That's quite interesting Chiradeep!
> > > >>
> > > >> so I could do something like this I guest:
> > > >>
> > > >> mvn clean install
> > > >>
> > > >> and then this one to build the systemvm.iso:
> > > >> mvn -Psystemvm -source 1.7 -target 1.7 install
> > > >>
> > > >>
> > > >> I'll give it a try! but for now, I'm worried about existing VR, they
> > > must
> > > >> continue to work while running on jdk7.  newer VPC would be ok to
> run
> > > with
> > > >> jdk8.  but we need to make sure while upgrading the
> management-server
> > we
> > > >> are not in the obligation to upgrade VR's.
> > > >>
> > > >> For sure it is required for strongswan + JDK8 to ugprade the VR,
> but a
> > > >>  existing VR should remain usable for port forwarding, vm creation
> and
> > > >> such...
> > > >>
> > > >> I'll post my finding...
> > > >>
> > > >> Thanks !
> > > >>
> > > >>
> > > >>
> > > >> On Mon, Feb 20, 2017 at 3:59 PM, Chiradeep Vittal <
> > chirade...@gmail.com
> > > >
> > > >> wrote:
> > > >>
> > > >> > You can build the system vm with  -source 1.7 -target 1.7
> > > >> > Also unless you are using Java8 features (lambda) the classfiles
> > > >> produced
> > > >> > by javac 8 should work in a 1.7 JVM
> > > >> >
> > > >> > Sent from my iPhone
> > > >> >
> > > >> > > On Feb 20, 2017, at 11:51 AM, Will Stevens <
> wstev...@cloudops.com
> > >
> > > >> > wrote:
> > > >> > >
> > > >> > > yes, that is what I was expecting.  which is why I was asking
> > about
> > > >> Wei's
> > > >> > > setup because he seems to have worked around that problem.  Or
> he
> > > has
> > > >> a
> > > >> > > custom SystemVM template running with both JDK7 and JDK8.
> > > >> > >
> > > >> > > *Will STEVENS*
> > > >> > > Lead Developer
> > > >> > >
> > > >> > > <https://goo.gl/NYZ8KK>
> > > >> > >
> > > >> > >> On Mon, Feb 20, 2017 at 2:20 PM, Syed Ahmed <
> sah...@cloudops.com
> > >

Re: [QUESTION] Upgrade path to JDK8

2017-02-20 Thread Marc-Aurèle Brothier
There's a list of compatibility issues between Java 7 & Java 8 here:
http://www.oracle.com/technetwork/java/javase/8-compatibility-guide-2156366.html

The main problem I would see with two systems communicating while running
different Java versions is the way they handle serialization and
de-serialization of objects, which has been a problem in the past between
some Java versions. AFAIK we're using JSON for that now, so if the code
already compiles with Java 8, it should not be a problem.

On Mon, Feb 20, 2017 at 10:36 PM, Wei ZHOU  wrote:

> We tested 4.7.1+systemd patches as well, it also works fine.
>
> -Wei
>
> 2017-02-20 22:34 GMT+01:00 Wei ZHOU :
>
> > @Will and @Syed, I build the packages of 4.9.2+systemd patches on ubuntu
> > 16.04 (openjdk 8).
> > Then install the packages to management server and kvm hosts (all are
> > ubuntu 16.04 with openjdk8).
> > The systemvm template is 4.6 with openjdk7.
> >
> > cpvm and ssvm work fine.
> >
> > As there is no java process in VR, so I did not check, VR should not be
> > impacted.
> >
> > -Wei
> >
> > 2017-02-20 22:16 GMT+01:00 Pierre-Luc Dion :
> >
> >> That's quite interesting Chiradeep!
> >>
> >> so I could do something like this I guest:
> >>
> >> mvn clean install
> >>
> >> and then this one to build the systemvm.iso:
> >> mvn -Psystemvm -source 1.7 -target 1.7 install
> >>
> >>
> >> I'll give it a try! but for now, I'm worried about existing VR, they
> must
> >> continue to work while running on jdk7.  newer VPC would be ok to run
> with
> >> jdk8.  but we need to make sure while upgrading the management-server we
> >> are not in the obligation to upgrade VR's.
> >>
> >> For sure it is required for strongswan + JDK8 to ugprade the VR, but a
> >>  existing VR should remain usable for port forwarding, vm creation and
> >> such...
> >>
> >> I'll post my finding...
> >>
> >> Thanks !
> >>
> >>
> >>
> >> On Mon, Feb 20, 2017 at 3:59 PM, Chiradeep Vittal  >
> >> wrote:
> >>
> >> > You can build the system vm with  -source 1.7 -target 1.7
> >> > Also unless you are using Java8 features (lambda) the classfiles
> >> produced
> >> > by javac 8 should work in a 1.7 JVM
> >> >
> >> > Sent from my iPhone
> >> >
> >> > > On Feb 20, 2017, at 11:51 AM, Will Stevens 
> >> > wrote:
> >> > >
> >> > > yes, that is what I was expecting.  which is why I was asking about
> >> Wei's
> >> > > setup because he seems to have worked around that problem.  Or he
> has
> >> a
> >> > > custom SystemVM template running with both JDK7 and JDK8.
> >> > >
> >> > > *Will STEVENS*
> >> > > Lead Developer
> >> > >
> >> > > 
> >> > >
> >> > >> On Mon, Feb 20, 2017 at 2:20 PM, Syed Ahmed 
> >> > wrote:
> >> > >>
> >> > >> The problem is that systemvm.iso is built with java 8 whereas java
> on
> >> > the
> >> > >> VR is java 7
> >> > >>> On Mon, Feb 20, 2017 at 13:20 Will Stevens  >
> >> > wrote:
> >> > >>>
> >> > >>> Did it work after resetting a VPC or when blowing away the SSVM or
> >> > >> CPVM?  I
> >> > >>> would not expect the SSVM or the CPVM to come up if the management
> >> > server
> >> > >>> was built with JDK8 and the system vm template is only using JDK7.
> >> Can
> >> > >> you
> >> > >>> confirm?​
> >> > >>>
> >> > >>> *Will STEVENS*
> >> > >>> Lead Developer
> >> > >>>
> >> > >>> 
> >> > >>>
> >> >  On Mon, Feb 20, 2017 at 1:15 PM, Wei ZHOU  >
> >> > wrote:
> >> > 
> >> >  We've tested management server 4.7.1 with ubuntu 16.04/openjdk8
> and
> >> >  systemvm 4.6 with debian7/openjdk7.
> >> >  The systemvms (ssvm, cpvm) work fine.
> >> > 
> >> >  I agree we need consider the openjdk upgrade in systemvm
> template.
> >> > 
> >> >  -Wei
> >> > 
> >> >  2017-02-20 18:15 GMT+01:00 Will Stevens :
> >> > 
> >> > > Regarding my question. Is it because of the version of Java that
> >> the
> >> > > systemvm.iso is built on?
> >> > >
> >> > > On Feb 20, 2017 11:58 AM, "Will Stevens"  >
> >> > >>> wrote:
> >> > >
> >> > >> A question that is hidden in here is:
> >> > >> - Why does the JDK version on the management server have to
> match
> >> > >> the
> >> >  JDK
> >> > >> version of the System VM?
> >> > >>
> >> > >> *Will STEVENS*
> >> > >> Lead Developer
> >> > >>
> >> > >> 
> >> > >>
> >> > >> On Mon, Feb 20, 2017 at 11:50 AM, Pierre-Luc Dion <
> >> > >>> pd...@cloudops.com>
> >> > >> wrote:
> >> > >>
> >> > >>> Hi,
> >> > >>>
> >> > >>> In the context of deployment of CloudStack with VPCs,
> >> > >>> What will happen to a cloud when upgrading to 4.10 that now
> use
> >> > >>> jdk8?
> >> > >>>
> >> > >>> Does upgrading the management-server to 4.10 jdk8 and 

Code base evolution over time of cloudstack

2017-01-12 Thread Marc-Aurèle Brothier
Hi everyone,

I thought you might like to have a graphical view of how cloudstack code
has evolved over time. I ran the script
https://github.com/erikbern/git-of-theseus on the project on December 15th
2016.

Cheers,
Marc-Aurèle


Jira not showing issues if not logged in

2016-11-10 Thread Marc-Aurèle Brothier
Hi,

There seems to be a bug or an incorrect setting in the Jira CS project. As
a public user I can only see 6 open issues, and the latest update on an
issue shown on the project dashboard is from February 2016. If I log in, I
get a correct project dashboard with 1800+ open issues.


Re: PR Backlog

2016-10-28 Thread Marc-Aurèle Brothier
I think it's a good idea to use the labels for triage and priority. It will
make the PR and issue view much simpler on Github.
If labels aren't used on Github, does that mean that every PR/issue on
Github must have a JIRA ticket, and the triage/sorting is done only there?

Marc-Aurèle

On Wed, Oct 26, 2016 at 7:18 PM, Paul Angus 
wrote:

> All,
>
> Now that we're getting some testing momentum going, I was thinking that we
> need a way to categorise and prioritise the PR queue.
> There are a limited number of labels available for use to apply to the PRs
> within Github as it is,  but we could go through and highlight bug fixes
> and prioritise them and mark ones with outstanding questions against them...
> We also need the PRs rebased as some are pretty old.
>
> Thoughts/flames?
>
>
> Kind regards,
>
> Paul Angus
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: Question about JavaScript validators in UI

2016-10-26 Thread Marc-Aurèle Brothier
From a quick look at the code, the condition is the problem: the validator
should only fail when the value is neither a valid IPv4 nor a valid IPv6
CIDR, so the two negated checks need to be combined with "&&" (with a
single "|", every value fails, since nothing can pass both validators at
once):

$.validator.addMethod("ipv46cidr", function(value, element) {
    if (!$.validator.methods.ipv4cidr.call(this, value, element) &&
        !$.validator.methods.ipv6cidr.call(this, value, element))
        return false;

    return true;
}, "The specified IPv4/IPv6 CIDR is invalid.");

On Wed, Oct 26, 2016 at 4:37 PM, Wido den Hollander  wrote:

> So my JS skills are way to low, but I tried this, but it doesn't seem to
> work:
>
> $.validator.addMethod("ipv46cidr", function(value, element) {
> if (!$.validator.methods.ipv4cidr.call(this, value, element) |
> !$.validator.methods.ipv6cidr.call(this, value, element))
> return false;
>
> return true;
> }, "The specified IPv4/IPv6 CIDR is invalid.");
>
> What am I missing here?
>
> Wido
>
> > Op 23 oktober 2016 om 9:37 schreef Rohit Yadav <
> rohit.ya...@shapeblue.com>:
> >
> >
> > Hi Wido,
> >
> >
> > Yes, you can add a new validator that can validator that the provided
> address is either ipv4 or ipv6, here:
> >
> > https://github.com/apache/cloudstack/blob/master/ui/
> scripts/sharedFunctions.js#L2327
> >
> >
> > Give the validator any appropriate name, and use it in the network.js
> code replacing the currently defined validator with yours.
> >
> >
> > Regards.
> >
> > 
> > From: Wido den Hollander 
> > Sent: 21 October 2016 17:33:29
> > To: dev@cloudstack.apache.org
> > Subject: Question about JavaScript validators in UI
> >
> > Hi,
> >
> > While working on the IPv6 for Basic Networking I'm at the stage of the
> Security Groups.
> >
> > When entering a CIDR in the UI which is not IPv4 (eg ::/0) it will show:
> 'The specified IPv4 CIDR is invalid.'
> >
> > That's true, so looking in network.js I see this piece of code:
> >
> > 'cidr': {
> >   edit: true,
> >   label: 'label.cidr',
> >   isHidden: true,
> >   validation: {
> > ipv4cidr: true
> >   }
> >  },
> >
> > There is a ipv6cidr validation method as well. How can I modify the
> JavaScript in such a way that either a valid IPv4 OR IPv6 CIDR has to be
> entered?
> >
> > My JavaScript skills are rather low.
> >
> > Thanks!
> >
> > Wido
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
>