Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Wido den Hollander

> On 28 June 2017 at 11:10, Rajani Karuturi <raj...@apache.org> wrote:
> 
> 
> Yes, those shouldn't have been merged. We should have released
> faster and then merged.
> 
> Let's think of it as ours and us rather than theirs and those.
> 

True!

So let's see if we can get 4.10 out the door and get 4.11 out there faster.

Merge what needs to be done, test, and go for 4.11 and 4.12.

We might say that we don't merge everything for 4.11 and leave a few bits for
4.12.

Wido


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Rajani Karuturi
Yes, those shouldn't have been merged. We should have released
faster and then merged.

Let's think of it as ours and us rather than theirs and those.

~ Rajani

http://cloudplatform.accelerite.com/

On June 28, 2017 at 12:19 PM, Paul Angus
(paul.an...@shapeblue.com) wrote:

Those new PRs should not have been merged.

Those on the mailing list should respect the process and accept
that they will have to wait until code is unfrozen.


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Rajani Karuturi
I don't think creating a branch will help us release faster. It
will only make it worse, in my opinion.

If we can release faster, features will stay in the PR branch for
a short while and can be merged quickly.

~ Rajani

http://cloudplatform.accelerite.com/

On June 28, 2017 at 12:17 PM, Daan Hoogland
(daan.hoogl...@gmail.com) wrote:

I'm with Mike on this. Fixes go into the RC branch, features
don't, and that's a clearer line than we have now. Or we could
just keep RC'ing until one passes and keep working on stabilising
whichever branch we choose for that, allowing both features and
fixes.


RE: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Paul Angus
Those new PRs should not have been merged.

Those on the mailing list should respect the process and accept that they will
have to wait until code is unfrozen.

Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue

-----Original Message-----
From: Rajani Karuturi [mailto:raj...@apache.org] 
Sent: 28 June 2017 07:45
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Daan Hoogland
I'm with Mike on this. Fixes go into the RC branch, features
don't, and that's a clearer line than we have now. Or we could
just keep RC'ing until one passes and keep working on stabilising
whichever branch we choose for that, allowing both features and
fixes.

On Wed, Jun 28, 2017 at 8:40 AM, Tutkowski, Mike
<mike.tutkow...@netapp.com> wrote:

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Rajani Karuturi
Paul,

Which shows we are not actively following RCs. That PR was a
blocker for RC3 and was well discussed. That PR is a perfect
example of how we are not working as a community to release code.
It is a fix for a blocker which stayed open for more than 45 days.

If you look, until RC2 only blockers were merged. But since it has
taken a lot more time to fix the blockers, more PRs were merged on
request on the mailing list (and we don't even have people to
object). You can think of it as a combination of two releases due
to the time it has taken.

~ Rajani

http://cloudplatform.accelerite.com/

On June 28, 2017 at 12:06 PM, Paul Angus
(paul.an...@shapeblue.com) wrote:

Rajani,

I suspect that the fatigue with the 4.10 release testing that we
are seeing is due to the time it has taken to release it. And that
has been caused by new code going in, which has introduced new
bugs.

This was demonstrated in the last -1 from Kris. This change was
merged 10 days ago.

Kind regards,

Paul Angus



-----Original Message-----
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
Sent: 27 June 2017 01:25
To: dev@cloudstack.apache.org
Cc: Wido den Hollander <w...@widodh.nl>
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

I tend to agree with you here, Daan. I know the downside we’ve
discussed in the past is that overall community participation in
the RC process has dropped off when such a new branch is created
(since the community as a whole tends to focus more on the new
branch rather than on testing the RC and releasing it).

I believe we should do the following: As we approach the first
RC, we need to limit the number of PRs going into the branch (in
order to stabilize it). If we had a super duper array of
automated regression tests that ran against the code, then we
might be able to avoid this, but our automated test suite is not
extensive enough for us to do so.

As we approach the first RC, only blockers and trivial (e.g. text
changes) PRs should be permitted in. Once we cut the first RC,
create a new branch for ongoing dev work. In between RCs, we can
only allow in code related to blocker PRs (or trivial text
changes, as discussed before).

What do people think?
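The branch-per-RC flow described above can be sketched with git. All repository, branch, and tag names below are hypothetical, invented purely for illustration; the proposal itself does not prescribe any naming.

```shell
# Hypothetical sketch of the proposed flow; repo, branch, and tag
# names are invented for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email rm@example.org
git config user.name "Release Manager"

# Feature work lands on the development branch until the code freeze.
git commit -q --allow-empty -m "feature: last merge before freeze"

# Cutting the first RC: create the release branch and tag the RC.
git branch 4.10-RC
git tag 4.10.0.0-RC1 4.10-RC

# Development reopens on the main branch for the next version...
git commit -q --allow-empty -m "feature: first post-freeze merge"

# ...while only blocker fixes land on the RC branch between RCs.
git checkout -q 4.10-RC
git commit -q --allow-empty -m "fix: blocker for RC2"
git tag 4.10.0.0-RC2 4.10-RC
```

Under this sketch, the release branch only accumulates blocker fixes between RCs, while feature merges continue on the development branch without delaying the vote.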


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Tutkowski, Mike
Hi,

I personally still like the idea of a new branch being created right around the 
time we cut our first RC.

Even if people want to commit changes to the new branch, they should understand 
that that code won't be formally released until the pending RC is validated and 
released.

That being the case, I would think those who choose to commit to the new branch 
will have a vested interest in the RC going out, as well.

In any event, in addition to the current automated regression tests that are 
run, we still have a lot of tests that are not hooked into the build that are 
being run ad hoc (managed storage automated tests are an example). 
Additionally, we seem to have a lot of manual tests being run.

Until we can deliver a framework in which we have a very high percentage of the 
system covered by automated tests, there is really no way we should consider 
monthly releases.

I think we are still shooting for releases every four months, which seems fair 
given our current system.

If we enact some deadlines like a code freeze going forward, that should help. 
With only blocker PRs going into subsequent RCs, we should be able to avoid a 
lot of unnecessary spin.

I definitely want to point out that I appreciate everyone's time and effort. In 
particular, I want to be clear that it is not my intent to be critical of 
anyone who's been working in release management. My only goal with this chain 
of e-mails is to see if we can continue to improve the process.

Thanks, everyone!
Mike

> On Jun 27, 2017, at 11:14 PM, Rajani Karuturi <raj...@apache.org> wrote:
> 
> We can do a release every month as long as we have enough people
> actively participating in the release process.
> 
> We have people who want to have their code/features checked in.
> We very clearly do not have enough people working on
> releases/blockers. How many of us are testing/voting on releases
> or PRs? We have blockers in Jira with no one to fix them. We have
> PRs open for release blockers for more than a month with no one
> to test them.
> 
> I would ask everyone to start testing releases/PRs and voting on
> them actively.
> 
> We need people who can do the work. We already know what needs to
> be done, as outlined in the release principles wiki after long
> discussions on this list.
> 
> Whether we create a branch off the RC or continue on master won't
> change the current situation.
> 
> We, as a community, should commit to testing and releasing code.
> Principles and theory won't help.
> 
> Thanks,
> 
> ~ Rajani
> 
> http://cloudplatform.accelerite.com/
> 
> On June 27, 2017 at 9:43 PM, Rafael Weingärtner
> (rafaelweingart...@gmail.com) wrote:
> 
> +1 to what Paul said.
> IMHO, as soon as we start a release candidate to close a version, all
> merges should stop (period); the only exceptions should be PRs that
> address specific problems in the RC.
> I always thought that we had a protocol for that [1]; maybe for this
> version, we have not followed it?
> 
> [1]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up#ReleaseprinciplesforApacheCloudStack4.6andup-Preparingnewrelease:masterfrozen
> 
> On Tue, Jun 27, 2017 at 1:32 AM, Paul Angus
> <paul.an...@shapeblue.com>
> wrote:
> 
> Hi All,
> 
> From my viewpoint, 'we' have been the architects of our own
> downfall. Once a code freeze is in place, NO new features and NO
> enhancements should be going in. Once we're at an RC stage, NO new
> bug fixes other than for the blockers should be going in. That way
> the release gets out, and the next one can get going. If 4.10 had
> gone out in a timely fashion, then we'd probably be on 4.11 if not
> 4.12 by now, with all the new features AND all the new fixes in.
> 
> People sliding new changes/bug fixes/enhancements in are not making
> the product better; they're stopping progress. As we can clearly see
> here.
> 
> Kind regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com
> www.shapeblue.com ( http://www.shapeblue.com )
> 53 Chandos Place, Covent Garden, London WC2N 4HSUK
> @shapeblue
> 
> -Original Message-
> From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
> Sent: 27 June 2017 01:25
> To: dev@cloudstack.apache.org
> Cc: Wido den Hollander <w...@widodh.nl>
> Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3
> 
> I tend to agree with you here, Daan. I know the downside we’ve
> discussed in the past is that overall community participation in the
> RC process has dropped off when such a new branch is created (since
> the community as a whole tends to focus more on the new branch rather
> than on testing the RC and releasing it).
> 
> I believ

RE: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-28 Thread Paul Angus
Rajani,

I suspect that the fatigue with 4.10 release testing that we are seeing is due 
to the time it has taken to release it. And that has been caused by new 
code going in, which has introduced new bugs.

This was demonstrated by the last -1 from Kris; the change in question was 
merged 10 days ago.



Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 


-Original Message-
From: Rajani Karuturi [mailto:raj...@apache.org] 
Sent: 28 June 2017 06:14
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

We can do a release every month as long as we have enough people actively 
participating in the release process.

We have people who want to have their code/features checked in.
We very clearly do not have enough people working on releases/blockers. How 
many of us are testing/voting on releases or PRs? We have blockers in Jira 
with no one to fix them. We have PRs open for release blockers for more than a 
month with no one to test them.

I would ask everyone to start testing releases/PRs and voting on them actively.

We need people who can do the work. We already know what needs to be done as 
outlined in the release principles wiki after long discussions on this list.

Whether we create a branch off the RC or continue on master won't change the 
current situation.

We, as a community, should commit to testing and releasing code.
Principles and theory won't help.

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On June 27, 2017 at 9:43 PM, Rafael Weingärtner
(rafaelweingart...@gmail.com) wrote:

+1 to what Paul said.
IMHO, as soon as we start a release candidate to close a version, all merges 
should stop (period); the only exceptions should be PRs that address specific 
problems in the RC.
I always thought that we had a protocol for that [1]; maybe for this version, 
we have not followed it?

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up#ReleaseprinciplesforApacheCloudStack4.6andup-Preparingnewrelease:masterfrozen

On Tue, Jun 27, 2017 at 1:32 AM, Paul Angus <paul.an...@shapeblue.com>
wrote:

Hi All,

From my viewpoint, 'we' have been the architects of our own downfall. Once a 
code freeze is in place, NO new features and NO enhancements should be going 
in. Once we're at an RC stage, NO new bug fixes other than for the blockers 
should be going in. That way the release gets out, and the next one can get 
going. If 4.10 had gone out in a timely fashion, then we'd probably be on
4.11 if not 4.12 by now, with all the new features AND all the new fixes in.

People sliding new changes/bug fixes/enhancements in are not making the product 
better; they're stopping progress. As we can clearly see here.

Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com ( http://www.shapeblue.com )
53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue

-Original Message-
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
Sent: 27 June 2017 01:25
To: dev@cloudstack.apache.org
Cc: Wido den Hollander <w...@widodh.nl>
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

I tend to agree with you here, Daan. I know the downside we’ve discussed in the 
past is that overall community participation in the RC process has dropped off 
when such a new branch is created (since the community as a whole tends to 
focus more on the new branch rather than on testing the RC and releasing it).

I believe we should do the following: As we approach the first RC, we need to 
limit the number of PRs going into the branch (in order to stabilize it). If we 
had a super duper array of automated regression tests that ran against the 
code, then we might be able to avoid this, but our automated test suite is not 
extensive enough for us to do so.

As we approach the first RC, only blockers and trivial (ex. text changes) 
PRs should be permitted in. Once we cut the first RC, create a new branch for 
ongoing dev work. In between RCs, we can only allow in code related to blocker 
PRs (or trivial text changes, as discussed before).

What do people think?

On 6/13/17, 4:56 AM, "Daan Hoogland" <daan.hoogl...@gmail.com>
wrote:

this is why i say we should branch on first RC, fix in release branch only and 
merge forward

On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens < williamstev...@gmail.com> 
wrote:

I know it is hard to justify not merging PRs that seem ready but are not 
blockers in an RC, but it is a vicious circle which ultimately results in a 
longer RC process.

It is something i struggled with as a release manager as well.

On Jun 13, 2017 1:56 AM, "Rajani Karuturi" <raj...@apache.org>

wrote:

Thanks Mike,

Will hold off next RC until we hear an update from you.

Regarding merging non-blockers, unfortunately, it's a side effect of taking 
more than three months in the RC phase :(

Than

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-27 Thread Rajani Karuturi
We can do a release every month as long as we have enough people
actively participating in the release process.

We have people who want to have their code/features checked in.
We very clearly do not have enough people working on
releases/blockers. How many of us are testing/voting on releases
or PRs? We have blockers in Jira with no one to fix them. We have
PRs open for release blockers for more than a month with no one to
test them.

I would ask everyone to start testing releases/PRs and voting on
them actively.

We need people who can do the work. We already know what needs to
be done as outlined in the release principles wiki after long
discussions on this list.

Whether we create a branch off the RC or continue on master won't
change the current situation.

We, as a community, should commit to testing and releasing code.
Principles and theory won't help.

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On June 27, 2017 at 9:43 PM, Rafael Weingärtner
(rafaelweingart...@gmail.com) wrote:

+1 to what Paul said.
IMHO, as soon as we start a release candidate to close a version, all
merges should stop (period); the only exceptions should be PRs that
address specific problems in the RC.
I always thought that we had a protocol for that [1]; maybe for this
version, we have not followed it?

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up#ReleaseprinciplesforApacheCloudStack4.6andup-Preparingnewrelease:masterfrozen

On Tue, Jun 27, 2017 at 1:32 AM, Paul Angus
<paul.an...@shapeblue.com>
wrote:

Hi All,

From my viewpoint, 'we' have been the architects of our own
downfall. Once a code freeze is in place, NO new features and NO
enhancements should be going in. Once we're at an RC stage, NO new
bug fixes other than for the blockers should be going in. That way
the release gets out, and the next one can get going. If 4.10 had
gone out in a timely fashion, then we'd probably be on 4.11 if not
4.12 by now, with all the new features AND all the new fixes in.

People sliding new changes/bug fixes/enhancements in are not making
the product better; they're stopping progress. As we can clearly see
here.

Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com ( http://www.shapeblue.com )
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue

-Original Message-
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
Sent: 27 June 2017 01:25
To: dev@cloudstack.apache.org
Cc: Wido den Hollander <w...@widodh.nl>
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

I tend to agree with you here, Daan. I know the downside we’ve
discussed in the past is that overall community participation in the
RC process has dropped off when such a new branch is created (since
the community as a whole tends to focus more on the new branch rather
than on testing the RC and releasing it).

I believe we should do the following: As we approach the first RC, we
need to limit the number of PRs going into the branch (in order to
stabilize it). If we had a super duper array of automated regression
tests that ran against the code, then we might be able to avoid this,
but our automated test suite is not extensive enough for us to do so.

As we approach the first RC, only blockers and trivial (ex. text
changes) PRs should be permitted in. Once we cut the first RC, create
a new branch for ongoing dev work. In between RCs, we can only allow
in code related to blocker PRs (or trivial text changes, as discussed
before).

What do people think?

On 6/13/17, 4:56 AM, "Daan Hoogland" <daan.hoogl...@gmail.com>
wrote:

this is why i say we should branch on first RC, fix in release branch
only and merge forward

On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens <
williamstev...@gmail.com> wrote:

I know it is hard to justify not merging PRs that seem ready but are not 
blockers in an RC, but it is a vicious circle which ultimately results in a 
longer RC process.

It is something i struggled with as a release manager as well.

On Jun 13, 2017 1:56 AM, "Rajani Karuturi" <raj...@apache.org>

wrote:

Thanks Mike,

Will hold off next RC until we hear an update from you.

Regarding merging non-blockers, unfortunately, it's a side effect
of taking more than three months in the RC phase :(

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On June 13, 2017 at 10:10 AM, Tutkowski, Mike
(mike.tutkow...@netapp.com) wrote:

Hi everyone,

I had a little time this evening and re-ran some VMware-related
tests around managed storage. I noticed a problem that I’d like
to investigate before we spin up the next RC. Let’s hold off on
the next RC until I can find out more (I should know more within
24 hours).

Thanks!
Mike

On 6/12/17, 2:40 AM, "Wido den Hollander" <w...@widodh.nl>
wrote:

Op 10 juni 2017 om 21:18 schreef "Tutkowski, Mike"

<mike.tutkow...@netapp.com>:

Hi,

I opened a PR against the most 

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-27 Thread Tutkowski, Mike
I'm glad you guys (Paul and Rafael) agree with me. We should cut a branch once 
the first RC is built. Then we should only allow blockers in to fix RC issues.

This should speed up our releases in the future.

> On Jun 27, 2017, at 10:14 AM, Rafael Weingärtner 
> <rafaelweingart...@gmail.com> wrote:
> 
> +1 to what Paul said.
> IMHO, as soon as we start a release candidate to close a version, all
> merges should stop (period); the only exceptions should be PRs that address
> specific problems in the RC.
> I always thought that we had a protocol for that [1]; maybe for this
> version, we have not followed it?
> 
> [1]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up#ReleaseprinciplesforApacheCloudStack4.6andup-Preparingnewrelease:masterfrozen
> 
> On Tue, Jun 27, 2017 at 1:32 AM, Paul Angus <paul.an...@shapeblue.com>
> wrote:
> 
>> Hi All,
>> 
>> From my viewpoint, 'we' have been the architects of our own downfall. Once
>> a code freeze is in place, NO new features and NO enhancements should be
>> going in. Once we're at an RC stage, NO new bug fixes other than for the
>> blockers should be going in. That way the release gets out, and the next
>> one can get going. If 4.10 had gone out in a timely fashion, then we'd
>> probably be on 4.11 if not 4.12 by now, with all the new features AND all
>> the new fixes in.
>> 
>> People sliding new changes/bug fixes/enhancements in are not making the
>> product better; they're stopping progress. As we can clearly see here.
>> 
>> 
>> Kind regards,
>> 
>> Paul Angus
>> 
>> paul.an...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>> 
>> 
>> 
>> 
>> -Original Message-----
>> From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
>> Sent: 27 June 2017 01:25
>> To: dev@cloudstack.apache.org
>> Cc: Wido den Hollander <w...@widodh.nl>
>> Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3
>> 
>> I tend to agree with you here, Daan. I know the downside we’ve discussed
>> in the past is that overall community participation in the RC process has
>> dropped off when such a new branch is created (since the community as a
>> whole tends to focus more on the new branch rather than on testing the RC
>> and releasing it).
>> 
>> I believe we should do the following: As we approach the first RC, we need
>> to limit the number of PRs going into the branch (in order to stabilize
>> it). If we had a super duper array of automated regression tests that ran
>> against the code, then we might be able to avoid this, but our automated
>> test suite is not extensive enough for us to do so.
>> 
>> As we approach the first RC, only blockers and trivial (ex. text changes)
>> PRs should be permitted in. Once we cut the first RC, create a new branch
>> for ongoing dev work. In between RCs, we can only allow in code related to
>> blocker PRs (or trivial text changes, as discussed before).
>> 
>> What do people think?
>> 
>> On 6/13/17, 4:56 AM, "Daan Hoogland" <daan.hoogl...@gmail.com> wrote:
>> 
>>this is why i say we should branch on first RC, fix in release branch
>>only and merge forward
>> 
>>On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens <
>> williamstev...@gmail.com> wrote:
>>> I know it is hard to justify not merging PRs that seem ready but are not
>>> blockers in an RC, but it is a vicious circle which ultimately results in
>>> a longer RC process.
>>> 
>>> It is something i struggled with as a release manager as well.
>>> 
>>> On Jun 13, 2017 1:56 AM, "Rajani Karuturi" <raj...@apache.org>
>> wrote:
>>> 
>>> Thanks Mike,
>>> 
>>> Will hold off next RC until we hear an update from you.
>>> 
>>> Regarding merging non-blockers, unfortunately, it's a side effect
>>> of taking more than three months in the RC phase :(
>>> 
>>> Thanks,
>>> 
>>> ~ Rajani
>>> 
>>> http://cloudplatform.accelerite.com/
>>> 
>>> On June 13, 2017 at 10:10 AM, Tutkowski, Mike
>>> (mike.tutkow...@netapp.com) wrote:
>>> 
>>> Hi everyone,
>>> 
>>> I had a little time this evening and re-ran some VMware-related
>>> tests around managed storage. I noticed a problem that I’d like
>>> to investigate before we spin up the next RC. Let’s hold off on
>>> the next RC

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-27 Thread Rafael Weingärtner
+1 to what Paul said.
IMHO, as soon as we start a release candidate to close a version, all
merges should stop (period); the only exceptions should be PRs that address
specific problems in the RC.
I always thought that we had a protocol for that [1]; maybe for this
version, we have not followed it?

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up#ReleaseprinciplesforApacheCloudStack4.6andup-Preparingnewrelease:masterfrozen

On Tue, Jun 27, 2017 at 1:32 AM, Paul Angus <paul.an...@shapeblue.com>
wrote:

> Hi All,
>
> From my viewpoint, 'we' have been the architects of our own downfall. Once
> a code freeze is in place, NO new features and NO enhancements should be
> going in. Once we're at an RC stage, NO new bug fixes other than for the
> blockers should be going in. That way the release gets out, and the next
> one can get going. If 4.10 had gone out in a timely fashion, then we'd
> probably be on 4.11 if not 4.12 by now, with all the new features AND all
> the new fixes in.
>
> People sliding new changes/bug fixes/enhancements in are not making the
> product better; they're stopping progress. As we can clearly see here.
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
> Sent: 27 June 2017 01:25
> To: dev@cloudstack.apache.org
> Cc: Wido den Hollander <w...@widodh.nl>
> Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3
>
> I tend to agree with you here, Daan. I know the downside we’ve discussed
> in the past is that overall community participation in the RC process has
> dropped off when such a new branch is created (since the community as a
> whole tends to focus more on the new branch rather than on testing the RC
> and releasing it).
>
> I believe we should do the following: As we approach the first RC, we need
> to limit the number of PRs going into the branch (in order to stabilize
> it). If we had a super duper array of automated regression tests that ran
> against the code, then we might be able to avoid this, but our automated
> test suite is not extensive enough for us to do so.
>
> As we approach the first RC, only blockers and trivial (ex. text changes)
> PRs should be permitted in. Once we cut the first RC, create a new branch
> for ongoing dev work. In between RCs, we can only allow in code related to
> blocker PRs (or trivial text changes, as discussed before).
>
> What do people think?
>
> On 6/13/17, 4:56 AM, "Daan Hoogland" <daan.hoogl...@gmail.com> wrote:
>
> this is why i say we should branch on first RC, fix in release branch
> only and merge forward
>
> On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens <
> williamstev...@gmail.com> wrote:
> > I know it is hard to justify not merging PRs that seem ready but are not
> > blockers in an RC, but it is a vicious circle which ultimately results in
> > a longer RC process.
> >
> > It is something i struggled with as a release manager as well.
> >
> > On Jun 13, 2017 1:56 AM, "Rajani Karuturi" <raj...@apache.org>
> wrote:
> >
> > Thanks Mike,
> >
> > Will hold off next RC until we hear an update from you.
> >
> > Regarding merging non-blockers, unfortunately, it's a side effect
> > of taking more than three months in the RC phase :(
> >
> > Thanks,
> >
> > ~ Rajani
> >
> > http://cloudplatform.accelerite.com/
> >
> > On June 13, 2017 at 10:10 AM, Tutkowski, Mike
> > (mike.tutkow...@netapp.com) wrote:
> >
> > Hi everyone,
> >
> > I had a little time this evening and re-ran some VMware-related
> > tests around managed storage. I noticed a problem that I’d like
> > to investigate before we spin up the next RC. Let’s hold off on
> > the next RC until I can find out more (I should know more within
> > 24 hours).
> >
> > Thanks!
> > Mike
> >
> > On 6/12/17, 2:40 AM, "Wido den Hollander" <w...@widodh.nl>
> > wrote:
> >
> >> Op 10 juni 2017 om 21:18 schreef "Tutkowski, Mike"
> > <mike.tutkow...@netapp.com>:
> >>
> >>
> >> Hi,
> >>
> >> I opened a PR against the most recent RC:
> > https://github.com/apache/cloudstack/pull/2141
> >>
> >

RE: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-26 Thread Paul Angus
Hi All,

From my viewpoint, 'we' have been the architects of our own downfall. Once a 
code freeze is in place, NO new features and NO enhancements should be going 
in. Once we're at an RC stage, NO new bug fixes other than for the blockers 
should be going in. That way the release gets out, and the next one can get 
going. If 4.10 had gone out in a timely fashion, then we'd probably be on 4.11 
if not 4.12 by now, with all the new features AND all the new fixes in.

People sliding new changes/bug fixes/enhancements in are not making the product 
better; they're stopping progress. As we can clearly see here.


Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 


-Original Message-
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com] 
Sent: 27 June 2017 01:25
To: dev@cloudstack.apache.org
Cc: Wido den Hollander <w...@widodh.nl>
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

I tend to agree with you here, Daan. I know the downside we’ve discussed in the 
past is that overall community participation in the RC process has dropped off 
when such a new branch is created (since the community as a whole tends to 
focus more on the new branch rather than on testing the RC and releasing it).

I believe we should do the following: As we approach the first RC, we need to 
limit the number of PRs going into the branch (in order to stabilize it). If we 
had a super duper array of automated regression tests that ran against the 
code, then we might be able to avoid this, but our automated test suite is not 
extensive enough for us to do so.

As we approach the first RC, only blockers and trivial (ex. text changes) PRs 
should be permitted in. Once we cut the first RC, create a new branch for 
ongoing dev work. In between RCs, we can only allow in code related to blocker 
PRs (or trivial text changes, as discussed before).

What do people think?

On 6/13/17, 4:56 AM, "Daan Hoogland" <daan.hoogl...@gmail.com> wrote:

this is why i say we should branch on first RC, fix in release branch
only and merge forward

On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens <williamstev...@gmail.com> 
wrote:
> I know it is hard to justify not merging PRs that seem ready but are not
> blockers in an RC, but it is a vicious circle which ultimately results in a
> longer RC process.
>
> It is something i struggled with as a release manager as well.
>
> On Jun 13, 2017 1:56 AM, "Rajani Karuturi" <raj...@apache.org> wrote:
>
> Thanks Mike,
>
> Will hold off next RC until we hear an update from you.
>
> Regarding merging non-blockers, unfortunately, it's a side effect
> of taking more than three months in the RC phase :(
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On June 13, 2017 at 10:10 AM, Tutkowski, Mike
> (mike.tutkow...@netapp.com) wrote:
>
> Hi everyone,
>
> I had a little time this evening and re-ran some VMware-related
> tests around managed storage. I noticed a problem that I’d like
> to investigate before we spin up the next RC. Let’s hold off on
> the next RC until I can find out more (I should know more within
> 24 hours).
>
> Thanks!
> Mike
>
> On 6/12/17, 2:40 AM, "Wido den Hollander" <w...@widodh.nl>
> wrote:
>
>> Op 10 juni 2017 om 21:18 schreef "Tutkowski, Mike"
> <mike.tutkow...@netapp.com>:
>>
>>
>> Hi,
>>
>> I opened a PR against the most recent RC:
> https://github.com/apache/cloudstack/pull/2141
>>
>> I ran all managed-storage regression tests against it and they
> pass (as noted in detail in the PR).
>>
>> If someone wants to take this code and create a new RC from
> it, I’m +1 on the new RC as long as this is the only commit added
> to it since the current RC.
>
> Thanks Mike!
>
> If this PR is good we should probably merge it asap and go for
> RC5.
>
> 4.10 should really be released by now.
>
> Wido
>
>>
>> Thanks!
>> Mike
>>
>> On 6/9/17, 7:43 PM, "Tutkowski, Mike"
> <mike.tutkow...@netapp.com> wrote:
>>
>> Hi everyone,
>>
>> I found a critical issue that was introduced into this RC
> since the most recent RC, so I am -1 on this RC.
>>
>> The fix for this ticket breaks the support for storing volume
> snapshots on primary storage (which is a feature managed storage
> can support):

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-26 Thread Tutkowski, Mike
I tend to agree with you here, Daan. I know the downside we’ve discussed in the 
past is that overall community participation in the RC process has dropped off 
when such a new branch is created (since the community as a whole tends to 
focus more on the new branch rather than on testing the RC and releasing it).

I believe we should do the following: As we approach the first RC, we need to 
limit the number of PRs going into the branch (in order to stabilize it). If we 
had a super duper array of automated regression tests that ran against the 
code, then we might be able to avoid this, but our automated test suite is not 
extensive enough for us to do so.

As we approach the first RC, only blockers and trivial (ex. text changes) PRs 
should be permitted in. Once we cut the first RC, create a new branch for 
ongoing dev work. In between RCs, we can only allow in code related to blocker 
PRs (or trivial text changes, as discussed before).

What do people think?

On 6/13/17, 4:56 AM, "Daan Hoogland"  wrote:

this is why i say we should branch on first RC, fix in release branch
only and merge forward

On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens  
wrote:
> I know it is hard to justify not merging PRs that seem ready but are not
> blockers in an RC, but it is a vicious circle which ultimately results in a
> longer RC process.
>
> It is something i struggled with as a release manager as well.
>
> On Jun 13, 2017 1:56 AM, "Rajani Karuturi"  wrote:
>
> Thanks Mike,
>
> Will hold off next RC until we hear an update from you.
>
> Regarding merging non-blockers, unfortunately, it's a side effect
> of taking more than three months in the RC phase :(
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On June 13, 2017 at 10:10 AM, Tutkowski, Mike
> (mike.tutkow...@netapp.com) wrote:
>
> Hi everyone,
>
> I had a little time this evening and re-ran some VMware-related
> tests around managed storage. I noticed a problem that I’d like
> to investigate before we spin up the next RC. Let’s hold off on
> the next RC until I can find out more (I should know more within
> 24 hours).
>
> Thanks!
> Mike
>
> On 6/12/17, 2:40 AM, "Wido den Hollander" 
> wrote:
>
>> Op 10 juni 2017 om 21:18 schreef "Tutkowski, Mike"
> :
>>
>>
>> Hi,
>>
>> I opened a PR against the most recent RC:
> https://github.com/apache/cloudstack/pull/2141
>>
>> I ran all managed-storage regression tests against it and they
> pass (as noted in detail in the PR).
>>
>> If someone wants to take this code and create a new RC from
> it, I’m +1 on the new RC as long as this is the only commit added
> to it since the current RC.
>
> Thanks Mike!
>
> If this PR is good we should probably merge it asap and go for
> RC5.
>
> 4.10 should really be released by now.
>
> Wido
>
>>
>> Thanks!
>> Mike
>>
>> On 6/9/17, 7:43 PM, "Tutkowski, Mike"
>  wrote:
>>
>> Hi everyone,
>>
>> I found a critical issue that was introduced into this RC
> since the most recent RC, so I am -1 on this RC.
>>
>> The fix for this ticket breaks the support for storing volume
> snapshots on primary storage (which is a feature managed storage
> can support):
>>
>> https://issues.apache.org/jira/browse/CLOUDSTACK-9685
>>
>> Here is the SHA: 336df84f1787de962a67d0a34551f9027303040e
>>
>> At a high level, what it does is remove a row from the
> cloud.snapshot_store_ref table when a volume is deleted that has
> one or more volume snapshots.
>>
>> This is fine for non-managed (traditional) storage; however,
> managed storage can store volume snapshots on primary storage, so
> removing this row breaks that functionality.
>>
>> I can fix the problem that this commit introduced by looking
> at the primary storage that supports the volume snapshot and
> checking the following: 1) Is this managed storage? 2) If yes, is
> the snapshot in question stored on that primary storage?
>>
>> The problem is I will be out of the office for a couple weeks
> and will not be able to address this until I return.
>>
>> We could revert the commit, but I still will not have time to
> run the managed-storage regression test suite until I return.
>>
>> On a side note, it looks like this commit was introduced since
> the most recent RC. I would argue that it was not a blocker and
> should not have been placed into the new RC. We (as a community)
> tend to have a lot of code go in between RCs and that just
> increases the chances of 

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-13 Thread Tutkowski, Mike
FYI: I located what was going on with VMware + managed storage. It looks like 
there was a feature that went in (at some point…not sure when) that added the 
ability to resize a root disk (so it doesn’t have to be the same size as the 
template it uses) when spinning up a VM. That code triggered an exception with 
managed storage because it was appending an extra “.vmdk” onto the file name 
(so the VMDK file couldn’t be located in the datastore). I have corrected the 
problem and pushed a second commit to PR 2141.

https://github.com/apache/cloudstack/pull/2141

If we’d like this targeted against master, let me know.

Thanks!
Mike

On 6/13/17, 4:56 AM, "Daan Hoogland"  wrote:

this is why i say we should branch on first RC, fix in release branch
only and merge forward

On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens  
wrote:
> I know it is hard to justify not merging PRs that seem ready but are not
> blockers in an RC, but it is a vicious circle which ultimately results in a
> longer RC process.
>
> It is something i struggled with as a release manager as well.
>
> On Jun 13, 2017 1:56 AM, "Rajani Karuturi"  wrote:
>
> Thanks Mike,
>
> Will hold off next RC until we hear an update from you.
>
> Regarding merging non-blockers, unfortunately, it's a side effect
> of taking more than three months in the RC phase :(
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On June 13, 2017 at 10:10 AM, Tutkowski, Mike
> (mike.tutkow...@netapp.com) wrote:
>
> Hi everyone,
>
> I had a little time this evening and re-ran some VMware-related
> tests around managed storage. I noticed a problem that I’d like
> to investigate before we spin up the next RC. Let’s hold off on
> the next RC until I can find out more (I should know more within
> 24 hours).
>
> Thanks!
> Mike
>
> On 6/12/17, 2:40 AM, "Wido den Hollander" 
> wrote:
>
>> Op 10 juni 2017 om 21:18 schreef "Tutkowski, Mike"
> :
>>
>>
>> Hi,
>>
>> I opened a PR against the most recent RC:
> https://github.com/apache/cloudstack/pull/2141
>>
>> I ran all managed-storage regression tests against it and they
> pass (as noted in detail in the PR).
>>
>> If someone wants to take this code and create a new RC from
> it, I’m +1 on the new RC as long as this is the only commit added
> to it since the current RC.
>
> Thanks Mike!
>
> If this PR is good we should probably merge it asap and go for
> RC5.
>
> 4.10 should really be released by now.
>
> Wido
>
>>
>> Thanks!
>> Mike
>>
>> On 6/9/17, 7:43 PM, "Tutkowski, Mike"
>  wrote:
>>
>> Hi everyone,
>>
>> I found a critical issue that was introduced into this RC
> since the most recent RC, so I am -1 on this RC.
>>
>> The fix for this ticket breaks the support for storing volume
> snapshots on primary storage (which is a feature managed storage
> can support):
>>
>> https://issues.apache.org/jira/browse/CLOUDSTACK-9685
>>
>> Here is the SHA: 336df84f1787de962a67d0a34551f9027303040e
>>
>> At a high level, what it does is remove a row from the
> cloud.snapshot_store_ref table when a volume is deleted that has
> one or more volume snapshots.
>>
>> This is fine for non-managed (traditional) storage; however,
> managed storage can store volume snapshots on primary storage, so
> removing this row breaks that functionality.
>>
>> I can fix the problem that this commit introduced by looking
> at the primary storage that supports the volume snapshot and
> checking the following: 1) Is this managed storage? 2) If yes, is
> the snapshot in question stored on that primary storage?
>>
>> The problem is I will be out of the office for a couple weeks
> and will not be able to address this until I return.
>>
>> We could revert the commit, but I still will not have time to
> run the managed-storage regression test suite until I return.
>>
>> On a side note, it looks like this commit was introduced since
> the most recent RC. I would argue that it was not a blocker and
> should not have been placed into the new RC. We (as a community)
> tend to have a lot of code go in between RCs and that just
> increases the chances of introducing critical issues and thus
> delaying the release. We’ve gotten better at this over the years,
> but we should focus more on only allowing the entry of new code
> into a follow-on RC that is critical (or so trivial as to not at
> all be likely to introduce any problems…like fixing an error
> message).
>>

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-13 Thread Daan Hoogland
This is why I say we should branch on the first RC, fix in the release
branch only, and merge forward.


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-13 Thread Will Stevens
I know it is hard to justify not merging PRs that seem ready but are not
blockers in an RC, but it is a vicious circle that ultimately results in a
longer RC process.

It is something I struggled with as a release manager as well.


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-12 Thread Rajani Karuturi
Thanks Mike,

Will hold off on the next RC until we hear an update from you.

Regarding merging non-blockers: unfortunately, it's a side effect
of taking more than three months in the RC phase :(

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-12 Thread Tutkowski, Mike
Hi everyone,

I had a little time this evening and re-ran some VMware-related tests around 
managed storage. I noticed a problem that I’d like to investigate before we 
spin up the next RC. Let’s hold off on the next RC until I can find out more (I 
should know more within 24 hours).

Thanks!
Mike


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-12 Thread Wido den Hollander
> Op 10 juni 2017 om 21:18 schreef "Tutkowski, Mike" 
> :
> 
> 
> Hi,
> 
> I opened a PR against the most recent RC: 
> https://github.com/apache/cloudstack/pull/2141
> 
> I ran all managed-storage regression tests against it and they pass (as noted 
> in detail in the PR).
> 
> If someone wants to take this code and create a new RC from it, I’m +1 on the 
> new RC as long as this is the only commit added to it since the current RC.

Thanks Mike!

If this PR is good, we should probably merge it ASAP and go for RC5.

4.10 should really be released by now.

Wido


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-10 Thread Tutkowski, Mike
Hi,

I opened a PR against the most recent RC: 
https://github.com/apache/cloudstack/pull/2141

I ran all managed-storage regression tests against it and they pass (as noted 
in detail in the PR).

If someone wants to take this code and create a new RC from it, I’m +1 on the 
new RC as long as this is the only commit added to it since the current RC.

Thanks!
Mike


Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-09 Thread Tutkowski, Mike
Hi everyone,

I found a critical issue that was introduced into this RC since the most recent 
RC, so I am -1 on this RC.

The fix for this ticket breaks support for storing volume snapshots on
primary storage (a feature managed storage can support):

https://issues.apache.org/jira/browse/CLOUDSTACK-9685

Here is the SHA: 336df84f1787de962a67d0a34551f9027303040e

At a high level, it removes a row from the cloud.snapshot_store_ref
table when a volume that has one or more volume snapshots is deleted.

This is fine for non-managed (traditional) storage; however, managed storage 
can store volume snapshots on primary storage, so removing this row breaks that 
functionality.

I can fix the problem that this commit introduced by looking at the primary 
storage that supports the volume snapshot and checking the following: 1) Is 
this managed storage? 2) If yes, is the snapshot in question stored on that 
primary storage?

The problem is I will be out of the office for a couple weeks and will not be 
able to address this until I return.

We could revert the commit, but I still will not have time to run the 
managed-storage regression test suite until I return.

On a side note, it looks like this commit was introduced since the most recent 
RC. I would argue that it was not a blocker and should not have been placed 
into the new RC. We (as a community) tend to have a lot of code go in between 
RCs and that just increases the chances of introducing critical issues and thus 
delaying the release. We’ve gotten better at this over the years, but we should 
focus more on only allowing the entry of new code into a follow-on RC that is 
critical (or so trivial as to not at all be likely to introduce any 
problems…like fixing an error message).

Thanks for your efforts on this, everyone!
Mike

On 6/9/17, 8:52 AM, "Tutkowski, Mike"  wrote:

Hi Rajani,

I will see if I can get all of my managed-storage testing (both automated 
and manual) done today. If not, we’ll need to see if someone else can complete 
it before we OK this RC as I won’t be back in the office for a couple weeks. 
I’ll report back later today.

Thanks,
Mike

On 6/9/17, 2:34 AM, "Rajani Karuturi"  wrote:

Yup, that's right. I don't know how it happened, but the RC was created
from the previous RC commit. The script is supposed to do a git
pull. I didn't notice any failures. Not sure what went wrong.

Thanks for finding it, Mike. I am creating RC4 now and cancelling
this.

~ Rajani

http://cloudplatform.accelerite.com/

On June 9, 2017 at 12:07 PM, Tutkowski, Mike
(mike.tutkow...@netapp.com) wrote:

Hi Rajani,

I don’t see the following PR in this RC:

https://github.com/apache/cloudstack/pull/2098

I ran all of my managed-storage regression tests. They all
passed with the exception of the one that led to PR 2098.

As I examine the RC in a bit more detail, it sits on top of
ed2f573, but I think it should sit on top of ed376fc.

As a result, I am -1 on the RC.

It takes me about a day to run all of the managed-storage
regression tests and I am out of the office for the next couple
weeks, so I’d really like to avoid another RC until I’m back and
able to test the next RC.

Thanks!
Mike

On 6/7/17, 4:36 AM, "Rajani Karuturi"  wrote:

Hi All,

I've created 4.10.0.0 release with the following artifacts up
for a vote:

Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=commit;h=a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Commit: a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Branch: 4.10.0.0-RC20170607T1407

Source release (checksums and signatures are available at the
same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.10.0.0/

SystemVm Templates:
http://download.cloudstack.org/systemvm/4.10/RC3/

PGP release keys (signed using CBB44821):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure
to indicate
"(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
~Rajani
http://cloudplatform.accelerite.com/





Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-09 Thread Rajani Karuturi
Yup, that's right. I don't know how it happened, but it was created
from the previous RC commit. The script is supposed to do a git
pull. I didn't notice any failures. Not sure what went wrong.

Thanks for finding it, Mike. I am creating RC4 now and cancelling
this.

~ Rajani

http://cloudplatform.accelerite.com/
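[Editor's note] The failure described above (an RC cut from a stale checkout because the release script's git pull silently did not take effect) can be guarded against before tagging. Below is a minimal, hypothetical sketch, not from the thread or the actual release script: it builds a throwaway upstream/clone pair so it is self-contained, and the guard itself is the comparison of the local HEAD against its upstream tracking branch.

```shell
# Hypothetical guard a release script could run before tagging an RC,
# refusing to cut from a checkout that is behind its upstream.
# A throwaway upstream/clone pair keeps the demo self-contained.
set -e
work=$(mktemp -d)
git init -q "$work/upstream"
git -C "$work/upstream" -c user.email=rm@example.com -c user.name=rm \
    commit -q --allow-empty -m "commit present at clone time"
git clone -q "$work/upstream" "$work/clone"
git -C "$work/upstream" -c user.email=rm@example.com -c user.name=rm \
    commit -q --allow-empty -m "commit the clone is missing"
cd "$work/clone"
git fetch -q origin
# The actual guard: compare the local HEAD with its upstream branch.
if [ "$(git rev-parse HEAD)" != "$(git rev-parse '@{u}')" ]; then
    echo "checkout is stale: refusing to tag the RC"
else
    echo "up to date: safe to tag"
fi
```

In a real release script the throwaway setup disappears and only the fetch plus the HEAD-vs-upstream comparison remain, run in the actual release checkout.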

On June 9, 2017 at 12:07 PM, Tutkowski, Mike
(mike.tutkow...@netapp.com) wrote:

Hi Rajani,

I don’t see the following PR in this RC:

https://github.com/apache/cloudstack/pull/2098

I ran all of my managed-storage regression tests. They all
passed with the exception of the one that led to PR 2098.

As I examine the RC in a bit more detail, it sits on top of
ed2f573, but I think it should sit on top of ed376fc.

As a result, I am -1 on the RC.

It takes me about a day to run all of the managed-storage
regression tests and I am out of the office for the next couple
weeks, so I’d really like to avoid another RC until I’m back and
able to test the next RC.

Thanks!
Mike

On 6/7/17, 4:36 AM, "Rajani Karuturi"  wrote:

Hi All,

I've created 4.10.0.0 release with the following artifacts up
for a vote:

Git Branch and Commit SH:
https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=commit;h=a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Commit: a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Branch: 4.10.0.0-RC20170607T1407

Source release (checksums and signatures are available at the
same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.10.0.0/

SystemVm Templates:
http://download.cloudstack.org/systemvm/4.10/RC3/

PGP release keys (signed using CBB44821):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure
to indicate
"(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
~Rajani
http://cloudplatform.accelerite.com/

Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

2017-06-09 Thread Tutkowski, Mike
Hi Rajani,

I don’t see the following PR in this RC:

https://github.com/apache/cloudstack/pull/2098

I ran all of my managed-storage regression tests. They all passed with the 
exception of the one that led to PR 2098.

As I examine the RC in a bit more detail, it sits on top of ed2f573, but I 
think it should sit on top of ed376fc.

As a result, I am -1 on the RC.

It takes me about a day to run all of the managed-storage regression tests and 
I am out of the office for the next couple weeks, so I’d really like to avoid 
another RC until I’m back and able to test the next RC.

Thanks!
Mike
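[Editor's note] For anyone repeating the check above (verifying that a fix commit is actually contained in an RC head), here is a sketch, not from the thread, using `git merge-base --is-ancestor`. It builds a throwaway repository so it runs anywhere; against a real RC you would instead fetch the RC branch from gitbox and substitute the real commit IDs discussed in the thread.

```shell
# Hedged sketch: test whether a fix commit is contained in an RC head.
# Throwaway repo for illustration; substitute real commit IDs in practice.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=voter@example.com -c user.name=voter \
    commit -q --allow-empty -m "blocker fix"
fix=$(git rev-parse HEAD)
git -c user.email=voter@example.com -c user.name=voter \
    commit -q --allow-empty -m "commit the RC was cut from"
rc=$(git rev-parse HEAD)
# Exit status 0 means $fix is an ancestor of (i.e. contained in) $rc.
if git merge-base --is-ancestor "$fix" "$rc"; then
    echo "fix is included in the RC"
else
    echo "fix is missing from the RC: vote -1"
fi
```

The same one-line test, pointed at the fetched RC head instead of a local demo commit, is enough to decide whether a given PR's merge commit made it into the candidate.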

On 6/7/17, 4:36 AM, "Rajani Karuturi"  wrote:

Hi All,

I've created 4.10.0.0 release with the following artifacts up for a vote:

Git Branch and Commit SH:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=commit;h=a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Commit: a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Branch: 4.10.0.0-RC20170607T1407

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.10.0.0/

SystemVm Templates: http://download.cloudstack.org/systemvm/4.10/RC3/

PGP release keys (signed using CBB44821):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Thanks,
~Rajani
http://cloudplatform.accelerite.com/