Re: QA signup

2018-09-19 Thread Kurt Greaves
It's pretty much only third-party plugins. I need it for the LDAP
authenticator, and Stratio's Lucene plugin will also need it. I know there
are users out there with their own custom plugins that would benefit from
it as well (and various other open source projects). It would make things
easier; however, it's certainly feasible for these devs to just build the
jars themselves (and I've done this so far). If it's going to be easy I
think there's value in generating and hosting nightly jars, but if it's
difficult I can just write some docs for DIY.
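
In the meantime, a DIY flow can look something like the following (a sketch;
the jar name and version depend on the branch you build, and the Maven
coordinates mirror the published cassandra-all artifact):

    git clone https://github.com/apache/cassandra.git
    cd cassandra && git checkout trunk
    ant jar    # produces build/apache-cassandra-<version>.jar
    # install into the local Maven repo so plugin builds can depend on it
    mvn install:install-file \
      -Dfile=build/apache-cassandra-<version>.jar \
      -DgroupId=org.apache.cassandra \
      -DartifactId=cassandra-all \
      -Dversion=<version> \
      -Dpackaging=jar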

On Thu, 20 Sep 2018 at 12:20, Mick Semb Wever  wrote:

> Sorry about the terrible English in my last email.
>
>
> > On the target audience:
> >
> > [snip]
> >  For developers building automation around testing and
> > validation, it’d be great to have a common build to work from rather
> > than each developer producing these builds themselves.
>
>
> Sure. My question was only in context of maven artefacts.
> It seems to me all the use-cases you highlight would be for the binary
> artefacts.
>
> If that's the case we don't need to worry about publishing snapshot maven
> artefacts, and can just focus on uploading nightly builds to
> https://dist.apache.org/repos/dist/dev/cassandra/
>
> Or is there a use-case I'm missing that needs the maven artefacts?
>


Re: QA signup

2018-09-19 Thread Mick Semb Wever
Sorry about the terrible English in my last email.


> On the target audience:
> 
> [snip]
>  For developers building automation around testing and 
> validation, it’d be great to have a common build to work from rather 
> than each developer producing these builds themselves.


Sure. My question was only in context of maven artefacts.
It seems to me all the use-cases you highlight would be for the binary 
artefacts.

If that's the case we don't need to worry about publishing snapshot maven
artefacts, and can just focus on uploading nightly builds to 
https://dist.apache.org/repos/dist/dev/cassandra/

Or is there a use-case I'm missing that needs the maven artefacts?




Measuring Release Quality

2018-09-19 Thread Scott Andreas
Hi everyone,

Now that many teams have begun testing and validating Apache Cassandra 4.0, 
it’s useful to think about what “progress” looks like. While metrics alone may 
not tell us what “done” means, they do help us answer the question, “are we
getting better or worse – and how quickly?”

A friend described to me a few attributes of metrics he considered useful, 
suggesting that good metrics are actionable, visible, predictive, and 
consequent:

– Actionable: We know what to do based on them – where to invest, what to fix, 
what’s fine, etc.
– Visible: Everyone who has a stake in a metric has full visibility into it and 
participates in its definition.
– Predictive: Good metrics enable forecasting of outcomes – e.g., “consistent
performance test results against build abc predict an x% reduction in 99%ile
read latency for this workload in prod”.
– Consequent: We take actions based on them (e.g., not shipping if tests are 
failing).

Here are some notes in Confluence toward metrics that may be useful to track 
beginning in this phase of the development + release cycle. I’m interested in 
your thoughts on these. They’re also copied inline for easier reading in your 
mail client.

Link: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93324430

Cheers,

– Scott

––

Measuring Release Quality:

[ This document is a draft + sketch of ideas. It is located in the "discussion" 
section of this wiki to indicate that it is an active draft – not a document 
that has been voted on, achieved consensus, or in any way official. ]

Introduction:

This document outlines a series of metrics that may be useful toward measuring
release quality and quantifying progress during the testing / validation phase
of the Apache Cassandra 4.0 release cycle.

The goal of this document is to think through what we should consider measuring 
to quantify our progress testing and validating Apache Cassandra 4.0. This 
document explicitly does not discuss release criteria – though metrics may be a 
useful input to a discussion on that topic.


Metric: Build / Test Health (produced via CI, recorded in Confluence):

Bread-and-butter metrics intended to capture baseline build health and flakiness
in the test suite, presented as a time series to understand how they’ve changed
from build to build and release to release (a sourcing sketch follows the list):

Metrics:

– Pass / fail metrics for unit tests
– Pass / fail metrics for dtests
– Flakiness stats for unit and dtests
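
[ A sketch of how these could be sourced from CI: the Jenkins JUnit plugin
exposes pass/fail counts through its JSON API. The job name below is
hypothetical; substitute whichever job produces the canonical run. ]

    # pull pass/fail/skip counts for the latest completed build
    curl -s "https://builds.apache.org/job/Cassandra-trunk-test/lastCompletedBuild/testReport/api/json?tree=passCount,failCount,skipCount"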


Metric: “Found Bug” Count by Methodology (sourced via JQL, reported in 
Confluence):

These are intended to help us understand the efficacy of each methodology being 
applied. We might consider annotating bugs found in JIRA with the methodology 
that produced them. This could be consumed as input in a JQL query and reported 
on the Confluence dev wiki.
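
As a sketch, assuming a hypothetical “found-fuzzing” label (the label scheme
itself would need to be agreed on), such a query could be run against the JIRA
REST API:

    # "total" in the JSON response is the found-by-fuzzing count
    curl -s -G "https://issues.apache.org/jira/rest/api/2/search" \
      --data-urlencode 'jql=project = CASSANDRA AND labels = found-fuzzing' \
      --data-urlencode 'maxResults=0'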

As we reach a Pareto-optimal level of investment in a methodology, we’d expect
to see its found-bug rate taper. As we achieve higher quality across the board,
we’d expect to see a tapering in found-bug counts across all methodologies. In
the event that one or two approaches are outliers, this could indicate the
utility of doubling down on a particular form of testing.

We might consider reporting “Found By” counts for methodologies such as:

– Property-based / fuzz testing
– Replay testing
– Upgrade / Diff testing
– Performance testing
– Shadow traffic
– Unit/dtest coverage of new areas
– Source audit


Metric: “Found Bug” Count by Subsystem/Component (sourced via JQL, reported in 
Confluence):

Similar to “found by,” but “found where.” These metrics help us understand 
which components or subsystems of the database we’re finding issues in. In the 
event that a particular area stands out as “hot,” we’ll have the quantitative 
feedback we need to support investment there. Tracking these counts over time –
and their first derivative, the rate – also helps us make statements regarding
progress in various subsystems. Though we can’t prove a negative (“no bugs have
been found, therefore there are no bugs”), we gain confidence as the found-bug
rate, normalized to the effort we’re putting in, decreases.

We might consider reporting “Found In” counts for components as enumerated in 
JIRA, such as:
– Auth
– Build
– Compaction
– Compression
– Core
– CQL
– Distributed Metadata
– …and so on.


Metric: “Found Bug” Count by Severity (sourced via JQL, reported in Confluence):

Similar to “found by/where,” but “how bad”? These metrics help us understand 
the severity of the issues we encounter. As build quality improves, we would 
expect to see decreases in the severity of issues identified. A high rate of 
critical issues identified late in the release cycle would be cause for 
concern, though it may be expected earlier in the cycle.

These could roughly be sourced from the “Priority” field in JIRA:
– Trivial
– Minor
– Major
– Critical
– Blocker

While “priority” doesn’t map directly to “severity,” it may be a useful proxy. 
Alternatively, we could

Re: QA signup

2018-09-19 Thread Jonathan Haddad
It seems to me that improving / simplifying the process of building the
packages might solve this problem better.  For example, in the tests you
linked to, they were using a custom build that hadn't been rolled into
trunk.  I expect we're going to see a lot of that.

If we make building a deb package as easy as running `docker-compose run
build-deb` then that addresses the problem of testing all branches.

This sort of exists already in cassandra-builds, but it's doing a little
more than just building a basic deb package.  Might be easier if it were
directly in the Cassandra repo.
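
As a sketch of the idea (the image name here is hypothetical and assumes the
build dependencies are baked in; this is not the existing cassandra-builds
setup):

    # build unsigned binary debs from the checked-out source tree
    docker run --rm -v "$(pwd)":/cassandra -w /cassandra \
      a-debian-build-image \
      dpkg-buildpackage -us -uc -b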

On Wed, Sep 19, 2018 at 5:00 PM Scott Andreas  wrote:

> Got it, thanks!
>
> On the target audience:
>
> This would primarily be C* developers who are working on development,
> testing, and validation of the release. The performance testing exercise
> Joey, Vinay, Sumanth, Jason, Jordan, and Dinesh completed yesterday is a
> good example [1]. For developers building automation around testing and
> validation, it’d be great to have a common build to work from rather than
> each developer producing these builds themselves.
>
> Some purposes that could serve:
> – Ensuring we’re testing the same artifact. While a build produced from a
> given SHA should be ~identical, we can make stronger statements about
> particular build artifacts produced by common infrastructure than local
> builds produced by individual developers.
> – Automation: the ability to automate test suite runs on publication of a
> new build (e.g., perf, replay, traffic shadowing, upgrade tests, etc).
> – Faster dev/test/validation cycles. The perf tests in [1] identified two
> issues whose fixes will land in C-14503. Being able to pick up these fixes
> on commit via automation provides quicker feedback than waiting for a new
> build to be manually cut.
> – Fewer developers experiencing blocked automation. In a case where a
> regression is identified in a build produced by a commit (e.g., a snapshot
> build is “DOA” for testing purposes), a quick fix could resolve the issue
> with a new testable artifact produced within a day.
> – Better delineation between developer builds and those we recommend to
> the user community. Our ability to produce snapshot/nightly artifacts
> reduces the pressure to cut an alpha for testing, and with it the pressure
> to nominate community-facing releases just to further the developer-focused
> goals above.
>
> ––
> [1]
> https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Performance+Testing
>
>
> On September 18, 2018 at 5:47:18 PM, Mick Semb Wever (m...@apache.org
> ) wrote:
>
> Scott,
>
> > @Mick, thanks for your reply re: publishing snapshots/nightlies. In
> > terms of what’s needed to configure these, would it be automation around
> > building release artifacts, publishing jars to the Maven snapshots repo,
> …
>
>
> Maven artefacts are deployed to the ASF snapshot repository in Nexus.
> The short of it is to add credentials for `apache.snapshots.https` to
> ~/.m2/settings.xml and run `ant publish`.
>
> It looks like `ant publish` won't run when it's not a release, but
> otherwise the maven deploy properties in `build.xml` look correct for
> snapshots.
>
> I haven't looked into how to automate this in Jenkins with regard to the
> settings.xml credentials and the GPG signing.
>
> For more info see: http://www.apache.org/dev/publishing-maven-artifacts.html
>
> A question I have is: who are we targeting with maven snapshots? Is this an
> audience that can easily enough build the jars themselves to test during the
> feature freeze period?
>
>
> > and to dist/dev/cassandra on dist.apache.org for binary artifacts?
>
>
> This is a simpler task: just upload (`svn commit`) the nightly binaries to
> https://dist.apache.org/repos/dist/dev/cassandra/
>
> See https://www.apache.org/legal/release-policy.html#host-rc
>
> regards,
> Mick
>

-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: QA signup

2018-09-19 Thread Scott Andreas
Got it, thanks!

On the target audience:

This would primarily be C* developers who are working on development, testing, 
and validation of the release. The performance testing exercise Joey, Vinay, 
Sumanth, Jason, Jordan, and Dinesh completed yesterday is a good example [1]. 
For developers building automation around testing and validation, it’d be great 
to have a common build to work from rather than each developer producing these 
builds themselves.

Some purposes that could serve:
– Ensuring we’re testing the same artifact. While a build produced from a given 
SHA should be ~identical, we can make stronger statements about particular 
build artifacts produced by common infrastructure than local builds produced by 
individual developers.
– Automation: the ability to automate test suite runs on publication of a new 
build (e.g., perf, replay, traffic shadowing, upgrade tests, etc).
– Faster dev/test/validation cycles. The perf tests in [1] identified two 
issues whose fixes will land in C-14503. Being able to pick up these fixes on 
commit via automation provides quicker feedback than waiting for a new build to 
be manually cut.
– Fewer developers experiencing blocked automation. In a case where a 
regression is identified in a build produced by a commit (e.g., a snapshot 
build is “DOA” for testing purposes), a quick fix could resolve the issue with 
a new testable artifact produced within a day.
– Better delineation between developer builds and those we recommend to the 
user community. Our ability to produce snapshot/nightly artifacts reduces the
pressure to cut an alpha for testing, and with it the pressure to nominate
community-facing releases just to further the developer-focused goals above.

––
[1] 
https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Performance+Testing


On September 18, 2018 at 5:47:18 PM, Mick Semb Wever 
(m...@apache.org) wrote:

Scott,

> @Mick, thanks for your reply re: publishing snapshots/nightlies. In
> terms of what’s needed to configure these, would it be automation around
> building release artifacts, publishing jars to the Maven snapshots repo, …


Maven artefacts are deployed to the ASF snapshot repository in Nexus.
The short of it is to add credentials for `apache.snapshots.https` to 
~/.m2/settings.xml and run `ant publish`.
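
A minimal sketch of that settings.xml (note this overwrites any existing file,
and plaintext credentials are shown for illustration only):

    cat > ~/.m2/settings.xml <<'EOF'
    <settings>
      <servers>
        <server>
          <id>apache.snapshots.https</id>
          <username>your-asf-username</username>
          <password>your-asf-password</password>
        </server>
      </servers>
    </settings>
    EOF
    ant publish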

It looks like `ant publish` won't run when it's not a release, but otherwise 
the maven deploy properties in `build.xml` look correct for snapshots.

I haven't looked into how to automate this in Jenkins with regard to the
settings.xml credentials and the GPG signing.

For more info see: http://www.apache.org/dev/publishing-maven-artifacts.html

A question I have is: who are we targeting with maven snapshots? Is this an
audience that can easily enough build the jars themselves to test during the
feature freeze period?


> and to dist/dev/cassandra on dist.apache.org for binary artifacts?


This is a simpler task: just upload (`svn commit`) the nightly binaries to
https://dist.apache.org/repos/dist/dev/cassandra/
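
Something like the following (the directory naming for a nightly is an
assumption; we'd pick a convention):

    svn checkout https://dist.apache.org/repos/dist/dev/cassandra/ dist-dev
    cd dist-dev
    mkdir 4.0-nightly-20180919
    cp /path/to/apache-cassandra-*-bin.tar.gz 4.0-nightly-20180919/
    svn add 4.0-nightly-20180919
    svn commit -m "Add Cassandra 4.0 nightly build 2018-09-19"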

See https://www.apache.org/legal/release-policy.html#host-rc

regards,
Mick
