Re: Static Analysis tests

2016-04-27 Thread Nate Finch
Maybe we're not as far apart as I thought at first.

My thought was that they'd live under github.com/juju/juju/devrules (or
some other name) and therefore only get run during a full test run or if
you run them there specifically.  What is a full test run if not a test of
all our code?  These tests just happen to test all the code at once, rather
than piece by piece.  Combining this with the other thread: if we also
marked them as skipped under -short, you could still run go test ./...
-short from the root of the juju repo and not incur the extra 16.5 seconds.
(gocheck has a nice feature where calling c.Skip() in the SetUpSuite skips
every test in the suite, which is particularly apt for these tests, since
it's the SetUpSuite that takes all the time.)
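
Concretely, the suite-level skip might look like this (a sketch only; the
suite name and skip message are made up, and gc is the usual alias for
gopkg.in/check.v1):

    import (
        "testing"

        gc "gopkg.in/check.v1"
    )

    type analysisSuite struct {
        // shared state produced by the expensive one-time parse
    }

    func (s *analysisSuite) SetUpSuite(c *gc.C) {
        if testing.Short() {
            c.Skip("skipping whole-codebase analysis in -short mode")
        }
        // the ~16.5 second parsing work would happen here, once per suite
    }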

Mostly, I just didn't want them to live off in a separate repo or run with
a separate tool.

On Wed, Apr 27, 2016 at 11:39 PM Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Thu, Apr 28, 2016 at 11:14 AM Nate Finch wrote:
>
>> From the other thread:
>>
>> I wrote a test that parses the entire codebase under github.com/juju/juju to
>> look for places where we're creating a new value of crypto/tls.Config
>> instead of using the new helper function that I wrote that creates one with
>> more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
>> not really any getting around the fact that parsing the whole tree takes a
>> long time.
>>
>> What I *don't* want is to put these tests somewhere else which requires
>> more thought/setup to run.  So, no separate long-tests directory or
>> anything.  Keep the tests close to the code and run in the same way we run
>> unit tests.
>>
>
> The general answer to this belongs back in the other thread, but I agree
> that long-running *unit* tests (if there should ever be such a thing)
> should not be shunted off to another location. Keep the unit tests with the
> unit. Integration tests are a different matter, because they cross multiple
> units. Likewise, tests for project policies.
>
> Andrew's response:
>>
>>
>> *The nature of the test is important here: it's not a test of Juju
>> functionality, but a test to ensure that we don't accidentally use a TLS
>> configuration that doesn't match our project-wide constraints. It's static
>> analysis, using the test framework; and FWIW, the sort of thing that Lingo
>> would be a good fit for.*
>>
>> *I'd suggest that we do organise things like this separately, and run
>> them as part of the "scripts/verify.sh" script. This is the sort of test
>> that you shouldn't need to run often, but I'd like us to gate merges on.*
>>
>> So, I don't really think the method of testing should determine where a
>> test lives or how it is run.  I could test the exact same things with a
>> more common unit test - check that the tls config we use when dialing the
>> API uses TLS 1.2, that it only uses these specific cipher suites, etc.  In
>> fact, we have some unit tests that do just that, to verify that SSL is
>> disabled.  However, then we'd need to remember to write those same tests
>> for every place we make a tls.Config.
>>
>
> The method of testing is not particularly relevant; it's the *purpose*
> that matters. You could probably use static analysis for a lot of our
> units; it would be inappropriate, but they'd still be testing units, and so
> should live with them.
>
> The point I was trying to make is that this is not a test of one unit, but
> a policy that covers the entire codebase. You say that you don't want to
> put them "somewhere else", but it's not at all clear to me where you think
> we *should* have them.
>
>> The thing I like about having this as part of the unit tests is that it's
>> zero friction.  They already gate landings.  We can write and run them
>> just like we write and run go tests 1000 times a day.  They're not
>> special.  There are no other commands I need to remember to run, no
>> scripts I need to remember to set up.  It's go test, end of story.
>>
>
> Using the Go testing framework is fine. I only want to make sure we're not
> slowing down the edit/test cycle by frequently testing things that are
> infrequently going to change. It's the same deal as with integration tests;
> there's a trade-off between the time spent and confidence level.
>
>> The comment about Lingo is valid, though I think we have room for both in
>> our processes.  Lingo, in my mind, is more appropriate at review-time,
>> which allows us to write lingo rules that may not have 100% confidence.
>> They can be strong suggestions rather than gating rules.  The type of test
>> I wrote should be a gating rule - there are no false positives.
>>
>> To give a little more context, I wrote the test as a suite, where you can
>> add tests to hook into the code parsing, so we can trivially add more tests
>> that use the full parsed code, while only incurring the 16.5 second parsing
>> hit once for the entire suite.  That doesn't really affect this discussion
>> at all, but I figured people might appreciate that this could be extended
>> for more than my one specific test.  I certainly wouldn't advocate people
>> writing new 17-second tests all over the place.

Re: Static Analysis tests

2016-04-27 Thread Andrew Wilkins
On Thu, Apr 28, 2016 at 11:14 AM Nate Finch wrote:

> From the other thread:
>
> I wrote a test that parses the entire codebase under github.com/juju/juju to
> look for places where we're creating a new value of crypto/tls.Config
> instead of using the new helper function that I wrote that creates one with
> more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
> not really any getting around the fact that parsing the whole tree takes a
> long time.
>
> What I *don't* want is to put these tests somewhere else which requires
> more thought/setup to run.  So, no separate long-tests directory or
> anything.  Keep the tests close to the code and run in the same way we run
> unit tests.
>

The general answer to this belongs back in the other thread, but I agree
that long-running *unit* tests (if there should ever be such a thing)
should not be shunted off to another location. Keep the unit tests with the
unit. Integration tests are a different matter, because they cross multiple
units. Likewise, tests for project policies.

Andrew's response:
>
>
> *The nature of the test is important here: it's not a test of Juju
> functionality, but a test to ensure that we don't accidentally use a TLS
> configuration that doesn't match our project-wide constraints. It's static
> analysis, using the test framework; and FWIW, the sort of thing that Lingo
> would be a good fit for.*
>
> *I'd suggest that we do organise things like this separately, and run them
> as part of the "scripts/verify.sh" script. This is the sort of test that
> you shouldn't need to run often, but I'd like us to gate merges on.*
>
> So, I don't really think the method of testing should determine where a
> test lives or how it is run.  I could test the exact same things with a
> more common unit test - check that the tls config we use when dialing the
> API uses TLS 1.2, that it only uses these specific cipher suites, etc.  In
> fact, we have some unit tests that do just that, to verify that SSL is
> disabled.  However, then we'd need to remember to write those same tests
> for every place we make a tls.Config.
>

The method of testing is not particularly relevant; it's the *purpose* that
matters. You could probably use static analysis for a lot of our units; it
would be inappropriate, but they'd still be testing units, and so should
live with them.

The point I was trying to make is that this is not a test of one unit, but
a policy that covers the entire codebase. You say that you don't want to
put them "somewhere else", but it's not at all clear to me where you think
we *should* have them.

> The thing I like about having this as part of the unit tests is that it's
> zero friction.  They already gate landings.  We can write and run them
> just like we write and run go tests 1000 times a day.  They're not
> special.  There are no other commands I need to remember to run, no
> scripts I need to remember to set up.  It's go test, end of story.
>

Using the Go testing framework is fine. I only want to make sure we're not
slowing down the edit/test cycle by frequently testing things that are
infrequently going to change. It's the same deal as with integration tests;
there's a trade-off between the time spent and confidence level.

> The comment about Lingo is valid, though I think we have room for both in
> our processes.  Lingo, in my mind, is more appropriate at review-time,
> which allows us to write lingo rules that may not have 100% confidence.
> They can be strong suggestions rather than gating rules.  The type of test
> I wrote should be a gating rule - there are no false positives.
>
> To give a little more context, I wrote the test as a suite, where you can
> add tests to hook into the code parsing, so we can trivially add more tests
> that use the full parsed code, while only incurring the 16.5 second parsing
> hit once for the entire suite.  That doesn't really affect this discussion
> at all, but I figured people might appreciate that this could be extended
> for more than my one specific test.  I certainly wouldn't advocate people
> writing new 17-second tests all over the place.
>

That sounds lovely, thank you.

Cheers,
Andrew


Static Analysis tests

2016-04-27 Thread Nate Finch
From the other thread:

I wrote a test that parses the entire codebase under github.com/juju/juju to
look for places where we're creating a new value of crypto/tls.Config
instead of using the new helper function that I wrote that creates one with
more secure defaults.  It takes 16.5 seconds to run on my machine.  There's
not really any getting around the fact that parsing the whole tree takes a
long time.
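
For a flavour of what the test does, the core check can be built on the
standard go/parser and go/ast packages. The following is an illustrative
sketch, not the actual test: it flags composite literals of the form
tls.Config{...} in a single file, matching the "tls" identifier by name
(a real version would resolve imports properly):

    import (
        "go/ast"
        "go/parser"
        "go/token"
    )

    // findTLSConfigLiterals reports where a file constructs tls.Config
    // directly. The real test would run this over every package under
    // github.com/juju/juju.
    func findTLSConfigLiterals(fset *token.FileSet, filename string) ([]token.Position, error) {
        f, err := parser.ParseFile(fset, filename, nil, 0)
        if err != nil {
            return nil, err
        }
        var hits []token.Position
        ast.Inspect(f, func(n ast.Node) bool {
            lit, ok := n.(*ast.CompositeLit)
            if !ok {
                return true
            }
            // Match selector expressions of the form tls.Config{...};
            // &tls.Config{...} is caught too, since Inspect still visits
            // the inner composite literal.
            if sel, ok := lit.Type.(*ast.SelectorExpr); ok {
                if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "tls" && sel.Sel.Name == "Config" {
                    hits = append(hits, fset.Position(lit.Pos()))
                }
            }
            return true
        })
        return hits, nil
    }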

What I *don't* want is to put these tests somewhere else which requires
more thought/setup to run.  So, no separate long-tests directory or
anything.  Keep the tests close to the code and run in the same way we run
unit tests.

Andrew's response:


*The nature of the test is important here: it's not a test of Juju
functionality, but a test to ensure that we don't accidentally use a TLS
configuration that doesn't match our project-wide constraints. It's static
analysis, using the test framework; and FWIW, the sort of thing that Lingo
would be a good fit for.*

*I'd suggest that we do organise things like this separately, and run them
as part of the "scripts/verify.sh" script. This is the sort of test that
you shouldn't need to run often, but I'd like us to gate merges on.*

So, I don't really think the method of testing should determine where a
test lives or how it is run.  I could test the exact same things with a
more common unit test - check that the tls config we use when dialing the
API uses TLS 1.2, that it only uses these specific cipher suites, etc.  In
fact, we have some unit tests that do just that, to verify that SSL is
disabled.  However, then we'd need to remember to write those same tests
for every place we make a tls.Config.
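
(For comparison, that more common kind of unit test might look like the
sketch below, where SecureTLSConfig stands in for the hypothetical helper
with secure defaults:)

    import (
        "crypto/tls"
        "testing"
    )

    func TestSecureTLSConfigDefaults(t *testing.T) {
        cfg := SecureTLSConfig() // hypothetical helper under test
        if cfg.MinVersion < tls.VersionTLS12 {
            t.Errorf("MinVersion = %#x, want at least TLS 1.2", cfg.MinVersion)
        }
        if len(cfg.CipherSuites) == 0 {
            t.Error("want an explicit cipher suite whitelist")
        }
    }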

The thing I like about having this as part of the unit tests is that it's
zero friction.  They already gate landings.  We can write and run them
just like we write and run go tests 1000 times a day.  They're not
special.  There are no other commands I need to remember to run, no
scripts I need to remember to set up.  It's go test, end of story.

The comment about Lingo is valid, though I think we have room for both in
our processes.  Lingo, in my mind, is more appropriate at review-time,
which allows us to write lingo rules that may not have 100% confidence.
They can be strong suggestions rather than gating rules.  The type of test
I wrote should be a gating rule - there are no false positives.

To give a little more context, I wrote the test as a suite, where you can
add tests to hook into the code parsing, so we can trivially add more tests
that use the full parsed code, while only incurring the 16.5 second parsing
hit once for the entire suite.  That doesn't really affect this discussion
at all, but I figured people might appreciate that this could be extended
for more than my one specific test.  I certainly wouldn't advocate people
writing new 17-second tests all over the place.
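
Shape-wise, that suite is roughly the following (a sketch with hypothetical
names; parseJujuTree and checkNoTLSConfigLiterals stand in for the expensive
parse and the per-test checks):

    import (
        "go/ast"
        "go/token"

        gc "gopkg.in/check.v1"
    )

    type codeSuite struct {
        fset  *token.FileSet
        files []*ast.File // every file under the juju tree
    }

    func (s *codeSuite) SetUpSuite(c *gc.C) {
        s.fset = token.NewFileSet()
        s.files = parseJujuTree(c, s.fset) // pays the ~16.5s cost exactly once
    }

    func (s *codeSuite) TestNoRawTLSConfig(c *gc.C) {
        for _, f := range s.files {
            checkNoTLSConfigLiterals(c, s.fset, f) // reuses the shared parse
        }
    }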

-Nate


Re: adding unit tests that take a long time

2016-04-27 Thread Katherine Cox-Buday
That is an awesome idea! +1

On 04/27/2016 05:51 PM, Andrew Wilkins wrote:
> On Thu, Apr 28, 2016 at 1:44 AM Nate Finch wrote:
>
> I was actually trying to avoid talking about the test itself to
> keep things shorter ;)  
>
> The test is parsing the entire codebase under github.com/juju/juju to
> look for places where we're
> creating a new value of crypto/tls.Config instead of using the new
> helper function that I wrote that creates one with more secure
> defaults.  There's not really any getting around the fact that
> parsing the whole tree takes a long time.
>
>
> The nature of the test is important here: it's not a test of Juju
> functionality, but a test to ensure that we don't accidentally use a
> TLS configuration that doesn't match our project-wide constraints.
> It's static analysis, using the test framework; and FWIW, the sort of
> thing that Lingo would be a good fit for.
>
> I'd suggest that we *do* organise things like this separately, and run
> them as part of the "scripts/verify.sh" script. This is the sort of
> test that you shouldn't need to run often, but I'd like us to gate
> merges on.
>
> Cheers,
> Andrew
>  
>
> On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs wrote:
>
> This is a timely discussion, Nate. I'll avoid saying too much off the
> top, but I do have a question.
>
> On 04/27/2016 12:24 PM, Nate Finch wrote:
> > I just wrote a test that takes ~16.5 seconds on my machine.
> Why does the test take so long? Are you intending it to be a short /
> small scoped test?
>
> Nicholas
>

--
Katherine



Re: adding unit tests that take a long time

2016-04-27 Thread Andrew Wilkins
On Thu, Apr 28, 2016 at 1:44 AM Nate Finch wrote:

> I was actually trying to avoid talking about the test itself to keep
> things shorter ;)
>
> The test is parsing the entire codebase under github.com/juju/juju to
> look for places where we're creating a new value of crypto/tls.Config
> instead of using the new helper function that I wrote that creates one with
> more secure defaults.  There's not really any getting around the fact that
> parsing the whole tree takes a long time.
>

The nature of the test is important here: it's not a test of Juju
functionality, but a test to ensure that we don't accidentally use a TLS
configuration that doesn't match our project-wide constraints. It's static
analysis, using the test framework; and FWIW, the sort of thing that Lingo
would be a good fit for.

I'd suggest that we *do* organise things like this separately, and run them
as part of the "scripts/verify.sh" script. This is the sort of test that
you shouldn't need to run often, but I'd like us to gate merges on.

Cheers,
Andrew


> On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> This is a timely discussion Nate. I'll avoid saying too much off the
>> top, but I do have a question.
>>
>> On 04/27/2016 12:24 PM, Nate Finch wrote:
>> > I just wrote a test that takes ~16.5 seconds on my machine.
>> Why does the test take so long? Are you intending it to be a short /
>> small scoped test?
>>
>> Nicholas
>>


MAAS 2.0 in CI

2016-04-27 Thread Martin Packman
CI now has revision testing for MAAS 2.0 provider support, though
master does not yet bootstrap successfully in our environment.

End of last week start of this, Curtis and I updated finfolk and its
hosted 1.9 vmaas to xenial, and maas 2.0 (with some small adventures
along the way). Today I also switched from ppa:maas/next to
ppa:maas-maintainters/experimental3 for beta4+bzr4958 at request of
Dimiter. Please poke again if we need to pick up another new revision
for more maas fixes.

That gets us started on maas 2.0 as part of revision testing.
Currently it fails early in the bootstrap process, as our setup is a
little more complex than what has been validated so far:

"boot resource 2.0 schema check failed: kflavor: expected string, got nothing"


I tried manually removing our centos images, which got a little further:

"filesystem 2.0 schema check failed: mount_point: expected string, got nothing"


For now, I've just put up one (non-voting) maas 2.0 job, but will fill
in the blanks with bundle testing and so on when the basics are
working:



The changes CI needs to talk to maas 2.0 behind juju are here:



Martin



Re: adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I was actually trying to avoid talking about the test itself to keep things
shorter ;)

The test is parsing the entire codebase under github.com/juju/juju to look
for places where we're creating a new value of crypto/tls.Config instead of
using the new helper function that I wrote that creates one with more
secure defaults.  There's not really any getting around the fact that
parsing the whole tree takes a long time.
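
(The helper in question would be something along these lines; this is a
sketch of the idea, with a made-up name, not the actual juju code:)

    import "crypto/tls"

    // SecureTLSConfig returns a tls.Config with hardened defaults,
    // so callers don't hand-roll their own.
    func SecureTLSConfig() *tls.Config {
        return &tls.Config{
            MinVersion: tls.VersionTLS12,
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
            },
        }
    }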

On Wed, Apr 27, 2016 at 1:25 PM Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> This is a timely discussion Nate. I'll avoid saying too much off the
> top, but I do have a question.
>
> On 04/27/2016 12:24 PM, Nate Finch wrote:
> > I just wrote a test that takes ~16.5 seconds on my machine.
> Why does the test take so long? Are you intending it to be a short /
> small scoped test?
>
> Nicholas
>


Re: adding unit tests that take a long time

2016-04-27 Thread Nicholas Skaggs
This is a timely discussion, Nate. I'll avoid saying too much off the
top, but I do have a question.


On 04/27/2016 12:24 PM, Nate Finch wrote:
> I just wrote a test that takes ~16.5 seconds on my machine.

Why does the test take so long? Are you intending it to be a short /
small scoped test?


Nicholas



adding unit tests that take a long time

2016-04-27 Thread Nate Finch
I just wrote a test that takes ~16.5 seconds on my machine. Since we are
trying to *reduce* the time taken by tests, I think it's prudent to give
developers a way to avoid running longer tests (obviously things like the
landing bot etc should always run all the tests).

What I *don't* want is to put long tests somewhere else which requires more
thought/setup to run.  So, no separate long-tests directory or anything.
Keep the tests close to the code and run in the same way we run unit
tests.  This also lowers the barrier to changing older tests that also take
a long time.

I suggest we allow developers to opt out of longer running tests.  This is
already supported by passing the -short flag to go test.

Then, long tests could use:

    if testing.Short() {
        c.Skip("skipping long running test.")
    }

go test will run this test, but go test -short will skip it.  Sweet, easy,
devs can run go test -short over and over, and when they need to run for
landing, they can drop the -short.  CI and the landing bot don't need to
change at all.
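
(Since c.Skip is gocheck and testing.Short is the standard library, a
complete test file wiring the two together would look roughly like this
sketch — package, suite, and test names are all hypothetical:)

    package foo_test

    import (
        "testing"

        gc "gopkg.in/check.v1"
    )

    func Test(t *testing.T) { gc.TestingT(t) } // standard gocheck hook-up

    type longSuite struct{}

    var _ = gc.Suite(&longSuite{})

    func (s *longSuite) TestExpensiveThing(c *gc.C) {
        if testing.Short() {
            c.Skip("skipping long running test.")
        }
        // ... the slow work ...
    }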

We could also do some sort of opt-in scheme with build tags, but then CI
and the landing bot would have to change to opt in, and you'd need separate
files for long tests, which is a pain, etc.
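
(For reference, the build-tag scheme would put long tests in a separate
file guarded by a tag — say, a hypothetical "long" tag — that only compiles
when you opt in:)

    // +build long

    package foo_test

    // Everything in this file is compiled and run only under:
    //     go test -tags long ./...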

Thoughts?
-Nate
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev