Re: Proposal for integration tests infrastructure

2014-11-04 Thread Honza Horak
Tim, thanks for your comments; I think we're on the same page on 
basically all the aspects you mention.


It seems the issue was mostly one of wording, and I still haven't found a 
better term for the tests I was referring to -- so, what should we ideally 
call tests that are no longer unit tests, but still verify only one or a 
few particular components? What about "functional tests" -- would that be 
more precise?


Honza

On 11/03/2014 07:10 PM, Tim Flink wrote:

On Mon, 03 Nov 2014 17:08:40 +0100
Honza Horak hho...@redhat.com wrote:


On 10/28/2014 08:08 AM, Nick Coghlan wrote:

On 10/22/2014 09:43 PM, Honza Horak wrote:

Fedora lacks integration testing (unit testing done during build
is not enough). Taskotron will be able to fill some gaps in the
future, so maintainers will be able to set-up various tasks after
their component is built. But even before this works we can
benefit from having the tests already available (and run them
manually if needed).

Hereby, I'd like to get ideas and figure out answers for how and
where to keep the tests. A similar discussion already took place
before, which I'd like to continue in:
https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html

And some short discussion already took place here as well:
https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570.html


It's worth clarifying your scope here, as "integration tests" means
different things to different people, and the complexity varies
wildly depending on *what* you're trying to test.

If you're just looking at tests of individual packages beyond what
folks have specified in their RPM %check macro, then this is
exactly the case that Taskotron is designed to cover.

If you're looking at more complex cases like multihost testing, bare
metal testing across multiple architectures, or installer
integration testing, then that's what Beaker was built to handle
(and has already been handling for RHEL for several years).

That level is where you start to cross the line into true system
level acceptance tests and you often *want* those maintained
independently of the individual components in order to catch
regressions in behaviour other services are relying on.


Good point about defining the scope, thanks. From my POV, we should
start with some less complicated scenarios, so we can have
something ready to use in a reasonable time.

Let's say the common use case would be defining tests that verify
components' basic functionality and that cannot be run during build.
This should cover simple installation scenarios, running test suites
that need to be run outside of the build process, or tests that need to
be run for multiple components at the same time (e.g. testing basic
functionality of a LAMP stack). This should also cover issues with
SELinux, systemd units, etc. that cannot be tested during build and
IMHO are often the cause of issues.

I have no problem stating clearly for now that the tests cannot
define any hardware requirements, not even non-localhost networking. In
other words, the tests will be run on one machine with any hardware
and any (or no) network.

However, I'd rather see tests not tied to a particular component,
since even a simple test might cover two or three of them, and it
wouldn't be correct to tie it to all of them, nor to only one.


Yeah, I think that package-specific checks are a similar but slightly
different kettle of fish from what we're discussing here.

We'd have to figure out how the integration tests would be scheduled
(nightly, on change in a set of packages corresponding to each check,
etc.) but that can wait until we've refined what we're looking to do a
bit more.


snip


How to deliver tests?
a/ just use them directly from git (we need to keep some metadata
for dependencies anyway)
b/ package them as RPMs (we can keep metadata there; e.g.
Taskotron will run only tests that have Provides:
ci-tests(mariadb) after mariadb is built; we also might automate
packaging tests to RPMs)


Our experience with Beaker suggests that you want to support both -
running directly from Git tends to be better for test development,
while using RPMs tends to be better for dependency management and
sharing test infrastructure code.


Which framework to use?
People have no time to learn new things, so we should let them
write the tests in any language and just define some conventions
for how to run them.


Taskotron already covers this pretty well (even if invoking Beaker
tests, it would make more sense to do that via Taskotron rather than
directly).


Right, Taskotron involvement seems like the best bet now, but the tests
should not be tied to it -- in case Taskotron is replaced by some
other tool for executing tasks in the future, we cannot lose the
tests themselves.


While I don't see Taskotron going away anytime soon, I agree that we
should avoid tight coupling where it makes sense to avoid it.

With my captain obvious hat on, the trick is figuring out where the
point of diminishing returns is - too much independence can be just as
problematic as not enough.

Re: Proposal for integration tests infrastructure

2014-11-04 Thread Colin Walters
This looks related to: https://wiki.gnome.org/GnomeGoals/InstalledTests

(Note that the "Issues with make check" section there is equivalent to the issues 
with rpm %check.)

It's implemented by gnome-continuous, and there's been a bit of effort to make 
-tests subpackages for some pieces in Fedora, but AFAIK no runner yet.

The thing that makes the gnome-continuous model a more radical departure here 
is it does *not* run the equivalent of rpm %check - it only supports 
InstalledTests.  After a component gets a new git commit, it's built (but not 
tested), shipped, and then *all tests* are rerun inside a VM from the resulting 
shipped tree.
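
For anyone who hasn't seen the convention: an installed test is declared by a
small ini-style .test file shipped under /usr/share/installed-tests/, pointing
at a test binary installed on the system. A minimal sketch, following the wiki
page above (the exact paths are illustrative):

  [Test]
  Type=session
  Exec=/usr/libexec/installed-tests/glib/gvariant

The runner discovers these files and runs each Exec line in the installed
environment, so the tests exercise the bits users actually get.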

The beauty of this is that all tests are running all of the time.  
Continuously.  This has caught real bugs much faster in several cases, because 
the tests for all dependencies get run when a given shared library changes.

This idea is *not* GNOME specific, as the "Related Art" section shows; it is 
also worth comparing and contrasting with Debian's autopkgtest: 
http://anonscm.debian.org/cgit/autopkgtest/autopkgtest.git/tree/doc/README.package-tests.rst
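
For comparison, autopkgtest declares tests in debian/tests/control, where each
stanza names an executable under debian/tests/ plus its dependencies and
restrictions -- a rough sketch based on that README (the test name and the
dependency are illustrative):

  Tests: smoke
  Depends: mariadb-server
  Restrictions: needs-root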

Extending this to support headless tests that run as Linux containers (e.g. via 
Docker) would likely be very worthwhile, and would cover a lot of components, such as gcc.

Re: Proposal for integration tests infrastructure

2014-11-04 Thread Matthias Clasen

On Tue, 2014-11-04 at 04:56 -0500, Colin Walters wrote:
 This looks related to: https://wiki.gnome.org/GnomeGoals/InstalledTests
 
 (Note that the "Issues with make check" section there is equivalent to the issues 
 with rpm %check.)
 
 It's implemented by gnome-continuous, and there's been a bit of effort to 
 make -tests subpackages for some pieces in Fedora, but AFAIK no runner yet.

The upstream continuous test runner is in the gnome-desktop-testing
package, and the following -tests packages are already available to make
use of it:

gjs-tests
gtk3-tests
glib2-tests
pango-tests
clutter-tests
gdk-pixbuf2-tests
gnome-weather-tests
gtksourceview3-tests
glib-networking-tests

I would love to bring the other upstream tests to Fedora as well -
altogether, we have 500 tests running continuously in gnome-continuous.
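
For anyone who wants to try this locally, the runner takes the name of the
directory a -tests package installs under /usr/share/installed-tests/ --
roughly the following (assuming the glib2-tests suite installs as "glib"):

  $ yum install gnome-desktop-testing glib2-tests
  $ gnome-desktop-testing-runner glib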




Re: Proposal for integration tests infrastructure

2014-11-03 Thread Honza Horak

On 10/28/2014 08:08 AM, Nick Coghlan wrote:

On 10/22/2014 09:43 PM, Honza Horak wrote:

Fedora lacks integration testing (unit testing done during build is not
enough). Taskotron will be able to fill some gaps in the future, so
maintainers will be able to set-up various tasks after their component
is built. But even before this works we can benefit from having the
tests already available (and run them manually if needed).

Hereby, I'd like to get ideas and figure out answers for how and where
to keep the tests. A similar discussion already took place before, which
I'd like to continue in:
https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html

And some short discussion already took place here as well:
https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570.html


It's worth clarifying your scope here, as "integration tests" means
different things to different people, and the complexity varies wildly
depending on *what* you're trying to test.

If you're just looking at tests of individual packages beyond what folks
have specified in their RPM %check macro, then this is exactly the case
that Taskotron is designed to cover.

If you're looking at more complex cases like multihost testing, bare
metal testing across multiple architectures, or installer integration
testing, then that's what Beaker was built to handle (and has already
been handling for RHEL for several years).

That level is where you start to cross the line into true system level
acceptance tests and you often *want* those maintained independently of
the individual components in order to catch regressions in behaviour
other services are relying on.


Good point about defining the scope, thanks. From my POV, we should 
start with some less complicated scenarios, so we can have 
something ready to use in a reasonable time.


Let's say the common use case would be defining tests that verify 
components' basic functionality and that cannot be run during build. This 
should cover simple installation scenarios, running test suites that 
need to be run outside of the build process, or tests that need to be run 
for multiple components at the same time (e.g. testing basic 
functionality of a LAMP stack). This should also cover issues with 
SELinux, systemd units, etc. that cannot be tested during build and IMHO 
are often the cause of issues.
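
To make the LAMP example concrete, such a test could be as small as the
following sketch -- purely illustrative, and exactly the kind of potentially
destructive check that has to run on a disposable machine rather than during
build:

  #!/bin/sh
  # install the stack, start the services, check PHP is served through httpd
  set -e
  yum -y install httpd php mariadb-server
  systemctl start mariadb httpd
  mysql -u root -e 'SELECT 1;'
  echo '<?php echo "lamp-ok"; ?>' > /var/www/html/citest.php
  curl -s http://localhost/citest.php | grep -q lamp-ok
  echo 'ok 1 - LAMP stack basic functionality'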


I have no problem stating clearly for now that the tests cannot define 
any hardware requirements, not even non-localhost networking. In other 
words, the tests will be run on one machine with any hardware and any (or 
no) network.


However, I'd rather see tests not tied to a particular component, since 
even a simple test might cover two or three of them, and it wouldn't be 
correct to tie it to all of them, nor to only one.


snip


How to deliver tests?
a/ just use them directly from git (we need to keep some metadata for
dependencies anyway)
b/ package them as RPMs (we can keep metadata there; e.g. Taskotron will
run only tests that have Provides: ci-tests(mariadb) after mariadb is
built; we also might automate packaging tests to RPMs)


Our experience with Beaker suggests that you want to support both -
running directly from Git tends to be better for test development, while
using RPMs tends to be better for dependency management and sharing test
infrastructure code.


Which framework to use?
People have no time to learn new things, so we should let them write
the tests in any language and just define some conventions for how to run them.


Taskotron already covers this pretty well (even if invoking Beaker
tests, it would make more sense to do that via Taskotron rather than
directly).


Right, Taskotron involvement seems like the best bet now, but the tests 
should not be tied to it -- in case Taskotron is replaced by some other 
tool for executing tasks in the future, we cannot lose the tests themselves.


That's actually why I don't like the idea of keeping the tests in 
Taskotron's git repo -- that could easily end up relying on some 
Taskotron-specific features, and a potential move to another system, or 
running them as standalone tests, would then be problematic.


Honza

Re: Proposal for integration tests infrastructure

2014-11-03 Thread Honza Horak

On 10/24/2014 07:18 PM, John Dulaney wrote:

 How to deliver tests?
 a/ just use them directly from git (we need to keep some metadata
 for dependencies anyway)
 b/ package them as RPMs (we can keep metadata there; e.g. Taskotron
 will run only tests that have Provides: ci-tests(mariadb) after
 mariadb is built; we also might automate packaging tests to RPMs)


To answer both of these, the plan is to keep Taskotron tasks in their own
git repo; currently this is at (0).

To run the tasks, Taskotron sets up a disposable task client and then git
clones the task to be run.  This solves the issue of delivery and allows
a continuous integration-like solution.


I mentioned it in a different mail in this thread (just sent), but the 
TL;DR version is: I think we should keep the tests separate from Taskotron 
itself, even if they will be run by it.


Honza

Re: Proposal for integration tests infrastructure

2014-11-03 Thread Honza Horak

On 10/28/2014 08:08 AM, Nick Coghlan wrote:

Note that any or all of the above may be appropriate, depending on the
exact nature of the specific tests.

For example, there are already some public Beaker installer tests at
https://bitbucket.org/fedoraqa/fedora-beaker-tests  for execution on
http://beaker.fedoraproject.org/


I didn't know about this instance, interesting.

Can anybody briefly explain the plan for this instance, if there is 
one? It would make sense to me to use Taskotron to run Beaker tests at 
http://beaker.fedoraproject.org, but maybe there are some other plans?


Honza

Re: Proposal for integration tests infrastructure

2014-11-03 Thread Tim Flink
On Mon, 03 Nov 2014 17:22:27 +0100
Honza Horak hho...@redhat.com wrote:

 On 10/28/2014 08:08 AM, Nick Coghlan wrote:
  Note that any or all of the above may be appropriate, depending on
  the exact nature of the specific tests.
 
  For example, there are already some public Beaker installer tests at
  https://bitbucket.org/fedoraqa/fedora-beaker-tests  for execution on
  http://beaker.fedoraproject.org/
 
 I didn't know about this instance, interesting.

It's currently a dev instance (read: not backed up, best-effort level
of support), mostly due to some issues with the instance. The Beaker devs
are aware of the issues and hopefully there will be a fix before too
long.

Once that fix is released, we still need to set up a full production
system and that'll take some doing - not impossible but not really
trivial. We're trying to get that all figured out, but as a short
summary: a working, production Beaker instance for Fedora will take a
bit of time once Beaker is fully compatible with how things are set up
in infra. How long depends on overall priority and on whether a couple of
details work out the way we hope.

I'm happy to go into more details but will skip them for the sake of
brevity on a large distribution cross-posted thread.

 Can anybody briefly explain the plan for this instance, if there is 
 one? It would make sense to me to use Taskotron to run Beaker tests
 at http://beaker.fedoraproject.org, but maybe there are some other
 plans?

That's pretty much what we have in mind, yeah.

Tim



Re: Proposal for integration tests infrastructure

2014-11-03 Thread Tim Flink
On Mon, 03 Nov 2014 17:08:40 +0100
Honza Horak hho...@redhat.com wrote:

 On 10/28/2014 08:08 AM, Nick Coghlan wrote:
  On 10/22/2014 09:43 PM, Honza Horak wrote:
  Fedora lacks integration testing (unit testing done during build
  is not enough). Taskotron will be able to fill some gaps in the
  future, so maintainers will be able to set-up various tasks after
  their component is built. But even before this works we can
  benefit from having the tests already available (and run them
  manually if needed).
 
  Hereby, I'd like to get ideas and figure out answers for how and
  where to keep the tests. A similar discussion already took place
  before, which I'd like to continue in:
  https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html
 
  And some short discussion already took place here as well:
  https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570.html
 
  It's worth clarifying your scope here, as "integration tests" means
  different things to different people, and the complexity varies
  wildly depending on *what* you're trying to test.
 
  If you're just looking at tests of individual packages beyond what
  folks have specified in their RPM %check macro, then this is
  exactly the case that Taskotron is designed to cover.
 
  If you're looking at more complex cases like multihost testing, bare
  metal testing across multiple architectures, or installer
  integration testing, then that's what Beaker was built to handle
  (and has already been handling for RHEL for several years).
 
  That level is where you start to cross the line into true system
  level acceptance tests and you often *want* those maintained
  independently of the individual components in order to catch
  regressions in behaviour other services are relying on.
 
 Good point about defining the scope, thanks. From my POV, we should 
 start with some less complicated scenarios, so we can have 
 something ready to use in a reasonable time.
 
 Let's say the common use case would be defining tests that verify 
 components' basic functionality and that cannot be run during build.
 This should cover simple installation scenarios, running test suites
 that need to be run outside of the build process, or tests that need to
 be run for multiple components at the same time (e.g. testing basic 
 functionality of a LAMP stack). This should also cover issues with 
 SELinux, systemd units, etc. that cannot be tested during build and
 IMHO are often the cause of issues.
 
 I have no problem stating clearly for now that the tests cannot
 define any hardware requirements, not even non-localhost networking. In
 other words, the tests will be run on one machine with any hardware
 and any (or no) network.
 
 However, I'd rather see tests not tied to a particular component,
 since even a simple test might cover two or three of them, and it
 wouldn't be correct to tie it to all of them, nor to only one.

Yeah, I think that package-specific checks are a similar but slightly
different kettle of fish than we're discussing here.

We'd have to figure out how the integration tests would be scheduled
(nightly, on change in a set of packages corresponding to each check,
etc.) but that can wait until we've refined what we're looking to do a
bit more.

 snip
 
  How to deliver tests?
  a/ just use them directly from git (we need to keep some metadata
  for dependencies anyway)
  b/ package them as RPMs (we can keep metadata there; e.g.
  Taskotron will run only tests that have Provides:
  ci-tests(mariadb) after mariadb is built; we also might automate
  packaging tests to RPMs)
 
  Our experience with Beaker suggests that you want to support both -
  running directly from Git tends to be better for test development,
  while using RPMs tends to be better for dependency management and
  sharing test infrastructure code.
 
  Which framework to use?
  People have no time to learn new things, so we should let them
  write the tests in any language and just define some conventions
  for how to run them.
 
  Taskotron already covers this pretty well (even if invoking Beaker
  tests, it would make more sense to do that via Taskotron rather than
  directly).
 
 Right, Taskotron involvement seems like the best bet now, but the tests
 should not be tied to it -- in case Taskotron is replaced by some
 other tool for executing tasks in the future, we cannot lose the
 tests themselves.

While I don't see Taskotron going away anytime soon, I agree that we
should avoid tight coupling where it makes sense to avoid it.

With my captain obvious hat on, the trick is figuring out where the
point of diminishing returns is - too much independence can be just as
problematic as not enough.

 That's actually why I don't like the idea of keeping the tests in 
 Taskotron's git repo -- that could easily end up relying on some 
 Taskotron-specific features, and a potential move to another system, or 
 running them as standalone tests, would then be problematic.

There are a couple of 

RE: Proposal for integration tests infrastructure

2014-10-24 Thread John Dulaney
Some thoughts:

 Where to keep tests?
 a/ in current dist-git for related components (problem with sharing
 parts of code, problem where to keep tests related for more components)
 b/ in separate git with similar functionality as dist-git (needs new
 infrastructure, components are not directly connected with tests,
 won't make a mess in current dist-git)
 c/ in current dist-git but as ordinary components (no new
 infrastructure needed but components are not directly connected
 with tests)

 How to deliver tests?
 a/ just use them directly from git (we need to keep some metadata
 for dependencies anyway)
 b/ package them as RPMs (we can keep metadata there; e.g. Taskotron
 will run only tests that have Provides: ci-tests(mariadb) after
 mariadb is built; we also might automate packaging tests to RPMs)

To answer both of these, the plan is to keep Taskotron tasks in their own
git repo; currently this is at (0).

To run the tasks, Taskotron sets up a disposable task client and then git
clones the task to be run.  This solves the issue of delivery and allows
a continuous integration-like solution.

 Structure for tests?
 a/ similar to what components use (branches for Fedora versions)
 b/ only one branch
 Test maintainers should be allowed to behave the same as package
 maintainers do -- one likes keeping branches the same and uses %if
 %fedora macros, someone else likes clean specs and would rather maintain
 more different branches -- we won't find one structure that would fit all,
 so allowing both ways seems better.

This is something we'll need to figure out, but, I suspect git branches will
be involved.

 Which framework to use?
 People have no time to learn new things, so we should let them
 write the tests in any language and just define some conventions
 for how to run them.

You'll need to at least define the task in a YAML file, and the output will need
to be TAP.  The example task is at (1).
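
For reference, such a formula is a short YAML file naming the task, its inputs
and the actions to run. Very roughly the following shape -- the field names
here are from memory and approximate; the task-rpmlint repo at (1) is the
authoritative example:

  name: rpmlint
  desc: run rpmlint on a koji build
  maintainer: jdulaney

  input:
      args:
          - koji_build
          - arch

  actions:
      - name: run rpmlint on the downloaded rpms
        python:
            file: run_rpmlint.py
            callable: run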


 TAP (Test Anything Protocol) FTW. It really makes sense when you're
 trying to combine tests from multiple different languages, testing
 frameworks, etc.

 Stef


Indeed, which is why we settled on it.
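
For the curious: TAP is just plain text on stdout -- a plan line followed by
one ok/not ok line per test -- so any language can emit it. For example:

  1..3
  ok 1 - mariadb server starts
  ok 2 - can create and query a database
  not ok 3 - SELinux context on the data directory is correct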



John.



(0)  https://bitbucket.org/fedoraqa

(1)  https://bitbucket.org/fedoraqa/task-rpmlint
  

Re: Proposal for integration tests infrastructure

2014-10-24 Thread Tim Flink
On Wednesday, October 22, 2014 01:43:57 PM you wrote:
 Fedora lacks integration testing (unit testing done during build is
 not enough). Taskotron will be able to fill some gaps in the future,
 so maintainers will be able to set-up various tasks after their
 component is built. But even before this works we can benefit from
 having the tests already available (and run them manually if needed).
 
 Hereby, I'd like to get ideas and figure out answers for how and where
 to keep the tests. A similar discussion already took place before,
 which I'd like to continue in:
 https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html
 
 And some short discussion already took place here as well:
 https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570
 .html

Instead of cross-posting to several lists, I'm going to reply just here 
rather than copying/fragmenting the conversation more.

 Some high level requirements:
 * tests will be written by maintainers or broader community, not a
 dedicated team
 * tests will be easy to run on anybody's computer (but might be
 potentially destructive; some secure environment will not be part of
 tests)
 * tests will be run automatically after related components get built
 (probably by Taskotron)

Just to make sure I understand what you're talking about here, you're
talking about mechanical checks, right? Something that is run by a
program and returns a limited-state result (i.e. PASS/FAIL/UNKNOWN)?

I think that you've hit on a lot of what we have in mind for Taskotron,
to be honest.

The tasks in Taskotron are run by libtaskotron and outside of things
like posting results or having access to secrets, do not require any of
the other infrastructure components that make up an entire Taskotron
deployment. The parts of Taskotron outside of libtaskotron are
responsible for scheduling, reporting and managing the execution of
tasks.

Anyone can install libtaskotron, clone a task's git repository and
start running tasks. If this doesn't work in all reasonable cases, then
we have violated one of the core design principles of Taskotron and it
will be fixed.
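
In practice that local workflow looks roughly like this (the NVR and the
formula file name are illustrative; the libtaskotron docs have the exact
flags):

  $ yum install libtaskotron
  $ git clone https://bitbucket.org/fedoraqa/task-rpmlint.git
  $ cd task-rpmlint
  $ runtask -i mariadb-10.0.14-5.fc21 -t koji_build runtask.yml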

By designing for git-repo-contained tasks, a set of people with proper 
permissions can change tasks in pretty much the same way that a group
of developers change source code.

 Where to keep tests?
 a/ in current dist-git for related components (problem with sharing
 parts of code, problem where to keep tests related for more components)
 b/ in separate git with similar functionality as dist-git (needs new
 infrastructure, components are not directly connected with tests,
 won't make a mess in current dist-git)
 c/ in current dist-git but as ordinary components (no new
 infrastructure needed but components are not directly connected with
 tests)

I'm leaning towards a somewhat separate dist-git-ish solution
right now. By keeping it separate, we can't make a mess of the package
ACLs, don't need to worry about giving non-packagers access to the
dist-git repos and aren't adding a bunch of stuff to an already working
system.

I'd also like to see the tasks be easily accessible from checked out
dist-git repos. I'm not sure that submodules or subtrees are good
answers here but having the tasks appear as a subdirectory of dist-git
repos sounds like a good way to integrate things to me.
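
One possible shape of that integration, assuming a separate tests repo (the
URL here is hypothetical), would be a plain git submodule in the dist-git
checkout:

  $ cd mariadb    # a checked-out dist-git repo
  $ git submodule add https://example.org/ci-tests/mariadb.git tests
  $ git commit -m 'Link integration tests as a submodule'

Though, as I said, I'm not sure submodules are the right answer -- this is
just the simplest version of the idea.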

 How to deliver tests?
 a/ just use them directly from git (we need to keep some metadata for
 dependencies anyway)
 b/ package them as RPMs (we can keep metadata there; e.g. Taskotron
 will run only tests that have Provides: ci-tests(mariadb) after
 mariadb is built; we also might automate packaging tests to RPMs)

I'm of the opinion that keeping stuff in plain git is the best choice.
For this particular use case, I'm not aware of any advantages from
packaging checks as long as we're smart about updating git repos prior
to task execution and it's additional overhead - especially if we want
to have non-packagers involved in task creation and maintenance.

 Structure for tests?
 a/ similar to what components use (branches for Fedora versions)
 b/ only one branch
 Test maintainers should be allowed to behave the same as package
 maintainers do -- one likes keeping branches the same and uses %if
 %fedora macros, someone else likes clean specs and would rather maintain
 more different branches -- we won't find one structure that would
 fit all, so allowing both ways seems better.

I think that restricting stuff to a single branch is going to be too 
complicated and messy. The method of branching that is used in dist-git
seems to be pretty well accepted and IMHO it's a logical approach to
allowing per-version check differences without introducing a bunch of
mess and complexity to the tasks to be run.

 Which framework to use?
 People have no time to learn new things, so we should let them
 write the tests in any language and just define some conventions
 for how to run them.

Specifying a single framework to use in all cases would 

Re: Proposal for integration tests infrastructure

2014-10-24 Thread Tim Flink
On Fri, 24 Oct 2014 14:10:23 -0600
Tim Flink tfl...@redhat.com wrote:

 On Wednesday, October 22, 2014 01:43:57 PM you wrote:
  Fedora lacks integration testing (unit testing done during build is
  not enough). Taskotron will be able to fill some gaps in the future,
  so maintainers will be able to set-up various tasks after their
  component is built. But even before this works we can benefit from
  having the tests already available (and run them manually if
  needed).

Bah, I skipped right over this part.

How much interest is there among contributors in having a system for
storing tasks which wouldn't be run or recorded in a central system?
It's an intriguing idea that would have benefits if enough people used
it.

For the folks who would be interested - can you give examples of the
kinds of checks/tasks which you would run in such a setup?

Tim



Re: Proposal for integration tests infrastructure

2014-10-23 Thread Stef Walter

On 22.10.2014 13:43, Honza Horak wrote:
 Fedora lacks integration testing (unit testing done during build
 is not enough). Taskotron will be able to fill some gaps in the 
 future, so maintainers will be able to set-up various tasks after 
 their component is built. But even before this works we can
 benefit from having the tests already available (and run them
 manually if needed).
 
 Hereby, I'd like to get ideas and figure out answers for how and 
 where to keep the tests. A similar discussion already took place 
 before, which I'd like to continue in:
 https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html
 
 And some short discussion already took place here as well:
 https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570.html
 
 Some high level requirements:
 * tests will be written by maintainers or broader community, not a dedicated team
 * tests will be easy to run on anybody's computer (but might be potentially destructive; some secure environment will not be part of tests)
 * tests will be run automatically after related components get built (probably by Taskotron)
 
 Where to keep tests?
 a/ in current dist-git for related components (problem with sharing
 parts of code, problem where to keep tests related for more components)
 b/ in separate git with similar functionality as dist-git (needs new
 infrastructure, components are not directly connected with tests,
 won't make a mess in current dist-git)
 c/ in current dist-git but as ordinary components (no new
 infrastructure needed but components are not directly connected with tests)
 
 How to deliver tests?
 a/ just use them directly from git (we need to keep some metadata
 for dependencies anyway)
 b/ package them as RPMs (we can keep metadata there; e.g. Taskotron
 will run only tests that have Provides: ci-tests(mariadb) after
 mariadb is built; we also might automate packaging tests to RPMs)
 
 Structure for tests?
 a/ similar to what components use (branches for Fedora versions)
 b/ only one branch
 Test maintainers should be allowed to behave the same as package
 maintainers do -- one likes keeping branches the same and uses %if
 %fedora macros, someone else likes clean specs and would rather maintain
 more different branches -- we won't find one structure that would fit all,
 so allowing both ways seems better.
 
 Which framework to use?
 People have no time to learn new things, so we should let them
 write the tests in any language and just define some conventions
 for how to run them.

TAP (Test Anything Protocol) FTW. It really makes sense when you're
trying to combine tests from multiple different languages, testing
frameworks, etc.

Stef


Proposal for integration tests infrastructure

2014-10-22 Thread Honza Horak
Fedora lacks integration testing (unit testing done during build is not 
enough). Taskotron will be able to fill some gaps in the future, so 
maintainers will be able to set up various tasks after their component 
is built. But even before this works we can benefit from having the 
tests already available (and run them manually if needed).


Hereby, I'd like to get ideas and figure out answers for how and where 
to keep the tests. A similar discussion already took place before, which 
I'd like to continue in:

https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html

And some short discussion already took place here as well:
https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000570.html

Some high level requirements:
* tests will be written by maintainers or broader community, not a 
dedicated team
* tests will be easy to run on anybody's computer (but might be 
potentially destructive; some secure environment will not be part of tests)
* tests will be run automatically after related components get built 
(probably by Taskotron)


Where to keep tests?
a/ in current dist-git for related components (problem with sharing 
parts of code, problem where to keep tests related for more components)
b/ in separate git with similar functionality as dist-git (needs new 
infrastructure, components are not directly connected with tests, won't 
make a mess in current dist-git)
c/ in current dist-git but as ordinary components (no new infrastructure 
needed but components are not directly connected with tests)


How to deliver tests?
a/ just use them directly from git (we need to keep some metadata for 
dependencies anyway)
b/ package them as RPMs (we can keep metadata there; e.g. Taskotron will 
run only tests that have Provides: ci-tests(mariadb) after mariadb is 
built; we also might automate packaging tests to RPMs)
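
To make option b/ concrete, a spec fragment could look like the following -- 
the ci-tests() virtual provide is only a proposed convention, nothing that
exists today:

  Name:           ci-tests-mariadb
  Version:        1.0
  Release:        1%{?dist}
  Summary:        Integration tests for the mariadb component
  License:        MIT
  BuildArch:      noarch
  Requires:       mariadb-server
  # the virtual provide a runner would query for after a mariadb build:
  Provides:       ci-tests(mariadb)

The tests for a freshly built component could then be found with e.g.
"repoquery --whatprovides 'ci-tests(mariadb)'".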


Structure for tests?
a/ similar to what components use (branches for Fedora versions)
b/ only one branch
Test maintainers should be allowed to behave the same as package 
maintainers do -- one likes keeping branches the same and uses %if 
%fedora macros, someone else likes clean specs and would rather maintain 
more different branches -- we won't find one structure that would fit all, 
so allowing both ways seems better.
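
For illustration, the single-branch style would rely on the usual spec
conditionals, e.g.:

  %if 0%{?fedora} >= 19
  Requires:       mariadb-server
  %else
  Requires:       mysql-server
  %endif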


Which framework to use?
People have no time to learn new things, so we should let them write 
the tests in any language and just define some conventions for how to run them.


Cheers,
Honza