Re: making test suites work the same way

2017-02-06 Thread Kamil Paral
> Well, after more discussions with kparal, we are still unsure about the
> "right" way to tackle this.
> Our current call would be:
> 1) sync requirements.txt versions with Fedora (mostly done)
> 2) allow --system-site-packages in the test_env
> 3) add `pip install -r requirements.txt` (with possible flags to enforce
> versions) to the makefile virtualenv creation step (see the sketch after
> this quote)
> 4) add info to the readme that testing requires installing packages from
> PyPI, and that some of them need compilation
> 4-1) put together a list of packages (the python-foobar kind, not -devel +
> gcc) that need to be installed into the system, in order to "skip" the
> stuff that needs to be compiled

> Sounds reasonable, Kamil? Others?
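A rough sketch of what steps 2 and 3 of that plan could look like in the
makefile (target names and paths are illustrative, not taken from the actual
repos):

    # step 2: virtualenv with access to system site packages
    # step 3: pinned deps from requirements.txt installed on top of it
    test_env:
    	virtualenv --system-site-packages test_env
    	test_env/bin/pip install -r requirements.txt

    # assumes pytest is listed in requirements.txt
    test: test_env
    	test_env/bin/pytest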

I went back and forth on this. I thought it would be a really simple change,
and as usual, it turned out to be more pain than gain. So, I went forward with
this:
1. add tox.ini to the projects to allow simple test suite execution with
`pytest` (non-controversial)
2. configure tox.ini to print out test coverage (non-controversial; a sketch
follows after this list)
3. remove --system-site-packages from all places (readme, makefile) for those
projects that can be *fully* installed from PyPI *without any compilation*
(hopefully non-controversial)
4. keep (or add) --system-site-packages in the readme/makefile for the
remaining projects, and add info to the readme on how to deal with PyPI
compilation or local RPM installation
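For steps 1 and 2, a minimal tox.ini sketch; the Python version and the
coverage module name are just examples, adjust per project:

    [tox]
    envlist = py27

    [testenv]
    deps =
        -rrequirements.txt
        pytest
        pytest-cov
    commands = pytest --cov=resultsdb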

What Josef mentioned is that he wouldn't try to replicate a perfect environment 
directly on the dev machine, because that's a lot of work. Instead, use the 
current non-perfect environment on dev machines (which should be fine most of 
the time anyway) and have a separate CI service (hopefully in the future) with 
a stricter environment configuration. I guess that's the most practical 
solution. 

We might even want to reopen the question of how to version deps in 
requirements.txt vs the spec file, but I'd keep that for a separate thread, if 
needed. 

My current patches for the resultsdb projects are these: 
https://phab.qa.fedoraproject.org/D1114 
https://phab.qa.fedoraproject.org/D1116 
https://phab.qa.fedoraproject.org/D1117 


Re: making test suites work the same way

2017-02-06 Thread Josef Skladanka
On Mon, Feb 6, 2017 at 1:35 PM, Kamil Paral wrote:

>
> That's a good point. But do we have a good alternative here? If we depend
> on packages like that, I see only two options:
>
> a) ask the person to install pyfoo as an RPM (in the readme)
> b) ask the person to install gcc and libfoo-devel as RPMs (in the readme),
> and pyfoo will then be compiled and installed from PyPI
>
> Approach a) is somewhat easier and does not require a compilation stack and
> devel libraries. OTOH it requires using a virtualenv with
> --system-site-packages, which means people get different results on
> different setups. That's exactly what I'm trying to eliminate (or at least
> reduce). E.g. https://phab.qa.fedoraproject.org/D where I can run the
> test suite from the makefile and you can't, and it's quite difficult to
> figure out why.
>
>
> With the b) approach, you need a compilation stack on the system. I don't
> think it's such a huge problem, because you're a developer after all. The
> advantage is that the virtualenv can be created without
> --system-site-packages, which means locally installed libraries do not
> affect the execution/test suite results. Also, pyfoo is installed with
> exactly the right version, further reducing differences between setups. The
> only thing that can differ is the version of libfoo-devel, which can affect
> the behavior. But the likelihood of that happening is much smaller than
> having pyfoo of a different version or pulling any deps from the system
> site packages.
>
>
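Spelled out as commands, the two options could look roughly like this on
Fedora (package names and the version pin are placeholders):

    # option a): install the dep as a system RPM; the virtualenv then needs
    # --system-site-packages to see it
    dnf install python-pyfoo

    # option b): install the compilation stack and build pyfoo from PyPI,
    # pinned to the exact version from requirements.txt
    dnf install gcc libfoo-devel
    pip install pyfoo==1.2.3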
> The reason why I want to recommend `make test` for running the test suite
> (at least in the readme) is that in the makefile we can ensure that a clean
> virtualenv with the correct properties is created, and that only and
> exactly the right versions of deps from requirements.txt are installed. We
> can perform further necessary steps, like installing the project itself.
> That further increases reliability. Compare this to manually running
> `pytest`: a custom virtualenv must be active; it can be configured
> differently than recommended in the readme, it can be out of date, or it
> can have more packages installed than needed; and you might forget some
> necessary steps.
>
>
Sure, I am a devel, but not a C-devel... As I told you in our other
conversation - I see what you are trying to accomplish, but for me the gain
does not balance out the issues. With variant a), all you need to do is
make sure "these python packages are installed" to run the test suite. I'd
rather have something like `requirements_testing.txt` where all the deps
are spelled out with the proper versions, and use that as the base for
populating the virtualenv (I guess we could easily make do with the
requirements.txt we have now). Either you have the right version on your
system (or in your own development virtualenv from which you are running
the tests), or the right version will be installed for you from pip.
Yes, we might get down to people having to install a bunch of header files
and gcc, if for some reason their system is so different that they cannot
obtain the right version in any other way, but it will work most of the
time.
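A sketch of that idea; the pinned packages and versions below are purely
illustrative:

    # requirements_testing.txt -- all test suite deps, spelled out with versions
    pytest==3.0.5
    mock==2.0.0

and then populate the virtualenv (or your own dev environment) with:

    pip install -r requirements_testing.txt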



> Of course nothing prevents you from simply running the test suite using
> `pytest`. It's the same approach that Phab will follow when submitting a
> patch. However, when some issue arises, I'd like all parties to be able to run
> `make test` and it should return the same result. That should be the most
> reliable method, and if it doesn't return the same thing, it means we have
> an important problem somewhere, and it's not just "a wrongly configured
> project on one dev machine".
>
> So, I see these main use cases for `make test` and the b) approach:
> * a good, reliable default for newcomers, an approach that's the least
> likely to go wrong
> * determining the reason for failures that only one party sees and the
> other doesn't
> * a `make test-ci` target, which will hopefully be used one day to perform
> daily/per-commit CI testing of our codebases, again using the most
> reliable method available (a sketch follows after this quote)
>
>
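Such a test-ci target could be as simple as the following, reusing the
hypothetical test_env target from the makefile sketch above (the output flag
is just one option pytest offers):

    # like `test`, but also write machine-readable results for the CI service
    test-ci: test_env
    	test_env/bin/pytest --junitxml=results.xml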
Sure, nobody forces _me_ to do it this way, but I still fail to see the
overall general benefit. If a random _python web app_ project that I wanted
to submit a patch for wanted me to install gcc and tons of -devel libs, I'd
move on to the next one. We talked a lot about "accessibility" with Phab,
and one of the arguments against it (not saying it was you in particular)
was that "it is complicated, and needs additional packages installed". This
is an even worse version of the same. At least to me.
On top of that - who is going to be syncing up the versions of said
packages between Fedora (our target) and requirements.txt? What release
are we going to use as the target? And is it even the right place and
way to do it?



> For some codebases this is not viable anyway, e.g. libtaskotron, because
> they depend on packages not available on PyPI (koji) and thus need
> --system-site-packages. But e.g. resultsdb