On 11/01/2016 11:36 AM, Jakub Hrozek wrote:
On Tue, Nov 01, 2016 at 10:31:20AM +0200, Nikolai Kondrashov wrote:
Hi Jakub,
On 10/27/2016 05:20 PM, Jakub Hrozek wrote:
I'm currently working on integration tests for the 'files' provider and
during this work I started to feel we are already pushing the boundaries
of our test infrastructure quite a bit. When SSSD talks over the network to
a server, then we're more or less okay, but for some parts of SSSD, like
the files provider, we have to mock a lot of pieces, and the end result
is that we are testing something that resembles the target system, but
probably has its own bugs. Additionally, we can't run some tests at all
(anything against IPA) and I suspect we'll be running into this sort of
problem even more in the future.
So I'm interested in hearing others' thoughts on exploring how to run
some of our tests in a privileged environment, either in a VM or in a
container.
Our current tests have the big advantage of being able to provision a
test locally in a screen session, but maybe something similar would be
possible by e.g. running a screen in a container and attaching to its
tty.. And for "simple" tests like the LDAP provider we could keep the current
infrastructure around.
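Attaching from the host could then be a one-liner around "docker exec"; a
minimal sketch in Python, where the container and session names are made up:

    # Sketch: attach to a screen session running inside a test container.
    # Assumes a running container (here called "sssd-test", a made-up
    # name) that has screen installed with a detached session "sssd".
    import subprocess

    def attach_test_screen(container="sssd-test", session="sssd"):
        # "docker exec -it" allocates an interactive tty in the container;
        # "screen -rd" reattaches the session, detaching it elsewhere.
        subprocess.call(["docker", "exec", "-it", container,
                         "screen", "-rd", session])

    if __name__ == "__main__":
        attach_test_screen()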
I agree that the current test setup is limiting, can be difficult to work
with, and brings its own issues. However, I believe we haven't employed it
fully yet, and I'm not sure how spending time on another test system compares
to just writing more tests (e.g. for Samba, and more general tests).
Yes, many tests can continue to use the wrapped scenario. I
think I will be perfectly happy writing tests for the KCM service using a
cwrapped KDC. If your test talks to a server, then as long as the server
is running and you're talking to it over socket_wrapper, you're fine.
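For reference, running a client under socket_wrapper boils down to a couple
of environment variables; a rough sketch (the kinit command is just an
example):

    # Sketch: run a client with its network traffic redirected through
    # socket_wrapper. LD_PRELOAD must resolve to the installed
    # libsocket_wrapper.so; the example command below is illustrative.
    import os
    import subprocess
    import tempfile

    def run_wrapped(cmd):
        env = dict(os.environ)
        env["LD_PRELOAD"] = "libsocket_wrapper.so"
        # directory where the wrapper keeps its fake-network sockets
        env["SOCKET_WRAPPER_DIR"] = tempfile.mkdtemp()
        # our address on the fake network will be 127.0.0.9
        env["SOCKET_WRAPPER_DEFAULT_IFACE"] = "9"
        return subprocess.call(cmd, env=env)

    # e.g. run_wrapped(["kinit", "user@TEST.REALM"]) against a wrapped KDC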
But to give a concrete example, here are the issues I ran into with the
files provider tests:
- in production, the files provider dlopens libnss_files.so and
calls its functions. For the tests I had to write a different module,
one that calls the nss_wrapper functions instead, in order to reach
nss_wrapper's passwd and group files. This is just additional work,
but I wouldn't have to do it in a VM/container (see the first sketch
after this list).
- more importantly, one part of testing the files provider
is detecting the changes the admin makes to /etc/passwd and
/etc/group. For the tests, I wrote a simple module that modifies
nss_wrapper's passwd and group files, but then I had to take care
that my module changed the files the same way shadow-utils does
(create a new file and replace the old one with it, rather than
modifying the old file in place), because otherwise the files provider
received different inotify callbacks (see the second sketch after this
list). In a container or a VM, I would just call the shadow-utils
binaries.
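To illustrate the first point, here is roughly what "dlopen libnss_files.so
and call its functions" amounts to, translated into Python's ctypes purely
for illustration (the real provider does this in C):

    # Sketch: look up a user directly through glibc's files NSS module,
    # bypassing nsswitch.conf, which is what the files provider does.
    import ctypes

    class Passwd(ctypes.Structure):
        _fields_ = [("pw_name", ctypes.c_char_p),
                    ("pw_passwd", ctypes.c_char_p),
                    ("pw_uid", ctypes.c_uint32),
                    ("pw_gid", ctypes.c_uint32),
                    ("pw_gecos", ctypes.c_char_p),
                    ("pw_dir", ctypes.c_char_p),
                    ("pw_shell", ctypes.c_char_p)]

    nss = ctypes.CDLL("libnss_files.so.2")   # the dlopen() step
    pwd = Passwd()
    buf = ctypes.create_string_buffer(1024)
    errno = ctypes.c_int(0)
    status = nss._nss_files_getpwnam_r(b"root", ctypes.byref(pwd), buf,
                                       ctypes.c_size_t(len(buf)),
                                       ctypes.byref(errno))
    if status == 1:                          # NSS_STATUS_SUCCESS
        print(pwd.pw_name, pwd.pw_uid, pwd.pw_dir)

And for the second point, the two ways of changing a file, sketched out;
shadow-utils uses the rename pattern, and a watcher sees different inotify
events for each (IN_MOVED_TO on the directory vs. IN_MODIFY on the file):

    # Sketch: atomic-replace vs. in-place modification of a watched file.
    import os
    import tempfile

    def replace_atomically(path, contents):
        # what shadow-utils effectively does: write a new file next to
        # the old one and rename it over; watchers get IN_MOVED_TO
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            f.write(contents)
        os.rename(tmp, path)

    def edit_in_place(path, contents):
        # truncates and rewrites the same inode; watchers get IN_MODIFY
        with open(path, "w") as f:
            f.write(contents)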
Other examples of things we can't test easily with wrappers are D-Bus
system bus interfaces (these wouldn't be too hard to wrap, though,
'just' additional work) and anything that involves an IPA server.
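(For completeness: the system bus can at least be redirected per-process
for tests; a sketch, with a made-up test binary name:)

    # Sketch: point code under test at a private bus instead of the real
    # system bus. DBUS_SYSTEM_BUS_ADDRESS is honored by libdbus; the
    # test binary name is made up.
    import os
    import subprocess

    daemon = subprocess.Popen(
        ["dbus-daemon", "--session", "--print-address"],
        stdout=subprocess.PIPE)
    address = daemon.stdout.readline().strip().decode()

    env = dict(os.environ, DBUS_SYSTEM_BUS_ADDRESS=address)
    subprocess.call(["./test-under-dbus"], env=env)
    daemon.terminate()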
Yes, I agree, a VM or a container can help with many scenarios.
Regardless, I've been casually thinking about this for a while and have this
to say. I think being able to easily run tests locally, on your own machine,
is very important. It is also important to let outside developers run those
tests with minimal setup.
Yes, I really like being able to spin up a screen session and work on my
test there. But what I'm worried about (see the files provider examples
above) is that by writing wrappers and helpers so that my tests run in a
cwrapped environment, I'm actually testing something different from what
the real system would do.
Yes. If we can make containers and VMs as portable, easy, and fast to run,
that would be better.
The Docker container registry helps with that a lot: we can just publish the
containers we build and pull/refresh them automatically with each run, and so
could outside developers. At least as far as I understand it. I don't think
there is a similar service for VM images, and I don't think we want to build
our own.
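As a sketch of the per-run refresh (assuming the Docker SDK for Python; the
image name and test entry point are made up):

    # Sketch: refresh the published test image and run the suite in it.
    import docker  # the Docker SDK for Python

    client = docker.from_env()
    # pull always consults the registry, so every run gets the latest build
    client.images.pull("registry.example.com/sssd/test-env", tag="latest")
    output = client.containers.run(
        "registry.example.com/sssd/test-env:latest",
        "run-tests.sh",          # made-up entry point for the test suite
        remove=True)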
There is such a service for VM images:
https://atlas.hashicorp.com/boxes/search
but of course, this would require the system running the test to have
vagrant installed.
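A runner could drive it through the vagrant CLI; a minimal sketch (the box
name is only an example):

    # Sketch: provision a box from the public catalogue and run a command
    # in it. The box name is only an example.
    import subprocess

    def vagrant_up(box="fedora/25-cloud-base", workdir="."):
        subprocess.check_call(["vagrant", "init", box], cwd=workdir)
        subprocess.check_call(["vagrant", "up"], cwd=workdir)

    def vagrant_run(command, workdir="."):
        subprocess.check_call(["vagrant", "ssh", "-c", command],
                              cwd=workdir)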
Well, as long as it's present in all major distros and doesn't require much
manual setup, we should be fine.
Looking at how Cockpit runs its tests: they simply publish qcow2 disk
images on the Internet and fetch them from there..
That's nice. I was thinking about something like this. How would we manage
maintenance and updates, though? The images will probably be quite big and
we'll need to implement some caching scheme. The latter part is easy, though,
as long as clocks are more-or-less in sync.
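A sketch of such a caching scheme, revalidating with If-Modified-Since
(which is where the clock requirement comes from; the URL is made up):

    # Sketch: download a published image only if it changed upstream.
    import email.utils
    import os
    import shutil
    import urllib.error
    import urllib.request

    def fetch_image(url, dest):
        req = urllib.request.Request(url)
        if os.path.exists(dest):
            # ask the server to skip the download if our copy is current
            mtime = os.path.getmtime(dest)
            req.add_header("If-Modified-Since",
                           email.utils.formatdate(mtime, usegmt=True))
        try:
            with urllib.request.urlopen(req) as resp, \
                 open(dest, "wb") as out:
                shutil.copyfileobj(resp, out)
        except urllib.error.HTTPError as err:
            if err.code != 304:   # 304 Not Modified: keep the cached copy
                raise

    fetch_image("https://example.com/images/sssd-test.qcow2",
                "sssd-test.qcow2")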
Even though Docker is still limiting, it is less limiting than the current
setup.
(Replying to Michal also here..)
In my view, whether we use containers or VMs should be a detail as far
as the test is concerned; the test would just execute and output its
results "somewhere". Of course, the test runner must set up the
environment for the test, and that's where we either provision a VM or a
container.
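One possible shape for that split, sketched with made-up names; the test
itself only ever calls run():

    # Sketch: two provisioning backends behind one tiny interface, so
    # the test never knows which environment it runs in.
    import subprocess

    class ContainerEnv:
        def __init__(self, image):
            self.cid = subprocess.check_output(
                ["docker", "run", "-d", image,
                 "sleep", "infinity"]).strip().decode()

        def run(self, cmd):
            return subprocess.call(["docker", "exec", self.cid] + cmd)

        def destroy(self):
            subprocess.call(["docker", "rm", "-f", self.cid])

    class VagrantEnv:
        def run(self, cmd):
            return subprocess.call(["vagrant", "ssh", "-c",
                                    " ".join(cmd)])

        def destroy(self):
            subprocess.call(["vagrant", "destroy", "-f"])

    # a test run is then env.run(["./run-tests.sh"]) on either backend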
I would be careful here. We can sink a lot of time into making the tests
environment-agnostic. It might be more productive to just port them to a new
environment as necessary.
Containers might be more dense (less resource-hungry) and it's quite
easy to bind-mount directories to share data like log files or the test
output.
Yes, this is nice. However, you can still use something like sshfs to get at
a VM's files easily. It's not as ready-made as with Docker, but it's
relatively easy to do.
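Both variants, sketched (paths and names are illustrative):

    # Sketch: share log files with the host. For a container, bind-mount
    # a host directory; for a VM, mount its filesystem over sshfs.
    import subprocess

    def run_container_with_logs(image, host_logdir):
        # whatever the tests write to /var/log/sssd lands in host_logdir
        subprocess.check_call(["docker", "run", "-d", "-v",
                               host_logdir + ":/var/log/sssd", image])

    def mount_vm_logs(vm_host, mountpoint):
        # needs sshfs on the host; unmount later with "fusermount -u"
        subprocess.check_call(["sshfs",
                               vm_host + ":/var/log/sssd", mountpoint])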
On the other hand, I know that running some services, like IPA, in a
container is not totally straightforward.
Yes, this worries me as well.
Nick