On 06/27/2016 01:31 PM, Milan Kubík wrote:
On 06/27/2016 02:57 AM, Fraser Tweedale wrote:
Then the failure would be a problem of the preceding test and we would
need to fix it. We are dealing with test side effects
On Fri, Jun 24, 2016 at 12:08:24PM +0200, Milan Kubík wrote:
On 06/24/2016 03:42 AM, Fraser Tweedale wrote:
The issue then is in the wording? The other approach I could have
On Tue, Jun 21, 2016 at 05:01:35PM +0200, Milan Kubík wrote:
Hi Fraser and list,
I have made changes to the test plan on the wiki according to the
information in "[Testplan review] Sub CAs" thread.
I also implemented the tests in the test plan:
patch 0038 - CATracker and CA CRUD test
patch 0039 - extension to CA ACL test
patch 0040 - functional test with ACLs and certificate profile,
previous S/MIME based tests. This patch also tests the
behavior when the profile ID or CA cn are omitted.
The tests currently do not verify the issuer name of the certificate
from the IPA entry of the certificate.
The approach you are using::
assert cert_info['result']['issuer'] ==
is not quite as you describe (these are virtual attributes, not
attributes of an actual entry); but the approach is valid.
is to retrieve the two certificates and compare the fields manually.
Are these virtual attributes created from the certificate itself?
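Comparing the fields manually would mean extracting the issuer DN from each certificate and matching the two strings. One subtlety is that DN strings can differ in spacing and attribute-type case without differing in meaning. A minimal, self-contained sketch of that comparison step (the helper names are hypothetical, not from the patches):

```python
# Hypothetical helper: normalize an issuer DN string before comparison,
# so spacing and attribute-type case differences do not cause false
# mismatches (e.g. "CN=Sub CA, O=EXAMPLE.TEST" vs "cn=Sub CA,o=EXAMPLE.TEST").
# Note: a naive split on ',' does not handle escaped commas inside
# attribute values; a real implementation would parse per RFC 4514.
def normalize_dn(dn):
    avas = [ava.strip() for ava in dn.split(',')]
    normalized = []
    for ava in avas:
        attr, _, value = ava.partition('=')
        normalized.append('{}={}'.format(attr.strip().lower(), value.strip()))
    return ','.join(normalized)

def issuers_match(issuer_a, issuer_b):
    return normalize_dn(issuer_a) == normalize_dn(issuer_b)
```

For example, `issuers_match("CN=Sub CA, O=EXAMPLE.TEST", "cn=Sub CA,o=EXAMPLE.TEST")` evaluates to `True` despite the cosmetic differences.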
The ACL, S/MIME CA, and S/MIME profile lifetime is constrained by the
Fraser, could you please verify my reasoning behind the cert-request
test cases in patch 0040?
The tests look OK. With the default CA / default profiles, is there
appropriate isolation between test cases to ensure that if, e.g.,
some other test case adds/modifies CA ACLs such that these
expected-to-fail tests now pass, this does not affect the
TestCertSignMIMEwithSubCA test case?
enforced by pytest.
The two test cases depend on the defaults, documented in the designs, that
cert-request falls back to when the CA or profile ID are not provided.
Unless something changes the caIPAserviceCert profile or its affiliated ACL,
the test cases should behave as expected.
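The fallback behavior these test cases rely on can be illustrated in isolation. This is only a sketch of the resolution rule (the function and constants are hypothetical; the real resolution happens server-side in cert-request): when the caller omits the CA, the request goes to the default IPA CA, and when the profile ID is omitted, the caIPAserviceCert profile is used.

```python
# Hypothetical sketch of the default-fallback rule under test:
# cert-request falls back to the IPA CA entry (cn "ipa") and the
# caIPAserviceCert profile when the caller omits those options.
DEFAULT_CA = 'ipa'
DEFAULT_PROFILE_ID = 'caIPAserviceCert'

def resolve_request_targets(cacn=None, profile_id=None):
    """Return the (CA, profile) pair a request would effectively use."""
    return (cacn or DEFAULT_CA, profile_id or DEFAULT_PROFILE_ID)
```

A test asserting the expected-to-fail behavior would then pin down both the explicit and the fallback paths, e.g. `resolve_request_targets(cacn='smime-ca')` still falls back to the default profile.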
If you have thought about possible interference from other tests, I
Note another problematic scenario: what if a different (preceding)
test adds a CA ACL that would allow the requests that you expect to
fail? Just something to think about :)
in other parts of the execution already...
The test is constructed in a way that isolates it (to a certain degree)
by the mechanisms available in pytest. Of course I cannot make the test
future-proof or guarantee that a bug in some other test will not affect
the execution of other tests, as they all run against one IPA instance.
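The isolation mechanism described above can be sketched without a running IPA instance. The tracker class and its methods below are hypothetical stand-ins for the tracker-backed fixtures in the patches; the point is only that cleanup runs unconditionally when the scope ends, as a class-scoped pytest fixture finalizer would, even if a test inside the class fails:

```python
import contextlib

# Hypothetical stand-in for a tracker-backed fixture: the tracked
# entry (CA, profile, or ACL) exists only for the lifetime of the
# 'with' block, mirroring a class-scoped fixture whose finalizer
# deletes the entry even when a test inside the class raises.
class EntryTracker:
    def __init__(self, name):
        self.name = name
        self.exists = False

    def ensure_exists(self):
        self.exists = True

    def ensure_missing(self):
        self.exists = False

@contextlib.contextmanager
def tracked_entry(name):
    tracker = EntryTracker(name)
    tracker.ensure_exists()
    try:
        yield tracker
    finally:
        # Cleanup runs unconditionally, like a fixture finalizer.
        tracker.ensure_missing()
```

This bounds the lifetime of the objects to the test class, but it cannot protect against a preceding test that leaves behind a conflicting ACL, which is the scenario raised above.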
I do not think, however, that a potentially misbehaving test case
should prevent us from implementing this and similar test cases.
If you have some specific issue with the patch, I'm happy to fix it.
I will try to think more about corner cases here.
Attaching rebased patches and removing the expected failure from one of the
tests, as ticket 5981 has a fix posted.
Can we continue with the review, please?
Manage your subscription for the Freeipa-devel mailing list:
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code