Hi Ludwig,

Thanks for the clarification! But then why is deleting a CSRUV allowed on a working replica? Shouldn't we keep this behavior consistent somehow?

On 10/07/2016 09:29 AM, Ludwig Krispenz wrote:

On 09/13/2016 10:10 AM, Oleg Fayans wrote:
Hi Ludwig,

The ipa-replica-manage clean-ruv command sometimes does not quite work.

For example: I have a master and 2 replicas. Initial output of
'ipa-replica-manage list-ruv' looks like this:


Replica Update Vectors:
    f24replica2.pesen.net:389: 7
    f24master.pesen.net:389: 4
    f24replica1.pesen.net:389: 3
Certificate Server Replica Update Vectors:
    f24master.pesen.net:389: 6
    f24replica1.pesen.net:389: 5
    f24replica2.pesen.net:389: 8


When I do 'ipa-replica-manage clean-ruv 5' and then list-ruv, it shows
the expected result:

Replica Update Vectors:
    f24replica2.pesen.net:389: 7
    f24master.pesen.net:389: 4
    f24replica1.pesen.net:389: 3
Certificate Server Replica Update Vectors:
    f24master.pesen.net:389: 6
    f24replica2.pesen.net:389: 8

But when I then do 'ipa-replica-manage clean-ruv 3', the command
executes successfully, but list-ruv still shows 5 RUVs instead of 4.

After all nodes are restarted, 5 RUVs are still displayed, but if I
clean RUV 3 manually again, it works and leaves the expected 4 RUVs.

Do you have an idea what it might be and how to debug this?
Did you remove the replica before cleaning the RUV? You cannot just run
clean-ruv for an active replica; the RUV will always come back.
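
In test terms that means something like this first (untested sketch;
'replica_id' here is a placeholder for the RUV id taken from list-ruv
output):

    # delete the replica from the topology first; cleaning the RUV of an
    # active replica is futile, replication just recreates it
    master.run_command(['ipa-replica-manage', 'del', replica.hostname,
                        '-p', master.config.dirman_password, '--force'])
    master.run_command(['ipa-replica-manage', 'clean-ruv', replica_id, '-f',
                        '-p', master.config.dirman_password])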



On 08/05/2016 06:36 PM, Martin Basti wrote:


On 03.08.2016 14:45, Oleg Fayans wrote:
Hi Martin,

Thanks for the review! Both patches were updated.

On 07/28/2016 04:11 PM, Martin Basti wrote:


On 08.07.2016 15:41, Oleg Fayans wrote:
Hi Martin,

Thanks for the review!

On 07/08/2016 02:18 PM, Martin Basti wrote:


On 27.06.2016 13:53, Oleg Fayans wrote:
Hi guys,

Is there a chance the patches NN 0047.1 and 0048.1 get reviewed
before
4.4 release? They cover a good part of the Managed Topology 4.4
feature.

On 06/17/2016 11:18 AM, Oleg Fayans wrote:
One more test was added to the patch-0048

On 06/17/2016 09:43 AM, Oleg Fayans wrote:
Fixed a bug in the previous patch, automated 2 more testcases
from
http://www.freeipa.org/page/V4/Manage_replication_topology_4_4/Test_Plan

On 06/16/2016 04:46 PM, Oleg Fayans wrote:

IIUC, this will turn off the machine completely; how is cleanup done
then? AFAIK our tests cannot turn the machine back on and run cleanup,
so you will not be able to run more tests on the same topology without
a manual cleanup and a manual start.

+        replica = self.replicas[0]
+        replica.run_command(['poweroff'])

IMO it would be better to just call 'ipactl stop' instead of 'poweroff'.

Agreed! Fixed.


Martin^2




*Automated ipa-replica-manage del tests*

1)
+        replica.run_command(['ipactl', 'stop'])
+        time.sleep(3)

Why do you need sleep here?

Removed, it was left from the old "poweroff" approach



2)
+        ruvid_re = re.compile(r".*%s:389: (\d+).*" % replica.hostname)
+        replica_ruvs = ruvid_re.findall(result.stdout_text)
+        master.run_command(['ipa-replica-manage', 'clean-ruv', '-f',
+                            '-p', master.config.dirman_password,
+                            replica_ruvs[0]])

Because you are using re.findall(), you will get an IndexError at
replica_ruvs[0] if there is no match. IMO it deserves an assert before.

Implemented an assert which checks that the output contains enough
replica RUVs.
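
For the record, the check now looks roughly like this (a sketch, not
the exact patch hunk):

    import re

    ruvid_re = re.compile(r'.*%s:389: (\d+).*' % re.escape(replica.hostname))
    replica_ruvs = ruvid_re.findall(result.stdout_text)
    # both the plain and the CS RUV line should match, so guard the
    # indexing below against an empty findall() result
    assert len(replica_ruvs) > 1, (
        'Expected at least 2 RUVs of %s in list-ruv output' % replica.hostname)
    master.run_command(['ipa-replica-manage', 'clean-ruv', '-f',
                        '-p', master.config.dirman_password,
                        replica_ruvs[0]])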


3)
assert(replica.hostname in result1.stdout_text)

I think that this is error prone. What if there is just an error like
'could not connect to replica <replica hostname>' or something similar,
instead of the listing/cleaning/whatever operation that was executed?
I think it should be a more specific regexp than just finding the
replica name as a substring. (Yes, in IPA we don't always print errors
to stderr.)

I'm not sure, but there might be cases when a non-critical error
happens and the exit status is still 0.

Agreed. Implemented a regex-based search.
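
Roughly like this (sketch): the point is to match the exact list-ruv
line rather than any occurrence of the hostname, e.g. in an error
message:

    import re

    ruv_line_re = re.compile(r'^\s*%s:389: \d+$' % re.escape(replica.hostname),
                             re.MULTILINE)
    assert ruv_line_re.search(result1.stdout_text), (
        'list-ruv did not report a RUV for %s' % replica.hostname)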


4)

+        replica.run_command(['poweroff'])
+        time.sleep(3)

There should not be a poweroff; the sleep could probably be removed too.

Gone



*Automated clean-ruv subcommand test*

1) PEP8, 2 blank lines expected:
./ipatests/test_integration/test_topology.py:163:1: E302 expected 2 blank lines, found 0
./ipatests/test_integration/test_topology.py:182:80: E501 line too long
(85 > 79 characters)

Fixed



2)
I don't like doing the assert just with a count of occurrences of a
substring in STDOUT; would it be possible to improve this somehow?

Maybe, but frankly, I don't see how. In this case we are making sure
that both the simple and the CA-specific RUVs of a replica are
displayed. The format of the output is strict:

Replica Update Vectors:
    replica1_hostname:389: RUV_id
    replica2_hostname:389: RUV_id
Certificate Server Replica Update Vectors:
    replica1_hostname:389: RUV_id
    replica2_hostname:389: RUV_id

If we do not see 2 occurrences of the replica hostname, then definitely
something went wrong.
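
The only stricter check I can think of is splitting the output on the
CS section header and asserting one hit per section, e.g. (sketch,
using the test's result/replica objects):

    # check each section separately instead of a raw substring count
    header = 'Certificate Server Replica Update Vectors:'
    plain_part, _, cs_part = result.stdout_text.partition(header)
    assert replica.hostname in plain_part, 'plain RUV is missing'
    assert replica.hostname in cs_part, 'CS RUV is missing'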


3)
I'm not sure if clean-ruv is an instant operation or whether there is
some magic happening in the background (we have abort-clean-ruv). Maybe
some sleep should be there, but this needs investigation.

+        assert(replica.hostname in result2.stdout_text), (
+            "The wrong RUV was deleted")
+        result3 = master.run_command(['ipa-replica-manage', 'list-ruv',
+                                      '-p', master.config.dirman_password])
+        assert(result3.stdout_text.count(replica.hostname) == 1), (
+            "CA RUV of the replica is still displayed")


Based on my discussion with Stanislav Laznicka, I understood that by
default clean-ruv does not return to the shell until the operation is
finished. You can force dropping back into the shell by pressing
CTRL+C, in which case the background job will still be running, but
this is not the default behavior.
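
If we ever do interrupt it, polling the task list would be safer than a
fixed sleep, something like this (untested sketch):

    import time

    # wait for the CLEANALLRUV task to finish instead of sleeping blindly
    for _ in range(30):
        result = master.run_command(['ipa-replica-manage', 'list-clean-ruv',
                                     '-p', master.config.dirman_password])
        if 'No CLEANALLRUV tasks running' in result.stdout_text:
            break
        time.sleep(2)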

Test failed:

        result4 = master.run_command(['ipa-replica-manage', 'list-ruv',
                                      '-p', master.config.dirman_password])
        assert(replica.hostname not in result4.stdout_text), (
            "replica's RUV is still displayed")
E       AssertionError: replica's RUV is still displayed
E       assert 'replica3.ipa.test' not in 'Replica Update
V...ipa.test:389: 8\n'
E         'replica3.ipa.test' is contained here:
E           Replica Update Vectors:
E           \tmaster.ipa.test:389: 4
E           \treplica3.ipa.test:389: 3
E           \treplica2.ipa.test:389: 7
E           Certificate Server Replica Update Vectors:
E           \tmaster.ipa.test:389: 6
E           \treplica2.ipa.test:389: 8


[root@master ~]# ipa topologysegment-find
Suffix name: domain
------------------
2 segments matched
------------------
  Segment name: master.ipa.test-to-replica2.ipa.test
  Left node: master.ipa.test
  Right node: replica2.ipa.test
  Connectivity: both

  Segment name: master.ipa.test-to-replica3.ipa.test
  Left node: master.ipa.test
  Right node: replica3.ipa.test
  Connectivity: both
----------------------------
Number of entries returned 2
----------------------------
[root@master ~]# ipa-replica-manage list-ruv
Directory Manager password:

Replica Update Vectors:
    master.ipa.test:389: 4
    replica2.ipa.test:389: 7
    replica3.ipa.test:389: 3
Certificate Server Replica Update Vectors:
    master.ipa.test:389: 6
    replica2.ipa.test:389: 8
[root@master ~]#

Then I tried to clean RUV 3 manually, and it behaves somewhat oddly:

[root@master ~]# 'ipa-replica-manage' 'clean-ruv' '3' '-p' 'Secret123' '-f'
Clean the Replication Update Vector for replica3.ipa.test:389
Background task created to clean replication data. This may take a while.
This may be safely interrupted with Ctrl+C
Cleanup task created
[root@master ~]# less /var/log/dirsrv/slapd-IPA-TEST/errors
[root@master ~]# ipa-replica-manage list-ruv
Directory Manager password:

Replica Update Vectors:
    master.ipa.test:389: 4
    replica2.ipa.test:389: 7
    replica3.ipa.test:389: 3
Certificate Server Replica Update Vectors:
    master.ipa.test:389: 6
    replica2.ipa.test:389: 8
[root@master ~]# 'ipa-replica-manage' 'clean-ruv' '3' '-p' 'Secret123' '-f'
Clean the Replication Update Vector for replica3.ipa.test:389
CLEANALLRUV task for replica id 3 already exists.
This may be safely interrupted with Ctrl+C
Cleanup task created

[root@master ~]# ipa-replica-manage list-clean-ruv -p Secret123
No CLEANALLRUV tasks running

No abort CLEANALLRUV tasks running
[root@master ~]# 'ipa-replica-manage' 'clean-ruv' '3' '-p' 'Secret123' '-f'
Clean the Replication Update Vector for replica3.ipa.test:389
Background task created to clean replication data. This may take a while.
This may be safely interrupted with Ctrl+C
Cleanup task created
[root@master ~]# ipa-replica-manage list-clean-ruv -p Secret123
CLEANALLRUV tasks
RID 3: Successfully cleaned rid(3).

No abort CLEANALLRUV tasks running
[root@master ~]# ipa-replica-manage list-ruv -p Secret123
Replica Update Vectors:
    master.ipa.test:389: 4
    replica2.ipa.test:389: 7
Certificate Server Replica Update Vectors:
    master.ipa.test:389: 6
    replica2.ipa.test:389: 8


I'm not sure if this behavior is right, Ludwig may know.
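
One thing that might help debugging is grepping the DS error log for
the cleanallruv messages, e.g. (sketch; raiseonerr=False so a clean log
does not fail the command):

    master.run_command(['grep', '-i', 'cleanallruv',
                        '/var/log/dirsrv/slapd-IPA-TEST/errors'],
                       raiseonerr=False)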



--
Oleg Fayans
Quality Engineer
FreeIPA team
Red Hat.
