[Freeipa-devel] [PATCH 0291] idviews: Complain if host is already assigned the ID View in idview-apply

2014-12-11 Thread Tomas Babej
Hi,

When running an idview-apply command, hosts that were already assigned
the desired view were silently ignored. Make sure such hosts show up in
the list of failed hosts.

https://fedorahosted.org/freeipa/ticket/4743

-- 
Tomas Babej
Associate Software Engineer | Red Hat | Identity Management
RHCE | Brno Site | IRC: tbabej | freeipa.org 


From d858a9edaef5878affc4dd585d2d9a840dec22c4 Mon Sep 17 00:00:00 2001
From: Tomas Babej tba...@redhat.com
Date: Thu, 11 Dec 2014 07:50:40 +0100
Subject: [PATCH] idviews: Complain if host is already assigned the ID View in
 idview-apply

When running an idview-apply command, hosts that were already assigned
the desired view were silently ignored. Make sure such hosts show up in
the list of failed hosts.

https://fedorahosted.org/freeipa/ticket/4743
---
 ipalib/plugins/idviews.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ipalib/plugins/idviews.py b/ipalib/plugins/idviews.py
index 9c8721018325f56e681f168b55c31055bfd07345..eddd8c323a1dc7c16bd7b9bd8d9a021317015ccf 100644
--- a/ipalib/plugins/idviews.py
+++ b/ipalib/plugins/idviews.py
@@ -279,8 +279,8 @@ class baseidview_apply(LDAPQuery):
                 completed = completed + 1
                 succeeded['host'].append(host)
             except errors.EmptyModlist:
-                # If view was already applied, do not complain
-                pass
+                # If view was already applied, complain about it
+                failed['host'].append((host, "ID View already applied"))
             except errors.NotFound:
                 failed['host'].append((host, "not found"))
             except errors.PublicError as e:
-- 
1.9.3


Re: [Freeipa-devel] [PATCH 0168] Better workaround to get status of CA during upgrade

2014-12-11 Thread Martin Basti

On 10/12/14 19:21, Jan Cholasta wrote:

On 10.12.2014 at 18:01, Jan Cholasta wrote:

On 1.12.2014 at 16:48, Martin Basti wrote:

On 01/12/14 08:46, Jan Cholasta wrote:

Hi,

On 27.11.2014 at 14:24, Martin Basti wrote:

Ticket: https://fedorahosted.org/freeipa/ticket/4676
Replaces current workaround. Should go to 4.1.3.
Patch attached.


When constructing URLs with host:port, please use
ipautil.format_netloc().

wget should be added as a dependency of freeipa-python in the spec 
file.


Honza
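For illustration of the format_netloc() suggestion above, a minimal sketch
(assuming its (host, port) signature; the host name is made up):

    from ipapython.ipautil import format_netloc

    # format_netloc builds the "host:port" part of a URL and is assumed
    # here to bracket IPv6 literals, e.g. "[::1]:8443".
    url = 'https://%s/ca/admin/ca/getStatus' % format_netloc('ca.example.com', 8443)
    print url  # https://ca.example.com:8443/ca/admin/ca/getStatus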


Updated patch attached.



Thanks, ACK.

Pushed to:
master: 337faf506462a01c6dbcd00f2039ed5627691864
ipa-4-1: 5052af773f652bc19e91fe49e15351e5c5c7d976



It turns out I messed up the review (sorry). This fixes the upgrade, 
but it also breaks ipa-server-install:


2014-12-10T06:06:44Z DEBUG   [8/27]: starting certificate server instance
2014-12-10T06:06:44Z DEBUG Starting external process
2014-12-10T06:06:44Z DEBUG args='/bin/systemctl' 'start' 
'pki-tomcatd.target'

2014-12-10T06:06:45Z DEBUG Process finished, return code=0
2014-12-10T06:06:45Z DEBUG stdout=
2014-12-10T06:06:45Z DEBUG stderr=
2014-12-10T06:06:45Z DEBUG Starting external process
2014-12-10T06:06:45Z DEBUG args='/bin/systemctl' 'is-active' 
'pki-tomcatd.target'

2014-12-10T06:06:45Z DEBUG Process finished, return code=0
2014-12-10T06:06:45Z DEBUG stdout=active

2014-12-10T06:06:45Z DEBUG stderr=
2014-12-10T06:06:45Z DEBUG wait_for_open_ports: localhost [8080, 8443] 
timeout 300
2014-12-10T06:06:49Z DEBUG The httpd proxy is not installed, wait on 
local port

2014-12-10T06:06:49Z DEBUG Waiting until the CA is running
2014-12-10T06:06:49Z DEBUG Starting external process
2014-12-10T06:06:49Z DEBUG args='/usr/bin/wget' '-S' '-O' '-' 
'--timeout=30' 
'https://vm-088.idm.lab.bos.redhat.com:8443/ca/admin/ca/getStatus'

2014-12-10T06:07:09Z DEBUG Process finished, return code=5
2014-12-10T06:07:09Z DEBUG stdout=
2014-12-10T06:07:09Z DEBUG stderr=--2014-12-10 01:06:49-- 
https://vm-088.idm.lab.bos.redhat.com:8443/ca/admin/ca/getStatus
Resolving vm-088.idm.lab.bos.redhat.com 
(vm-088.idm.lab.bos.redhat.com)... 10.16.78.88
Connecting to vm-088.idm.lab.bos.redhat.com 
(vm-088.idm.lab.bos.redhat.com)|10.16.78.88|:8443... connected.
ERROR: cannot verify vm-088.idm.lab.bos.redhat.com's certificate, 
issued by ‘/O=IDM.LAB.BOS.REDHAT.COM/CN=Certificate Authority’:

  Self-signed certificate encountered.
To connect to vm-088.idm.lab.bos.redhat.com insecurely, use 
`--no-check-certificate'.


2014-12-10T06:07:09Z DEBUG The CA status is: check interrupted


I have reopened the ticket.

Patch with the '--no-check-certificate' option attached. Before the workaround
there was no certificate check, so it should not be a problem if we ignore
the certificate.
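For illustration, a standalone sketch of the resulting check (the real code
lives in RedHatCAService and uses ipautil.run; the helper name here is
hypothetical):

    import subprocess

    def get_ca_status(url):
        # Mirrors the wget invocation from the patch below;
        # --no-check-certificate is needed because during installation
        # the CA certificate may be self-signed and not yet trusted.
        args = ['/usr/bin/wget', '-S', '-O', '-',
                '--timeout=30', '--no-check-certificate', url]
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        stdout, stderr = proc.communicate()
        return proc.returncode, stdout

    # rc, body = get_ca_status('https://ipa.example.com:8443/ca/admin/ca/getStatus')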

Martin^2

--
Martin Basti

From 94ebe22c56bb311072e207e6380a5638bf422c82 Mon Sep 17 00:00:00 2001
From: Martin Basti mba...@redhat.com
Date: Thu, 11 Dec 2014 09:38:46 +0100
Subject: [PATCH] Fix: don't check certificate when getting CA status

Due to the workaround we accidentally started to check the certificate,
which causes problems during installation.

Ticket: https://fedorahosted.org/freeipa/ticket/4676
---
 ipaplatform/redhat/services.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ipaplatform/redhat/services.py b/ipaplatform/redhat/services.py
index 20d0adec421ecd3285464e2a51b9d5c61a0e3d92..8759cab76c7d72a3abbf935e7f15f7a32a0b6987 100644
--- a/ipaplatform/redhat/services.py
+++ b/ipaplatform/redhat/services.py
@@ -204,6 +204,7 @@ class RedHatCAService(RedHatService):
             paths.BIN_WGET,
             '-S', '-O', '-',
             '--timeout=30',
+            '--no-check-certificate',
             url
         ]
 
-- 
1.8.3.1


Re: [Freeipa-devel] FreeIPA integration with external DNS services

2014-12-11 Thread Petr Spacek
On 10.12.2014 18:50, Simo Sorce wrote:
 On Wed, 10 Dec 2014 15:13:30 +0100
 Petr Spacek pspa...@redhat.com wrote:
 
 I think that external DNS could depend on Vault (assuming that
 external DNS support will be purely optional).
 
 TBH, I do not think this is a sensible option, the Vault will drag huge
 dependencies for now, and I would like to avoid that if all we need is
 to add a couple of A/SRV records to an external DNS.
 
 If we can't come up with a service, I think I am ok telling admins they
 need to manually copy the TKEY (or use puppet or other similar
 configuration manager to push the key file around) on each replica, and
 we defer automatic distribution of TKEYs.
 
 We will have a service that can give out keys, it is identified as
 necessary in the replica promotion proposal, so we'll eventually get
 there.

Thank you for the discussion. Now I would like to know in which direction we
are heading with external DNS support :-)

I have to admit that I don't understand why we are spending time on Vault and
at the same time we refuse to use it ...

Anyway, someone competent has to decide if we want to implement external DNS
support and:
- defer key distribution for now
- use Vault
- re-invent Vault and use that new cool thing

-- 
Petr^2 Spacek



Re: [Freeipa-devel] [PATCH 0177] Fix adding (warning) messages on client side

2014-12-11 Thread Jan Cholasta

Hi,

On 9.12.2014 at 16:07, Martin Basti wrote:

Ticket: https://fedorahosted.org/freeipa/ticket/4793

I'm able to reproduce it only in one nose test.


Which test?



Patch attached.


What about:

result['messages'] = result.get('messages', ()) + (message.to_dict(),)

(My point is, don't support both lists and tuples, pick just one.)
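For illustration, a minimal sketch of that normalization (the Message class
here is a made-up stand-in for ipalib's message objects):

    class Message(object):
        def __init__(self, text):
            self.text = text

        def to_dict(self):
            return {'message': self.text}

    def add_message(result, message):
        # Always extend result['messages'] as a tuple, regardless of
        # whether the key existed before.
        result['messages'] = result.get('messages', ()) + (message.to_dict(),)

    result = {}
    add_message(result, Message('first warning'))
    add_message(result, Message('second warning'))
    print result['messages']  # a tuple of two dicts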

Honza

--
Jan Cholasta



Re: [Freeipa-devel] [PATCH 0168] Better workaround to get status of CA during upgrade

2014-12-11 Thread Jan Cholasta

On 11.12.2014 at 10:01, Martin Basti wrote:

On 10/12/14 19:21, Jan Cholasta wrote:

On 10.12.2014 at 18:01, Jan Cholasta wrote:

On 1.12.2014 at 16:48, Martin Basti wrote:

On 01/12/14 08:46, Jan Cholasta wrote:

Hi,

On 27.11.2014 at 14:24, Martin Basti wrote:

Ticket: https://fedorahosted.org/freeipa/ticket/4676
Replaces current workaround. Should go to 4.1.3.
Patch attached.


When constructing URLs with host:port, please use
ipautil.format_netloc().

wget should be added as a dependency of freeipa-python in the spec
file.

Honza


Updated patch attached.



Thanks, ACK.

Pushed to:
master: 337faf506462a01c6dbcd00f2039ed5627691864
ipa-4-1: 5052af773f652bc19e91fe49e15351e5c5c7d976



It turns out I messed up the review (sorry). This fixes the upgrade,
but it also breaks ipa-server-install:

2014-12-10T06:06:44Z DEBUG   [8/27]: starting certificate server instance
2014-12-10T06:06:44Z DEBUG Starting external process
2014-12-10T06:06:44Z DEBUG args='/bin/systemctl' 'start'
'pki-tomcatd.target'
2014-12-10T06:06:45Z DEBUG Process finished, return code=0
2014-12-10T06:06:45Z DEBUG stdout=
2014-12-10T06:06:45Z DEBUG stderr=
2014-12-10T06:06:45Z DEBUG Starting external process
2014-12-10T06:06:45Z DEBUG args='/bin/systemctl' 'is-active'
'pki-tomcatd.target'
2014-12-10T06:06:45Z DEBUG Process finished, return code=0
2014-12-10T06:06:45Z DEBUG stdout=active

2014-12-10T06:06:45Z DEBUG stderr=
2014-12-10T06:06:45Z DEBUG wait_for_open_ports: localhost [8080, 8443]
timeout 300
2014-12-10T06:06:49Z DEBUG The httpd proxy is not installed, wait on
local port
2014-12-10T06:06:49Z DEBUG Waiting until the CA is running
2014-12-10T06:06:49Z DEBUG Starting external process
2014-12-10T06:06:49Z DEBUG args='/usr/bin/wget' '-S' '-O' '-'
'--timeout=30'
'https://vm-088.idm.lab.bos.redhat.com:8443/ca/admin/ca/getStatus'
2014-12-10T06:07:09Z DEBUG Process finished, return code=5
2014-12-10T06:07:09Z DEBUG stdout=
2014-12-10T06:07:09Z DEBUG stderr=--2014-12-10 01:06:49--
https://vm-088.idm.lab.bos.redhat.com:8443/ca/admin/ca/getStatus
Resolving vm-088.idm.lab.bos.redhat.com
(vm-088.idm.lab.bos.redhat.com)... 10.16.78.88
Connecting to vm-088.idm.lab.bos.redhat.com
(vm-088.idm.lab.bos.redhat.com)|10.16.78.88|:8443... connected.
ERROR: cannot verify vm-088.idm.lab.bos.redhat.com's certificate,
issued by ‘/O=IDM.LAB.BOS.REDHAT.COM/CN=Certificate Authority’:
  Self-signed certificate encountered.
To connect to vm-088.idm.lab.bos.redhat.com insecurely, use
`--no-check-certificate'.

2014-12-10T06:07:09Z DEBUG The CA status is: check interrupted


I have reopened the ticket.


Patch with the '--no-check-certificate' option attached. Before the workaround
there was no certificate check, so it should not be a problem if we ignore
the certificate.
Martin^2



Thanks, ACK.

Pushed to:
master: 95becc1d542c78721088398eddbfd0d0ffe9b27f
ipa-4-1: 8440c2ee97e1c7e29e20629a2579af28a6d654be

--
Jan Cholasta



Re: [Freeipa-devel] [PATCH 0040] Remove dependency on subscription-manager

2014-12-11 Thread Martin Basti

On 11/12/14 05:01, Gabe Alford wrote:

Updated patch attached.

Thanks,

Gabe

On Wed, Dec 10, 2014 at 8:02 AM, Martin Basti mba...@redhat.com wrote:


On 10/12/14 15:45, Gabe Alford wrote:

Hello,

Fix for https://fedorahosted.org/freeipa/ticket/4783

Thanks,

Gabe


Hello, thanks for the patch.

The patch needs a rebase for the IPA-4-1 branch.

Martin^2

-- 
Martin Basti




Thank you.

ACK freeipa-rga-0040: master
ACK freeipa-rga-0040-2: IPA-4-1

--
Martin Basti


Re: [Freeipa-devel] [PATCH 0177] Fix adding (warning) messages on client side

2014-12-11 Thread Martin Basti

On 11/12/14 11:19, Jan Cholasta wrote:

Hi,

On 9.12.2014 at 16:07, Martin Basti wrote:

Ticket: https://fedorahosted.org/freeipa/ticket/4793

I'm able to reproduce it only in one nose test.


Which test?
If you apply my patch 170 and add a random forward zone, the DNS root
zone tests fail.




Patch attached.


What about:

result['messages'] = result.get('messages', ()) + (message.to_dict(),)


(My point is, don't support both lists and tuples, pick just one.)

Honza

This is a question for the framework guru (you?); I tried to preserve the
format unchanged.

Shouldn't all values be in lists in the server part?

Martin^2

--
Martin Basti



Re: [Freeipa-devel] [PATCH 0040] Remove dependency on subscription-manager

2014-12-11 Thread Martin Kosek
On 12/11/2014 12:05 PM, Martin Basti wrote:
 On 11/12/14 05:01, Gabe Alford wrote:
 Updated patch attached.

 Thanks,

 Gabe

 On Wed, Dec 10, 2014 at 8:02 AM, Martin Basti mba...@redhat.com wrote:

 On 10/12/14 15:45, Gabe Alford wrote:

 Hello,

 Fix for https://fedorahosted.org/freeipa/ticket/4783

 Thanks,

 Gabe


 Hello, thanks for the patch.

 The patch needs a rebase for the IPA-4-1 branch.

 Martin^2

 -- Martin Basti


 Thank you.
 
 ACK freeipa-rga-0040: master
 ACK freeipa-rga-0040-2: IPA-4-1

Thanks Gabe and Martin!

Pushed to master, ipa-4-1.

Martin



Re: [Freeipa-devel] topology management question

2014-12-11 Thread Ludwig Krispenz


On 12/05/2014 04:50 PM, Simo Sorce wrote:

On Thu, 04 Dec 2014 14:33:09 +0100
Ludwig Krispenz lkris...@redhat.com wrote:


hi,

I just have another (hopefully this will end soon) issue I want to
get your input on (please read to the end first).

To recap the conditions:
-  the topology plugin manages the connections between servers as
segments in the shared tree
- it is authoritative for managed servers, e.g. it controls all
connections between servers listed under cn=masters;
it is permissive for connections to other servers
- it rejects any removal of a segment, which would disconnect the
topology.
- a change in topology can be applied to any server in the topology,
it will reach the respective servers and the plugin will act upon it

Now there is a special case causing a bit of trouble. If a replica
is to be removed from the topology, this means that
the replication agreements from and to this replica should be
removed, and the server should be removed from the managed servers.
The problem is that:
- if you remove the server first, the server becomes unmanaged and
removal of the segment will not trigger a removal of the replication
agreement

Can you explain what you mean by "if you remove the server first",
exactly? What LDAP operations will be performed by the management tools?
as far as the plugin is concerned, a removal of a replica triggers two
operations:
- removal of the host from the servers in cn=masters, so the server is
no longer considered managed
- removal of the segment(s) connecting the to-be-removed replica to
other still-managed servers, which should remove the corresponding
replication agreements.

It was the order of these two operations I was talking about.



- if you remove the segments first, one segment will be the last one
connecting this replica to the topology and removal will be rejected

We should never remove the segments first indeed.

if we can fully control that only specific management tools can be used,
we can define the order, but an admin could apply individual operations,
and it would still be good if nothing breaks



Now, with some effort this can be resolved, e.g.
if the server is removed, keep it internally as a removed server and,
for segments connecting this server, trigger removal of the replication
agreements; or mark the last segment, when its removal is attempted, as
pending, and once the server is removed also remove the corresponding
repl agreements

Why should we keep it internally?
If you mark the agreements as managed by setting an attribute on them,
then you will never have any issue recognizing a managed agreement in
cn=config, and you will also immediately find out it is old as it is
not backed by a segment so you will safely remove it.

I didn't want to add new flags/fields to the replication agreements
as long as everything can be handled by the data in the shared tree.
"internally" was probably misleading, but I will think about it again


Segments (and their agreements) should be removed as a trigger on the
master entry getting removed. This should be done even if it causes a
split brain, because if the server is removed, no matter how much we
wish to keep topology integrity, we effectively are in a split-brain
situation; keeping topology agreements alive w/o the server entry
doesn't help.
If we can agree on that (that presence/removal of masters is the primary
trigger), that's fine. I was thinking of situations where a server was
removed, but not uninstalled.

Just taking it out of the topology, but it could still be reached



But there is a problem, which I think is much harder and I am not
sure how much effort I should put in resolving it.
If we want to have the replication agreements cleaned up after
removal of a replica without direct modification of cn=config, we
need to follow the path above,
but this also means that the last change needs to reach both the
removed replica (R) and the last server (S) it is connected to.

It would be nice if the change reached the replica, indeed, but not a
big deal if it doesn't; if you are removing the replica it means you
are decommissioning it, so it is not really that important that it
receives updates, it will be destroyed shortly.
That's what I was not sure about; couldn't there be cases where it is
not destroyed, just isolated?

And if it is not destroyed for whatever reason, it will be removed from
the masters group anyway so it will have no permission to replicate
back, and no harm is done to the overall domain.


The bad thing is that if this change triggers a
removal of the replication agreement on S, it could be that the change
is not replicated to R before the agreement is removed, and is lost.
There is no (or no easy) way for the plugin to know if a change
was received by another server,

There is an easy way: contact the other server and see if the change
happened in its LDAP tree :)
But this is not really necessary, as explained above.


I was also thinking about some kind
of acknowledge mechanism by doing a 

Re: [Freeipa-devel] FreeIPA integration with external DNS services

2014-12-11 Thread Simo Sorce
On Thu, 11 Dec 2014 10:43:02 +0100
Petr Spacek pspa...@redhat.com wrote:

 On 10.12.2014 18:50, Simo Sorce wrote:
  On Wed, 10 Dec 2014 15:13:30 +0100
  Petr Spacek pspa...@redhat.com wrote:
  
  I think that external DNS could depend on Vault (assuming that
  external DNS support will be purely optional).
  
  TBH, I do not think this is a sensible option, the Vault will drag
  huge dependencies for now, and I would like to avoid that if all we
  need is to add a couple of A/SRV records to an external DNS.
  
  If we can't come up with a service, I think I am ok telling admins
  they need to manually copy the TKEY (or use puppet or other similar
  configuration manager to push the key file around) on each replica,
  and we defer automatic distribution of TKEYs.
  
  We will have a service that can give out keys, it is identified as
  necessary in the replica promotion proposal, so we'll eventually get
  there.
 
 Thank you for the discussion. Now I would like to know in which direction
 we are heading with external DNS support :-)
 
 I have to admit that I don't understand why we are spending time on
 Vault and at the same time we refuse to use it ...
 
 Anyway, someone competent has to decide if we want to implement
 external DNS support and:
 - defer key distribution for now

I vote for deferring for now.

Simo.

 - use Vault
 - re-invent Vault and use that new cool thing
 



-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] topology management question

2014-12-11 Thread Simo Sorce
On Thu, 11 Dec 2014 14:18:36 +0100
Ludwig Krispenz lkris...@redhat.com wrote:

 
 On 12/05/2014 04:50 PM, Simo Sorce wrote:
  On Thu, 04 Dec 2014 14:33:09 +0100
  Ludwig Krispenz lkris...@redhat.com wrote:
 
  hi,
 
  I just have another (hopefully this will end soon) issue I want to
  get your input on. (please read to the end first)
 
  To recapture the conditions:
  -  the topology plugin manages the connections between servers as
  segments in the shared tree
  - it is authoritative for managed servers, eg it controls all
  connections between servers listed under cn=masters,
  it is permissive for connection to other servers
  - it rejects any removal of a segment, which would disconnect the
  topology.
  - a change in topology can be applied to any server in the
  topology, it will reach the respective servers and the plugin will
  act upon it
 
  Now there is a special case, causing a bit of trouble. If a replica
  is to be removed from the topology, this means that
  the replication agreements from and to this replica should be
  removed, and the server should be removed from the managed servers.
  The problem is that:
  - if you remove the server first, the server becomes unmanaged and
  removal of the segment will not trigger a removal of the
  replication agreement
  Can you explain what you mean by "if you remove the server first",
  exactly? What LDAP operations will be performed by the management
  tools?
 as far as the plugin is concerned, a removal of a replica triggers two
 operations:
 - removal of the host from the servers in cn=masters, so the server
 is no longer considered managed
 - removal of the segment(s) connecting the to-be-removed replica to
 other still-managed servers, which should remove the corresponding
 replication agreements.
 It was the order of these two operations I was talking about.

We can define a correct order; the plugin can refuse to do any other
order for direct operations (we need to be careful not to refuse
replication operations, I think).

 
  - if you remove the segments first, one segment will be the last
  one connecting this replica to the topology and removal will be
  rejected
  We should never remove the segments first indeed.
 if we can fully control that only specific management tools can be
 used, we can define the order, but an admin could apply individual
 operations, and it would still be good if nothing breaks

I think we had a plan to return UNWILLING_TO_PERFORM if the admin tries
to remove the last segment first. So we would have no problem really,
the admin can try and fail. If he wants to remove a master he'll have
to remove it from the masters group, and this will trigger the removal
of all segments.
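For illustration, a minimal sketch of that check in plain Python (the real
plugin is a 389-DS plugin and its data model differs; all names here are
made up):

    class UnwillingToPerform(Exception):
        """Stand-in for the LDAP unwillingToPerform result code (53)."""

    def is_connected(servers, segments):
        # Reachability over replication segments, treated as undirected edges.
        if not servers:
            return True
        seen, stack = set(), [next(iter(servers))]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            for left, right in segments:
                if left == node:
                    stack.append(right)
                elif right == node:
                    stack.append(left)
        return seen == set(servers)

    def check_segment_removal(servers, segments, segment):
        # Refuse a direct removal that would disconnect the topology;
        # removing the master entry itself is the supported path.
        remaining = [s for s in segments if s != segment]
        if not is_connected(servers, remaining):
            raise UnwillingToPerform('removal would disconnect the topology')

    try:
        check_segment_removal({'a', 'b', 'c'}, [('a', 'b'), ('b', 'c')],
                              ('a', 'b'))
    except UnwillingToPerform as e:
        print e  # removal would disconnect the topology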

  Now, with some effort this can be resolved, e.g.
  if the server is removed, keep it internally as a removed server and,
  for segments connecting this server, trigger removal of replication
  agreements; or mark the last segment, when its removal is attempted,
  as pending, and once the server is removed also remove the
  corresponding repl agreements
  Why should we keep it internally?
  If you mark the agreements as managed by setting an attribute on
  them, then you will never have any issue recognizing a managed
  agreement in cn=config, and you will also immediately find out it
  is old as it is not backed by a segment so you will safely remove
  it.
 I didn't want to add new flags/fields to the replication agreements
 as long as everything can be handled by the data in the shared tree.

We have to. I think it is a must or we will find numerous corner cases.
Is there a specific reason why you do not want to add flags to
replication agreements in cn=config?
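For illustration, a sketch of the marker-attribute idea (the attribute name
is hypothetical, not actual plugin schema):

    MARKER = 'ipaReplTopoManagedAgreement'

    def stale_managed_agreements(agreements, segment_names):
        # An agreement is "managed" if it carries the marker; it is old
        # if no segment in the shared tree backs it any more, so it can
        # be safely removed.
        backed = set(segment_names)
        return [agmt['cn'] for agmt in agreements
                if agmt.get(MARKER) == 'on' and agmt['cn'] not in backed]

    agreements = [{'cn': 'meToB', MARKER: 'on'},   # backed by a segment
                  {'cn': 'meToC', MARKER: 'on'}]   # segment already removed
    print stale_managed_agreements(agreements, ['meToB'])  # ['meToC']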

 "internally" was probably misleading, but I will think about it again

Ok, it is important we both understand what issues we see with any of
the possible approaches so we can agree on the best one.

  Segments (and their agreements) should be removed as a trigger on the
  master entry getting removed. This should be done even if it causes
  a split brain, because if the server is removed, no matter how much
  we wish to keep topology integrity, we effectively are in a split-brain
  situation; keeping topology agreements alive w/o the server
  entry doesn't help.
 If we can agree on that (that presence/removal of masters is the
 primary trigger), that's fine.

Yes I think we can definitely agree that this is the primary trigger
for server removal/addition.

 I was thinking of situations where a server was removed, 
 but not uninstalled.

Understood, but even then it makes no real difference; once the server
is removed from the group of masters it will not be able to replicate
outbound anymore, as the other masters' ACIs will not recognize this
server's credentials as valid replicator creds.

 Just taking it out of the topology, but it could still be reached

It can be reached, and that may be a problem for clients. But in the
long term this should be true only for clients manually configured to
reach that 

Re: [Freeipa-devel] FreeIPA integration with external DNS services

2014-12-11 Thread Martin Kosek
On 12/11/2014 03:05 PM, Simo Sorce wrote:
 On Thu, 11 Dec 2014 10:43:02 +0100
 Petr Spacek pspa...@redhat.com wrote:
 
 On 10.12.2014 18:50, Simo Sorce wrote:
 On Wed, 10 Dec 2014 15:13:30 +0100
 Petr Spacek pspa...@redhat.com wrote:

 I think that external DNS could depend on Vault (assuming that
 external DNS support will be purely optional).

 TBH, I do not think this is a sensible option, the Vault will drag
 huge dependencies for now, and I would like to avoid that if all we
 need is to add a couple of A/SRV records to an external DNS.

 If we can't come up with a service, I think I am ok telling admins
 they need to manually copy the TKEY (or use puppet or other similar
 configuration manager to push the key file around) on each replica,
 and we defer automatic distribution of TKEYs.

 We will have a service that can give out keys, it is identified as
 necessary in the replica promotion proposal, so we'll eventually get
 there.

 Thank you for the discussion. Now I would like to know in which direction
 we are heading with external DNS support :-)

 I have to admit that I don't understand why we are spending time on
 Vault and at the same time we refuse to use it ...

 Anyway, someone competent has to decide if we want to implement
 external DNS support and:
 - defer key distribution for now
 
 I vote for deferring for now.
 
 Simo.

+1, we can defer until we have Simo's KISS service from the replica
promotion work:

http://www.freeipa.org/page/V4/Replica_Promotion#Key_Interchange_Security_Service_.28KISS.29

Same as Simo, I would also rather avoid the dependency on the PKI Vault for
this base infrastructure feature, which is orthogonal to the FreeIPA PKI.

Martin



Re: [Freeipa-devel] [PATCH 0291] idviews: Complain if host is already assigned the ID View in idview-apply

2014-12-11 Thread Tomas Babej

On 12/11/2014 11:07 AM, Jan Cholasta wrote:
 Hi,

 On 11.12.2014 at 09:23, Tomas Babej wrote:
 Hi,

 When running an idview-apply command, hosts that were already assigned
 the desired view were silently ignored. Make sure such hosts show up in
 the list of failed hosts.

 https://fedorahosted.org/freeipa/ticket/4743

 Shouldn't the error message strings be translatable?


Sure, why not. Good point, I transformed the other error message string
as well.
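For illustration, the underlying mechanism sketched with plain gettext
rather than ipalib's lazy _() (not IPA code):

    import gettext

    # gettext looks up the exact source string in the translation catalog,
    # so user-visible strings have to be wrapped at the point of definition.
    _ = gettext.NullTranslations().gettext

    print _('ID View already applied')  # falls back to English without a catalog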

Updated patch attached.

 Honza


-- 
Tomas Babej
Associate Software Engineer | Red Hat | Identity Management
RHCE | Brno Site | IRC: tbabej | freeipa.org 

From 5f35614048c58ce24cecfc54485ac0a04a9b8a27 Mon Sep 17 00:00:00 2001
From: Tomas Babej tba...@redhat.com
Date: Thu, 11 Dec 2014 07:50:40 +0100
Subject: [PATCH] idviews: Complain if host is already assigned the ID View in
 idview-apply

When running an idview-apply command, hosts that were already assigned
the desired view were silently ignored. Make sure such hosts show up in
the list of failed hosts.

https://fedorahosted.org/freeipa/ticket/4743
---
 ipalib/plugins/idviews.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/ipalib/plugins/idviews.py b/ipalib/plugins/idviews.py
index 9c8721018325f56e681f168b55c31055bfd07345..50fe6f3e18c62f0c830c61304801aec369ab593f 100644
--- a/ipalib/plugins/idviews.py
+++ b/ipalib/plugins/idviews.py
@@ -279,10 +279,10 @@ class baseidview_apply(LDAPQuery):
                 completed = completed + 1
                 succeeded['host'].append(host)
             except errors.EmptyModlist:
-                # If view was already applied, do not complain
-                pass
+                # If view was already applied, complain about it
+                failed['host'].append((host, _("ID View already applied")))
             except errors.NotFound:
-                failed['host'].append((host, "not found"))
+                failed['host'].append((host, _("not found")))
             except errors.PublicError as e:
                 failed['host'].append((host, str(e)))
 
-- 
1.9.3


Re: [Freeipa-devel] [PATCH 0170] Detect and warn about invalid forwardzone configuration

2014-12-11 Thread Petr Spacek
Hello,

I have only a few nitpicks and one minor non-nitpick. The rest is inline.

On 10.12.2014 18:20, Martin Basti wrote:
 freeipa-mbasti-0170.4-Detect-and-warn-about-invalid-DNS-forward-zone-confi.patch
 
 
 From a1b70e7a12ffdb08941d43587a05d7e36b57ab2b Mon Sep 17 00:00:00 2001
 From: Martin Basti mba...@redhat.com
 Date: Fri, 21 Nov 2014 16:54:09 +0100
 Subject: [PATCH] Detect and warn about invalid DNS forward zone configuration
 
 Show a warning if a forward zone and its authoritative parent zone do not
 have proper NS record delegation, which can cause the forward zone to be
 ineffective so that forwarding will not work.
 
 Ticket: https://fedorahosted.org/freeipa/ticket/4721
 ---
 ipalib/messages.py    |  13 ++
 ipalib/plugins/dns.py | 332 ++++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 334 insertions(+), 11 deletions(-)
 
diff --git a/ipalib/messages.py b/ipalib/messages.py
index 102e35275dbe37328c84ecb3cd5b2a8d8578056f..b44beca729f5483a7241e4c98a9f724ed663e70f 100644
--- a/ipalib/messages.py
+++ b/ipalib/messages.py
@@ -200,6 +200,19 @@ class DNSServerDoesNotSupportDNSSECWarning(PublicMessage):
                u"If DNSSEC validation is enabled on IPA server(s), "
                u"please disable it.")
 
+class ForwardzoneIsNotEffectiveWarning(PublicMessage):
+    """
+    **13008** Forward zone is not effective, forwarding will not work because
+    there is an authoritative parent zone without proper NS delegation
+    """
+
+    errno = 13008
+    type = "warning"
+    format = _(u"forward zone \"%(fwzone)s\" is not effective because of "
+               u"missing proper NS delegation in authoritative zone "
+               u"\"%(authzone)s\". Please add NS record "
+               u"\"%(ns_rec)s\" to parent zone \"%(authzone)s\".")
+
 
 def iter_messages(variables, base):
     """Return a tuple with all subclasses
diff --git a/ipalib/plugins/dns.py b/ipalib/plugins/dns.py
index c5d96a8c4fcdf101254ecefb60cb83d63bee6310..5c3a017989b23a1c6076d9dc4d93be65dd66cc67 100644
--- a/ipalib/plugins/dns.py
+++ b/ipalib/plugins/dns.py
@@ -1725,6 +1725,241 @@ def _normalize_zone(zone):
     return zone
 
 
+def _get_auth_zone_ldap(name):
+    """
+    Find authoritative zone in LDAP for name
Nitpick: Please add this note:
. Only active zones are considered.

+    :param name:
+    :return: (zone, truncated)
+    zone: authoritative zone, or None if authoritative zone is not in LDAP
+    """
+    assert isinstance(name, DNSName)
+    ldap = api.Backend.ldap2
 +
+    # Create all possible parent zone names
+    search_name = name.make_absolute()
+    zone_names = []
+    for i in xrange(len(search_name)):
+        zone_name_abs = DNSName(search_name[i:]).ToASCII()
+        zone_names.append(zone_name_abs)
+        # compatibility with IPA < 4.0, zone name can be relative
+        zone_names.append(zone_name_abs[:-1])
 +
+    # Create filters
+    objectclass_filter = ldap.make_filter({'objectclass':'idnszone'})
+    zonenames_filter = ldap.make_filter({'idnsname': zone_names})
+    zoneactive_filter = ldap.make_filter({'idnsZoneActive': 'true'})
+    complete_filter = ldap.combine_filters(
+        [objectclass_filter, zonenames_filter, zoneactive_filter],
+        rules=ldap.MATCH_ALL
+    )
 +
+    try:
+        entries, truncated = ldap.find_entries(
+            filter=complete_filter,
+            attrs_list=['idnsname'],
+            base_dn=DN(api.env.container_dns, api.env.basedn),
+            scope=ldap.SCOPE_ONELEVEL
+        )
+    except errors.NotFound:
+        return None, False
 +
+    # always use absolute zones
+    matched_auth_zones = [entry.single_value['idnsname'].make_absolute()
+                          for entry in entries]
+
+    # return longest match
+    return max(matched_auth_zones, key=len), truncated
 +
 +
+def _get_longest_match_ns_delegation_ldap(zone, name):
+    """
+    Finds record in LDAP which has the longest match with name.
+
+    NOTE: does not search in zone apex, returns None if there is no NS
+    delegation outside of zone apex
Nitpick:
Searches for deepest delegation for name in LDAP zone.

NOTE: NS record in zone apex is not considered as delegation. It returns None
if there is no delegation outside of zone apex.

+
+    Example:
+        zone: example.com.
+        name: ns.sub.example.com.
+
+        records:
+            extra.ns.sub.example.com.
+            sub.example.com.
+            example.com
+
+        result: sub.example.com.
+
+    :param zone: zone name
+    :param name:
+    :return: (record, truncated);
+    record: record name if success, or None if no such record exists, or
+    record is zone apex record
Nitpick:
:return: (match, truncated);
match: delegation name if success, or None if no delegation record exists

+    """
+    assert isinstance(zone, DNSName)
+    assert isinstance(name, DNSName)
+
+    ldap = api.Backend.ldap2
+
+    # get zone DN
+    zone_dn = 

Re: [Freeipa-devel] [PATCH 0170] Detect and warn about invalid forwardzone configuration

2014-12-11 Thread Martin Basti

Updated patch attached:

diff with previous:

diff --git a/ipalib/plugins/dns.py b/ipalib/plugins/dns.py
index f9d8321..7a80036 100644
--- a/ipalib/plugins/dns.py
+++ b/ipalib/plugins/dns.py
@@ -1735,7 +1735,7 @@ def _normalize_zone(zone):
 
 def _get_auth_zone_ldap(name):
     """
-    Find authoritative zone in LDAP for name
+    Find authoritative zone in LDAP for name. Only active zones are considered.
 
     :param name:
     :return: (zone, truncated)
     zone: authoritative zone, or None if authoritative zone is not in LDAP
@@ -1781,10 +1781,10 @@ def _get_auth_zone_ldap(name):
 
 def _get_longest_match_ns_delegation_ldap(zone, name):
     """
-    Finds record in LDAP which has the longest match with name.
+    Searches for deepest delegation for name in LDAP zone.
 
-    NOTE: does not search in zone apex, returns None if there is no NS
-    delegation outside of zone apex
+    NOTE: NS record in zone apex is not considered as delegation.
+    It returns None if there is no delegation outside of zone apex.
 
     Example:
         zone: example.com.
@@ -1799,9 +1799,8 @@ def _get_longest_match_ns_delegation_ldap(zone, name):
 
     :param zone: zone name
     :param name:
-    :return: (record, truncated);
-    record: record name if success, or None if no such record exists, or
-    record is zone apex record
+    :return: (match, truncated);
+    match: delegation name if success, or None if no delegation record exists
 
     """
     assert isinstance(zone, DNSName)
     assert isinstance(name, DNSName)
@@ -1846,7 +1845,6 @@ def _get_longest_match_ns_delegation_ldap(zone, name):
 
     # test if entry contains NS records
     for entry in entries:
-        print entry
         if entry.get('nsrecord'):
             matched_records.append(entry.single_value['idnsname'])
 
@@ -3444,7 +3442,7 @@ class dnsrecord(LDAPObject):
     def warning_if_ns_change_cause_fwzone_ineffective(self, result, *keys,
                                                       **options):
         """Detect if NS record change can make forward zones ineffective due
-        missing delegation. Run after parent's execute method method.
+        missing delegation. Run after parent's execute method.
         """
         record_name_absolute = keys[-1]
         zone = keys[-2]

--
Martin Basti

From c85d7639e62ca5871e0598db973c9540b056b197 Mon Sep 17 00:00:00 2001
From: Martin Basti mba...@redhat.com
Date: Fri, 21 Nov 2014 16:54:09 +0100
Subject: [PATCH] Detect and warn about invalid DNS forward zone configuration

Show a warning if a forward zone and its authoritative parent zone do not
have proper NS record delegation, which can cause the forward zone to be
ineffective so that forwarding will not work.

Ticket: https://fedorahosted.org/freeipa/ticket/4721
---
 ipalib/messages.py    |  13 ++
 ipalib/plugins/dns.py | 330 ++++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 332 insertions(+), 11 deletions(-)

diff --git a/ipalib/messages.py b/ipalib/messages.py
index 102e35275dbe37328c84ecb3cd5b2a8d8578056f..b44beca729f5483a7241e4c98a9f724ed663e70f 100644
--- a/ipalib/messages.py
+++ b/ipalib/messages.py
@@ -200,6 +200,19 @@ class DNSServerDoesNotSupportDNSSECWarning(PublicMessage):
                u"If DNSSEC validation is enabled on IPA server(s), "
                u"please disable it.")
 
+class ForwardzoneIsNotEffectiveWarning(PublicMessage):
+    """
+    **13008** Forward zone is not effective, forwarding will not work because
+    there is an authoritative parent zone without proper NS delegation
+    """
+
+    errno = 13008
+    type = "warning"
+    format = _(u"forward zone \"%(fwzone)s\" is not effective because of "
+               u"missing proper NS delegation in authoritative zone "
+               u"\"%(authzone)s\". Please add NS record "
+               u"\"%(ns_rec)s\" to parent zone \"%(authzone)s\".")
+
 
 def iter_messages(variables, base):
     """Return a tuple with all subclasses
diff --git a/ipalib/plugins/dns.py b/ipalib/plugins/dns.py
index 34afc189866993481229bb68a5edd77e0a4eaff3..7a80036c94432a01ea8781101712ea1135134948 100644
--- a/ipalib/plugins/dns.py
+++ b/ipalib/plugins/dns.py
@@ -1733,6 +1733,239 @@ def _normalize_zone(zone):
     return zone
 
 
+def _get_auth_zone_ldap(name):
+    """
+    Find authoritative zone in LDAP for name. Only active zones are considered.
+    :param name:
+    :return: (zone, truncated)
+    zone: authoritative zone, or None if authoritative zone is not in LDAP
+    """
+    assert isinstance(name, DNSName)
+    ldap = api.Backend.ldap2
+
+    # Create all possible parent zone names
+    search_name = name.make_absolute()
+    zone_names = []
+    for i in xrange(len(search_name)):
+        zone_name_abs = DNSName(search_name[i:]).ToASCII()
+        zone_names.append(zone_name_abs)
+        # compatibility with IPA < 4.0, zone name can be relative
+        zone_names.append(zone_name_abs[:-1])
+
+    # Create filters
+    objectclass_filter = ldap.make_filter({'objectclass':'idnszone'})
+    zonenames_filter = 
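For illustration, the parent-zone candidate generation above can be sketched
standalone (a hypothetical simplification; the real code works on DNSName
objects, also emits relative names for pre-4.0 compatibility, and picks the
longest active match found in LDAP):

    def parent_zone_candidates(name):
        labels = name.rstrip('.').split('.')
        return ['.'.join(labels[i:]) + '.' for i in xrange(len(labels))]

    print parent_zone_candidates('ns.sub.example.com.')
    # ['ns.sub.example.com.', 'sub.example.com.', 'example.com.', 'com.']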

Re: [Freeipa-devel] topology management question

2014-12-11 Thread Petr Spacek
On 11.12.2014 15:20, Simo Sorce wrote:
 On Thu, 11 Dec 2014 14:18:36 +0100
 Ludwig Krispenz lkris...@redhat.com wrote:
 

 On 12/05/2014 04:50 PM, Simo Sorce wrote:
 On Thu, 04 Dec 2014 14:33:09 +0100
 Ludwig Krispenz lkris...@redhat.com wrote:

 hi,

 I just have another (hopefully this will end soon) issue I want to
  get your input on. (please read to the end first)

 To recapture the conditions:
 -  the topology plugin manages the connections between servers as
 segments in the shared tree
 - it is authoritative for managed servers, eg it controls all
 connections between servers listed under cn=masters,
 it is permissive for connection to other servers
 - it rejects any removal of a segment, which would disconnect the
 topology.
 - a change in topology can be applied to any server in the
 topology, it will reach the respective servers and the plugin will
 act upon it

 Now there is a special case, causing a bit of trouble. If a replica
 is to be removed from the topology, this means that
 the replication agreements from and to this replica should be
  removed, and the server should be removed from the managed servers.
 The problem is that:
 - if you remove the server first, the server becomes unmanaged and
 removal of the segment will not trigger a removal of the
 replication agreement
  Can you explain what you mean by "if you remove the server first",
  exactly? What LDAP operations will be performed by the management
  tools?
 as far as the plugin is concerned, a removal of a replica triggers two
 operations:
 - removal of the host from the servers in cn=masters, so the server
 is no longer considered managed
 - removal of the segment(s) connecting the to-be-removed replica to
 other still-managed servers, which should remove the corresponding
 replication agreements.
 It was the order of these two operations I was talking about.
 
 We can define a correct order; the plugin can refuse to do any other
 order for direct operations (we need to be careful not to refuse
 replication operations, I think).
 

 - if you remove the segments first, one segment will be the last
 one connecting this replica to the topology and removal will be
 rejected
 We should never remove the segments first indeed.
  if we can fully control that only specific management tools can be
  used, we can define the order, but an admin could apply individual
  operations, and it would still be good if nothing breaks
 
 I think we had a plan to return UNWILLING_TO_PERFORM if the admin tries
 to remove the last segment first. So we would have no problem really,
 the admin can try and fail. If he wants to remove a master he'll have
 to remove it from the masters group, and this will trigger the removal
 of all segments.
 
  Now, with some effort this can be resolved, e.g.
  if the server is removed, keep it internally as a removed server and,
  for segments connecting this server, trigger removal of replication
  agreements; or mark the last segment, when its removal is attempted,
  as pending, and once the server is removed also remove the
  corresponding repl agreements
  Why should we keep it internally?
 If you mark the agreements as managed by setting an attribute on
 them, then you will never have any issue recognizing a managed
 agreement in cn=config, and you will also immediately find out it
 is old as it is not backed by a segment so you will safely remove
 it.
  I didn't want to add new flags/fields to the replication agreements
  as long as everything can be handled by the data in the shared tree.
 
 We have to. I think it is a must or we will find numerous corner cases.
 Is there a specific reason why you do not want to add flags to
 replication agreements in cn=config ?
 
 "internally" was probably misleading, but I will think about it again
 
 Ok, it is important we both understand what issues we see with any of
 the possible approaches so we can agree on the best one.
 
  Segments (and their agreements) should be removed as a trigger on the
  master entry getting removed. This should be done even if it causes
  a split brain, because if the server is removed, no matter how much
  we wish to keep topology integrity, we effectively are in a split-brain
  situation; keeping topology agreements alive w/o the server
  entry doesn't help.
  If we can agree on that (that presence/removal of masters is the
  primary trigger), that's fine.
 
 Yes I think we can definitely agree that this is the primary trigger
 for server removal/addition.
 
 I was thinking of situations where a server was removed, 
 but not uninstalled.
 
 Understood, but even then it makes no real difference; once the server
 is removed from the group of masters it will not be able to replicate
 outbound anymore, as the other masters' ACIs will not recognize this
 server's credentials as valid replicator creds.
 
 Just taking it out of the topology, but it could still be reached
 
 It can be reached, and that may be a problem for clients. But in the
 long term this should be true only for clients manually configured 

Re: [Freeipa-devel] topology management question

2014-12-11 Thread Simo Sorce
On Thu, 11 Dec 2014 17:03:55 +0100
Petr Spacek pspa...@redhat.com wrote:

 On 11.12.2014 15:20, Simo Sorce wrote:
  On Thu, 11 Dec 2014 14:18:36 +0100
  Ludwig Krispenz lkris...@redhat.com wrote:
  
 
  On 12/05/2014 04:50 PM, Simo Sorce wrote:
  On Thu, 04 Dec 2014 14:33:09 +0100
  Ludwig Krispenz lkris...@redhat.com wrote:
 
  hi,
 
  I just have another (hopefully this will end soon) issue I want
   to get your input on. (please read to the end first)
 
  To recapture the conditions:
  -  the topology plugin manages the connections between servers as
  segments in the shared tree
  - it is authoritative for managed servers, eg it controls all
  connections between servers listed under cn=masters,
  it is permissive for connection to other servers
  - it rejects any removal of a segment, which would disconnect the
  topology.
  - a change in topology can be applied to any server in the
  topology, it will reach the respective servers and the plugin
  will act upon it
 
  Now there is a special case, causing a bit of trouble. If a
  replica is to be removed from the topology, this means that
  the replication agreements from and to this replica should be
   removed, and the server should be removed from the managed servers.
  The problem is that:
  - if you remove the server first, the server becomes unmanaged
  and removal of the segment will not trigger a removal of the
  replication agreement
  Can you explain what you mean by "if you remove the server first",
  exactly? What LDAP operations will be performed by the management
  tools?
  as far as the plugin is concerned, a removal of a replica triggers
  two operations:
  - removal of the host from the servers in cn=masters, so the
  server is no longer considered managed
  - removal of the segment(s) connecting the to-be-removed replica
  to other still-managed servers, which should remove the
  corresponding replication agreements.
  It was the order of these two operations I was talking about.
  
  We can define a correct order; the plugin can refuse to do any other
  order for direct operations (we need to be careful not to refuse
  replication operations, I think).
  
 
  - if you remove the segments first, one segment will be the last
  one connecting this replica to the topology and removal will be
  rejected
  We should never remove the segments first indeed.
   if we can fully control that only specific management tools can be
   used, we can define the order, but an admin could apply individual
   operations, and it would still be good if nothing breaks
  
  I think we had a plan to return UNWILLING_TO_PERFORM if the admin
  tries to remove the last segment first. So we would have no problem
  really, the admin can try and fail. If he wants to remove a master
  he'll have to remove it from the masters group, and this will
  trigger the removal of all segments.
  
   Now, with some effort this can be resolved, e.g.
   if the server is removed, keep it internally as a removed server
   and, for segments connecting this server, trigger removal of
   replication agreements; or mark the last segment, when its removal
   is attempted, as pending, and once the server is removed also
   remove the corresponding repl agreements
   Why should we keep it internally?
  If you mark the agreements as managed by setting an attribute on
  them, then you will never have any issue recognizing a managed
  agreement in cn=config, and you will also immediately find out it
  is old as it is not backed by a segment so you will safely
  remove it.
  I didn't want to add new flags/fields to the replication agreements
  as long as everything can be handled by the data in the shared tree.
  
  We have to. I think it is a must or we will find numerous corner
  cases. Is there a specific reason why you do not want to add flags
  to replication agreements in cn=config ?
  
  "internally" was probably misleading, but I will think about it
  again
  
  Ok, it is important we both understand what issues we see with any
  of the possible approaches so we can agree on the best one.
  
  Segments (and their agreements) should be removed as a trigger on
  the master entry getting removed. This should be done even if it
  causes a split brain, because if the server is removed, no matter
  how much we wish to keep topology integrity, we effectively are
  in a split-brain situation; keeping topology agreements alive w/o
  the server entry doesn't help.
  If we can agree on that (that presence/removal of masters is the
  primary trigger), that's fine.
  
  Yes I think we can definitely agree that this is the primary trigger
  for server removal/addition.
  
  I was thinking of situations where a server was removed, 
  but not uninstalled.
  
  Understood, but even then it makes no real difference; once the
  server is removed from the group of masters it will not be able to
  replicate outbound anymore, as the other masters' ACIs will not
  recognize this server's credentials as valid replicator creds.
  
  Just taking it out of 

Re: [Freeipa-devel] [PATCH 0170] Detect and warn about invalid forwardzone configuration

2014-12-11 Thread Petr Spacek
On 11.12.2014 16:50, Martin Basti wrote:
 Updated patch attached:

Nice work, ACK!

-- 
Petr^2 Spacek
