Re: [Freeipa-devel] [PATCHES] Remove dependencies to private samba libs

2012-09-06 Thread Martin Kosek
On 09/03/2012 02:03 PM, Alexander Bokovoy wrote:
 - Original Message -
 From: Sumit Bose sb...@redhat.com
 To: freeipa-devel freeipa-devel@redhat.com
 Sent: Sunday, September 2, 2012 8:04:22 PM
 Subject: [Freeipa-devel] [PATCHES] Remove dependencies to private samba libs

 Hi,

 private samba libraries get a new version with every new samba
 release. To avoid rebuilding IPA whenever a new samba version is
 released, the following series of patches removes the dependencies on
 private samba libraries by replacing the related calls with alternatives.

 This should fix https://fedorahosted.org/freeipa/ticket/3013
 ACK to all of them, I checked and they work correctly.
 
 I had concerns about the strdup upper patch, but talloc_strdup_upper() avoids
 reallocation of memory for our use case (storing the security key).
 

Pushed them all to master, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 197 Fixed search in HBAC test

2012-09-06 Thread Petr Vobornik

On 09/05/2012 07:24 PM, Endi Sukma Dewata wrote:

On 9/3/2012 6:28 AM, Petr Vobornik wrote:

b) force refresh when searching with unchanged filter


I did (b). Updated patch attached.

I don't want to implement 'expiration date' at the moment. It's too
widespread a change. Maybe in FreeIPA 3.2.


ACK.


Pushed to master and ipa-3-0.

--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 203 Notify success on add, delete and update

2012-09-06 Thread Petr Vobornik

On 09/05/2012 07:25 PM, Endi Sukma Dewata wrote:

On 9/3/2012 8:35 AM, Petr Vobornik wrote:

Notification of success was added to:
  * details facet: update
  * association facet and association widget: add, delete items
  * attribute facet: delete items (notification of add should be handled
in entity adder dialog)
  * sudo rule: add, remove option
  * dnsrecord: add, update, delete

https://fedorahosted.org/freeipa/ticket/2977


ACK. Some minor issues below:


Pushed to master.



1. The notifications for operations involving multiple entries are a bit
inconsistent. In the search facet, deleting entries will generate
'Selected entries were deleted.' Enabling/disabling entries will
generate 'n items were enabled/disabled'. In the association facet and
HBAC/sudo rules, adding/deleting entries will generate 'Items
added/removed'. For sudo options it's 'Option'/'Option(s) added/removed'.



To change the notifications we should also review the confirmation messages. For
example, the delete message reads 'Do you want to delete selected items'.


I guess if we want to change them, we should make them more specific and
avoid general words like 'items' or 'entries'. Btw, which of the two is better?



2. The notifications for operations involving a single entry are a bit
inconsistent too, but I think this is an existing issue. If you add an
entry the notification will say 'entity successfully added'. It
doesn't show the primary key, which may be generated by the server. As a
comparison, if you delete an entry from the details page it will say
'Deleted entity primary key', and for an update operation 'entity
primary key updated'.



In the adder dialog we should probably use the message sent by the server,
as we do in the other cases.



3. Also an existing issue: if you add using 'Add and Add Another' the
notification doesn't fade away. Does it matter?


I don't think we should change it, because by the time it would
fade away the user will be filling in some fields and might be
surprised by a sudden move in the dialog.




Feel free to address this separately.



--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 199 Permissions: select only applicable options on type change

2012-09-06 Thread Petr Vobornik

On 09/05/2012 07:24 PM, Endi Sukma Dewata wrote:

On 9/3/2012 5:59 AM, Petr Vobornik wrote:

Updated patch attached.


ACK.


Pushed to master and ipa-3-0.

--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 209 Fixed problem while deleting entry with unsaved changes

2012-09-06 Thread Petr Vobornik

On 09/06/2012 12:46 AM, Endi Sukma Dewata wrote:

On 9/5/2012 10:00 AM, Petr Vobornik wrote:

When deleting an entry, it now resets the facet if there are unsaved
changes. This prevents various error dialogs from popping up when the UI tries to
redirect to the search page after a successful delete.

https://fedorahosted.org/freeipa/ticket/3047


ACK.


Pushed to master, ipa-3-0.

--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 206-208 Fixed number parameters MIN boundary validation in Web UI

2012-09-06 Thread Petr Vobornik

On 09/05/2012 11:40 PM, Endi Sukma Dewata wrote:

On 9/5/2012 9:08 AM, Petr Vobornik wrote:

Integers were missing most of the minimum checks, and Decimal boundaries
weren't checked at all in the Web UI.

The first part is done in ipalib, the second in the Web UI.

1) [PATCH] 206 Fixed metadata serialization of Numbers and DNs:
There were the following problems:
1. DNs and Decimals weren't properly serialized. The serialization output
was an object with an empty __base64__ attribute. It was fixed by converting
them to strings.
2. Numerical values equal to 0 were excluded from metadata. This broke
many of the minvalue checks in the Web UI. Now only None and False values
are excluded, as initially intended.
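
To illustrate the second problem, here is a minimal standalone sketch (not the
actual ipalib serializer; the dict and key names are made up) of how a
truthiness-based filter drops values equal to 0, while an explicit None/False
check keeps them:

# Hypothetical illustration of the serialization pitfall above, not ipalib code.
param_meta = {'minvalue': 0, 'maxvalue': 255, 'label': None, 'required': False}

# A truthiness-based filter silently drops minvalue=0 along with None and False:
broken = dict((k, v) for k, v in param_meta.items() if v)
assert broken == {'maxvalue': 255}

# Excluding only the values that were meant to be excluded keeps the 0:
fixed = dict((k, v) for k, v in param_meta.items()
             if v is not None and v is not False)
assert fixed == {'minvalue': 0, 'maxvalue': 255}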


ACK.


2) [PATCH] 207 Added decimal checks to metadata validator:
The metadata validator didn't have a check for decimal values. It was added.


ACK.


3) [PATCH] 208 Generated metadata for testing updated:
The testing metadata needs to be updated because of the fix in JSON
serialization.


ACK.


Pushed all 3 patches to master, ipa-3-0.

--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 204 Update of confirmation of actions

2012-09-06 Thread Petr Vobornik

On 09/05/2012 09:30 PM, Endi Sukma Dewata wrote:

On 9/3/2012 11:05 AM, Petr Vobornik wrote:

This patch changes the confirmation of actions according to ticket
#3035; see the ticket description.

It makes the following changes:
  * Confirmation of the update action was removed.
  * Action lists reset to the first action (which is usually a NOP: '--
select action --') on change of the displayed entry.
  * A new confirmation dialog was implemented. It is used for action
confirmation, in IPA.action, to replace the call to
window.confirm(message). The old call is a modal window which blocks all
JS functionality and has a different style than other dialogs in the Web UI.
The new one has the same design and doesn't block background operations.

  https://fedorahosted.org/freeipa/ticket/3035


Some issues:

1. None of the confirmation dialogs have a default button. If you hit
Enter it will do nothing (except #2 below). All buttons are greyed out
until you hover with mouse or focus with Tab. Is this intentional? I
think usually a confirmation dialog would have a default button.

2. In the Users search facet the confirmation dialog doesn't show
default button either, but if you hit Enter it will execute the
operation. This is inconsistent with #1.



The root of the problem is that the old custom dialogs which serve for
confirmation (batch delete, various certificate and unprovisioning
dialogs) or other purposes (password change, adder dialogs) don't have
confirmation by Enter. Only the new dialog has confirmation by Enter. At
the moment this new dialog is used only for mass enable/disable.


We might:
a) remove confirmation by pressing the Enter key from the new
dialog, to be consistent. IMO wrong.

b) add this confirmation to all dialogs
c) change the old confirm dialogs to inherit from IPA.confirm_dialog to
receive this functionality


IMO, for most dialogs c) would be better, but for some (highly
customized) ones b). It should probably be done in another patch, maybe in the
context of this ticket.


There is also: https://fedorahosted.org/freeipa/ticket/2910 ([Web UI] 
Use Enter to confirm add dialogs)


--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1041 pull in cachememsize logging

2012-09-06 Thread Martin Kosek
On 09/05/2012 08:04 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 389-ds-base added logging if the entry cache is smaller than the
 database so users will know they need to tune their DS install. Set this
 as the minimum for IPA.

 rob
 
 Rebased patch.
 
 rob
 

It looks like a changelog entry from your patch 1031-4 slipped into this
patch. Otherwise it is an obvious ACK.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] [PATCH] 304 Allow localhost in zone ACIs

2012-09-06 Thread Martin Kosek
Loopback address, localhost and localnets ACIs are no longer
an issue for bind-dyndb-ldap. Allow them in our validators.

-- 
Martin Kosek mko...@redhat.com
Senior Software Engineer - Identity Management Team
Red Hat Inc.
From 74dcac478622c502bab7aef9ba7bade0bd9a704f Mon Sep 17 00:00:00 2001
From: Martin Kosek mko...@redhat.com
Date: Thu, 6 Sep 2012 11:34:02 +0200
Subject: [PATCH] Allow localhost in zone ACIs

Loopback address, localhost and localnets ACIs are no longer
an issue for bind-dyndb-ldap. Allow them in our validators.
---
 ipalib/plugins/dns.py | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/ipalib/plugins/dns.py b/ipalib/plugins/dns.py
index 3987001f06dba1bcc5a311243e4f1fdcf83091c7..e9f8b0cc0103706c5bbf933b14c372c369ff86b2 100644
--- a/ipalib/plugins/dns.py
+++ b/ipalib/plugins/dns.py
@@ -299,18 +299,15 @@ def _validate_bind_aci(ugettext, bind_acis):
 bind_acis.pop(-1)
 
 for bind_aci in bind_acis:
-if bind_aci in ("any", "none"):
+if bind_aci in ("any", "none", "localhost", "localnets"):
 continue

-if bind_aci in ("localhost", "localnets"):
-return _('ACL name "%s" is not supported') % bind_aci
-
 if bind_aci.startswith('!'):
 bind_aci = bind_aci[1:]
 
 try:
 ip = CheckedIPAddress(bind_aci, parse_netmask=True,
-  allow_network=True)
+  allow_network=True, allow_loopback=True)
 except (netaddr.AddrFormatError, ValueError), e:
 return unicode(e)
 except UnboundLocalError:
@@ -335,7 +332,7 @@ def _normalize_bind_aci(bind_acis):
 
 try:
 ip = CheckedIPAddress(bind_aci, parse_netmask=True,
-  allow_network=True)
+  allow_network=True, allow_loopback=True)
if '/' in bind_aci:  # addr with netmask
netmask = "/%s" % ip.prefixlen
 else:
-- 
1.7.11.4

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] [PATCH 0011] Make sure selinuxusermap behaves consistently to HBAC rule

2012-09-06 Thread Tomas Babej

On 09/05/2012 01:56 PM, Martin Kosek wrote:

On 09/03/2012 05:12 PM, Tomas Babej wrote:

Hi,

Both selinuxusermap-add and selinuxusermap-mod commands now behave
consistently in not allowing user/host category or user/host members
and HBAC rule being set at the same time. Also adds a bunch of unit
tests that check this behaviour.

https://fedorahosted.org/freeipa/ticket/2983

Tomas


I found a few issues with this patch:

1) Patch needs a rebase

2) The patch does not expect attributes to be set to None, i.e. to be left empty or
to be deleted, e.g.:

# ipa selinuxusermap-add foo --selinuxuser=guest_u:s0 --usercat=all --hbacrule=
ipa: ERROR: HBAC rule and local members cannot both be set

# ipa selinuxusermap-add foo --selinuxuser=guest_u:s0 --usercat=all

Added SELinux User Map foo

   Rule name: foo
   SELinux User: guest_u:s0
   User category: all
   Enabled: TRUE

# ipa selinuxusermap-mod foo --usercat= --hbacrule=
ipa: ERROR: HBAC rule and local members cannot both be set

# ipa selinuxusermap-mod foo --usercat=
---
Modified SELinux User Map foo
---
   Rule name: foo
   SELinux User: guest_u:s0
   Enabled: TRUE

# ipa selinuxusermap-mod foo --hbacrule=foo
---
Modified SELinux User Map foo
---
   Rule name: foo
   SELinux User: guest_u:s0
   HBAC Rule: foo
   Enabled: TRUE

# ipa selinuxusermap-mod foo --hbacrule= --usercat=all
ipa: ERROR: HBAC rule and local members cannot both be set

None of these validation failures are valid.

3) Additionally, I think it would be more readable and less error prone if,
instead of this blob:

+are_local_members_to_be_set  = 'usercategory' in _entry_attrs or \
+   'hostcategory' in _entry_attrs or \
+   'memberuser' in _entry_attrs or \
+   'memberhost' in _entry_attrs

you used something like this:

are_local_members_to_be_set  = any(attr in _entry_attrs
for attr in ('usercategory',
 'hostcategory',
 'memberuser',
 'memberhost'))
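
For completeness, a tiny standalone snippet confirming the two forms are
equivalent (the attribute names are the ones discussed above; the
_entry_attrs dict is mocked up for the example):

# Standalone check that the chained-or test and the any() form agree;
# _entry_attrs is mocked up for the example.
_entry_attrs = {'usercategory': ['all'], 'description': ['demo']}

with_or = ('usercategory' in _entry_attrs or
           'hostcategory' in _entry_attrs or
           'memberuser' in _entry_attrs or
           'memberhost' in _entry_attrs)

with_any = any(attr in _entry_attrs
               for attr in ('usercategory', 'hostcategory',
                            'memberuser', 'memberhost'))

assert with_or == with_any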

Martin

1.) Done.
2.) Corrected.
3.) Fixed.

Tomas
From d77004bce8644b0f3a64860174539c9a2d640ef1 Mon Sep 17 00:00:00 2001
From: Tomas Babej tba...@redhat.com
Date: Thu, 6 Sep 2012 07:03:42 -0400
Subject: [PATCH] Make sure selinuxusermap behaves consistently to HBAC rule

Both selinuxusermap-add and selinuxusermap-mod commands now behave
consistently in not allowing user/host category or user/host members
and HBAC rule being set at the same time. Also adds a bunch of unit
tests that check this behaviour.

https://fedorahosted.org/freeipa/ticket/2983
---
 ipalib/plugins/selinuxusermap.py|  49 +--
 tests/test_xmlrpc/test_selinuxusermap_plugin.py | 179 
 2 files changed, 216 insertions(+), 12 deletions(-)

diff --git a/ipalib/plugins/selinuxusermap.py b/ipalib/plugins/selinuxusermap.py
index 13bbb58ec0e6b7bd4275be17198c7452090a0781..131a83ce5674628041601e98028a013d47b40a4a 100644
--- a/ipalib/plugins/selinuxusermap.py
+++ b/ipalib/plugins/selinuxusermap.py
@@ -242,8 +242,21 @@ class selinuxusermap_add(LDAPCreate):
 # rules are enabled by default
 entry_attrs['ipaenabledflag'] = 'TRUE'
 validate_selinuxuser_inlist(ldap, entry_attrs['ipaselinuxuser'])
-if 'seealso' in options:
-entry_attrs['seealso'] = self.obj._normalize_seealso(options['seealso'])
+
+# hbacrule is not allowed when usercat or hostcat is set
+is_to_be_set = lambda x : x in entry_attrs and entry_attrs[x]!=None
+
+are_local_members_to_be_set  = any(is_to_be_set(attr)
+   for attr in ('usercategory',
+'hostcategory'))
+
+is_hbacrule_to_be_set = is_to_be_set('seealso')
+
+if is_hbacrule_to_be_set and are_local_members_to_be_set:
+raise errors.MutuallyExclusiveError(reason=notboth_err)
+
+if is_hbacrule_to_be_set:
+entry_attrs['seealso'] = self.obj._normalize_seealso(entry_attrs['seealso'])
 
 return dn
 
@@ -276,18 +289,30 @@ class selinuxusermap_mod(LDAPUpdate):
 except errors.NotFound:
 self.obj.handle_not_found(*keys)
 
-if 'seealso' in options and ('usercategory' in _entry_attrs or
-  'hostcategory' in _entry_attrs or
-  'memberuser' in _entry_attrs or
-  'memberhost' in _entry_attrs):
+# makes sure the local members and hbacrule is not set at the same time
+# memberuser or memberhost could have been set using --setattr
+is_to_be_set = lambda x: (x in _entry_attrs and 

Re: [Freeipa-devel] [PATCH] 304 Allow localhost in zone ACIs

2012-09-06 Thread Petr Vobornik

On 09/06/2012 11:51 AM, Martin Kosek wrote:

Loopback address, localhost and localnets ACIs are no longer
an issue for bind-dyndb-ldap. Allow them in our validators.



Martin's patch works and looks good - ACK.


Attaching patch for Web UI part.
--
Petr Vobornik
From 6777b81c95d0e34f216954a59341679471a8d134 Mon Sep 17 00:00:00 2001
From: Petr Vobornik pvobo...@redhat.com
Date: Thu, 6 Sep 2012 13:22:21 +0200
Subject: [PATCH] Allow localhost in zone ACIs - Web UI

Loopback address, localhost and localnets ACIs are no longer
an issue for bind-dyndb-ldap. Allow them in our Web UI validators as well.
---
 install/ui/dns.js | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/install/ui/dns.js b/install/ui/dns.js
index 33db481b84c0518ec1b326f8b016a8e487e3120b..43703e03f3e6dc4061f52d1f865db85b0e9c8502 100644
--- a/install/ui/dns.js
+++ b/install/ui/dns.js
@@ -168,11 +168,8 @@ IPA.dns.zone_entity = function(spec) {
 type: 'netaddr',
 name: 'idnsallowquery',
 validators: [
-IPA.unsupported_validator({
-unsupported: ['localhost', 'localnets']
-}),
 IPA.network_validator({
-specials: ['any', 'none'],
+specials: ['any', 'none', 'localhost', 'localnets'],
 allow_negation: true,
 allow_host_address: true
 })]
@@ -181,11 +178,8 @@ IPA.dns.zone_entity = function(spec) {
 type: 'netaddr',
 name: 'idnsallowtransfer',
 validators: [
-IPA.unsupported_validator({
-unsupported: ['localhost', 'localnets']
-}),
 IPA.network_validator({
-specials: ['any', 'none'],
+specials: ['any', 'none', 'localhost', 'localnets'],
 allow_negation: true,
 allow_host_address: true
 })]
-- 
1.7.11.4

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Martin Kosek
On 09/05/2012 08:06 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/03/2012 04:41 PM, Rob Crittenden wrote:
 Deleting a replica can leave a replication vector (RUV) on the
 other servers.
 This can confuse things if the replica is re-added, and it also
 causes the
 server to calculate changes against a server that may no longer exist.

 389-ds-base provides a new task that self-propagates itself to all
 available
 replicas to clean this RUV data.

 This patch will create this task at deletion time to hopefully
 clean things up.

 It isn't perfect. If any replica is down or unavailable at the time
 the
 cleanruv task fires, and then comes back up, the old RUV data may be
 re-propagated around.

 To make things easier in this case I've added two new commands to
 ipa-replica-manage. The first lists the replication ids of all the
 servers we
 have a RUV for. Using this you can call clean_ruv with the
 replication id of a
 server that no longer exists to try the cleanallruv step again.

 This is quite dangerous though. If you run cleanruv against a
 replica id that
 does exist it can cause a loss of data. I believe I've put in
 enough scary
 warnings about this.

 rob
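
For reviewers who have not used it before: the CLEANALLRUV mechanism described
above is driven by a plain task entry under cn=tasks,cn=config. A rough, hedged
sketch with python-ldap follows; the task DN and attribute names
(replica-base-dn, replica-id) are assumptions based on the 389-ds task
documentation, not code from this patch:

# Hedged sketch only: create a 389-ds CLEANALLRUV task entry with python-ldap.
# The task DN and attribute names are assumptions based on 389-ds docs,
# not code from the patch under review.
import ldap
import ldap.modlist

def create_cleanallruv_task(conn, suffix, replica_id):
    # conn: an LDAP connection already bound as cn=Directory Manager
    dn = 'cn=clean %d,cn=cleanallruv,cn=tasks,cn=config' % replica_id
    entry = {
        'objectclass': ['top', 'extensibleObject'],
        'cn': ['clean %d' % replica_id],
        'replica-base-dn': [suffix],
        'replica-id': [str(replica_id)],
    }
    conn.add_s(dn, ldap.modlist.addModlist(entry))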


 Good work there, this should make cleaning RUVs much easier than
 with the
 previous version.

 This is what I found during review:

 1) The list_ruv and clean_ruv command help in the man page is quite lost. I think
 it would help if we, for example, had all the info for the commands indented. As it
 is now, the user could simply overlook the new commands in the man page.


 2) I would rename new commands to clean-ruv and list-ruv to make them
 consistent with the rest of the commands (re-initialize, force-sync).


 3) It would be nice to be able to run clean_ruv command in an
 unattended way
 (for better testing), i.e. respect --force option as we already do for
 ipa-replica-manage del. This fix would aid test automation in the
 future.


 4) (minor) The new question (and the del too) does not react too
 well for
 CTRL+D:

 # ipa-replica-manage clean_ruv 3 --force
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: unexpected error:


 5) Help for clean_ruv command without a required parameter is quite
 confusing
 as it reports that command is wrong and not the parameter:

 # ipa-replica-manage clean_ruv
 Usage: ipa-replica-manage [options]

 ipa-replica-manage: error: must provide a command [clean_ruv |
 force-sync |
 disconnect | connect | del | re-initialize | list | list_ruv]

 It seems you just forgot to specify the error message in the command
 definition


 6) When the remote replica is down, the clean_ruv command fails with an
 unexpected error:

 [root@vm-086 ~]# ipa-replica-manage clean_ruv 5
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: y
 unexpected error: {'desc': 'Operations error'}


 /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: failed
 to connect to replagreement connection
 (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

 cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping
 tree,cn=config), error 105
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: replica
 (cn=meTovm-055.idm.lab.
 bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping



 tree,   cn=config) has not been cleaned.  You will need to rerun the
 CLEANALLRUV task on this replica.
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: Task
 failed (1)

 In this case I think we should inform the user that the command failed, possibly
 because of disconnected replicas, and that they could enable the replicas and
 try again.


 7) (minor) pass is now redundant in replication.py:
 +except ldap.INSUFFICIENT_ACCESS:
 +# We can't make the server we're removing read-only but
 +# this isn't a show-stopper
 +root_logger.debug("No permission to switch replica to read-only, continuing anyway")
 +pass


 I think this addresses everything.

 rob

 Thanks, almost there! I just found one more issue which needs to be fixed
 before we push:

 # ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
 Directory Manager password:

 Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing
 removal
 Failed to get data from 

Re: [Freeipa-devel] [PATCH] 304 Allow localhost in zone ACIs

2012-09-06 Thread Martin Kosek
On 09/06/2012 01:35 PM, Petr Vobornik wrote:
 On 09/06/2012 11:51 AM, Martin Kosek wrote:
 Loopback address, localhost and localnets ACIs are no longer
 an issue for bind-dyndb-ldap. Allow them in our validators.

 
 Martin's patch works and looks good - ACK.
 
 
 Attaching patch for Web UI part.

Web UI validator works fine too, ACK.

Pushed both patches to master, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Martin Kosek
On 09/05/2012 08:13 PM, Rich Megginson wrote:
 On 09/05/2012 12:08 PM, Rob Crittenden wrote:
 Add support for the 389-ds posix winsync plugin. This plugin will sync the
 POSIX attributes from AD. We need to avoid trying to re-add them in our 
 plugin.
 ack

I did a sanity check that winsync replication still works; everything looks OK.

Pushed to master, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Simo Sorce
On Thu, 2012-09-06 at 14:30 +0200, Martin Kosek wrote:
 On 09/05/2012 08:13 PM, Rich Megginson wrote:
  On 09/05/2012 12:08 PM, Rob Crittenden wrote:
  Add support for the 389-ds posix winsync plugin. This plugin will sync the
  POSIX attributes from AD. We need to avoid trying to re-add them in our 
  plugin.
  ack
 
 I did a sanity check, that winsync replication still works, everything looks 
 OK.
 
 Pushed to master, ipa-3-0.

Is this plugin configurable in IPA?
From the commit I can't tell whether it is enabled by default or not.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] Patch to allow IPA to work with dogtag 10 on f18

2012-09-06 Thread Rob Crittenden

Ade Lee wrote:

On Wed, 2012-09-05 at 16:20 -0400, Rob Crittenden wrote:

Martin Kosek wrote:

On 08/31/2012 04:53 PM, Petr Viktorin wrote:

On 08/28/2012 03:40 PM, Petr Viktorin wrote:

On 08/17/2012 06:04 PM, Ade Lee wrote:

On Fri, 2012-08-17 at 09:34 -0400, Ade Lee wrote:

On Thu, 2012-08-16 at 18:45 +0200, Martin Kosek wrote:

On 08/16/2012 01:28 PM, Ade Lee wrote:

Patch attached this time.  I should know better than to do this in the
middle of the night ..

On Thu, 2012-08-16 at 09:12 +0200, Martin Kosek wrote:

On 08/16/2012 07:53 AM, Ade Lee wrote:

On Wed, 2012-08-15 at 23:41 -0400, Ade Lee wrote:

On Wed, 2012-08-15 at 16:34 +0200, Martin Kosek wrote:

On 08/15/2012 03:54 PM, Ade Lee wrote:

On Wed, 2012-08-15 at 13:24 +0200, Martin Kosek wrote:

On 08/08/2012 10:05 PM, Ade Lee wrote:

Hi,

Dogtag 10 is being released on f18, and has a number of
changes that
will affect IPA.  In particular, the following changes will
affect
current IPA code.

* The directory layout of the dogtag instance has changed.
Instead of
using separate tomcat instances to host different
subsystems, the
standard dogtag installation will allow one to install a CA,
KRA, OCSP
and TKS within the same instance.  There have been
corresponding changes
in the directory layout, as well as the default instance name
(pki-tomcat instead of pki-ca), and startup daemon
(pki-tomcatd, instead
of pki-cad, pki-krad etc.)

* The default instance will use only four ports (HTTPS,
HTTP, AJP and
tomcat shutdown port) rather than the 6 previously used.
The default
ports will be changed to the standard tomcat ports.  As
these ports are
local to the ipa server machine, this should not cause too much
disruption.

* There is a new single-step installer written in Python
(pkispawn/destroy) vs. pkicreate/pkisilent/pkiremove.

* Dogtag 10 runs on tomcat7 - with a new corresponding
version of
tomcatjss.

The attached patch integrates all the above changes in IPA
installation
and maintenance code.  Once the patch is applied, users will
be able to:

1. run ipa-server-install to completion on f18 with dogtag 10.
2. install a new replica on f18 on dogtag 10.
3. upgrade an f17 machine with an existing IPA instance to
f18/ dogtag
10 - and have that old-style dogtag instance continue to run
correctly.
This will require the installation of the latest version of
tomcatjss as
well as the installation of tomcat6.  The old-style instance
will
continue to use tomcat6.
4. in addition, the new cert renewal code has been patched
and should
continue to work.

What is not yet completed / supported:

1. Installation with an external CA is not yet completed in
the new
installer.  We plan to complete this soon.

2. There is some IPA upgrade code that has not yet been touched
(install/tools/ipa-upgradeconfig).

3. A script needs to be written to allow admins to convert
their
old-style dogtag instances to new style instances, as well
as code to
periodically prompt admins to do this.

4. Installation of old-style instances using
pkicreate/pkisilent on
dogtag 10 will no longer be supported, and will be disabled
soon.

5.  The pki-selinux policy has been updated to reflect these
changes,
but is still in flux.  In fact, it is our intention to place
the dogtag
selinux policy in the base selinux policy for f18.  In the
meantime, it
may be necessary to run installs in permissive mode.

The dogtag 10 code will be released shortly into f18.  Prior
to that
though, we have placed the new dogtag 10 and tomcatjss code
in a
developer repo that is located at
http://nkinder.fedorapeople.org/dogtag-devel/

Testing can be done on both f18 and f17 - although the
target platform -
and the only platform for which official builds will be
created is f18.

Thanks,
Ade



Hi Ade,

Thanks for the patch, I started with review and integration
tests (currently
running on Fedora 17 with Nathan's repo).

Installation on a single master was smooth, it worked just fine, even with
SELinux enforcing, without any error - kudos to you and the whole dogtag team.
The resulting logs and the structure of your log directory seem improved. I
believe that the brand new Python installers will make it easier to debug
issues with dogtag installation when they come. When I tried our unit tests or
some simple cert operations, it worked fine as well.

Now the bad news, or rather a few issues and suggestions I found:

1) As we already discussed on IRC, tomcat 7 was not pulled automatically on
Fedora 17 when I updated pki-ca; you are missing a Requires somewhere.


We have a dogtag patch that is currently in review that will address
this. Once this is in, tomcatjss >= 7.0.0 will be required for f17+,
rather than f18+.


2) I had installed IPA with dogtag 10 on the master. However, CA
installation on a
replica (ipa-ca-install) with dogtag 9 failed - is this
expected?


Yes.  The current IPA patch is designed to work with dogtag 10
only,
which will be officially available on f18+.  So if you update
to dogtag
10, you must have this patch and 

Re: [Freeipa-devel] [PATCH] 1046 add e-mail by default

2012-09-06 Thread Martin Kosek
On 08/24/2012 07:54 PM, Rob Crittenden wrote:
 We weren't automatically creating the mail attribute despite having the 
 default
 e-mail domain. This patch will add it to all new users.
 
 To disable creating this set the default e-mail domain to empty in ipa config.
 
 rob
 

1) Patch needs a rebase

2) There are 2 test cases where new default mail attribute was not added:

==
FAIL: test_user[34]: user_find: Search for tuser2 with manager tuser1
--
...
  extra keys = ['mail']
...

==
FAIL: test_user[75]: user_add: Create 2nd admin user admin2
--
...
  extra keys = ['mail']
...

3) Some code could be simplified:

This:
+if 'ipadefaultemaildomain' in config:
+defaultdomain = config['ipadefaultemaildomain'][0]
+else:
+defaultdomain = None

To this:
 defaultdomain = config.get('ipadefaultemaildomain', [None])[0]


This:
if m.find('@') == -1 ...

To this:
if '@' not in m ...

IMHO, it is more readable than the find method.
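
A quick standalone illustration of both simplifications, with a mocked-up
config dict (not the real ipa config object):

# Mocked-up config; illustrates the two suggestions above.
config = {'ipadefaultemaildomain': [u'example.com']}

# dict.get() with a default replaces the if/else block:
defaultdomain = config.get('ipadefaultemaildomain', [None])[0]
assert defaultdomain == u'example.com'

# membership test instead of str.find():
m = u'fbar'
if '@' not in m and defaultdomain:
    m = m + u'@' + defaultdomain
assert m == u'fbar@example.com'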

4) When the default e-mail domain is removed from the config, users cannot be added any
more when the e-mail is not explicitly specified:

# ipa config-mod --emaildomain=
  Maximum username length: 32
  Home directory base: /home
  Default shell: /bin/sh
  Default users group: ipausers
  Search time limit: 2
  Search size limit: 100
  User search fields: uid,givenname,sn,telephonenumber,ou,title
  Group search fields: cn,description
  Enable migration mode: FALSE
  Certificate Subject base: O=IDM.LAB.BOS.REDHAT.COM
  Password Expiration Notification (days): 4
  Password plugin features: AllowNThash
  SELinux user map order:
guest_u:s0$xguest_u:s0$user_u:s0-s0:c0.c1023$staff_u:s0-s0:c0.c1023$unconfined_u:s0-s0:c0.c1023
  Default SELinux user: guest_u:s0
  PAC type: MS-PAC

# ipa user-add --first=Foo --last=Bar fbar
ipa: ERROR: invalid 'email': invalid e-mail format: fbar

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] Patch to allow IPA to work with dogtag 10 on f18

2012-09-06 Thread Petr Viktorin

On 09/06/2012 02:35 PM, Rob Crittenden wrote:

Ade Lee wrote:

On Wed, 2012-09-05 at 16:20 -0400, Rob Crittenden wrote:

Martin Kosek wrote:

On 08/31/2012 04:53 PM, Petr Viktorin wrote:

On 08/28/2012 03:40 PM, Petr Viktorin wrote:

On 08/17/2012 06:04 PM, Ade Lee wrote:

On Fri, 2012-08-17 at 09:34 -0400, Ade Lee wrote:

On Thu, 2012-08-16 at 18:45 +0200, Martin Kosek wrote:

On 08/16/2012 01:28 PM, Ade Lee wrote:

Patch attached this time.  I should know better than to do
this in the
middle of the night ..

On Thu, 2012-08-16 at 09:12 +0200, Martin Kosek wrote:

On 08/16/2012 07:53 AM, Ade Lee wrote:

On Wed, 2012-08-15 at 23:41 -0400, Ade Lee wrote:

On Wed, 2012-08-15 at 16:34 +0200, Martin Kosek wrote:

On 08/15/2012 03:54 PM, Ade Lee wrote:

On Wed, 2012-08-15 at 13:24 +0200, Martin Kosek wrote:

On 08/08/2012 10:05 PM, Ade Lee wrote:

Hi,

Dogtag 10 is being released on f18, and has a number of
changes that
will affect IPA.  In particular, the following changes
will
affect
current IPA code.

* The directory layout of the dogtag instance has changed.
Instead of
using separate tomcat instances to host different
subsystems, the
standard dogtag installation will allow one to install
a CA,
KRA, OCSP
and TKS within the same instance.  There have been
corresponding changes
in the directory layout, as well as the default
instance name
(pki-tomcat instead of pki-ca), and startup daemon
(pki-tomcatd, instead
of pki-cad, pki-krad etc.)

* The default instance will use only four ports (HTTPS,
HTTP, AJP and
tomcat shutdown port) rather than the 6 previously used.
The default
ports will be changed to the standard tomcat ports.  As
these ports are
local to the ipa server machine, this should not cause
too much
disruption.

* There is a new single step installer written in python.
(pkispawn/destroy) vs. pkicreate/pkisilent/pkiremove.

* Dogtag 10 runs on tomcat7 - with a new corresponding
version of
tomcatjss.

The attached patch integrates all the above changes in IPA
installation
and maintenance code.  Once the patch is applied, users
will
be able to:

1. run ipa-server-install to completion on f18 with
dogtag 10.
2. install a new replica on f18 on dogtag 10.
3. upgrade an f17 machine with an existing IPA instance to
f18/ dogtag
10 - and have that old-style dogtag instance continue
to run
correctly.
This will require the installation of the latest
version of
tomcatjss as
well as the installation of tomcat6.  The old-style
instance
will
continue to use tomcat6.
4. in addition, the new cert renewal code has been patched
and should
continue to work.

What is not yet completed / supported:

1. Installation with an external CA is not yet
completed in
the new
installer.  We plan to complete this soon.

2. There is some IPA upgrade code that has not yet been
touched
(install/tools/ipa-upgradeconfig).

3. A script needs to be written to allow admins to convert
their
old-style dogtag instances to new style instances, as well
as code to
periodically prompt admins to do this.

4. Installation of old-style instances using
pkicreate/pkisilent on
dogtag 10 will no longer be supported, and will be
disabled
soon.

5.  The pki-selinux policy has been updated to reflect
these
changes,
but is still in flux.  In fact, it is our intention to
place
the dogtag
selinux policy in the base selinux policy for f18.  In the
meantime, it
may be necessary to run installs in permissive mode.

The dogtag 10 code will be released shortly into f18.
Prior
to that
though, we have placed the new dogtag 10 and tomcatjss
code
in a
developer repo that is located at
http://nkinder.fedorapeople.org/dogtag-devel/

Testing can be done on both f18 and f17 - although the
target platform -
and the only platform for which official builds will be
created is f18.

Thanks,
Ade



Hi Ade,

Thanks for the patch, I started with review and integration
tests (currently
running on Fedora 17 with Nathan's repo).

Installation on single master was smooth, it worked just
fine, even with
enforced SELinux, without any error - kudos to you and the
whole dogtag team.
The resulting logs and the structure of your log directory
seems improved. I
believe that the brand new Python installers will make it
easier to debug
issues with dogtag installation when they come.  When I
tried
our unit tests or
some simple cert operation, it worked fine as well.

Now the bad news, or rather few issues and suggestions I
found:

1) As we already discussed on IRC, tomcat 7 was not pulled
automatically on
Fedora 17 when I updated pki-ca, you somewhere miss a
Requires.


We have a dogtag patch that is currently in review that will
address
 this.  Once this is in, tomcatjss >= 7.0.0 will be
required for
f17+,
rather than f18+


2) I had installed IPA with dogtag10 on master. However, CA
installation on a
replica (ipa-ca-install) with dogtag9 failed - is this
expectable?


Yes.  The current IPA patch is designed to work with
dogtag 10
only,
which will be officially available on f18+.  So if you
update

Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Martin Kosek
On 09/06/2012 02:35 PM, Simo Sorce wrote:
 On Thu, 2012-09-06 at 14:30 +0200, Martin Kosek wrote:
 On 09/05/2012 08:13 PM, Rich Megginson wrote:
 On 09/05/2012 12:08 PM, Rob Crittenden wrote:
 Add support for the 389-ds posix winsync plugin. This plugin will sync the
 POSIX attributes from AD. We need to avoid trying to re-add them in our 
 plugin.
 ack

 I did a sanity check, that winsync replication still works, everything looks 
 OK.

 Pushed to master, ipa-3-0.
 
 Is this plugin configurable in IPA ?
From the commit I can't tell if it is enabled by default or not.
 
 Simo.
 

This plugin is configured and enabled by default when the IPA server is being
installed. You can find the LDIF with its configuration here:

daemons/ipa-slapi-plugins/ipa-winsync/ipa-winsync-conf.ldif

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 300-301 Fix DNS SOA serial parameters boundaries

2012-09-06 Thread Petr Viktorin

On 09/05/2012 01:31 PM, Martin Kosek wrote:

On 09/05/2012 12:26 PM, Petr Viktorin wrote:

On 09/05/2012 12:14 PM, Petr Viktorin wrote:

This works well, but please see some comments below.

On 09/04/2012 04:22 PM, Martin Kosek wrote:

To test, simply run the following command:

   ipa dnszone-mod example.com --serial=4294967295

This should work well on a patched server and client. The Web UI should work too
as it
reads the max limit dynamically.


Please put this in the test suite.


Done.




---
[PATCH 2/2] Fix DNS SOA serial parameters boundaries:

Set correct boundaries for DNS SOA serial parameters (see RFC 1035,
2181).


[PATCH 1/2] Transfer long numbers over XMLRPC

Numeric parameters in ipalib were limited by XMLRPC boundaries for
integer (2^31-1) which is too low for some LDAP attributes like DNS
SOA serial field.

Transfer numbers which are not in XMLRPC boundary as a string and not
as a number to workaround this limitation. Int parameter had to be
updated to also accept Python's long type as valid int type.


freeipa-mkosek-300-transfer-long-numbers-over-xmlrpc.patch



 From 8782015a17b130c5ebae8b014a7241810b10dedd Mon Sep 17 00:00:00 2001

From: Martin Kosekmko...@redhat.com
Date: Tue, 4 Sep 2012 15:49:26 +0200
Subject: [PATCH 1/2] Transfer long numbers over XMLRPC

Numeric parameters in ipalib were limited by XMLRPC boundaries for
integer (2^31-1) which is too low for some LDAP attributes like DNS
SOA serial field.

Transfer numbers which are not in XMLRPC boundary as a string and not
as a number to workaround this limitation. Int parameter had to be
updated to also accept Python's long type as valid int type.

https://fedorahosted.org/freeipa/ticket/2568
---
   ipalib/parameters.py | 12 ++--
   ipalib/rpc.py|  5 -
   2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/ipalib/parameters.py b/ipalib/parameters.py
index
de0d14faf08d1ab79c99e65dab9cc08f406e3a1d..21e30356b2a351bf7a3be7d47d7fabf0130cf6d4

100644
--- a/ipalib/parameters.py
+++ b/ipalib/parameters.py
@@ -1077,7 +1077,7 @@ class Number(Param):
   
   if type(value) is self.type:
   return value
-if type(value) in (unicode, int, float):
+if type(value) in (unicode, int, long, float):



PEP8 says that "Object type comparisons should always use isinstance()
instead of comparing types directly."
It would be nice to change the old code whenever it is touched. It's
also in a few more places in the patch.



I considered doing this when I was developing the patch, but I did not want to
mix type/isinstance checks in the same code. When I experimented and tried to
replace it in a larger scope, there were unexpected issues, like the bool type
suddenly passing an isinstance(value, (int, long)) test and causing issues
later.

So I skipped this part (as we discussed off-list).
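
For the record, the gotcha is easy to reproduce in plain Python 2, without
ipalib:

# bool is a subclass of int in Python, so isinstance() cannot tell them apart,
# while an exact type() comparison can.
assert isinstance(True, (int, long))
assert type(True) is not int
assert type(True) is bool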


diff --git a/ipalib/rpc.py b/ipalib/rpc.py
index
d1764e3e30492d5855450398e86689bfcad7aa39..85239ac65903acf447a4d971cce70f819979ce8d

100644
--- a/ipalib/rpc.py
+++ b/ipalib/rpc.py
@@ -94,6 +95,8 @@ def xml_wrap(value):
   if type(value) is Decimal:
   # transfer Decimal as a string
   return unicode(value)
 +if isinstance(value, (int, long)) and (value < MININT or value > MAXINT):
+return unicode(value)
   if isinstance(value, DN):
   return str(value)
   assert type(value) in (unicode, int, float, bool, NoneType)


`long` should also be included here.

>>> api.Command.user_find(uidnumber=151100)
{'count': 1, 'truncated': False, 'result': ({...},), 'summary': u'1 user
matched'}
>>> api.Command.user_find(uidnumber=151100 + 0L)
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  ...
  File "/usr/lib/python2.7/site-packages/ipalib/rpc.py", line 102, in
xml_wrap
    assert type(value) in (unicode, int, float, bool, NoneType)
AssertionError
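
A hedged sketch of the kind of guard being discussed here, with long included;
this is a simplified stand-in, not the final committed xml_wrap code:

# Simplified stand-in for the hunk above with `long` handled; not the final
# committed code.  MININT/MAXINT mirror the XML-RPC i4 limits.
from types import NoneType

MAXINT = (1 << 31) - 1
MININT = -(1 << 31)

def wrap_number(value):
    # values outside the XML-RPC integer range travel as strings
    if isinstance(value, (int, long)) and (value < MININT or value > MAXINT):
        return unicode(value)
    # with `long` accepted here, 151100 + 0L no longer trips the assert
    assert type(value) in (unicode, int, long, float, bool, NoneType)
    return value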






One more thing: please update the VERSION file.



I did not want to do this because I just messed with validation, but yeah, I
can do that.

Fixed patches attached.



Both are good, ACK.


--
Petr³

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Rob Crittenden

Martin Kosek wrote:

On 09/06/2012 02:35 PM, Simo Sorce wrote:

On Thu, 2012-09-06 at 14:30 +0200, Martin Kosek wrote:

On 09/05/2012 08:13 PM, Rich Megginson wrote:

On 09/05/2012 12:08 PM, Rob Crittenden wrote:

Add support for the 389-ds posix winsync plugin. This plugin will sync the
POSIX attributes from AD. We need to avoid trying to re-add them in our plugin.

ack


I did a sanity check, that winsync replication still works, everything looks OK.

Pushed to master, ipa-3-0.


Is this plugin configurable in IPA ?
From the commit I can't tell if it is enabled by default or not.

Simo.



This plugin is configured and enabled by default when IPA server is being
installed. You can find LDIF with its configuration there:

daemons/ipa-slapi-plugins/ipa-winsync/ipa-winsync-conf.ldif

Martin


I think he means the posix-winsync plugin. It is configured by default 
in the latest 389-ds build. We do not enable it on upgrades.


rob

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 300-301 Fix DNS SOA serial parameters boundaries

2012-09-06 Thread Martin Kosek
On 09/06/2012 02:51 PM, Petr Viktorin wrote:
 On 09/05/2012 01:31 PM, Martin Kosek wrote:
 On 09/05/2012 12:26 PM, Petr Viktorin wrote:
 On 09/05/2012 12:14 PM, Petr Viktorin wrote:
 This works well, but please see some comments below.

 On 09/04/2012 04:22 PM, Martin Kosek wrote:
 To test, simply run the following command:

ipa dnszone-mod example.com --serial=4294967295

 This should work well on patched serverclient. Web UI should work too
 as it
 reads the max limit dynamically.

 Please put this in the test suite.

 Done.


 ---
 [PATCH 2/2] Fix DNS SOA serial parameters boundaries:

 Set correct boundaries for DNS SOA serial parameters (see RFC 1035,
 2181).


 [PATCH 1/2] Transfer long numbers over XMLRPC

 Numeric parameters in ipalib were limited by XMLRPC boundaries for
 integer (2^31-1) which is too low for some LDAP attributes like DNS
 SOA serial field.

 Transfer numbers which are not in XMLRPC boundary as a string and not
 as a number to workaround this limitation. Int parameter had to be
 updated to also accept Python's long type as valid int type.


 freeipa-mkosek-300-transfer-long-numbers-over-xmlrpc.patch


  From 8782015a17b130c5ebae8b014a7241810b10dedd Mon Sep 17 00:00:00 2001
 From: Martin Kosekmko...@redhat.com
 Date: Tue, 4 Sep 2012 15:49:26 +0200
 Subject: [PATCH 1/2] Transfer long numbers over XMLRPC

 Numeric parameters in ipalib were limited by XMLRPC boundaries for
 integer (2^31-1) which is too low for some LDAP attributes like DNS
 SOA serial field.

 Transfer numbers which are not in XMLRPC boundary as a string and not
 as a number to workaround this limitation. Int parameter had to be
 updated to also accept Python's long type as valid int type.

 https://fedorahosted.org/freeipa/ticket/2568
 ---
ipalib/parameters.py | 12 ++--
ipalib/rpc.py|  5 -
2 files changed, 10 insertions(+), 7 deletions(-)

 diff --git a/ipalib/parameters.py b/ipalib/parameters.py
 index
 de0d14faf08d1ab79c99e65dab9cc08f406e3a1d..21e30356b2a351bf7a3be7d47d7fabf0130cf6d4


 100644
 --- a/ipalib/parameters.py
 +++ b/ipalib/parameters.py
 @@ -1077,7 +1077,7 @@ class Number(Param):

if type(value) is self.type:
return value
 -if type(value) in (unicode, int, float):
 +if type(value) in (unicode, int, long, float):


 PEP8 says that Object type comparisons should always use isinstance()
 instead of comparing types directly.
 It would be nice to change the old code whenever it is touched. It's
 also in a few more places in the patch.


 I considered doing this when I was developing the patch, but I did not want 
 to
 mix type/isinstance in the same code. Now, when I experimented and tried to
 replace it in a larger scope, there were unexpected issues. Like bool type
 suddenly passing an isinstance(value, (int, long)) test and causing issues
 later.

 So I skipped this part (as we discussed off-list).

 diff --git a/ipalib/rpc.py b/ipalib/rpc.py
 index
 d1764e3e30492d5855450398e86689bfcad7aa39..85239ac65903acf447a4d971cce70f819979ce8d


 100644
 --- a/ipalib/rpc.py
 +++ b/ipalib/rpc.py
 @@ -94,6 +95,8 @@ def xml_wrap(value):
if type(value) is Decimal:
# transfer Decimal as a string
return unicode(value)
 +if isinstance(value, (int, long)) and (value < MININT or value > MAXINT):
 +return unicode(value)
if isinstance(value, DN):
return str(value)
assert type(value) in (unicode, int, float, bool, NoneType)

 `long` should also be included here.

api.Command.user_find(uidnumber=151100)
 {'count': 1, 'truncated': False, 'result': ({...},), 'summary': u'1 user
 matched'}
api.Command.user_find(uidnumber=151100 + 0L)
 Traceback (most recent call last):
 File console, line 1, in module
 ...
 File /usr/lib/python2.7/site-packages/ipalib/rpc.py, line 102, in
 xml_wrap
   assert type(value) in (unicode, int, float, bool, NoneType)
 AssertionError





 One more thing: please update the VERSION file.


 I did not want to do this because I just messed with validation, but yeah, I
 can do that.

 Fixed patches attached.

 
 Both are good, ACK.
 

Pushed to master, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1041 pull in cachememsize logging

2012-09-06 Thread Rob Crittenden

Martin Kosek wrote:

On 09/05/2012 08:04 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

389-ds-base added logging if the entry cache is smaller than the
database so users will know they need to tune their DS install. Set this
as the minimum for IPA.

rob


Rebased patch.

rob



It looks like a changelog entry from your patch 1031-4 slipped in to this
patch. Otherwise it is an obvious patch to ACK.

Martin



Yeah, I was in rebase hell and messed this up.

Fixed and pushed to master and ipa-3-0.

rob

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Simo Sorce
On Thu, 2012-09-06 at 14:49 +0200, Martin Kosek wrote:
 On 09/06/2012 02:35 PM, Simo Sorce wrote:
  On Thu, 2012-09-06 at 14:30 +0200, Martin Kosek wrote:
  On 09/05/2012 08:13 PM, Rich Megginson wrote:
  On 09/05/2012 12:08 PM, Rob Crittenden wrote:
  Add support for the 389-ds posix winsync plugin. This plugin will sync 
  the
  POSIX attributes from AD. We need to avoid trying to re-add them in our 
  plugin.
  ack
 
  I did a sanity check, that winsync replication still works, everything 
  looks OK.
 
  Pushed to master, ipa-3-0.
  
  Is this plugin configurable in IPA ?
 From the commit I can't tell if it is enabled by default or not.
  
  Simo.
  
 
 This plugin is configured and enabled by default when IPA server is being
 installed. You can find LDIF with its configuration there:
 
 daemons/ipa-slapi-plugins/ipa-winsync/ipa-winsync-conf.ldif

Do we really want to set it up by default?
What happens on upgrades?

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Rich Megginson

On 09/06/2012 06:35 AM, Simo Sorce wrote:

On Thu, 2012-09-06 at 14:30 +0200, Martin Kosek wrote:

On 09/05/2012 08:13 PM, Rich Megginson wrote:

On 09/05/2012 12:08 PM, Rob Crittenden wrote:

Add support for the 389-ds posix winsync plugin. This plugin will sync the
POSIX attributes from AD. We need to avoid trying to re-add them in our plugin.

ack

I did a sanity check, that winsync replication still works, everything looks OK.

Pushed to master, ipa-3-0.

Is this plugin configurable in IPA ?

Yes.

 From the commit I can't tell if it is enabled by default or not.

It is not enabled by default.


Simo.



___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCHES] Various fixes for trust and range CLI

2012-09-06 Thread Petr Vobornik

On 09/06/2012 01:39 PM, Sumit Bose wrote:

Hi,

this series of patches touches a couple of tickets related to the trust
and (id)range CLI. I post them together because some of them depend on
each other. I already rebased them on Martin's 'Add range safety check
for range_mod and range_del' patch, which I'm currently reviewing.

bye,
Sumit




ACK for the UI changes in patch 60.

There is a minor issue in the ipa_init_objects.json and
ipa_init_command.json files: labels aren't changed, so there is still
'Ranges' instead of 'ID ranges'. I don't think it matters because it
doesn't affect actual functionality, and I will definitely regenerate
those files sometime in the future. They are only for developer and
related purposes.

--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1050 prevent replica orphans

2012-09-06 Thread Martin Kosek
On 08/31/2012 07:40 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 It was possible to use ipa-replica-manage connect/disconnect/del to end up
 orphaning one or more IPA masters. This is an attempt to catch and
 prevent that case.

 I tested with this topology, trying to delete B.

 A - B - C

 I got here by creating B and C from A, connecting B to C then deleting
 the link from A to B, so it went from A - B and A - C to the above.

 What I do is look up the servers that the delete candidate host has
 connections to and see if we're the last link.

 I added an escape clause if there are only two masters.

 rob
 
 Oh, this relies on my cleanruv patch 1031.
 
 rob
 

1) When I run ipa-replica-manage del --force against an already uninstalled host,
the new code prevents the deletion because it cannot connect to it. It
also crashes with an UnboundLocalError:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force

Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing removal
Traceback (most recent call last):
  File /sbin/ipa-replica-manage, line 708, in module
main()
  File /sbin/ipa-replica-manage, line 677, in main
del_master(realm, args[1], options)
  File /sbin/ipa-replica-manage, line 476, in del_master
sys.exit("Failed read master data from '%s': %s" % (delrepl.hostname,
str(e)))
UnboundLocalError: local variable 'delrepl' referenced before assignment


I also hit this error when removing a winsync replica.


2) As I wrote before, I think having --force option override the user inquiries
would benefit test automation:

+if not ipautil.user_input("Continue to delete?", False):
+sys.exit("Aborted")


3) I don't think this code will cover this topology:

A - B - C - D - E

It would allow you to delete replica C even though it would separate A-B from
D-E (see the sketch below). Though we may not want to cover this situation now,
what you have is definitely helping.
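
To make the topology concern concrete, here is a small standalone sketch of the
kind of connectivity check that would catch the A - B - C - D - E case. It is
pure illustration (a plain breadth-first search over a mocked-up agreement
map), not the code from Rob's patch:

# Illustration only, not the patch: would deleting `victim` orphan any masters?
from collections import deque

def orphans_after_delete(agreements, victim):
    # agreements: dict master -> set of directly connected masters
    remaining = dict((m, peers - set([victim]))
                     for m, peers in agreements.items() if m != victim)
    if not remaining:
        return set()
    start = next(iter(remaining))
    seen, todo = set([start]), deque([start])
    while todo:
        for peer in remaining[todo.popleft()]:
            if peer not in seen:
                seen.add(peer)
                todo.append(peer)
    # anything unreachable from `start` would be cut off by the deletion
    return set(remaining) - seen

topology = {'A': set(['B']), 'B': set(['A', 'C']), 'C': set(['B', 'D']),
            'D': set(['C', 'E']), 'E': set(['D'])}
print orphans_after_delete(topology, 'C')   # one of the two separated halves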

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1046 add e-mail by default

2012-09-06 Thread Rob Crittenden

Martin Kosek wrote:

On 08/24/2012 07:54 PM, Rob Crittenden wrote:

We weren't automatically creating the mail attribute despite having the default
e-mail domain. This patch will add it to all new users.

To disable creating this set the default e-mail domain to empty in ipa config.

rob



1) Patch needs a rebase

2) There are 2 test cases where new default mail attribute was not added:

==
FAIL: test_user[34]: user_find: Search for tuser2 with manager tuser1
--
...
   extra keys = ['mail']
...

==
FAIL: test_user[75]: user_add: Create 2nd admin user admin2
--
...
   extra keys = ['mail']
...

3) Some code could be simplified:

This:
+if 'ipadefaultemaildomain' in config:
+defaultdomain = config['ipadefaultemaildomain'][0]
+else:
+defaultdomain = None

To this:
  defaultdomain = config.get('ipadefaultemaildomain', [None])[0]


This:
if m.find('@') == -1 ...

To this:
if '@' not in m ...

IMHO, it is more readable than the find method.

4) When the default e-mail domain is removed from the config, users cannot be added any
more when the e-mail is not explicitly specified:

# ipa config-mod --emaildomain=
   Maximum username length: 32
   Home directory base: /home
   Default shell: /bin/sh
   Default users group: ipausers
   Search time limit: 2
   Search size limit: 100
   User search fields: uid,givenname,sn,telephonenumber,ou,title
   Group search fields: cn,description
   Enable migration mode: FALSE
   Certificate Subject base: O=IDM.LAB.BOS.REDHAT.COM
   Password Expiration Notification (days): 4
   Password plugin features: AllowNThash
   SELinux user map order:
guest_u:s0$xguest_u:s0$user_u:s0-s0:c0.c1023$staff_u:s0-s0:c0.c1023$unconfined_u:s0-s0:c0.c1023
   Default SELinux user: guest_u:s0
   PAC type: MS-PAC

# ipa user-add --first=Foo --last=Bar fbar
ipa: ERROR: invalid 'email': invalid e-mail format: fbar

Martin



Rebased, issues addressed.

rob
From f396af6e3851673744e27e0ba7c4bba59b809c07 Mon Sep 17 00:00:00 2001
From: Rob Crittenden rcrit...@redhat.com
Date: Mon, 23 Jul 2012 14:00:51 -0400
Subject: [PATCH] Set the e-mail attribute using the default domain name by
 default.

https://fedorahosted.org/freeipa/ticket/2810
---
 ipalib/plugins/user.py  | 26 +++--
 tests/test_xmlrpc/test_attr.py  |  1 +
 tests/test_xmlrpc/test_automember_plugin.py |  2 ++
 tests/test_xmlrpc/test_group_plugin.py  |  2 ++
 tests/test_xmlrpc/test_krbtpolicy.py|  1 +
 tests/test_xmlrpc/test_nesting.py   |  4 
 tests/test_xmlrpc/test_netgroup_plugin.py   |  2 ++
 tests/test_xmlrpc/test_selinuxusermap_plugin.py |  1 +
 tests/test_xmlrpc/test_user_plugin.py   | 25 +++-
 9 files changed, 57 insertions(+), 7 deletions(-)

diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py
index bf25bc3c3155fc931050a0e2e5ba8a1603a39d98..3f0050917f96577ca5b21e7d6ba24ed39ad1bd2d 100644
--- a/ipalib/plugins/user.py
+++ b/ipalib/plugins/user.py
@@ -28,6 +28,7 @@ from ipalib.request import context
 from ipalib import _, ngettext
 from ipalib import output
 from ipapython.ipautil import ipa_generate_password
+from ipapython.ipavalidate import Email
 import posixpath
 from ipalib.util import validate_sshpubkey, output_sshpubkey
 if api.env.in_server and api.env.context in ['lite', 'server']:
@@ -367,19 +368,26 @@ class user(LDAPObject):
 ),
 )
 
-    def _normalize_email(self, email, config=None):
+    def _normalize_and_validate_email(self, email, config=None):
         if not config:
             config = self.backend.get_ipa_config()[1]
 
         # check if default email domain should be added
-        if email and 'ipadefaultemaildomain' in config:
+        defaultdomain = config.get('ipadefaultemaildomain', [None])[0]
+        if email:
             norm_email = []
             if not isinstance(email, (list, tuple)):
                 email = [email]
             for m in email:
-                if isinstance(m, basestring) and m.find('@') == -1:
-                    norm_email.append(m + u'@' + config['ipadefaultemaildomain'][0])
+                if isinstance(m, basestring):
+                    if '@' not in m and defaultdomain:
+                        m = m + u'@' + defaultdomain
+                    if not Email(m):
+                        raise errors.ValidationError(name='email', error=_('invalid e-mail format: %(email)s') % dict(email=m))
+                    norm_email.append(m)
                 else:
+                    if not Email(m):
+                        raise errors.ValidationError(name='email', error=_('invalid e-mail format: %(email)s') % dict(email=m))
  

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rob Crittenden

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication vector (RUV) on the
other servers.
This can confuse things if the replica is re-added, and it also
causes the
server to calculate changes against a server that may no longer
exist.

389-ds-base provides a new task that self-propogates itself to all
available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully
clean things up.

It isn't perfect. If any replica is down or unavailable at the time
the
cleanruv task fires, and then comes back up, the old RUV data
may be
re-propogated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the
servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a
replica id that
does exist it can cause a loss of data. I believe I've put in
enough scary
warnings about this.

rob
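
(For reference, the cleanallruv task itself is just an LDAP entry. A rough
python-ldap sketch of creating one by hand follows; the DN and attribute
names are recalled from the 389-ds docs, so treat them as assumptions rather
than the patch code.)

import ldap
import ldap.modlist

conn = ldap.initialize('ldap://localhost:389')
conn.simple_bind_s('cn=Directory Manager', 'password')

replica_id = '5'                # RID of the replica being removed (example)
suffix = 'dc=example,dc=com'    # replicated suffix (example)

dn = 'cn=clean %s,cn=cleanallruv,cn=tasks,cn=config' % replica_id
attrs = {
    'objectclass': ['top', 'extensibleObject'],
    'cn': ['clean %s' % replica_id],
    'replica-base-dn': [suffix],
    'replica-id': [replica_id],
}
# once added, the task propagates to the other masters on its own
conn.add_s(dn, ldap.modlist.addModlist(attrs))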



Good work there, this should make cleaning RUVs much easier than
with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite lost. I think
it would
help if we for example have all info for commands indented. This way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make
them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command in an
unattended way
(for better testing), i.e. respect --force option as we already
do for
ipa-replica-manage del. This fix would aid test automation in the
future.


4) (minor) The new question (and the del too) does not react too
well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter is quite
confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv |
force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the command
definition


6) When the remote replica is down, the clean_ruv command fails
with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping

tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping




tree,   cn=config) has not been cleaned.  You will need to rerun the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the command failed,
possibly
because of disconnected replicas and that they could enable the
replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing read-only but
+# this isn't a show-stopper
+root_logger.debug(No permission to switch replica to
read-only,
continuing anyway)
+pass



I think this addresses everything.

rob


Thanks, almost there! I just found one more issue which needs to be
fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
Directory Manager password:

Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing
removal
Failed to get data from 'vm-055.idm.lab.bos.redhat.com': {'desc':
Can't
contact LDAP server}
Forcing 

Re: [Freeipa-devel] [PATCH] 204 Update of confirmation of actions

2012-09-06 Thread Petr Vobornik

On 09/06/2012 11:24 AM, Petr Vobornik wrote:

On 09/05/2012 09:30 PM, Endi Sukma Dewata wrote:

On 9/3/2012 11:05 AM, Petr Vobornik wrote:

This patch is changing confirmation of actions according to ticket
#3035, see the ticket description.

It does following changes:
  * Confirmation of update action was removed.
  * Action lists resets to first action (which is usually a NOP: '--
select action --') on change of displayed entry.
  * New confirmation dialog was implemented. It is used for action
confirmation. It is used in IPA.action to replace the call of
window.confirm(message). The old call is a modal window which blocks all
JS functionality and has different style than other dialogs in Web UI.
The new one has same design and doesn't block background operations.

  https://fedorahosted.org/freeipa/ticket/3035


Some issues:

1. None of the confirmation dialogs have a default button. If you hit
Enter it will do nothing (except #2 below). All buttons are greyed out
until you hover with mouse or focus with Tab. Is this intentional? I
think usually a confirmation dialog would have a default button.

2. In the Users search facet the confirmation dialog doesn't show
default button either, but if you hit Enter it will execute the
operation. This is inconsistent with #1.



Root of the problem is that old custom dialogs which serves for
confirmation (batch delete, various certificate and unprovisioning
dialogs) or other dialogs (password change, adder dialogs) don't have
confirmation by enter. Only the new dialog has confirmation by Enter. At
the moment this new dialog is used only for mass enable/disable.

We might:
a) remove confirmation confirmation by pressing Enter key from the new
dialog to be consistent. IMO wrong.
b) add this confirmation too all dialogs
c) change old confirm dialogs to inherit from IPA.confirm_dialog to
receive this functionality

IMO, for most dialogs c) would be better but for some (highly
customized) b). It should be probably done in other patch, maybe it the
context of this ticket.

There is also: https://fedorahosted.org/freeipa/ticket/2910 ([Web UI]
Use Enter to confirm add dialogs)



Pushed to master, ipa 3-0

I agreed with Endi over IRC that this patch will be pushed to make to RC1.

Ticket will remain opened and moved to RC2 where combination of (b) and 
(c) will be addressed.


--
Petr Vobornik

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 83 Use OpenSSH-style public keys as the preferred format of SSH public keys

2012-09-06 Thread Jan Cholasta

On 5.9.2012 22:57, Rob Crittenden wrote:

Jan Cholasta wrote:

Hi,

this patch changes the format of the sshpubkey parameter to the format
used by OpenSSH (see sshd(8)).

Public keys in the old format (raw RFC 4253 blob) are automatically
converted to OpenSSH-style public keys. OpenSSH-style public keys are
now stored in LDAP.

Changed sshpubkeyfp to be an output parameter, as that is what it
actually is.

Allow parameter normalizers to be used on values of any type, not just
unicode, so that public key blobs (which are str) can be normalized to
OpenSSH-style public keys.

Note that you need a SSSD build including
https://fedorahosted.org/sssd/changeset/f130a609a840d4548c795ce5e63afb5891358e20/

(SSSD 1.9.0beta7-to-be) in order to make OpenSSH integration actually
work with OpenSSH-style public keys.

https://fedorahosted.org/freeipa/ticket/2932
https://fedorahosted.org/freeipa/ticket/2935

Honza


NACK.

I think a bunch of tests are needed for this.

Because you abstracted out the pubkey class it should be straightforward
to add a bunch of class-based unit tests on it.

There are also no user or host-based tests, either for adding or
managing keys.


Tests added.



I tested backwards compatibility with 2.2 and the initial tests are mixed.

I installed 2.2 and created a 3.0 clone from it, including your patch.


Do people actually do that in real deployments?



I added a user in 3.0 with a key and it added ok, but on the 2.2 side it
returns the entire base64 encoded blob of key type, key and comment,
which I presume is unusable. At least things don't blow up.


The format of ipasshpubkey in LDAP has changed, so there's not much I 
can do about this.




The reverse works fine. An old-style key added to 2.2 appears to work
fine in 3.0, we just lack a comment.

On the 2.2 server:

$ ipa user-show tuser1 --all | grep -i ssh
   Base-64 encoded SSH public key:
c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDNUQyRTI2dHU5YXM2cHhlUVlSdUgzelYyUDUzMjFpR1U5aC9XNElpd0tGSGlOc2p5cXFyemhCUFB3am83dGlYRDlHbUo1M25KS21OTGd0K01XUnFTZEx2R0V3NjM3SkVTWEpGL0VWeUxvZEFWRGltdXFRVkNLWjBRcm1kYjErRUg1VGRrd3ByOExyd0g1a0RzMEVpcGc2c0xoRUZ5NzMvaXNjRkJqcmk0NGxSU1BZNXFHTWFLOVE0cjY1WFEyaytlZ1RDQnBNZnc0b0J6Mzh0ZHVEVVE2bW9XNFhQSnhZeWJ3MGFDMnRUK2RBOU42WndFSFZXREUzdzg0bHRHa0JRZFRaKzViRnBFdlladm9PbkZXdDlNZFIzYVd6UklnY1o5VDlySDFFT2Z3eE5zWVRCLzRjTmg3dS9adGxnMVV0Z1VteWN3TkpMTUYrMTNzNTl2OFFpSFogcmNyaXRAZWRzZWwuZ3JleW9hay5jb20=

$ python
Python 2.7.3 (default, Jul 24 2012, 10:05:38)
[GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import base64
>>> s =
'c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDNUQyRTI2dHU5YXM2cHhlUVlSdUgzelYyUDUzMjFpR1U5aC9XNElpd0tGSGlOc2p5cXFyemhCUFB3am83dGlYRDlHbUo1M25KS21OTGd0K01XUnFTZEx2R0V3NjM3SkVTWEpGL0VWeUxvZEFWRGltdXFRVkNLWjBRcm1kYjErRUg1VGRrd3ByOExyd0g1a0RzMEVpcGc2c0xoRUZ5NzMvaXNjRkJqcmk0NGxSU1BZNXFHTWFLOVE0cjY1WFEyaytlZ1RDQnBNZnc0b0J6Mzh0ZHVEVVE2bW9XNFhQSnhZeWJ3MGFDMnRUK2RBOU42WndFSFZXREUzdzg0bHRHa0JRZFRaKzViRnBFdlladm9PbkZXdDlNZFIzYVd6UklnY1o5VDlySDFFT2Z3eE5zWVRCLzRjTmg3dS9adGxnMVV0Z1VteWN3TkpMTUYrMTNzNTl2OFFpSFogcmNyaXRAZWRzZWwuZ3JleW9hay5jb20='

>>> base64.b64decode(s)
'ssh-rsa
B3NzaC1yc2EDAQABAAABAQC5D2E26tu9as6pxeQYRuH3zV2P5321iGU9h/W4IiwKFHiNsjyqqrzhBPPwjo7tiXD9GmJ53nJKmNLgt+MWRqSdLvGEw637JESXJF/EVyLodAVDimuqQVCKZ0Qrmdb1+EH5Tdkwpr8LrwH5kDs0Eipg6sLhEFy73/iscFBjri44lRSPY5qGMaK9Q4r65XQ2k+egTCBpMfw4oBz38tduDUQ6moW4XPJxYybw0aC2tT+dA9N6ZwEHVWDE3w84ltGkBQdTZ+5bFpEvYZvoOnFWt9MdR3aWzRIgcZ9T9rH1EOfwxNsYTB/4cNh7u/Ztlg1UtgUmycwNJLMF+13s59v8QiHZ
rc...@edsel.greyoak.com'

Now show an old style key:

$ ipa user-show tuser2 --all | grep -i ssh
   Base-64 encoded SSH public key:
B3NzaC1yc2EDAQABAAABAQCbRLyizFGyfucNRnHpWdUG8dBD7W2PfvTQ42k+LmAdUFudTytO89oTRXcVEYMDL42OyRth12JRMUjYTEmFwo9a9Mb7cP8+bo7N2lV4iCB0CUybcZARF0MV6NeYhhWlC9DV40nkqs3Goe8X8tMPXn/HZn8Rz33703w8K/G6STnN0txhAT4tY7D3e0DA9UY87wNnpJ7dXoJqMXRv2dRgmUnGih/8cLHypyxBoLoL8qR9cWxAf/Cs+qQmsk15lzIGQUAJwwXBBjbnXKwykEeHjTHsvjd7zzC1cWtz5Zz/8aop7AsVwaBqb9u+5dVOMxdzLGD24NKTjhtG86ADU4Mpnlb5


rob


Updated patch attached.

Honza

--
Jan Cholasta
From d5b215bd318abdb0d32653508ddc82006d7e6c9a Mon Sep 17 00:00:00 2001
From: Jan Cholasta jchol...@redhat.com
Date: Mon, 3 Sep 2012 09:33:30 -0400
Subject: [PATCH] Use OpenSSH-style public keys as the preferred format of SSH
 public keys.

Public keys in the old format (raw RFC 4253 blob) are automatically converted
to OpenSSH-style public keys. OpenSSH-style public keys are now stored in LDAP.

Changed sshpubkeyfp to be an output parameter, as that is what it actually is.

Allow parameter normalizers to be used on values of any type, not just unicode,
so that public key blobs (which are str) can be normalized to OpenSSH-style
public keys.

ticket 2932, 2935
---
 API.txt   |   8 +-
 VERSION   |   2 +-
 ipa-client/ipa-install/ipa-client-install |  29 ++---
 ipalib/parameters.py

Re: [Freeipa-devel] [PATCH] 303 Add range safety check for range_mod and range_del

2012-09-06 Thread Sumit Bose
On Wed, Sep 05, 2012 at 05:13:41PM +0200, Martin Kosek wrote:
 range_mod and range_del command could easily create objects with
 ID which is suddenly out of specified range. This could cause issues
 in trust scenarios where range objects are used for computation of
 remote IDs.
 
 Add validator for both commands to check if there is any object with
 ID in the range which would become out-of-range as a pre_callback.
 Also add unit tests testing this new validator.
 
 https://fedorahosted.org/freeipa/ticket/2919
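
The check is essentially an interval comparison; roughly (illustrative logic
only, not the actual pre_callback):

def check_ids_in_modified_range(old_base, old_size, new_base, new_size, used_ids):
    # Refuse the change if any currently assigned ID inside the old range
    # would end up outside the new one.
    old_interval = set(range(old_base, old_base + old_size))
    new_interval = set(range(new_base, new_base + new_size))
    stranded = sorted(i for i in used_ids
                      if i in old_interval and i not in new_interval)
    if stranded:
        raise ValueError('IDs %s would fall outside the modified range'
                         % stranded)

check_ids_in_modified_range(1000, 200000, 1000, 200000, [1500])  # passes
# check_ids_in_modified_range(1000, 200000, 1000, 100, [1500])   # would raise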

ACK

bye,
Sumit

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rob Crittenden

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication vector (RUV) on the
other servers.
This can confuse things if the replica is re-added, and it also
causes the
server to calculate changes against a server that may no longer
exist.

389-ds-base provides a new task that self-propogates itself to all
available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully
clean things up.

It isn't perfect. If any replica is down or unavailable at the
time
the
cleanruv task fires, and then comes back up, the old RUV data
may be
re-propogated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the
servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a
replica id that
does exist it can cause a loss of data. I believe I've put in
enough scary
warnings about this.

rob



Good work there, this should make cleaning RUVs much easier than
with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite lost. I
think
it would
help if we for example have all info for commands indented. This
way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make
them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command in an
unattended way
(for better testing), i.e. respect --force option as we already
do for
ipa-replica-manage del. This fix would aid test automation in the
future.


4) (minor) The new question (and the del too) does not react too
well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter is quite
confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv |
force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command fails
with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping





tree,   cn=config) has not been cleaned.  You will need to rerun
the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the command failed,
possibly
because of disconnected replicas and that they could enable the
replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing read-only
but
+# this isn't a show-stopper
+root_logger.debug(No permission to switch replica to
read-only,
continuing anyway)
+pass



I think this addresses everything.

rob


Thanks, almost there! I just found one more issue which needs to be
fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
Directory Manager password:

Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing
removal
Failed to get data from 'vm-055.idm.lab.bos.redhat.com': {'desc':
Can't
contact 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Martin Kosek
On 09/06/2012 05:55 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 09/05/2012 08:06 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/03/2012 04:41 PM, Rob Crittenden wrote:
 Deleting a replica can leave a replication vector (RUV) on the
 other servers.
 This can confuse things if the replica is re-added, and it also
 causes the
 server to calculate changes against a server that may no longer
 exist.

 389-ds-base provides a new task that self-propogates itself to all
 available
 replicas to clean this RUV data.

 This patch will create this task at deletion time to hopefully
 clean things up.

 It isn't perfect. If any replica is down or unavailable at the
 time
 the
 cleanruv task fires, and then comes back up, the old RUV data
 may be
 re-propogated around.

 To make things easier in this case I've added two new commands to
 ipa-replica-manage. The first lists the replication ids of all the
 servers we
 have a RUV for. Using this you can call clean_ruv with the
 replication id of a
 server that no longer exists to try the cleanallruv step again.

 This is quite dangerous though. If you run cleanruv against a
 replica id that
 does exist it can cause a loss of data. I believe I've put in
 enough scary
 warnings about this.

 rob


 Good work there, this should make cleaning RUVs much easier than
 with the
 previous version.

 This is what I found during review:

 1) list_ruv and clean_ruv command help in man is quite lost. I
 think
 it would
 help if we for example have all info for commands indented. This
 way
 user could
 simply over-look the new commands in the man page.


 2) I would rename new commands to clean-ruv and list-ruv to make
 them
 consistent with the rest of the commands (re-initialize,
 force-sync).


 3) It would be nice to be able to run clean_ruv command in an
 unattended way
 (for better testing), i.e. respect --force option as we already
 do for
 ipa-replica-manage del. This fix would aid test automation in the
 future.


 4) (minor) The new question (and the del too) does not react too
 well for
 CTRL+D:

 # ipa-replica-manage clean_ruv 3 --force
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: unexpected error:


 5) Help for clean_ruv command without a required parameter is quite
 confusing
 as it reports that command is wrong and not the parameter:

 # ipa-replica-manage clean_ruv
 Usage: ipa-replica-manage [options]

 ipa-replica-manage: error: must provide a command [clean_ruv |
 force-sync |
 disconnect | connect | del | re-initialize | list | list_ruv]

 It seems you just forgot to specify the error message in the
 command
 definition


 6) When the remote replica is down, the clean_ruv command fails
 with an
 unexpected error:

 [root@vm-086 ~]# ipa-replica-manage clean_ruv 5
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: y
 unexpected error: {'desc': 'Operations error'}


 /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: failed
 to connect to replagreement connection
 (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

 cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


 tree,cn=config), error 105
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: replica
 (cn=meTovm-055.idm.lab.
 bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping






 tree,   cn=config) has not been cleaned.  You will need to rerun
 the
 CLEANALLRUV task on this replica.
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: Task
 failed (1)

 In this case I think we should inform user that the command failed,
 possibly
 because of disconnected replicas and that they could enable the
 replicas and
 try again.


 7) (minor) pass is now redundant in replication.py:
 +except ldap.INSUFFICIENT_ACCESS:
 +# We can't make the server we're removing read-only
 but
 +# this isn't a show-stopper
 +root_logger.debug(No permission to switch replica to
 read-only,
 continuing anyway)
 +pass


 I think this addresses everything.

 rob

 Thanks, almost there! I just found one more issue which needs to be
 fixed
 before we push:

 # ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
 Directory Manager 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Martin Kosek
On 09/06/2012 06:05 PM, Martin Kosek wrote:
 On 09/06/2012 05:55 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 09/05/2012 08:06 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/03/2012 04:41 PM, Rob Crittenden wrote:
 Deleting a replica can leave a replication vector (RUV) on the
 other servers.
 This can confuse things if the replica is re-added, and it also
 causes the
 server to calculate changes against a server that may no longer
 exist.

 389-ds-base provides a new task that self-propogates itself to all
 available
 replicas to clean this RUV data.

 This patch will create this task at deletion time to hopefully
 clean things up.

 It isn't perfect. If any replica is down or unavailable at the
 time
 the
 cleanruv task fires, and then comes back up, the old RUV data
 may be
 re-propogated around.

 To make things easier in this case I've added two new commands to
 ipa-replica-manage. The first lists the replication ids of all the
 servers we
 have a RUV for. Using this you can call clean_ruv with the
 replication id of a
 server that no longer exists to try the cleanallruv step again.

 This is quite dangerous though. If you run cleanruv against a
 replica id that
 does exist it can cause a loss of data. I believe I've put in
 enough scary
 warnings about this.

 rob


 Good work there, this should make cleaning RUVs much easier than
 with the
 previous version.

 This is what I found during review:

 1) list_ruv and clean_ruv command help in man is quite lost. I
 think
 it would
 help if we for example have all info for commands indented. This
 way
 user could
 simply over-look the new commands in the man page.


 2) I would rename new commands to clean-ruv and list-ruv to make
 them
 consistent with the rest of the commands (re-initialize,
 force-sync).


 3) It would be nice to be able to run clean_ruv command in an
 unattended way
 (for better testing), i.e. respect --force option as we already
 do for
 ipa-replica-manage del. This fix would aid test automation in the
 future.


 4) (minor) The new question (and the del too) does not react too
 well for
 CTRL+D:

 # ipa-replica-manage clean_ruv 3 --force
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: unexpected error:


 5) Help for clean_ruv command without a required parameter is quite
 confusing
 as it reports that command is wrong and not the parameter:

 # ipa-replica-manage clean_ruv
 Usage: ipa-replica-manage [options]

 ipa-replica-manage: error: must provide a command [clean_ruv |
 force-sync |
 disconnect | connect | del | re-initialize | list | list_ruv]

 It seems you just forgot to specify the error message in the
 command
 definition


 6) When the remote replica is down, the clean_ruv command fails
 with an
 unexpected error:

 [root@vm-086 ~]# ipa-replica-manage clean_ruv 5
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: y
 unexpected error: {'desc': 'Operations error'}


 /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: failed
 to connect to replagreement connection
 (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

 cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


 tree,cn=config), error 105
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: replica
 (cn=meTovm-055.idm.lab.
 bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping






 tree,   cn=config) has not been cleaned.  You will need to rerun
 the
 CLEANALLRUV task on this replica.
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: Task
 failed (1)

 In this case I think we should inform user that the command failed,
 possibly
 because of disconnected replicas and that they could enable the
 replicas and
 try again.


 7) (minor) pass is now redundant in replication.py:
 +except ldap.INSUFFICIENT_ACCESS:
 +# We can't make the server we're removing read-only
 but
 +# this isn't a show-stopper
 +root_logger.debug(No permission to switch replica to
 read-only,
 continuing anyway)
 +pass


 I think this addresses everything.

 rob

 Thanks, almost there! I just found one more issue which needs to be
 fixed
 before we push:

 # ipa-replica-manage del 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Martin Kosek
On 09/06/2012 06:09 PM, Martin Kosek wrote:
 On 09/06/2012 06:05 PM, Martin Kosek wrote:
 On 09/06/2012 05:55 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 09/05/2012 08:06 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/03/2012 04:41 PM, Rob Crittenden wrote:
 Deleting a replica can leave a replication vector (RUV) on the
 other servers.
 This can confuse things if the replica is re-added, and it also
 causes the
 server to calculate changes against a server that may no longer
 exist.

 389-ds-base provides a new task that self-propogates itself to all
 available
 replicas to clean this RUV data.

 This patch will create this task at deletion time to hopefully
 clean things up.

 It isn't perfect. If any replica is down or unavailable at the
 time
 the
 cleanruv task fires, and then comes back up, the old RUV data
 may be
 re-propogated around.

 To make things easier in this case I've added two new commands to
 ipa-replica-manage. The first lists the replication ids of all the
 servers we
 have a RUV for. Using this you can call clean_ruv with the
 replication id of a
 server that no longer exists to try the cleanallruv step again.

 This is quite dangerous though. If you run cleanruv against a
 replica id that
 does exist it can cause a loss of data. I believe I've put in
 enough scary
 warnings about this.

 rob


 Good work there, this should make cleaning RUVs much easier than
 with the
 previous version.

 This is what I found during review:

 1) list_ruv and clean_ruv command help in man is quite lost. I
 think
 it would
 help if we for example have all info for commands indented. This
 way
 user could
 simply over-look the new commands in the man page.


 2) I would rename new commands to clean-ruv and list-ruv to make
 them
 consistent with the rest of the commands (re-initialize,
 force-sync).


 3) It would be nice to be able to run clean_ruv command in an
 unattended way
 (for better testing), i.e. respect --force option as we already
 do for
 ipa-replica-manage del. This fix would aid test automation in the
 future.


 4) (minor) The new question (and the del too) does not react too
 well for
 CTRL+D:

 # ipa-replica-manage clean_ruv 3 --force
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: unexpected error:


 5) Help for clean_ruv command without a required parameter is quite
 confusing
 as it reports that command is wrong and not the parameter:

 # ipa-replica-manage clean_ruv
 Usage: ipa-replica-manage [options]

 ipa-replica-manage: error: must provide a command [clean_ruv |
 force-sync |
 disconnect | connect | del | re-initialize | list | list_ruv]

 It seems you just forgot to specify the error message in the
 command
 definition


 6) When the remote replica is down, the clean_ruv command fails
 with an
 unexpected error:

 [root@vm-086 ~]# ipa-replica-manage clean_ruv 5
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: y
 unexpected error: {'desc': 'Operations error'}


 /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: failed
 to connect to replagreement connection
 (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

 cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


 tree,cn=config), error 105
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: replica
 (cn=meTovm-055.idm.lab.
 bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping






 tree,   cn=config) has not been cleaned.  You will need to rerun
 the
 CLEANALLRUV task on this replica.
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: Task
 failed (1)

 In this case I think we should inform user that the command failed,
 possibly
 because of disconnected replicas and that they could enable the
 replicas and
 try again.


 7) (minor) pass is now redundant in replication.py:
 +except ldap.INSUFFICIENT_ACCESS:
 +# We can't make the server we're removing read-only
 but
 +# this isn't a show-stopper
 +root_logger.debug(No permission to switch replica to
 read-only,
 continuing anyway)
 +pass


 I think this addresses everything.

 rob

 Thanks, almost there! I just found one more issue which needs to be
 fixed
 before we 

Re: [Freeipa-devel] [PATCH 0013] Remove user-unfriendly u character from error messages

2012-09-06 Thread Petr Viktorin

On 09/05/2012 04:35 PM, Tomas Babej wrote:

On 09/05/2012 03:42 PM, Petr Viktorin wrote:

On 09/05/2012 03:19 PM, Tomas Babej wrote:

Hi,

User-unfriendly errors were caused by re-raising errors
from external python module netaddr.

https://fedorahosted.org/freeipa/ticket/2588

Tomas


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel



I don't agree with this approach. Raising another module's errors in
our code is ugly, and will break if netaddr changes. The arguments to
pass to the exceptions are undocumented (see
http://packages.python.org/netaddr/api.html#custom-exceptions). The
wording of error messages in libraries can usually change at any time,
and is intended for developers, not end users.

This should be either fixed upstream (unlikely, using the repr() of
the argument is a sane thing to do at their side), or we should pass
bytestrings to netaddr (a possible quick fix, not sure if it'll work),
or, ideally, we should raise IPA's own errors.


Well, this particular fix wouldn't have broken anything, since it was
re-raising the same error type that the surrounding except clause
caught. However, I changed this to StandardError, since the error
message is extracted and packed into a ValidationError during further
validation, so a simple format message is suitable.


I know this is a minor issue and unlikely to cause problems, but it 
still should be fixed properly.


The patch assumes AddrFormatError takes only one argument, the message. 
In another case something like this might be a reasonable assumption, 
but having a prettier error message doesn't justify it.
Taking free-form text from a library and fixing it up like this is also 
not maintainable. Again, it assumes too much about the library.


I won't ack this approach. Please consult someone else if you think it 
really is the best way.


Adding `addr = str(addr)` would work around the issue without 
introducing assumptions about an external library.
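
Combining that with raising our own error would look roughly like this (a
sketch only, with the CheckedIPAddress internals left out):

import netaddr

def parse_ip(addr):
    # Coerce to str first so netaddr's repr()-based messages never leak a
    # u'...' prefix, and translate its exception into our own error.
    addr = str(addr)
    try:
        return netaddr.IPAddress(addr)
    except (netaddr.AddrFormatError, ValueError):
        raise ValueError("invalid IP address format: %s" % addr)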




Some technical issues with your patch, in case my ideology is 
incompatible with the project:


ValueError would be more appropriate than StandardError. We already 
raise it in similar situations in this method.


There is a case where your fix doesn't work: 
CheckedIPAddress(u'percent%sign').


Please adjust the test in test_dns_plugin that checks the error message.

--
Petr³

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rich Megginson

On 09/06/2012 10:09 AM, Martin Kosek wrote:

On 09/06/2012 06:09 PM, Martin Kosek wrote:

On 09/06/2012 06:05 PM, Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication vector (RUV) on the
other servers.
This can confuse things if the replica is re-added, and it also
causes the
server to calculate changes against a server that may no longer
exist.

389-ds-base provides a new task that self-propogates itself to all
available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully
clean things up.

It isn't perfect. If any replica is down or unavailable at the
time
the
cleanruv task fires, and then comes back up, the old RUV data
may be
re-propogated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the
servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a
replica id that
does exist it can cause a loss of data. I believe I've put in
enough scary
warnings about this.

rob


Good work there, this should make cleaning RUVs much easier than
with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite lost. I
think
it would
help if we for example have all info for commands indented. This
way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make
them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command in an
unattended way
(for better testing), i.e. respect --force option as we already
do for
ipa-replica-manage del. This fix would aid test automation in the
future.


4) (minor) The new question (and the del too) does not react too
well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter is quite
confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv |
force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command fails
with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping






tree,   cn=config) has not been cleaned.  You will need to rerun
the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the command failed,
possibly
because of disconnected replicas and that they could enable the
replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing read-only
but
+# this isn't a show-stopper
+root_logger.debug(No permission to switch replica to
read-only,
continuing anyway)
+pass


I think this addresses everything.

rob

Thanks, almost there! I just found one more issue which needs to be
fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force

Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-09-06 Thread Martin Kosek
On 09/04/2012 04:40 PM, Rich Megginson wrote:
 On 09/03/2012 08:42 AM, Martin Kosek wrote:
 On 08/27/2012 06:29 PM, Rich Megginson wrote:
 On 08/27/2012 10:24 AM, Martin Kosek wrote:
 On 08/17/2012 04:00 PM, Rich Megginson wrote:
 On 08/17/2012 07:44 AM, Martin Kosek wrote:
 Hi guys,

 I am now investigating ticket #2866:
 https://fedorahosted.org/freeipa/ticket/2866

 And I am thinking about possible solutions for this problem. In a
 nutshell, we do not properly check referential integrity in some IPA
 objects where we keep one-way DN references to other objects, e.g. in
 - managedBy attribute for a host object
 - memberhost attribute for HBAC rule object
 - memberuser attribute for user object
 - memberallowcmd or memberdenycmd for SUDO command object (reported in
 #2866)
 ...

 Currently, I see 2 approaches to solve this:
 1) Add relevant checks to our ipalib plugins where problematic
 operations with these operations are being executed (like we do for
 selinuxusermap's seealso attribute in HBAC plugin)
 This of course would not prevent direct LDAP deletes.

 2) Implement a preop DS plugin that would hook into MODRDN and DELETE
 callbacks and check that the object's DN is not referenced by other
 objects. If it is, the plugin would reject the modification. A second
 option would be to delete the attribute value holding the now-invalid
 reference. That would probably be more suitable, for example, for
 references to user objects.

 Any comments to these possible approaches are welcome.

 Rich, do you think that as an alternative to these 2 approaches,
 memberOf plugin could be eventually modified to do this task?
 This is very similar to the referential integrity plugin already in 389,
 except
 instead of cleaning up references to moved and deleted entries, you want
 it to
 prevent moving or deleting an entry if that entry is referenced by the
 managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some other
 entry.
 I think that using or enhancing current DS referential integrity plugin
 will be
 the best and the most consistent way to go.

 We already use that plugin for some user attributes like manager or
 secretary. seeAlso is already covered by default, so for example seeAlso
 attribute in SELinux usermap object referencing an HBAC rule will get 
 removed
 when relevant HBAC rule is removed (I just checked that).

 Note that the managed entry plugin (mep) already handles this for the
 managedby
 attribute.
 I assume you are referencing mepmanagedby and mepmanagedentry 
 attributes
 which then produce errors like this one:

 # ipa netgroup-del foo
 ipa: ERROR: Server is unwilling to perform: Deleting a managed entry is not
 allowed. It needs to be manually unlinked first.

 The managedBy attribute used by host objects, which I had in mind, does not
 seem to be covered.

 But you are right, this is pretty much what I wanted. Though in the case of
 MEP there is a link in both referenced objects, while in our case we have
 just a one-way link.

 Are you already using the memberof plugin for
 memberhost/memberuser/memberallowcmd/memberdenycmd?

 This doesn't seem like a job for memberof, this seems like more of a new
 check
 for the referential integrity plugin.

 I am now considering whether the move/delete cleanup already present in the
 Referential Integrity plugin would be sufficient for us.

 Rich, please correct me if I am wrong, but in that case, we would just 
 need to
 add relevant attribute names
 (memberhost/memberuser/memberallowcmd/memberdenycmd...) to Referential
 Integrity plugin configuration as nsslapd-pluginarg7, nsslapd-pluginarg8, 
 ...
 I wonder if there would be some performance issues if we add attributes to 
 the
 list this way.
 No, not if they are indexed for presence and equality.

 Hello Rich,
 I am back to investigating this ticket. In order to be able to deliver a
 working solution for IPA 3.0, I plan to take advantage of the current
 Referential Integrity Plugin to clean up dangling references.

 This is the plan I intend to follow:
 1) Add pres,eq indexes for all un-indexed attributes that we want to check:
 sourcehost
 memberservice
 managedby
 memberallowcmd
 memberdenycmd
 ipasudorunas
 ipasudorunasgroup
 ok

 2) Add missing pres index for attributes we want to check, but only have eq
 index:
 manager
 secretary
 memberuser
 memberhost

 I assume this step is also needed in order to maintain server performance.
 yes

 3) Add all these attributes to the Referential Integrity Plugin attribute list
 if not already there (see the sketch below)
 ok

 4) Also add Index task (nsIndexAttribute) for all these new indexes so that
 they are created during IPA server upgrade.
 ok

 Is this procedure OK DS-wise?
 Yes
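
In python-ldap terms, steps 1)-3) for one attribute would look roughly like
this (the backend name "userRoot" and the free plugin argument slot are
assumptions, not checked against an actual instance):

import ldap

conn = ldap.initialize('ldap://localhost:389')
conn.simple_bind_s('cn=Directory Manager', 'password')

attr = 'managedby'

# 1)/2) presence + equality index for the attribute
idx_dn = ('cn=%s,cn=index,cn=userRoot,'
          'cn=ldbm database,cn=plugins,cn=config' % attr)
conn.add_s(idx_dn, [
    ('objectclass', ['top', 'nsIndex']),
    ('cn', [attr]),
    ('nsSystemIndex', ['false']),
    ('nsIndexType', ['eq', 'pres']),
])

# 3) register the attribute with the Referential Integrity plugin;
# existing entries would still need the index task from step 4)
ri_dn = 'cn=referential integrity postoperation,cn=plugins,cn=config'
conn.modify_s(ri_dn, [(ldap.MOD_ADD, 'nsslapd-pluginarg7', [attr])])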

 I also have a question regarding the following note in the RHDS doc, chapter
 3.6 Maintaining Referential Integrity:

 The Referential Integrity Plug-in should only be enabled on one supplier
 replica in a multi-master replication environment to avoid conflict 
 resolution
 loops...

 Currently, we enable this plugin on all 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Martin Kosek
On 09/06/2012 06:13 PM, Rich Megginson wrote:
 On 09/06/2012 10:09 AM, Martin Kosek wrote:
 On 09/06/2012 06:09 PM, Martin Kosek wrote:
 On 09/06/2012 06:05 PM, Martin Kosek wrote:
 On 09/06/2012 05:55 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 09/05/2012 08:06 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
 On 07/03/2012 04:41 PM, Rob Crittenden wrote:
 Deleting a replica can leave a replication vector (RUV) on the
 other servers.
 This can confuse things if the replica is re-added, and it also
 causes the
 server to calculate changes against a server that may no longer
 exist.

 389-ds-base provides a new task that self-propogates itself to 
 all
 available
 replicas to clean this RUV data.

 This patch will create this task at deletion time to hopefully
 clean things up.

 It isn't perfect. If any replica is down or unavailable at the
 time
 the
 cleanruv task fires, and then comes back up, the old RUV data
 may be
 re-propogated around.

 To make things easier in this case I've added two new commands to
 ipa-replica-manage. The first lists the replication ids of all 
 the
 servers we
 have a RUV for. Using this you can call clean_ruv with the
 replication id of a
 server that no longer exists to try the cleanallruv step again.

 This is quite dangerous though. If you run cleanruv against a
 replica id that
 does exist it can cause a loss of data. I believe I've put in
 enough scary
 warnings about this.

 rob

 Good work there, this should make cleaning RUVs much easier than
 with the
 previous version.

 This is what I found during review:

 1) list_ruv and clean_ruv command help in man is quite lost. I
 think
 it would
 help if we for example have all info for commands indented. This
 way
 user could
 simply over-look the new commands in the man page.


 2) I would rename new commands to clean-ruv and list-ruv to make
 them
 consistent with the rest of the commands (re-initialize,
 force-sync).


 3) It would be nice to be able to run clean_ruv command in an
 unattended way
 (for better testing), i.e. respect --force option as we already
 do for
 ipa-replica-manage del. This fix would aid test automation in the
 future.


 4) (minor) The new question (and the del too) does not react too
 well for
 CTRL+D:

 # ipa-replica-manage clean_ruv 3 --force
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: unexpected error:


 5) Help for clean_ruv command without a required parameter is 
 quite
 confusing
 as it reports that command is wrong and not the parameter:

 # ipa-replica-manage clean_ruv
 Usage: ipa-replica-manage [options]

 ipa-replica-manage: error: must provide a command [clean_ruv |
 force-sync |
 disconnect | connect | del | re-initialize | list | list_ruv]

 It seems you just forgot to specify the error message in the
 command
 definition


 6) When the remote replica is down, the clean_ruv command fails
 with an
 unexpected error:

 [root@vm-086 ~]# ipa-replica-manage clean_ruv 5
 Clean the Replication Update Vector for
 vm-055.idm.lab.bos.redhat.com:389

 Cleaning the wrong replica ID will cause that server to no
 longer replicate so it may miss updates while the process
 is running. It would need to be re-initialized to maintain
 consistency. Be very careful.
 Continue to clean? [no]: y
 unexpected error: {'desc': 'Operations error'}


 /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: failed
 to connect to replagreement connection
 (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

 cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


 tree,cn=config), error 105
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: replica
 (cn=meTovm-055.idm.lab.
 bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping







 tree,   cn=config) has not been cleaned.  You will need to rerun
 the
 CLEANALLRUV task on this replica.
 [04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
 cleanAllRUV_task: Task
 failed (1)

 In this case I think we should inform user that the command 
 failed,
 possibly
 because of disconnected replicas and that they could enable the
 replicas and
 try again.


 7) (minor) pass is now redundant in replication.py:
 +except ldap.INSUFFICIENT_ACCESS:
 +# We can't make the server we're removing read-only
 but
 +# this isn't a show-stopper
 +root_logger.debug(No permission to switch replica to
 read-only,
 continuing anyway)
 +pass

 I think this addresses 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Mark Reynolds



On 09/06/2012 12:13 PM, Rich Megginson wrote:

On 09/06/2012 10:09 AM, Martin Kosek wrote:

On 09/06/2012 06:09 PM, Martin Kosek wrote:

On 09/06/2012 06:05 PM, Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:
Deleting a replica can leave a replication vector (RUV) 
on the

other servers.
This can confuse things if the replica is re-added, and 
it also

causes the
server to calculate changes against a server that may no 
longer

exist.

389-ds-base provides a new task that self-propogates 
itself to all

available
replicas to clean this RUV data.

This patch will create this task at deletion time to 
hopefully

clean things up.

It isn't perfect. If any replica is down or unavailable 
at the

time
the
cleanruv task fires, and then comes back up, the old RUV 
data

may be
re-propogated around.

To make things easier in this case I've added two new 
commands to
ipa-replica-manage. The first lists the replication ids 
of all the

servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step 
again.


This is quite dangerous though. If you run cleanruv 
against a

replica id that
does exist it can cause a loss of data. I believe I've 
put in

enough scary
warnings about this.

rob

Good work there, this should make cleaning RUVs much 
easier than

with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite 
lost. I

think
it would
help if we for example have all info for commands 
indented. This

way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv 
to make

them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command in an
unattended way
(for better testing), i.e. respect --force option as we 
already

do for
ipa-replica-manage del. This fix would aid test automation 
in the

future.


4) (minor) The new question (and the del too) does not 
react too

well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter 
is quite

confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command 
[clean_ruv |

force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command 
fails

with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping 




tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping 








tree,   cn=config) has not been cleaned.  You will need to 
rerun

the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the 
command failed,

possibly
because of disconnected replicas and that they could 
enable the

replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing 
read-only

but
+# this isn't a show-stopper
+root_logger.debug(No permission to switch 
replica to

read-only,
continuing anyway)
+pass


I think this addresses everything.

rob
Thanks, almost there! I just found one more issue which 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rich Megginson


Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Mark Reynolds




Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-09-06 Thread Nathan Kinder

On 09/06/2012 09:28 AM, Martin Kosek wrote:

On 09/04/2012 04:40 PM, Rich Megginson wrote:

On 09/03/2012 08:42 AM, Martin Kosek wrote:

On 08/27/2012 06:29 PM, Rich Megginson wrote:

On 08/27/2012 10:24 AM, Martin Kosek wrote:

On 08/17/2012 04:00 PM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object (reported in
#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations with these operations are being executed (like we do for
selinuxusermap's seealso attribute in HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook into MODRDN and DELETE
callbacks and check that the object's DN is not referenced by other
objects; if it is, the modification would be rejected. A second option
would be to delete the attribute value holding the now-invalid
reference. That would probably be more suitable, for example, for
references to user objects.

Any comments to these possible approaches are welcome.
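As an illustration of approach 1), the check essentially boils down to a
search for rules that still reference the DN being deleted. A rough
python-ldap sketch for the sudo command case (container DN, attribute names
and error handling are simplified placeholders, not the eventual IPA
implementation):

    import ldap

    def assert_sudocmd_not_referenced(conn, suffix, cmd_dn):
        """Refuse to delete a sudo command that a rule still points to.
        Illustration only; DNs and error handling are simplified."""
        rules_base = "cn=sudorules,cn=sudo,%s" % suffix
        flt = "(|(memberAllowCmd=%s)(memberDenyCmd=%s))" % (cmd_dn, cmd_dn)
        refs = conn.search_s(rules_base, ldap.SCOPE_SUBTREE, flt, ["cn"])
        if refs:
            raise ValueError("%s is still referenced by: %s"
                             % (cmd_dn, ", ".join(dn for dn, attrs in refs)))

    # e.g. assert_sudocmd_not_referenced(conn, "dc=example,dc=com",
    #          "ipaUniqueID=...,cn=sudocmds,cn=sudo,dc=example,dc=com")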

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in 389,
except
instead of cleaning up references to moved and deleted entries, you want
it to
prevent moving or deleting an entry if that entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some other
entry.

I think that using or enhancing the current DS referential integrity plugin
will be the best and most consistent way to go.

We already use that plugin for some user attributes like manager or
secretary. seeAlso is already covered by default, so, for example, a seeAlso
attribute in an SELinux usermap object referencing an HBAC rule will get
removed when the relevant HBAC rule is removed (I just checked that).


Note that the managed entry plugin (mep) already handles this for the
managedby
attribute.

I assume you are referencing mepmanagedby and mepmanagedentry attributes
which then produce errors like this one:

# ipa netgroup-del foo
ipa: ERROR: Server is unwilling to perform: Deleting a managed entry is not
allowed. It needs to be manually unlinked first.

The managedBy attribute used by host objects, which I had in mind, seems not
to be covered.

But you are right, this is pretty much what I wanted. In the case of MEP,
though, there is a link in both referenced objects, while in our case we have
just a one-way link.


Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof; it seems more like a new check
for the referential integrity plugin.


I am now considering whether the move/delete cleanup already present in the
Referential Integrity plugin would be sufficient for us.

Rich, please correct me if I am wrong, but in that case we would just need to
add the relevant attribute names
(memberhost/memberuser/memberallowcmd/memberdenycmd...) to the Referential
Integrity plugin configuration as nsslapd-pluginarg7, nsslapd-pluginarg8, ...
I wonder if there would be some performance issues if we add attributes to the
list this way.

No, not if they are indexed for presence and equality.


Hello Rich,
I am back to investigating this ticket. In order to be able to deliver a
working solution in IPA 3.0, I plan to take advantage of the current
Referential Integrity plugin to clean up dangling references.

This is the approach I plan to take:
1) Add pres,eq indexes for all un-indexed attributes that we want to check:
sourcehost
memberservice
managedby
memberallowcmd
memberdenycmd
ipasudorunas
ipasudorunasgroup

ok

2) Add the missing pres index for attributes that we want to check but that
currently only have an eq index:
manager
secretary
memberuser
memberhost

I assume this step is also needed in order to maintain server performance.

yes

3) Add all these attributes to the Referential Integrity plugin attribute
list, if not already present

ok

4) Also add an index task (nsIndexAttribute) for all these new indexes so that
they are built during IPA server upgrade.

ok

Is this procedure OK DS-wise?

Yes
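For reference, steps 1) to 4) above map to plain configuration entries in
389-ds. A rough python-ldap sketch for a single attribute, assuming the
userRoot backend, a Directory Manager bind, memberallowcmd as the example
attribute and nsslapd-pluginarg7 as the next free slot in the plugin entry
(all placeholders, not the final IPA update file):

    import ldap
    import ldap.modlist

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "Secret123")

    # 1)/2) define an eq+pres index for the attribute
    index_dn = ("cn=memberallowcmd,cn=index,cn=userRoot,"
                "cn=ldbm database,cn=plugins,cn=config")
    index_attrs = {
        "objectClass": ["top", "nsIndex"],
        "cn": ["memberallowcmd"],
        "nsSystemIndex": ["false"],
        "nsIndexType": ["eq", "pres"],
    }
    conn.add_s(index_dn, ldap.modlist.addModlist(index_attrs))

    # 3) let the Referential Integrity plugin watch the attribute
    ri_dn = "cn=referential integrity postoperation,cn=plugins,cn=config"
    conn.modify_s(ri_dn,
                  [(ldap.MOD_ADD, "nsslapd-pluginarg7", "memberallowcmd")])

    # 4) index task so the new index is actually built over existing data
    task_dn = "cn=index-memberallowcmd,cn=index,cn=tasks,cn=config"
    task_attrs = {
        "objectClass": ["top", "extensibleObject"],
        "cn": ["index-memberallowcmd"],
        "nsInstance": ["userRoot"],
        "nsIndexAttribute": ["memberallowcmd"],
    }
    conn.add_s(task_dn, ldap.modlist.addModlist(task_attrs))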

I also have a question regarding the following note in the RHDS documentation,
chapter 3.6, Maintaining Referential Integrity:

The Referential Integrity Plug-in should only be enabled on one supplier
replica in a multi-master replication environment to avoid conflict resolution
loops...

Currently, we enable this plugin on all IPA replicas. Is this something we need
to be concerned about 

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Martin Kosek

Re: [Freeipa-devel] [PATCH] 303 Add range safety check for range_mod and range_del

2012-09-06 Thread Martin Kosek
On Thu, 2012-09-06 at 17:55 +0200, Sumit Bose wrote:
 On Wed, Sep 05, 2012 at 05:13:41PM +0200, Martin Kosek wrote:
  The range_mod and range_del commands could easily leave existing objects
  with an ID that suddenly falls outside the specified range. This could
  cause issues in trust scenarios where range objects are used for the
  computation of remote IDs.
  
  Add a validator for both commands, as a pre_callback, to check whether any
  object has an ID in the range that would become out-of-range.
  Also add unit tests for this new validator.
  
  https://fedorahosted.org/freeipa/ticket/2919
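A rough sketch of the kind of pre_callback check the description above refers
to; the helper name, filter, and error text are illustrative only and are not
the code from the patch:

    # Illustrative only -- not the actual FreeIPA patch. The idea: before
    # shrinking or deleting a range, make sure no POSIX object would be left
    # with an ID inside the part of the range that is going away.
    from ipalib import errors

    def check_range_not_in_use(ldap, base_dn, range_start, range_size):
        low, high = range_start, range_start + range_size - 1
        id_filter = (
            "(&(|(objectClass=posixAccount)(objectClass=posixGroup))"
            "(|(&(uidNumber>=%d)(uidNumber<=%d))"
            "(&(gidNumber>=%d)(gidNumber<=%d))))" % (low, high, low, high)
        )
        try:
            entries, truncated = ldap.find_entries(filter=id_filter,
                                                   base_dn=base_dn)
        except errors.NotFound:
            return  # no object uses an ID from this range
        raise errors.ValidationError(
            name="range",
            error="objects with IDs in this range exist; the operation "
                  "would leave them outside any defined range")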
 
 ACK
 
 bye,
 Sumit
 

Thanks for the review. Pushed to master, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rob Crittenden

Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication update vector (RUV) on the
other servers. This can confuse things if the replica is re-added, and it
also causes the server to calculate changes against a server that may no
longer exist.

389-ds-base provides a new task that propagates itself to all available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully clean
things up.

It isn't perfect. If any replica is down or unavailable at the time the
cleanruv task fires, and then comes back up, the old RUV data may be
re-propagated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the servers
we have a RUV for. Using this you can call clean_ruv with the replication
id of a server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run clean_ruv against a replica id
that does exist it can cause a loss of data. I believe I've put in enough
scary warnings about this.

rob
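For context, the CLEANALLRUV operation described above is driven by an
ordinary 389-ds task entry under cn=cleanallruv,cn=tasks,cn=config. A minimal
python-ldap sketch of creating such a task by hand, assuming a local server,
Directory Manager credentials, dc=example,dc=com as the replicated suffix and
replica id 5 taken from list_ruv (an illustration, not the patch itself):

    import ldap
    import ldap.modlist

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "Secret123")

    task_dn = "cn=clean 5,cn=cleanallruv,cn=tasks,cn=config"
    task_attrs = {
        "objectClass": ["top", "extensibleObject"],
        "cn": ["clean 5"],
        "replica-base-dn": ["dc=example,dc=com"],  # suffix whose RUV is cleaned
        "replica-id": ["5"],                       # replica id to purge
    }
    conn.add_s(task_dn, ldap.modlist.addModlist(task_attrs))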



Good work there, this should make cleaning RUVs much easier than with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite lost. I think it would
help if we for example have all info for commands indented. This way user
could simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make them
consistent with the rest of the commands (re-initialize, force-sync).


3) It would be nice to be able to run clean_ruv command in an unattended way
(for better testing), i.e. respect --force option as we already do for
ipa-replica-manage del. This fix would aid test automation in the future.


4) (minor) The new question (and the del too) does not react too well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter is quite confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv | force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the command
definition


6) When the remote replica is down, the clean_ruv command fails with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin - cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin - cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping tree,cn=config) has not been cleaned.  You will need to rerun the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin - cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the command failed, possibly
because of disconnected replicas and that they could enable the replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+        except ldap.INSUFFICIENT_ACCESS:
+            # We can't make the server we're removing read-only but
+            # this isn't a show-stopper
+            root_logger.debug("No permission to switch replica to read-only, continuing anyway")
+            pass


I think this addresses everything.

rob


Thanks, almost there! I just found one more issue which needs to be fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
Directory Manager password:

Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing removal
Failed to 

Re: [Freeipa-devel] [PATCH] 1050 prevent replica orphans

2012-09-06 Thread Rob Crittenden

Martin Kosek wrote:

On 08/31/2012 07:40 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

It was possible to use ipa-replica-manage connect/disconnect/del to end up
orphaning one or more IPA masters. This is an attempt to catch and
prevent that case.

I tested with this topology, trying to delete B.

A - B - C

I got here by creating B and C from A, connecting B to C then deleting
the link from A to B, so it went from A - B and A - C to the above.

What I do is look up the servers that the delete candidate host has
connections to and see if we're the last link.

I added an escape clause if there are only two masters.

rob


Oh, this relies on my cleanruv patch 1031.

rob



1) When I run ipa-replica-manage del --force against an already uninstalled
host, the new code prevents the deletion because it cannot connect to it. It
also crashes with an UnboundLocalError:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force

Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing removal
Traceback (most recent call last):
   File "/sbin/ipa-replica-manage", line 708, in <module>
     main()
   File "/sbin/ipa-replica-manage", line 677, in main
     del_master(realm, args[1], options)
   File "/sbin/ipa-replica-manage", line 476, in del_master
     sys.exit("Failed read master data from '%s': %s" % (delrepl.hostname,
str(e)))
UnboundLocalError: local variable 'delrepl' referenced before assignment


Fixed.




I also hit this error when removing a winsync replica.


Fixed.




2) As I wrote before, I think having the --force option override the user
prompts would benefit test automation:

+        if not ipautil.user_input("Continue to delete?", False):
+            sys.exit("Aborted")


Fixed.




3) I don't think this code will cover this topology:

A - B - C - D - E

It would allow you to delete replica C even though that would separate A-B
and D-E. Though we may not want to cover this situation now, what you have
definitely helps.


I think you may be right. I only tested with 4 servers. With this topology, B
and D would both still have 2 agreements, so they wouldn't be covered by the
last-link test.


rob
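To make Martin's point 3 concrete: with the chain A - B - C - D - E, every
neighbour of C still has two agreements, so the last-link test passes, yet
removing C splits the remaining servers into two islands. A small toy model
(not code from the patch):

    # Toy model of the agreement graph for A - B - C - D - E.
    agreements = {
        "A": {"B"},
        "B": {"A", "C"},
        "C": {"B", "D"},
        "D": {"C", "E"},
        "E": {"D"},
    }
    delete = "C"

    # The last-link test: a neighbour is flagged only if the deleted master
    # is its single remaining agreement.
    orphaned = [n for n in agreements[delete] if agreements[n] == {delete}]
    print(orphaned)   # [] -- B and D both still have two agreements

    # Yet removing C splits the remaining servers into {A, B} and {D, E}.
    remaining = {k: v - {delete} for k, v in agreements.items() if k != delete}
    reachable, todo = set(), ["A"]
    while todo:
        node = todo.pop()
        if node not in reachable:
            reachable.add(node)
            todo.extend(remaining[node])
    print(sorted(reachable))   # ['A', 'B'] -- D and E are unreachable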

From 66217dd61b8271d6282eaad729c92e6bf961123a Mon Sep 17 00:00:00 2001
From: Rob Crittenden rcrit...@redhat.com
Date: Fri, 31 Aug 2012 11:56:58 -0400
Subject: [PATCH] When deleting a master, try to prevent orphaning other
 servers.

If you have a replication topology like A - B - C and you try
to delete server B, that will leave A and C orphaned. It may also
prevent re-installation of a new master on B because the cn=masters
entry for it probably still exists on at least one of the other masters.

Check each master that it connects to, to ensure that it isn't the
last link, and fail if it is. If any of the masters are not up, then
warn that this could be a bad thing but let the user continue if
they want.

Document how to remove a cn=masters entry in the man page.

https://fedorahosted.org/freeipa/ticket/2797
---
 install/tools/ipa-replica-manage   | 66 ++
 install/tools/man/ipa-replica-manage.1 | 12 +++
 2 files changed, 78 insertions(+)

diff --git a/install/tools/ipa-replica-manage b/install/tools/ipa-replica-manage
index c6ef51b7215164c9538afae942e3d42285ca860b..24a33bfb5ea51035eb12eaf7944a7e566640c2ff 100755
--- a/install/tools/ipa-replica-manage
+++ b/install/tools/ipa-replica-manage
@@ -398,9 +398,52 @@ def clean_ruv(realm, ruv, options):
     thisrepl.cleanallruv(ruv)
     print "Cleanup task created"
 
+def check_last_link(delrepl, realm, dirman_passwd, force):
+    """
+    We don't want to orphan a server when deleting another one. If you have
+    a topology that looks like this:
+
+             A     B
+             |     |
+             |     |
+             |     |
+             C     D
+
+    If we try to delete host D it will orphan host B.
+
+    What we need to do is if the master being deleted has only a single
+    agreement, connect to that master and make sure it has agreements with
+    more than just this master.
+
+    @delrepl: a ReplicationManager object of the master being deleted
+
+    returns: hostname of orphaned server or None
+    """
+    replica_names = delrepl.find_ipa_replication_agreements()
+
+    orphaned = []
+    # Connect to each remote server and see what agreements it has
+    for replica in replica_names:
+        try:
+            repl = replication.ReplicationManager(realm, replica, dirman_passwd)
+        except ldap.SERVER_DOWN, e:
+            print "Unable to validate that '%s' will not be orphaned." % replica
+            if not force and not ipautil.user_input("Continue to delete?", False):
+                sys.exit("Aborted")
+            continue
+        names = repl.find_ipa_replication_agreements()
+        if len(names) == 1 and names[0] == delrepl.hostname:
+            orphaned.append(replica)
+
+    if len(orphaned):
+        return ', '.join(orphaned)
+    else:
+        return None
+
 def del_master(realm,