Re: [Freeipa-devel] MemberOf and Referential Integrity plugin failures cause abort of operation

2015-09-15 Thread Rich Megginson

On 09/15/2015 04:58 AM, Jan Cholasta wrote:

On 15.9.2015 10:23, Tomas Babej wrote:

Hi,

from DS 1.3.3 on, the memberOf and referential integrity plugins have been
converted to backend transaction plugins, which means that failures in
these plugins will propagate and cause an abort of the operation that
triggered them. [1]

I.e., in the case of the memberOf plugin, if an operation triggered an
addition of the memberOf attribute and that addition failed, the operation
itself would previously still succeed in spite of this failure. This is no
longer the case.


IMO the new transactional behavior is correct - the original operation and 
all of the triggered operations should succeed or fail together.




We have already been hit by this issue in a winsync agreement setup:

https://bugzilla.redhat.com/show_bug.cgi?id=1262315

However, there is little that is special about this case, and there might be
multiple such entries in IPA which are added as group members
but do not contain an objectclass which allows the memberOf attribute.

So we need to step back and think - are there any other entries where
this change of behaviour will hit us?


As far as ipalib is concerned, these are the objects which may have 
the memberOf attribute (with the object class providing it in parentheses):


group (nestedGroup)
hbacsvc (ipaHBACService)
host (ipaHost)
hostgroup (nestedGroup)
netgroup (ipaNISNetgroup)
privilege (nestedGroup)
role (nestedGroup)
service (ipaService)
sudocmd (NONE)
user (inetUser)

so memberOf needs to be added to ipaSudoCmd.
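A fix of that shape would be a modification to cn=schema; the fragment below only sketches the idea with an auxiliary objectclass - the OID, the objectclass name, and the X-ORIGIN are invented placeholders, not the actual IPA schema change:

```ldif
# Illustrative only: allow memberOf on sudo command entries via an
# auxiliary objectclass.  OID and name are invented placeholders.
dn: cn=schema
changetype: modify
add: objectClasses
objectClasses: ( 2.25.987654321.1 NAME 'exampleSudoCmdMemberOf'
  SUP top AUXILIARY MAY ( memberOf ) X-ORIGIN 'example sketch' )
```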

The config plugin lists memberOf as an operational attribute, which I 
guess is no longer the case?


It should never have been an operational attribute.  Perhaps this was a 
"hack" to work around the fact that there were objects/objectclasses 
missing memberOf?




Also, memberOf is excluded from replication in 
ipaserver/install/replication.py.


By design - all servers are expected to have the same memberOf plugin 
configuration, and add memberOf locally.


--
Manage your subscription for the Freeipa-devel mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-devel
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code


Re: [Freeipa-devel] IPA 3.0 migrated to 4.1 users break winsync agreement when deleted in active directory

2015-09-09 Thread Rich Megginson

On 09/09/2015 03:39 AM, Martin Basti wrote:



On 09/09/2015 10:50 AM, Andreas Calminder wrote:
Forgot to write that deleting users in Active Directory that were not 
migrated with the migrate-ds command works fine; it's only migrated users 
present in AD that break the winsync agreement on deletion.


On 09/09/2015 10:35 AM, Andreas Calminder wrote:

Hi,
I've asked in #freeipa on freenode but to no avail, so I figured I'd ask 
here as well, since I think I've actually hit a bug or (quite possibly) 
done something moronic configuration/migration-wise.


I've got an existing FreeIPA 3.0.0 environment running with a fully 
functioning winsync agreement and PassSync service with the Windows 
environment's Active Directory. I'm trying to migrate the 3.0.0 
environment's users into a freshly installed 4.1 (RHEL 7) environment. 
After the migration I set up a winsync agreement and make it 
bi-directional (one-way sync from Windows). Everything seems to be 
working alright until I delete a migrated user from Active Directory: 
after winsync picks up on the change, it breaks and suggests a 
re-initialize. After the re-initialization the agreement seems to be 
fine; however, the deleted user is still present in the IPA 4.1 
environment and cannot be deleted. The web GUI and the ipa CLI say 
"ipauser1: user not found", yet ipa user-find ipauser1 finds the 
user and it's visible in the UI.


Anyone had the same problem or anything similar or any pointers on 
where to start looking?


Regards,
Andreas





Hello, this might be a replication conflict.

Can you list that user via ldapsearch to check whether this is a 
replication conflict?
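Conflict entries are usually identifiable by the nsds5ReplConflict operational attribute; a search along these lines should show them (the bind DN and base DN are examples, and this of course has to run against the live server):

```shell
# Illustrative: list replication-conflict entries under an example suffix.
ldapsearch -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" \
    "(&(objectClass=ldapSubEntry)(nsds5ReplConflict=*))" \
    \* nsds5ReplConflict
```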


https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/8.2/html/Administration_Guide/Managing_Replication-Solving_Common_Replication_Conflicts.html 



Use the latest docs, just in case they are more accurate: 
https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Managing_Replication-Solving_Common_Replication_Conflicts.html

Re: [Freeipa-devel] How to support Designate?

2015-09-02 Thread Rich Megginson

On 09/01/2015 07:34 AM, Simo Sorce wrote:

On Tue, 2015-09-01 at 07:17 -0600, Rich Megginson wrote:

On 09/01/2015 05:39 AM, Petr Spacek wrote:

On 1.9.2015 00:42, Rich Megginson wrote:

On 08/31/2015 11:00 AM, Simo Sorce wrote:

On Mon, 2015-08-31 at 10:15 -0600, Rich Megginson wrote:

On 08/31/2015 01:35 AM, Petr Spacek wrote:

On 26.8.2015 20:09, Rich Megginson wrote:

On 08/25/2015 09:08 AM, Petr Spacek wrote:

On 8.7.2015 19:56, Rich Megginson wrote:

On 07/08/2015 10:11 AM, Petr Spacek wrote:

Assuming that Designate wants to own DNS and be Primary Master, it would be
awesome if they could support the standard DNS UPDATE protocol (RFC 2136)
alongside their own JSON API.

The JSON API is a superset of the DNS UPDATE protocol because it also allows
adding zones, but still, a standard protocol would mean that a standard client
(possibly a guest OS inside a VM) can update its records without any OpenStack
dependency, which is very much desirable.

The use case here is to allow the guest OS to publish its SSH key (which was
generated inside the VM after first boot) to prevent man-in-the-middle
attacks.

I'm working on a different approach for guest OS registration.  This
involves
a Nova hook/plugin:
* build_instance pre-hook to generate an OTP and call ipa host-add with the
OTP - add OTP to new host metadata - add ipa-client-registration script
to new
host cloud-init
* new instance calls script - will wait for OTP to become available in
metadata, then call ipa-client-install with OTP
* Floating IP is assigned to VM - Nova hook will call dnsrecord-add with
new IP

BTW dnsrecord-add can be omitted if standard DNS UPDATE is supported.
ipa-client-install is using DNS UPDATE today.
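For reference, the kind of standard dynamic update wished for here can be driven by plain nsupdate with GSS-TSIG (RFC 3007); everything below (zone, host name, SSHFP data) is an illustrative placeholder, and it needs a live, GSSAPI-enabled DNS server and a valid keytab:

```shell
# Illustrative RFC 2136 update authenticated via GSSAPI (nsupdate -g).
kinit -k host/vm01.example.com
nsupdate -g <<'EOF'
server ipa.example.com
zone example.com.
update add vm01.example.com. 300 SSHFP 1 1 0123456789abcdef0123456789abcdef01234567
send
EOF
```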

I already have to support the IPA JSON REST interface with Kerberos
credentials to do the host add, so it is easy to support dnsrecord-add.


https://github.com/richm/rdo-vm-factory/tree/master/rdo-ipa-nova
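The OTP step of the flow above could look roughly like this (the host name and the commented ipa invocation are illustrative assumptions; only the OTP generation itself is meant to be runnable):

```shell
# Sketch: generate a random one-time password for host enrollment.
OTP=$(openssl rand -hex 16)          # 16 random bytes -> 32 hex characters
echo "generated OTP of length ${#OTP}"
# In the pre-hook this value would then be used roughly like:
#   ipa host-add vm01.example.com --password="$OTP"
# and stored in the instance metadata for cloud-init to pick up.
```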


The same goes for all other sorts of DANE/DNSSEC data or service
discovery using DNS, where a guest/container running a distributed service
can publish its existence in DNS.

DNS UPDATE supports GSS(API) for authentication via RFC 3007 and that is
widely supported, too.

So DNS UPDATE is my biggest wish :-)


Ok.  There was a Designate blueprint for such a feature, but I can't
find it and neither can the Designate guys.  There is a mention of nsupdate
in the minidns blueprint, but that's about it.  The fact that Designate
upstream can't find the bp suggests that this is not a high priority for
them and that they will likely not implement it on their own, i.e. we would
have to contribute this feature.

If Designate had such a feature, how would this help us integrate
FreeIPA with
Designate?

It would greatly simplify integration with FreeIPA. There is a plan to
support
DNS updates as described in RFC 2136 to push updates from FreeIPA
servers to
external DNS servers, so we could use the same code to integrate with AD &
Designate at the same time.

(I'm sorry for the delay, it somehow slipped through the cracks.)


For Designate, for our use cases, we want IPA to be the authoritative
source of DNS data.

Why? In my eyes it is additional complexity for no obvious benefit. DNS is
built around the assumption that there is only one authoritative source of
data and, as far as I can tell, none of the attempts to bend this assumption
ended well.

But what about users/operators who want to integrate OpenStack with
their existing DNS deployment (e.g. IPA or AD)?  Will they allow
converting their IPA/AD DNS to be a replica of Designate?

No, they would not want to, or have no permissions to do so.
But that shouldn't be a big issue, designate will probably be made to
manage a completely unrelated namespace.


This seems to
be the obverse of most of the ways OpenStack is integrated into existing
deployments.  For example, for Keystone Identity, you don't configure
Keystone to be the authoritative source of data for identity, then
configure IPA or AD to be a replica of Keystone.  You configure Keystone
to use IPA/AD for its identity information.

Indeed.


In my eyes IPA should have the ability to integrate with whatever DNS server
the admin wants to use, using standard protocols.

Does this mean the best way to support Designate will be to change IPA
DNS so that it can be a replica of Designate, and get its data via AXFR
from Designate?

No, we should probably just make it possible for IPA to talk to
designate to add the necessary records. If Designate is in use, the IPA
DNS will not be in use and turned off.

Then why use IPA at all?  Would be much simpler for the user to stand up a
PowerDNS or BIND9, which are supported out of the box.

Yes, that is basically what I'm saying :-) In my eyes IPA should integrate
with whatever DNS server you want to use, be it Designate or anything else. If
we have such integration then there is no point in doing two-way
synchronization between IPA DNS and Designate DNS.

What does "integration" mean in this context, if it doesn't mean
synchronization or zone transfers?

Re: [Freeipa-devel] How to support Designate?

2015-09-01 Thread Rich Megginson

On 09/01/2015 05:39 AM, Petr Spacek wrote:

On 1.9.2015 00:42, Rich Megginson wrote:

On 08/31/2015 11:00 AM, Simo Sorce wrote:

On Mon, 2015-08-31 at 10:15 -0600, Rich Megginson wrote:

On 08/31/2015 01:35 AM, Petr Spacek wrote:

On 26.8.2015 20:09, Rich Megginson wrote:

On 08/25/2015 09:08 AM, Petr Spacek wrote:

On 8.7.2015 19:56, Rich Megginson wrote:

On 07/08/2015 10:11 AM, Petr Spacek wrote:

Assuming that Designate wants to own DNS and be Primary Master, it would be
awesome if they could support the standard DNS UPDATE protocol (RFC 2136)
alongside their own JSON API.

The JSON API is a superset of the DNS UPDATE protocol because it also allows
adding zones, but still, a standard protocol would mean that a standard client
(possibly a guest OS inside a VM) can update its records without any OpenStack
dependency, which is very much desirable.

The use case here is to allow the guest OS to publish its SSH key (which was
generated inside the VM after first boot) to prevent man-in-the-middle
attacks.

I'm working on a different approach for guest OS registration.  This
involves
a Nova hook/plugin:
* build_instance pre-hook to generate an OTP and call ipa host-add with the
OTP - add OTP to new host metadata - add ipa-client-registration script
to new
host cloud-init
* new instance calls script - will wait for OTP to become available in
metadata, then call ipa-client-install with OTP
* Floating IP is assigned to VM - Nova hook will call dnsrecord-add with
new IP

BTW dnsrecord-add can be omitted if standard DNS UPDATE is supported.
ipa-client-install is using DNS UPDATE today.

I already have to support the IPA JSON REST interface with Kerberos
credentials to do the host add, so it is easy to support dnsrecord-add.


https://github.com/richm/rdo-vm-factory/tree/master/rdo-ipa-nova


The same goes for all other sorts of DANE/DNSSEC data or service
discovery using DNS, where a guest/container running a distributed service
can publish its existence in DNS.

DNS UPDATE supports GSS(API) for authentication via RFC 3007 and that is
widely supported, too.

So DNS UPDATE is my biggest wish :-)


Ok.  There was a Designate blueprint for such a feature, but I can't
find it and neither can the Designate guys.  There is a mention of nsupdate
in the minidns blueprint, but that's about it.  The fact that Designate
upstream can't find the bp suggests that this is not a high priority for
them and that they will likely not implement it on their own, i.e. we would
have to contribute this feature.

If Designate had such a feature, how would this help us integrate
FreeIPA with
Designate?

It would greatly simplify integration with FreeIPA. There is a plan to
support
DNS updates as described in RFC 2136 to push updates from FreeIPA
servers to
external DNS servers, so we could use the same code to integrate with AD &
Designate at the same time.

(I'm sorry for the delay, it somehow slipped through the cracks.)


For Designate, for our use cases, we want IPA to be the authoritative
source of DNS data.

Why? In my eyes it is additional complexity for no obvious benefit. DNS is
built around the assumption that there is only one authoritative source of
data and, as far as I can tell, none of the attempts to bend this assumption
ended well.

But what about users/operators who want to integrate OpenStack with
their existing DNS deployment (e.g. IPA or AD)?  Will they allow
converting their IPA/AD DNS to be a replica of Designate?

No, they would not want to, or have no permissions to do so.
But that shouldn't be a big issue, designate will probably be made to
manage a completely unrelated namespace.


This seems to
be the obverse of most of the ways OpenStack is integrated into existing
deployments.  For example, for Keystone Identity, you don't configure
Keystone to be the authoritative source of data for identity, then
configure IPA or AD to be a replica of Keystone.  You configure Keystone
to use IPA/AD for its identity information.

Indeed.


In my eyes IPA should have the ability to integrate with whatever DNS server
the admin wants to use, using standard protocols.

Does this mean the best way to support Designate will be to change IPA
DNS so that it can be a replica of Designate, and get its data via AXFR
from Designate?

No, we should probably just make it possible for IPA to talk to
designate to add the necessary records. If Designate is in use, the IPA
DNS will not be in use and turned off.

Then why use IPA at all?  Would be much simpler for the user to stand up a
PowerDNS or BIND9, which are supported out of the box.

Yes, that is basically what I'm saying :-) In my eyes IPA should integrate
with whatever DNS server you want to use, be it Designate or anything else. If
we have such integration then there is no point in doing two-way
synchronization between IPA DNS and Designate DNS.


What does "integration" mean in this context, if it doesn't mean 
synchronization or zone transfers?





It makes little to no sense to replicate stuff out of designate if we
are not the master server.

Re: [Freeipa-devel] How to support Designate?

2015-08-31 Thread Rich Megginson

On 08/31/2015 01:35 AM, Petr Spacek wrote:

On 26.8.2015 20:09, Rich Megginson wrote:

On 08/25/2015 09:08 AM, Petr Spacek wrote:

On 8.7.2015 19:56, Rich Megginson wrote:

On 07/08/2015 10:11 AM, Petr Spacek wrote:

Assuming that Designate wants to own DNS and be Primary Master, it would be
awesome if they could support the standard DNS UPDATE protocol (RFC 2136)
alongside their own JSON API.

The JSON API is a superset of the DNS UPDATE protocol because it also allows
adding zones, but still, a standard protocol would mean that a standard client
(possibly a guest OS inside a VM) can update its records without any OpenStack
dependency, which is very much desirable.

The use case here is to allow the guest OS to publish its SSH key (which was
generated inside the VM after first boot) to prevent man-in-the-middle
attacks.

I'm working on a different approach for guest OS registration.  This involves
a Nova hook/plugin:
* build_instance pre-hook to generate an OTP and call ipa host-add with the
OTP - add OTP to new host metadata - add ipa-client-registration script to new
host cloud-init
* new instance calls script - will wait for OTP to become available in
metadata, then call ipa-client-install with OTP
* Floating IP is assigned to VM - Nova hook will call dnsrecord-add with new IP

BTW dnsrecord-add can be omitted if standard DNS UPDATE is supported.
ipa-client-install is using DNS UPDATE today.


I already have to support the IPA JSON REST interface with Kerberos 
credentials to do the host add, so it is easy to support dnsrecord-add.





https://github.com/richm/rdo-vm-factory/tree/master/rdo-ipa-nova


The same goes for all other sorts of DANE/DNSSEC data or service
discovery using DNS, where a guest/container running a distributed service
can publish its existence in DNS.

DNS UPDATE supports GSS(API) for authentication via RFC 3007 and that is
widely supported, too.

So DNS UPDATE is my biggest wish :-)


Ok.  There was a Designate blueprint for such a feature, but I can't find it
and neither can the Designate guys.  There is a mention of nsupdate in the
minidns blueprint, but that's about it.  The fact that Designate upstream
can't find the bp suggests that this is not a high priority for them and
that they will likely not implement it on their own, i.e. we would have to
contribute this feature.

If Designate had such a feature, how would this help us integrate FreeIPA with
Designate?

It would greatly simplify integration with FreeIPA. There is a plan to support
DNS updates as described in RFC 2136 to push updates from FreeIPA servers to
external DNS servers, so we could use the same code to integrate with AD &
Designate at the same time.

(I'm sorry for the delay, it somehow slipped through the cracks.)


For Designate, for our use cases, we want IPA to be the authoritative source
of DNS data.

Why? In my eyes it is additional complexity for no obvious benefit. DNS is
built around the assumption that there is only one authoritative source of
data and, as far as I can tell, none of the attempts to bend this assumption
ended well.


But what about users/operators who want to integrate OpenStack with 
their existing DNS deployment (e.g. IPA or AD)?  Will they allow 
converting their IPA/AD DNS to be a replica of Designate?  This seems to 
be the obverse of most of the ways OpenStack is integrated into existing 
deployments.  For example, for Keystone Identity, you don't configure 
Keystone to be the authoritative source of data for identity, then 
configure IPA or AD to be a replica of Keystone.  You configure Keystone 
to use IPA/AD for its identity information.




In my eyes IPA should have the ability to integrate with whatever DNS server
the admin wants to use, using standard protocols.


Does this mean the best way to support Designate will be to change IPA 
DNS so that it can be a replica of Designate, and get its data via AXFR 
from Designate?




What is the benefit of the other approach?

Petr^2 Spacek



When a client wants to read data from Designate, that data should somehow come
from IPA.  I don't think Designate has any sort of proxy or pass-through
feature, so the data would have to be sync'd from IPA.  If IPA supports being
a server for AXFR/IXFR, Designate could be changed to support AXFR/IXFR on the
client side, and then it would just be a slave of IPA.  If IPA does not support
zone transfers, then we would need some sort of initial sync of data from IPA
to Designate (I wrote such a script for Designate:
https://github.com/openstack/designate/blob/master/contrib/ipaextractor.py).
Then, perhaps some sort of proxy/service that would poll for changes
(syncrepl?) in IPA, then submit those changes to Designate (using the Designate
REST API, or DNS UPDATE when Designate mDNS supports it).

When a client wants to update data in Designate, we need to somehow get that
data into IPA.  The only way Designate pushes data out currently is via AXFR,
which doesn't work for IPA to be a direct slave of Designate.  What might work
is to have an "agent" that gets the AXFR, then somehow converts that into IPA
updates.

Re: [Freeipa-devel] How to support Designate?

2015-08-31 Thread Rich Megginson

On 08/31/2015 11:00 AM, Simo Sorce wrote:

On Mon, 2015-08-31 at 10:15 -0600, Rich Megginson wrote:

On 08/31/2015 01:35 AM, Petr Spacek wrote:

On 26.8.2015 20:09, Rich Megginson wrote:

On 08/25/2015 09:08 AM, Petr Spacek wrote:

On 8.7.2015 19:56, Rich Megginson wrote:

On 07/08/2015 10:11 AM, Petr Spacek wrote:

Assuming that Designate wants to own DNS and be Primary Master, it would be
awesome if they could support the standard DNS UPDATE protocol (RFC 2136)
alongside their own JSON API.

The JSON API is a superset of the DNS UPDATE protocol because it also allows
adding zones, but still, a standard protocol would mean that a standard client
(possibly a guest OS inside a VM) can update its records without any OpenStack
dependency, which is very much desirable.

The use case here is to allow the guest OS to publish its SSH key (which was
generated inside the VM after first boot) to prevent man-in-the-middle
attacks.

I'm working on a different approach for guest OS registration.  This involves
a Nova hook/plugin:
* build_instance pre-hook to generate an OTP and call ipa host-add with the
OTP - add OTP to new host metadata - add ipa-client-registration script to new
host cloud-init
* new instance calls script - will wait for OTP to become available in
metadata, then call ipa-client-install with OTP
* Floating IP is assigned to VM - Nova hook will call dnsrecord-add with new IP

BTW dnsrecord-add can be omitted if standard DNS UPDATE is supported.
ipa-client-install is using DNS UPDATE today.

I already have to support the IPA JSON REST interface with Kerberos
credentials to do the host add, so it is easy to support dnsrecord-add.


https://github.com/richm/rdo-vm-factory/tree/master/rdo-ipa-nova


The same goes for all other sorts of DANE/DNSSEC data or service
discovery using DNS, where a guest/container running a distributed service
can publish its existence in DNS.

DNS UPDATE supports GSS(API) for authentication via RFC 3007 and that is
widely supported, too.

So DNS UPDATE is my biggest wish :-)


Ok.  There was a Designate blueprint for such a feature, but I can't find it
and neither can the Designate guys.  There is a mention of nsupdate in the
minidns blueprint, but that's about it.  The fact that Designate upstream
can't find the bp suggests that this is not a high priority for them and
that they will likely not implement it on their own, i.e. we would have to
contribute this feature.

If Designate had such a feature, how would this help us integrate FreeIPA with
Designate?

It would greatly simplify integration with FreeIPA. There is a plan to support
DNS updates as described in RFC 2136 to push updates from FreeIPA servers to
external DNS servers, so we could use the same code to integrate with AD &
Designate at the same time.

(I'm sorry for the delay, it somehow slipped through the cracks.)


For Designate, for our use cases, we want IPA to be the authoritative source
of DNS data.

Why? In my eyes it is additional complexity for no obvious benefit. DNS is
built around the assumption that there is only one authoritative source of
data and, as far as I can tell, none of the attempts to bend this assumption
ended well.

But what about users/operators who want to integrate OpenStack with
their existing DNS deployment (e.g. IPA or AD)?  Will they allow
converting their IPA/AD DNS to be a replica of Designate?

No, they would not want to, or have no permissions to do so.
But that shouldn't be a big issue, designate will probably be made to
manage a completely unrelated namespace.


This seems to
be the obverse of most of the ways OpenStack is integrated into existing
deployments.  For example, for Keystone Identity, you don't configure
Keystone to be the authoritative source of data for identity, then
configure IPA or AD to be a replica of Keystone.  You configure Keystone
to use IPA/AD for its identity information.

Indeed.


In my eyes IPA should have the ability to integrate with whatever DNS server
the admin wants to use, using standard protocols.

Does this mean the best way to support Designate will be to change IPA
DNS so that it can be a replica of Designate, and get its data via AXFR
from Designate?

No, we should probably just make it possible for IPA to talk to
designate to add the necessary records. If Designate is in use, the IPA
DNS will not be in use and turned off.


Then why use IPA at all?  Would be much simpler for the user to stand up 
a PowerDNS or BIND9, which are supported out of the box.




It makes little to no sense to replicate stuff out of designate if we
are not the master server.

Simo.





Re: [Freeipa-devel] How to support Designate?

2015-08-26 Thread Rich Megginson

On 08/25/2015 09:08 AM, Petr Spacek wrote:

On 8.7.2015 19:56, Rich Megginson wrote:

On 07/08/2015 10:11 AM, Petr Spacek wrote:

Assuming that Designate wants to own DNS and be Primary Master, it would be
awesome if they could support the standard DNS UPDATE protocol (RFC 2136)
alongside their own JSON API.

The JSON API is a superset of the DNS UPDATE protocol because it also allows
adding zones, but still, a standard protocol would mean that a standard client
(possibly a guest OS inside a VM) can update its records without any OpenStack
dependency, which is very much desirable.

The use case here is to allow the guest OS to publish its SSH key (which was
generated inside the VM after first boot) to prevent man-in-the-middle
attacks.


I'm working on a different approach for guest OS registration.  This 
involves a Nova hook/plugin:
* build_instance pre-hook to generate an OTP and call ipa host-add with 
the OTP - add OTP to new host metadata - add ipa-client-registration 
script to new host cloud-init
* new instance calls script - will wait for OTP to become available in 
metadata, then call ipa-client-install with OTP
* Floating IP is assigned to VM - Nova hook will call dnsrecord-add with 
new IP


https://github.com/richm/rdo-vm-factory/tree/master/rdo-ipa-nova


The same goes for all other sorts of DANE/DNSSEC data or service
discovery using DNS, where a guest/container running a distributed service
can publish its existence in DNS.

DNS UPDATE supports GSS(API) for authentication via RFC 3007 and that is
widely supported, too.

So DNS UPDATE is my biggest wish :-)


Ok.  There was a Designate blueprint for such a feature, but I can't find it
and neither can the Designate guys.  There is a mention of nsupdate in the
minidns blueprint, but that's about it.  The fact that Designate upstream
can't find the bp suggests that this is not a high priority for them and
that they will likely not implement it on their own, i.e. we would have to
contribute this feature.

If Designate had such a feature, how would this help us integrate FreeIPA with
Designate?

It would greatly simplify integration with FreeIPA. There is a plan to support
DNS updates as described in RFC 2136 to push updates from FreeIPA servers to
external DNS servers, so we could use the same code to integrate with AD &
Designate at the same time.

(I'm sorry for the delay, it somehow slipped through the cracks.)



For our use cases with Designate, we want IPA to be the authoritative 
source of DNS data.


When a client wants to read data from Designate, that data should 
somehow come from IPA.  I don't think Designate has any sort of proxy or 
pass-through feature, so the data would have to be synced from IPA.  If IPA 
supports being a server for AXFR/IXFR, Designate could be changed to 
support the AXFR/IXFR client side, and then it would just be a slave of 
IPA.  If IPA does not support zone transfers, then we would need some sort 
of initial sync of data from IPA to Designate (I wrote such a script for 
Designate: 
https://github.com/openstack/designate/blob/master/contrib/ipaextractor.py). 
Then, perhaps some sort of proxy/service that would poll for changes 
(syncrepl?) in IPA, then submit those changes to Designate (using 
Designate REST API, or DNS UPDATE when Designate mDNS supports it).
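The poll-and-diff step of such a proxy could be sketched like this (a toy model: record sets are plain dicts, and actually pushing the resulting operations to the Designate REST API is left out):

```python
def diff_records(ipa_rrsets, designate_rrsets):
    """Compute the changes needed to make Designate match IPA.

    Each argument maps (name, rrtype) -> frozenset of rdata strings.
    Returns (to_create, to_update, to_delete), suitable for driving
    the corresponding Designate recordset API calls.
    """
    to_create, to_update, to_delete = [], [], []
    for key, data in ipa_rrsets.items():
        if key not in designate_rrsets:
            to_create.append((key, data))       # new recordset in IPA
        elif designate_rrsets[key] != data:
            to_update.append((key, data))       # rdata changed in IPA
    for key in designate_rrsets:
        if key not in ipa_rrsets:
            to_delete.append(key)               # removed from IPA
    return to_create, to_update, to_delete
```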


When a client wants to update data in Designate, we need to somehow get 
that data into IPA.  The only way Designate currently pushes data out is 
via AXFR, which doesn't work because IPA cannot be a direct slave of 
Designate.  What might work is an agent that receives the AXFR and then 
somehow converts it into IPA updates.  This would only work if the volume 
of updates is fairly low.  If Designate supported IXFR, it would be much better.
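Such an agent would essentially diff two successive zone snapshots and replay the delta through the IPA CLI or JSON API. A hedged sketch (the snapshot format is invented; the `--<type>-rec` option naming follows the `ipa dnsrecord-*` command convention):

```python
def axfr_to_ipa_commands(zone, old, new):
    """Convert the delta between two AXFR snapshots into `ipa dnsrecord-*`
    invocations.  Snapshots map (name, rrtype) -> set of rdata strings.
    """
    cmds = []
    for (name, rrtype), rdata in sorted(new.items()):
        # records present now but not in the previous transfer
        for rd in sorted(rdata - old.get((name, rrtype), set())):
            cmds.append(["ipa", "dnsrecord-add", zone, name,
                         "--%s-rec=%s" % (rrtype.lower(), rd)])
    for (name, rrtype), rdata in sorted(old.items()):
        # records that disappeared since the previous transfer
        for rd in sorted(rdata - new.get((name, rrtype), set())):
            cmds.append(["ipa", "dnsrecord-del", zone, name,
                         "--%s-rec=%s" % (rrtype.lower(), rd)])
    return cmds
```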


--
Manage your subscription for the Freeipa-devel mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-devel
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code


Re: [Freeipa-devel] Sync useradd from IPA to AD

2015-07-15 Thread Rich Megginson

On 07/15/2015 09:42 AM, Email wrote:
Hi everyone, my name is Tony and this is my first post, so it's nice 
to meet all of you.  I've been tasked with creating an AD and FreeIPA 
environment, and I'm looking into the sync between the two.  It looks 
like creating a user in AD causes that user to be created in IPA, but 
creating a user in IPA does not cause it to be created in AD.  I'm 
wondering why this is.


This is intentional.  If you are using FreeIPA and windows sync, it is 
assumed you want AD to be the provisioning system for new users, and not 
FreeIPA.


I would seriously consider using trusts instead of windows sync.

See section 8.1 of the fedora documentation as a reference. 


Link please?  We may need to clarify the language.


Thanks in advance!

~Tony






Re: [Freeipa-devel] How to support Designate?

2015-07-08 Thread Rich Megginson

On 07/08/2015 04:31 AM, Petr Spacek wrote:

On 1.7.2015 17:12, Rich Megginson wrote:

On 07/01/2015 09:10 AM, Petr Spacek wrote:

On 1.7.2015 16:43, Rich Megginson wrote:

How much work would it be to support IPA as an AXFR/IXFR client or server with
Designate?  Right now, their miniDNS component only supports being a master
and sending updates via AXFR, but they have IXFR support planned.

I need to read more about it. Could you please point me to some comprehensive
docs about Designate?

Thanks!


http://docs.openstack.org/developer/designate/architecture.html

Designate in setups with mini-DNS acts as the DNS master server, i.e. the only
source of DNS data/truth. Currently FreeIPA can act only as a master, too,
so the two cannot be combined.


By master do you mean unable to accept AXFR/IXFR from another server?



I can see several alternatives:

A) Add support for slave zones to FreeIPA.
It should be relatively easy and I guess doable in Fedora 23 time frame if it
gets appropriate priority.

For plain/insecure DNS zones it will allow us to use FreeIPA in place of any
other DNS server but the added value will be negligible because FreeIPA acting
as a slave cannot change the data.

The real added value could be the ability of FreeIPA to DNSSEC-sign zones and
do the DNSSEC key management. I believe that we should be able to re-use
machinery we implemented for master zones in FreeIPA so DNSSEC signing for
slave zones should be almost 'for free'.

When implemented, FreeIPA could become the easiest way to secure DNS in
Designate with DNSSEC technology, even in cases where all the data are managed
by the Designate API.


This sounds interesting.  This seems like it would fit in with the 
typical OpenStack use case - create a new host, assign it a hostname in 
a sub-zone.





B) We can avoid implementing slave zones by using 'agent':
http://docs.openstack.org/developer/designate/glossary.html

If I'm not mistaken, this is what you implemented last year.


I implemented support in Designate for a FreeIPA backend which used the 
JSON HTTPS API to send updates from Designate to FreeIPA.

Designate has deprecated support for backends.

The agent approach is basically putting a mini-DNS-like daemon on each 
system which can accept AXFR from Designate.  This agent would then use 
the backend code I developed to send the data to FreeIPA.





C) We can say that combining FreeIPA DNS and Designate does not make sense and
drop what you did last year.


It was already dropped when the backend approach was deprecated.


In the current architecture it really does not add
any value *unless* we add DNSSEC to the mix.


D) Integrate IPA installers with Designate API.
This is somehow complementary to variants A (and C) and would allow us to
automatically add DNS records required by FreeIPA to Designate during FreeIPA
installation and replica management.


I wrote a script (ipaextractor.py) that will extract DNS data from 
FreeIPA and store it in Designate.  That would be a good place to start.





In my opinion variants A+D are the best way to move forward. What do you think?



If we could change Designate in some way to work better with FreeIPA, 
what would you propose?




Re: [Freeipa-devel] How to support Designate?

2015-07-08 Thread Rich Megginson

On 07/08/2015 11:56 AM, Rich Megginson wrote:

On 07/08/2015 10:11 AM, Petr Spacek wrote:

On 8.7.2015 17:10, Rich Megginson wrote:

On 07/08/2015 04:31 AM, Petr Spacek wrote:

On 1.7.2015 17:12, Rich Megginson wrote:

On 07/01/2015 09:10 AM, Petr Spacek wrote:

On 1.7.2015 16:43, Rich Megginson wrote:
How much work would it be to support IPA as an AXFR/IXFR client or server
with Designate?  Right now, their miniDNS component only supports being a
master and sending updates via AXFR, but they have IXFR support planned.

I need to read more about it. Could you please point me to some
comprehensive docs about Designate?

Thanks!


http://docs.openstack.org/developer/designate/architecture.html

Designate in setups with mini-DNS acts as the DNS master server, i.e. the
only source of DNS data/truth. Currently FreeIPA can act only as a master,
too, so the two cannot be combined.

By master do you mean unable to accept AXFR/IXFR from another server?

Sort of. DNS is conceptually built around the concept of a single
authoritative database hosted on the Primary Master server. The database is
then transferred using AXFR to Slave servers, which are read-only (and can
forward update requests to the Primary Master).

See http://tools.ietf.org/html/rfc2136#section-1

The Primary Master server is the place where changes are made. There is by
definition only one Primary Master server per zone, so FreeIPA and Designate
cannot be Primary Masters at the same time.

We need to decide who is going to have control over the data.


I can see several alternatives:

A) Add support for slave zones to FreeIPA.
It should be relatively easy and I guess doable in the Fedora 23 time frame
if it gets appropriate priority.

For plain/insecure DNS zones it will allow us to use FreeIPA in place of any
other DNS server, but the added value will be negligible because FreeIPA
acting as a slave cannot change the data.

The real added value could be the ability of FreeIPA to DNSSEC-sign zones
and do the DNSSEC key management. I believe that we should be able to re-use
the machinery we implemented for master zones in FreeIPA, so DNSSEC signing
for slave zones should be almost 'for free'.

When implemented, FreeIPA could become the easiest way to secure DNS in
Designate with DNSSEC technology, even in cases where all the data are
managed by the Designate API.

This sounds interesting.  This seems like it would fit in with the typical
OpenStack use case - create a new host, assign it a hostname in a sub-zone.

To be sure we understood each other:
In the scenarios where FreeIPA acts as a Slave server, the change is done in
Designate and then a new version of the DNS zone is transferred to FreeIPA.
After that FreeIPA can DNSSEC-sign the zone and serve the signed version to
the clients.


B) We can avoid implementing slave zones by using an 'agent':
http://docs.openstack.org/developer/designate/glossary.html

If I'm not mistaken, this is what you implemented last year.

I implemented support in Designate for a FreeIPA backend which used the JSON
HTTPS API to send updates from Designate to FreeIPA.
Designate has deprecated support for backends.

The agent approach is basically putting a mini-DNS-like daemon on each
system which can accept AXFR from Designate.  This agent would then use the
backend code I developed to send the data to FreeIPA.

Wow, that is a lot of complexity. I suspect that something like this is
already implemented in dnssyncd written by Martin Basti:
https://github.com/bastiak/dnssyncd


How does this work?  Does it receive a zone transfer (AXFR? IXFR?) from a
DNS master, then update LDAP with those records?


Anyway, I do not see any value in doing so in this particular scenario.
Designate would be the authoritative source of data (Primary Master), so
from a functional point of view it would be the same as (or worse than)
variant (A), just with more code and more error prone.


C) We can say that combining FreeIPA DNS and Designate does not make sense
and drop what you did last year.

It was already dropped when the backend approach was deprecated.


In the current architecture it really does not add
any value *unless* we add DNSSEC to the mix.


D) Integrate IPA installers with the Designate API.
This is somehow complementary to variants A (and C) and would allow us to
automatically add DNS records required by FreeIPA to Designate during
FreeIPA installation and replica management.

I wrote a script (ipaextractor.py) that will extract DNS data from FreeIPA
and store it in Designate.  That would be a good place to start.

Generally FreeIPA should integrate with other DNS server implementations in
a way similar to this:
https://fedorahosted.org/freeipa/ticket/4424
http://www.freeipa.org/page/V4/External_DNS_integration_with_installer

Hopefully the 4.3 timeframe will allow us to work on that.

In my opinion variants A+D are the best way to move forward. What do you
think?


If we could change Designate in some way to work better

Re: [Freeipa-devel] How to support Designate?

2015-07-08 Thread Rich Megginson

On 07/08/2015 10:11 AM, Petr Spacek wrote:

On 8.7.2015 17:10, Rich Megginson wrote:

On 07/08/2015 04:31 AM, Petr Spacek wrote:

On 1.7.2015 17:12, Rich Megginson wrote:

On 07/01/2015 09:10 AM, Petr Spacek wrote:

On 1.7.2015 16:43, Rich Megginson wrote:

How much work would it be to support IPA as an AXFR/IXFR client or server
with
Designate?  Right now, their miniDNS component only supports being a master
and sending updates via AXFR, but they have IXFR support planned.

I need to read more about it. Could you please point me to some comprehensive
docs about Designate?

Thanks!


http://docs.openstack.org/developer/designate/architecture.html

Designate in setups with mini-DNS acts as the DNS master server, i.e. the only
source of DNS data/truth. Currently FreeIPA can act only as a master, too,
so the two cannot be combined.

By master do you mean unable to accept AXFR/IXFR from another server?

Sort of. DNS is conceptually built around the concept of a single authoritative
database hosted on the Primary Master server. The database is then transferred
using AXFR to Slave servers, which are read-only (and can forward update
requests to the Primary Master).

See http://tools.ietf.org/html/rfc2136#section-1

The Primary Master server is the place where changes are made. There is by
definition only one Primary Master server per zone, so FreeIPA and Designate
cannot be Primary Masters at the same time.

We need to decide who is going to have control over the data.


I can see several alternatives:

A) Add support for slave zones to FreeIPA.
It should be relatively easy and I guess doable in the Fedora 23 time frame
if it gets appropriate priority.

For plain/insecure DNS zones it will allow us to use FreeIPA in place of any
other DNS server but the added value will be negligible because FreeIPA acting
as a slave cannot change the data.

The real added value could be the ability of FreeIPA to DNSSEC-sign zones and
do the DNSSEC key management. I believe that we should be able to re-use
machinery we implemented for master zones in FreeIPA so DNSSEC signing for
slave zones should be almost 'for free'.

When implemented, FreeIPA could become the easiest way to secure DNS in
Designate with DNSSEC technology, even in cases where all the data are managed
by the Designate API.

This sounds interesting.  This seems like it would fit in with the typical
OpenStack use case - create a new host, assign it a hostname in a sub-zone.

To be sure we understood each other:
In the scenarios where FreeIPA acts as Slave server, the change is done in
Designate and then a new version of the DNS zone is transferred to FreeIPA.
After that FreeIPA can DNSSEC-sign the zone and serve the signed version to
the clients.



B) We can avoid implementing slave zones by using 'agent':
http://docs.openstack.org/developer/designate/glossary.html

If I'm not mistaken, this is what you implemented last year.

I implemented support in Designate for a FreeIPA backend which used the JSON
HTTPS API to send updates from Designate to FreeIPA.
Designate has deprecated support for backends.

The agent approach is basically putting a mini-DNS-like daemon on each
system which can accept AXFR from Designate.  This agent would then use the
backend code I developed to send the data to FreeIPA.

Wow, that is a lot of complexity. I suspect that something like this is
already implemented in dnssyncd written by Martin Basti:
https://github.com/bastiak/dnssyncd

Anyway, I do not see any value in doing so in this particular scenario.
Designate would be the authoritative source of data (Primary Master), so from
a functional point of view it would be the same as (or worse than) variant (A),
just with more code and more error prone.



C) We can say that combining FreeIPA DNS and Designate does not make sense and
drop what you did last year.

It was already dropped when the backend approach was deprecated.


In the current architecture it really does not add
any value *unless* we add DNSSEC to the mix.


D) Integrate IPA installers with Designate API.
This is somehow complementary to variants A (and C) and would allow us to
automatically add DNS records required by FreeIPA to Designate during FreeIPA
installation and replica management.

I wrote a script (ipaextractor.py) that will extract DNS data from FreeIPA and
store it in Designate.  That would be a good place to start.

Generally FreeIPA should integrate with other DNS server implementations in a
way similar to this:
https://fedorahosted.org/freeipa/ticket/4424
http://www.freeipa.org/page/V4/External_DNS_integration_with_installer

Hopefully the 4.3 timeframe will allow us to work on that.


In my opinion variants A+D are the best way to move forward. What do you think?


If we could change Designate in some way to work better with FreeIPA, what
would you propose?

How much can we change? :-D I liked the original architecture where Designate
just 'proxied' change requests to DNS implementations/backends.


Me too, but we didn't/don't have much say

Re: [Freeipa-devel] RFE - Number of thoughts on FreeIPA

2014-11-24 Thread Rich Megginson

On 11/24/2014 03:01 PM, William B wrote:


Hi,

I have been using FreeIPA for some time now. I have done a lot of
testing for the project, and have a desire to see FreeIPA do well.

As some background, I'm a system admin for a University, who currently
runs an unmanaged instance of 389ds. In the future I would love to move
to FreeIPA, but I want to explain some concerns first.

I have always noticed that FreeIPA is feature rich, but over time I
have noticed that this comes at a cost. Most components don't get as
much testing as they deserve, and just installing and running FreeIPA
for a few hours, one can quickly find rough edges and issues. Run it
for longer, and you quickly find more. As a business we value reliable
and consistent software that doesn't have any surprises when we use
it. Unforeseen issues sour people's taste for things like FreeIPA, as
many people get stuck on their first impressions.

With these features also comes a lack of advanced documentation. Too
often the basics are well covered, but there are lots of simple tasks
that an admin would wish to perform that aren't covered in the
documentation. High quality, and advanced documentation is really key
to my work, as not everyone has as much time as I might to learn the
inside-out of FreeIPA. People want to reference documentation. Again,
one only needs to sit down and use FreeIPA for a few days, to try and
use it in their environment and you will quickly find that many tasks
aren't covered by the documentation. The documentation itself is also
hard to find, or out of date (Such as on fedoraproject.org, which is
the first google hit for me).

FreeIPA also pushes some policies and ideas by default. Consider the
password reset functionality. By default, when the password is reset,
the password is also expired. In our environment, where we have a third
party tool that does password resets on accounts (Password manager),
this breaks our expectation as a user would then have to reset their
password again in the FreeIPA environment. Little things like this
remove flexibility and inhibit people making the swap. These options
shouldn't be hardcoded, they should be defaults that can be tuned. If
someone wants to do stupid things with those options, that is their
choice, but that flexibility will help FreeIPA gain acceptance into
businesses.

Finally, back to our rich features. Not all businesses want all the
features of FreeIPA. For example, we don't want the Dogtag CA, NTP, DNS
or Kerberos components.


Then why do you want to use FreeIPA?  Is it just for 389 LDAP + nice 
command line interface + nice web based UI?



But the default install installs all these
packages even if we don't use them, and it configures services that we
don't necessarily need. Kerberos is especially a risk for us as there
are plenty of unforeseen issues that can arise when you have an AD
kerberos domain on the same network, even if they live in different DNS
namespaces. Contractors install systems into unix networks, unix
systems end up in windows networks. Over time, as process and better
discipline is involved, these little mistakes will be removed, but if
we were to deploy FreeIPA tomorrow, I have no doubt the kerberos
install would interfere with other parts of the network. I would really
like to see the FreeIPA build split into freeipa-servers and
freeipa-servers-core, where the core has only the 389ds, web ui and
kerberos components, and perhaps one day could even be kerberos-free.
This might be taking a step back in some ways, but the
simplicity would be attractive to complex environments wanting to step
up from unmanaged 389ds to something like FreeIPA, but without all the
complexity and overhead of a full install. Over time the extra modules
can be enabled as administrators please in a controlled fashion.
- Yes, these things can be controlled through the use of the server
install command line switches, but if I'm installing and using only 389
and krb from FreeIPA, I shouldn't need to install all of dogtag as well.


These are just my thoughts on the project, and I think it boils down to
a few things:

* RFE to split freeipa packages to core and full.
* Asking for a review and enhancement of documentation.
* Better functional testing of FreeIPA server and tasks to help iron out
 obvious issues before release.

Don't take this as harsh criticism. I think FreeIPA is a great project,
and a great system. I would like to see it improve and be used more
widely.

-- 
Sincerely,


William Brown


Re: [Freeipa-devel] Releasing testing tools as standalone projects

2014-11-04 Thread Rich Megginson

On 11/04/2014 10:30 AM, Petr Viktorin wrote:

On 11/03/2014 04:47 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

Hello!

There's been some interest in releasing pieces of FreeIPA's testing
infrastructure so it can be reused in other projects.
I will soon take the pytest-beakerlib plugin (currently in my patch
0672), and make a stand-alone project out of it. Later I'll extract
the common pieces of the integration testing framework, and release that
independently.


Do we want projects like these to be hosted on Fedorahosted?
That would be the 100% open-source solution.

Or do we want to put it under a freeipa organization on Github, since
we're more likely to get external contributors there?


Why do you think it would get more contributors from github? Because yet
another account isn't required, or the contributor process is perhaps
better understood (via pull requests)?


Both. The community is larger (i.e. contributors are likely to already 
have an account on Github), and the contribution process is nowadays 
more familiar to most people.


+1, from my experience with the openstack community, and with redhat - 
see github.com/redhat-openstack, et al.




And I'm not talking about a proprietary process here: the pull request 
process is publish a Git repo, and nag people to merge from it. It's 
built into Git itself – see git-request-pull(1).
Github makes this easy, and adds a Web UI and some inevitable (but 
optional) proprietary perks. But underneath it's still Git and 
e-mail if you care to use those.


+1




Or both? (Would we want to officially mirror the project to Github
from FH?)


I'd be in favor of fedorahosted because you get a tracker and wiki as
well, and having the repo there would round things out.


Yeah, the tracker is a reason for FH. Github does host git-backed 
wikis using an open-source backend, but it doesn't have an acceptable 
bug tracker.



What's wrong with the github issue tracker?


Re: [Freeipa-devel] Releasing testing tools as standalone projects

2014-11-04 Thread Rich Megginson

On 11/04/2014 12:00 PM, Petr Viktorin wrote:

On 11/04/2014 11:50 AM, Rich Megginson wrote:

On 11/04/2014 10:30 AM, Petr Viktorin wrote:

On 11/03/2014 04:47 PM, Rob Crittenden wrote:

[...]

Or both? (Would we want to officially mirror the project to Github
from FH?)


I'd be in favor of fedorahosted because you get a tracker and wiki as
well, and having the repo there would round things out.


Yeah, the tracker is a reason for FH. Github does host git-backed
wikis using an open-source backend, but it doesn't have an acceptable
bug tracker.


What's wrong with the github issue tracker?


It's stored in a closed format and hosted on a proprietary service; if 
Github goes down or goes evil we lose the issues.



Ah, ok.  That does tilt things in favor of using fedorahosted for trac.  
I believe we can configure fedorahosted trac to use a different git repo 
(github) than git.fedorahosted.




Re: [Freeipa-devel] [PATCH 0064] Create ipa-otp-decrement 389DS plugin

2014-10-08 Thread Rich Megginson

On 10/08/2014 01:45 PM, thierry bordaz wrote:

On 10/08/2014 07:30 PM, Nathaniel McCallum wrote:

On Wed, 2014-10-08 at 17:30 +0200, thierry bordaz wrote:

On 10/07/2014 06:00 PM, Nathaniel McCallum wrote:

Attached is the latest patch. I believe this includes all of our
discussions up until this point. However, a few bits of additional
information are needed.

First, I have renamed the plugin to ipa-otp-counter. I believe all
replay prevention work can land inside this plugin, so the name is
appropriate.

Second, I uncovered a bug in 389 which prevents me from validating the
non-replication request in bepre. This is the reason for the additional
betxnpre callback. If the upstream 389 bug is fixed, we can merge this
check back into bepre. https://fedorahosted.org/389/ticket/47919

Third, I believe we are now handling replication correctly. An error is
never returned. When a replication would cause the counter to decrease,
we remove all counter/watermark related mods from the operation. This
will allow the replication to apply without decrementing the value.
There is also a new bepost method which checks to see if the replication
was discarded (via CSN) while having a higher counter value. If so, we
apply the higher counter value.

For me the code is good. It took me some time to understand the benefit
of removing mods in preop.
In fact I think it is a good idea, as it prevents extra repair ops and
also makes it easier to compute the value to set in the repair mod.

Here is the scenario. Server X receives two quick authentications;
replications A and B are sent to server Y. Before server Y can process
server X's replications, an authentication is performed on server Y;
replication C is sent to server X. The following logic holds true:
   * csnA < csnB < csnC
   * valueA = valueC, valueB > valueC

When server X receives replication C, ipa-otp-counter will strip out all
counter mod operations; applying the update but not the lower value. The
value of replication B is retained. This is the correct behavior.

When server Y processes replications A and B, ipa-otp-counter will
detect that a higher value has a lower CSN and will manually set the
higher value (in bepost). This initiates replication D, which is sent to
server X. Here is the logic:
   * csnA < csnB < csnC < csnD
   * valueA = valueC, valueB = valueD, valueD > valueC

Server X receives replication D. D has the highest CSN. It has the same
value as replication B (server X's current value). Because the values
are the same, ipa-otp-counter will strip all counter mod operations.
This reduces counter write contention which might become a problem in
N-way replication when N > 2.

I think we should rather let the mods go through, so that the full
topology will have valueD (or valueB)/csnD rather than having a set of
servers with valueD/csnB and another set with valueD/csnD.

I think you misunderstand. The value for csnD is only discarded when the
server already has valueB (valueB == valueD). Only the value is
discarded, so csnD is still applied. The full topology will have either
valueB w/ csnD or valueD w/ csnD. Since valueB always equals valueD, by
substitution, all servers have valueD w/ csnD.

Nathaniel
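To make the convergence argument concrete, here is a toy model of the strip rule (pure Python, not the plugin code, and it deliberately ignores CSN bookkeeping): a replicated counter mod that does not raise the local value is discarded, and both replicas end on the highest value.

```python
def merge_replicated_counter(local_value, incoming_value):
    # Toy model of the ipa-otp-counter rule: a replicated mod that would
    # lower (or merely equal) the local counter is stripped, so the
    # counter is monotonically non-decreasing.
    return max(local_value, incoming_value)

def replay_scenario():
    # Server X: two quick authentications -> replications A and B.
    x = 0
    x += 1  # auth -> replication A carries value 1
    x += 1  # auth -> replication B carries value 2
    # Server Y: one authentication before seeing A/B -> replication C.
    y = 0
    y += 1  # auth -> replication C carries value 1
    # X receives C (value 1 <= 2): the counter mod is stripped.
    x = merge_replicated_counter(x, 1)
    # Y processes A then B; B wins and Y emits replication D (value 2).
    y = merge_replicated_counter(y, 1)
    y = merge_replicated_counter(y, 2)
    d = y
    # X receives D (value 2 == 2): stripped again, no extra write.
    x = merge_replicated_counter(x, d)
    return x, y

# Both replicas converge on the highest counter value.
```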



There are several places where CSNs are stored.
One is used to allow the replication protocol to send the appropriate
updates. This part is stored in a dedicated entry: the RUV.
In fact, when the update valueD/csnD is received and applied, the RUV
will be updated with csnD.


Another part is the attribute/attribute values. An attribute value
contains the actual value and the CSN associated to that value.
This CSN is updated by entry_apply_mod_wsi when it decides which value
to keep and which CSN is associated to this value.


In the example above, on server X, the counter attribute has
valueB/csnB. Then it receives the update valueD/csnD and discards this
update because valueD = valueB. That means that on server X we will have
valueB/csnB.


Now if on another server we receive the updates in the reverse order:
valueD/csnD first, then valueB/csnB.

This server will apply valueD/csnD and then discard valueB/csnB.

ValueD and valueB being identical, it is not a big issue. But we will
have some servers with csnD and others with csnB.


The CSN is also the key in the changelog database.



thanks
thierry





Re: [Freeipa-devel] [PATCH 0069] Adds 389DS plugin to enforce UUID token IDs

2014-09-22 Thread Rich Megginson

On 09/22/2014 09:14 AM, Nathaniel McCallum wrote:

On Mon, 2014-09-22 at 10:55 -0400, Simo Sorce wrote:

On Mon, 22 Sep 2014 10:02:01 -0400
Nathaniel McCallum npmccal...@redhat.com wrote:


On Mon, 2014-09-22 at 09:50 -0400, Simo Sorce wrote:

On Mon, 22 Sep 2014 10:34:54 +0200
Martin Kosek mko...@redhat.com wrote:


On 09/22/2014 09:33 AM, thierry bordaz wrote:

Hello Nathaniel,

Just a remark, in is_token if the entry is
objectclass=ipaToken it returns without freeing the
'objectclass' char array.

thanks
thierry

On 09/21/2014 09:07 PM, Nathaniel McCallum wrote:

Users that can rename the token (such as admins) can also
create non-UUID token names.

https://fedorahosted.org/freeipa/ticket/4456

NOTE: this patch is an alternate approach to my patch 0065.
This version has two main advantages compared to 0065:
1. Permissions are more flexible (not tied to the admin group).
2. Enforcement occurs at the DS-level

It should also be noted that this patch does not enforce UUID
randomness, only syntax. Users can still specify a token ID so
long as it is in UUID format.

Nathaniel

I am still thinking we may be overcomplicating it. Why can't we use a
UUID generation mechanism similar to the one we use for SUDO commands,
for example:
# ipa sudocmd-add barbar --all --raw
---
Added Sudo Command barbar
---
   dn:
ipaUniqueID=3a96de14-4232-11e4-9d66-001a4a104ec9,cn=sudocmds,cn=sudo,dc=mkosek-fedora20,dc=test
   sudocmd: barbar
   ipaUniqueID: 3a96de14-4232-11e4-9d66-001a4a104ec9
   objectClass: ipasudocmd
   objectClass: ipaobject

It lets DS generate/rename the object's DN when it finds out that
the ipaUniqueID is set to autogenerate (in baseldap.py). We
could let DS generate the UUID and only add the autogenerate
keyword in the otptoken-add command.

For authorization, we can simply allow users to only add tokens
with autogenerate ID, see my example here:

http://www.redhat.com/archives/freeipa-devel/2014-September/msg00438.html

Admins or special privilege owners would have more generous ACIs
allowing other values than just autogenerate.

IMO, the whole otptoken-add mechanism would then be a lot simpler
and we would not need a special DS plugin (unless we want regular
users to generate their own UUIDs instead of letting the IPA DS
generate them - which I do not think is the case).

Good point Martin.

This is the avenue I first pursued. The problem is that the client has
no way to look up the DN after the entry is added. In the case of
sudocmd-add, the lookup is performed using the sudocmd attribute (see
sudocmd.get_dn()). We have no similar attribute in this case and the
lookup cannot be performed.

Well, in theory we could search with creatorsName set to the user's own
DN and createTimestamp in a range of a few seconds in the past ...
It is not robust if you add multiple tokens at the same time, but would
this be a concern for user-created tokens?

That would fundamentally break the import script in which many tokens
are being created by the same user rapidly.

In a phone call with mkosek, we tossed around the idea of framework
support for a (preexisting?) control which notifies the client of
changes to the operation. This would notify the client that the
ipatokenUniqueID changed from autogenerate to
86725e75-a307-4c10-9de5-3ce15b963552. This would resolve all
ambiguity.


What about the POST READ control?  The purpose of this control is to 
return the contents of the entry (e.g. what would be returned by an LDAP 
search request) after the update has been applied.
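To illustrate the semantics, a toy in-memory model (not python-ldap code; the container DN below is invented for the example):

```python
import uuid

# Toy model of RFC 4527 Post-Read semantics: the add response carries
# the entry as it exists *after* the update, so the client learns the
# server-generated ipatokenUniqueID without a follow-up search.
def add_with_post_read(entry):
    entry = dict(entry)
    if entry.get("ipatokenUniqueID") == "autogenerate":
        # Server replaces the keyword with a generated UUID.
        entry["ipatokenUniqueID"] = str(uuid.uuid4())
    # The final DN is derived from the generated value.
    entry["dn"] = "ipatokenUniqueID=%s,cn=otp,dc=example,dc=test" % (
        entry["ipatokenUniqueID"])
    return entry  # returned to the client alongside the add result

result = add_with_post_read({"ipatokenUniqueID": "autogenerate"})
```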




Nathaniel

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel




Re: [Freeipa-devel] [PATCH 0065] Don't allow users to create tokens with a specified ID

2014-09-22 Thread Rich Megginson

On 09/22/2014 01:28 PM, Martin Kosek wrote:

On 09/22/2014 06:58 PM, Simo Sorce wrote:

On Mon, 22 Sep 2014 17:42:39 +0200
thierry bordaz tbor...@redhat.com wrote:


RFC 4527


Thanks a lot Thierry, this is exactly the control I had in mind last
week. If we could implement it then we could solve any issue where the
RDN needs to be modified by the ADD operation.

Simo.



Ah, so do I understand it correctly that we do not have that control 
implemented in the DS yet?


It was implemented in 1.3.2, which means Fedora 20 and later.

If this is the case, we should file a DS ticket and do some simple 
temporary solution in FreeIPA for now.


I personally would not want to go with a custom DS plugin or other 
complicated route, as it takes development time and may be 
difficult to get rid of later.


Martin



Re: [Freeipa-devel] [PATCH] 0001 User Life Cycle: create containers and scoping DS plugins

2014-08-13 Thread Rich Megginson

On 08/13/2014 08:48 AM, Petr Viktorin wrote:

On 08/08/2014 09:24 AM, thierry bordaz wrote:

Hi,

The attached patch is a first patch related to 'User Life Cycle'
(https://fedorahosted.org/freeipa/ticket/3813)

It creates 'Stage' and 'Delete' containers and configure DS plugin to
scope only 'Active' container or exclude 'Stage'/'Delete'


Hello,

The .ldif files are copied only during initial installation. When 
upgrading to a version with this patch, changes in .ldif files are not 
applied.


So all updates need to be in .update files. For example, for DNA 
plugin configuration you would need something like this in an .update 
file:


dn: cn=Posix IDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config

remove:dnaScope: $SUFFIX
add:dnaScope: cn=accounts,$SUFFIX


.update files, on the other hand, are applied both on installation and 
on upgrade. To avoid duplication you can put whole entries in .update 
and delete them from the .ldif, provided the entries always end up 
being created in a correct order.



Patch submission technicalities:
Please don't add the Reviewed by tag to the commit message, it's 
added when pushing. The other tags are not used in FreeIPA. (What's a 
Flag Day?)


Flag Day is a warning to other developers: "Hey, this change will break 
something in your usual workflow, plan accordingly."


When you send more patches that depend on each other, either attach 
them all to one e-mail, or explicitly say what each patch depends on.






Re: [Freeipa-devel] [PATCH] 471 Fix objectClass casing in LDIF to prevent schema update error

2014-06-27 Thread Rich Megginson

On 06/27/2014 05:41 AM, Martin Kosek wrote:

When a new objectclass was defined as objectclass and not
objectClass, it made the schema updater skip some objectclasses.

https://fedorahosted.org/freeipa/ticket/4405

---

This fixed the 3.3.5 to 4.0 upgrade for me. The root cause is still quite
strange to me though and I am not sure if this is intended. I assume there
may be another issue in the updater or python-ldap.


ack, although the ldap updater code should be changed - attribute types 
should be case insensitive.
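The fix Rich suggests amounts to normalizing attribute type names before comparison, e.g. (illustrative only, not the actual FreeIPA updater code):

```python
# LDAP attribute type names are case-insensitive, so a schema updater
# must normalize before comparing; comparing raw strings makes it
# silently skip entries spelled "objectclass" instead of "objectClass".
def same_attr(a, b):
    return a.lower() == b.lower()
```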






Re: [Freeipa-devel] LDAPI + autobind instead of Kerberos (for named)?

2014-06-19 Thread Rich Megginson

On 06/19/2014 09:16 AM, Alexander Bokovoy wrote:

On Thu, 19 Jun 2014, Martin Kosek wrote:

On 06/19/2014 04:58 PM, Alexander Bokovoy wrote:

On Thu, 19 Jun 2014, Simo Sorce wrote:

On Thu, 2014-06-19 at 17:47 +0300, Alexander Bokovoy wrote:

On Thu, 19 Jun 2014, Simo Sorce wrote:
 I may need to revive my sysaccounts module...

There is one more issue though, and this one really concerns me.
If you need to put there multiple accounts because different servers
have different local accounts, then you open up access to unrelated
services. Because all these uids are shared on all systems.

I think this kills my own proposal of sticking these entries in
cn=sysaccounts.

However we could have something in cn=config maybe ?
So that each server can:
A) use the same name/DN
B) have ids that match exactly the local named account no matter how
many different variants we have
C) no management issues when the server is killed from the
infrastructure as cn=config is local to that server and goes away 
with

it.

What do you think ?
This is what Petr proposed too.

389-ds autobind code searches starting from a base defined in 
cn=config.

IPA defines it to be $SUFFIX. If we move these entries to cn=config,
they will not be found by the code in
ds/ldap/servers/slapd/daemon.c:slapd_bind_local_user(). If we 
change a
search base to something in cn=config, we wouldn't be able to use 
user

accounts for autobind -- something which is possible right now.

I'm not really concerned about user accounts' autobind but this is
actually a behavior change for IPA.


And I guess we can't list multiple bases for now ?
We do not use autobind for anything now though, and I do not see it as
useful for normal users on an IPA server, so I would be ok with the
change, even if it breaks backward compatibility on masters 
themselves.

The only thing we use is root autobind which is handled by a separate
mechanism, I think.

Thus, it suits me.

Petr, can you please make a ticket?


How can you be sure that people do not already use the autobind 
feature? IMO,
it is a bad move to just break it because we have no better idea how 
to handle

named autobind.

autobind is a feature of 389-ds only. Howard Chu (OpenLDAP) considers it
a violation of RFC4513


A violation even when using EXTERNAL bind?


and if we limit who can use it I don't think
anyone will be crying too much.


If we change it to be incompatible, we may break existing _389_ 
customers, even if they are potentially using something that violates 
RFC4513.




I would rather like to see improved autobind capability in 
389-ds-base which
would allow us to do the autobind configuration in cn=config and do 
entries like:


uidnumber=25+gidnumber=25,cn=autobind,cn=config
...
binddn: krbprincipalname=DNS/ipa.server.test,cn=computers...

And thus have a per-server configuration without breaking existent 
functionality.

That would work too, but the main idea is to simply change our (IPA)
defaults rather than implementing something new. If somebody relies on
autobind working for regular users on IPA masters without explicit
authentication,


By explicit authentication do you mean using EXTERNAL bind?


it is already a question of a security breach.




Re: [Freeipa-devel] Move replication topology to the shared tree

2014-06-02 Thread Rich Megginson

On 06/02/2014 02:46 AM, Ludwig Krispenz wrote:
Ticket 4302 is a request for an enhancement: Move replication topology 
to the shared tree



There has been some discussion in comments in the ticket, but I'd like 
to open the discussion to a wider audience to get an agreement on what 
should be implemented, before writing a design spec.


The implementation requires a new IPA plugin for 389 DS and eventually 
an enhancement of the 389 replication plugin (which depends on some 
decisions below). In the following I will use the terms “topology 
plugin” for the new plugin and “replication plugin” for the existing 
389 multimaster replication plugin.



Let's start with the requirements: what should be achieved by this RFE?

In my opinion there are three different levels of features to 
implement with this request


- providing all replication configuration information consistent 
across all deployed servers on all servers, eg to easily visualize the 
replication topology.


- Allowing to do sanity checks on replication configuration, denying 
modifications which would break replication topology or issue warnings.


- Use the information in the shared tree to trigger changes to the 
replication configuration in the corresponding servers, this means to 
allow to completely control replication configuration with 
modifications of entries in the shared tree



The main questions are

1] which information is needed in the shared tree (eg what params in 
the repl config should be modifiable)


2] how is the information organized and stored (layout of the repl 
config information shared tree)


3] how is the interaction of the info in the shared tree and 
configuration in cn=config and the interaction between the topology 
plugin and the replication plugin


I apologize that I have not yet finished reading through all of this 
thread and the comments/replies, so perhaps my following comment is out 
of line:


Why not (selectively) replicate cn=config?  We keep moving more and more 
stuff out of cn=config and into the main tree (dna, automember, etc.), 
to work around the problem that data underneath cn=config is not 
replicated.  We already have customers who have asked for things like 
database configuration, index configuration, suffix configuration, and 
many other configurations, to be replicated. And, for a bonus, if we do 
this right, we might be able to leverage this work to do real schema 
replication.


I will note that openldap syncrepl does allow cn=config to be replicated.




ad 1] to verify the topology, e.g. connectivity, info about all existing 
replication agreements is needed; the replication agreements only 
contain info about the target and the parameters for the connection to 
the target, but not about the origin. If the data have to be evaluated on 
any server, information about the origin has to be added, e.g. 
replicaID, serverID, ...


In addition, if agreement config has to be changed based on the shared 
tree all required parameters need to be present, eg 
replicatedAttributeList, strippedAttrs, replicationEnabled, .


Replication agreements only provide information on connections where 
replication is configured; if connectivity is to be checked, 
independent info about all deployed servers/replicas is needed.


If topology should be validated, do we need params defining 
requirements, e.g. each replica to be connected to 1, 2, 3, ... others, 
type of topology (ring, mesh, star, ...)?



ad 2] the data required are available in the replicationAgreement (and 
eventually replica) entries, but the question is whether there should be 
a 1:1 relationship to entries in the shared tree or a condensed 
representation, and whether there should be a server- or connection-oriented view.


In my opinion a 1:1 relation is straightforward, easy to handle and 
easy to extend (not all the data of a repl agreement needs to be 
present; other attributes are possible). The downside may be a larger 
number of entries, but this is no problem for the directory server or 
replication, and the utilities, e.g. to visualize a topology, will handle 
this.


If the number of entries should be reduced, information on multiple 
replication agreements would have to be stored in one entry, and the 
problem arises how to group data belonging to one agreement. LDAP does 
not provide a simple way to group attribute values in one entry, so 
all the info related to one agreement (origin, target, replicated 
attrs and other repl configuration info) could be stored in a single 
attribute, which would make the attribute about as nicely readable and 
manageable as ACIs.


If topology verification and connectivity checking is an integral part of 
the feature, I think a connection-oriented view is not sufficient - it 
might be incomplete - so a server view is required; the server entry 
would then have the connection information as subentries or as 
attributes.



Ad 3] The replication configuration is stored under cn=config and can 
be modified either by ldap operations or by 

Re: [Freeipa-devel] Move replication topology to the shared tree

2014-06-02 Thread Rich Megginson

On 06/02/2014 08:38 AM, Simo Sorce wrote:

On Mon, 2014-06-02 at 10:08 -0400, Rob Crittenden wrote:

Simo Sorce wrote:

However we may want to be able to mark a topology for 'multiple' sets.
For example we may want to have by default the same topology both for
the main database and for the CA database.

I think we should store them separately and making them the same would
be applied by a tool, but the data would just reflect the connections.

I was thinking the object DN would contain the LDAP database name (or
some rough equivalent), so we would store the IPA connections separate
from the CA connections.

Ok, we can debate about this, maybe we simply have a flag in the
framework that 'links' two topologies and simply replicates any change
from one to the other.

The only reason I had to 'mark' stuff in a single topology in order to
share it is that this way any change is atomic and the 2 topologies
cannot diverge, as the objects are the same, if we think the chance of
divergence is low or that it is not important because the topology
plugin will always prevent disconnected states anyway then we may avoid
it, and let the framework try to keep topologies in sync and just loudly
warn if they somehow get out of sync (which will happen briefly every
time replication of the topology objects happens :).


ad 2] the data required are available in the replicationAgreement (and
eventually replica) entries, but the question is if there should be a
1:1 relationship to entries in the shared tree or a condensed
representation, if there should be a server or connection oriented view.

My answer is no, we need only one object per connection, but config
entries are per direction (and different ones on different servers).

We also need to store the type, MMR, read-only, etc, for future-proofing.

Store where ?


One entry per connection would mirror what we have now in the mapping
tree (which is generally ok). I wonder if this would be limiting with
other agreement types depending on the schema we use.

My idea is that on the connection object you have a set of attributes
that tells you how replication happen.

So normally you'll have:
dn: uuid?
objectclass: ipaReplicationTopologySegment
left: master-X
right: master-Y
direction: both || left-right || right-left (|| none ?)

If we have other special types we change direction accordingly or add
another attribute.


We already have the list of servers, so we need to add only the list of
connections in the topology view. We may need to amend the servers
objects to add additional data in some cases. For example indicate
whether it is fully installed or not (on creation the topology plugin
would complain the server is disconnected until we create the first
segment, but that may actually be a good thing :-)

Not sure I grok the fully installed part. A server isn't added as a
master until it is actually installed, so a prepared master shouldn't
show here.

Uhmm you may be right, if we can make this a non-problem, all the
better.


Next question: How to handle changes directly done in the dse.ldif, if
everything should be done by the topology plugin it would have to verify
and compare the info in cn=config and in the shared tree at every
startup of the directory server, which might be complicated by the fact
that the replication plugin might already be started and repl agreements
are active before the topology plugin is started and could do its work.
(plugin starting order and dependencies need to be checked).

Why do we care which one starts first ?
We can simply change replication agreements at any time, so the fact the
replication topology (and therefore agreements) can change after startup
should not be an issue.

Someone could delete an agreement, or worse, add one we don't know
about. Does that matter?

Someone can do this at any time after startup, so we already need to
handle this, why should it be a problem ?


It shouldn't be a problem for replication, since everything is dynamic.




However I agree we want to avoid churn, so to answer to Ludwig as well I
guess we just want to make sure the topology plugin always starts before
the replication plugin and amends replication agreements accordingly.


+1 - I think this will avoid some problems.




What happens to values in the mapping tree that aren't represented in
our own topology view?

I think we should ignore them if they reference a machine that is not a
recognized master, I guess the main issue here is a case when a master
got deleted and somehow the cn=config entry was not and we end up with
an orphan agreement that the topology plugin initially created but does
not recognize as its own.
I see 2 options here:
1) We ignore it, and let the admin deal with the issue
2) We mark agreements with a special attribute that indicates they have
been generated by the topology plugin, so the plugin can delete any it
does not recognize as currently valid. The only problem here is initial
migration, but that is not a huge issue 

Re: [Freeipa-devel] LDAP schema for DNSSEC keys

2014-05-02 Thread Rich Megginson

On 05/02/2014 12:48 PM, Petr Spacek wrote:

On 1.5.2014 16:10, Rich Megginson wrote:

On 04/30/2014 10:19 AM, Petr Spacek wrote:

Hello list,

the following text summarizes the schema & DIT layout for DNSSEC key 
storage in LDAP.


This is a subset of the full PKCS#11 schema [0]. It stores bare keys with
a few metadata attributes when necessary.

The intention is to make the transition to the full PKCS#11-in-LDAP 
schema [0] as easy as possible. This transition should happen in the 
next minor version of FreeIPA.

In theory, the transition should just be adding a few object classes to
existing objects and populating a few new metadata attributes. Related 
object classes are marked below with (in long-term).

Please comment on it soon. We want to implement it ASAP :-)


DNSSEC key
==
- Asymmetric
- Private key is stored in LDAP as encrypted PKCS#8 blob
- Public key is published in LDAP
- Encrypted with symmetric DNSSEC master key (see below)
- Private key - represented as LDAP object with object classes:
ipaEPrivateKey  [1] # encrypted data
ipaWrappedKey   [2] # pointer to master key, outside scope of pure 
PKCS#11

ipk11PrivateKey [3] (in long-term) # PKCS#11 metadata
- Public key - represented as LDAP object with object classes:
ipaPublicKey[1] # public key data
ipk11PublicKey  [3] (in long-term) # PKCS#11 metadata


Master key
==
- Symmetric
- Stored in LDAP as encrypted blob
- Encrypted with asymmetric replica key (see below)
- 1 replica = 1 blob, n replicas = n blobs encrypted with different 
keys

- A replica uses its own key for master key en/decryption
- Represented as LDAP object with object classes:
ipaESecretKey  [1]
ipk11SecretKey [3] (in long-term)

Replica key
===
- Asymmetric
- Private key is stored on replica's disk only
- Public key for all replicas is stored in LDAP
- Represented as LDAP object with object classes:
ipaPublicKey   [1]
ipk11PublicKey [3] (in long-term)


DIT layout
==
 DNSSEC key material
 ---
 - Container: cn=keys, cn=sec, cn=dns, dc=example
 - Private and public keys are stored as separate objects to 
accommodate all

PKCS#11 metadata.
 - We need to decide about object naming:
  - One obvious option for RDN is to use uniqueID but I don't like 
it. It is

hard to read for humans.
  - Other option is to use uniqueID+PKCS#11 label or other 
attributes to
make it more readable. Can we use multi-valued RDN? If not, why? 
What are

technical reasons behind it?


I would encourage you not to use multi-valued RDNs.  There aren't any
technical reasons - multi-valued RDNs are part of the LDAP standards 
and all
conforming LDAP implementations must support them.  However, they are 
hard to
deal with - you _must_ have some sort of DN class/api on the client 
side to
handle them, and not all clients do - many clients expect to be able 
to just

do dnstr.lower() == dnstr2.lower() or possibly do simple escaping.

As far as being human readable - the whole goal is that humans 
_never_ have to
look at a DN.  If humans have to look at and understand a DN to 
accomplish a

task, then we have failed.
I agree, users should not see them. I want to make life easier for 
administrators and developers *debugging* it.


I'm facing UUID-only logs and database in oVirt for more than a year 
now and I can tell you that it is horrible, horrible, horrible. It is 
a PITA when I have to debug something in oVirt because I have to search 
for UUIDs all the time. I want to scream and jump out of the window 
when I see a single log line with 4 or more different UUIDs... :-)


I sympathize, having to go through this with Designate, parsing up to 4 
UUIDs per debug log line . . .




Has the DogTag team reviewed this proposal?  Their data storage and 
workflows

are similar.
That is a very good point! Nathan, could somebody from the DS team (maybe 
somebody involved in Password Vault) review this "vault without the Vault"?




Thank you!


It is a question whether we like:
 nsUniqID = 0b0b7e53-957d11e3-a51dc0e5-9a05ecda
 nsUniqID = 8ae4190d-957a11e3-a51dc0e5-9a05ecda
more than:
 ipk11Label=meaningful_label+ipk11Private=TRUE
 ipk11Label=meaningful_label+ipk11Private=FALSE

 DNSSEC key metadata
 ---
 - Container (per-zone): cn=keys, idnsname=example.net, cn=dns
 - Key metadata can be linked to key material via DN or ipk11Id.
 - This allows key sharing between zones.
(DNSSEC-metadata will be specified later. That is not important for key
storage.)

 Replica public keys
 ---
 - Container: cn=DNS,cn=<replica FQDN>,cn=masters,cn=ipa,cn=etc,dc=example

  - or its child object like cn=wrappingKey

 Master keys
 ---
 - Container: cn=master, cn=keys, cn=sec, cn=dns, dc=example
 - Single key = single object.
 - We can use ipk11Label or ipk11Id for naming:
 ipk11Label=dnssecMaster1, ipk11Label=dnssecMaster2, etc.


Work flows
==
 Read DNSSEC private key
 ---
  1) read DNSSEC private key from LDAP
  2) ipaWrappedKey objectClass is present - key is encrypted

Re: [Freeipa-devel] LDAP schema for DNSSEC keys

2014-05-01 Thread Rich Megginson

On 04/30/2014 10:19 AM, Petr Spacek wrote:

Hello list,

the following text summarizes the schema & DIT layout for DNSSEC key 
storage in LDAP.


This is a subset of the full PKCS#11 schema [0]. It stores bare keys with 
a few metadata attributes when necessary.


The intention is to make the transition to the full PKCS#11-in-LDAP 
schema [0] as easy as possible. This transition should happen in the 
next minor version of FreeIPA.


In theory, the transition should just be adding a few object classes to 
existing objects and populating a few new metadata attributes. Related 
object classes are marked below with (in long-term).


Please comment on it soon. We want to implement it ASAP :-)


DNSSEC key
==
- Asymmetric
- Private key is stored in LDAP as encrypted PKCS#8 blob
- Public key is published in LDAP
- Encrypted with symmetric DNSSEC master key (see below)
- Private key - represented as LDAP object with object classes:
ipaEPrivateKey  [1] # encrypted data
ipaWrappedKey   [2] # pointer to master key, outside scope of pure 
PKCS#11

ipk11PrivateKey [3] (in long-term) # PKCS#11 metadata
- Public key - represented as LDAP object with object classes:
ipaPublicKey[1] # public key data
ipk11PublicKey  [3] (in long-term) # PKCS#11 metadata


Master key
==
- Symmetric
- Stored in LDAP as encrypted blob
- Encrypted with asymmetric replica key (see below)
- 1 replica = 1 blob, n replicas = n blobs encrypted with different keys
- A replica uses its own key for master key en/decryption
- Represented as LDAP object with object classes:
ipaESecretKey  [1]
ipk11SecretKey [3] (in long-term)

Replica key
===
- Asymmetric
- Private key is stored on replica's disk only
- Public key for all replicas is stored in LDAP
- Represented as LDAP object with object classes:
ipaPublicKey   [1]
ipk11PublicKey [3] (in long-term)


DIT layout
==
 DNSSEC key material
 ---
 - Container: cn=keys, cn=sec, cn=dns, dc=example
 - Private and public keys are stored as separate objects to 
accommodate all PKCS#11 metadata.

 - We need to decide about object naming:
  - One obvious option for RDN is to use uniqueID but I don't like it. 
It is hard to read for humans.
  - Other option is to use uniqueID+PKCS#11 label or other attributes 
to make it more readable. Can we use multi-valued RDN? If not, why? 
What are technical reasons behind it?


I would encourage you not to use multi-valued RDNs.  There aren't any 
technical reasons - multi-valued RDNs are part of the LDAP standards and 
all conforming LDAP implementations must support them.  However, they 
are hard to deal with - you _must_ have some sort of DN class/api on the 
client side to handle them, and not all clients do - many clients expect 
to be able to just do dnstr.lower() == dnstr2.lower() or possibly do 
simple escaping.
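A small illustration of the pitfall, using hypothetical DNs built from the ipk11Label+ipk11Private example (real code should use a proper DN parser that also handles escaping):

```python
# Two equivalent DNs whose multi-valued RDN components are ordered
# differently; LDAP ignores component order within an RDN, but a naive
# case-folded string compare treats them as different entries.
dn1 = "ipk11Label=meaningful_label+ipk11Private=TRUE,cn=keys,dc=example"
dn2 = "ipk11Private=TRUE+ipk11Label=meaningful_label,cn=keys,dc=example"

naive_equal = dn1.lower() == dn2.lower()  # False: the naive check fails

def rdn_key(rdn):
    # Order-insensitive, case-insensitive view of one (possibly
    # multi-valued) RDN; ignores value escaping for brevity.
    return frozenset(ava.lower() for ava in rdn.split("+"))

def dn_equal(a, b):
    return [rdn_key(r) for r in a.split(",")] == \
           [rdn_key(r) for r in b.split(",")]
```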


As far as being human readable - the whole goal is that humans _never_ 
have to look at a DN.  If humans have to look at and understand a DN to 
accomplish a task, then we have failed.


Has the DogTag team reviewed this proposal?  Their data storage and 
workflows are similar.




It is a question whether we like:
 nsUniqID = 0b0b7e53-957d11e3-a51dc0e5-9a05ecda
 nsUniqID = 8ae4190d-957a11e3-a51dc0e5-9a05ecda
more than:
 ipk11Label=meaningful_label+ipk11Private=TRUE
 ipk11Label=meaningful_label+ipk11Private=FALSE

 DNSSEC key metadata
 ---
 - Container (per-zone): cn=keys, idnsname=example.net, cn=dns
 - Key metadata can be linked to key material via DN or ipk11Id.
 - This allows key sharing between zones.
(DNSSEC-metadata will be specified later. That is not important for 
key storage.)


 Replica public keys
 ---
 - Container: cn=DNS,cn=<replica FQDN>,cn=masters,cn=ipa,cn=etc,dc=example

  - or its child object like cn=wrappingKey

 Master keys
 ---
 - Container: cn=master, cn=keys, cn=sec, cn=dns, dc=example
 - Single key = single object.
 - We can use ipk11Label or ipk11Id for naming:
 ipk11Label=dnssecMaster1, ipk11Label=dnssecMaster2, etc.


Work flows
==
 Read DNSSEC private key
 ---
  1) read DNSSEC private key from LDAP
  2) ipaWrappedKey objectClass is present - key is encrypted
  3) read master key denoted by ipaWrappingKey attribute in DNSSEC key 
object

  4) use local replica key to decrypt master key
  5) use decrypted master key to decrypt DNSSEC private key

 Add DNSSEC private key
 --
  1) use local replica key to decrypt master key
  2) encrypt DNSSEC private key with master key
  3) add ipaWrappingKey attribute pointing to master key
  4) store encrypted blob in a new LDAP object
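The two workflows above can be sketched as follows (a toy model: XOR stands in for the real wrap/unwrap operations and all key material is invented):

```python
# Toy two-level wrapping matching the workflows above. XOR-with-key is
# an involution, so applying it twice recovers the plaintext - it is a
# stand-in for real symmetric/asymmetric encryption, nothing more.
def xor_wrap(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

replica_key = b"replica-private-key"      # kept on the replica's disk only
master_key = b"dnssec-master-key"         # shared symmetric master key
dnssec_priv = b"PKCS#8 private key blob"  # per-zone signing key

# What would live in LDAP: master key wrapped by the replica key,
# DNSSEC private key wrapped by the master key.
wrapped_master = xor_wrap(master_key, replica_key)
wrapped_dnssec = xor_wrap(dnssec_priv, master_key)

# Read workflow: unwrap the master key locally, then the DNSSEC key.
recovered_master = xor_wrap(wrapped_master, replica_key)
recovered_dnssec = xor_wrap(wrapped_dnssec, recovered_master)
```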

 Add a replica
 -
 ipa-replica-prepare:
  1) generate a new replica-key pair for the new replica
  2) store key pair to replica-file (don't scream yet :-)
  3) add public key for the new replica to LDAP
  4) fetch master key from LDAP
  5) encrypt master key with new replica public key
  6) store resulting master key blob to LDAP
 ipa-replica-install:

Re: [Freeipa-devel] global account lockout

2014-04-09 Thread Rich Megginson

On 04/09/2014 07:57 AM, Petr Spacek wrote:

On 9.4.2014 15:50, Ludwig Krispenz wrote:


On 04/09/2014 12:31 AM, Simo Sorce wrote:

On Tue, 2014-04-08 at 12:00 +0200, Ludwig Krispenz wrote:

Replication storms. In my opinion the replication of a mod of one or
two attributes in an entry will be faster than the bind itself.
Think about the amplification effect in an environment with 20 
replicas:

1 login attempt -> 20+ replication messages

Now think about what happens bandwidth-wise when a few thousand people
all authenticate at the same time across the infrastructure: you deploy
more servers to scale better and you get *more* traffic, and at some point
servers actually get slower as they are busy with replication-related
operations.

Think what happens if one of these servers is in a satellite office on a
relatively slow link and every morning it receives a flood of
replication data ... that is 99% useless because most of that data is not
relevant in that office.

ok, let's leave it at that; there might be scenarios where it becomes
unacceptable, and as long as we have an acceptable solution we need 
not enforce full replication



  If an attacker knows all the DNs of the entries in a server, the
denial of service could be that it just does a sequence of failed
logins for any user and nobody will be able to log in any more,

This is perfectly true, which is why we do not permanently lock out users
by default, and which is why I personally dislike lockouts. A much better
mechanism to deal with brute-force attacks is throttling, but it is also
somewhat harder to implement, as you need either an async model
to delay answers or you need to tie up threads for the delay time.
Still a far superior measure than replicating status around at all
times.
times.

yes, that could be a good solution, but not trivial
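A minimal sketch of the throttling idea (illustrative only - not 389-ds code; the class and its parameters are made up):

```python
# Per-account throttling as an alternative to lockout: each consecutive
# bind failure doubles the delay before the next attempt is served,
# capped at max_delay, so brute force slows down without ever locking
# the account out permanently.
class BindThrottle:
    def __init__(self, base_delay=0.5, max_delay=30.0):
        self.base = base_delay
        self.max = max_delay
        self.failures = {}  # account -> consecutive failure count

    def delay_for(self, account):
        # Seconds to wait before serving the next bind for this account.
        n = self.failures.get(account, 0)
        return min(self.base * 2 ** (n - 1), self.max) if n else 0.0

    def record(self, account, success):
        # A successful bind clears the counter; a failure increments it.
        if success:
            self.failures.pop(account, None)
        else:
            self.failures[account] = self.failures.get(account, 0) + 1
```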



  replication would help to propagate this to other servers, but not
prevent it. This would also be the case if only the final lockout
state is replicated.
Yes, but the amount of replicated information would be far less - with
our defaults 1/5th on average, as 5 is the number of failed attempts
before the final lockout kicks in. So you save a lot of bandwidth.
before the final lockout kicks in. So you save a lot of bandwidth.


I like the idea of replicating the attributes changed at failed logins
(or reset) only.
I think this is reasonable indeed, the common case is that users 
tend to

get their password right, and if you are under a password guessing
attack you should stop it. The issue is though that sometimes you have
misconfigured services with bad keytabs that will try over and over
again to init, even if the account is locked, or maybe (even worse) 
they

try a number of bad keys, but lower than the failed count, before
getting to the right one (thus resetting the failed count). If they do
this often you can still self-DoS even without a malicious attacker :-/

Something like this is what we have experienced for real, and it caused
us to actually disable replication of all the lockout-related attributes
in the past.
the past.
But also here it can get complicated; we cannot really use
failedlogincount and replicate it. E.g. if it is 2 on each server and
there are parallel login attempts, we would increment it to 3 and
replicate, so we would have 3 on all servers - not what we wanted.
all servers, not what we wanted.


Maybe it is totally off topic, but ... could something like the
Modify-Increment Extension
http://tools.ietf.org/html/rfc4525
help?
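For reference, an RFC 4525 increment is expressed in LDIF like this (the DN and the choice of krbLoginFailedCount as the counter attribute are assumptions for illustration):

```ldif
dn: uid=jdoe,cn=users,cn=accounts,dc=example,dc=test
changetype: modify
increment: krbLoginFailedCount
krbLoginFailedCount: 1
```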

(I don't know how replication works; this would help only if it
replicates operations and not only the results of modifications.)
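For reference, RFC 4525 expresses an increment as an ordinary modify operation with an "increment" change type. A hypothetical LDIF example (the attribute name and DN are illustrative, and the server must advertise support for the feature in supportedFeatures):

```
dn: uid=jdoe,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
increment: krbLoginFailedCount
krbLoginFailedCount: 1
```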


Replication does replicate the operation.  It essentially just
forwards the modify operation received by the initial server, along
with some replication metadata.




I meant - it would replicate a command to increment the value by 1
instead of replicating the new value.


The problem is that servers would quickly get out of sync.  I'm not sure 
how we would ensure eventual convergence.




Petr^2 Spacek

We could replicate changes to lastFailedAuth and, when receiving an
update for this attribute, locally increase the failed count. But it
would also have to be used for resets (deleting lastFailedAuth), and
there could also be race conditions; maybe there are other local attrs
needed.

And the bad news: I claimed that the replication protocol ensures that
the last change wins except for bugs, and it looks like we have one bug
for single-valued attributes in some scenarios. I have to repeat the
test to double check.
The update resolution code for single-valued attrs is a nightmare; Rich
and I have said several times that we need to rewrite it :-(

PS: Martin, if you are looking for subjects for a thesis, maybe a
theoretical model for replication update resolution and what history is
required could be a challenge.


Simo.


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel



Re: [Freeipa-devel] global account lockout

2014-04-09 Thread Rich Megginson

On 04/09/2014 08:09 AM, Simo Sorce wrote:

On Wed, 2014-04-09 at 15:50 +0200, Ludwig Krispenz wrote:

Something like this is what we have experienced for real and cause

us to

actually disable replication of all the lockout related attributes

in

the past.

But also here it can get complicated, we cannot really use
failedlogincount and replicate it, eg if it is 2 on each server and
there are parallel login attempts, we would increment it to 3 and
replicate, so we would have 3 on all servers, not what we wanted.
We could replicate changes to lastfailedauth and when receiving an
update for this attribute locally increase failedcount, but it would
also have to be used for resets (deleting lastFailedAuth), but there
could also be race conditions, maybe there are other local attrs
needed.

Yes, the current mechanism is deficient in many ways. For example the
failedcount/lastfailedauth attributes are really suboptimal; a better
mechanism would be to have a failedAuths (note the plural) multi-valued
attribute and just append dates there (perhaps pre/postfixed with the
replica id to avoid any possible conflict). This way if 2 servers are
being attacked simultaneously they still should replicate their own
failure, and each server can see that 5 dates are present in the last X
minutes and quickly lock the user; no failedcount would be necessary
and no races incrementing it would occur.
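Simo's failedAuths idea can be sketched like this (the attribute name, value format, and thresholds are hypothetical illustrations, not the actual 389-ds schema):

```python
from datetime import datetime, timedelta

# Hypothetical sketch: a single multi-valued attribute holds
# "replicaid:ISO-timestamp" values, one appended per failed auth.
# Each server locks the account once enough recent dates are present,
# so no counter needs to be incremented (and raced on) anywhere.
MAX_FAILURES = 5
WINDOW = timedelta(minutes=15)

def is_locked(failed_auths, now):
    """failed_auths: list of 'replicaid:ISO-timestamp' strings."""
    recent = [v for v in failed_auths
              if now - datetime.fromisoformat(v.split(':', 1)[1]) <= WINDOW]
    return len(recent) >= MAX_FAILURES
```

Appending a value is a single MOD_ADD of one value, which replicates cleanly without the read-modify-write race of incrementing a shared counter.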


This is an interesting idea.  Please file a ticket in the 389 trac 
explaining this.




The only issue would be in cleaning up the attribute to not let it grow
too much, but that could be accomplished by simply not adding any more
failed dates once the account is locked (only logging locally that
someone tried to log in on a locked account) and deleting the attribute
completely when the account is unlocked. This again would reduce the
attributes necessary for handling locking down to 1 from the current 3
(lastSuccessAuth/lastFailedAuth/failedCounter); however it still does
nothing to solve replication issues and has other replication race
problems (not sure what happens if one server tries to store a new
failed auth date while another is deleting the old values at the same
time. Perhaps deleting by value is safe enough and won't cause issues;
deleting the whole attribute may cause issues instead).


Handling of simultaneous updates of multi-valued attributes and update 
resolution works well.





And the bad news: I claimed that the replication protocol ensures that
the last change wins except for bugs, and looks like we have one bug
for single valued attributes in some scenarios. I have to repeat the
test to double check.
The update resolution code for single valued attrs is a nightmare,
Rich and I several times said we need to rewrite it :-(

Is there a ticket that tracks this and explains the issue(s) ?


https://fedorahosted.org/389/ticket/47442



Simo.





Re: [Freeipa-devel] global account lockout

2014-04-07 Thread Rich Megginson

On 04/07/2014 10:13 AM, Simo Sorce wrote:

On Mon, 2014-04-07 at 12:10 -0400, Simo Sorce wrote:

On Mon, 2014-04-07 at 12:01 -0400, Simo Sorce wrote:

On Mon, 2014-04-07 at 11:26 -0400, Rob Crittenden wrote:

Ludwig Krispenz wrote:

Hi,

please review the following feature design. It introduces a global
account lockout, while trying to keep the replication traffic minimal.
In my opinion for a real global account lockout the basic lockout
attributes have to be replicated otherwise the benefit is minimal: an
attacker could perform (maxFailedcount -1) login attempts on every
server before the global lockout is set. But the design page describes
how it could be done if it should be implemented - maybe the side effect
that accounts could then be unlocked on any replica has its own benefit.

http://www.freeipa.org/page/V4/Replicated_lockout

One weakness with this is there is still a window for extra password
attempts if one is clever, (m * (f-1))+1 to be exact, where m is the
number of masters and f is the # of allowed failed logins.
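Rob's formula above works out as follows with illustrative numbers:

```python
# Worst-case attempts available to a clever attacker before the lockout
# becomes effective everywhere: f-1 failures on each of m masters
# without tripping any local lockout, plus one final attempt.
# The values used below are illustrative only.
def max_attempts(masters, max_failed):
    return masters * (max_failed - 1) + 1

print(max_attempts(4, 5))  # 4 masters, 5 allowed failures -> 17 attempts
```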

Yes, but that is a problem that cannot be solved w/o full replication at
every authentication attempt.

What we tried to achieve is a middle ground to at least ease
administration and still lock em up earlier.

Let me add that we could have yet another closer step by finding a way
to replicate only failed attempts and not successful attempts in some
case. Assuming a setup where most people do not fail to enter their
password it would make for a decent compromise.

That could be achieved by not storing lastsuccessful auth except when
that is needed to clear failed logon attempts (ie when the failed logon
counter is > 0)

If we did that then we would not need a new attribute actually, as
failed logins would always be replicated.
However it would mean that last Successful auth would never be accurate
on any server.

Or perhaps we could have a local last successful auth and a global one
by adding one new attribute, and keeping masking only the successful
auth.

The main issue about all these possibilities is how do we present them ?
And how do we make a good default ?

I think a good default is defined by these 2 characteristics:
1. lockouts can be dealt with on any replica w/o having the admin hunt
down where a user is locked.
2. at least successful authentications will not cause replication storms

If we can afford to cause replications on failed authentication by
default, then we could open up replication for failedauth and
failedcount attributes but still bar the successful auth attribute.
Unlock would simply consist in forcibly setting failed count to 0 (which
is replicated so it would unlock all servers).
This would work w/o introducing new attributes and only with minimal
logic changes in the KDC/pwd-extop plugins I think.

Sigh, replying again to myself.
note that the main issue with replicating failed accounts is that you
can cause replication storms by simply probing all user accounts with
failed binds or AS requests. In some environments that may cause DoSs
(if you have slow/high latency links on which replication runs for
example).
So I think we should always give the option to turn off failed
date/count attributes replication, which in turn would mean we still
require a new attribute to replicate for when a user is finally locked
out on one of the servers or we fail requirement 1.

Simo.

Another problem with keeping track of bind attributes in a replicated 
environment is the sheer size of the replication metadata.  Replicating 
1 failed bind attempt might be 100kbytes or more of data to all servers.  
We should have a way to say perhaps "only keep the last N CSNs" or maybe 
even "don't keep CSNs" for these attributes.





Re: [Freeipa-devel] global account lockout

2014-04-07 Thread Rich Megginson

On 04/07/2014 12:31 PM, Simo Sorce wrote:

On Mon, 2014-04-07 at 10:22 -0600, Rich Megginson wrote:

On 04/07/2014 10:13 AM, Simo Sorce wrote:

On Mon, 2014-04-07 at 12:10 -0400, Simo Sorce wrote:

On Mon, 2014-04-07 at 12:01 -0400, Simo Sorce wrote:

On Mon, 2014-04-07 at 11:26 -0400, Rob Crittenden wrote:

Ludwig Krispenz wrote:

Hi,

please review the following feature design. It introduces a global
account lockout, while trying to keep the replication traffic minimal.
In my opinion for a real global account lockout the basic lockout
attributes have to be replicated otherwise the benefit is minimal: an
attacker could perform (maxFailedcount -1) login attempts on every
server before the global lockout is set. But the design page describes
how it could be done if it should be implemented - maybe the side effect
that accounts could then be unlocked on any replica has its own benefit.

http://www.freeipa.org/page/V4/Replicated_lockout

One weakness with this is there is still a window for extra password
attempts if one is clever, (m * (f-1))+1 to be exact, where m is the
number of masters and f is the # of allowed failed logins.

Yes, but that is a problem that cannot be solved w/o full replication at
every authentication attempt.

What we tried to achieve is a middle ground to at least ease
administration and still lock em up earlier.

Let me add that we could have yet another closer step by finding a way
to replicate only failed attempts and not successful attempts in some
case. Assuming a setup where most people do not fail to enter their
password it would make for a decent compromise.

That could be achieved by not storing lastsuccessful auth except when
that is needed to clear failed logon attempts (ie when the failed logon
counter is > 0)

If we did that then we would not need a new attribute actually, as
failed logins would always be replicated.
However it would mean that last Successful auth would never be accurate
on any server.

Or perhaps we could have a local last successful auth and a global one
by adding one new attribute, and keeping masking only the successful
auth.

The main issue about all these possibilities is how do we present them ?
And how do we make a good default ?

I think a good default is defined by these 2 characteristics:
1. lockouts can be dealt with on any replica w/o having the admin hunt
down where a user is locked.
2. at least successful authentications will not cause replication storms

If we can afford to cause replications on failed authentication by
default, then we could open up replication for failedauth and
failedcount attributes but still bar the successful auth attribute.
Unlock would simply consist in forcibly setting failed count to 0 (which
is replicated so it would unlock all servers).
This would work w/o introducing new attributes and only with minimal
logic changes in the KDC/pwd-extop plugins I think.

Sigh, replying again to myself.
note that the main issue with replicating failed accounts is that you
can cause replication storms by simply probing all user accounts with
failed binds or AS requests. In some environments that may cause DoSs
(if you have slow/high latency links on which replication runs for
example).
So I think we should always give the option to turn off failed
date/count attributes replication, which in turn would mean we still
require a new attribute to replicate for when a user is finally locked
out on one of the servers or we fail requirement 1.

Simo.


Another problem with keeping track of bind attributes in a replicated
environment is the sheer size of the replication metadata.  Replicating
1 failed bind attempt might be 100kbytes or more data to all servers.
We should have a way to perhaps say only keep last N CSNs or maybe
even don't keep CSNs for these attributes.

Yes, but this looks a lot like a general replication improvement (would
also be cool to have better conflict resolution),


Ludwig has made some improvements with how 389 stores replication 
metadata for conflict resolution, but in this case it's not nearly enough.



not lockout
specific.

Simo.





Re: [Freeipa-devel] global account lockout

2014-04-07 Thread Rich Megginson

On 04/07/2014 01:00 PM, Simo Sorce wrote:

On Mon, 2014-04-07 at 14:47 -0400, Dmitri Pal wrote:

On 04/07/2014 02:31 PM, Simo Sorce wrote:

On Mon, 2014-04-07 at 10:22 -0600, Rich Megginson wrote:

On 04/07/2014 10:13 AM, Simo Sorce wrote:

On Mon, 2014-04-07 at 12:10 -0400, Simo Sorce wrote:

On Mon, 2014-04-07 at 12:01 -0400, Simo Sorce wrote:

On Mon, 2014-04-07 at 11:26 -0400, Rob Crittenden wrote:

Ludwig Krispenz wrote:

Hi,

please review the following feature design. It introduces a global
account lockout, while trying to keep the replication traffic minimal.
In my opinion for a real global account lockout the basic lockout
attributes have to be replicated otherwise the benefit is minimal: an
attacker could perform (maxFailedcount -1) login attempts on every
server before the global lockout is set. But the design page describes
how it could be done if it should be implemented - maybe the side effect
that accounts could then be unlocked on any replica has its own benefit.

http://www.freeipa.org/page/V4/Replicated_lockout

One weakness with this is there is still a window for extra password
attempts if one is clever, (m * (f-1))+1 to be exact, where m is the
number of masters and f is the # of allowed failed logins.

Yes, but that is a problem that cannot be solved w/o full replication at
every authentication attempt.

What we tried to achieve is a middle ground to at least ease
administration and still lock em up earlier.

Let me add that we could have yet another closer step by finding a way
to replicate only failed attempts and not successful attempts in some
case. Assuming a setup where most people do not fail to enter their
password it would make for a decent compromise.

That could be achieved by not storing lastsuccessful auth except when
that is needed to clear failed logon attempts (ie when the failed logon
counter is > 0)

If we did that then we would not need a new attribute actually, as
failed logins would always be replicated.
However it would mean that last Successful auth would never be accurate
on any server.

Or perhaps we could have a local last successful auth and a global one
by adding one new attribute, and keeping masking only the successful
auth.

The main issue about all these possibilities is how do we present them ?
And how do we make a good default ?

I think a good default is defined by these 2 characteristics:
1. lockouts can be dealt with on any replica w/o having the admin hunt
down where a user is locked.
2. at least successful authentications will not cause replication storms

If we can afford to cause replications on failed authentication by
default, then we could open up replication for failedauth and
failedcount attributes but still bar the successful auth attribute.
Unlock would simply consist in forcibly setting failed count to 0 (which
is replicated so it would unlock all servers).
This would work w/o introducing new attributes and only with minimal
logic changes in the KDC/pwd-extop plugins I think.

Sigh, replying again to myself.
note that the main issue with replicating failed accounts is that you
can cause replication storms by simply probing all user accounts with
failed binds or AS requests. In some environments that may cause DoSs
(if you have slow/high latency links on which replication runs for
example).
So I think we should always give the option to turn off failed
date/count attributes replication, which in turn would mean we still
require a new attribute to replicate for when a user is finally locked
out on one of the servers or we fail requirement 1.

Simo.


Another problem with keeping track of bind attributes in a replicated
environment is the sheer size of the replication metadata.  Replicating
1 failed bind attempt might be 100kbytes or more data to all servers.
We should have a way to perhaps say only keep last N CSNs or maybe
even don't keep CSNs for these attributes.

Yes, but this looks a lot like a general replication improvement (would
also be cool to have better conflict resolution), not lockout
specific.

Simo.


My only comment is actually about conflict resolution. What would happen
if I attack (flood) two replicas at the same time, beating the
replication? It would mean both servers would generate the global
attributes and try to replicate to each other. If the replicas are on
the edges of the topology it might take some time, and it might even happen
that the admin has already unlocked the account while the old lock is still
trying to propagate. IMO we need collision resolution logic taken care
of first. I suspect that any real attack would lead to collisions, and if
it leaves the deployment unstable even after the attack ended, we lost.

Yes, this is a valid concern. We need a last-wins conflict resolution
strategy for some cases.


I'm not sure what you mean.  The 389 conflict resolution strategy is 
last-wins already.  Or do you mean for some cases, but not all cases?




Simo.




Re: [Freeipa-devel] LDAP Queue Length Control for better LDAP client performance?

2014-03-13 Thread Rich Megginson

On 03/13/2014 03:08 AM, Petr Spacek wrote:

Hello list,

my journey to the IETF wonderland revealed one more RFC draft:

LDAP Queue Length Control
http://tools.ietf.org/html/draft-hollstein-queuelength-control-01

I have no idea if this can really improve LDAP client performance or
not, but IMHO it is worth exploring.


Maybe only an IPA replica with thousands of SSSD clients could benefit 
from it, I don't know.


I have finally run out of notes from yesterday, so you don't need to
worry about more RFC drafts - today :-)




389 allows you to turn on and off TCP Nagle (TCP_NODELAY) and TCP_CORK.  
Someone could try running different workloads with different settings 
for these to see if it makes a difference.
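As a minimal illustration (389's own knobs are server configuration options, not shown here; TCP_CORK is Linux-only), this is how a client toggles Nagle's algorithm from Python:

```python
import socket

# Minimal illustration: disable Nagle's algorithm (TCP_NODELAY) on a
# client socket. Whether this actually helps depends entirely on the
# workload, as suggested above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```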




Re: [Freeipa-devel] Is there RPC documentation?

2014-02-27 Thread Rich Megginson

On 02/27/2014 06:19 AM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 03:48 PM, Simo Sorce wrote:

On Wed, 2014-02-26 at 15:28 -0700, Rich Megginson wrote:

On 02/26/2014 03:22 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 02:19 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 08:53 AM, Petr Viktorin wrote:

On 02/26/2014 04:45 PM, Rich Megginson wrote:

I'm working on adding support for freeipa DNS to openstack designate
(DNSaaS).  I am assuming I need to use RPC (XML? JSON? REST?) to
communicate with freeipa.  Is there documentation about how to
construct and send RPC messages?

The JSON-RPC and XML-RPC API is still not officially supported
(read: documented), though it's extremely unlikely to change.
If you need an example, run any ipa command with -vv; this will print
out the request and response.
API.txt in the source tree lists all the commands and params.
This blog post still applies (but be sure to read the update about
--cacert):
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/ 








Ok.  Next question is - how does one do the equivalent of the curl
command in python code?
Here is a pretty stripped-down way to add a user. Other commands are
similar, you just may care more about the output:

from ipalib import api
from ipalib import errors

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()

try:
    api.Command['user_add'](u'testuser',
                            givenname=u'Test', sn=u'User',
                            loginshell=u'/bin/sh')
except errors.DuplicateEntry:
    print "user already exists"
else:
    print "User added"


How would one do this from outside of ipa?  If ipalib is not
available?

You'd need to go to either /ipa/xml or /ipa/json (depending on what
protocol you want to use) and issue one request there. This requires
Kerberos authentication. The response will include a cookie which you
should either ignore or store safely (like in the kernel keyring).
Using the cookie will significantly improve performance.

This is for the ipa dns backend for designate.  I'm assuming I will
either be using a keytab, or perhaps the new proxy?

At any rate, I have to do everything in python - including the kinit
with the keytab.

Look at Rob's daemon, but you should *not* do a kinit; you should just use
gssapi (see python-kerberos) and do a gss_init_sec_context there. If the
environment is configured (KRB5_KTNAME set correctly) then gssapi will
automatically kinit for you under the hood.

I guess I'm really looking for specifics - I've seen recommendations to
use the python libraries requests and json.  I don't know if
requests supports negotiate/kerberos.  If not, is there a recommended
library to use?  As this particular project will be part of openstack,
perhaps there is a more openstack-y library, or even something
built-in to openstack (oslo?).  I think amqp supports kerberos, so
perhaps there is some oslo.messaging thing that will do the http +
kerberos stuff.
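For what it's worth, the JSON-RPC message itself can be built with nothing but the stdlib; only the transport needs a negotiate-capable HTTP library (an assumption here, not something this thread settled on). A sketch of just the payload construction:

```python
import json

# Illustrative FreeIPA JSON-RPC request body; the method/params layout
# follows what `ipa -vv` prints. Actually sending it would mean POSTing
# to https://server/ipa/json with Kerberos/negotiate authentication,
# which is the missing piece being discussed here.
def build_request(method, args, options):
    return json.dumps({'method': method, 'params': [args, options], 'id': 0})

payload = build_request('user_show', ['testuser'], {'all': True})
```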

Afaik there is nothing that does kerberos in openstack, you'll have to
introduce all that stuff.


Egads - implementing openstack-wide kerberos client libraries in order
to add an ipa dns backend to designate.

Rob, need any help with your proxy?


Well, something occurred to me this morning. You need SSL on top of 
this too, which means you need the IPA CA. The easiest way to get that 
is to enroll the designate server as an IPA client. This pulls in the 
freeipa-python package which gives you ipalib, so no reinventing the 
wheel required.


I'm trying to use python-kerberos to do auth with a keytab 
(KRB5_KTNAME), without first doing a kinit from the command line. It is 
not working.


Does anyone know how I can do client side kerberos auth with a keytab in 
python without first doing a kinit?




rob




Re: [Freeipa-devel] DNSSEC design page

2014-02-27 Thread Rich Megginson

On 02/27/2014 09:37 AM, Petr Spacek wrote:

On 27.2.2014 17:24, Ludwig Krispenz wrote:


On 02/27/2014 03:56 PM, Jan Cholasta wrote:

On 27.2.2014 15:23, Ludwig Krispenz wrote:


On 02/27/2014 02:14 PM, Jan Cholasta wrote:

On 18.2.2014 17:19, Martin Kosek wrote:

On 02/18/2014 04:38 PM, Jan Cholasta wrote:

On 18.2.2014 16:35, Petr Spacek wrote:

On 18.2.2014 16:31, Jan Cholasta wrote:


2] low level replacement for eg the sqlite3 database in 
softhsm.

That's the impression I sometimes get of what is wanted.
SoftHsm has one component Softdatabase with an API, which more or less
passes sets of attributes (attributes defined by PKCS#11) and then
stores them as records in sql where each record has a keytype and an
opaque blob of data.
If that is what is wanted, the decision would be how fine-grained the
pkcs objects/attribute types would have to be mapped to ldap: one ldap
attribute for each possible attribute type ?


One-to-one mapping of attributes from PKCS#11 to LDAP would be
the most
straightforward way of doing this, but I think we can do some
optimization for our needs. For example, like you said 
above, we

can
use
a single attribute containing PKCS#8 encoded private key rather
than
using one attribute per private key component.

I don't think we need an LDAP attribute for every possible 
PKCS#11
attribute, ATM it would be sufficient to have just these 
attributes

necessary to represent private key, public key and certificate
objects.

So, I would say it should be something between high-level and
low-level.


There won't be a separate public key, it's represented by the
certificate.


I'm not sure if this is the case for DNSSEC.


Honzo,

we really need the design page with some goal statement, 
high-level

overview etc. There is still some confusion, probably from fact
that we
want to use the same module for cert distribution and at the 
same time

for DNSSEC key storage.



It's on my TODO list, I'll try to get it out ASAP.



+1, please do. We clearly need some design to start with.

Martin



I already posted the link in other thread, but here it is anyway:
http://www.freeipa.org/page/V3/PKCS11_in_LDAP.

Some more comments on the schema:

I think I may have been too quick to dismiss RFC 4523. There is
CKA_CERTIFICATE_CATEGORY, which can have the values "unspecified",
"token user", "authority" and "other entity". We could map entries with
object class pkiUser to certificate objects with
CKA_CERTIFICATE_CATEGORY "token user" and entries with object class
pkiCA to certificate objects with CKA_CERTIFICATE_CATEGORY "authority".

There are no object classes in RFC 4523 for "unspecified" and "other
entity", but we will not be storing any certificates using PKCS#11
anyway, so I think it's OK.

not sure I understand what exactly you want here. If we don't store
certificates using the pkcs#11 schema we don't need to define them, but
on the other hand you talk about the usage of CKA_CERTIFICATE_CATEGORY.

Do you mean to have a pkcs11 certificate object with
CKA_CERTIFICATE_CATEGORY and allow the rfc4523 attributes
userCertificate and cACertificate to store them ?


Hopefully an example will better illustrate what I mean. We could map
PKCS#11 objects like this:

CKA_CLASS: CKO_CERTIFICATE
CKA_CERTIFICATE_TYPE: CKC_X_509
CKA_CERTIFICATE_CATEGORY: 1
CKA_VALUE: cert
other attrs

to LDAP entries like this:

dn: pkcs11uniqueId=id,suffix
objectClass: pkcs11storageObject
objectClass: pkiUser
pkcs11uniqueId: id
userCertificate;binary: cert
other attrs

and PKCS#11 object like this:

CKA_CLASS: CKO_CERTIFICATE
CKA_CERTIFICATE_TYPE: CKC_X_509
CKA_CERTIFICATE_CATEGORY: 2
CKA_VALUE: cert
other attrs

to LDAP entries like this:

dn: pkcs11uniqueId=id,suffix
objectClass: pkcs11storageObject
objectClass: pkiCA
pkcs11uniqueId: id
caCertificate;binary: cert
other attrs

In other words, the value of CKA_CERTIFICATE_CATEGORY is implied from
objectClass: CKA_CERTIFICATE_CATEGORY: 1 maps to objectClass: pkiUser,
and CKA_CERTIFICATE_CATEGORY: 2 maps to objectClass: pkiCA.

so you want to directly use the pkiUser|CA objectclass, that would be ok
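The category/objectClass correspondence above can be sketched as a small lookup (a hypothetical helper for illustration, not actual FreeIPA code):

```python
# Map CKA_CERTIFICATE_CATEGORY to the RFC 4523 object class and
# certificate attribute, per the proposal above: 1 ("token user") ->
# pkiUser/userCertificate, 2 ("authority") -> pkiCA/caCertificate.
# The entry layout mirrors the LDIF examples in the discussion.
CATEGORY_MAP = {
    1: ('pkiUser', 'userCertificate;binary'),
    2: ('pkiCA', 'caCertificate;binary'),
}

def cert_entry(unique_id, category, der_cert):
    objectclass, cert_attr = CATEGORY_MAP[category]
    return {
        'dn': 'pkcs11uniqueId=%s,SUFFIX' % unique_id,
        'objectClass': ['pkcs11storageObject', objectclass],
        cert_attr: der_cert,
    }
```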




Also the above got me thinking, is there any standard LDAP schema
for private keys? If so, can we use it?

I didn't find any; the only key schema in LDAP I found is a definition
of sshPublicKey for openssh.


And even this schema is for public keys only :-) OK, nevermind then.



I'm going to store NSS trust objects along with CA certificates, so
I'm going to need an object class for that. You can find the details
on CKO_NSS_TRUST at
http://p11-glue.freedesktop.org/doc/storing-trust-policy/storing-trust-existing.html. 



so this is an nss extension to pkcs11, not in the standard? If we add
trust objects, should the naming reflect this, like pkcs11nssattr or
pkcs11extattr ?



If we store multiple related PKCS#11 objects in a single LDAP entry,
there is going to be some redundancy. For example, public key value
can be 

Re: [Freeipa-devel] DNSSEC design page

2014-02-27 Thread Rich Megginson

On 02/27/2014 01:10 PM, Petr Spacek wrote:

On 27.2.2014 17:55, Ludwig Krispenz wrote:


On 02/27/2014 05:46 PM, Rich Megginson wrote:

On 02/27/2014 09:37 AM, Petr Spacek wrote:

On 27.2.2014 17:24, Ludwig Krispenz wrote:


On 02/27/2014 03:56 PM, Jan Cholasta wrote:

On 27.2.2014 15:23, Ludwig Krispenz wrote:


On 02/27/2014 02:14 PM, Jan Cholasta wrote:

On 18.2.2014 17:19, Martin Kosek wrote:

On 02/18/2014 04:38 PM, Jan Cholasta wrote:

On 18.2.2014 16:35, Petr Spacek wrote:

On 18.2.2014 16:31, Jan Cholasta wrote:


2] low level replacement for eg the sqlite3 database in 
softhsm.

That's what I sometimes get the impression what is wanted.
SoftHsm has
one component Softdatabase with an API, which more or less
passes sets
of attributes (attributes defined by PKCS#11) and then 
stores

it as
records in sql where each record has a keytype and 
opaque blob of

data.
If that is what is wanted the decision would be how 
fingrained the

pkcs
objects/attribute types would have to be mapped to ldap: 
one ldap

attribute for each possible attribute type ?


One-to-one mapping of attributes from PKCS#11 to LDAP 
would be

the most
straightforward way of doing this, but I think we can do 
some
optimization for our needs. For example, like you said 
above, we

can
use
a single attribute containing PKCS#8 encoded private key 
rather

than
using one attribute per private key component.

I don't think we need an LDAP attribute for every 
possible PKCS#11
attribute, ATM it would be sufficient to have just these 
attributes
necessary to represent private key, public key and 
certificate

objects.

So, I would say it should be something between high-level 
and

low-level.


There won't be a separate public key, it's represented by the
certificate.


I'm not sure if this is the case for DNSSEC.


Honzo,

we really need the design page with some goal statement, 
high-level

overview etc. There is still some confusion, probably from fact
that we
want to use the same module for cert distribution and at the 
same time

for DNSSEC key storage.



It's on my TODO list, I'll try to get it out ASAP.



+1, please do. We clearly need some design to start with.

Martin



I already posted the link in other thread, but here it is anyway:
http://www.freeipa.org/page/V3/PKCS11_in_LDAP.

Some more comments on the schema:

I think I may have been too quick to dismiss RFC 4523. There is
CKA_CERTIFICATE_CATEGORY, which can have the values "unspecified",
"token user", "authority" and "other entity". We could map entries with
object class pkiUser to certificate objects with
CKA_CERTIFICATE_CATEGORY "token user" and entries with object class
pkiCA to certificate objects with CKA_CERTIFICATE_CATEGORY "authority".
There are no object classes in RFC 4523 for "unspecified" and "other
entity", but we will not be storing any certificates using PKCS#11
anyway, so I think it's OK.

not sure I understand what exactly you want here. If we don't store
certificates using the pkcs#11 schema we don't need to define them, but
on the other hand you talk about the usage of CKA_CERTIFICATE_CATEGORY.

Do you mean to have a pkcs11 certificate object with
CKA_CERTIFICATE_CATEGORY and allow the rfc4523 attributes
userCertificate and cACertificate to store them ?


Hopefully an example will better illustrate what I mean. We could 
map

PKCS#11 objects like this:

CKA_CLASS: CKO_CERTIFICATE
CKA_CERTIFICATE_TYPE: CKC_X_509
CKA_CERTIFICATE_CATEGORY: 1
CKA_VALUE: cert
other attrs

to LDAP entries like this:

dn: pkcs11uniqueId=id,suffix
objectClass: pkcs11storageObject
objectClass: pkiUser
pkcs11uniqueId: id
userCertificate;binary: cert
other attrs

and PKCS#11 object like this:

CKA_CLASS: CKO_CERTIFICATE
CKA_CERTIFICATE_TYPE: CKC_X_509
CKA_CERTIFICATE_CATEGORY: 2
CKA_VALUE: cert
other attrs

to LDAP entries like this:

dn: pkcs11uniqueId=id,suffix
objectClass: pkcs11storageObject
objectClass: pkiCA
pkcs11uniqueId: id
caCertificate;binary: cert
other attrs

In other words, the value of CKA_CERTIFICATE_CATEGORY is implied from
objectClass: CKA_CERTIFICATE_CATEGORY: 1 maps to objectClass: pkiUser,
and CKA_CERTIFICATE_CATEGORY: 2 maps to objectClass: pkiCA.
so you want to directly use the pkiUser|CA objectclass, that would 
be ok




Also, the above got me thinking: is there any standard LDAP schema
for private keys? If so, can we use it?

I didn't find any; the only keys in LDAP I found were a definition of
sshPublicKey for OpenSSH.


And even this schema is for public keys only :-) OK, nevermind then.



I'm going to store NSS trust objects along with CA certificates, so
I'm going to need an object class for that. You can find the details
on CKO_NSS_TRUST at
http://p11-glue.freedesktop.org/doc/storing-trust-policy/storing-trust-existing.html.

So this is an NSS extension to PKCS#11, not in the standard? If we add
trust objects, should the naming reflect this, like pkcs11nssattr?

Re: [Freeipa-devel] How to restore an IPA Replica when the CSN number generator has moved impossibly far into the future or past

2014-02-27 Thread Rich Megginson
ipactl start
  
  • 6: When the daemon starts, it will see that it does not have
  an nsState and will write new CSNs to -all- of the newly
  imported good data with today's timestamp. We need to take that
  data and write -it- out to an ldif file
    On  the master supplier:
    /var/lib/dirsrv/scripts-EXAMPLE-COM/db2ldif.pl -D
  'cn=Directory Manager' -w - -n userRoot -r -a
  /tmp/replication-master-389.ldif
    ^ the -r tells it to include all replica data which includes
  the newly blessed CSN data
    transfer the file to all of the ipa servers in the fleet
  
  • 7: Now we must re-initialize _every other_ ipa consumer
  server in the fleet with the new good data.
    Steps 7-10 need to be done 1 at a time on each ipa consumer
  server
    ipactl stop
  
  • 8: Sanitize the dse.ldif Configuration File
     On the ipa server: 
     edit the /etc/dirsrv/slapd-EXAMPLE-COM/dse.ldif file and
  remove the nsState attribute from the replica config entry
     You DO NOT want to remove the nsState from: dn: cn=uniqueid
  generator,cn=config
     The stanza you want to remove the value from is: dn:
  cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping
  tree,cn=config
     The attribute will look like this: nsState::
  cwA3QPBSAQABAA==
     Delete the entire line
  
  • 8.1: Remove traces of stale CSN tracking in the Replica
  Agreements themselves
     File location: /etc/dirsrv/slapd-EXAMPLE-COM/dse.ldif
     cat dse.ldif | sed -n '1 {h; $ !d}; $ {x; s/\n //g; p}; /^ / {H; d}; /^ /! {x; s/\n //g; p}' | grep -v nsds50ruv > new.dse.ldif
     backup the old dse.ldif and replace it with the new one
     # mv dse.ldif dse.saved.ldif
     # mv new.dse.ldif dse.ldif
  
  • 9: Import the data from the known good ldif. This will mark
  all the changes with CSNs that match the current time/date
  stamps
     On the auth server:
     chmod 644 /tmp/replication-master-389.ldif
     /var/lib/dirsrv/scripts-EXAMPLE-COM/ldif2db -n userRoot -i
  /tmp/replication-master-389.ldif
  
  • 10: Restart the ipa daemons on the ipa server
     On the ipa server:
     ipactl start
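
The unfold-and-filter pipeline in step 8.1 can also be sketched in Python, assuming standard LDIF line folding (continuation lines begin with a single space); the function name and sample data below are illustrative, not from the original procedure:

```python
def strip_attr_from_ldif(text, attr):
    """Unfold LDIF continuation lines (leading space), then drop every
    logical line carrying the given attribute."""
    unfolded = []
    for line in text.splitlines():
        if line.startswith(" ") and unfolded:
            unfolded[-1] += line[1:]  # continuation of previous line
        else:
            unfolded.append(line)
    prefix = attr.lower() + ":"
    return "\n".join(l for l in unfolded if not l.lower().startswith(prefix))

# Illustrative input: a folded nsds50ruv value spanning two lines.
sample = ("dn: cn=replica,cn=mapping tree,cn=config\n"
          "nsds50ruv: {replica 4 ldap://host:389}\n"
          " 52f03000000100\n"
          "nsState:: cwA3QPBSAQABAA==\n")
cleaned = strip_attr_from_ldif(sample, "nsds50ruv")
```

As with the sed version, a folded nsds50ruv value is removed as a single logical line, while everything else passes through untouched.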
  







    From Rich Megginson:
For those interested in the particulars of CSN tracking
  or the MultiMaster Replication algorithm, you can read up
  on it here:
  

It all starts with the Leslie Lamport paper:
  http://www.stanford.edu/class/cs240/readings/lamport.pdf
  "Time, Clocks, and the Ordering of Events in a Distributed
  System"
  
  The next big impact on MMR protocols was the work done at
  Xerox PARC on the Bayou project.
  
  These and other sources formed the basis of the IETF LDUP
  working group.  Much of the MMR protocol is based on the
  LDUP work.
  
  
  The tl;dr version is this:
  
  The MMR protocol is based on ordering operations by time,
  so that when you have two updates to the same attribute,
  the "last one wins".
  So how do you guarantee some sort of consistent ordering
  across many systems that do not have clocks in sync
  down to the millisecond? If you say "ntp", then you lose...
  The protocol itself has to have some notion of the time
  differences among servers.
  The ordering is done by CSN (Change Sequence Number)
  The first part of the CSN is the timestamp of the
  operation in unix time_t (number of seconds since the
  epoch).
  In order to guarantee ordering, the MMR protocol has a
  major constraint
  You must never, never, issue a CSN that is the same or
  less than another CSN
  In order to guarantee that, the MMR protocol keeps track
  of the time differences among _all_ of the servers that it
  knows about.
  When it generates CSNs, it uses the largest time
  difference among all servers that it knows about.
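
A toy model of this constraint (the real 389-ds CSN is a hex string carrying a timestamp, sequence number, replica id and subsequence number; the tuple layout below is a simplification for illustration):

```python
class CsnGenerator:
    """Toy CSN generator: (timestamp, seqnum, replica_id) tuples that
    never decrease, even if the local clock moves backwards."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.last_time = 0
        self.seqnum = 0
        self.remote_offset = 0  # largest known positive skew vs. peers

    def note_remote_time(self, remote_time, local_time):
        # Track the largest time difference seen from any replication peer.
        self.remote_offset = max(self.remote_offset, remote_time - local_time)

    def new_csn(self, local_time):
        t = local_time + self.remote_offset
        if t <= self.last_time:
            # Clock did not advance: reuse the timestamp and bump seqnum,
            # so the new CSN still compares greater than the previous one.
            t = self.last_time
            self.seqnum += 1
        else:
            self.last_time = t
            self.seqnum = 0
        return (t, self.seqnum, self.replica_id)
```

Tuple comparison gives the total order: even when the local clock steps backwards, the generator never emits a CSN that is less than or equal to one it already issued.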
  
  So how does the time skew grow at all?
  Due to timing differences, network latency, etc. the
  directory server cannot always generate the 

[Freeipa-devel] Is there RPC documentation?

2014-02-26 Thread Rich Megginson
I'm working on adding support for freeipa DNS to openstack designate 
(DNSaaS).  I am assuming I need to use RPC (XML?  JSON?  REST?) to 
communicate with freeipa.  Is there documentation about how to construct 
and send RPC messages?


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] Is there RPC documentation?

2014-02-26 Thread Rich Megginson

On 02/26/2014 09:18 AM, Petr Vobornik wrote:

On 26.2.2014 16:53, Petr Viktorin wrote:

On 02/26/2014 04:45 PM, Rich Megginson wrote:

I'm working on adding support for freeipa DNS to openstack designate
(DNSaaS).  I am assuming I need to use RPC (XML?  JSON? REST?) to
communicate with freeipa.  Is there documentation about how to 
construct

and send RPC messages?


The JSON-RPC and XML-RPC API is still not officially supported (read:
documented), though it's extremely unlikely to change.
If you need an example, run any ipa command with -vv; this will print
out the request and response.
API.txt in the source tree lists all the commands and params.
This blog post still applies (but be sure to read the update about
--cacert):
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/ 






The Web UI communicates with the API through JSON-RPC, so you can open the 
browser developer tools (F12) and inspect requests/responses in the network tab.


Thanks.  I would rather use the CLI, but that's good to know.



Re: [Freeipa-devel] Is there RPC documentation?

2014-02-26 Thread Rich Megginson

On 02/26/2014 08:53 AM, Petr Viktorin wrote:

On 02/26/2014 04:45 PM, Rich Megginson wrote:

I'm working on adding support for freeipa DNS to openstack designate
(DNSaaS).  I am assuming I need to use RPC (XML?  JSON?  REST?) to
communicate with freeipa.  Is there documentation about how to construct
and send RPC messages?


The JSON-RPC and XML-RPC API is still not officially supported 
(read: documented), though it's extremely unlikely to change.
If you need an example, run any ipa command with -vv; this will print 
out the request and response.

API.txt in the source tree lists all the commands and params.
This blog post still applies (but be sure to read the update about 
--cacert): 
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/




Ok.  Next question is - how does one do the equivalent of the curl 
command in python code?




Re: [Freeipa-devel] Is there RPC documentation?

2014-02-26 Thread Rich Megginson

On 02/26/2014 02:19 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 08:53 AM, Petr Viktorin wrote:

On 02/26/2014 04:45 PM, Rich Megginson wrote:

I'm working on adding support for freeipa DNS to openstack designate
(DNSaaS).  I am assuming I need to use RPC (XML?  JSON? REST?) to
communicate with freeipa.  Is there documentation about how to 
construct

and send RPC messages?


The JSON-RPC and XML-RPC API is still not officially supported
(read: documented), though it's extremely unlikely to change.
If you need an example, run any ipa command with -vv; this will print
out the request and response.
API.txt in the source tree lists all the commands and params.
This blog post still applies (but be sure to read the update about
--cacert):
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/ 






Ok.  Next question is - how does one do the equivalent of the curl
command in python code?


Here is a pretty stripped-down way to add a user. Other commands are 
similar, you just may care more about the output:


from ipalib import api
from ipalib import errors

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()

try:
    api.Command['user_add'](u'testuser',
                            givenname=u'Test', sn=u'User',
                            loginshell=u'/bin/sh')
except errors.DuplicateEntry:
    print "user already exists"
else:
    print "User added"



How would one do this from outside of ipa?  If ipalib is not available?



Re: [Freeipa-devel] Is there RPC documentation?

2014-02-26 Thread Rich Megginson

On 02/26/2014 03:22 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 02:19 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 08:53 AM, Petr Viktorin wrote:

On 02/26/2014 04:45 PM, Rich Megginson wrote:

I'm working on adding support for freeipa DNS to openstack designate
(DNSaaS).  I am assuming I need to use RPC (XML?  JSON? REST?) to
communicate with freeipa.  Is there documentation about how to
construct
and send RPC messages?


The JSON-RPC and XML-RPC API is still not officially supported
(read: documented), though it's extremely unlikely to change.
If you need an example, run any ipa command with -vv; this will print
out the request and response.
API.txt in the source tree lists all the commands and params.
This blog post still applies (but be sure to read the update about
--cacert):
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/ 







Ok.  Next question is - how does one do the equivalent of the curl
command in python code?


Here is a pretty stripped-down way to add a user. Other commands are
similar, you just may care more about the output:

from ipalib import api
from ipalib import errors

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()

try:
    api.Command['user_add'](u'testuser',
                            givenname=u'Test', sn=u'User',
                            loginshell=u'/bin/sh')
except errors.DuplicateEntry:
    print "user already exists"
else:
    print "User added"



How would one do this from outside of ipa?  If ipalib is not available?


You'd need to go to either /ipa/xml or /ipa/json (depending on what 
protocol you want to use) and issue one request there. This requires 
Kerberos authentication. The response will include a cookie which you 
should either ignore or store safely (like in the kernel keyring). 
Using the cookie will significantly improve performance.
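
A sketch of the request body such a call would carry (the server name, URL and parameter values here are placeholders, not taken from this thread; the envelope shape follows what `ipa -vv` prints):

```python
import json

# Placeholder deployment values (not from this thread).
server = "ipa.example.com"
url = "https://%s/ipa/session/json" % server

# JSON-RPC envelope as printed by `ipa -vv`: positional arguments
# first, then a dict of options.
payload = {
    "method": "user_add",
    "params": [
        ["testuser"],
        {"givenname": "Test", "sn": "User", "loginshell": "/bin/sh"},
    ],
    "id": 0,
}

headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    # FreeIPA rejects requests without a Referer pointing at /ipa.
    "Referer": "https://%s/ipa" % server,
}

body = json.dumps(payload)
# This body would be POSTed to `url` over a Kerberos-authenticated
# (Negotiate) HTTPS connection; the returned session cookie can then
# be reused against /ipa/session/json.
```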


This is for the ipa dns backend for designate.  I'm assuming I will 
either be using a keytab, or perhaps the new proxy?


At any rate, I have to do everything in python - including the kinit 
with the keytab.


I guess I'm really looking for specifics - I've seen recommendations to 
use the python libraries "requests" and "json".  I don't know if 
"requests" supports negotiate/kerberos.  If not, is there a recommended 
library to use?  As this particular project will be part of openstack, 
perhaps there is a more openstack-y library, or even something 
built-in to openstack (oslo?).  I think AMQP supports kerberos, so 
perhaps there is some oslo.messaging thing that will do the http + 
kerberos stuff.




If you store the cookie then you can make future requests to 
/ipa/session/{xml|json} unless a Kerberos error is raised, in which 
case things start over again.


You'll need to include a Referer header in your request, see the -vv 
output of the ipa command for samples.


rob




Re: [Freeipa-devel] Is there RPC documentation?

2014-02-26 Thread Rich Megginson

On 02/26/2014 03:48 PM, Simo Sorce wrote:

On Wed, 2014-02-26 at 15:28 -0700, Rich Megginson wrote:

On 02/26/2014 03:22 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 02:19 PM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/26/2014 08:53 AM, Petr Viktorin wrote:

On 02/26/2014 04:45 PM, Rich Megginson wrote:

I'm working on adding support for freeipa DNS to openstack designate
(DNSaaS).  I am assuming I need to use RPC (XML?  JSON? REST?) to
communicate with freeipa.  Is there documentation about how to
construct
and send RPC messages?

The JSON-RPC and XML-RPC API is still not officially supported
(read: documented), though it's extremely unlikely to change.
If you need an example, run any ipa command with -vv; this will print
out the request and response.
API.txt in the source tree lists all the commands and params.
This blog post still applies (but be sure to read the update about
--cacert):
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/





Ok.  Next question is - how does one do the equivalent of the curl
command in python code?

Here is a pretty stripped-down way to add a user. Other commands are
similar, you just may care more about the output:

from ipalib import api
from ipalib import errors

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()

try:
    api.Command['user_add'](u'testuser',
                            givenname=u'Test', sn=u'User',
                            loginshell=u'/bin/sh')
except errors.DuplicateEntry:
    print "user already exists"
else:
    print "User added"


How would one do this from outside of ipa?  If ipalib is not available?

You'd need to go to either /ipa/xml or /ipa/json (depending on what
protocol you want to use) and issue one request there. This requires
Kerberos authentication. The response will include a cookie which you
should either ignore or store safely (like in the kernel keyring).
Using the cookie will significantly improve performance.

This is for the ipa dns backend for designate.  I'm assuming I will
either be using a keytab, or perhaps the new proxy?

At any rate, I have to do everything in python - including the kinit
with the keytab.

Look at Rob's daemon, but you should *not* do a kinit; you should just use
gssapi (see python-kerberos) and do a gss_init_sec_context there. If the
environment is configured (KRB5_KTNAME set correctly) then gssapi will
automatically kinit for you under the hood.


I guess I'm really looking for specifics - I've seen recommendations to
use the python libraries "requests" and "json".  I don't know if
"requests" supports negotiate/kerberos.  If not, is there a recommended
library to use?  As this particular project will be part of openstack,
perhaps there is a more openstack-y library, or even something
built-in to openstack (oslo?).  I think AMQP supports kerberos, so
perhaps there is some oslo.messaging thing that will do the http +
kerberos stuff.

Afaik there is nothing that does kerberos in openstack; you'll have to
introduce all that stuff.


Egads - implementing openstack-wide kerberos client libraries in order 
to add an ipa dns backend to designate.


Rob, need any help with your proxy?



HTH,
Simo.





Re: [Freeipa-devel] [PATCH 0032] Update ACIs to permit users to add/delete their own tokens

2014-01-09 Thread Rich Megginson

On 01/09/2014 02:32 PM, Nathaniel McCallum wrote:

This patch is independent from my patches 0028-0031 and can be merged in
any order.

This patch has a bug, but I can't figure it out. We need to set
nsslapd-access-userattr-strict on cn=config to off. However, during
the rpm installation, I get this error:

DEBUG Unhandled LDAPError: UNWILLING_TO_PERFORM: {'info': 'Deleting
attributes is not allowed', 'desc': 'Server is unwilling to perform'}
ERROR Update failed: Server is unwilling to perform: Deleting attributes
is not allowed

I'm not sure what is causing this. Does anyone have any suggestions?
I believe the IPA update mechanism works by doing a modify/delete of the 
attribute followed by a modify/add.  By default, cn=config restricts the 
attributes which can be deleted.  You can add 
nsslapd-access-userattr-strict to the list of deletable attributes.  
Unfortunately, it is rather painful to do so.


Method one: Don't use the ipa update mechanism to update this 
attribute.  Instead, apply an LDAP modify directly, e.g. with ldapmodify:

ldapmodify 
dn: cn=config
changetype: modify
replace: nsslapd-access-userattr-strict
nsslapd-access-userattr-strict: off

or in python-ldap:

conn = ldap.initialize("my ldap url")
conn.simple_bind_s("cn=directory manager", password)
mod = [(ldap.MOD_REPLACE, "nsslapd-access-userattr-strict", ["off"])]
conn.modify_s("cn=config", mod)

Method two: allow deletion of nsslapd-access-userattr-strict in order to 
use ipa update method
This will unfortunately require the use of something other than the ipa 
update method, again.
1) do a search to get the current value in cn=config 
nsslapd-allowed-to-delete-attrs - it is a single space delimited list

2) add nsslapd-access-userattr-strict to the list
3) mod/replace the value
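
Steps 1-3 can be sketched as follows; the helper name and the example attribute list are illustrative, and only the value manipulation is shown (step 1's search and step 3's MOD_REPLACE would go over an LDAP connection as in method one):

```python
def add_allowed_attr(current_value, new_attr):
    """nsslapd-allowed-to-delete-attrs is a single space-delimited
    value; append new_attr unless it is already listed."""
    attrs = current_value.split()
    if new_attr.lower() not in (a.lower() for a in attrs):
        attrs.append(new_attr)
    return " ".join(attrs)

# Step 1 would read the current value from cn=config with an LDAP
# search; this example value is illustrative.
current = "nsslapd-listenhost nsslapd-securelistenhost"
# Step 2: add the attribute to the space-delimited list.
updated = add_allowed_attr(current, "nsslapd-access-userattr-strict")
# Step 3 would MOD_REPLACE nsslapd-allowed-to-delete-attrs on
# cn=config with `updated`.
```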



Nathaniel



[Freeipa-devel] Update: Re: Fedora 20 Release

2013-12-17 Thread Rich Megginson

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in Fedora 20.

Currently, we know about the following problems:

Schema problems:
   https://fedorahosted.org/389/ticket/47631 (regression)


Fixed.



Referential Integrity:
   https://fedorahosted.org/389/ticket/47621 (new functionality)
   https://fedorahosted.org/389/ticket/47624 (regression)

Fixed.


Replication:
   https://fedorahosted.org/389/ticket/47632 (?)


Cannot reproduce.  Closed as WORKSFORME.



Stability:
   https://bugzilla.redhat.com/show_bug.cgi?id=1041732

Fixed.
https://fedorahosted.org/389/ticket/47629 (we are not sure whether 
syncrepl really plays a role)


We are still trying to determine the cause, and if this is related to 
the use of syncrepl.  If it turns out to be related to syncrepl, I would 
like to release 1.3.2.9 in F20, and just disable the use of syncrepl in 
389 clients.


Is everyone ok with this?



One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and release 
it as quickly as possible.


The problem with downgrade to 1.3.1.x is that it requires manual 
change in dse.ldif file. You have to disable 'content synchronization' 
(syncrepl) and 'whoami' plugins which are not in 1.3.1.x packages but 
were added and enabled by 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after manual 
dse.ldif correction (but be careful - we didn't test replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. It means that somebody can do an F19-F20 
upgrade from the ISO, and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is the reason why a 'downgrade package' with 1.3.1.x 
inevitably needs an automated script which will purge the two missing 
plugins from dse.ldif.


Nathan, is it manageable before Christmas? One way or the other? Do you 
think that the downgrade is safe from a data format perspective? (I mean 
DB format upgrades etc.?)






Re: [Freeipa-devel] Update: Re: Fedora 20 Release

2013-12-17 Thread Rich Megginson

On 12/17/2013 11:19 AM, Mark Reynolds wrote:


On 12/17/2013 11:35 AM, Rich Megginson wrote:

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in Fedora 
20.


Currently, we know about the following problems:

Schema problems:
   https://fedorahosted.org/389/ticket/47631 (regression)


Fixed.



Referential Integrity:
   https://fedorahosted.org/389/ticket/47621 (new functionality)
   https://fedorahosted.org/389/ticket/47624 (regression)

Fixed.


Replication:
   https://fedorahosted.org/389/ticket/47632 (?)


Cannot reproduce.  Closed as WORKSFORME.



Stability:
   https://bugzilla.redhat.com/show_bug.cgi?id=1041732

Fixed.
https://fedorahosted.org/389/ticket/47629 (we are not sure whether 
syncrepl really plays a role)


We are still trying to determine the cause, and if this is related to 
the use of syncrepl.  If it turns out to be related to syncrepl, I 
would like to release 1.3.2.9 in F20, and just disable the use of 
syncrepl in 389 clients.


Is everyone ok with this?

Rich, I found a crash in 1.3.2 and 1.3.1.  This should go into 
1.3.2.9 (or a 1.3.2.10).


Ok.



One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and 
release it as quickly as possible.


The problem with downgrade to 1.3.1.x is that it requires manual 
change in dse.ldif file. You have to disable 'content 
synchronization' (syncrepl) and 'whoami' plugins which are not in 
1.3.1.x packages but were added and enabled by 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after manual 
dse.ldif correction (but be careful - we didn't test replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. It means that somebody can do an F19-F20 
upgrade from the ISO, and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is the reason why a 'downgrade package' with 
1.3.1.x inevitably needs an automated script which will purge the two 
missing plugins from dse.ldif.


Nathan, is it manageable before Christmas? One way or the other? Do you 
think that the downgrade is safe from a data format perspective? (I 
mean DB format upgrades etc.?)










Re: [Freeipa-devel] Update: Re: Fedora 20 Release

2013-12-17 Thread Rich Megginson

On 12/17/2013 11:23 AM, Rich Megginson wrote:

On 12/17/2013 11:19 AM, Mark Reynolds wrote:


On 12/17/2013 11:35 AM, Rich Megginson wrote:

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in 
Fedora 20.


Currently, we know about the following problems:

Schema problems:
   https://fedorahosted.org/389/ticket/47631 (regression)


Fixed.



Referential Integrity:
   https://fedorahosted.org/389/ticket/47621 (new functionality)
   https://fedorahosted.org/389/ticket/47624 (regression)

Fixed.


Replication:
   https://fedorahosted.org/389/ticket/47632 (?)


Cannot reproduce.  Closed as WORKSFORME.



Stability:
   https://bugzilla.redhat.com/show_bug.cgi?id=1041732

Fixed.
https://fedorahosted.org/389/ticket/47629 (we are not sure whether 
syncrepl really plays a role)


We are still trying to determine the cause, and if this is related 
to the use of syncrepl.  If it turns out to be related to syncrepl, 
I would like to release 1.3.2.9 in F20, and just disable the use of 
syncrepl in 389 clients.


Is everyone ok with this?

Rich, I found a crash in 1.3.2 and 1.3.1.  This should go into 
1.3.2.9 (or a 1.3.2.10).


Ok.


389-ds-base-1.3.2.9 is now in Fedora 20 updates-testing.  Please test 
and give karma.  This release fixes everything except 
https://fedorahosted.org/389/ticket/47629, a random crash in 
send_ldap_search_entry_ext(), which, in my testing, appears to be 
related to syncrepl and therefore imo should not hold up the release of 
1.3.2.9 into Fedora 20.







One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and 
release it as quickly as possible.


The problem with downgrade to 1.3.1.x is that it requires manual 
change in dse.ldif file. You have to disable 'content 
synchronization' (syncrepl) and 'whoami' plugins which are not in 
1.3.1.x packages but were added and enabled by 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after 
manual dse.ldif correction (but be careful - we didn't test 
replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. It means that somebody can do an F19-F20 
upgrade from the ISO, and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is the reason why a 'downgrade package' with 
1.3.1.x inevitably needs an automated script which will purge the two 
missing plugins from dse.ldif.


Nathan, is it manageable before Christmas? One way or the other? Do 
you think that the downgrade is safe from a data format perspective? 
(I mean DB format upgrades etc.?)










Re: [Freeipa-devel] ou, st, l missing from organizationalPerson

2013-12-16 Thread Rich Megginson

On 12/16/2013 04:11 AM, Petr Viktorin wrote:

On 12/16/2013 10:52 AM, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Petr Viktorin wrote:

On 12/13/2013 03:22 PM, Rich Megginson wrote:

On 12/13/2013 02:45 AM, Petr Viktorin wrote:

I finally got to investigating this failure.

It seems ou, along with a bunch of other attributes, is missing from
organizationalPerson in Fedora 20. I don't think it's IPA's fault, as
we don't define organizationalPerson.
Nathan, could it be related to the new schema parser?


Yes, looks like it.


Thanks for filing the bug, sorry I didn't get to it on Friday.

URL for the record: https://fedorahosted.org/389/ticket/47631

Should we consider this a release blocker for tomorrow's Fedora 20
release?

Yes.


At the very least this bug has to be fixed and the fix pushed to F20
as soon as possible.

Yes.


AFAIU 389-ds will get a downgrade for f20, complete with an epoch bump.


We are having a meeting today to decide this issue.



Re: [Freeipa-devel] ou, st, l missing from organizationalPerson

2013-12-16 Thread Rich Megginson

On 12/16/2013 09:00 AM, Adam Williamson wrote:

On Mon, 2013-12-16 at 11:52 +0200, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Petr Viktorin wrote:

On 12/13/2013 03:22 PM, Rich Megginson wrote:

On 12/13/2013 02:45 AM, Petr Viktorin wrote:

I finally got to investigating this failure.

It seems ou, along with a bunch of other attributes, is missing from
organizationalPerson in Fedora 20. I don't think it's IPA's fault, as
we don't define organizationalPerson.
Nathan, could it be related to the new schema parser?

Yes, looks like it.

Thanks for filing the bug, sorry I didn't get to it on Friday.

URL for the record: https://fedorahosted.org/389/ticket/47631

Should we consider this a release blocker for tomorrow's Fedora 20
release?

You'll need a TARDIS; we signed off on it on Thursday, and nothing stops
the Fedora release train (at least, nothing short of epic data loss,
this seems well short).


The schema problem could have serious repercussions.  So what are our 
options for getting a stable 389-ds-base into F20?





At the very least this bug has to be fixed and the fix pushed to F20
as soon as possible.

Indeed, is there an RHBZ for me to link a commonbugs note to?




Re: [Freeipa-devel] Fedora 20 Release

2013-12-16 Thread Rich Megginson

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in Fedora 20.

Currently, we know about the following problems:

Schema problems:
   https://fedorahosted.org/389/ticket/47631 (regression)

Referential Integrity:
   https://fedorahosted.org/389/ticket/47621 (new functionality)
   https://fedorahosted.org/389/ticket/47624 (regression)

Replication:
   https://fedorahosted.org/389/ticket/47632 (?)

Stability:
   https://bugzilla.redhat.com/show_bug.cgi?id=1041732
   https://fedorahosted.org/389/ticket/47629 (we are not sure whether 
syncrepl really plays a role)


One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and release 
it as quickly as possible.


The problem with downgrade to 1.3.1.x is that it requires manual 
change in dse.ldif file. You have to disable 'content synchronization' 
(syncrepl) and 'whoami' plugins which are not in 1.3.1.x packages but 
were added and enabled by 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after manual 
dse.ldif correction (but be careful - we didn't test replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. It means that somebody can do an F19-F20 
upgrade from the ISO, and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is the reason why a 'downgrade package' with 1.3.1.x 
inevitably needs an automated script which will purge the two missing 
plugins from dse.ldif.


Nathan, is it manageable before Christmas? One way or the other? Do you 
think that the downgrade is safe from a data format perspective? (I mean 
DB format upgrades etc.?)




We will have a meeting at 11:30 AM US EST to discuss this.
The number is the usual bridge (US Toll Free 18004518679 - 3150468279#)



Re: [Freeipa-devel] Fedora 20 Release

2013-12-16 Thread Rich Megginson

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in Fedora 20.

Currently, we know about the following problems:

Schema problems:
   https://fedorahosted.org/389/ticket/47631 (regression)


Fixed.



Referential Integrity:
   https://fedorahosted.org/389/ticket/47621 (new functionality)


Does it matter if new functionality is a problem?


https://fedorahosted.org/389/ticket/47624 (regression)

Replication:
   https://fedorahosted.org/389/ticket/47632 (?)

Stability:
   https://bugzilla.redhat.com/show_bug.cgi?id=1041732


Fixed.  However, there is a problem with slapi-nis: 
https://bugzilla.redhat.com/show_bug.cgi?id=1043546


https://fedorahosted.org/389/ticket/47629 (we are not sure whether 
syncrepl really plays a role)


How can we find out?



One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and release 
it as quickly as possible.


The problem with downgrade to 1.3.1.x is that it requires manual 
change in dse.ldif file. You have to disable 'content synchronization' 
(syncrepl) and 'whoami' plugins which are not in 1.3.1.x packages but 
were added and enabled by 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after manual 
dse.ldif correction (but be careful - we didn't test replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. It means that somebody can do an F19-F20 
upgrade from the ISO, and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is the reason why a 'downgrade package' with 1.3.1.x 
inevitably needs an automated script which will purge the two missing 
plugins from dse.ldif.


We have an upgrade/downgrade framework, it should be easy to 
disable/remove these plugins.


Is that it?  Are there any other problems found attempting to downgrade 
1.3.2 to 1.3.1 in F20?




Nathan, is it manageable before Christmas? One way or the other? Do you 
think that the downgrade is safe from a data format perspective? (I mean 
DB format upgrades etc.?)


The db format in 1.3.1 and 1.3.2 is the same, so there should be no 
problems there.


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] Fedora 20 Release

2013-12-16 Thread Rich Megginson

On 12/16/2013 09:33 AM, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Rich Megginson wrote:

On 12/16/2013 09:21 AM, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Rich Megginson wrote:

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in 
Fedora 20.


Currently, we know about the following problems:

Schema problems:
 https://fedorahosted.org/389/ticket/47631 (regression)


Fixed.



Referential Integrity:
 https://fedorahosted.org/389/ticket/47621 (new functionality)


Does it matter if new functionality is a problem?

Only if there is a crash.


I don't think there is a crash here.






https://fedorahosted.org/389/ticket/47624 (regression)

Replication:
 https://fedorahosted.org/389/ticket/47632 (?)

Stability:
 https://bugzilla.redhat.com/show_bug.cgi?id=1041732


Fixed.  However, there is a problem with slapi-nis: 
https://bugzilla.redhat.com/show_bug.cgi?id=1043546

The slapi-nis part seems to be a double-free on plugin shutdown.
I'll look into it tomorrow morning if Nalin doesn't find it earlier.


https://fedorahosted.org/389/ticket/47629 (we are not sure if the 
syncrepl really plays some role or not)


How can we find out?



One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and 
release it as quickly as possible.


The problem with downgrading to 1.3.1.x is that it requires a manual 
change in the dse.ldif file. You have to disable the 'content 
synchronization' (syncrepl) and 'whoami' plugins, which are not in the 
1.3.1.x packages but were added and enabled by the 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after a 
manual dse.ldif correction (but be careful - we didn't test 
replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. This means that somebody can do an 
F19-F20 upgrade from ISO and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is a reason why a 'downgrade package' with 
1.3.1.x inevitably needs an automated script which will purge the two 
missing plugins from dse.ldif.


We have an upgrade/downgrade framework, it should be easy to 
disable/remove these plugins.


Is that it?  Are there any other problems found attempting to 
downgrade 1.3.2 to 1.3.1 in F20?

Packaging issue -- epoch will have to be increased and maintained
forever. It is weird but that's what it is.


Sure.  But that's a one time thing.  And, it's only for F20 - once we 
go to F21, we can remove the epoch.

No, and that's key here. Once Epoch is in place, it is forever.


Why?





And then making sure disabling the plugins will happen only on downgrade -
this is actually an RPM trigger, which is something people easily get wrong.


I think it's simpler than that - if the version is 1.3.1, 
disable/remove the plugins.  If the version is 1.3.2, add/enable the 
plugins.  I don't think this will be a big deal.
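The rule Rich describes can be sketched in a few lines. This is a hypothetical illustration (function name and structure are mine, not the actual 389-ds-base spec file logic): the 'content synchronization' (syncrepl) and 'whoami' plugins exist only in 1.3.2.x, so they must be disabled on a downgrade to 1.3.1.x and enabled on 1.3.2.x.

```python
# Plugins that exist only in 389-ds-base 1.3.2.x.
NEW_PLUGINS = ("content synchronization", "whoami")

def plugin_actions(ds_version):
    """Map each 1.3.2-only plugin to 'enable' or 'disable' for a version."""
    parts = tuple(int(p) for p in ds_version.split(".")[:3])
    state = "enable" if parts >= (1, 3, 2) else "disable"
    return {name: state for name in NEW_PLUGINS}

print(plugin_actions("1.3.1.16"))  # downgrade target: both disabled
print(plugin_actions("1.3.2.8"))   # upgrade target: both enabled
```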

Ok, fine.





Re: [Freeipa-devel] Fedora 20 Release

2013-12-16 Thread Rich Megginson

On 12/16/2013 09:21 AM, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Rich Megginson wrote:

On 12/16/2013 08:07 AM, Petr Spacek wrote:

Hello list,

we have to decide what we will do with the 389-ds-base package in Fedora 
20.


Currently, we know about the following problems:

Schema problems:
  https://fedorahosted.org/389/ticket/47631 (regression)


Fixed.



Referential Integrity:
  https://fedorahosted.org/389/ticket/47621 (new functionality)


Does it matter if new functionality is a problem?

Only if there is a crash.


I don't think there is a crash here.






https://fedorahosted.org/389/ticket/47624 (regression)

Replication:
  https://fedorahosted.org/389/ticket/47632 (?)

Stability:
  https://bugzilla.redhat.com/show_bug.cgi?id=1041732


Fixed.  However, there is a problem with slapi-nis: 
https://bugzilla.redhat.com/show_bug.cgi?id=1043546

The slapi-nis part seems to be a double-free on plugin shutdown.
I'll look into it tomorrow morning if Nalin doesn't find it earlier.


https://fedorahosted.org/389/ticket/47629 (we are not sure if the 
syncrepl really plays some role or not)


How can we find out?



One option is to fix 1.3.2.x as quickly as possible.

Another option is to build 1.3.1.x for F20 with Epoch == 1 and 
release it as quickly as possible.


The problem with downgrading to 1.3.1.x is that it requires a manual 
change in the dse.ldif file. You have to disable the 'content 
synchronization' (syncrepl) and 'whoami' plugins, which are not in the 
1.3.1.x packages but were added and enabled by the 1.3.2.x packages.


In our tests, the downgraded DS server starts and works after a manual 
dse.ldif correction (but be careful - we didn't test replication).


Here is the main problem:
389-ds-base 1.3.2.8 is baked into the Fedora 20 ISO images and there is 
no way to replace it there. This means that somebody can do an 
F19-F20 upgrade from ISO and *then* an upgrade from the repos will break 
their DS configuration (because of the new plugins...).


Simo thinks that this is a reason why a 'downgrade package' with 
1.3.1.x inevitably needs an automated script which will purge the two 
missing plugins from dse.ldif.


We have an upgrade/downgrade framework, it should be easy to 
disable/remove these plugins.


Is that it?  Are there any other problems found attempting to 
downgrade 1.3.2 to 1.3.1 in F20?

Packaging issue -- epoch will have to be increased and maintained
forever. It is weird but that's what it is.


Sure.  But that's a one time thing.  And, it's only for F20 - once we go 
to F21, we can remove the epoch.



And then making sure disabling the plugins will happen only on downgrade -
this is actually an RPM trigger, which is something people easily get wrong.


I think it's simpler than that - if the version is 1.3.1, disable/remove 
the plugins.  If the version is 1.3.2, add/enable the plugins.  I don't 
think this will be a big deal.





Nathan, is it manageable before Christmas? One way or the other? Do you 
think that the downgrade is safe from a data format perspective? (I 
mean DB format upgrades etc.?)


The db format in 1.3.1 and 1.3.2 is the same, so there should be no 
problems there.








Re: [Freeipa-devel] Fedora 20 Release

2013-12-16 Thread Rich Megginson

On 12/16/2013 10:12 AM, Petr Spacek wrote:

On 16.12.2013 17:55, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Rich Megginson wrote:
Simo thinks that this is a reason why a 'downgrade package' with 1.3.1.x
inevitably needs an automated script which will purge the two missing
plugins from dse.ldif.


We have an upgrade/downgrade framework, it should be easy to
disable/remove these plugins.

Is that it?  Are there any other problems found attempting to downgrade
1.3.2 to 1.3.1 in F20?

Packaging issue -- epoch will have to be increased and maintained
forever. It is weird but that's what it is.


Sure.  But that's a one-time thing.  And, it's only for F20 - once we go
to F21, we can remove the epoch.

No, and that's key here. Once Epoch is in place, it is forever.


Why?

Because that's how RPM is built. When the Epoch value is absent, it is
assumed to be equal to 0.
1.3.1.18-1 will be equal to 0:1.3.1.18-1 and less than 1.3.2.8-1,
however, 1:1.3.1.18-1 will be greater than 1.3.2.8-1 because the latter
is equal to 0:1.3.2.8-1.

Once epoch is there, it is to stay.
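The comparison Alexander describes can be sketched in a few lines. This is a deliberately simplified illustration (tuple comparison on purely numeric version segments, with a missing epoch defaulting to 0), not the real rpmvercmp algorithm, which also handles alphanumeric segments and release tags.

```python
def parse_evr(evr):
    """Split 'epoch:version' into (epoch, version-segments); no epoch means 0."""
    epoch, sep, version = evr.partition(":")
    if not sep:  # no ':' present, so no explicit epoch
        epoch, version = "0", evr
    return int(epoch), tuple(int(p) for p in version.split("."))

def newer(a, b):
    """True if package EVR string a sorts after b (epoch dominates)."""
    return parse_evr(a) > parse_evr(b)

# The cases from the thread:
print(newer("1.3.2.8", "1.3.1.18"))    # True: plain version comparison
print(newer("1:1.3.1.18", "1.3.2.8"))  # True: epoch 1 beats implicit epoch 0
```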


Anyway, is it a real problem? Personally, I consider it just another 
version number.


On my Fedora 19:
$ repoquery -qa | wc -l
46645
(packages in total)

$ repoquery -qa | grep -- '-[1-9][0-9]*:' | wc -l
6581
(packages with non-zero epoch)

No, not a real problem, but just one more hassle I'd rather not have to 
deal with.




Re: [Freeipa-devel] Fedora 20 Release

2013-12-16 Thread Rich Megginson

On 12/16/2013 01:14 PM, Simo Sorce wrote:

On Mon, 2013-12-16 at 10:16 -0700, Rich Megginson wrote:

On 12/16/2013 10:12 AM, Petr Spacek wrote:

On 16.12.2013 17:55, Alexander Bokovoy wrote:

On Mon, 16 Dec 2013, Rich Megginson wrote:

Simo thinks that this is a reason why 'downgrade package' with
1.3.1.x
inevitably needs automated script which will purge two missing
plugins
from dse.ldif.

We have an upgrade/downgrade framework, it should be easy to
disable/remove these plugins.

Is that it?  Are there any other problems found attempting to
downgrade
1.3.2 to 1.3.1 in F20?

Packaging issue -- epoch will have to be increased and maintained
forever. It is weird but that's what it is.

Sure.  But that's a one time thing.  And, it's only for F20 - once
we go
to F21, we can remove the epoch.

No, and that's key here. Once Epoch is in place, it is forever.

Why?

Because that's how RPM is built. When the Epoch value is absent, it is
assumed to be equal to 0.
1.3.1.18-1 will be equal to 0:1.3.1.18-1 and less than 1.3.2.8-1,
however, 1:1.3.1.18-1 will be greater than 1.3.2.8-1 because the latter
is equal to 0:1.3.2.8-1.

Once epoch is there, it is to stay.

Anyway, is it a real problem? Personally, I consider it like
yet-another-version-number.

On my Fedora 19:
$ repoquery -qa | wc -l
46645
(packages in total)

$ repoquery -qa | grep -- '-[1-9][0-9]*:' | wc -l
6581
(packages with non-zero epoch)


No, not a real problem, but just one more hassle I'd rather not have to
deal with.

Yes it is a real problem, it is extremely confusing to people, because
it is not in the rpm file name.

It should be avoided if at all possible, and it is an unremovable tattoo
once you have it on.

So a decision to add an epoch number should never be taken lightly.

If you do not understand why, you should probably not set epochs.


Ok.  We're going to try to fix the bugs in 1.3.2 ASAP, and do some 
testing in F20.


I have some 1.3.2.9 packages for F20 here: 
http://rmeggins.fedorapeople.org/rpms/


1.3.2.9 fixes the following issues:
- Ticket #47631 objectclass may, must lists skip rest of objectclass 
once first is found in sup

-- NOTE: this is the schema issue
- Ticket 47627 - Fix replication logging
- Ticket #47313 - Indexed search with filter containing '&' and '!' with 
attribute subtypes gives wrong result
-- NOTE: this is one of the crashing issues (not the crash that appears 
to be syncrepl related)

- Ticket 47613 - Issues setting allowed mechanisms
- Ticket 47617 - allow configuring changelog trim interval
- Ticket 47601 - Plugin library path validation prevents intentional 
loading of out-of-tree modules
- Ticket 47627 - changelog iteration should ignore cleaned rids when 
getting the minCSN

- Ticket #47623 fix memleak caused by 47347
- Ticket 47622 - Automember betxnpreoperation - transaction not aborted 
when group entry does not exist

- Ticket 47620 - 389-ds rejects nsds5ReplicaProtocolTimeout attribute


I'm trying to use copr to set up a repo: 
http://copr.fedoraproject.org/coprs/rmeggins/389-ds-base-testing/repo/fedora-20-x86_64/

but copr seems to be having some issues.

I also have a patch for 1:389-ds-base-1.3.1.16 for F20 ready to go - I 
tested upgrade from 1.3.2.7 on F20, works fine, disables whoami and 
syncrepl.




Simo.






Re: [Freeipa-devel] ou, st, l missing from organizationalPerson (Was: FreeIPA 3.3.latest failing tests: config_mod)

2013-12-13 Thread Rich Megginson

On 12/13/2013 02:45 AM, Petr Viktorin wrote:

I finally got to investigating this failure.

It seems ou, along with a bunch of other attributes, is missing from 
organizationalPerson in Fedora 20. I don't think it's IPA's fault, as 
we don't define organizationalPerson.

Nathan, could it be related to the new schema parser?


Yes, looks like it.



f19 has:
objectclasses: ( 2.5.6.7 NAME 'organizationalPerson' SUP person 
STRUCTURAL MAY ( title $ x121Address $ registeredAddress $ 
destinationIndicator $ preferredDeliveryMethod $ telexNumber $ 
teletexTerminalIdentifier $ internationalISDNNumber $ 
facsimileTelephoneNumber $ street $ postOfficeBox $ postalCode $ 
postalAddress $ physicalDeliveryOfficeName $ ou $ st $ l ) X-ORIGIN 
'RFC 4519' )


f20 has:
objectclasses: ( 2.5.6.7 NAME 'organizationalPerson' SUP person 
STRUCTURAL MAY ( title $ x121Address $ registeredAddress $ 
destinationIndicator $ preferredDeliveryMethod $ telexNumber $ 
teletexTerminalIdentifier ) X-ORIGIN 'RFC 4519' )
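A quick way to see exactly which attributes the F20 parser dropped is to diff the MAY lists of the two definitions quoted above. This is a throwaway sketch (regex-based, not a real LDAP schema parser):

```python
import re

# The two organizationalPerson definitions quoted above, line breaks joined.
F19 = ("( 2.5.6.7 NAME 'organizationalPerson' SUP person STRUCTURAL "
       "MAY ( title $ x121Address $ registeredAddress $ destinationIndicator $ "
       "preferredDeliveryMethod $ telexNumber $ teletexTerminalIdentifier $ "
       "internationalISDNNumber $ facsimileTelephoneNumber $ street $ "
       "postOfficeBox $ postalCode $ postalAddress $ physicalDeliveryOfficeName $ "
       "ou $ st $ l ) X-ORIGIN 'RFC 4519' )")
F20 = ("( 2.5.6.7 NAME 'organizationalPerson' SUP person STRUCTURAL "
       "MAY ( title $ x121Address $ registeredAddress $ destinationIndicator $ "
       "preferredDeliveryMethod $ telexNumber $ teletexTerminalIdentifier ) "
       "X-ORIGIN 'RFC 4519' )")

def may_attrs(definition):
    """Pull the MAY attribute names out of an objectclass definition."""
    body = re.search(r"MAY\s*\(([^)]*)\)", definition).group(1)
    return {attr.strip() for attr in body.split("$")}

# Attributes present in F19 but missing in F20 -- note 'ou', 'st' and 'l'.
print(sorted(may_attrs(F19) - may_attrs(F20)))
```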





On 11/14/2013 04:44 PM, Petr Spacek wrote:

Hello,

latest FreeIPA build from branch ipa-3-3 (built today on Fedora 20,
latest bits) fails following tests:

==
ERROR: test_config[0]: config_mod: Try to add an unrelated objectclass
to ipauserobjectclasses
--
Traceback (most recent call last):
   File /usr/lib/python2.7/site-packages/nose/case.py, line 197, in
runTest
 self.test(*self.arg)
   File /tmp/git/ipatests/test_xmlrpc/xmlrpc_test.py, line 283, in
lambda
 func = lambda: self.check(nice, **test)
   File /tmp/git/ipatests/test_xmlrpc/xmlrpc_test.py, line 301, in 
check

 self.check_output(nice, cmd, args, options, expected, extra_check)
   File /tmp/git/ipatests/test_xmlrpc/xmlrpc_test.py, line 340, in
check_output
 got = api.Command[cmd](*args, **options)
   File /tmp/git/ipalib/frontend.py, line 436, in __call__
 ret = self.run(*args, **options)
   File /tmp/git/ipalib/frontend.py, line 761, in run
 return self.forward(*args, **options)
   File /tmp/git/ipalib/frontend.py, line 782, in forward
 return self.Backend.xmlclient.forward(self.name, *args, **kw)
   File /tmp/git/ipalib/rpc.py, line 712, in forward
 raise error(message=e.faultString)
ValidationError: invalid 'ipauserobjectclasses': user default attribute
ou would not be allowed!

==
ERROR: test_config[1]: config_mod: Remove the unrelated objectclass from
ipauserobjectclasses
--
Traceback (most recent call last):
   File /usr/lib/python2.7/site-packages/nose/case.py, line 197, in
runTest
 self.test(*self.arg)
   File /tmp/git/ipatests/test_xmlrpc/xmlrpc_test.py, line 283, in
lambda
 func = lambda: self.check(nice, **test)
   File /tmp/git/ipatests/test_xmlrpc/xmlrpc_test.py, line 301, in 
check

 self.check_output(nice, cmd, args, options, expected, extra_check)
   File /tmp/git/ipatests/test_xmlrpc/xmlrpc_test.py, line 340, in
check_output
 got = api.Command[cmd](*args, **options)
   File /tmp/git/ipalib/frontend.py, line 436, in __call__
 ret = self.run(*args, **options)
   File /tmp/git/ipalib/frontend.py, line 761, in run
 return self.forward(*args, **options)
   File /tmp/git/ipalib/frontend.py, line 782, in forward
 return self.Backend.xmlclient.forward(self.name, *args, **kw)
   File /tmp/git/ipalib/rpc.py, line 712, in forward
 raise error(message=e.faultString)
AttrValueNotFound: ipauserobjectclasses does not contain 'ipahost'

--
Ran 10 tests in 1.233s

FAILED (errors=2)
==
FAILED under '/usr/bin/python2.7'


Other tests from ipatests/test_xmlrpc/test_config_plugin.py are okay.








Re: [Freeipa-devel] [PATCH] 416 Use valid LDAP search base in migration plugin

2013-07-26 Thread Rich Megginson

On 07/26/2013 05:43 AM, Martin Kosek wrote:

One find_entry_by_attr call did not set a search base, leading to an
LDAP search call with a zero-length search base. This leads to
false-negative results from LDAP.



Pushed to master, ipa-3-2 as a one-liner.


Does the migrate code correctly handle the search return?  Before it was 
working fine when it got the err=32 - it just assumed the user did not 
already exist.  With the correct search base, the search will return 
err=0, and will return no search entries, which migration should assume 
means the user does not already exist.
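The two failure modes can be sketched as follows. These are stand-in exception classes, not the real python-ldap or ipalib ones (python-ldap raises `ldap.NO_SUCH_OBJECT` for err=32; ipaldap's `find_entry_by_attr` raises `ipalib.errors.NotFound` when a search succeeds but matches nothing); the point is that migration only needs "exists or not" and can treat both alike.

```python
class NoSuchObject(Exception):
    """Stand-in for err=32: the search base itself does not exist."""

class NotFound(Exception):
    """Stand-in for a successful search that returned zero entries."""

def user_already_exists(search):
    """Migration treats both error outcomes as 'user does not exist yet'."""
    try:
        search()
        return True
    except (NoSuchObject, NotFound):
        return False

def old_broken_base():   # zero-length search base -> err=32
    raise NoSuchObject("no such object")

def new_fixed_base():    # correct base, but no matching entry
    raise NotFound("no such entry")

# Either way the user to be migrated is treated as not-yet-existing.
print(user_already_exists(old_broken_base), user_already_exists(new_fixed_base))
```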




Martin



Re: [Freeipa-devel] [PATCH] 416 Use valid LDAP search base in migration plugin

2013-07-26 Thread Rich Megginson

On 07/26/2013 09:28 AM, Martin Kosek wrote:

On 07/26/2013 04:04 PM, Rich Megginson wrote:

On 07/26/2013 05:43 AM, Martin Kosek wrote:

One find_entry_by_attr call did not set a search base, leading to an
LDAP search call with a zero-length search base. This leads to false-negative
results from LDAP.



Pushed to master, ipa-3-2 as a one-liner.

Does the migrate code correctly handle the search return?  Before it was
working fine when it got the err=32 - it just assumed the user did not already
exist.  With the correct search base, the search will return err=0, and will
return no search entries, which migration should assume means the user does not
already exist.


Thanks for double-checking this Rich. But our LDAP library raised exception
when LDAP returns no entry, I double checked this particular call I changed:


>>> conn.find_entry_by_attr('krbprincipalname', 'ad...@example.com',
...     'krbprincipalaux', [''], DN(api.env.container_user, api.env.basedn))
LDAPEntry(ipapython.dn.DN('uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com'),
{})

>>> conn.find_entry_by_attr('krbprincipalname', 'doesnotex...@example.com',
...     'krbprincipalaux', [''], DN(api.env.container_user, api.env.basedn))
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File /usr/lib/python2.7/site-packages/ipapython/ipaldap.py, line 1299, in
find_entry_by_attr
 (entries, truncated) = self.find_entries(filter, attrs_list, base_dn)
   File /usr/lib/python2.7/site-packages/ipapython/ipaldap.py, line 1248, in
find_entries
 raise errors.NotFound(reason='no such entry')
ipalib.errors.NotFound: no such entry

So the change should work correctly.

Martin

ok - ack



Re: [Freeipa-devel] [PATCHES] 143-147 Improve performance with large groups

2013-06-27 Thread Rich Megginson

On 06/27/2013 09:31 AM, Jan Cholasta wrote:

On 27.6.2013 17:23, Martin Kosek wrote:

Thanks for this effort!

I quickly went through the patches; they mostly look harmless, except the
following:

Subject: [PATCH 4/5] Add missing substring indices for attributes 
managed by

  the referint plugin.

AFAIK, sub index is a very expensive index - as we discussed offline -
adding Rich to advise and confirm this. I think you added it because some
plugin was doing a substring/wildcard search when an LDAP entry was being
deleted - did you identify which one it is? Because I would rather get rid
of the bad search than add so many sub indices.


The search is hard-coded in the referint plugin, see 
https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/plugins/referint/referint.c#n745.


Not sure if it makes sense to do a wildcard/substr search here - please 
file a ticket with 389 to investigate.


sub index isn't necessarily a bad thing - in this case it may be more 
beneficial than harmful - if you have enough nsslapd-idlistscanlimit to 
hold the entire candidate list in a single id list without hurting 
performance (i.e. a list of 1 entries is probably ok - a list of 
100 entries is not)
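For context, nsslapd-idlistscanlimit is a global index limit attribute on the ldbm database config entry in 389-ds. A sketch of raising it via ldapmodify might look like the following (the value shown is purely illustrative - it should be sized to the deployment as Rich describes):

```ldif
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-idlistscanlimit
nsslapd-idlistscanlimit: 100000
```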






Secondly, did you also check Web UI performance? I think we could 
noticeable
improve user/group lists performance if we added a new (hidden) 
option to
suppress loading membership information which could then be utilized 
by Web UI.

Adding Petr Vobornik to CC to consider this.


No, not yet.

Honza





Re: [Freeipa-devel] [PATCH] krb 1.12's OTP-Over-RADIUS

2013-04-26 Thread Rich Megginson

On 04/26/2013 04:30 PM, Rob Crittenden wrote:

Nathaniel McCallum wrote:

On Fri, 2013-04-12 at 17:39 -0400, Nathaniel McCallum wrote:

On Fri, 2013-04-12 at 11:53 -0400, Nathaniel McCallum wrote:

On Fri, 2013-04-12 at 11:34 -0400, Nathaniel McCallum wrote:

On Thu, 2013-04-11 at 14:48 -0400, Rob Crittenden wrote:

Nathaniel McCallum wrote:

On Wed, 2013-04-10 at 15:35 -0400, Rob Crittenden wrote:

I'm not sure how I'd test it if I got it built.


I'm working on this. I hope to have a clear answer next week. 
Bear with

me...


Overall looks really good.


I've split up the patch into multiple commits. I've also added 
.update

files and a patch for ipa-kdb to feed krb5 the right user string.

https://github.com/npmccallum/freeipa/commits/otp

Please take a look. I *think* I've got everything worked out so 
far with
the exception of bug numbers / urls. Should every patch have a 
separate

bug and a link to the design page?


The ticket should go into every commit. I'd probably put the 
design link
there too, just for completeness. Future bug fixes, et all aren't 
going
to require the design page, but since these commits are all 
related to

the initial feature it will be nice to have.

You can have multiple patches on the same ticket/bug.


https://github.com/npmccallum/freeipa/commits/otp

All four commits now have bug numbers and design page links. I'm 
adding

the design page link to the tickets as we speak.

Remaining issues (AFAICS):
1. The daemon (ipa-otpd) runs as root and binds anonymously
2. ipatokenRadiusSecret is readable by an anonymous bind

3. ipatokenT?OTP.* are readable by an anonymous bind

In the case of both #2 and #3, only admins should have RW. ipa-otpd
needs read access to ipatokenRadiusSecret. The DS bind plugin below 
(#2)

needs read access to ipatokenT?OTP.*.


Outstanding pieces:
1. CLI tool -- https://fedorahosted.org/freeipa/ticket/3368
2. DS bind plugin -- https://fedorahosted.org/freeipa/ticket/3367
3. UI -- https://fedorahosted.org/freeipa/ticket/3369
4. Self Service UI -- https://fedorahosted.org/freeipa/ticket/3370

#1 and #2 are within the scope of F19 and should hopefully land 
shortly

(in separate commits). #3 and #4 are probably $nextrelease.



FYI - Here is an RPM with all of the code so far:
http://koji.fedoraproject.org/koji/taskinfo?taskID=5247029


Updated RPMs, containing the new 389DS bind plugin and build for F19,
are here:
http://koji.fedoraproject.org/koji/taskinfo?taskID=5270926

Nathaniel


BuildRequires needed for whatever provides krad.h

A bunch of the files declare functions in each other. Is it cleaner to 
put these into an include file? I'm gathering that this will always be 
self-contained, so maybe this is ok.


In entry_attr_get_berval() is it worth pointing out that there is no 
need to free the value, or is that assumed because it uses 
slapi_value_get_berval()?


If we detect that there is clock drift should we log it? Will we ever 
try to report to the client (e.g. future enhancement)?


I wonder if the NSPR-version of some functions should be used since 
this is running inside 389-ds, like PL_strcasecmp for strcasecmp()


Nah - at this point, strcasecmp is supported pretty much everywhere.  
However, there are some interesting NSPR functions that help with buffer 
overrun detection and string null terminating - see plstr.h for details 
- if you need to do something like that.




ops.c:

pedantic: lack of spacing between if and parens

sha384 is an allowed type only in otp.c. Is that needed?

rob




Re: [Freeipa-devel] [PATCH] WIP backup and restore

2013-03-25 Thread Rich Megginson

On 03/25/2013 12:08 PM, Petr Viktorin wrote:

On 03/23/2013 05:06 AM, Rob Crittenden wrote:

TL;DR. Sorry.

Here is my current progress on backup and restore. I have not documented
any of this in the Implementation section of the wiki yet.

I've added two new commands, ipa-backup and ipa-restore.

The files go into /var/lib/ipa/backup. When doing a restore you should
only reference the directory in backup, not the full path. This needs to
change, but it is what it is.

There are strict limits on what can be restored where. Only exact
matching hostnames and versions are allowed right now. We can probably
relax the hostname requirement if we're only restoring data, and the
version requirement perhaps to only the first two values (so you can restore a
3.0.0 backup on 3.0.1 but not on 3.1.0).


Do we also need to limit the versions of Dogtag, 389, Kerberos...?


No.


Or is what they put in /var/lib guaranteed portable across versions?


Mostly.  We always suggest doing ldif dumps (db2ldif) for longer-term 
storage.





I've done 99.99% of testing in F-18 with a single instance. I did some
initial testing in 6.4 so I think the roots are there, but they are
untested currently.

I spent a lot of time going in circles when doing a restore and getting
replication right. I'm open to discussion on this, but my purpose for
restoration was to define a new baseline for the IPA installation. It is
basically the catastrophic case, where your data is
hosed/untested/whatever and you just want to get back to some sane 
point.


Ok, so given that, we need to make sure that any other masters don't send
us any updates from their changelogs when they come back online. So I use a
new feature in 1.3.0 to disable the replication agreements. This works
really, really well.

The only problem is you have to re-enable the agreement in order to
re-initialize a master (https://fedorahosted.org/389/ticket/47304). I
have the feeling that this leaves a small window where replication can
occur and pollute our restored master. I noticed that we do a
force_sync() when doing a full re-init. It may be that if we dropped it
that would also mitigate this.

I did the majority of my testing using an A - B - C replication
topology. This exposed a lot of issues that A - B did not. I don't
know if it was the third server or having the extra hop, but I hopefully
closed a bunch of the corner cases.

So what I would do is either a full or a data restore on A. This would
break replication on B and C, as expected. So in this scenario A and B
are CAs.

Then I'd run this on B:

# ipa-replica-manage re-initialize --from=A
# ipa-csreplica-manage re-initialize --from=A

Once that was done I'd run this on C:

# ipa-replica-manage re-initialize --from=B

The restoration of the dogtag databases was the last thing I did so it
isn't super-well tested. I had to move a fair bit of code around. I
think it's the sort of thing that will work when the everything goes
well but exceptions may not be well-handled.

The man pages are just a shell right now; they need a lot of work.

It should also be possible to do a full system restore. I tested with:

# ipa-server-install ...
# add a bunch of data, 100 entries or more
# ipa-backup
# add one or more users
# ipa-server-install --uninstall -U
# ipa-restore ipa-full-...

The last batch of users should be gone. I did similar tests with the
A/B/C set up.

I ran the unit tests against it and all was well.

I have done zero testing in a Trust environment, though at least some of
the files are backed up in the full case. I did some testing with DNS.

I did no testing of a master that was down at the time of restoration
and then was brought online later, so it never had its replication
agreement disabled. I have the feeling it will hose the data.

I have some concern over space requirements. Because I tar things up, one
basically needs double the backup space in order to do a restore, and a
bit more when encrypted.
data, but we have many files for the 389-ds bak backup and I didn't want
to have to encrypt them all.

On that note, I'm doing a db2bak AND a db2ldif backup and am currently
using ldif2db for the restore. My original intention was to use
db2bak/bak2db in order to retain the changelog, but retaining the
changelog is actually a problem if we're restoring to a known state and
forcing a re-init. It wouldn't take much to convince me to drop that,
which reduces the # of files we have to deal with.

I also snuck in a change to the way that replication is displayed. It
has long bothered me that we print out an Updating message during
replication because it gives no context. I changed it to be more of a
progress indicator, using \r to over-write itself and include the # of
seconds elapsed. The log files are still readable but I'd hate to see
what this looks like in a typescript :-)

Finally, sorry about the huge patch. I looked at the incremental commits
I had done and I didn't think 

Re: [Freeipa-devel] DESIGN: Recover DNA Ranges

2013-02-25 Thread Rich Megginson

On 02/25/2013 06:09 AM, Martin Kosek wrote:

On 02/25/2013 01:44 PM, Petr Viktorin wrote:

On 02/22/2013 09:19 PM, Rob Crittenden wrote:

Design to allow one to recover DNA ranges when deleting a replica or
just for normal range management.

http://freeipa.org/page/V3/Recover_DNA_Ranges

Supporting ticket https://fedorahosted.org/freeipa/ticket/3321

rob

I wonder if it would be possible to have more on-deck ranges. Could
dnaNextRange be multi-valued, and when the low-water mark is hit the plugin
would pick one of them?


Not at the moment, this is a single valued attribute type:

attributetypes: ( 2.16.840.1.113730.3.1.2129 NAME 'dnaNextRange' DESC 'DNA ran
  ge of values to get from replica' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE
  -VALUE X-ORIGIN '389 Directory Server' )

But it is a good question for the 389-ds guys; it would be a good extension to the
DNA plugin and would prevent us from losing the range when there is no
master with an empty dnaNextRange. But maybe there is a strong reason why this was
made single-valued...


If you make it multi-valued, then you probably want to have some sort of 
ordering to the values . . .
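One ordering policy the plugin could impose is simply "always consume the lowest-starting on-deck range first". A hypothetical sketch (value syntax 'low-high', as in the thread's dnarange-set example; not actual DNA plugin code):

```python
def pick_next_range(ranges):
    """Return the lowest on-deck range and the values left on deck."""
    parsed = sorted(tuple(int(part) for part in r.split("-")) for r in ranges)
    low, high = parsed[0]
    return (low, high), ["%d-%d" % r for r in parsed[1:]]

chosen, on_deck = pick_next_range(["1500-1999", "250-499", "700-999"])
print(chosen, on_deck)  # (250, 499) ['700-999', '1500-1999']
```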





As for the RFE, I have few comments/questions for Rob:

1) I would expand Setting the on-deck range section and add an information
what should we do when the remote master is not accessible (this would result
only in a warning probably).


2) We may want to make sure that the removed replica is read-only before we copy
the range (just to be sure that we do not miss some value due to a race condition).


3) In Enhancing ipa-replica-manage:

What does ipa-replica-manage dnarange-set masterA.example.com 250-499 exactly
do? I thought that it would just overwrite the active range, but based on the next
ipa-replica-manage dnanextrange-show example, it moved the currently active
range of masterA.example.com to the on-deck range. Do we want to do that?


4) What does "NOTE: We will need to be clear that this range has nothing to do
with Trust ranges." actually mean? AFAIU, IPA should have all local ranges
covered with local idrange range(s).

If it does not have it covered, it could happen that for example a new trust
would overlap with this user-defined local range and we would have colliding
POSIX IDs...

IMO, dnarange-set and dnanextrange-set should at first check if the range is
covered with some local idrange and only then allowed setting the new range.

Martin




Re: [Freeipa-devel] DESIGN: Recover DNA Ranges

2013-02-25 Thread Rich Megginson

On 02/25/2013 09:23 AM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/25/2013 06:09 AM, Martin Kosek wrote:

On 02/25/2013 01:44 PM, Petr Viktorin wrote:

On 02/22/2013 09:19 PM, Rob Crittenden wrote:

Design to allow one to recover DNA ranges when deleting a replica or
just for normal range management.

http://freeipa.org/page/V3/Recover_DNA_Ranges

Supporting ticket https://fedorahosted.org/freeipa/ticket/3321

rob

I wonder if it would be possible to have more on-deck ranges. Could
dnaNextRange be multi-valued, and when the low-water mark is hit the
plugin
would pick one of them?


Not at the moment, this is a single valued attribute type:

attributetypes: ( 2.16.840.1.113730.3.1.2129 NAME 'dnaNextRange'
  DESC 'DNA range of values to get from replica'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE
  X-ORIGIN '389 Directory Server' )

But it is a good question for the 389-ds guys; it would be a good
extension to the DNA plugin and would prevent us from losing the range
when there is no master with an empty dnaNextRange. But maybe there is
a strong reason why this was made single-valued...


If you make it multi-valued, then you probably want to have some sort of
ordering to the values . . .


I don't know. We don't have a whole lot of control of ordering when 
DNA gets a range, and holes in the range happen now, so I wouldn't 
have a problem with lack of control.


Ok.  Please file an RFE ticket.  There are some code changes that we 
will need to make to DNA to make next range take multiple values.
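If dnaNextRange ever did take multiple values, the plugin would need a deterministic way to pick one at the low-water mark. A minimal sketch, with an assumed ordering policy (lowest starting value first) that is not from this thread:

```python
# Sketch: pick one on-deck range from a hypothetical multi-valued
# dnaNextRange. Assumed policy: lowest starting value first, which
# gives a stable ordering without storing any extra metadata.

def pick_next_range(on_deck):
    """Given 'low-high' range strings, return (chosen, remaining)."""
    ordered = sorted(on_deck, key=lambda r: int(r.split("-")[0]))
    return ordered[0], ordered[1:]

chosen, remaining = pick_next_range(["2000-2500", "500-999", "3000-3500"])
assert chosen == "500-999"
assert remaining == ["2000-2500", "3000-3500"]
```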




rob





Re: [Freeipa-devel] Backup and Restore design

2013-02-20 Thread Rich Megginson

On 02/20/2013 08:38 AM, Rob Crittenden wrote:

Simo Sorce wrote:

On Tue, 2013-02-19 at 22:43 -0500, Rob Crittenden wrote:

I've looked into some basic backup and restore procedures for IPA. My
findings are here: http://freeipa.org/page/V3/Backup_and_Restore


Great summary!

For the catastrophic failure scenario, should we mention how to put back
a full failed and restored machine online ? I am thinking the restored
server may be behind (even if only by a few entries) in the replication,
so the CSNs in the other replicas will not match.
I guess we should mention a full resync may be needed ? Or is there a
way to bring back CSNs on replicas ?


Good questions. It depends on how long the machine was down and how 
many changes have happened. It is possible that one would want to do a 
full re-init. I'll add that to the design.


The replication protocol will detect if a replica is too out of date to 
bring up to date with an incremental update, and requires a re-init.





In the 'Returning to a good state' case, can we consider some split
brain approach, where we sever replication and rebuild one server at a
time ?


Perhaps using a firewall, but then you run the risk of each of those 
servers accepting changes during the rebuild and you could end up with 
a lot of collisions, which sort of goes against the point of restoring 
to a known good state.


The changelog is the key here. I'll have to ponder this one a bit, I'm 
a bit conflicted on the right approach.



Maybe we can think of a way to 'mark' all servers as 'bad' so that on
restore the replication agreements do not need to be changed but changes
from 'bad' servers will not be accepted ?
I guess this crosses also a request by someone to be able to 'pause'
replication, would use the same mechanism I guess.


AFAIK there is an option to pause replication now (at least in 1.3).

What you can't do is drop the changelog AFAIK. That is the real 
problem. If you want to restore to a known state you need to drop all 
the changelog entries since that time. I'll check with the 389-ds team 
to see if that is possible. Since we know the time of the backup, we 
might be able to drop newer entries than that.
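The "drop changelog entries newer than the backup" idea could be sketched as below. Note this is a simplification: real 389-ds CSNs are hex strings encoding time, sequence number, and replica ID, and there is no documented tool for trimming them by time; here each entry is reduced to a plain (timestamp, change) pair purely for illustration:

```python
# Sketch: keep only changelog entries at or before the backup time.
# Timestamps and changes below are made up.

BACKUP_TIME = 1_361_300_000  # hypothetical epoch seconds of the backup

changelog = [
    (1_361_200_000, "add uid=alice"),
    (1_361_299_999, "mod uid=alice"),
    (1_361_300_500, "add uid=bogus"),  # after the backup: to be dropped
]

# Trim everything newer than the backup, retaining the history we care about.
kept = [entry for entry in changelog if entry[0] <= BACKUP_TIME]

assert len(kept) == 2
assert all(ts <= BACKUP_TIME for ts, _ in kept)
```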


Not sure what you mean - what exactly do you want to do with the changelog?




Full system backup:
in the first part it is said the process is offline, but in the 'LDAP'
section you say ldapi is used, but that would mean DS is running ?
Also are we sure we can get all data we need via ldapi ? Are we going to
miss any operational attribute ?


The full backup is offline because it is just using tar. This is sort 
of a brute-force backup, copying bits from A to B.


The data backup is online and creates a task in 389-ds to write the 
data and changelog to a file. It should write everything completely. 
We don't do an ldapsearch.


I chose to not back up in ldif because this would back up just the 
data and not the changelog. The other advantage is that the db2bak 
format includes the indexes and ldif restore would require a rebuild.


It is good to have a long term backup in LDIF format - no matter what 
happens to the database, if you have an LDIF backup, you can always 
recreate your data.  So it's good to have both - db2bak format for 
shorter term/frequent backups, and LDIF for longer term/infrequent backups.





For restore are we sure we can reload data w/o alterations ? What about
plugins ? will we have to disable all plugins during a restore ?


Yes, it should be fine. I'm hoping that the version will help us with 
this, to prevent someone from restoring an ancient backup on a new 
system, for example (or the reverse).



For the open questions.

Size of backup:
I think we should make it easy to configure the system to use a custom
directory to dump the backups. This way admins can make sure it is on a
different partition/disk or even over NFS and that the backup will not
fill up the disk on which DS is running.


That's a good idea. I'll have to think about where we would configure 
that. Perhaps as an optional argument to the backup command.
You'll have to figure out a way around selinux, or add some sort of 
selinux magic that allows db2bak to write there.



We should definitely allow encrypting backup files; a gpg public key
would be sufficient.


Ok. I wasn't sure if there would be any corruption concerns.


For replica cases, maybe we can create a command that dumps the
changelog from a good replica and then allows us to replay all changes
that are missing from the backup to bring the server up to the last
minute ?


This would happen when we went online anyway though, at least for the 
entries currently in the changelog. I guess this would have the 
advantage of doing it in bulk and not over a (potentially) slow link.


That should happen automatically with the replication protocol - it will 
attempt to bring older replicas up-to-date or, if they are too far out 
of date, will complain that they need a re-init.




rob


Re: [Freeipa-devel] Backup and Restore design

2013-02-20 Thread Rich Megginson

On 02/20/2013 09:44 AM, Rob Crittenden wrote:

Rich Megginson wrote:

On 02/20/2013 08:38 AM, Rob Crittenden wrote:

Simo Sorce wrote:

On Tue, 2013-02-19 at 22:43 -0500, Rob Crittenden wrote:

I've looked into some basic backup and restore procedures for IPA. My
findings are here: http://freeipa.org/page/V3/Backup_and_Restore


Great summary!

For the catastrophic failure scenario, should we mention how to put 
back

a full failed and restored machine online ? I am thinking the restored
server may be behind (even if only by a few entries) in the 
replication,

so the CSNs in the other replicas will not match.
I guess we should mention a full resync may be needed ? Or is there a
way to bring back CSNs on replicas ?


Good questions. It depends on how long the machine was down and how
many changes have happened. It is possible that one would want to do a
full re-init. I'll add that to the design.


The replication protocol will detect if a replica is too out of date to
bring up to date with an incremental update, and requires a re-init.


Ok, I'll update the design with this, thanks.






In the 'Returning to a good state' case, can we consider some split
brain approach, where we sever replication and rebuild one server at a
time ?


Perhaps using a firewall, but then you run the risk of each of those
servers accepting changes during the rebuild and you could end up with
a lot of collisions, which sort of goes against the point of restoring
to a known good state.

The changelog is the key here. I'll have to ponder this one a bit, I'm
a bit conflicted on the right approach.


Maybe we can think of a way to 'mark' all servers as 'bad' so that on
restore the replication agreements do not need to be changed but 
changes

from 'bad' servers will not be accepted ?
I guess this crosses also a request by someone to be able to 'pause'
replication, would use the same mechanism I guess.


AFAIK there is an option to pause replication now (at least in 1.3).

What you can't do is drop the changelog AFAIK. That is the real
problem. If you want to restore to a known state you need to drop all
the changelog entries since that time. I'll check with the 389-ds team
to see if that is possible. Since we know the time of the backup, we
might be able to drop newer entries than that.


Not sure what you mean - what exactly do you want to do with the 
changelog?


As an example, if we have 2 IPA masters and we restore the data on one 
of them, as soon as it comes back up the other is going to push the 
changelog onto it (as it should) so they are in sync again.


So the question is, how do we restore several masters at the same time 
without applying changes from the changelog?


What I was going to ask is, can we delete all changelog entries from 
Time X until now? That would prevent the sync issues, but it would 
retain the part of the changelog we care about.


Is this the problem you are trying to solve?

You have a situation where some bogus data was introduced into your 
system, and that bogus data has now been replicated everywhere.  You 
want to rollback the state of everything to before the bogus data was 
introduced.  Let's assume you want to delete the bogus data and 
everything that happened after that.


The first step is to pick a server to restore, and restore that server 
from a backup.  The first problem is that this server will need to 
reject any replicated updates, but still allow regular client updates, 
after the restore process is complete (the db is in read-only mode 
during the restore).  The only way to do this now would be to first 
disable all replication agreements on all other replicas going to this 
server, which would be quite painful. Alternately - during the restore 
process, change the replica generation of the restored server - other 
servers would see the different replica generation and would refuse to 
send updates (and report lots of replication errors).


Once the first server is restored, you would just use the online or 
offline replica init procedure.









Full system backup:
in the first part it is said the process is offline, but in the 'LDAP'
section you say ldapi is used, but that would mean DS is running ?
Also are we sure we can get all data we need via ldapi ? Are we 
going to

miss any operational attribute ?


The full backup is offline because it is just using tar. This is sort
of a brute-force backup, copying bits from A to B.

The data backup is online and creates a task in 389-ds to write the
data and changelog to a file. It should write everything completely.
We don't do an ldapsearch.

I chose to not back up in ldif because this would back up just the
data and not the changelog. The other advantage is that the db2bak
format includes the indexes and ldif restore would require a rebuild.


It is good to have a long term backup in LDIF format - no matter what
happens to the database, if you have an LDIF backup, you can always
recreate your data.  So it's good to have

Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-14 Thread Rich Megginson

On 02/14/2013 01:59 AM, Petr Viktorin wrote:

On 02/13/2013 07:11 PM, Simo Sorce wrote:

On Wed, 2013-02-13 at 10:57 -0700, Rich Megginson wrote:


Rich,
is there potential from deadlocking here due to the new transaction
stuff ? Or can we single out this plugin to run before *any*

transaction

is started ?



If you do this in a regular pre-op, not a betxn pre-op, then it
should be fine.


Ok in this case we should be able to create a regular pre-op plugin to
intercept the ldap add call and then use the following flow:
client --(LDAP)-- 389DS --(HTTP/json)-- framework --(LDAP)-- add

So no deadlocks will happen, the remaining issue is how to make sure we
do not loop by mistake in the second add.

One way could be to have loop detection so that if more than two (1.
original, 2. framework) adds for the same DN come in we just return
errors. Another way is to use a special objectclass as I proposed in the
thread and make sure the framework explicitly blacklists it so that it
can never try to send an add with the special oc even by mistake or user
misconfiguration.
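The first option above (reject a DN once more than two ADDs are seen: the original client add plus the framework's augmented add) could be sketched like this; the class and threshold are illustrative, not from any real plugin:

```python
# Sketch of the loop-detection idea: allow at most two ADD operations
# per DN (1. the original client add, 2. the framework's augmented add);
# a third attempt indicates a loop and is rejected.

from collections import Counter

class AddLoopGuard:
    def __init__(self, limit=2):
        self.limit = limit
        self.seen = Counter()

    def allow(self, dn):
        """Record an ADD for dn; return False once the limit is exceeded."""
        key = dn.lower()          # DNs compare case-insensitively
        self.seen[key] += 1
        return self.seen[key] <= self.limit

guard = AddLoopGuard()
dn = "uid=alice,cn=users,cn=accounts,dc=example,dc=com"
assert guard.allow(dn)        # 1st: original client add
assert guard.allow(dn)        # 2nd: framework's augmented add
assert not guard.allow(dn)    # 3rd: loop detected, reject
```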



And a third way is a separate LDAP tree.

I'm not familiar with what DS plugins can do. Can we craft one that 
intercepts all read operations on cn=HR tree,$SUFFIX and does them 
on cn=users,cn=accounts,$SUFFIX instead (using that tree's 
permissions), and forwards all write operations on cn=HR tree to IPA 
via JSON?

Yes.



A fourth way is a proxy/gateway, essentially the same but with a 
separate server.




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-14 Thread Rich Megginson

On 02/14/2013 09:01 AM, Simo Sorce wrote:

On Thu, 2013-02-14 at 16:29 +0100, Petr Viktorin wrote:

Then I recommend doing this. It avoids problems with delegation (the
real tree permissions are used) and looping (plugin and IPA write to
separate trees).

Virtual objects are not free of issues, you are just trading some issues
for others here, and I do not think you are properly evaluating the
trade here.

I am dead set against using staging areas, and I want to see proof they
are a good idea, not handwaving 'fears' of using json in the server to
forward operations directly to the framework.


  Other operations (deletes, mods) can be either
similarly delegated to the real tree,

And *this* is a slippery slope. you are trading delegating one single
operation to the framework directly, with proper error propagation back
to the client, with now implementing full virtualized operations for mod
and delete, and bind ? and search and then ?

You are now basically paving the way for a virtual directory in tree.

Sorry, but no.


or passed through IPA if we want to do that in the future.
Problems with replication can be solved by just not replicating the
separate tree.

More and more hacks, for this supposedly 'cleaner' or 'simpler'
solution ... sorry if I don't bite.


It also doesn't pollute core IPA with special cases, which is what
worries me.

What does this mean ?

We *have* a special case, and we are discussing how to handle it.

The situation here (I do not want to call it a 'problem') is that we
decided to put business logic around user creation in the framework
because we thought that would be easier to prototype and develop in
python code rather than in a plugin.

However this special handling *must* be limited. LDAP is our main
storage, and we must allow LDAP operations against it. Special cases
need to be limited to things that are really harder done in plugins
rather than in python code.

For example if we need triggers for some operations in LDAP, they *must*
be done through a 389ds plugin. Otherwise LDAP quickly becomes a
read-only interface and interoperability quickly goes out of the window.

I always treated the framework as a *management interface* on top. We
can do things that are 'convenient', and 'help' admins avoid mistakes,
but we cannot move core functionality in the framework, it would be a
grave mistake. User creation *is* a special case, but should remain one
of very few special exceptions.

This very thread and the need for the interface proposed in this thread
is a clear example of why we need to be extremely careful not to be too
liberal with what business logic we move in the framework.

LDAP keeps us honest, so we need to limit what we do in the framework,
otherwise we'll keep taking shortcuts and soon enough it goes out of
control and we lose interoperability with anything that is not
purpose-built to talk to our framework.

This should be an unacceptable scenario because it is like putting
ourselves in a ghetto.

We trade greater interoperability and acceptance for small conveniences
here and there.

We must tread *very* carefully along this line.


+1 - virtual trees usually end up being rat holes with no end

What is the problem we're trying to solve?  To be able to call python 
code in response to an LDAP operation?  What if we added a python 
interpreter to 389, like OpenLDAP back-python?




Simo.





Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread Rich Megginson

On 02/13/2013 07:53 AM, Simo Sorce wrote:

Hello list,

We've recently seen a few requests to add FreeIPA users via LDAP
directly. This is a common method supported by many meta-directory/HR
systems, however so far we cannot really recommend it because we add
quite a number of attributes automatically in our framework code when we
create users, and those functions can change in future versions.

However these external tools are usually not very flexible and
supporting them as they are would make for a much better experience for
integrators.

I had a brief discussion with Rob on IRC on how to address this
situation.

If we limit ourselves to users we could probably resolve this problem
with a relatively simple 389ds plugin that intercepts add operations that
try to add a user.

The idea is that the remote system would be allowed to set a minimum of
attributes (even incomplete according to our schema). But as long as a
specific set of objectclasses is set (say person and posixaccount) the
operation would be recognized as an attempt to create a user account.

In this case the plugin would take over the operation and perform a call
against our framework using json.


So the 389 plugin would make an http/json call to the framework?

The call would send a reformatted
request using the data we got in input so that any custom
objectclass/attribute can be preserved. The call would also add a
special flag so the framework knows this is coming from 389ds itself.

The framework would treat this request in a slightly special way, it
would use all the common code currently used to properly format a user
entry adding all the ancillary data we need, but instead of trying to
ldapadd the entry, it would instead return it back to the caller.

389ds at this point gets back a json reply, converts it into an ldap add
operation and proceeds with this new 'augmented' operation instead of
the original one.
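The round trip described above (intercept an incomplete user ADD, ask the framework to fill in the ancillary attributes, proceed with the augmented entry) might look like this in outline. The framework call is stubbed out and every function name, attribute value, and objectclass choice here is a made-up illustration, not the actual IPA code:

```python
# Sketch of the proposed flow: a pre-op intercepts an incomplete user
# ADD, asks the framework (stubbed here) for the fully-formed entry,
# and substitutes the augmented entry for the original operation.

def framework_augment(entry):
    """Stand-in for the HTTP/JSON call to the IPA framework."""
    augmented = {k: (list(v) if isinstance(v, list) else v)
                 for k, v in entry.items()}            # copy, don't mutate
    for oc in ("inetorgperson", "posixaccount", "inetuser"):
        if oc not in augmented.get("objectclass", []):
            augmented.setdefault("objectclass", []).append(oc)
    augmented.setdefault("uidnumber", "12345")         # would be DNA-assigned
    augmented.setdefault("homedirectory", "/home/%s" % entry["uid"])
    return augmented

def preop_add(entry):
    """Intercept an ADD that looks like a user entry and augment it."""
    if "person" in entry.get("objectclass", []):
        return framework_augment(entry)
    return entry  # not a user add: pass the operation through untouched

minimal = {"uid": "alice", "objectclass": ["person", "posixaccount"]}
full = preop_add(minimal)
assert full["homedirectory"] == "/home/alice"
assert "inetuser" in full["objectclass"]
```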

What do people think about this option ?
I think it would be extremely valuable for admins, as it would allow
them to drive user 'synchronization' in a very simple way.
It could also be used to properly augment winsync users so we can allow
full creation when syncing from AD with all the proper attributes
created through the json request. So I see a lot of potential here.

The only doubt is the json back and forth communication.

What do people on the framework side think ? Is there going to be any
big problem in adapting the framework so we can use common code to
prepare the object but then change the last step and return a json reply
instead of performing an ldap add operation ?

Simo.





Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread Rich Megginson

On 02/13/2013 09:57 AM, Simo Sorce wrote:

On Wed, 2013-02-13 at 11:44 -0500, Rob Crittenden wrote:

Simo Sorce wrote:

On Wed, 2013-02-13 at 16:12 +0100, Petr Viktorin wrote:

Our own post-callback assumes the user is already in LDAP, and who
knows what user-supplied callbacks will do. Keep in mind IPA is
pluggable; at least for outside plugins' sake (if not our own sanity's)
we should keep the number of code paths to a minimum.

True which is why my proposal is to not use the standard user-add RPC
call, but have a separate one.

This separate call would only call the core business logic to create the
user account add operation, but none of the external plumbing.

Ideally we split the framework flow like this:

Normal user  - Real user-add ---\               /--- LDAP add
                                 - common logic -
389ds plugin - Mock user-add ---/               \--- json reply


Custom plugins should be called in the common logic and operate on the
object before the ADD is attempted.

If  we do it this way then most of the code path will be in common which
is what we want, and only the mechanical operation of adding the actual
object to ldap will be different.

Simo.


This is missing a few steps. A plugin execution looks like:

Normal user - Real user-add --- pre-op call(s) --- execute (LDAP add
record) --- post-op call(s) which may do additional add/modify

It is the postop calls that would be the problem. They assume that the
entry has already been written (so, for example, it has a valid
UID/GID/ipaUniqueId, etc).

Why are they done after the add ? It seems dangerous.
What happens if the first ldap add succeeds and the post op fails ?


Are you talking about 389 plugins?  If so, then in 1.3.0 and later, if 
you do all of the pre-op/post-op as betxn plugins, then they all take 
place in the same transaction, and they all succeed or all fail.




We should execute the ldap call after the post ops are performed imho.

Simo.





Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread Rich Megginson

On 02/13/2013 10:50 AM, Simo Sorce wrote:

On Wed, 2013-02-13 at 18:11 +0100, Petr Viktorin wrote:

1. create some new subtree, e.g. cn=useradd-playground,dc=example,dc=com

This has more consequences than you may think.
I do not like the separate tree idea because you need to treat it in a
special way. We would probably need special ACIs to not allow any
other client to see this subtree, otherwise they may see incomplete
objects. Yet we need to allow some user to write to it.
We need to decide case by case which plugins to enable, not all DS
plugins can use filters or exclude subtrees so we may have issues with
some plugins if we do not want them to operate on these special entries
and so on.

Is it possible to use read ACIs of the original tree?

Not sure what's the question here. Care to elaborate ?


The problem is that the framework may do more than LDAP operations. It
supports user-written plugins that can literally do anything.
We'd need to limit what IPA plugins can do, or how they do it, otherwise
it's impossible to just return the result.

Ok, then maybe we can have a 'core' common logic part that has some more
limitations.


So I can see a DS plugin calling IPA, but IPA really has to write the
results itself.

This is harder, but could be done too. The way it would work would be to
make 389ds able to perform s4u2self for the case a user bound to LDAP
with a simple bind, and then s4u2proxy with the evidence ticket to get
HTTP/server ticket, then we can connect to the framework using the
user's identity. In turn the framework will perform an additional
s4u2proxy delegation to get a ticket back to ldap/server to perform the
add operation.

This means the flow would be:

client --(LDAP)-- 389DS --(HTTP/json)-- framework --(LDAP)-- add

The problem with this solution is that there is a risk of loops, so we
need a mechanism that makes it impossible for the 389ds plugin to
recurse into the framework. Also we need to make sure the add operation
is not blocking add operations by keeping a transaction open causing a
deadlock. And we cannot do that by immediately returning to the client
because we need to wait for the framework reply to know if the operation
was successful or not.
I see some more low level risks doing it this way which is why I made my
proposal to avoid this potential loop.

Rich,
is there potential from deadlocking here due to the new transaction
stuff ? Or can we single out this plugin to run before *any* transaction
is started ?
If you do this in a regular pre-op, not a betxn pre-op, then it 
should be fine.



I do not see this as a slippery slope, as it would be limited to user
creation by definition.

I'd be extremely surprised if all of these inflexible external HR
systems happened to limit themselves to user creation.

HR systems only deal with hiring and firing employees, they *may* deal
with adding users to groups, but that doesn't require special operations
for us, it is just a matter of adding a 'member' attribute to a group
object.
I do not think we need to offer anything else here. We already have
proof this is sufficient, because we have experience with the AD winsync
plugin which has a similar function, the only difference is that the
driver is an AD server instead of an HR system directly. And with that
plugin we also only create users by default, and we haven't heard many
requests to do anything more. The only request we really had was to make
sure we could sync also posix attrs from AD, but that's it, still just
basic user info.

Simo.





Re: [Freeipa-devel] [PATCH] 1072 enable transaction support

2012-11-16 Thread Rich Megginson

On 11/15/2012 09:53 PM, Rob Crittenden wrote:
This patch enables transaction support in 389-ds-base and fixes a few 
transaction issues within IPA.


This converts parts of the password and modrdn plugins to support 
transactions. The password plugin still largely runs as 
non-transactional because extop plugins aren't supported in 
transactions yet.


I've left the wait_for_attr code in place for now but on reflection we 
should probably remove it. I'll leave that up to the reviewer, but I 
can't see the need for it any more.


In order for this to work you'll need to apply the last two patches 
(both 0001) to slapi-nis and spin it up yourself, otherwise you'll 
have serious deadlock issues. I know this is extra work but this patch 
is potentially disruptive so I figure the earlier it is out the better.


Noriko/Rich/Nalin, can you guys review the slapi-nis pieces? I may 
have been too aggressive in my cleanup.


Noriko/Rich, can you review the 389-ds plugin parts of my 1072 patch?

ack


Once we have an official slapi-nis build with these patches we'll need 
to set the minimum n-v-r in our spec file.


rob




Re: [Freeipa-devel] RFC: freeipa-asterisk plugin

2012-11-01 Thread Rich Megginson

On 11/01/2012 09:32 AM, Simo Sorce wrote:

On Thu, 2012-11-01 at 09:30 -0430, Loris Santamaria wrote:

Hi all,

we plan to write a freeIPA configuration plugin for Asterisk, aiming to
be general and useful enough to be included in Fedora and EPEL, so we
would like to have your input on some issues before we write any code.

Hi Loris,
this is really exciting!


I wrote down the plans so far on this wiki page:

https://github.com/sorbouc/sorbo/wiki/freeipa-asterisk

Basically we would like to know if:

   * It is ok to use cn=asterisk as the base object

This looks like a good choice, maybe check with the asterisk people if
they are ok with using the name that way ?
Anyway any product specific name would work here, as it makes it
extremely unlikely to clash with any future work in upstream FreeIPA or
for any custom data in users' sites.


   * The planned DIT, separating object per type and not per site, is ok
   * The whole stuff of using CoS as a mechanism to apply default
     values to every new object seems right

CoS may have some performance implications, and some usage implication,
you need to evaluate if you are ok with those, but in general setting
defaults is its job so it may be a good fit.

I am CCing Nathan and Rich to ask them about the CoS definitions and
whether using that many attributes would be problematic, so far I've
only seen CoS used for overriding a single attribute so I am not sure
what are the implications with that many.

(Nathan, Rich, can you take a quick look at the paragraph named 'CoS
definition entries' around the middle of the github wiki page pointed by
Loris ?)


The one major drawback of CoS attributes is that they cannot currently 
be indexed - that is, you cannot do a search like (astAccountNAT=somevalue) 
and have it be indexed. You would have to write a virtual attribute 
indexing plugin (similar to what Roles does to allow searches like 
(nsRole=some role dn)).





Another issue is that Asterisk SIP objects in real life are generally
associated with real people and with physical devices.

The physical devices are configured with a piece of software called the
endpoint manager, which could pull from the directory the data
required to generate the IP phones configuration. We have to choices
here. Store the IP phone extra data _with_ the Asterisk SIP object,
adding a ieee802device objectClass to the asteriskSIPuser object. The
other option is to store the ieee802device object separately in a more
appropriate part of the IPA tree and have it reference the SIP object
via a seeAlso or managedBy attribute.

I am not sure that there is an actual 'more appropriate' part of the
tree. Although we do manage 'devices' (computer objects) that is for
machines that are joined to the IPA domain so it would not be applicable
in cases where the device can't actually 'join' an ipa domain. However I
would stay flexible here and allow both cases.
Ie allow to have objects both within the cn=asterisk subtree or in some
other subtree.
The ieee802device is an auxiliary class so it can be associated with any
object in the schema at any time. The AsteriskSIPUser is also an
auxiliary class, so as long as you allow searches that span the whole
tree you can allow people to choose whether to associate these classes
to external objects or to create device objects under cn=asterisk.
Of course you need to decide if allowing that will make your plugin more
complex and how you will manage those objects then.


As for linking SIP users to real people, it would be great to link the
asteriskSIPuser object to an IPA user, but probably not all
organizations interested in this kind of functionality for Asterisk
would manage all of their users with IPA. What if the real user belongs
to a trusted directory, for example? So it seems that for simplicity's
sake we will have to store the name of the person using the phone in the
asteriskSIPuser description attribute.

As for devices I think it would be nice if you could allow both options.
Some deployments may choose to provision new user accounts from the get
go with all the data including asterisk data.
Also putting the data on the user entry make it simpler to allow the
user to change some of the fields in a self service fashion (assuming
there is any attribute that users should be able to change in a self
service way).

Other deployments that may want to handle additional users may need to
be able to add additional unrelated users though, so being able to do
that is also nice.


Speaking of packaging, reading http://abbra.fedorapeople.org/guide.html
it doesn't seem clear to me how to have an extra category of
configuration pages added to the Web UI without modifying the main IPA
page. What is the proper way to add extra pages to the Web UI?

I will let the UI expert reply on this point.


More questions follow :-)

I am reading the project page description and I see your 

[Freeipa-devel] using 389-ds-base with betxn plugins enabled

2012-10-17 Thread Rich Megginson
I'm testing with f18, freeipa-server 3.0.0, 389-ds-base-1.3.0.a1, with 
betxn manually enabled in all plugins in 389.  I did an ipa-server-install.


I have ipa user-add --all --raw working - it returns the mep and 
memberof attributes immediately.  I had to do something like this:


diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py
index 5d667dc..5a490bb 100644
--- a/ipalib/plugins/user.py
+++ b/ipalib/plugins/user.py
@@ -568,6 +568,11 @@ class user_add(LDAPCreate):
             newentry = wait_for_value(ldap, dn, 'objectclass', 'mepOriginEntry')
 
             entry_from_entry(entry_attrs, newentry)
 
+        if not self.api.env.wait_for_attr:
+            # have to update memberof, mep data in entry to return
+            (newdn, newentry) = ldap.get_entry(dn, ['*'])
+            entry_attrs.update(newentry)
+
         if options.get('random', False):
             try:
                 entry_attrs['randompassword'] = unicode(getattr(context, 'randompassword'))


That is, after user_add.post_callback adds the user to the group, it 
needs to get the updated memberof attribute from the user entry, as well 
as the mep data.  I think there are several other places in the code 
where wait_for_attr and wait_for_attr_memberof are used that will have 
to change in a similar manner.  I don't know if this patch is the best 
way to solve the problem - I suppose it would be better to update only 
the memberof, objectclass and mepmanagedentry attributes in 
entry_attrs, but I'm not sure how to do that.
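
As a sketch of that last idea (illustrative only - `refresh_plugin_attrs`
and the fixed attribute list are my names, not IPA code), the post-op
re-read could be restricted to just the plugin-maintained attributes
instead of a full `'*'` read:

```python
# Hypothetical helper: re-read only the attributes that backend plugins
# (memberOf, managed entries) may have changed, and merge them into the
# entry that will be returned to the client.
POST_OP_ATTRS = ['memberof', 'objectclass', 'mepmanagedentry']

def refresh_plugin_attrs(ldap, dn, entry_attrs):
    """Copy plugin-maintained attributes from a fresh read of the entry."""
    (_newdn, newentry) = ldap.get_entry(dn, POST_OP_ATTRS)
    for attr in POST_OP_ATTRS:
        if attr in newentry:
            entry_attrs[attr] = newentry[attr]
    return entry_attrs
```

This avoids clobbering values already in entry_attrs with a full re-read,
at the cost of listing the refreshed attributes explicitly.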


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-09-11 Thread Rich Megginson

On 09/11/2012 10:51 AM, Martin Kosek wrote:

On 09/04/2012 04:40 PM, Rich Megginson wrote:

On 09/03/2012 08:42 AM, Martin Kosek wrote:

On 08/27/2012 06:29 PM, Rich Megginson wrote:

...

This is the plan I intend to take:
1) Add pres,eq indexes for all un-indexed attributes that we want to check:
sourcehost
memberservice
managedby
memberallowcmd
memberdenycmd
ipasudorunas
ipasudorunasgroup

ok

...

Implementation of the Referential Integrity in IPA works OK so far, I just hit
a strange issue when indexing memberallowcmd and memberdenycmd attributes in 
IPA:

dirsrv errors log:
...
[11/Sep/2012:11:39:53 -0400] - The attribute [memberdenycmd] does not have a
valid ORDERING matching rule - error 2:s
[11/Sep/2012:11:39:58 -0400] - userRoot: Indexing attribute: memberdenycmd
[11/Sep/2012:11:39:58 -0400] - userRoot: Finished indexing.
...

This causes RI to fail to handle this attribute in IPA (which is expected based
on the error message).

I checked the attribute types; they look like this:

attributetypes: ( 2.16.840.1.113730.3.8.7.1 NAME 'memberAllowCmd' DESC 'Reference to a command or group of commands that are allowed by the rule.' SUP distinguishedName EQUALITY distinguishedNameMatch ORDERING distinguishedNameMatch SUBSTR distinguishedNameMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 X-ORIGIN 'IPA v2' )

attributetypes: ( 2.16.840.1.113730.3.8.7.2 NAME 'memberDenyCmd' DESC 'Reference to a command or group of commands that are denied by the rule.' SUP distinguishedName EQUALITY distinguishedNameMatch ORDERING distinguishedNameMatch SUBSTR distinguishedNameMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 X-ORIGIN 'IPA v2' )

Using distinguishedNameMatch as an ORDERING rule looks wrong; it is an equality
matching rule only. Does anyone know why it was defined that way? Or what a
correct setting for this attribute would be instead? caseIgnoreOrderingMatch
seems like a logical choice...
Not sure.  The problem is that, in LDAP, there isn't a concept of 
ordering or substring matching on DN values.  
http://www.ietf.org/rfc/rfc4517.txt - there is only 
distinguishedNameMatch which is an EQUALITY rule.  Do you really need 
ordering and substring here?  The problem with using some other ordering 
or substring matching rule is that it will not properly normalize the DN 
valued string, so you may get incorrect results.


I will have to fix the attributeTypes definition in IPA before we can enable
the index and RI for them.
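
Following Rich's point, a fixed definition would simply drop the ORDERING and
SUBSTR clauses and keep only the EQUALITY rule; e.g. for memberAllowCmd (a
sketch based on the definition quoted above, not the final IPA schema):

```ldif
attributetypes: ( 2.16.840.1.113730.3.8.7.1 NAME 'memberAllowCmd'
  DESC 'Reference to a command or group of commands that are allowed by the rule.'
  SUP distinguishedName EQUALITY distinguishedNameMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 X-ORIGIN 'IPA v2' )
```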

Thanks,
Martin




Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-06 Thread Rich Megginson

On 09/06/2012 06:35 AM, Simo Sorce wrote:

On Thu, 2012-09-06 at 14:30 +0200, Martin Kosek wrote:

On 09/05/2012 08:13 PM, Rich Megginson wrote:

On 09/05/2012 12:08 PM, Rob Crittenden wrote:

Add support for the 389-ds posix winsync plugin. This plugin will sync the
POSIX attributes from AD. We need to avoid trying to re-add them in our plugin.

ack

I did a sanity check, that winsync replication still works, everything looks OK.

Pushed to master, ipa-3-0.

Is this plugin configurable in IPA?

Yes.

From the commit I can't tell if it is enabled by default or not.

It is not enabled by default.


Simo.





Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rich Megginson

On 09/06/2012 10:09 AM, Martin Kosek wrote:

On 09/06/2012 06:09 PM, Martin Kosek wrote:

On 09/06/2012 06:05 PM, Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication vector (RUV) on the
other servers.
This can confuse things if the replica is re-added, and it also
causes the
server to calculate changes against a server that may no longer
exist.

389-ds-base provides a new task that self-propagates itself to all
available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully
clean things up.

It isn't perfect. If any replica is down or unavailable at the
time
the
cleanruv task fires, and then comes back up, the old RUV data
may be
re-propagated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the
servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a
replica id that
does exist it can cause a loss of data. I believe I've put in
enough scary
warnings about this.

rob


Good work there, this should make cleaning RUVs much easier than
with the
previous version.

This is what I found during review:

1) The list_ruv and clean_ruv command help in the man page is easy to get
lost in. I think it would help if, for example, we indented all the info
for the commands. As it is, a user could simply overlook the new
commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make
them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run the clean_ruv command in an
unattended way (for better testing), i.e. respect the --force option as
we already do for
ipa-replica-manage del. This fix would aid test automation in the
future.


4) (minor) The new question (and the del one too) does not react too
well to
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for the clean_ruv command without its required parameter is quite
confusing, as it reports that the command is wrong rather than the
parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv |
force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command fails
with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping






tree,   cn=config) has not been cleaned.  You will need to rerun
the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform the user that the command failed,
possibly
because of disconnected replicas, and that they could enable the
replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+        except ldap.INSUFFICIENT_ACCESS:
+            # We can't make the server we're removing read-only but
+            # this isn't a show-stopper
+            root_logger.debug("No permission to switch replica to "
+                              "read-only, continuing anyway")
+            pass


I think this addresses everything.

rob

Thanks, almost there! I just found one more issue which needs to be
fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-06 Thread Rich Megginson

On 09/06/2012 10:40 AM, Mark Reynolds wrote:



On 09/06/2012 12:13 PM, Rich Megginson wrote:

On 09/06/2012 10:09 AM, Martin Kosek wrote:

On 09/06/2012 06:09 PM, Martin Kosek wrote:

On 09/06/2012 06:05 PM, Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:
Deleting a replica can leave a replication vector (RUV) 
on the

other servers.
This can confuse things if the replica is re-added, and 
it also

causes the
server to calculate changes against a server that may no 
longer

exist.

389-ds-base provides a new task that self-propogates 
itself to all

available
replicas to clean this RUV data.

This patch will create this task at deletion time to 
hopefully

clean things up.

It isn't perfect. If any replica is down or unavailable 
at the

time
the
cleanruv task fires, and then comes back up, the old RUV 
data

may be
re-propogated around.

To make things easier in this case I've added two new 
commands to
ipa-replica-manage. The first lists the replication ids 
of all the

servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step 
again.


This is quite dangerous though. If you run cleanruv 
against a

replica id that
does exist it can cause a loss of data. I believe I've 
put in

enough scary
warnings about this.

rob

Good work there, this should make cleaning RUVs much 
easier than

with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite 
lost. I

think
it would
help if we for example have all info for commands 
indented. This

way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv 
to make

them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command 
in an

unattended way
(for better testing), i.e. respect --force option as we 
already

do for
ipa-replica-manage del. This fix would aid test 
automation in the

future.


4) (minor) The new question (and the del too) does not 
react too

well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required 
parameter is quite

confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command 
[clean_ruv |

force-sync |
disconnect | connect | del | re-initialize | list | 
list_ruv]


It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command 
fails

with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping 




tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping 








tree,   cn=config) has not been cleaned.  You will need 
to rerun

the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the 
command failed,

possibly
because of disconnected replicas and that they could 
enable the

replicas and
try again.


7) (minor) pass is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing 
read-only

but
+# this isn't a show-stopper
+root_logger.debug(No permission to switch 
replica to

read-only,
continuing anyway)
+pass


I think this addresses everything.

rob

Re: [Freeipa-devel] [PATCH] 1053 support 389-ds posix-winsync plugin

2012-09-05 Thread Rich Megginson

On 09/05/2012 12:08 PM, Rob Crittenden wrote:
Add support for the 389-ds posix winsync plugin. This plugin will sync 
the POSIX attributes from AD. We need to avoid trying to re-add them 
in our plugin.

ack


rob



Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-09-04 Thread Rich Megginson

On 09/03/2012 08:42 AM, Martin Kosek wrote:

On 08/27/2012 06:29 PM, Rich Megginson wrote:

On 08/27/2012 10:24 AM, Martin Kosek wrote:

On 08/17/2012 04:00 PM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object (reported in
#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations with these operations are being executed (like we do for
selinuxusermap's seealso attribute in HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook to MODRDN and DELETE
callbacks and check that this object's DN is not referenced in other
objects. And if it does, it would reject such modification. Second
option would be to delete the attribute value with now invalid
reference. This would be probably  more suitable for example for
references to user objects.

Any comments to these possible approaches are welcome.

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in 389, except
instead of cleaning up references to moved and deleted entries, you want it to
prevent moving or deleting an entry if that entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some other
entry.

I think that using or enhancing current DS referential integrity plugin will be
the best and the most consistent way to go.

We already use that plugin for some user attributes like manager or
secretary. seeAlso is already covered by default, so for example seeAlso
attribute in SELinux usermap object referencing an HBAC rule will get removed
when relevant HBAC rule is removed (I just checked that).


Note that the managed entry plugin (mep) already handles this for the managedby
attribute.

I assume you are referencing mepmanagedby and mepmanagedentry attributes
which then produce errors like this one:

# ipa netgroup-del foo
ipa: ERROR: Server is unwilling to perform: Deleting a managed entry is not
allowed. It needs to be manually unlinked first.

The managedBy attribute used by host objects, which I had in mind, seems not to be covered.

But you are right, this is pretty much what I wanted. Though in the case
of MEP there is a link in both referenced objects, while in our case we
have just a one-way link.


Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof, this seems like more of a new check
for the referential integrity plugin.


I am now considering whether the move/delete cleanup already present in
the Referential Integrity plugin would be sufficient for us.

Rich, please correct me if I am wrong, but in that case, we would just need to
add relevant attribute names
(memberhost/memberuser/memberallowcmd/memberdenycmd...) to Referential
Integrity plugin configuration as nsslapd-pluginarg7, nsslapd-pluginarg8, ...
I wonder if there would be some performance issues if we add attributes to the
list this way.

No, not if they are indexed for presence and equality.
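
For illustration, adding an attribute to the plugin's argument list is a
one-line config change per attribute (a sketch; the plugin DN matches the
stock 389-ds configuration, and the pluginarg slot numbers depend on what
is already configured):

```ldif
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
add: nsslapd-pluginarg7
nsslapd-pluginarg7: memberallowcmd
-
add: nsslapd-pluginarg8
nsslapd-pluginarg8: memberdenycmd
```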


Hello Rich,
I am back to investigate this ticket. In order to be able to deliver some
working solution to IPA 3.0, I plan to take advantage of current Referential
Integrity Plugin to clean up dangling references.

This is the plan I intend to take:
1) Add pres,eq indexes for all un-indexed attributes that we want to check:
sourcehost
memberservice
managedby
memberallowcmd
memberdenycmd
ipasudorunas
ipasudorunasgroup

ok


2) Add a missing pres index for attributes we want to check but which only
have an eq index:
manager
secretary
memberuser
memberhost

I assume this step is also needed in order to keep the server performance.

yes


3) Add all these attributes to the Referential Integrity Plugin attribute
list if not already present

ok


4) Also add an Index task (nsIndexAttribute) for all these new indexes so that
they are created during IPA server upgrade.

ok


Is this procedure OK DS-wise?

Yes
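
Steps 1, 2 and 4 above map to entries like the following (a sketch; the
DNs assume the default userRoot backend and standard 389-ds index-task
conventions):

```ldif
# pres,eq index definition for one of the attributes
dn: cn=memberallowcmd,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsIndex
cn: memberallowcmd
nsSystemIndex: false
nsIndexType: eq
nsIndexType: pres

# one-off task to build the index (e.g. during upgrade)
dn: cn=index-memberallowcmd,cn=index,cn=tasks,cn=config
changetype: add
objectClass: top
objectClass: extensibleObject
cn: index-memberallowcmd
nsInstance: userRoot
nsIndexAttribute: memberallowcmd
```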


I also have question regarding the following note in RHDS doc chapter 3.6.
Maintaining Referential Integrity:

The Referential Integrity Plug-in should only be enabled on one supplier
replica in a multi-master replication environment to avoid conflict resolution
loops...

Currently, we enable this plugin on all IPA replicas. Is this something we need
to be concerned about and fix ASAP (before we do all this RefInt effort)?


Mark/Nathan - I know you guys

Re: [Freeipa-devel] [PATCH 0012] Change slapi_mods_init in ipa_winsync_pre_ad_mod_user_mods_cb

2012-09-04 Thread Rich Megginson

On 09/04/2012 07:36 AM, Tomas Babej wrote:

Hi,

https://fedorahosted.org/freeipa/ticket/2953

ack


Tomas.



Re: [Freeipa-devel] Paging in Web UI

2012-08-29 Thread Rich Megginson

On 08/29/2012 07:34 AM, John Dennis wrote:

On 08/28/2012 02:31 PM, Endi Sukma Dewata wrote:

The server can keep the search result (either just the pkey list or the
entire entries) in memcached, but the result might be large and there
might be multiple users accessing multiple search pages, so the total
memory requirement could be large.


The default max size of an entry in memcached is 1MB. It can be 
increased to an upper limit of 128MB (but the memcached implementors 
do not recommend this due to degraded performance and the impact on 
the system).


The session data is stored in a dict. You would be sharing the session 
data with other parts of the system. Currently that only includes the 
authentication data which is relatively small. I believe there is also 
some minor bookkeeping overhead that detracts from the per item total.


If we need to exceed the upper bound for paged data I suppose we could 
implement caching within the cache. Almost 1MB of data is a lot of 
paging (and that limit can be increased); it would take a fair amount 
of paging to consume all of it. But the cached query could be 
broken up into cached chunks to limit the impact on memcached and to 
accommodate truly unlimited paging. In most instances you would fetch 
the next/prev page from the cache, but if you walked off either end of 
the cached query you could query again and cache that result. In fact, 
two levels of caching might be an actual implementation requirement to 
handle all cases.
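
The chunking idea can be sketched in a few lines of Python (illustrative
only - `chunk_result` and the key scheme are my invention, not IPA code):
split the cached key list into independently cacheable pieces that each
stay under the per-item size limit.

```python
# Split a large cached result (here, a list of primary keys) into
# chunks that each serialize to at most `limit` bytes, keyed so that
# individual chunks can be fetched from memcached independently.
import json

CHUNK_BYTES = 1024 * 1024  # memcached's default per-item size limit

def chunk_result(query_id, pkeys, limit=CHUNK_BYTES):
    """Yield (cache_key, serialized_chunk) pairs, each under `limit` bytes."""
    chunk, size, index = [], 2, 0          # 2 accounts for the enclosing '[]'
    for pkey in pkeys:
        item_len = len(json.dumps(pkey))
        if chunk and size + item_len + 1 > limit:
            yield ('%s:%d' % (query_id, index),
                   json.dumps(chunk, separators=(',', ':')))
            chunk, size, index = [], 2, index + 1
        chunk.append(pkey)
        size += item_len + 1               # +1 for the separating comma
    if chunk:
        yield ('%s:%d' % (query_id, index),
               json.dumps(chunk, separators=(',', ':')))
```

A pager would then fetch only the chunk covering the requested page, and
re-run the search if a chunk has expired.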




We can also use Simple Paged Results, but if I understood correctly it
requires the httpd to maintain an open connection to the LDAP server for
each user and for each page.


Not for each user.  In 389-ds-base-1.2.11 you can have multiple simple 
paged result searches on a single connection - see 
https://fedorahosted.org/389/ticket/260


This is the problem that VLV and Simple Paged Results are trying to 
solve - how to allow users to scroll/page through very large result sets.



I'm not sure memcached can be used to move
the connection object among forked httpd processes. Also Simple Paged
Results can only go forward, so no Prev button unless somebody keeps the
results.


No, the connection object cannot be moved between processes via 
memcached, because sockets are a property of the process that created them.






Re: [Freeipa-devel] Paging in Web UI

2012-08-29 Thread Rich Megginson

On 08/29/2012 09:16 AM, Endi Sukma Dewata wrote:

On 8/29/2012 9:49 AM, Rich Megginson wrote:
We can also use Simple Paged Results, but if I understood correctly it
requires the httpd to maintain an open connection to the LDAP server for
each user and for each page.


Not for each user.  In 389-ds-base-1.2.11 you can have multiple simple
paged result searches on a single connection - see
https://fedorahosted.org/389/ticket/260


Well this is the crux of the problem. We do not maintain a connection
per user. LDAP connections exist for the duration of a single IPA RPC
call. Those RPC calls may be multiplexed across multiple IPA server
instances, each of which is its own process.

Our LDAP connections are very short lived and are scattered across
processes.


So it sounds like, in order to be useful to IPA, we need to extend
simple paged results:
1) ability to have the cookie (i.e. the results list and current
position in that list) live outside of a connection
2) ability to go backwards in a list

Is this correct?  If so, please file 389 RFE's for these.


For (1) how does the httpd send the information that it wants to use 
the result list from a previous connection? Is it going to use a new 
LDAP control?


Not sure.  Might be able to use the existing simple paged result control.


Or would there be a session ID?

If we implement (2) does it mean the pages still need to be accessed 
sequentially, or can we jump to any random page?


We should be able to support random page access.  But I think we could 
support the ability to go backwards from the current page without random 
access support.
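
As a sketch of what (1) and (2) would mean in practice (purely
illustrative - this is not a 389-ds or IPA API): if the completed result
list is cached server-side under an opaque cookie, any page can be served
on any connection, forwards or backwards.

```python
# Illustrative cache of completed search results, keyed by an opaque
# cookie that is independent of any LDAP connection. In a real
# deployment this dict would live in memcached or another shared store.
import uuid

_result_cache = {}

def store_result(pkeys):
    """Cache an ordered result list; return an opaque cookie for it."""
    cookie = str(uuid.uuid4())
    _result_cache[cookie] = list(pkeys)
    return cookie

def fetch_page(cookie, page, page_size=20):
    """Return an arbitrary 0-based page; works forwards and backwards."""
    pkeys = _result_cache[cookie]
    start = page * page_size
    return pkeys[start:start + page_size]
```

The open question in the thread is only where that cookie-to-result
mapping lives and how the client presents the cookie (an LDAP control or
a session id).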




Also if I understood correctly the LDAP connections are made using 
user credentials, not Directory Manager, so things like 
nsslapd-sizelimit will apply. Does it mean a non-admin cannot browse 
the entire directory?

In 1.2.10 we have different limits for paged result searches:

https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/User_Account_Management-Setting_Resource_Limits_Based_on_the_Bind_DN.html




Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-08-27 Thread Rich Megginson

On 08/27/2012 06:41 AM, Dmitri Pal wrote:

On 08/17/2012 10:00 AM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object (reported in
#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations with these operations are being executed (like we do for
selinuxusermap's seealso attribute in HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook to MODRDN and DELETE
callbacks and check that this object's DN is not referenced in other
objects. And if it does, it would reject such modification. Second
option would be to delete the attribute value with now invalid
reference. This would be probably  more suitable for example for
references to user objects.

Any comments to these possible approaches are welcome.

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in
389, except instead of cleaning up references to moved and deleted
entries, you want it to prevent moving or deleting an entry if that
entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some
other entry.

Note that the managed entry plugin (mep) already handles this for the
managedby attribute.

Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof, this seems like more of a
new check for the referential integrity plugin.

Did it translate into a DS ticket?

No.  Is there an IPA ticket to link to?

I suspect it is not a big change and would solve a bunch of ugly
referential integrity problems.


Thank you,
Martin




Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-08-27 Thread Rich Megginson

On 08/27/2012 09:25 AM, Rich Megginson wrote:

On 08/27/2012 06:41 AM, Dmitri Pal wrote:

On 08/17/2012 10:00 AM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object (reported in
#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations with these operations are being executed (like we do for
selinuxusermap's seealso attribute in HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook to MODRDN and DELETE
callbacks and check that this object's DN is not referenced in other
objects. And if it does, it would reject such modification. Second
option would be to delete the attribute value with now invalid
reference. This would be probably  more suitable for example for
references to user objects.

Any comments to these possible approaches are welcome.

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in
389, except instead of cleaning up references to moved and deleted
entries, you want it to prevent moving or deleting an entry if that
entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some
other entry.

Note that the managed entry plugin (mep) already handles this for the
managedby attribute.

Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof, this seems like more of a
new check for the referential integrity plugin.

Did it translate into a DS ticket?

No.  Is there an IPA ticket to link to?


https://fedorahosted.org/389/ticket/438


I suspect it is not a big change and would solve a bunch of ugly
referential integrity problems.


Thank you,
Martin




Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-08-27 Thread Rich Megginson

On 08/27/2012 10:24 AM, Martin Kosek wrote:

On 08/17/2012 04:00 PM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object (reported in
#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations on these objects are executed (like we do for
selinuxusermap's seealso attribute in the HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook into the MODRDN and DELETE
callbacks and check that the object's DN is not referenced in other
objects; if it is, it would reject the modification. A second option
would be to delete the attribute value with the now-invalid reference,
which would probably be more suitable for, e.g., references to user
objects.

Any comments to these possible approaches are welcome.

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in 389, except
instead of cleaning up references to moved and deleted entries, you want it to
prevent moving or deleting an entry if that entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some other 
entry.

I think that using or enhancing the current DS referential integrity plugin will be
the best and most consistent way to go.

We already use that plugin for some user attributes like manager or
secretary. seeAlso is already covered by default, so for example a seeAlso
attribute in a SELinux usermap object referencing an HBAC rule will get removed
when the relevant HBAC rule is removed (I just checked that).


Note that the managed entry plugin (mep) already handles this for the managedby
attribute.

I assume you are referencing mepmanagedby and mepmanagedentry attributes
which then produce errors like this one:

# ipa netgroup-del foo
ipa: ERROR: Server is unwilling to perform: Deleting a managed entry is not
allowed. It needs to be manually unlinked first.

The managedBy attribute used by host objects, which I had in mind, seems not to be covered.

But you are right, this is pretty much what I wanted. Though in the case of MEP
there is a link in both referenced objects, whereas in our case we have just a
one-way link.


Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof, this seems like more of a new check
for the referential integrity plugin.


I am now considering whether the move/delete cleanup already present in the
Referential Integrity plugin would be sufficient for us.

Rich, please correct me if I am wrong, but in that case, we would just need to
add relevant attribute names
(memberhost/memberuser/memberallowcmd/memberdenycmd...) to Referential
Integrity plugin configuration as nsslapd-pluginarg7, nsslapd-pluginarg8, ...
I wonder if there would be some performance issues if we add attributes to the
list this way.

No, not if they are indexed for presence and equality.

But referential integrity will not prevent deletion or moving entries - 
it will delete/move references.
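Purely as an illustration of what adding such an attribute involves, the two pieces would look roughly like this in classic dse.ldif-style 389-ds configuration (a sketch: the backend name userRoot and the free pluginarg slot 7 are assumptions, so check the actual config before applying):

```ldif
# Tell the refint plugin to also maintain memberhost references
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
add: nsslapd-pluginarg7
nsslapd-pluginarg7: memberhost

# Index memberhost for equality and presence so refint searches stay fast
dn: cn=memberhost,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsIndex
cn: memberhost
nsSystemIndex: false
nsIndexType: eq
nsIndexType: pres
```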


Rob, do you think that cleaning up the broken references during a DS postop
instead of raising a preop error is OK for IPA references? I went through the
referential attributes we use (git grep LDAPAddMember) and I think it should
be sufficient. We could cover some special cases with a query in our framework
like you did in hbacrule-del.

Martin




Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-08-27 Thread Rich Megginson

On 08/27/2012 11:12 AM, Rob Crittenden wrote:

Rich Megginson wrote:

On 08/27/2012 06:41 AM, Dmitri Pal wrote:

On 08/17/2012 10:00 AM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object 
(reported in

#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations on these objects are executed (like we do for
selinuxusermap's seealso attribute in the HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook into the MODRDN and DELETE
callbacks and check that the object's DN is not referenced in other
objects; if it is, it would reject the modification. A second option
would be to delete the attribute value with the now-invalid reference,
which would probably be more suitable for, e.g., references to user
objects.

Any comments to these possible approaches are welcome.

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in
389, except instead of cleaning up references to moved and deleted
entries, you want it to prevent moving or deleting an entry if that
entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some
other entry.

Note that the managed entry plugin (mep) already handles this for the
managedby attribute.

Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof, this seems like more of a
new check for the referential integrity plugin.

Did it translate into a DS ticket?

No.  Is there an IPA ticket to link to?


Not yet. I wonder if we need to flesh out what this means (and may
mean) going forward. Are there any downsides to linking entries in
this way?


Performance is one downside.  Although with the work that Noriko is 
doing on transactions, this may not be as big an issue.  SQL databases 
have long supported this sort of referential integrity checking, so it 
makes sense to put this sort of business logic in the database.




rob




Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-08-27 Thread Rich Megginson

On 08/27/2012 02:27 PM, Martin Kosek wrote:

On Mon, 2012-08-27 at 10:29 -0600, Rich Megginson wrote:

On 08/27/2012 10:24 AM, Martin Kosek wrote:

On 08/17/2012 04:00 PM, Rich Megginson wrote:

On 08/17/2012 07:44 AM, Martin Kosek wrote:

Hi guys,

I am now investigating ticket #2866:
https://fedorahosted.org/freeipa/ticket/2866

And I am thinking about possible solutions for this problem. In a
nutshell, we do not properly check referential integrity in some IPA
objects where we keep one-way DN references to other objects, e.g. in
- managedBy attribute for a host object
- memberhost attribute for HBAC rule object
- memberuser attribute for user object
- memberallowcmd or memberdenycmd for SUDO command object (reported in
#2866)
...

Currently, I see 2 approaches to solve this:
1) Add relevant checks to our ipalib plugins where problematic
operations on these objects are executed (like we do for
selinuxusermap's seealso attribute in the HBAC plugin)
This of course would not prevent direct LDAP deletes.

2) Implement a preop DS plugin that would hook into the MODRDN and DELETE
callbacks and check that the object's DN is not referenced in other
objects; if it is, it would reject the modification. A second option
would be to delete the attribute value with the now-invalid reference,
which would probably be more suitable for, e.g., references to user
objects.

Any comments to these possible approaches are welcome.

Rich, do you think that as an alternative to these 2 approaches,
memberOf plugin could be eventually modified to do this task?

This is very similar to the referential integrity plugin already in 389, except
instead of cleaning up references to moved and deleted entries, you want it to
prevent moving or deleting an entry if that entry is referenced by the
managedby/memberhost/memberuser/memberallowcmd/memberdenycmd of some other 
entry.

I think that using or enhancing the current DS referential integrity plugin will be
the best and most consistent way to go.

We already use that plugin for some user attributes like manager or
secretary. seeAlso is already covered by default, so for example a seeAlso
attribute in a SELinux usermap object referencing an HBAC rule will get removed
when the relevant HBAC rule is removed (I just checked that).


Note that the managed entry plugin (mep) already handles this for the managedby
attribute.

I assume you are referencing mepmanagedby and mepmanagedentry attributes
which then produce errors like this one:

# ipa netgroup-del foo
ipa: ERROR: Server is unwilling to perform: Deleting a managed entry is not
allowed. It needs to be manually unlinked first.

The managedBy attribute used by host objects, which I had in mind, seems not to be covered.

But you are right, this is pretty much what I wanted. Though in the case of MEP
there is a link in both referenced objects, whereas in our case we have just a
one-way link.


Are you already using the memberof plugin for
memberhost/memberuser/memberallowcmd/memberdenycmd?

This doesn't seem like a job for memberof, this seems like more of a new check
for the referential integrity plugin.


I am now considering whether the move/delete cleanup already present in the
Referential Integrity plugin would be sufficient for us.

Rich, please correct me if I am wrong, but in that case, we would just need to
add relevant attribute names
(memberhost/memberuser/memberallowcmd/memberdenycmd...) to Referential
Integrity plugin configuration as nsslapd-pluginarg7, nsslapd-pluginarg8, ...
I wonder if there would be some performance issues if we add attributes to the
list this way.

No, not if they are indexed for presence and equality.

But referential integrity will not prevent deletion or moving entries -
it will delete/move references.

I understand that. After some reconsideration, I think that cleaning up
dangling references as a postop should be OK for most of the referential
attributes we use. But I would like a second opinion on that.

So do I understand it correctly, that in case we want to go this way in
IPA, the recommended approach DS-wise would be to:
- add presence and equality indexes to IPA for the attributes we want to
have checked for referential integrity
- configure DS Referential Integrity plugin to check these attributes

Or would it be better to wait on relevant DS changes you mentioned that
Noriko is working on?


Also look at the Linked Attribute plugin - it may be able to do what you 
want right now - http://port389.org/wiki/Linked_Attributes_Design
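From memory of that design page, a link is configured with a child entry under the plugin's config entry, something like the sketch below (the directReport/manager pair is the example the page uses; verify the exact attribute names against the page before relying on this):

```ldif
dn: cn=Manager Link,cn=Linked Attributes,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: extensibleObject
cn: Manager Link
linkType: directReport
managedType: manager
```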




Thanks,
Martin


Rob, do you think that cleaning up the broken references during a DS postop
instead of raising a preop error is OK for IPA references? I went through the
referential attributes we use (git grep LDAPAddMember) and I think it should
be sufficient. We could cover some special cases with a query in our framework
like you did in hbacrule-del.

Martin





Re: [Freeipa-devel] DN patch and documentation

2012-08-09 Thread Rich Megginson

On 08/09/2012 01:31 PM, John Dennis wrote:

On 08/09/2012 02:08 PM, Rob Crittenden wrote:

I've been going through the diffs and have some questions in ldap2.py,
these are primarily pedantic:


Some of your questions can be answered by:

http://jdennis.fedorapeople.org/dn_summary.html



What is the significance of L here:

'2.16.840.1.113730.3.8.L.8'  : DN,  # ipaTemplateRef


These came from:

install/share/60policyv2.ldif

I didn't notice the .L in the OID, which isn't legal (correct?). So it
beats me; I can't explain why the OID is specified that way.


hmm - did you see this comment at the top of the file?

# Policy related schema.
# This file should not be loaded.
# Remove this comment and assign right OIDs when time comes to do something
# about this functionality.







There are quite a few assert isinstance(dn, DN). I assume this is mainly
meant for developers, we aren't expecting to handle that gracefully at
runtime?


Correct. They are there to prevent future use of dn strings by 
developers whose habits die hard. The goal is 100% DN usage 100% of 
the time.


If we allow strings some of the time we're on a slippery slope. Think 
of it as type enforcement meant to protect ourselves.


The assertions also proved valuable in finding a number of places 
where functions failed to return values or correct values. This showed 
up a lot in the pre and post callbacks whose signature specifies a 
return value but the developer forgot to return a value. Apparently 
pylint does not pick these things up.


In production we should disable assertions; we should open a ticket to
do that.




It seems to me that allowing a DN passed in as a string would be nice in
some cases. Calling bind, for example, looks very awkward and isn't as
readable as "cn=directory manager", IMHO. What is the downside of having a
converter for these?


See the above documentation for the rationale. In particular the
section called "Why did I use tuples instead of strings when
initializing DN's?"
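The gist of that rationale can be shown with a toy example (a simplified stand-in, not the real ipapython DN class; the escaping below covers only the RFC 4514 special characters, not leading/trailing spaces or a leading #):

```python
def rdn(attr, value):
    # Escape characters that are special in RFC 4514 string DNs.
    for ch in ('\\', ',', '+', '"', '<', '>', ';', '='):
        value = value.replace(ch, '\\' + ch)
    return '%s=%s' % (attr, value)

def build_dn(*avas):
    # Building from (attr, value) tuples means callers never hand-escape:
    # build_dn(('cn', 'directory manager')) instead of a raw DN string.
    return ','.join(rdn(attr, value) for attr, value in avas)

print(build_dn(('cn', 'Smith, John'), ('dc', 'example'), ('dc', 'com')))
# -> cn=Smith\, John,dc=example,dc=com
```

With raw strings, the comma inside "Smith, John" would silently split the RDN; with tuples the library owns the escaping.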



search_ext() has an extra embedded line which makes the function a bit
confusing to read.


O.K. you lost me on that one :-)


Do we need to keep _debug_log_ldap?


I don't think it hurts. I found it very useful to see the actual LDAP 
data when debugging.



In search_ext_s() (and a few others) should we just return
self.convert_result() directly?


Petr asked this question too. I don't have strong feelings either way. 
The reason it's stored in another variable is that it's easy to add a print 
statement or stop in the debugger and examine it when need be. It also 
makes it clear it's the IPA formatted results. I don't think there is 
much cost to using the variable. I'm not attached to it, it can be 
changed.


In ldap2::__init__ what would raise an AttributeError and would we want
to hide that fact with an empty base_dn? Is this attempting to allow the
use of ldap2.ldap2 in cases where the api is not initialized?


Beats me, that's been in the code for a long time. An empty DN is the 
same as an empty string. Maybe we should set it to None instead so we 
know base_dn was never initialized properly.



In ipaserver/ipaldap.py there is a continuance of the assertions, some
of which don't seem to be needed. For example, in toDict() it shouldn't
be possible to create an Entry object without having self.dn be a DN
object, so is this assertion necessary?


Many objects now enforce DN's for their dn attribute. A dn attribute 
may only ever be None or an instance of a DN. This is implemented with


dn = ipautil.dn_attribute_property('_dn')

In objects which define their dn property this way an assert 
isinstance(self.dn, DN) is not necessary because the dn property 
enforces it. So you're correct, those particular asserts could be 
removed. They were added before dn type enforcement via object 
property was added. I could go through and clean those up, but perhaps 
we should open a separate ticket for that.
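A minimal sketch of what such a type-enforcing property can look like (an assumption-laden simplification of the real ipautil.dn_attribute_property, which may also coerce some inputs):

```python
class DN(object):
    # Stand-in for the real ipapython DN class.
    def __init__(self, *rdns):
        self.rdns = rdns

def dn_attribute_property(private_name):
    # Property that lets a dn attribute only ever be None or a DN
    # instance, making per-use "assert isinstance(self.dn, DN)" redundant.
    def getter(self):
        return getattr(self, private_name, None)
    def setter(self, value):
        if value is not None and not isinstance(value, DN):
            raise TypeError('dn must be None or a DN instance, got %s'
                            % type(value).__name__)
        setattr(self, private_name, value)
    return property(getter, setter)

class Entry(object):
    dn = dn_attribute_property('_dn')
```

Any attempt to assign a plain string to `entry.dn` fails immediately with a TypeError instead of surviving until a later assert.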








Re: [Freeipa-devel] slow response

2012-08-02 Thread Rich Megginson

On 08/02/2012 11:17 AM, Stephen Ingram wrote:

On Wed, Aug 1, 2012 at 7:35 AM, Simo Sorce <s...@redhat.com> wrote:

On Tue, 2012-07-31 at 08:49 -0700, Stephen Ingram wrote:

On Tue, Jul 31, 2012 at 1:39 AM, Petr Spacek <pspa...@redhat.com> wrote:

On 07/31/2012 12:27 AM, John Dennis wrote:


What is taking so long with session bookkeeping? I don't know yet. I would
need more timing instrumentation. I will say when I looked at the
python-krb5
code (which we use to populate the ccache from the session and read back
to
store in the session) seemed to be remarkably inefficient. We also elected
to
use file based ccache rather than in-memory ccache (that means there is a
bit
of file-IO occurring).


A note regarding python-krbV:
I used python-krbV extensively in my thesis for KDC stress test. Python-krbV
can obtain several thousands of TGTs per second (even with ccache in a
file). AFAIK VFS calls are not done synchronously. But other parts of
python-krbV were left uncovered, so it can contain some surprises.

=== Wild speculation follows ===
1.5 seconds is an incredibly long time; it sounds like some kind of timeout.
Krb5 libs have a usual timeout of 1 second per request.

Are all KDCs in /etc/krb5.conf alive and reachable?

In this case, as I'm referring to the extreme slowness of the Web UI,
the KDC is on the same system (the ipa server) that is making the
request, correct?


Is SSSD running on problematic server?

Yes. Again, I'm guessing the problematic server is the IPA server itself.


Is proper KDC selected by SSSD KDC auto-locator plugin?
(See /var/lib/sss/pubconf/)

Yes, I checked that file and it is the IP address of the IPA server on
the same server. Perhaps this should be 127.0.0.1 instead?

I also have checked the resolv.conf file, and indeed the IP points to
the IPA server itself (same machine) as expected. Both forward and
reverse DNS work. I'm not really sure what else could be setup
incorrectly to cause any KDC slowness.

Due to the extreme UI slowness issue, I have not created any replicas
so this system is it. I'm not so sure I would be able to see the 1.5 s
delay if it weren't compounded by the overall slowness of the Web UI,
however, the KDC seems to perform well for other systems in the realm.
I'm certainly not taxing it with a huge load, but tickets seem to be
issued without delay.

Stephen,
another user sent me a wireshark trace for a similar performance issue.
So far I see a pause when doing the first leg of a SASL authentication.
This may well explain also your issue.

Can you test connecting to the ldap server using ldapsearch -Y GSSAPI
(you need a kerberos ticket) and tell me if you experience any
delay?
If you could run a bunch of searches in a loop and take a wireshark
trace that may help analyzing the timings and seeing if there is a
correlation.

I've done this. It looks like this delay has been uncovered already
though? I can still send you the dump privately if you think it would
help.


We have a somewhat reproducible test case - see 
https://bugzilla.redhat.com/show_bug.cgi?id=845125 - note that the 
problem only seems to occur on RHEL 6.3 with the latest Z-stream packages.




Steve




Re: [Freeipa-devel] slow response

2012-08-02 Thread Rich Megginson

On 08/02/2012 11:29 AM, Stephen Ingram wrote:

On Thu, Aug 2, 2012 at 10:23 AM, Simo Sorce <s...@redhat.com> wrote:

On Thu, 2012-08-02 at 10:17 -0700, Stephen Ingram wrote:

On Wed, Aug 1, 2012 at 7:35 AM, Simo Sorce <s...@redhat.com> wrote:

On Tue, 2012-07-31 at 08:49 -0700, Stephen Ingram wrote:

On Tue, Jul 31, 2012 at 1:39 AM, Petr Spacek <pspa...@redhat.com> wrote:

On 07/31/2012 12:27 AM, John Dennis wrote:


What is taking so long with session bookkeeping? I don't know yet. I would
need more timing instrumentation. I will say when I looked at the
python-krb5
code (which we use to populate the ccache from the session and read back
to
store in the session) seemed to be remarkably inefficient. We also elected
to
use file based ccache rather than in-memory ccache (that means there is a
bit
of file-IO occurring).


A note regarding python-krbV:
I used python-krbV extensively in my thesis for KDC stress test. Python-krbV
can obtain several thousands of TGTs per second (even with ccache in a
file). AFAIK VFS calls are not done synchronously. But other parts of
python-krbV were left uncovered, so it can contain some surprises.

=== Wild speculation follows ===
1.5 seconds is an incredibly long time; it sounds like some kind of timeout.
Krb5 libs have a usual timeout of 1 second per request.

Are all KDCs in /etc/krb5.conf alive and reachable?

In this case, as I'm referring to the extreme slowness of the Web UI,
the KDC is on the same system (the ipa server) that is making the
request, correct?


Is SSSD running on problematic server?

Yes. Again, I'm guessing the problematic server is the IPA server itself.


Is proper KDC selected by SSSD KDC auto-locator plugin?
(See /var/lib/sss/pubconf/)

Yes, I checked that file and it is the IP address of the IPA server on
the same server. Perhaps this should be 127.0.0.1 instead?

I also have checked the resolv.conf file, and indeed the IP points to
the IPA server itself (same machine) as expected. Both forward and
reverse DNS work. I'm not really sure what else could be setup
incorrectly to cause any KDC slowness.

Due to the extreme UI slowness issue, I have not created any replicas
so this system is it. I'm not so sure I would be able to see the 1.5 s
delay if it weren't compounded by the overall slowness of the Web UI,
however, the KDC seems to perform well for other systems in the realm.
I'm certainly not taxing it with a huge load, but tickets seem to be
issued without delay.

Stephen,
another user sent me a wireshark trace for a similar performance issue.
So far I see a pause when doing the first leg of a SASL authentication.
This may well explain also your issue.

Can you test connecting to the ldap server using ldapsearch -Y GSSAPI
(you need a kerberos ticket) and tell me if you experience any
delay?
If you could run a bunch of searches in a loop and take a wireshark
trace that may help analyzing the timings and seeing if there is a
correlation.

I've done this. It looks like this delay has been uncovered already
though? I can still send you the dump privately if you think it would
help.

I think we reproduced it, can you confirm you are also running on RHEL?
So far it seems the only platform on which we can reproduce it is RHEL 6.3.


Yes, I'm running RHEL 6.3. I just ran the command in the BZ and it
takes 1.542s for me. What are Z-stream packages?


They are packages delivered between minor RHEL releases e.g. between 
RHEL 6.3 and RHEL 6.4 - the 389-ds-base package released with RHEL 6.3 
was 389-ds-base-1.2.10.2-15.el6 - since RHEL 6.3, we released some 
bugfix packages, and the latest is now 389-ds-base-1.2.10.2-20.el6_3 - 
note the _3 at the end - this means it is a bugfix release for RHEL 
6.3 - Z-stream is just Red Hat terminology for a package released 
between minor RHEL releases - the Z stands for the Z in the 
versioning scheme X.Y.Z




Is this new for
389ds?

Steve




Re: [Freeipa-devel] slow response

2012-08-02 Thread Rich Megginson

On 08/02/2012 11:56 AM, Stephen Ingram wrote:

On Thu, Aug 2, 2012 at 10:33 AM, Rich Megginson <rmegg...@redhat.com> wrote:

On 08/02/2012 11:29 AM, Stephen Ingram wrote:

On Thu, Aug 2, 2012 at 10:23 AM, Simo Sorce <s...@redhat.com> wrote:

On Thu, 2012-08-02 at 10:17 -0700, Stephen Ingram wrote:

On Wed, Aug 1, 2012 at 7:35 AM, Simo Sorce <s...@redhat.com> wrote:

On Tue, 2012-07-31 at 08:49 -0700, Stephen Ingram wrote:

On Tue, Jul 31, 2012 at 1:39 AM, Petr Spacek <pspa...@redhat.com> wrote:

On 07/31/2012 12:27 AM, John Dennis wrote:


What is taking so long with session bookkeeping? I don't know yet. I
would
need more timing instrumentation. I will say when I looked at the
python-krb5
code (which we use to populate the ccache from the session and read
back
to
store in the session) seemed to be remarkably inefficient. We also
elected
to
use file based ccache rather than in-memory ccache (that means there
is a
bit
of file-IO occurring).


A note regarding python-krbV:
I used python-krbV extensively in my thesis for KDC stress test.
Python-krbV
can obtain several thousands of TGTs per second (even with ccache in
a
file). AFAIK VFS calls are not done synchronously. But other parts of
python-krbV were left uncovered, so it can contain some surprises.

=== Wild speculation follows ===
1.5 seconds is an incredibly long time; it sounds like some kind of timeout.
Krb5 libs have a usual timeout of 1 second per request.

Are all KDCs in /etc/krb5.conf alive and reachable?

In this case, as I'm referring to the extreme slowness of the Web UI,
the KDC is on the same system (the ipa server) that is making the
request, correct?


Is SSSD running on problematic server?

Yes. Again, I'm guessing the problematic server is the IPA server
itself.


Is proper KDC selected by SSSD KDC auto-locator plugin?
(See /var/lib/sss/pubconf/)

Yes, I checked that file and it is the IP address of the IPA server on
the same server. Perhaps this should be 127.0.0.1 instead?

I also have checked the resolv.conf file, and indeed the IP points to
the IPA server itself (same machine) as expected. Both forward and
reverse DNS work. I'm not really sure what else could be setup
incorrectly to cause any KDC slowness.

Due to the extreme UI slowness issue, I have not created any replicas
so this system is it. I'm not so sure I would be able to see the 1.5 s
delay if it weren't compounded by the overall slowness of the Web UI,
however, the KDC seems to perform well for other systems in the realm.
I'm certainly not taxing it with a huge load, but tickets seem to be
issued without delay.

Stephen,
another user sent me a wireshark trace for a similar performance issue.
So far I see a pause when doing the first leg of a SASL authentication.
This may well explain also your issue.

Can you test connecting to the ldap server using ldapsearch -Y GSSAPI
(you need a kerberos ticket) and tell me if you experience any
delay?
If you could run a bunch of searches in a loop and take a wireshark
trace that may help analyzing the timings and seeing if there is a
correlation.

I've done this. It looks like this delay has been uncovered already
though? I can still send you the dump privately if you think it would
help.

I think we reproduced it, can you confirm you are also running on RHEL?
So far it seems the only platform on which we can reproduce it is RHEL 6.3.


Yes, I'm running RHEL 6.3. I just ran the command in the BZ and it
takes 1.542s for me. What are Z-stream packages?


They are packages delivered between minor RHEL releases e.g. between RHEL
6.3 and RHEL 6.4 - the 389-ds-base package released with RHEL 6.3 was
389-ds-base-1.2.10.2-15.el6 - since RHEL 6.3, we released some bugfix
packages, and the latest is now 389-ds-base-1.2.10.2-20.el6_3 - note the
_3 at the end - this means it is a bugfix release for RHEL 6.3 -
Z-stream is just Red Hat terminology for a package released between minor
RHEL releases - the Z stands for the Z in the versioning scheme X.Y.Z

Got it. We are using those packages now. Although this might or might
not be the cause of the slow Web UI, the Web UI has been slow since
the initial 6.3 release. If this 389 slowness was only introduced in
the Z-stream 389ds, then it is likely not, or not the only, cause of
the Web UI slowness.

The 389 slowness was not introduced in the Z-stream package (-20.el6_3)
- the slowness is present in the RHEL 6.3.0 package (-15.el6).


Steve




Re: [Freeipa-devel] slow response

2012-08-01 Thread Rich Megginson

On 08/01/2012 09:20 AM, Loris Santamaria wrote:

On Wed, 01-08-2012 at 10:35 -0400, Simo Sorce wrote:

On Tue, 2012-07-31 at 08:49 -0700, Stephen Ingram wrote:

On Tue, Jul 31, 2012 at 1:39 AM, Petr Spacek <pspa...@redhat.com> wrote:

On 07/31/2012 12:27 AM, John Dennis wrote:


What is taking so long with session bookkeeping? I don't know yet. I would
need more timing instrumentation. I will say when I looked at the
python-krb5
code (which we use to populate the ccache from the session and read back
to
store in the session) seemed to be remarkably inefficient. We also elected
to
use file based ccache rather than in-memory ccache (that means there is a
bit
of file-IO occurring).


A note regarding python-krbV:
I used python-krbV extensively in my thesis for KDC stress test. Python-krbV
can obtain several thousands of TGTs per second (even with ccache in a
file). AFAIK VFS calls are not done synchronously. But other parts of
python-krbV were left uncovered, so it can contain some surprises.

=== Wild speculation follows ===
1.5 seconds is an incredibly long time; it sounds like some kind of timeout.
Krb5 libs have a usual timeout of 1 second per request.

Are all KDCs in /etc/krb5.conf alive and reachable?

In this case, as I'm referring to the extreme slowness of the Web UI,
the KDC is on the same system (the ipa server) that is making the
request, correct?


Is SSSD running on problematic server?

Yes. Again, I'm guessing the problematic server is the IPA server itself.


Is proper KDC selected by SSSD KDC auto-locator plugin?
(See /var/lib/sss/pubconf/)

Yes, I checked that file and it is the IP address of the IPA server on
the same server. Perhaps this should be 127.0.0.1 instead?

I also have checked the resolv.conf file, and indeed the IP points to
the IPA server itself (same machine) as expected. Both forward and
reverse DNS work. I'm not really sure what else could be setup
incorrectly to cause any KDC slowness.

Due to the extreme UI slowness issue, I have not created any replicas
so this system is it. I'm not so sure I would be able to see the 1.5 s
delay if it weren't compounded by the overall slowness of the Web UI,
however, the KDC seems to perform well for other systems in the realm.
I'm certainly not taxing it with a huge load, but tickets seem to be
issued without delay.

Stephen,
another user sent me a wireshark trace for a similar performance issue.
So far I see a pause when doing the first leg of a SASL authentication.
This may well explain also your issue.

Hi, I experience the same delay in SASL authentication. The numbers I
posted on freeipa-users show a 1-2 second delay with SASL
authentication:

# time ldapsearch -x uid=bdteg01662 dn
# extended LDIF
#
# LDAPv3
# base <dc=xxx,dc=gob,dc=ve> (default) with scope subtree
# filter: uid=bdteg01662
# requesting: dn
#

# bdteg01662, users, accounts, xxx.gob.ve
dn: uid=bdteg01662,cn=users,cn=accounts,dc=xxx,dc=gob,dc=ve

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

real    0m0.006s
user    0m0.001s
sys     0m0.003s

# time ldapsearch -Y GSSAPI uid=bdteg01662 dn
SASL/GSSAPI authentication started
SASL username: ad...@xxx.gob.ve
SASL SSF: 56
SASL data security layer installed.
# extended LDIF
#
# LDAPv3
# base <dc=xxx,dc=gob,dc=ve> (default) with scope subtree
# filter: uid=bdteg01662
# requesting: dn
#

# bdteg01662, users, accounts, xxx.gob.ve
dn: uid=bdteg01662,cn=users,cn=accounts,dc=xxx,dc=gob,dc=ve

# search result
search: 4
result: 0 Success

# numResponses: 2
# numEntries: 1

real    0m2.344s
user    0m0.007s
sys     0m0.005s


Can you post excerpts from your 389 access log showing the sequence of 
operations for this connection, bind and search?
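For orientation, the access-log sequence for a multi-leg GSSAPI bind looks roughly like the following illustrative excerpt (made-up connection numbers and timestamps; err=14 is "SASL bind in progress", and the etime column between legs is where a delay like the one above would show up):

```
[02/Aug/2012:10:17:01 -0400] conn=42 op=0 BIND dn="" method=sasl version=3 mech=GSSAPI
[02/Aug/2012:10:17:01 -0400] conn=42 op=0 RESULT err=14 tag=97 nentries=0 etime=0
[02/Aug/2012:10:17:03 -0400] conn=42 op=1 BIND dn="" method=sasl version=3 mech=GSSAPI
[02/Aug/2012:10:17:03 -0400] conn=42 op=1 RESULT err=0 tag=97 nentries=0 etime=2
```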








Can you test connecting to the ldap server using ldapsearch -Y GSSAPI
(you need a kerberos ticket) and tell me if you experience any
delay?
If you could run a bunch of searches in a loop and take a wireshark
trace that may help analyzing the timings and seeing if there is a
correlation.

Simo.
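A loop along the lines Simo suggests can be sketched in shell (a hedged sketch: GNU date's %N nanosecond format is assumed, and the ldapsearch invocation mirrors the one quoted above):

```shell
# measure_n: run a command N times and print the wall time of each run in ms.
# Assumes GNU date (%N nanoseconds); adjust for other platforms.
measure_n() {
    n=$1; shift
    i=1
    while [ "$i" -le "$n" ]; do
        start=$(date +%s%N)
        "$@" >/dev/null 2>&1
        end=$(date +%s%N)
        echo "run $i: $(( (end - start) / 1000000 )) ms"
        i=$((i + 1))
    done
}

# Example - repeat the GSSAPI search 20 times (needs a valid Kerberos ticket):
# measure_n 20 ldapsearch -Y GSSAPI uid=bdteg01662 dn
```

Running this alongside a wireshark capture makes it easy to correlate the slow iterations with specific packets.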




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel



Re: [Freeipa-devel] slow response

2012-08-01 Thread Rich Megginson

On 08/01/2012 01:34 PM, Loris Santamaria wrote:

El mié, 01-08-2012 a las 09:58 -0600, Rich Megginson escribió:

On 08/01/2012 09:20 AM, Loris Santamaria wrote:

El mié, 01-08-2012 a las 10:35 -0400, Simo Sorce escribió:

On Tue, 2012-07-31 at 08:49 -0700, Stephen Ingram wrote:

On Tue, Jul 31, 2012 at 1:39 AM, Petr Spacek <pspa...@redhat.com> wrote:

On 07/31/2012 12:27 AM, John Dennis wrote:

What is taking so long with session bookkeeping? I don't know yet; I
would need more timing instrumentation. I will say that when I looked at
the python-krb5 code (which we use to populate the ccache from the
session and read back to store in the session), it seemed remarkably
inefficient. We also elected to use a file-based ccache rather than an
in-memory ccache (which means there is a bit of file I/O occurring).

A note regarding python-krbV:
I used python-krbV extensively in my thesis for KDC stress testing.
Python-krbV can obtain several thousand TGTs per second (even with the
ccache in a file). AFAIK VFS calls are not done synchronously. But other
parts of python-krbV were left uncovered, so it can contain some
surprises.

=== Wild speculation follows ===
1.5 seconds is an incredibly long time; it sounds like some kind of
timeout. Krb5 libs have a usual timeout of 1 second per request.

Are all KDCs in /etc/krb5.conf alive and reachable?

In this case, as I'm referring to the extreme slowness of the Web UI,
the KDC is on the same system (the ipa server) that is making the
request, correct?


Is SSSD running on problematic server?

Yes. Again, I'm guessing the problematic server is the IPA server itself.


Is the proper KDC selected by the SSSD KDC auto-locator plugin?
(See /var/lib/sss/pubconf/)

Yes, I checked that file and it is the IP address of the IPA server on
the same server. Perhaps should this be 127.0.0.1 instead?

I also have checked the resolv.conf file, and indeed the IP points to
the IPA server itself (same machine) as expected. Both forward and
reverse DNS work. I'm not really sure what else could be setup
incorrectly to cause any KDC slowness.

Due to the extreme UI slowness issue, I have not created any replicas
so this system is it. I'm not so sure I would be able to see the 1.5 s
delay if it weren't compounded by the overall slowness of the Web UI,
however, the KDC seems to perform well for other systems in the realm.
I'm certainly not taxing it with a huge load, but tickets seem to be
issued without delay.

Stephen,
another user sent me a wireshark trace for a similar performance issue.
So far I see a pause when doing the first leg of a SASL authentication.
This may well explain also your issue.

Hi, I experience the same delay in SASL authentication. The numbers I
posted on freeipa-users show a 1-2 second delay with SASL
authentication:

# time ldapsearch -x uid=bdteg01662 dn
# extended LDIF
#
# LDAPv3
# base <dc=xxx,dc=gob,dc=ve> (default) with scope subtree
# filter: uid=bdteg01662
# requesting: dn
#

# bdteg01662, users, accounts, xxx.gob.ve
dn: uid=bdteg01662,cn=users,cn=accounts,dc=xxx,dc=gob,dc=ve

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

real    0m0.006s
user    0m0.001s
sys     0m0.003s

# time ldapsearch -Y GSSAPI uid=bdteg01662 dn
SASL/GSSAPI authentication started
SASL username: ad...@xxx.gob.ve
SASL SSF: 56
SASL data security layer installed.
# extended LDIF
#
# LDAPv3
# base <dc=xxx,dc=gob,dc=ve> (default) with scope subtree
# filter: uid=bdteg01662
# requesting: dn
#

# bdteg01662, users, accounts, xxx.gob.ve
dn: uid=bdteg01662,cn=users,cn=accounts,dc=xxx,dc=gob,dc=ve

# search result
search: 4
result: 0 Success

# numResponses: 2
# numEntries: 1

real    0m2.344s
user    0m0.007s
sys     0m0.005s

Can you post excerpts from your 389 access log showing the sequence of
operations for this connection, bind and search?

Here they are:

[01/Aug/2012:10:39:40 -041800] conn=33 fd=70 slot=70 connection from 
172.18.32.246 to 172.18.32.246
[01/Aug/2012:10:39:40 -041800] conn=33 op=0 BIND dn= method=sasl version=3 
mech=GSSAPI
[01/Aug/2012:10:39:42 -041800] conn=33 op=0 RESULT err=14 tag=97 nentries=0 
etime=2, SASL bind in progress
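The etime value in a RESULT line is the server-side elapsed time for that operation, so the 2-second SASL bind stands out immediately. A small sketch (not an official 389 tool) for scanning an access log for slow operations:

```python
import re

# Match 389-ds access-log RESULT lines, capturing conn, op, err, and etime.
# Field names (conn, op, etime) are standard 389-ds access-log keywords;
# etime here is the integer-seconds format used in the excerpt above.
LINE = re.compile(r"conn=(\d+) op=(\d+) RESULT err=(\d+).*etime=(\d+)")

def slow_ops(log_lines, threshold=1):
    """Yield (conn, op, err, etime) for RESULT lines with etime >= threshold."""
    for line in log_lines:
        m = LINE.search(line)
        if m:
            conn, op, err, etime = map(int, m.groups())
            if etime >= threshold:
                yield conn, op, err, etime

log = [
    "[01/Aug/2012:10:39:42 -041800] conn=33 op=0 RESULT err=14 tag=97 "
    "nentries=0 etime=2, SASL bind in progress",
    "[01/Aug/2012:10:39:42 -041800] conn=33 op=1 RESULT err=14 tag=97 "
    "nentries=0 etime=0, SASL bind in progress",
]
print(list(slow_ops(log)))  # -> [(33, 0, 14, 2)]
```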


Yep, this is it - this should not be taking 2 seconds.  I'd like to see 
what internal operations are going on in this time.  Try this - follow 
the directions at http://port389.org/wiki/FAQ#Troubleshooting but for 
the access log, to turn on Heavy trace output debugging:


dn: cn=config
changetype: modify
replace: nsslapd-accesslog-level
nsslapd-accesslog-level: 4

Then turn the access log level back to the default (256) after 
reproducing the problem.  We should then be able to see the sequence of 
internal operations triggered by the BIND dn= method=sasl version=3 
mech=GSSAPI
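The matching LDIF to restore the default access log level afterwards (the default value 256 is stated above):

```ldif
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-level
nsslapd-accesslog-level: 256
```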



[01/Aug/2012:10:39:42 -041800] conn=33 op=1 BIND dn= method=sasl version=3 
mech=GSSAPI
[01/Aug/2012:10:39:42 -041800] conn=33 op=1 RESULT err=14 tag=97 nentries=0 
etime=0, SASL bind in progress
[01/Aug/2012:10:39:42 -041800] conn=33 op=2 BIND dn= method=sasl version

Re: [Freeipa-devel] freeIPA as a samba backend

2012-06-26 Thread Rich Megginson

On 06/26/2012 11:13 AM, Dmitri Pal wrote:

On 06/26/2012 11:11 AM, Loris Santamaria wrote:

El mar, 26-06-2012 a las 10:35 -0400, Dmitri Pal escribió:

On 06/25/2012 09:02 PM, Loris Santamaria wrote:

Hi,

while using freeIPA as a user database for a samba installation I found
a problem in the enforcement of password policies. FreeIPA password
policies are more detailed than samba's, in freeIPA one may enforce
password history and the number of character classes in a password, but
normally samba connects to freeIPA with the Directory Manager so those
policies are not enforced.

Reading the source of ipa_pwd_extop I see there are three possibilities
when changing passwords:

   * Password change by the user, with full enforcement of policies
   * Password change by an admin, with no enforcement of policies and
 the new password is set as expired so the user has to change it
 on next logon
   * Password change by Directory Manager, with no enforcement of
 policies and the password is not set as expired.

None of the aforementioned possibilities is ideal for samba: samba
should connect to freeIPA with a user privileged enough to change
passwords for all users, but with fully enforced policies.
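The three paths above can be summarized as a decision table (an illustrative sketch of the described behaviour, not the actual ipa_pwd_extop code):

```python
# Illustrative decision table for the three password-change paths described
# above; a sketch of the behaviour, not the actual ipa_pwd_extop logic.
def password_change_effects(binder):
    """Return (policies_enforced, password_set_expired) per kind of binder."""
    effects = {
        "user": (True, False),               # self-change: full policy enforcement
        "admin": (False, True),              # admin reset: no policy, must change at next logon
        "directory_manager": (False, False), # DM: no policy, not expired
    }
    return effects[binder]

print(password_change_effects("admin"))  # -> (False, True)
```

The mode Loris asks for would be a fourth row: policies enforced, password not expired, for a privileged (non-DM) bind.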

What do you think about this? Would you consider adding such a feature?
Would you accept patches?


Can you please explain why samba needs to connect to IPA and change
the passwords?
In what role you use samba? As a file server or as something else?
I am not sure I follow why you need the password change functionality.
There is a way to setup Samba FS with IPA without trying to make IPA a
back end for Samba.
I can try to dig some writeups on the matter if you are interested.

Samba 3 when used as a PDC/BDC can use a LDAP server as its user/group
database. To do that samba connects with a privileged user to the LDAP
directory and manages some attributes of users and groups in the
directory, adding the sambaSAMAccount objectclass and the sambaSID
attribute to users, groups and machines of the domain.

When users of Windows workstations in a samba domain change their
passwords samba updates the sambaNTPassword, userPassword,
sambaLastPwdChange, sambaPwdMustChange attributes of the corresponding
ldap user.
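As an illustration, the samba-managed portion of such a user entry might look like this (the DN and all values are hypothetical placeholders):

```ldif
dn: uid=jdoe,cn=users,cn=accounts,dc=example,dc=com
objectClass: sambaSamAccount
sambaSID: S-1-5-21-1111111111-2222222222-3333333333-1104
sambaNTPassword: <NT hash>
sambaLastPwdChange: 1340000000
sambaPwdMustChange: 1347776000
```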

Using freeIPA as the LDAP user backend for samba works quite well,
except for the password policy problem mentioned in the last mail, and
that it is hard to keep the enabled/disabled status of an account in
sync.


What is the value of using FreeIPA as a Samba back end in comparison 
to other variants?

Why is IPA more interesting than, say, 389-DS or OpenLDAP or native Samba?


IPA will keep all of your passwords in sync - userPassword, 
sambaNTPassword, sambaLMPassword, and your kerberos passwords.  389 
cannot do this - the functionality that does this is provided by an IPA 
password plugin.  Openldap has a similar plugin, but I think it is 
contrib and not officially supported.



What other features of IPA are used in such setup?

Answering these (and may be other) questions would help us to 
understand how common is the use case that you brought up.




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel



--
Thank you,
Dmitri Pal

Sr. Engineering Manager IPA project,
Red Hat Inc.


---
Looking to carve out IT costs?
www.redhat.com/carveoutcosts/




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel



Re: [Freeipa-devel] freeIPA as a samba backend

2012-06-26 Thread Rich Megginson

On 06/26/2012 11:39 AM, Dmitri Pal wrote:

On 06/26/2012 01:28 PM, Rich Megginson wrote:

On 06/26/2012 11:13 AM, Dmitri Pal wrote:

On 06/26/2012 11:11 AM, Loris Santamaria wrote:

El mar, 26-06-2012 a las 10:35 -0400, Dmitri Pal escribió:

On 06/25/2012 09:02 PM, Loris Santamaria wrote:

Hi,

while using freeIPA as a user database for a samba installation I found
a problem in the enforcement of password policies. FreeIPA password
policies are more detailed than samba's, in freeIPA one may enforce
password history and the number of character classes in a password, but
normally samba connects to freeIPA with the Directory Manager so those
policies are not enforced.

Reading the source of ipa_pwd_extop I see there are three possibilities
when changing passwords:

   * Password change by the user, with full enforcement of policies
   * Password change by an admin, with no enforcement of policies and
 the new password is set as expired so the user has to change it
 on next logon
   * Password change by Directory Manager, with no enforcement of
 policies and the password is not set as expired.

None of the aforementioned possibilities are ideal for samba, samba
should connect to freeIPA with a user privileged enough to change
password for all users but with fully enforced policies.

What do you think about this? Would you consider adding such a feature?
Would you accept patches?


Can you please explain why samba needs to connect to IPA and change
the passwords?
In what role you use samba? As a file server or as something else?
I am not sure I follow why you need the password change functionality.
There is a way to setup Samba FS with IPA without trying to make IPA a
back end for Samba.
I can try to dig some writeups on the matter if you are interested.

Samba 3 when used as a PDC/BDC can use a LDAP server as its user/group
database. To do that samba connects with a privileged user to the LDAP
directory and manages some attributes of users and groups in the
directory, adding the sambaSAMAccount objectclass and the sambaSID
attribute to users, groups and machines of the domain.

When users of Windows workstations in a samba domain change their
passwords samba updates the sambaNTPassword, userPassword,
sambaLastPwdChange, sambaPwdMustChange attributes of the corresponding
ldap user.

Using freeIPA as the LDAP user backend for samba works quite well,
except for the password policy problem mentioned in the last mail, and
that it is hard to keep the enabled/disabled status of an account in
sync.


What is the value of using FreeIPA as a Samba back end in comparison 
to other variants?

Why is IPA more interesting than, say, 389-DS or OpenLDAP or native Samba?


IPA will keep all of your passwords in sync - userPassword, 
sambaNTPassword, sambaLMPassword, and your kerberos passwords.  389 
cannot do this - the functionality that does this is provided by an 
IPA password plugin.  Openldap has a similar plugin, but I think it 
is contrib and not officially supported.





I know that Endi did the work to make 389 a viable back end for Samba,
and it passed all the Samba torture tests, so I am not sure I agree
with you.


Was that for samba4 or samba3?


Samba does the kerberos operations itself and uses LDAP as a storage only.


Samba4 or samba3?

This is why I am struggling to understand the use case. It seems that 
Loris has a different configuration that I do not quite understand, 
thus questions.



What other features of IPA are used in such setup?

Answering these (and may be other) questions would help us to 
understand how common is the use case that you brought up.




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel



--
Thank you,
Dmitri Pal

Sr. Engineering Manager IPA project,
Red Hat Inc.


---
Looking to carve out IT costs?
www.redhat.com/carveoutcosts/




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel





--
Thank you,
Dmitri Pal

Sr. Engineering Manager IPA project,
Red Hat Inc.


---
Looking to carve out IT costs?
www.redhat.com/carveoutcosts/




___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

[Freeipa-devel] 389 ticket 392 - where does kerberos close /var/tmp/ldap_499?

2012-06-25 Thread Rich Megginson

https://fedorahosted.org/389/ticket/392

The platform is F-17 with 389 1.2.11 and ipa 2.2

Attached to the ticket are various gdb backtraces of calls that open
/var/tmp/ldap_499 - where are these supposed to be closed?  With the
server under a very light load, calling ipa commands to do sasl/gssapi
binds, I can see the lsof count on this file fluctuating between 0 and
16.  I have not been able to get more than 16 sustained, although I can
see it go slightly higher while an operation is in progress.
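One way to watch the open-descriptor count without repeated lsof runs is to walk /proc (a Linux-only sketch; the ns-slapd pid lookup in the comment is illustrative):

```shell
# count_open: count how many fds of process $1 currently point at path $2.
# Linux-only sketch that walks /proc/<pid>/fd.
count_open() {
    pid=$1 path=$2
    count=0
    for fd in /proc/"$pid"/fd/*; do
        [ "$(readlink "$fd" 2>/dev/null)" = "$path" ] && count=$((count + 1))
    done
    echo "$count"
}

# e.g. poll once per second while reproducing the load:
# while sleep 1; do count_open "$(pidof ns-slapd)" /var/tmp/ldap_499; done
```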


My question is - where is the corresponding close() for the open in 
krb5int_labeled_open()?


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] F-17 install fail - Command '/bin/systemctl start messagebus.service' returned non-zero exit status

2012-06-10 Thread Rich Megginson

Steps to reproduce:
setup new F-17 machine
yum -y update
yum install freeipa-server
ipa-server-install -N --selfsign

  [16/35]: configuring ssl for ds instance
Unexpected error - see ipaserver-install.log for details:
 Command '/bin/systemctl start messagebus.service' returned non-zero 
exit status 1


The log has this:
2012-06-10T14:48:30Z DEBUG stderr=Failed to issue method call: Unit 
var-run.mount failed to load: No such file or directory. See system logs 
and 'systemctl status var-run.mount' for details.


systemctl status var-run.mount
  Loaded: error (Reason: No such file or directory)
  Active: inactive (dead)
  start condition failed at Sun, 10 Jun 2012 09:59:05 
-0400; 54min ago

   Where: /var/run
  CGroup: name=systemd:/system/var-run.mount
ll /var/run
lrwxrwxrwx. 1 root root 6 Jun 10 05:49 /var/run -> ../run


Any ideas?  Am I not supposed to yum -y upgrade with the current version 
of freeipa in F-17?


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] 389-ds-base-1.2.11.3 has been pushed to testing

2012-05-05 Thread Rich Megginson
This is the F-17 candidate.  It fixes the deadlock and the managed entry 
deletion issues found by Rob.  Please give it some karma before the F-17 
deadline on Monday.  Thanks!


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] DNS zone serial number updates [#2554]

2012-04-18 Thread Rich Megginson

On 04/18/2012 09:21 AM, Petr Spacek wrote:

On 04/18/2012 04:04 PM, Simo Sorce wrote:

On Wed, 2012-04-18 at 15:29 +0200, Petr Spacek wrote:

Hello,

first of all - snippet moved from the end:
  I think we need to try to be more consistent than what we are now.
  There may always be minor races, but the current races are too big to
  pass on IMHO.

I definitely agree. Current state = completely broken zone transfer.

Rest is in-line.

On 04/17/2012 06:13 PM, Simo Sorce wrote:

On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote:

Hello,

there is IPA ticket #2554 "DNS zone serial number is not updated" [1],
which is required by the RFE "Support zone transfers in bind-dyndb-ldap"
[2].

I think we need to discuss next steps with this issue:

Basic support for zone transfers is already done in bind-dyndb-ldap. We
need the second part - correct behaviour during SOA serial number
updates.

The bind-dyndb-ldap plugin handles dynamic updates in the correct way
(each update increments the serial number), so the biggest problem lies
in IPA for now.

Modifying the SOA serial number can be pretty hard because of DS
replication. There are potential race conditions if records are
modified/added/deleted in two or more places, replication takes some
time (because of network connection latency/problems) and a zone
transfer is started in the meanwhile.

The question is: how consistent do we want to be?


Enough. What we want to do is stop updating the SOA from bind-dyndb-ldap
and instead update it in a DS plugin. That's because a DS plugin is the
only thing that can see entries coming in from multiple servers.
If you update the SOA from bind-dyndb-ldap you can potentially set it
back in time, because last write wins in DS.

This will require a persistent search so bind-dyndb-ldap can be updated
with the last SOA serial number, or bind-dyndb-ldap must not cache it
and always fetch it from LDAP.


Bind-dyndb-ldap has users on OpenLDAP. I googled a bit and OpenLDAP
should support Netscape SLAPI [3][4], but I don't know how hard it is to
code an interoperable plugin.
Incidentally, I found an existing SLAPI plugin for a concurrent
BIND-LDAP backend project [5].


I don't think we need to provide plugins for other platforms; we just
need an option in bind-dyndb-ldap to tell it to assume the SOA is being
handled by the LDAP server.
For servers that do not have a suitable plugin, bind-dyndb-ldap will
keep working as it does now. In those cases I would suggest people use a
single master, but that is up to the integrator of the other solution.


Can we think for a while about other ways? I would like to find some
(even sub-optimal) solution without a DS plugin, if it's possible and
comparably hard to code.


Yes, as I said you may still do something with a persistent search, but
I do not know if persistent searches are available in OpenLDAP either.

OpenLDAP has support for the (newer and standardized) SyncRepl [6][7].
I plan to look into it and consider writing a compatibility layer for
psearch/syncrepl in bind-dyndb-ldap. It should not be hard, I think.



However with a persistent search you would see entries coming in in
real time even replicated ones from other replicas, so you could
always issue a SOA serial update. Of course you still need to check for
SOA serial updates from other DNS master servers where another
bind-dyndb-ldap plugin is running.

You have N servers potentially updating the serial at the same time. As
long as you do not update the serial just because the serial was itself
updated you are just going to eat one or more serial numbers off.

We also do not need to make it a requirement to have the serial updated
atomically. If 2 servers both update the number to the same value it is
ok because they will basically be both in sync in terms of hosted
entries.

Otherwise one of the servers will update the serial again as soon as
other entries are received.

If this happens, it is possible that on one of the masters the serial
will be updated twice even though no other change was performed on the
entry set. That is not a big deal though; at most it will cause a
useless zone transfer, but zone transfers should already be somewhat
rate limited anyway, because our zones do change frequently due to DNS
updates from clients.
The SOA record also has refresh, retry and expiry fields. These define
how often zone transfers should happen. It's nicely described in [8].
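When reasoning about serials going "backwards", note that DNS SOA serials use RFC 1982 wraparound arithmetic: values are compared modulo 2^32, so a serial that wraps past 0xFFFFFFFF is still "newer". A minimal sketch:

```python
# RFC 1982 serial-number arithmetic for 32-bit DNS SOA serials (a sketch for
# illustration; bind-dyndb-ldap / a DS plugin would implement its own logic).
MOD = 2**32

def serial_gt(a, b):
    """True if serial a is 'newer' than b under RFC 1982 wraparound rules."""
    return a != b and (a - b) % MOD < MOD // 2

def next_serial(current):
    """Bump a serial by one, wrapping modulo 2^32."""
    return (current + 1) % MOD

print(serial_gt(next_serial(0xFFFFFFFF), 0xFFFFFFFF))  # -> True
```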


There is a further problem with transfers: currently we support only
full zone transfers (AXFR), not incremental ones (IXFR), because there
is no last-record-change-to-SOA# information. For now it's postponed,
because nobody wanted it.



   Can we accept these absolutely improbable race conditions? It will
probably be corrected by the next SOA update = by (any) next record
change. It won't affect normal operations, only zone transfers.


Yes and No, the problem is that if 2 servers update the SOA
independently you may have the serial go backwards on replication. See
above.


(IMHO we should consider DNS 

Re: [Freeipa-devel] More types of replica in FreeIPA

2012-04-18 Thread Rich Megginson

On 04/17/2012 06:42 AM, Simo Sorce wrote:

On Tue, 2012-04-17 at 01:13 +0200, Ondrej Hamada wrote:

Sorry for inactivity, I was struggling with a lot of school stuff.

I've summed up the main goals, do you agree on them or should I
add/remove any?


GOALS
===
Create Hub and Consumer types of replica with following features:

* Hub is read-only

* Hub interconnects Masters with Consumers or Masters with Hubs
 or Hubs with other Hubs

* Hub is hidden in the network topology

* Consumer is read-only

* Consumer interconnects Masters/Hubs with clients

* Write operations should be forwarded to Master

* Consumer should be able to log users into system without
 communication with master

We need to define how this can be done, it will almost certainly mean
part of the consumer is writable, plus it also means you need additional
access control and policies, on what the Consumer should be allowed to
see.


* Consumer should cache user's credentials

Ok what credentials ? As I explained earlier Kerberos creds cannot
really be cached. Either they are transferred with replication or the
KDC needs to be change to do chaining. Neither I consider as 'caching'.
A password obtained through an LDAP bind could be cached, but I am not
sure it is worth it.


* Caching of credentials should be configurable

See above.


* CA server should not be allowed on Hubs and Consumers

Missing points:
- Masters should not transfer KRB keys to HUBs/Consumers by default.

- We need selective replication if you want to allow distributing a
partial set of Kerberos credentials to consumers. With Hubs it becomes
complicated to decide what to replicate about credentials.

Simo.


Can you please have a look at this draft and comment it please?


Design document draft: More types of replicas in FreeIPA

GOALS
=

Create Hub and Consumer types of replica with following features:

* Hub is read-only

* Hub interconnects Masters with Consumers or Masters with Hubs
or Hubs with other Hubs

* Hub is hidden in the network topology

* Consumer is read-only

* Consumer interconnects Masters/Hubs with clients

* Write operations should be forwarded to Master

Do we need to specify how this is done ? Referrals vs Chain-on-update ?


* Consumer should be able to log users into system without
communication with master

* Consumer should be able to store user's credentials

Can you expand on this ? Do you mean user keys ?


* Storing of credentials should be configurable and disabled by default

* Credentials expiration on replica should be configurable

What does this mean ?


* CA server should not be allowed on Hubs and Consumers

ISSUES
=

- SSSD is currently supposed to cooperate with one LDAP server only

Is this a problem in having an LDAP server that doesn't also have a KDC
on the same host ? Or something else ?


- OpenLDAP client and its support for referrals

Should we avoid referrals and use chain-on-update ?
What does it mean for access control ?
How do consumers authenticate to masters ?
Should we use s4u2proxy ?


- 389-DS allows replication of whole suffix only

What kind of filters do we think we need ? We can already exclude
specific attributes from replication.


fractional replication had originally planned to support search filters 
in addition to attribute lists - I think Ondrej wants to include or 
exclude certain entries from being replicated





- Storing credentials and allowing authentication against Consumer server


POSSIBLE SOLUTIONS
=

389-DS allows replication of whole suffix only:

* Rich said that they are planning to allow fractional replication in
DS to use LDAP filters. It will allow us to do selective replication,
which is mainly important for replication of users' credentials.

I guess we want to do this to selectively prevent replication of only
some kerberos keys? Based on groups? Would filters allow that using
memberof?


Using filters with fractional replication would allow you to include or 
exclude anything that can be expressed as an LDAP search filter
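For instance, such a selective-replication rule could be expressed along these lines (the filter and group DN are hypothetical illustrations, not an existing DS configuration):

```
(&(objectClass=krbPrincipalAux)
  (memberOf=cn=replicate-to-consumer1,cn=groups,cn=accounts,dc=example,dc=com))
```

Entries matching the filter would be replicated with their Kerberos keys; everything else would be replicated without them.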





__

Forwarding of requests in LDAP:

* use the existing 389-DS Chain-on-update plugin - we can try it as a
proof-of-concept solution, but for a real deployment it won't be a very
good solution, as it will increase the demands on Hubs.

Why do you think it would increase demands for hubs ? Doesn't the
consumer directly contact the masters skipping the hubs ?


Yeah, not sure what you mean here, unless you are taking the document 
http://port389.org/wiki/Howto:ChainOnUpdate as the only way to implement 
chain on update - it is not - that document was taken from an early 
proof-of-concept for a planned deployment at a customer many years ago.





* better way is to use the 

Re: [Freeipa-devel] More types of replica in FreeIPA

2012-04-09 Thread Rich Megginson

On 04/06/2012 09:15 AM, Ondrej Hamada wrote:

On 04/04/2012 06:16 PM, Ondrej Hamada wrote:

On 04/04/2012 03:02 PM, Simo Sorce wrote:

On Tue, 2012-04-03 at 18:45 +0200, Ondrej Hamada wrote:

On 03/13/2012 01:13 AM, Dmitri Pal wrote:

On 03/12/2012 06:10 PM, Simo Sorce wrote:

On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote:

On 03/12/2012 04:16 PM, Simo Sorce wrote:

On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote:

USER'S operations when connection is OK:
---
read data - local
write data - forwarding to master
authentication:
-credentials cached -- authenticate against credentials in local cache
    -on failure: log failure locally, update data about failures
     only on lock-down of account
-credentials not cached -- forward request to master, on success cache
 the credentials


This scheme doesn't work with Kerberos.
Either you have a copy of the user's keys locally or you don't; there is
nothing you can really cache if you don't.

Simo.

Yes, this is what we are talking about here - the cache would have to
contain the user's Kerberos key, but there should be some expiration on
the cache so that fetched and stored keys are periodically cleaned
following the policy an admin has defined.
We would need a mechanism to transfer Kerberos keys, but that would not
be sufficient; you'd have to give read-only servers also the realm
krbtgt in order to be able to do anything with those keys.

The way MS solves this (I think) is by giving a special RODC krbtgt to
each RODC, and then replicating all RODC krbtgts with full domain
controllers. Full domain controllers have logic to use an RODC's krbtgt
keys instead of the normal krbtgt to perform operations when a user's
krbtgt is presented to a different server. This is a lot of work and
changes in the KDC, not something we can implement easily.

As a first implementation I would restrict read-only replicas to not do
Kerberos at all, only LDAP for all the lookup stuff necessary. To add a
RO KDC we will need to plan a lot of changes in the KDC.

We will also need intelligent partial replication, where the rules about
which objects (and which attributes in each object) need/can be
replicated are established based on some grouping+filter mechanism. This
also is a pretty important change to 389ds.

Simo.

I agree. I am just trying to structure the discussion a bit so that all
of what you are saying can be captured in the design document, and then
we can pick a subset of what Ondrej will actually implement. So let us
capture all the complexity and then do a POC for just the LDAP part.


Sorry for inactivity, I was struggling with a lot of school stuff.

I've summed up the main goals, do you agree on them or should I
add/remove any?


GOALS
===
Create Hub and Consumer types of replica with following features:

* Hub is read-only

* Hub interconnects Masters with Consumers or Masters with Hubs
or Hubs with other Hubs

* Hub is hidden in the network topology

* Consumer is read-only

* Consumer interconnects Masters/Hubs with clients

* Write operations should be forwarded to Master

* Consumer should be able to log users into system without
communication with master

We need to define how this can be done, it will almost certainly mean
part of the consumer is writable, plus it also means you need additional
access control and policies, on what the Consumer should be allowed to
see.
Right; in that case the Consumers and Hubs will have to be masters
(from 389-DS's point of view).



* Consumer should cache user's credentials

Ok what credentials ? As I explained earlier Kerberos creds cannot
really be cached. Either they are transferred with replication or the
KDC needs to be change to do chaining. Neither I consider as 'caching'.
A password obtained through an LDAP bind could be cached, but I am not
sure it is worth it.


* Caching of credentials should be configurable

See above.


* CA server should not be allowed on Hubs and Consumers

Missing points:
- Masters should not transfer KRB keys to HUBs/Consumers by default.

Add point:
- storing of the Krb creds must be configurable and disabled by 
default

- We need selective replication if you want to allow distributing a
partial set of Kerberos credentials to consumers. With Hubs it becomes
complicated to decide what to replicate about credentials.

Simo.

Rich mentioned that they are planning support for LDAP filters in 
fractional replication in the future, but currently it is not supported.



Regarding distribution of users' Krb creds:
When a user logs on to any Consumer for the first time, he has to
authenticate against a master. If he succeeds, he will be added to a
specific user group. Each Consumer will have one of these groups. These
groups will be used by LDAP filters in fractional replication to
distribute the Krb creds to the chosen Consumers only.


This will be more complicated 

[Freeipa-devel] Please review: take 2: Ticket #1891 - Rewrite IPA plugins to take advantage of the single transaction

2012-03-14 Thread Rich Megginson




freeipa-rmeggins-0004-Rewrite-IPA-plugins-to-take-advantage-of.patch
Description: application/mbox
___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
