Re: [Freeipa-devel] [DESIGN] FreeIPA on FIPS + NSS question

2016-12-19 Thread John Dennis

On 12/19/2016 03:12 AM, Standa Laznicka wrote:

On 12/16/2016 03:23 PM, Rob Crittenden wrote:

Standa Laznicka wrote:

Hello,

I started a design page for FreeIPA on FIPS-enabled systems:
https://www.freeipa.org/page/V4/FreeIPA-on-FIPS

Tomáš and I are still investigating everything that will need to
change in order to run FreeIPA on FIPS-enabled RHEL. So far I have managed
to install and run a patched FreeIPA server and client and connect them
together.

There are some issues with NSS when trying to create an HTTPS request
(apparently, NSS requires an NSS database password to set up an SSL
connection). I am actually thinking of removing NSSConnection from the
client altogether.

Can you expand on this a bit? NSS should only need a pin when it needs
access to a private key. What connection(s) are you talking about, and
what would you replace NSSConnection with?

rob


Hello Rob,

Thank you for this excellent question; in order to keep the email short I
seem to have omitted quite a bit of information.

One of the very first problems I had with FreeIPA on FIPS was that NSS
was always asking for a password/PIN. I was discussing this with the NSS
guys on their IRC channel last week and it turns out that NSS tries to
create a private key every time you want to use it as a backend for an
SSL connection on FIPS. I still don't think this is quite right, so I may
open a Bugzilla for it.


I don't understand; I thought the case you were having problems with was 
the FreeIPA client, not the server. I assume that when you use the term 
"backend" you mean server, and yes, when NSS is in server mode it will 
need access to keys. So isn't the problem that NSS is not being initialized 
correctly, so that it recognizes it is in client mode and not server mode?




Anyway, the guys suggested that we could try to create the database
with an empty password and everything would work. I don't quite like
that either, but it's at least something if you don't want the `ipa`
command to keep prompting you for a password you have no way of knowing
if you're just a regular user.

What I think would be a better way to go is to use
httplib.HTTPSConnection. We have the needed certificates in
/etc/ipa/ca.crt anyway, so why not use them instead? We had a discussion
with Honza this morning and it seems that with this approach we may get
rid of the NSSConnection class altogether (although I still need to
check a few spots) and start the process of moving away from NSS, which
was discussed about a year ago on an internal mailing list (for some reason).
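For illustration, a minimal sketch of what such an httplib-based connection could look like. The helper name and hostname are hypothetical, not the eventual FreeIPA code; ssl.create_default_context is the stock Python way to trust a specific CA file:

```python
import ssl

try:
    from http.client import HTTPSConnection   # Python 3
except ImportError:
    from httplib import HTTPSConnection       # Python 2

def make_ipa_connection(host, port=443, cafile='/etc/ipa/ca.crt'):
    """Build an HTTPSConnection that verifies the server against the
    IPA CA certificate; cafile=None falls back to the system trust
    store.  A sketch of the proposal above, not FreeIPA's actual API."""
    if cafile is not None:
        ctx = ssl.create_default_context(cafile=cafile)
    else:
        ctx = ssl.create_default_context()
    return HTTPSConnection(host, port, context=ctx)

# No NSS database, and therefore no PIN prompt, is involved:
conn = make_ipa_connection('ipa.example.test', cafile=None)
```

No network traffic happens until `conn.request(...)` is called, so the helper can be constructed cheaply up front.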

Will be happy to hear thoughts on this,
Standa


I'm not a big fan of NSS; it has its issues. As the author of the 
Python binding I'm quite aware of all the nasty behaviors NSS has and 
needs to be worked around. I wouldn't be sad to see it go, but OpenSSL 
has its own issues too. If you remove NSS you're also removing the 
option to support smart cards, HSMs, etc. Perhaps before removing 
functionality it would be good to assess what the requirements are.



--
John

--
Manage your subscription for the Freeipa-devel mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-devel
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code

Re: [Freeipa-devel] NTP in FreeIPA

2016-11-28 Thread John Dennis

On 11/28/2016 02:57 PM, Rob Crittenden wrote:

David Kupka wrote:

On 22/11/16 23:15, Gabe Alford wrote:

I would say that it is worth keeping in FreeIPA. I know myself and some
customers use its functionality by having the clients sync to the IPA
servers and have the servers sync to the NTP source. This way if the NTP
source ever gets disrupted for long periods of time (which has
happened in
my environment) the client time drifts with the authentication source.
This
is the way that AD often works and is configured.


Hello Gabe,
I agree that it's common practice to synchronize all nodes in a network
with a single source in order to have the same time and save bandwidth.
Also I understand that it's comfortable to let the FreeIPA installer take
care of it.
But I don't think FreeIPA should do it; IMO this is a job for Ansible or
a similar tool. Also, the problem is that in some situations the FreeIPA
installer makes things worse.

Example:

1. Install FreeIPA server (ipa1.example.org)
2. Install FreeIPA client on all nodes in network
3. Install replica (ipa2.example.org) of FreeIPA server to increase
redundancy

Now all the clients have ipa1.example.org as the only server in
/etc/ntp.conf. If the first FreeIPA server becomes unreachable all
clients will be able to contact KDC on the other server thanks to DNS
autodiscovery in libkrb5 but will be unable to synchronize time.
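The robustness problem described above goes away if the installer wrote every IPA server into the client's config. A hedged sketch of such an /etc/ntp.conf fragment (hostnames are this thread's example values):

```
# /etc/ntp.conf -- list every IPA server, not just the enrollment one,
# so clients keep a working time source if ipa1 becomes unreachable
server ipa1.example.org iburst
server ipa2.example.org iburst
```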


Remember that the goal of IPA was to herd together a bunch of software
to make hard things easier. This included dealing with the 5-minute
Kerberos window, so ntp was configured on the client and server (which is
less of an issue now).

When making changes you have to ask yourself who are you making this
easier for: you or the user.

Yes, getting NTP right is hard, but does it meet the 80/20 rule in terms
of success? I'd think so.

If someone wants to configure it using Ansible they can use
--no-ntp. If they want to use different time servers they can pass in
--ntp-server. But by default IMHO it should do something sane to give a
good experience.

There don't seem to be a ton of NTP tickets and I don't recall a lot of
users pressing for it to go away (the reverse: many times their
problems revolve around time not being synced). I wonder if a survey on
freeipa-users would be in order to see how hot an issue this really is.


+1 Thanks Rob for taking the words out of my mouth.


--
John



Re: [Freeipa-devel] import rpm causes failure during IPA caless install

2016-01-11 Thread John Dennis

On 01/11/2016 01:50 AM, Alexander Bokovoy wrote:

On Mon, 11 Jan 2016, Jan Cholasta wrote:

On 8.1.2016 14:28, Alexander Bokovoy wrote:

On Fri, 08 Jan 2016, Jan Cholasta wrote:

I would be surprised if NSS was used in this particular function.


I will try it


No NSS here:




Anyway, the function looks simple, so it might be safer to just
rewrite it to Python, with no new dependencies.

Still, we need to be careful in case the RPM team decides to
upgrade the rpm version comparison algorithm. It has been stable for quite
some years (the core hasn't changed since 2008), but occasionally there
are additions that expand the supported formats, like the addition of
dpkg-style versions in 2012.


I don't think we need to care about that, as we use a versioning scheme
compatible with the original comparison algorithm on "our" platforms.

In that case I'd prefer John's code as it is pure Python and doesn't
have any other dependencies.


If you do use it: I just noticed a couple of places which do not use good 
Python style and aren't Py2/Py3 safe (I wrote this shortly after having 
learned Python, many years ago). The following lines


if type(component1) in (int, long):
if type(component2) in (int, long):
if type(component2) is str:

should be changed to

import six

if isinstance(component1, six.integer_types):
if isinstance(component2, six.integer_types):
if isinstance(component2, six.string_types):
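For reference, the same Py2/Py3-safe type tuples can be spelled out without the six dependency — a sketch of what six.integer_types/six.string_types expand to, not a suggestion to avoid six:

```python
import sys

# Hand-rolled equivalents of six.integer_types / six.string_types
if sys.version_info[0] >= 3:
    integer_types = (int,)
    string_types = (str,)
else:
    integer_types = (int, long)    # noqa: F821 -- Py2 only
    string_types = (basestring,)   # noqa: F821 -- Py2 only

# The rewritten checks from the message above, using isinstance():
component1, component2 = 42, 'el7'
assert isinstance(component1, integer_types)
assert isinstance(component2, string_types)
```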
--
John



Re: [Freeipa-devel] import rpm causes failure during IPA caless install

2016-01-08 Thread John Dennis

On 01/08/2016 08:22 AM, Jan Cholasta wrote:

On 8.1.2016 14:13, Martin Basti wrote:



On 08.01.2016 14:14, Jan Cholasta wrote:

On 8.1.2016 14:09, Martin Basti wrote:



On 08.01.2016 14:00, Martin Kosek wrote:

On 01/08/2016 01:45 PM, Martin Basti wrote:

Hello all,

the fix for ticket https://fedorahosted.org/freeipa/ticket/5535
requires importing the rpm module

This import somehow breaks nsslib in IPA
https://fedorahosted.org/freeipa/ticket/5572


We have two ways to fix it:

1) move the rpm import into method bodies (attached patch)
We are not sure how stable this solution is.

2) use solution with rpmdevtools proposed here:
https://www.redhat.com/archives/freeipa-devel/2016-January/msg00092.html


This should be rock stable, but it needs many dependencies (rpm-python
and perl, too)

The second way looks safer, so I would like to reimplement it. Do you
all agree, or do you have a better idea?
Feedback welcome, please ASAP.

Martin^2

Since it's Friday, I invested 15 minutes to practice my C skills and
used the python-cffi library to call the rpm rpmvercmp library function
directly (attached):

$ python rpm.py 4.2.0-15.el7 4.2.0-15.el7_2.3
4.2.0-15.el7 < 4.2.0-15.el7_2.3

This would not introduce any additional dependency besides rpm-devel,
right? :-)


Not rpm-devel, but rpm-libs (you should dlopen "librpm.so.3").
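A hedged sketch of that dlopen approach using only the stdlib ctypes module. rpmvercmp(const char *, const char *) is librpm's public comparison function; the soname below is the one suggested in the thread and varies by distro, so the helper returns None when that library is absent. This is an illustration, not the attached cffi patch:

```python
import ctypes

def rpmvercmp(v1, v2, soname='librpm.so.3'):
    """Compare two version/release labels via librpm's rpmvercmp().
    Returns -1, 0, or 1, or None when the given librpm soname cannot
    be loaded on this system."""
    try:
        lib = ctypes.CDLL(soname)
    except OSError:
        return None  # librpm (this soname) is not installed
    lib.rpmvercmp.restype = ctypes.c_int
    lib.rpmvercmp.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
    return lib.rpmvercmp(v1.encode(), v2.encode())

# With librpm present this mirrors the example output above:
# rpmvercmp('4.2.0-15.el7', '4.2.0-15.el7_2.3') -> -1
```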


I'm afraid that this can cause the same issue as import rpm, because
nsslib is used from a C library


I would be surprised if NSS was used in this particular function.


I will try it


No NSS here:



Anyway, the function looks simple, so it might be safer to just rewrite
it to Python, with no new dependencies.



Leaving aside the whole question of whether re-implementing rpmvercmp in 
Python is a good idea or not (because of possible divergence from RPM), I 
offer to you an implementation of rpmvercmp written in Python that I did 
years ago. It was written based on the published documentation of how 
RPM version comparison is implemented (as close to a spec as I was able 
to find). I believe I also used the C implementation as a guide, but my 
memory is fuzzy on that point. I've used it a lot and I've also
cross-checked its results against librpm, and I've never seen a differing result.


Use at your pleasure or displeasure :-)

HTH,

John


--
John
#!/usr/bin/python

import re
rpm_name_re = re.compile(r'^(.+)-([^-]+)-([^-]+)$')

def split_rpm_name(rpm_name):
    '''
    Split an RPM's NVR returning
    [name, version, release]
    '''
    match = rpm_name_re.match(rpm_name)
    if match:
        name    = match.group(1)
        version = match.group(2)
        release = match.group(3)
        return name, version, release
    else:
        raise ValueError("cannot split rpm NVR for '%s'" % rpm_name)

def split_rpm_label(label):
    '''
    Each label is separated into a list of maximal alphabetic or numeric
    components, with separators (non-alphanumeric characters) ignored.
    Alphabetic components are inserted into the list as a Python str object.
    Numeric components are inserted into the list as either Python int
    or long objects depending on the numeric magnitude of the component.

    For example:
    '2.0.1'       => [2, 0, 1]
    '2xFg33.+f.5' => [2, 'xFg', 33, 'f', 5]
    '''
    components = []
    component = None
    for c in label:
        if c.isalpha():
            if component is None:
                component = c
            else:
                if component.isalpha():
                    component += c
                else:
                    components.append(int(component))
                    component = c
        elif c.isdigit():
            if component is None:
                component = c
            else:
                if component.isdigit():
                    component += c
                else:
                    components.append(component)
                    component = c
        else:
            if component is not None:
                if component.isdigit():
                    component = int(component)
                components.append(component)
                component = None

    if component is not None:
        if component.isdigit():
            component = int(component)
        components.append(component)
        component = None

    return components
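The character loop in split_rpm_label can also be expressed as a single regular expression — an equivalent sketch, with the caveat that [a-zA-Z] only approximates isalpha() for ASCII labels:

```python
import re

def split_label_re(label):
    """Same splitting rule: maximal digit runs or letter runs,
    separators dropped, digit runs converted to int."""
    return [int(part) if part.isdigit() else part
            for part in re.findall(r'\d+|[a-zA-Z]+', label)]

# Matches the docstring examples above:
assert split_label_re('2.0.1') == [2, 0, 1]
assert split_label_re('2xFg33.+f.5') == [2, 'xFg', 33, 'f', 5]
```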

def rpm_label_cmp(label1, label2):
    '''
    The version and release components of an rpm NVR are considered labels.
    To compare a label we split the label into components; see split_rpm_label()
    for an explanation of how the components are split.

    The components in the list are compared one by one using the following
    algorithm. If two components are considered to be different, the label with
    the newer component wins as the newer label. If the components are
    considered to be equal, the next components are compared until we either
    reach different components or one of the lists runs out. In case one of 
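The archive truncates the docstring here. Based on the rules it does describe (newer component wins; equal components advance to the next pair; one list running out) plus RPM's documented behavior that numeric components sort newer than alphabetic ones and that, with an equal prefix, the longer list wins, a hedged sketch of the comparison step might look like:

```python
def rpm_label_cmp_sketch(c1, c2):
    """Compare two component lists (as produced by split_rpm_label).
    Returns -1, 0, or 1.  An illustrative reconstruction, not John's
    original rpm_label_cmp body."""
    for a, b in zip(c1, c2):
        a_num, b_num = isinstance(a, int), isinstance(b, int)
        if a_num != b_num:
            return 1 if a_num else -1   # numeric beats alphabetic
        if a != b:
            return 1 if a > b else -1   # numeric or lexical compare
    # equal prefix: the label with more components is newer
    return (len(c1) > len(c2)) - (len(c1) < len(c2))
```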

Re: [Freeipa-devel] ipa-devel repos on jdennis.fedorapeople.org

2015-12-22 Thread John Dennis

On 12/22/2015 09:50 AM, Petr Spacek wrote:

John, the machines which used to generate the repos are basically dead now.

Could you remove the directories and replace them with a README with a sentence
saying that the repos were discontinued, please?


I'd be happy to, but shouldn't the README contain a pointer to the 
preferred repo/copr, whatever it is now? What is it?



--
John



Re: [Freeipa-devel] ipa-devel repos on jdennis.fedorapeople.org

2015-08-27 Thread John Dennis

On 08/27/2015 04:27 AM, Petr Spacek wrote:

On 15.7.2015 09:44, Jan Pazdziora wrote:

On Tue, Jul 14, 2015 at 12:49:23PM -0400, John Dennis wrote:

On 07/14/2015 12:03 PM, Petr Spacek wrote:

Hello,

Is anyone using repos
https://jdennis.fedorapeople.org/ipa-devel/
?

AFAIK nobody in Brno is seriously using it, but I'm not sure about people
outside Brno.

Could we use COPR instead and get out of builder business? Upcoming lab
maintenance window could be a good time to do that.


I would love to get out of the builder business and I suspect Nalin would as
well [1]. The question came up in our Monday meeting as well. Nobody seemed to
know if anyone was using these builds or why we weren't using COPR. The

The Fedora infra admins should be able to provide HTTP logs for the
repo, if you need some numbers about potential usage.


That is a good idea! I got logs from the Fedora admins and as far as I can tell,
in the last month there were only 7 RPM downloads and nothing else.

The 7 hits I found were for
/ipa-devel/rhel/6/x86_64/os/sssd-1.13.1-0.20150813T1121Zgit137d5dd.el6.i686.rpm 
and
other packages from the same version.

I did not find any hits for IPA packages at all. All the remaining traffic
(except the 7 RPM hits) was from repo data refreshes:
- 83 % is RHEL 5 repodata
- 13 % is RHEL 6 repodata
- remaining ~ 4 % of noise is Fedora repodata

It seems to me that we can get out of the builder business completely,
decommission ipa-devel, and replace it with COPR.

Do you agree? John? Nathaniel? Stephen? Anyone? :-)


Yes, I agree. Do we have a cut off date when I can stop the service?



--
John



Re: [Freeipa-devel] Splitting out ipaldap

2015-04-14 Thread John Dennis
On 04/14/2015 10:38 AM, Petr Viktorin wrote:
 Hello!
 
 As some of you know, I'm looking to help porting FreeIPA to Python 3.
 One of the major dependencies holding this back is python-ldap, which 
 hasn't been ported yet. Some preliminary porting patches by Raphaël 
 Barrois [0] are ready and have been sent to the python-ldap list. The 
 python-ldap upstream has been very quiet about reviewing them so far, 
 but they're something for me to test against, and maybe improve.

Openstack is successfully running a py3 version of python-ldap. Maybe
you should look at what Openstack is doing.


-- 
John


Re: [Freeipa-devel] Time-based account policies

2015-03-10 Thread John Dennis
On 03/10/2015 12:13 PM, Alexander Bokovoy wrote:
 HBAC rule is a tuple (user|group, host|hostgroup, service|servicegroup).
 This tuple would get extension representing time/date information in a
 multivalued attribute that would describe all time/date intervals
 applicable to this rule.

I must be misunderstanding something. Recurrence rules yield an
unbounded number of allow intervals. Certainly you do not want to
enumerate and store all the intervals, instead you want to evaluate the
rule locally and obtain a simple yes/no answer, don't you?
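The local-evaluation idea can be made concrete with a toy weekly rule — purely illustrative (the function and its parameters are invented for this sketch, not the proposed HBAC attribute format):

```python
from datetime import datetime

def hbac_time_allowed(ts, weekdays, start_hour, end_hour):
    """Evaluate a simple weekly recurrence locally, returning the
    yes/no answer directly -- no interval enumeration or storage."""
    return ts.weekday() in weekdays and start_hour <= ts.hour < end_hour

# Business hours, Mon-Fri 09:00-17:00 (weekday 0 is Monday):
business = ({0, 1, 2, 3, 4}, 9, 17)
print(hbac_time_allowed(datetime(2015, 3, 10, 9, 30), *business))  # Tuesday
```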

-- 
John



Re: [Freeipa-devel] Time-based account policies

2015-03-10 Thread John Dennis
On 03/10/2015 12:56 PM, Alexander Bokovoy wrote:
 See my answer to John. We don't need to end up with iCal at all, since
 iCal doesn't have procedural definitions of holidays. It has
 EXDATE/RRULE allowing you to express exceptions and repeating rules (EXRULE
 for exception rules was removed in RFC 5545 and is not used anymore), but
 nothing more concrete.
 
 RFC 5545 does define multiple things which are part of the iCalendar format
 and which we don't really need to deal with in SSSD, so we don't need
 full iCal at all. We need to be able to represent recurring events and
 some of the exceptions to them within the rules, but that is a subset of what
 is needed and can be implemented without involving a fully-compliant
 iCal library.

I always get a bit concerned when I hear we'll factor out or just import
only the minimal code we need to support the minimal functionality we
need from an otherwise large complex body of code implementing an entire
RFC.

Maybe the code in libical is perfectly suited to extracting the snippets
we need (I don't know) but experience tells me complex code has complex
inter-dependencies and reducing libical to our minimal requirements
might be a significant effort. Is there really a problem with just
linking with the entire libical library even if it's more than we need?



-- 
John



Re: [Freeipa-devel] Time-based account policies

2015-03-09 Thread John Dennis
On 03/09/2015 03:45 PM, Simo Sorce wrote:
 We've been through this a few times already, it just doesn't work.
 At a minimum you need to be able to select between UTC and Local Time
 and it is a rathole down there (What time is it *here* may be a hard
 question to answer :-/)

O.K., I'll bite. Converting from UTC to local time using static UTC offsets is
fraught with perils. But if you do the local conversion utilizing the tz
(i.e. Olson) database, why would it be hard to determine local time?
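With the tz database the conversion is indeed mechanical — a sketch using Python's zoneinfo module (Python 3.9+), picking this email's date, which falls the day after the 2015 US DST switch that a static offset would get wrong:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # backed by the tz (Olson) database

# 2015-03-09 20:45 UTC: US DST began the day before (2015-03-08),
# so America/New_York is UTC-4 here, not the static UTC-5 offset.
utc = datetime(2015, 3, 9, 20, 45, tzinfo=timezone.utc)
local = utc.astimezone(ZoneInfo('America/New_York'))
print(local.isoformat())  # 16:45 EDT
```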

-- 
John



Re: [Freeipa-devel] Move FreeIPA translations to Zanata?

2015-01-09 Thread John Dennis
On 01/09/2015 07:42 AM, Martin Kosek wrote:
 I am forwarding my conversation with the Noriko from Fedora localization team 
 to the devel list, see below. What do you guys think? Any concerns with 
 moving 
 FreeIPA translations from Transifex to Zanata? SSSD project is moving there 
 as 
 well.

I have no personal experience with Zanata so my comment is worth what
you paid for it :-)

Just following the online discussions it does seem like Zanata is the
future, so probably better to be using the tool everyone else is using
instead of a lone hold out. Anecdotal reports suggest migrations are
relatively smooth. YMMV.


-- 
John

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] FreeIPA translations

2014-11-12 Thread John Dennis
On 11/12/2014 07:44 AM, Martin Kosek wrote:
 Hi folks,
 
 With Petr changing focus out of FreeIPA, somebody will need to replace his 
 work
 as the person behind FreeIPA Transifex translation.
 
 I think this is now the right time and place to ask - Is Transifex still the
 translation tool we want to continue supporting? I know there were objections
 in the past it is not completely open source. But AFAIK, it is one of the most
 used ones, even OpenStack uses it.

I have not been following it closely but there was talk of both Fedora
and Openstack moving from Transifex to Zanata

http://zanata.org/

As a matter of fact there were recent emails concerning the new Fedora
Zanata instance:

Move to Zanata Stage 1 - fedora.zanata instance available now!
https://lists.fedoraproject.org/pipermail/trans/2014-November/011717.html

Like I said, I haven't followed this closely but it seems like Zanata is
the future.


 
 If no, we need to find something better. If yes, we will need to make sure the
 process is well documented on the FreeIPA.org wiki + integrated to our Release
 process so that it can be handed to someone else. (volunteers? :)
 
 Thanks.
 


-- 
John



Re: [Freeipa-devel] FreeIPA Copr repo plan

2014-11-10 Thread John Dennis
On 11/10/2014 06:07 AM, Martin Kosek wrote:

 c) Daily repos. Should we deprecate John's old repos
 (http://www.freeipa.org/page/Downloads#Bleeding_Edge), which are
 difficult to maintain, and replace them with Copr ones? I.e. have a
 common repo (e.g. mkosek/freeipa-daily) built for the supported
 Fedoras (F20, F21, rawhide ATM) including dependencies?

Nalin does the actual builds; I noticed he wasn't on the CC list, so I
just added him to this reply.

From what I know of Copr, it's a better tool than our homegrown solution.
If you're already doing Copr builds then I don't see much logic in
keeping the old system. It makes sense to me that there should be one entity
pumping out the builds using the same technology; why duplicate efforts?
Let's use the better technology.

 Should it contain daily master builds for all tracked projects
 (FreeIPA, SSSD, 389 DS, bind-dyndb-ldap)? Or do we simply want to
 distribute it across projects: mkosek/freeipa-master,
 someone/sssd-master, someone/389-ds-base-master? The second option may
 scale better; the list of such repos may be maintained
 somewhere (freeipa.org wiki).

 3) Scalability of the approach. Some dependencies are more difficult
 to maintain than others. Especially the PKI ones often required
 custom Java packaging (resteasy-base) or a complicated dependency
 chain (the latest jackson-jaxrs-json-provider). It would be great if
 the PKI team helped with this effort, as Lukas proposed. The downside is that
 a mkosek/freeipa installation would require 2 Copr repos. But maybe we
 could have a job syncing the PKI build/runtime dependencies directly
 to the FreeIPA copr. Whatever works and scales.

I'm not sure I'm a fan of the scattered repo approach, simply because
it's hard for end users: too many yum repo configs to manage. One thing
I think did work well with the old setup is that the repo contained all the
necessary dependencies which could not be satisfied from the system
repos. I recognize the difficulty of trying to be the build master for a
collection of difficult-to-build packages. What we were doing with the
old system was pulling packages built elsewhere (i.e. Kevin did the CS
builds) and merging them into the repo; thus a user needed to point to only
one repo, but we weren't responsible for doing builds for everything,
just integrating. This makes the most sense to me.


-- 
John



Re: [Freeipa-devel] FreeIPA Copr repo plan

2014-11-10 Thread John Dennis
 It *is not* possible to merge one COPR repo into another.
 It is possible to add another yum repo into the build dependencies in COPR,
 but it usually means you will need to enable the 2nd repo for installation of
 freeipa as well.

The entire purpose of the script I wrote to manage the IPA repo is to pull
packages from diverse locations and merge them into one single unified
repo. We call this the repo builder.

The way this system currently works is that the repo builder listens for
messages from any number of builders; when it receives a message that a new
build is available, it merges the new package into the repo. To be more
precise, it actually merges all the different OS versions, arches,
debuginfo, multilib, etc. to produce one single repo whose layout is
identical to a Fedora repo. This is how we get a one-stop-shopping repo
for users to point to.

My contribution to this process does not include doing any builds;
instead my repo and the script that drives it assemble a unified repo
from builds others do. I thought Copr was capable of doing essentially
the same thing while also doing the actual builds. If Copr
cannot assemble an entire repo tree of OSes, arches, debuginfo,
multilib, etc., then that is a big missing piece which we have already
implemented and which has been working well.


-- 
John



Re: [Freeipa-devel] Switching to pytest

2014-10-07 Thread John Dennis
On 10/07/2014 04:37 AM, Petr Viktorin wrote:
 So in the test framework war I'm betting on this one.

That is the info I was looking for. Thanks!!

-- 
John



Re: [Freeipa-devel] Switching to pytest

2014-10-06 Thread John Dennis
On 10/03/2014 09:24 AM, Petr Viktorin wrote:
 https://fedorahosted.org/freeipa/ticket/4610
 
 Our test suite is currently not very maintainable. I want to dedicate 
 some time to improve this.
 The biggest part of this effort will be switching to a different test 
 framework, [pytest]. Compared to Nose, it makes complicated tests easier 
 to write -- at the expense of using more magic.

Just looking for information, why pytest?

Here is why I'm asking. After leaving IPA I went to another large Python-based
project, OpenStack. OpenStack is really a number of loosely
affiliated autonomous projects, each of which gets to decide how it is
going to write and run its unit tests. As might be expected, it's a
bit of a mess. But there has been some convergence lately on using tox,
testr, and testtools in combination. Python, like Java, has no shortage of
test frameworks, and that diversity is actually a pain point. What
happens is that very few folks actually understand the intricacies of test
construction and execution (because there are so many options, it can be
convoluted, there are many layers, and each project does things
differently). The net result is a lot of wasted time; I groan every time
tests get ported to the test framework du jour, which seems to happen
with alarming frequency.

I'd really like to see some convergence in the Python community on test
frameworks, so that as a developer you only have to be an expert in one
framework, because debugging test failures (including simply reproducing
the failure) is currently a time sink, and just when you think you
understand what the heck is going on, someone announces we're moving to
the next big thing. The proliferation of test frameworks is further
compounded by weak documentation. If something goes boom it can
literally take days to piece together enough information to diagnose the
issue (often because there are so many layers).

So how does pytest fit in with everything else? Is it stable, long-term, and
widely adopted? What about mocking? Is there a consensus on fixtures vs.
mocks (one that isn't religious)? Can pytest handle both?

I guess what I'm asking is: is there some convergence in Python test
tools? Or with testing are we in the equivalent of the SCM wars of 5-7
years ago, before most everyone figured out git was superior and had the
most traction?

Picking a test framework other than the one that bubbles to the top
and becomes the de facto standard has consequences. I clearly do not
have enough information to make a judgment here, and any additional
information would be very much appreciated.



-- 
John



Re: [Freeipa-devel] OpenLMI integration

2014-08-19 Thread John Dennis
Seeing all the blue lines of CIM -- DBus, I thought I would remind
folks that the realmd CIM provider has a nice DBus module that sits
between the extremely low-level DBus API and the very high-level DBus
GObject/GType API. In other words, it lets you do things in C code which
use GVariants, etc. *without* having to pull in the entire GObject/GType
complexity.

I always thought this should be a separate library so others who have to
code DBus in C could use it. [1] As Goldilocks said, "It's not too small,
it's not too big, it's just right" :-)

[1] Writing DBus code in C can be tedious and frustrating because you're
forced to use either something really primitive or something
extraordinarily complicated (unless you happen to be a Gnome guru and
totally grok the entire GObject/GType pseudo object-oriented class
framework that you have to code to). If the DBus interface you're coding
to includes things like variants, I think you'll appreciate the middle
approach.
-- 
John



Re: [Freeipa-devel] RPM's of different ipa versions

2014-07-14 Thread John Dennis
On 07/14/2014 04:19 AM, Petr Spacek wrote:
 On 11.7.2014 08:40, James wrote:
 This page seems to suggest that there are continuous builds available:

 http://www.freeipa.org/page/Downloads#Bleeding_Edge

 It seems this hasn't been updated since 2013, except that the .repo files
 have been updated recently? Does this still exist? Are there archives for each
 point release somewhere?

 In particular, I'm interested in knowing if there are repos with RPMs
 for each version/os. (>=v3.0.0 and Fedora/CentOS6+/RHEL6+)
 John, could you comment on this?


The bleeding edge repo mentioned on that page is what we call the
devel repo.

Is the devel repo still being updated?

Yes. However, it being an automated process, sometimes snafus occur that we
may not catch right away. For instance, I see the last update was on 7/2.
It looks like builds are failing for some reason. I don't do the builds,
Nalin does; I'll ping Nalin and see what the problem is.

Are there archives?

No! These builds are *not* official, they are intended for developers
*only*, and they are *ephemeral*. On any given day the builds might be
updated multiple times. The repo only has the *latest* devel builds.
Once an automated build completes, we purge any previous builds from the
repo.

Is there a build for every version/os?

Probably not. Once again, these builds are for developers only; we only
build what serves our developers at the moment. The list of what we
build changes. Typically we build current Fedora releases and current
RHEL releases. The package versions are *only* the newest, based on the
source tree (see above).

-- 
John



Re: [Freeipa-devel] user certificates

2014-06-11 Thread John Dennis
On 06/11/2014 04:02 AM, Fraser Tweedale wrote:
 There are other use cases for user certificates, e.g. client
 authentication for HTTP or other network services.  Perhaps you know
 of others - in which case let us know.

802.11 wireless authentication using EAP-TLS

A common discussion on the RADIUS mailing lists is the desire to deploy
using EAP-TLS, but the difficulty of provisioning user certs is always
the stumbling block.

-- 
John



Re: [Freeipa-devel] user certificates

2014-06-11 Thread John Dennis
On 06/11/2014 12:12 PM, Nathaniel McCallum wrote:
 On Wed, 2014-06-11 at 08:55 -0400, John Dennis wrote:
 On 06/11/2014 04:02 AM, Fraser Tweedale wrote:
 There are other use cases for user certificates, e.g. client
 authentication for HTTP or other network services.  Perhaps you know
 of others - in which case let us know.

 802.11 wireless authentication using EAP-TLS

 A common discussion on the RADIUS mailing lists is the desire to deploy
 using EAP-TLS but the difficulty of provisioning user certs is always
 the stumbling block.
 
 Why EAP-TLS over EAP-TTLS? Legacy support? You can use a combo of
 mechanisms to support older OSes (mainly Windows).

Because EAP-TLS is what is used for mutual client/server authentication
using PKI. EAP-TLS is supported on more legacy OS's (e.g. older
Windows). Microsoft only started supporting EAP-TTLS in Windows 8.
EAP-TLS is considered very secure, and my (unconfirmed) understanding is
that it's somewhat common in enterprise Windows deployments because
Microsoft makes it easy to provision client certs.

EAP-TTLS primarily sets up a tunnel for other (less secure) methods so
that sensitive information is not sent in the clear; note the leading T
in TTLS refers to the tunnel. Client authentication is optional with
EAP-TTLS. You could establish a TLS tunnel with EAP-TTLS and then run
EAP-TLS inside the tunnel, but the two TLS sessions make it much less
efficient; the advantage is that the username can remain anonymous with
EAP-TTLS/EAP-TLS, if that's actually a concern. If you're not concerned
about user anonymity (the outer identity), then there is no value in
establishing a tunnel to run other authentication protocols in; with
EAP-TLS, simply being able to complete the SSL handshake (with the
required client cert) is sufficient to establish authentication.

-- 
John



Re: [Freeipa-devel] Internationalized domain names in freeIPA

2013-11-05 Thread John Dennis
On 11/05/2013 11:13 AM, Martin Basti wrote:
 Hi list,
 
 I'm working on ticket: https://fedorahosted.org/freeipa/ticket/3169
 UTF-8 DNS names will be converted to punycode ASCII string and stored
 
 But there is a question, how to show DNS names to user (in UI or
 dnsrecord-show/find):
 * show them in punycode
 * convert them to UTF-8 and show
 * both ways
 * add options to show them in UTF-8
 
 I'll be thankful for your opinion.
 

We have a rule that all strings use UCS and that UCS text be interchanged
by encoding it in UTF-8. Therefore it seems to me the only time punycode
should ever exist is when it's necessary to encode/decode punycode for
DNS operations. Since punycode is a standard Python codec this should be
trivial; you just need to determine where you do the encode/decode
(perhaps also validating that user input can be successfully encoded).
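
Since punycode handling is indeed in the standard library, the round
trip is nearly a one-liner. A quick sketch (where IPA does the
encode/decode is the design decision at hand):

```python
# Python's built-in 'idna' codec applies punycode label by label,
# adding the ACE ("xn--") prefix only where a label needs it.
name = "b\u00fccher.example"            # bücher.example

ace = name.encode("idna")               # wire/storage form for DNS
assert ace == b"xn--bcher-kva.example"

# Converting back for display (UI, dnsrecord-show) is the reverse:
assert ace.decode("idna") == name

# The raw 'punycode' codec works on one label, without the prefix:
assert "b\u00fccher".encode("punycode") == b"bcher-kva"
```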

-- 
John



Re: [Freeipa-devel] Multiple CA certificates in LDAP, questions

2013-09-09 Thread John Dennis
On 09/09/2013 10:02 AM, Nalin Dahyabhai wrote:
 On Mon, Sep 09, 2013 at 11:17:02AM +0200, Jan Cholasta wrote:
 Should each IPA service (LDAP, HTTP, PKINIT) have its own
 distinctive set of trusted CAs, or is using one set for everything
 good enough? Using distinctive sets would allow granular control
 over what CA is trusted for what service (e.g. trust CA1 to issue
 certificates for LDAP and HTTP, but trust CA2 only to issue
 certificates for HTTP), but I'm not sure how useful that would be in
 the real world.
 
 I'd expect it to depend heavily on whether or not you're chaining up to
 an external CA.  Personally, I'd very much want to keep a different set
 of trust anchors for PKINIT in that situation.

If you've got an external CA you still effectively have one trust anchor
that can be revoked because we create a sub-CA from the external CA. Or
perhaps I misunderstood what you were suggesting.


-- 
John



Re: [Freeipa-devel] Multiple CA certificates in LDAP, questions

2013-09-09 Thread John Dennis
On 09/09/2013 10:24 AM, Nalin Dahyabhai wrote:
 On Mon, Sep 09, 2013 at 10:05:59AM -0400, John Dennis wrote:
 On 09/09/2013 10:02 AM, Nalin Dahyabhai wrote:
 I'd expect it to depend heavily on whether or not you're chaining up to
 an external CA.  Personally, I'd very much want to keep a different set
 of trust anchors for PKINIT in that situation.

 If you've got an external CA you still effectively have one trust anchor
 that can be revoked because we create a sub-CA from the external CA. Or
 perhaps I misunderstood what you were suggesting.
 
 My main concern is that the external CA, having issued one sub CA to us,
 can do so again for another customer, and trusting certificates because
 they chain up to that CA also allows that CA's other clients to issue
 certificates that we'd then also automatically trust.
 
 We can't revoke such certificates (which is done by noting the
 combination of issuer and serial number) until we know about them, and
 we'll only know about one of them after someone's used it to attempt to
 authenticate, possibly successfully.

Good point. Isn't there an X.509 extension (possibly part of PKIX?) which
restricts membership in the chain path according to some criteria? In
other words, you can require your sub-CA to be present in the chain.
Sorry, but my memory is a bit fuzzy on this.


-- 
John



Re: [Freeipa-devel] Multiple CA certificates in LDAP, questions

2013-09-09 Thread John Dennis
On 09/09/2013 05:17 AM, Jan Cholasta wrote:
 Another question:
 
 Should each IPA service (LDAP, HTTP, PKINIT) have its own distinctive 
 set of trusted CAs, or is using one set for everything good enough? 
 Using distinctive sets would allow granular control over what CA is 
 trusted for what service (e.g. trust CA1 to issue certificates for LDAP 
 and HTTP, but trust CA2 only to issue certificates for HTTP), but I'm 
 not sure how useful that would be in the real world.

That would complicate things quickly. Managing CA certs is already
challenging enough. Exploding this via combinations does not seem to
present enough real value for the complexity.

In the real world most deployments boil down to a single CA, and that
trust model has been effective. Don't forget you can always revoke any
cert issued by a CA. Having granular control over individual CAs does
not seem to present value, just complications. If your CA is compromised
you've got bigger things to worry about; having it be 1 of N does not
seem to change that equation radically. If one CA got compromised you've
got a lot of work to do to replace the trusted CA list everywhere, and
if one is compromised, why aren't the other CAs? Having to update just
one CA trust rather than potentially N is better.


-- 
John



Re: [Freeipa-devel] [freeipa] #3668: CA-less install fails when intermediate CA is used

2013-06-07 Thread John Dennis

On 06/07/2013 08:57 AM, Jan Cholasta wrote:

Yes, this is correct. The DS certificate must be directly signed by the
CA trusted by IPA (specified by --root-ca-cert in ipa-server-install),
there may be no intermediate CAs, because ldapsearch and friends and
python-ldap don't like them.


That doesn't sound right. Do we understand why a chain length > 1 is 
failing?


John




Re: [Freeipa-devel] [freeipa] #3668: CA-less install fails when intermediate CA is used

2013-06-07 Thread John Dennis

On 06/07/2013 09:26 AM, Jan Cholasta wrote:

On 7.6.2013 15:17, John Dennis wrote:

On 06/07/2013 08:57 AM, Jan Cholasta wrote:

Yes, this is correct. The DS certificate must be directly signed by the
CA trusted by IPA (specified by --root-ca-cert in ipa-server-install),
there may be no intermediate CAs, because ldapsearch and friends and
python-ldap don't like them.


That doesn't sound right. Do we understand why a chain length > 1 is
failing?



LDAP utilities and python-ldap only trust certificates directly issued
by CAs you point them to (at least on Fedora 18).


This sounds like a bug in MozLDAP (i.e. the NSS LDAP crypto provider). 
Have we filed a bug? Let's file the bug here in the Red Hat bugzilla, 
not upstream; we're the maintainers of MozLDAP and upstream is already 
frustrated with it.




Re: [Freeipa-devel] [Pki-announce] Announcing the release of Dogtag 10.0.2

2013-05-03 Thread John Dennis

On 05/02/2013 10:09 PM, Ade Lee wrote:

The Dogtag team is proud to announce the second errata build for
Dogtag v10.0.0.


Just wanted to say the CS team is doing great work. Thanks for all these 
improvements!


John

--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



Re: [Freeipa-devel] [RFE] CA-less install

2013-04-02 Thread John Dennis

On 04/02/2013 11:33 AM, Petr Viktorin wrote:

On 04/02/2013 02:14 AM, Robert Relyea wrote:

On 03/29/2013 07:40 AM, John Dennis wrote:

On 03/29/2013 07:57 AM, Petr Viktorin wrote:

On 03/27/2013 04:40 PM, John Dennis wrote:

On 03/27/2013 11:23 AM, Petr Viktorin wrote:

I don't want to check the subject because this RFE was prompted by
IPA's
normal CA rejecting valid wildcard certs. Is there a reasonable way to
ask NSS if it will trust the cert?


Yes. NSS provides a variety of tools to test validation.

Going just on memory here, our current version of python-nss has a
simple call to test validation. Sometime in the last year I added a
fair
amount of new support for certificate validation including getting back
diagnostic information for validation failures, however if I recall
correctly the extended functionality in python-nss has not been
released
yet.


Does the new code include downloading and importing CRLs?


Cert verification is a complex topic. This is further exacerbated by
the introduction of PKIX. My understanding is NSS had classic
verification code and later introduced PKIX. There has been an
evolution between classic verification and PKIX. This is outside my
domain of expertise. How and when CRL's are loaded in NSS is not
something I can give advice on, especially in an area undergoing change.

I'm going to have to defer to an expert in this area, Bob Relyea, I've
CC'ed him on this email.

It's hard to get the context in the middle, and as John noted, NSS is
transitioning from the old Cert_Verify interface to the new PKIX_ code.

I'll answer the question for the traditional CERTVerify code, which is
the only one you get in SSL by default, and the one most people still use:

No, CRLs are not downloaded and imported automatically, but if you
download and import CRLs, NSS will use them. There's an installable
PKCS #11 module which can be configured to download and install CRLs,
then provide them to NSS. It's called mod_revocator.

The expected revocation strategy NSS uses is OCSP, and you can turn on
automatic OCSP fetching.


Bob, to put this in context [1] the functionality in python-nss being
discussed is the binding of the CERT_VerifyCertificate() function,
something I added recently. Now the question arises as to how CRL's
are meant to play into the verification process. Can you please
explain how NSS expects this to be done? Pointers to existing
documentation and code examples would also be helpful.


There's a separate CERT_ImportCRL() which will import the CRL into the
database. mod_revocator() can also be used to do the fetching for you;
Matthew has examples of how various servers set them up (I believe the
only NSS setup is installing the module in your secmod.db/pkcs11.txt
with modutil).



It would also be helpful to understand the PKIX roadmap and how this
might affect coding decisions at the API level.


the PKIX interface is available now, and is actually used by Chrome (for
all certs) and Firefox (for ev processing). Firefox is in the process of
moving to libpkix for everything.  There is an environment variable you
can set (I don't remember it specifically, but I could look it up for
you if you want) that will cause the transitional
CERT_VerifyCertificate() interface to use the libpkix engine, but it
keeps the old CERT_VerifyCertificate semantics (like no CRL or AIA cert
fetching).

With libpkix, the revocation options are quite broad and complex. We
really expect people would use a set of preconfigured policies, though
libpkix API allows for quite some variance. It would take me some time
to dig up all the descriptions, but I can if you want them.


[1] Some additional context, the original motivation for exposing NSS
cert verification to IPA was to solve the following problem. If
someone wants to make the IPA CA a sub-CA (as opposed to a self-signed
CA) we want to validate the externally provided CA cert *before*
proceeding with the IPA installation. This is because if the CA cert
is invalid everything will hugely blow-up (because we use the CA cert
to sign all the certs issued in IPA, especially those used to validate
cooperating components/agents, if those certs do not work nothing in
IPA works). In addition to this narrow goal we in general want to be
able to perform cert verification correctly in other contexts as well
so the extent to which you can educate us in general on this topic
will be appreciated.

OK, thanks. I'd go ahead and start with CERT_VerifyCertificate() unless
you specifically need some of the advanced libpkix features.


The original context is sanity checking: is a SSL server cert we get
from a user valid? If it is then we install the corresponding server.
Requirements here are:
- No extra information from the user, other than the cert itself (the
admin gives us a cert, we don't want to ask how to find out if it's valid)
- It needs to be a simple call/tool, since there's little gain over just
documenting that we want good certs.
So it looks it's

Re: [Freeipa-devel] [RFE] CA-less install

2013-03-29 Thread John Dennis

On 03/29/2013 07:57 AM, Petr Viktorin wrote:

On 03/27/2013 04:40 PM, John Dennis wrote:

On 03/27/2013 11:23 AM, Petr Viktorin wrote:

I don't want to check the subject because this RFE was prompted by IPA's
normal CA rejecting valid wildcard certs. Is there a reasonable way to
ask NSS if it will trust the cert?


Yes. NSS provides a variety of tools to test validation.

Going just on memory here, our current version of python-nss has a
simple call to test validation. Sometime in the last year I added a fair
amount of new support for certificate validation including getting back
diagnostic information for validation failures, however if I recall
correctly the extended functionality in python-nss has not been released
yet.


Does the new code include downloading and importing CRLs?


Cert verification is a complex topic. This is further exacerbated by the 
introduction of PKIX. My understanding is NSS had classic verification 
code and later introduced PKIX. There has been an evolution between 
classic verification and PKIX. This is outside my domain of expertise. 
How and when CRL's are loaded in NSS is not something I can give advice 
on, especially in an area undergoing change.


I'm going to have to defer to an expert in this area, Bob Relyea, I've 
CC'ed him on this email.


Bob, to put this in context [1] the functionality in python-nss being 
discussed is the binding of the CERT_VerifyCertificate() function, 
something I added recently. Now the question arises as to how CRL's are 
meant to play into the verification process. Can you please explain how 
NSS expects this to be done? Pointers to existing documentation and code 
examples would also be helpful.


It would also be helpful to understand the PKIX roadmap and how this 
might affect coding decisions at the API level.


[1] Some additional context, the original motivation for exposing NSS 
cert verification to IPA was to solve the following problem. If someone 
wants to make the IPA CA a sub-CA (as opposed to a self-signed CA) we 
want to validate the externally provided CA cert *before* proceeding 
with the IPA installation. This is because if the CA cert is invalid 
everything will hugely blow-up (because we use the CA cert to sign all 
the certs issued in IPA, especially those used to validate cooperating 
components/agents, if those certs do not work nothing in IPA works). In 
addition to this narrow goal we in general want to be able to perform 
cert verification correctly in other contexts as well so the extent to 
which you can educate us in general on this topic will be appreciated.




--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] git versions for rpms in makefile

2013-03-27 Thread John Dennis

On 03/26/2013 10:41 PM, Orion Poplawski wrote:

On 03/26/2013 07:36 PM, Simo Sorce wrote:

On Tue, 2013-03-26 at 19:14 -0400, Rob Crittenden wrote:

Orion Poplawski wrote:

This patch uses the Fedora standard for git versioning
(version-#.gittag) when making rpms.  I'm afraid I haven't been able
to test for the non-git version case.


What is the purpose of this? We don't do upstream releases from
developer builds, so having the wrong Source0 doesn't seem like a big
deal (though I'll admit it's not strictly correct).


Sounds like a reasonable improvement to me, and it makes sure you do not
confuse an upstream build with a developer build. I am not so sure about
using __GIT__; it's kinda annoying to type, although shell completion
here helps, I guess.

I lean toward acking this approach, if you do not have objections Rob.

Simo.


My motivation for this was from testing the pkcs12 patches.  First I did
an srpm build and got 3.1.99GITec94138-0.fc18.src.rpm.  Then I updated
the git repo, did another srpm build and got
3.1.99GIT5acd43d-1.fc18.src.rpm which would have been lower EVR wise.
Now I can get:

3.1.99-1.GIT5acd43d.fc18.src.rpm

which would have been newer than

3.1.99-0.GITec94138.fc18.src.rpm

and allowed me to do yum update.

(actually, for pre-releases it should be -0.1.GIT, -0.2.GIT, ...)

So it doesn't impact releases, just local developer testing - which I
don't know how much is done via rpms.


I think you're confusing issues. You cannot use a git hash to properly 
collate based on build sequence. This is why in our development builds 
we prefix the git hash with a timestamp. It's the timestamp which 
affords the proper collation; the git hash is only informative. These 
issues do not arise in release builds because the version and/or release 
value is incremented.


But the above is independent of whether we should follow the Fedora 
convention for the git hash; that's probably a good idea, just realize 
it won't solve your problem of version collation.
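
The collation point can be shown directly with the hashes from the
example above (timestamps are hypothetical, and a plain lexical sort
stands in for RPM's segment-wise comparison here):

```python
# ec94138 was built first, 5acd43d second -- but sorting the bare
# hashes puts the later build first, so a hash cannot collate builds.
hashes_in_build_order = ["ec94138", "5acd43d"]
assert sorted(hashes_in_build_order) != hashes_in_build_order

# Prefixing a UTC timestamp (as the ipa-devel builds do) restores a
# meaningful order; the hash remains purely informative.
builds = [
    "0.20130313T1838Zgitec94138",
    "0.20130326T0941Zgit5acd43d",
]
assert sorted(builds) == builds
```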



--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] [RFE] CA-less install

2013-03-27 Thread John Dennis

On 03/27/2013 11:23 AM, Petr Viktorin wrote:

I don't want to check the subject because this RFE was prompted by IPA's
normal CA rejecting valid wildcard certs. Is there a reasonable way to
ask NSS if it will trust the cert?


Yes. NSS provides a variety of tools to test validation.

Going just on memory here, our current version of python-nss has a 
simple call to test validation. Sometime in the last year I added a fair 
amount of new support for certificate validation including getting back 
diagnostic information for validation failures, however if I recall 
correctly the extended functionality in python-nss has not been released 
yet.


Finding time to work on python-nss has been a problem. This is further 
complicated by the fact that Mozilla changed from CVS to Mercurial while 
I had this code in development, and I haven't moved over to the new 
distributed SCM yet.



--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] [RFE] CA-less install

2013-03-27 Thread John Dennis

On 03/27/2013 12:42 PM, Petr Viktorin wrote:

On 03/27/2013 05:09 PM, Rob Crittenden wrote:
[...]

Well, I don't like how PEM file duplicates an unnecessary amount of
information (the whole certificate). Also, copy-pasting subject might be
faster than exporting certificate in PEM and uploading it to the
server...


We're talking about a one-time operation. I don't think it's asking too
much. It also gives the user some amount of control rather than assuming
that whatever tool they're using to create the PKCS#12 file is also
smart enough to include the right CAs.


Well, to be fair, if there are any intermediate CAs, they need to be in
the PKCS#12. (In the future there may be support for multiple root CAs,
which would all get explicit trust. Those would all go in the PEM, so
intermediate ones must be somewhere else -- in the PKCS#12.)

Anyway I think it's unlikely that everybody will have the certs in the
right format for IPA by default, whatever that format is.
Honza has a point, but... If one solution is clearly better (in terms of
best/common practices in organizations this feature is for), I'm happy
to change it. Otherwise let's paint the bikeshed with the color I have
ready :)

[...]

I don't want to check the subject because this RFE was prompted by IPA's
normal CA rejecting valid wildcard certs. Is there a reasonable way to
ask NSS if it will trust the cert? If there is I can put it in, but I
don't want to re-create the validation.


I'm not sure TBH. Maybe someone with more NSS experience could answer
this?


certutil -V -u V will do it.


The usage is already checked -- and with this command, too :)
The problem here is hostname validation.


I don't think it would be onerous to ensure that either the FQDN is in
the CN or the CN is a wildcard ('*'). python-nss has fairly easy ways to
grab the subject out of a cert for this comparison.
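
A minimal sketch of that comparison (a hypothetical helper; full
hostname matching per RFC 6125 has more rules, so treat this as
illustrative only):

```python
def cn_matches_fqdn(cn: str, fqdn: str) -> bool:
    """Accept an exact CN match or a single left-most wildcard label.
    Deliberately simplified; not a substitute for full RFC 6125
    hostname verification."""
    if cn == fqdn:
        return True
    if cn.startswith("*."):
        # "*.example.com" matches "www.example.com" but not
        # "example.com" or "a.b.example.com".
        head, _, rest = fqdn.partition(".")
        return bool(head) and rest == cn[2:]
    return False

assert cn_matches_fqdn("ipa.example.com", "ipa.example.com")
assert cn_matches_fqdn("*.example.com", "www.example.com")
assert not cn_matches_fqdn("*.example.com", "example.com")
assert not cn_matches_fqdn("*.example.com", "a.b.example.com")
```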



The code checks for the whole cert chain, and that's there only one
server cert. Does that not work?


Actually I didn't check this specifically. But, I used a server
certificate with wrong subject and that made ipa-server-install fail.



One of the many cases that we will need to handle.


I found that python-nss has a verify_hostname call. I'll add it.


It also has Certificate.verify_now(). There are examples of usage in 
either the doc/examples directory or the test directory.


NB, the cert has to be in the database, a possible limitation for the 
intended usage. The enhanced unreleased code dispenses with that 
restriction and adds additional functionality.





--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] [RFE] CA-less install

2013-03-27 Thread John Dennis

On 03/27/2013 12:44 PM, Petr Viktorin wrote:

On 03/27/2013 04:40 PM, John Dennis wrote:

On 03/27/2013 11:23 AM, Petr Viktorin wrote:

I don't want to check the subject because this RFE was prompted by IPA's
normal CA rejecting valid wildcard certs. Is there a reasonable way to
ask NSS if it will trust the cert?


Yes. NSS provides a variety of tools to test validation.


Thanks! I'll take a look at it again.


Going just on memory here, our current version of python-nss has a
simple call to test validation. Sometime in the last year I added a fair
amount of new support for certificate validation including getting back
diagnostic information for validation failures, however if I recall
correctly the extended functionality in python-nss has not been released
yet.


I'll add verify_hostname from the validation example; if there's
anything else please give me a pointer -- I haven't read all the docs, yet.



doc/examples/verify_server.py
test/test_client_server.py

illustrate example usage.

--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] master is broken on F18

2013-03-17 Thread John Dennis

On 03/16/2013 05:19 PM, Kai Engert wrote:

On Fri, 2013-03-15 at 16:07 -0400, John Dennis wrote:

On 03/15/2013 12:56 PM, Alexander Bokovoy wrote:

Hi!

I was investigating why installing master fails on F18 +
updates-testing and found out that install fails with
freeipa-server-3.1.99-0.20130313T1838Zgit158bf45.fc18.x86_64 from
ipa-devel repo

2013-03-15T16:17:40Z DEBUG args=/usr/bin/certutil -d /etc/httpd/alias
-R -s CN=jano.ipa.team,O=IPA.TEAM -o
/var/lib/ipa/ipa-aza7Wg/tmpcertreq -k rsa -g 2048 -z
/etc/httpd/alias/noise.txt -f /etc/httpd/alias/pwdfile.txt -a
2013-03-15T16:17:41Z DEBUG Process finished, return code=0
2013-03-15T16:17:41Z DEBUG stdout= 2013-03-15T16:17:41Z DEBUG
stderr=

Generating key.  This may take a few moments...


I believe this is a known problem in certutil where it writes data to
the wrong file descriptor. The problem was fixed upstream about 10 days
ago, I'm not sure if Fedora has the fix yet or not. Kai would know, I've
added him on the cc list.



Hi John,

in the above cited message, you didn't include the failure you were
seeing, so I have to guess.

This one is the only functional patch to certutil during the last 8-9
weeks that I could find. Do you refer to this one?

Bug 840714 - certutil -a does not produce ASCII output
https://bugzilla.mozilla.org/show_bug.cgi?id=840714

It was a regression in NSS 3.14.2, and it got fixed in 3.14.3. Fedora 18
apparently received that update on Feb 24.
http://tinyurl.com/bym4rlh

If the above didn't help, please send more details or ping me on IRC.


Thank you Kai. Yes, that was the regression I was referring to. It's 
good to know when the fix appeared because we've had a number of folks 
report problems due to it. However Alexander's issue may be something 
else. In any event, thank you.



--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] master is broken on F18

2013-03-15 Thread John Dennis

On 03/15/2013 12:56 PM, Alexander Bokovoy wrote:

Hi!

I was investigating why installing master fails on F18 +
updates-testing and found out that install fails with
freeipa-server-3.1.99-0.20130313T1838Zgit158bf45.fc18.x86_64 from
ipa-devel repo

2013-03-15T16:17:40Z DEBUG args=/usr/bin/certutil -d /etc/httpd/alias
-R -s CN=jano.ipa.team,O=IPA.TEAM -o
/var/lib/ipa/ipa-aza7Wg/tmpcertreq -k rsa -g 2048 -z
/etc/httpd/alias/noise.txt -f /etc/httpd/alias/pwdfile.txt -a
2013-03-15T16:17:41Z DEBUG Process finished, return code=0
2013-03-15T16:17:41Z DEBUG stdout= 2013-03-15T16:17:41Z DEBUG
stderr=

Generating key.  This may take a few moments...


I believe this is a known problem in certutil where it writes data to
the wrong file descriptor. The problem was fixed upstream about 10 days
ago, I'm not sure if Fedora has the fix yet or not. Kai would know, I've
added him on the cc list.

John

--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] Using the new LDAP code

2013-02-27 Thread John Dennis

On 02/27/2013 06:46 AM, Petr Viktorin wrote:

Hello,
A big refactoring of our LDAP code should be merged soon-ish now. Here's
a summary for developers.


Great, that's fabulous news and thanks for the good work!


IPA plugins traditionally use (dn, entry_attrs) pairs to represent
entries. To make that work, iterating over an LDAPEntry will, for now,
yield the DN and the entry itself. Always use keys() or values() when
iterating over an entry.


I'm trying to parse what the above means; it seems to be one of two things:

1) There is still a bunch of code that continues to use 2-tuples in the 
plugins and additional work remains to converge on a single API.


2) All the code that used 2-tuples has been cleaned up and expunged; 
however, if the old 2-tuple methodology were used it would still work.


--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] Using the new LDAP code

2013-02-27 Thread John Dennis

On 02/27/2013 11:23 AM, Jan Cholasta wrote:

Hi,

On 27.2.2013 17:09, John Dennis wrote:

IPA plugins traditionally use (dn, entry_attrs) pairs to represent
entries. To make that work, iterating over an LDAPEntry will, for now,
yield the DN and the entry itself. Always use keys() or values() when
iterating over an entry.


 I'm trying to parse what the above means; it seems to be one of two things:

1) There is still a bunch of code that continues to use 2-tuples in the
plugins and additional work remains to converge on a single API.

2) All the code that used 2-tuples has been cleaned up and expunged,
however if the old 2-tuple methodology was used it would still work.



it's the former, there is still code that uses 2-tuples.


O.K. so we're still a ways away from completing the task. Would this be 
a correct summary?


Phase 1 is completed, a consistent API has been defined and implemented. 
Phase 2 is converting all the code to use the API defined in Phase 1, a 
task yet to be done.



--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] Using the new LDAP code

2013-02-27 Thread John Dennis

On 02/27/2013 12:22 PM, Jan Cholasta wrote:

On 27.2.2013 18:14, John Dennis wrote:

On 02/27/2013 11:23 AM, Jan Cholasta wrote:

Hi,

On 27.2.2013 17:09, John Dennis wrote:

IPA plugins traditionally use (dn, entry_attrs) pairs to represent
entries. To make that work, iterating over an LDAPEntry will, for now,
yield the DN and the entry itself. Always use keys() or values() when
iterating over an entry.


 I'm trying to parse what the above means; it seems to be one of two things:

1) There is still a bunch of code that continues to use 2-tuples in the
plugins and additional work remains to converge on a single API.

2) All the code that used 2-tuples has been cleaned up and expunged,
however if the old 2-tuple methodology was used it would still work.



it's the former, there is still code that uses 2-tuples.


O.K. so we're still a ways away from completing the task. Would this be
a correct summary?

Phase 1 is completed, a consistent API has been defined and implemented.
Phase 2 is converting all the code to use the API defined in Phase 1, a
task yet to be done.



Correct. But I don't think converting 2-tuples to the new API will be a
huge task.


Cool!


--
John Dennis jden...@redhat.com



[Freeipa-devel] NSS 3.14.3 Release, this fixes the certutil bug encountered during install

2013-02-20 Thread John Dennis
NSS 3.14.3 was just released by the NSS team. This is critical for us 
because it fixes https://bugzilla.mozilla.org/show_bug.cgi?id=840714 
which was causing install failures as reported on this list. I expect a 
new RPM will follow shortly.


John



 Original Message 
Subject: [ANNOUNCE] NSS 3.14.3 Release
Date: Wed, 20 Feb 2013 12:07:38 -0800
From: Ryan Sleevi ryan-mozdevtechcry...@sleevi.com
Reply-To: mozilla's crypto code discussion list 
dev-tech-cry...@lists.mozilla.org

To: dev-tech-cry...@lists.mozilla.org

The NSS Development Team is pleased to announce the release of NSS 3.14.3.

The official release notes are available at
https://developer.mozilla.org/en-US/docs/NSS/NSS_3.14.3_release_notes ,
and are reproduced at the end of this message.

This release includes mitigations for recently discussed Lucky Thirteen
attack (CVE-2013-1620). However, please note the limitations of the
mitigations discussed in the release notes below.



Introduction:

Network Security Services (NSS) 3.14.3 is a patch release for NSS 3.14.
The bug fixes in NSS 3.14.3 are described in the Bugs Fixed section
below.

Distribution Information

* The CVS tag is NSS_3_14_3_RTM. NSS 3.14.3 requires NSPR 4.9.5 or newer.
* NSS 3.14.3 source distributions are also available on ftp.mozilla.org
for secure HTTPS download:
  - Source tarballs:
https://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_14_3_RTM/src/

New in NSS 3.14.3

* No new major functionality is introduced in this release. This release
is a patch release to address CVE-2013-1620.

New Functions

* in pk11pub.h
 - PK11_SignWithSymKey - Similar to PK11_Sign, performs a signing
operation in a single operation. However, unlike PK11_Sign, which uses a
SECKEYPrivateKey, PK11_SignWithSymKey performs the signature using a
symmetric key, such as commonly used for generating MACs.

New Types

* CK_NSS_MAC_CONSTANT_TIME_PARAMS - Parameters for use with
CKM_NSS_HMAC_CONSTANT_TIME and CKM_NSS_SSL3_MAC_CONSTANT_TIME.

New PKCS #11 Mechanisms

* CKM_NSS_HMAC_CONSTANT_TIME - Constant-time HMAC operation for use when
verifying a padded, MAC-then-encrypted block of data.
* CKM_NSS_SSL3_MAC_CONSTANT_TIME - Constant-time MAC operation for use when
verifying a padded, MAC-then-encrypted block of data using the SSLv3 MAC.

Notable Changes in NSS 3.14.3

* CVE-2013-1620
Recent research by Nadhem AlFardan and Kenny Paterson has highlighted a
weakness in the handling of CBC padding as used in SSL, TLS, and DTLS that
allows an attacker to exploit timing differences in MAC processing. The
details of their research and the attack can be found at
http://www.isg.rhul.ac.uk/tls/, and has been referred to as Lucky
Thirteen.

NSS 3.14.3 includes changes to the softoken and ssl libraries to address
and mitigate these attacks, contributed by Adam Langley of Google. This
attack is mitigated when using NSS 3.14.3 with an NSS Cryptographic Module
(softoken) version 3.14.3 or later. However, this attack is only
partially mitigated if NSS 3.14.3 is used with the current FIPS validated
NSS Cryptographic Module, version 3.12.9.1.

* Bug 840714 - certutil -a was not correctly producing ASCII output as
requested.
* Bug 837799 - NSS 3.14.2 broke compilation with older versions of sqlite
that lacked the SQLITE_FCNTL_TEMPFILENAME file control. NSS 3.14.3 now
properly compiles when used with older versions of sqlite.

Acknowledgements

* The NSS development team would like to thank Nadhem AlFardan and Kenny
Paterson (Royal Holloway, University of London) for responsibly
disclosing the issue by providing advance copies of their research. In
addition, thanks to Adam Langley (Google) for the development of a
mitigation for the issues raised in the paper, along with Emilia Kasper
and Bodo Möller (Google) for assisting in the review and improvements to
the initial patches.

Bugs fixed in NSS 3.14.3

*
https://bugzilla.mozilla.org/buglist.cgi?list_id=5689256;resolution=FIXED;classification=Components;query_format=advanced;target_milestone=3.14.3;product=NSS

Compatibility

* NSS 3.14.3 shared libraries are backward compatible with all older NSS
3.x shared libraries. A program linked with older NSS 3.x shared libraries
will work with NSS 3.14.3 shared libraries without recompiling or
relinking. Furthermore, applications that restrict their use of NSS APIs
to the functions listed in NSS Public Functions will remain compatible
with future versions of the NSS shared libraries.

Feedback

* Bugs discovered should be reported by filing a bug report with
bugzilla.mozilla.org (product NSS).

--
dev-tech-crypto mailing list
dev-tech-cry...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto



Re: [Freeipa-devel] Backup and Restore design

2013-02-19 Thread John Dennis

On 02/19/2013 10:43 PM, Rob Crittenden wrote:

I've looked into some basic backup and restore procedures for IPA. My
findings are here: http://freeipa.org/page/V3/Backup_and_Restore


Good write up Rob!

It seems to me there are two critical sub-issues to solve first that 
could benefit us in the long run anyway.


1) Replacing certs. This has been a pain point for many admins who for 
various reasons now have invalid certs and need to redeploy all certs to 
bring back up our integrated web of services. Can this be a separate 
work item? It would be generally useful in contexts other than backups.


2) Tracking every file we modify. Aren't we already a good way there 
with the sysrestore functionality already in use in the install tools? 
It is just a matter of enhancing it a bit and being diligent about 
making sure it's used in every modification we make? Actually I'm not 
thinking we track every modification, rather we keep a manifest of 
everything we've touched and then just snapshot those files during 
backup, this would capture any modifications not made by us but perhaps 
are critical to restoring a working system (i.e. puppet modifications). 
The sysrestore module could provide the manifest, right?


John



Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-14 Thread John Dennis

On 02/14/2013 09:05 AM, Simo Sorce wrote:

So as I proposed we can call ipa user-add from LDAP from a
non-transactional pre-op plugin. We just need to be careful about when
we allow that to avoid loops, but besides that problem it seems
relatively easy and does not require crazy things like playgrounds or
even full LDAP proxies.


I think I need a clarification because perhaps I didn't fully understand 
your proposal.


Is the idea that a non-transactional pre-op plugin invokes user-add
and then returns *without* having modified LDAP? In effect it
acts as a trigger?


That still implies there has to be a separate tree where the foreign 
entity writes (and the pre-op plugin watches) because otherwise how 
could the pre-op plugin distinguish between framework writes and foreign 
writes?


If there is a separate tree where is the looping issue? You still 
haven't explained this.


Also, under the scenario that a foreign entity writes something into 
LDAP (somewhere) and it triggers us to call user-add via some mechanism, 
then what happens when errors occur? The foreign entity will not know 
that we rejected the operation, nor why.


Also, don't forget they want to delete users, remove group membership, 
add group membership, add groups, remove groups etc. Some of these 
operations are dependent upon logic in our framework. I don't see how 
some of these operations can be reliably managed by a foreign entity 
simultaneously performing LDAP operations.





Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-14 Thread John Dennis

On 02/14/2013 09:29 AM, Rob Crittenden wrote:

In reality we don't actually pass migrated users to user-add for
performance reasons.


What parts of user-add are we bypassing?




Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread John Dennis
I appreciate Simo's concern for authorization and audit in this process, 
we must solve that problem. If I understand the proposal correctly it's 
akin to recording a macro that can be replayed. The framework executes 
as normal but instead of issuing the LDAP modify commands we record 
them. Then after the entire command completes we send the recorded 
operations back to 389DS in some form. Did I understand this correctly? 
If so I'm very much against the idea of sending JSON back to 389DS to 
execute the totality of the operation. Why? It either breaks or has the 
potential to break our entire processing model, pre and post operations, 
validity checks (e.g. querying the current state), user-supplied plugins, 
etc. I could see this working in some limited cases which might give you 
the illusion it would work. But the only robust general solution I think 
we can sign up for supporting is to use the API commands we designed, 
period. Anything else just seems like a nightmare scenario of corner cases.


Therefore I think the proposal of watching something (yet to be 
determined), calling our API commands, and then cleaning up the watched 
entity afterwards is the best approach. Figuring out how to 
authenticate/authorize/audit this is the primary challenge, a challenge 
far more manageable than trying to subvert the framework with every 
known and unknown risk that introduces. It's hard enough as it is 
assuring our documented API works correctly. Our API is the only thing I 
think we can realistically commit to supporting.




Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread John Dennis

On 02/13/2013 12:53 PM, Simo Sorce wrote:


If we can solve the looping and potential deadlocking concerns I think
we can avoid the json reply and let the framework do the actual final
ldap add.


Could you elaborate on your looping and deadlock concerns? I don't see 
where they would arise if what we're watching is entirely independent of 
our LDAP tree.




Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread John Dennis

On 02/13/2013 01:30 PM, Rob Crittenden wrote:

Simo Sorce wrote:

On Wed, 2013-02-13 at 12:59 -0500, John Dennis wrote:

On 02/13/2013 12:53 PM, Simo Sorce wrote:


If we can solve the looping and potential deadlocking concerns I think
we can avoid the json reply and let the framework do the actual final
ldap add.


Could you elaborate on your looping and deadlock concerns? I don't see
where they would arise if what we're watching is entirely independent of
our LDAP tree.


I do not understand what you are 'watching' ?

Simo.



I think he means have a persistent search to watch for new entries and
then act upon them.


Yes, it could either be a persistent search or an external (cron) 
process that periodically polls LDAP. In each case there is an LDAP tree 
used as a staging area. The staging area is completely independent of 
IPA's LDAP area. If something shows up in the staging area the contents 
of the staging area are used to drive our IPA commands. The staging area 
is then cleaned of all the entries which succeeded.


Under this scenario what are the looping and deadlock concerns?
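The staging-area workflow described above could be sketched roughly as follows. Everything here is a hypothetical stand-in: `drain_staging`, `fake_user_add`, and the entry shapes are illustrative only; real code would search the staging subtree with python-ldap (via a persistent search or periodic poll) and call the actual `ipa user-add` command:

```python
# Illustrative sketch: drain staged entries, drive the IPA API for each one,
# and remove only the entries that succeeded, leaving failures in the
# staging area for inspection.
def drain_staging(staging_entries, user_add):
    """Process staged entries; return (succeeded, still_staged)."""
    succeeded, still_staged = [], []
    for entry in staging_entries:
        try:
            user_add(uid=entry['uid'])   # stand-in for the IPA command call
            succeeded.append(entry)      # success: safe to clean from staging
        except Exception:
            still_staged.append(entry)   # failure: leave staged
    return succeeded, still_staged

added = []
def fake_user_add(uid):
    # Simulates framework validation rejecting one entry.
    if uid == 'bad':
        raise ValueError('validation failed')
    added.append(uid)

ok, left = drain_staging([{'uid': 'alice'}, {'uid': 'bad'}], fake_user_add)
assert added == ['alice']
assert [e['uid'] for e in left] == ['bad']
```

Because the staging subtree is independent of IPA's own tree, processing it never re-triggers itself, which is the point of the question about where loops could arise.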




Re: [Freeipa-devel] installing from built RPMs on F18

2013-02-13 Thread John Dennis

On 02/13/2013 02:59 PM, Brian Cook wrote:

Hello,

When I try install from RPMs created from 'make rpms' on F18 I get the 
following error:
2013-02-13T19:49:27Z INFO The ipa-server-install command failed, exception: 
IndexError: list index out of range


Here are the few log entries before it:

2013-02-13T19:49:27Z INFO   File 
/usr/lib/python2.7/site-packages/ipaserver/install/installutils.py, line 617, 
in run_script
 return_value = main_function()

   File /sbin/ipa-server-install, line 986, in main
 dm_password, subject_base=options.subject)

   File /usr/lib/python2.7/site-packages/ipaserver/install/cainstance.py, 
line 623, in configure_instance
 self.start_creation(runtime=210)

   File /usr/lib/python2.7/site-packages/ipaserver/install/service.py, line 
358, in start_creation
 method()

   File /usr/lib/python2.7/site-packages/ipaserver/install/cainstance.py, 
line 1225, in __request_ra_certificate
 self.requestId = item_node[0].childNodes[0].data

2013-02-13T19:49:27Z INFO The ipa-server-install command failed, exception: 
IndexError: list index out of range


Any idea what I am doing wrong?


You're not doing anything wrong.

Caused by a bug in the nss package, see this thread
https://www.redhat.com/archives/freeipa-users/2013-February/msg00195.html

downgrade nss
install
upgrade nss





Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread John Dennis

On 02/13/2013 10:40 PM, Nathan Kinder wrote:

With the DS plug-in approach that calls into the IPA framework with a
'mock add' to form the resulting entry at the pre-op stage, we could
take care of the initial ADD operation of the user entry.  We would
still need to have a way to trigger the additional write operations that
the IPA framework performs.  The proposed DS plug-in could do an
internal search operation after the add, then it could perform the
additional write operations that are needed.  This logic would need to
be in the DS plug-in, or we would need another way to get the details
from the IPA framework via JSON.  The approach begins to get a bit messy.


Messy is one word for it :-) Herein lies the problem. user-add is 
actually a simple case where in this particular version of our code we 
know the LDAP operations that occur and in what order. But there is 
nothing in our coding guidelines that guarantees this, it just happens 
to be true today. Pre and post callbacks are free to do as they please. 
The logic in our ldap module can change (in fact it's undergoing a 
rewrite as we speak). We've committed to supporting an extension 
mechanism (plugins) that ties into the framework operations and who 
knows what might occur in that code. At the moment it's not used, but 
hopefully it will be.


Then comes the question, where do we draw the line? Do we say only 
user-add with no additional options is supported because we know what 
occurs during it's processing and hence we feel safe emulating user-add 
outside the framework?


We all know this is a slippery slope, as soon as you support user-add 
this way we'll be getting requests to support other commands or other 
options. And after we make framework changes (which we do frequently) 
are we going to test these out-of-band operations continue to emulate 
the ever changing framework?


It's a slippery slope that can expose us to problems we don't need.

Jenny, do you want to test an entirely different mechanism or do you 
want to limit it to our defined API?


We have a defined API, I really think that's the only thing we should 
support. Backdoor shortcuts that sidestep our framework should be off 
the table IMHO, it's just inviting too many problems.





Re: [Freeipa-devel] [RFC] Creating a new plugin to make it simpler to add users via LDAP

2013-02-13 Thread John Dennis

On 02/14/2013 12:16 AM, Nathan Kinder wrote:

On 02/13/2013 08:30 PM, John Dennis wrote:

On 02/13/2013 10:40 PM, Nathan Kinder wrote:

With the DS plug-in approach that calls into the IPA framework with a
'mock add' to form the resulting entry at the pre-op stage, we could
take care of the initial ADD operation of the user entry.  We would
still need to have a way to trigger the additional write operations that
the IPA framework performs.  The proposed DS plug-in could do an
internal search operation after the add, then it could perform the
additional write operations that are needed.  This logic would need to
be in the DS plug-in, or we would need another way to get the details
from the IPA framework via JSON.  The approach begins to get a bit
messy.


Messy is one word for it :-) Herein lies the problem. user-add is
actually a simple case where in this particular version of our code we
know the LDAP operations that occur and in what order. But there is
nothing in our coding guidelines that guarantees this, it just happens
to be true today. Pre and post callbacks are free to do as they
please. The logic in our ldap module can change (in fact it's
undergoing a rewrite as we speak). We've committed to supporting an
extension mechanism (plugins) that ties into the framework operations
and who knows what might occur in that code. At the moment it's not
used, but hopefully it will be.

Then comes the question, where do we draw the line? Do we say only
user-add with no additional options is supported because we know what
occurs during its processing and hence we feel safe emulating
user-add outside the framework?

We all know this is a slippery slope, as soon as you support user-add
this way we'll be getting requests to support other commands or other
options. And after we make framework changes (which we do frequently)
are we going to test that these out-of-band operations continue to emulate
the ever changing framework?

It's a slippery slope that can expose us to problems we don't need.

Jenny, do you want to test an entirely different mechanism or do you
want to limit it to our defined API?

We have a defined API, I really think that's the only thing we should
support. Backdoor shortcuts that sidestep our framework should be off
the table IMHO, it's just inviting too many problems.



This is why I think that the most sane approach is to put something in
front of 'ipa user-add'.  This is not without it's own set of
difficulties, but if we need a basic LDAP interface that can be used to
create IPA users, it should end up calling into the same code that 'ipa
user-add' uses.  This would ensure that any extensions (plugins) for the
framework are called.



I actually see this as very similar to the LDAP migration case.  For
migration, we basically take entries in LDIF format, extract the
relevant data, then feed it into 'ipa user-add', right?  That's more or
less what we need to do here for HR systems that provision users, but
the user addition needs to be pushed to IPA via an LDAP client instead
of IPA pulling from an LDAP server.


Yes, this is exactly what I'm saying and we concur.




Re: [Freeipa-devel] Move ipaldap to ipalib?

2013-01-30 Thread John Dennis

On 01/30/2013 09:41 AM, Petr Viktorin wrote:

On 01/30/2013 03:28 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

Hello,

I've noticed that the client installer uses python-ldap directly, and
duplicates some code we already have.

Most recently, a convert_ldap_error function appeared in ipautil.
Similar mechanisms are already in ipaldap, ipa-csreplica-manage,
ipa-replica-manage, and my patches remove one in ldap2.

Are there any objections to moving ipaldap.py to ipalib?



ipaldap.py is going away eventually, right?


No, the common code is in ipaldap.py.
ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the
magic per-thread connections, and has some extra helpers that can assume
IPA is installed.


It is probably better suited for ipapython if we move it though.


Sure, makes no difference to me :)
To be honest though, I never got the difference between ipalib and
ipapython. Could you explain it for me?



I asked the same question a while ago, see this thread for the answers

https://www.redhat.com/archives/freeipa-devel/2011-October/msg00392.html

FWIW I still couldn't fully reconcile what was said with what is in the 
code, I suspect that's because we might not have been disciplined when 
adding new code. I continue to find it a source of confusion. What 
was said in that thread implies the client should not depend on ipalib, 
but it does, go figure.


Getting everything partitioned into their logical locations, namespaces, 
imports, etc. is a code clean-up effort that should be addressed at some 
point. Hopefully we'll iterate on that piecemeal, but only if we 
actually understand the rules :-)




Re: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data

2013-01-21 Thread John Dennis

On 01/16/2013 10:45 AM, Jan Cholasta wrote:

Hi,

this patch adds initial support for custom LDAP entry objects, as
described in http://freeipa.org/page/V3/LDAP_code.



Just in case you missed it, I added some requirements to the above 
design page about making LDAP attributes and their values be smarter.


An LDAP attribute has a syntax defining how comparisons are to be 
performed. Python code using standard Python operators, sorting 
functions, etc. should just work because underneath the object is 
aware of its LDAP syntax.


The same holds true for attribute names, it should just work correctly 
any place we touch an attribute name because it's an object implementing 
the desired comparison and hashing behavior.


Thus the keys in an Entry dict would need to be a new class and the 
values would need to be a new class as well. Simple strings do not give 
rich enough semantic behavior (we shouldn't be providing this semantic 
behavior every place in the code where we touch an attribute name or 
value, rather it should just automatically work using standard Python 
operators.)
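The attribute-name requirement, comparison and hashing that ignore case the way LDAP does, could look roughly like this minimal sketch (`AttrName` is an illustrative name, not FreeIPA's actual class):

```python
# Sketch: an attribute-name string that compares and hashes
# case-insensitively, so dict lookups and equality "just work"
# with standard Python operators, as the requirement describes.
class AttrName(str):
    def __eq__(self, other):
        return self.lower() == str(other).lower()

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        # Must be consistent with __eq__: equal names hash equally.
        return hash(self.lower())

entry = {AttrName('objectClass'): ['top', 'person']}
assert AttrName('OBJECTCLASS') == AttrName('objectclass')
assert entry[AttrName('objectclass')] == ['top', 'person']
```

A real implementation would also need matching-rule-aware value classes, since each LDAP syntax defines its own equality semantics.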




Re: [Freeipa-devel] Command instantiation

2013-01-14 Thread John Dennis

On 01/14/2013 11:06 AM, Petr Viktorin wrote:


IPA Command objects sometimes need to pass some data between their
various methods. Currently that's done using the thread-local context.
For an example see dnsrecord_del, which sets a del_all flag in
pre_callback and then checks it in execute.

While that works for now, it's far from best practice. For example, if
some Command can call another Command, we need to carefully check that
the global data isn't overwritten.


The other way data is passed around is arguments. The callback methods
take a bunch of arguments that are the same for a particular Command
invocation (ldap, entry_attrs, *keys, **options). By now, there's no
hope of adding a new one, since all the callbacks would need to be
rewritten. (Well, one could add an artificial option, but that's clearly
not a good solution.)
In OOP, this problem is usually solved by putting the data in an object
and passing that around. Or better, putting it in the object the methods
are called on.

This got me thinking -- why do we not make a new Command instance for
every command invocation? Currently Command objects are only created
once and added to the api, so they can't be used to store per-command data.
It seems that having `api.Command.user_add` create a new instance would
work better for us. (Of course it's not the best syntax for creating a
new object, but having to change all the calling code would be too
disruptive).
What do you think?



Just a few thoughts, no answers ... :-)

I agree with you that using thread local context blocks to pass 
cooperating data is indicative of a design flaw elsewhere.


See the discussion from a couple of days ago concerning api locking and 
making our system robust against foreign plugins. As I understand it 
that design prohibits modifying the singleton command objects. Under 
your proposal how would we maintain that protection? The fact the 
commands are singletons is less of an issue than the fact they lock 
themselves when the api is finalized. If the commands instead acted as a 
factory and produced new instances wouldn't they also have to observe 
the same locking rules thus defeating the value of instantiating copies 
of the command?


I think the entire design of the plugin system has at its heart 
non-modifiable singleton command objects which do not carry state.


FWIW, I was never convinced the trade-off between protecting our API and 
being able to make smart coding choices (such as your suggestion) struck 
the right balance.


Going back to one of your suggestions of passing a context block as a 
parameter. Our method signatures do not currently support that as you 
observe. But given that, by conscious design, a thread only executes 
one *top-level* command at a time and then clears its state, the 
thread-local context block is effectively playing almost exactly the 
same role as if it were passed as a parameter.
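That pattern, a thread-local context populated for one top-level command and cleared afterwards, can be sketched as below; all names (`run_top_level_command`, the `del_all` flag echoing the dnsrecord_del example) are illustrative, not FreeIPA's actual code:

```python
# Sketch: thread-local scratch space that behaves like an implicitly
# passed parameter, created per top-level command and cleared at the end.
import threading

context = threading.local()

def run_top_level_command(command):
    context.state = {}          # fresh per-invocation scratch space
    try:
        return command()
    finally:
        del context.state       # cleared when the command finishes

def pre_callback():
    context.state['del_all'] = True   # e.g. dnsrecord_del's flag

def execute():
    return context.state.get('del_all', False)

def command():
    pre_callback()
    return execute()

assert run_top_level_command(command) is True
assert not hasattr(context, 'state')  # no state leaks across commands
```

The fragility discussed in the thread shows up exactly here: if `command` invoked another command, the shared `context.state` could be silently overwritten.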






Re: [Freeipa-devel] Command instantiation

2013-01-14 Thread John Dennis

On 01/14/2013 01:17 PM, Rob Crittenden wrote:

Petr Viktorin wrote:



As for clearing the state, you already can't rely on that: the batch
plugin doesn't do it.



Yes, speaking of bolted on, that defines the batch plugin pretty well.
It should be fairly straightforward to clear the state between
executions though (or at least parts of it, there may be some
batch-specific things we'd want to maintain).


I think the problem is there are different groups of data being 
maintained in the context and we don't separate them into their logical 
domains. There is connection information that persists across all 
commands in the batch. Then there is the per command information 
specific to each command in the batch. Each should have its own context.


But as Petr3 points out, the per-thread context state is kind of a junk 
box into which we throw, in an undisciplined manner, all the things we 
can't fit into a best-practices programming model.


Some of this is the fault of the framework design and the priorities 
that drove it. But not all of this can be laid at the feet of our 
framework. Some of it is due to the inflexibility of the core Python 
modules we use and their poor design. A good example is the fact the 
request URL is unavailable when processing a HTTP response in one of the 
libraries we're using; that's just bad design, and because we use that 
library we have to live with it, hence sticking the request URL into the 
thread-local context. There are numerous examples of things like this, 
most are expedient workarounds to bad code design (some under our 
control, some not).





Re: [Freeipa-devel] Command instantiation

2013-01-14 Thread John Dennis

On 01/14/2013 12:56 PM, Jan Cholasta wrote:

On 14.1.2013 18:50, Petr Viktorin wrote:

Ah, yes, you've discovered my ultimate goal: rewriting the whole
framework :)


It would seem we share the same ultimate goal, sir! :-)


Well it's reassuring I'm not alone in my frustration with elements of 
the framework. I thought it was just me :-)


I have one other general complaint about the framework:

Too much magic!

What do I mean by magic? Things which spring into existence at run time 
for which there is no static definition. I've spent (wasted) significant 
amounts of time trying to figure out how something gets instantiated and 
initialized. These things don't exist in the static code, you can't 
search for them because they are synthetic. You can see these things 
being referenced but you'll never find a class definition or __init__() 
method or assignment to a specific object attribute. It's all very very 
clever but at the same time very obscure. If you just use the framework 
in a cookie-cutter fashion this has probably never bothered you, but if 
you have to modify what the framework provides it can be difficult.
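The kind of run-time "magic" described above can be illustrated with a toy example; `NameSpace` and `user_add` here are hypothetical, meant only to show how an attribute can exist with no static definition to search for:

```python
# Illustration of synthetic attributes: 'user_add' below never appears as
# a class attribute or an assignment anywhere, so grepping the code for
# its definition finds nothing; it springs into existence at lookup time.
class NameSpace:
    def __init__(self, members):
        self._members = dict(members)

    def __getattr__(self, name):
        # Only called for attributes not found normally.
        try:
            return self._members[name]
        except KeyError:
            raise AttributeError(name)

api = NameSpace({'user_add': lambda **kw: ('added', kw)})
assert api.user_add(uid='alice') == ('added', {'uid': 'alice'})
```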


But I don't want to carp on the framework too much without giving credit 
to Jason first. His mandate was to produce a pluggable framework that 
was robust, extensible, and supported easy plugin authorship. Jason was 
dedicated, almost maniacal in his attention to detail and best 
practices. He also had to design most of this himself (the rest of the 
team was heads down on other things at the time). It has mostly stood 
the test of time. It's pretty hard to anticipate the pain points, that's 
something only experience with the system can give you down the road, which 
is where we find ourselves now.




Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread John Dennis

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes  WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API 
or a single point where LDAP data flows through. It might mean having 
more than one initialization method or subclassing.





Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread John Dennis

On 01/11/2013 09:55 AM, Rob Crittenden wrote:

John Dennis wrote:

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API
or a single point where LDAP data flows through. It might mean having
more than one initialization method or subclassing.




Right. We may need to decouple from api a bit. I haven't looked at this
for a while but one of the problems is that api locks its values after
finalization which can make things a bit inflexible. We use some nasty
override code in some place but it isn't something I'd want to see
spread further.


Ah, object locking, yes I've been bitten by that too. I'm not sure I 
recall having problems with locked ldap objects but I've certainly 
been frustrated with trying to modify other objects which were locked at 
api creation time.


I wonder if the object locking Jason introduced at the early stages of 
our development is an example of a good idea that's not wonderful in 
practice. You either have to find the exact moment where an object gets 
created and update it then, which is sometimes awkward or, worse, 
impossible, or you have to resort to overriding it with the setattr() big 
hammer. Judging by our use of setattr it's obvious there are numerous 
places we need to modify a locked object.


It's not clear to me if the issues with modifying a locked object are 
indicative of problems with the locking concept or bad code structure 
which forces us to violate the locking concept.





Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread John Dennis

On 01/11/2013 10:04 AM, Petr Viktorin wrote:

Hello list,
This discussion was started in private; I'll continue it here.

On 01/10/2013 05:41 PM, John Dennis wrote:

On 01/10/2013 04:27 AM, Petr Viktorin wrote:

On 01/09/2013 03:55 PM, John Dennis wrote:



And I could work on improving the i18n/translations infrastructure,
starting by writing up a RFE+design.



Could you elaborate as to what you perceive as the current problems and
what this work would address?



Here are my notes:



- Use fake translations for tests


We already do (but perhaps not sufficiently).


I mean use it in *all* tests, to ensure all the right things are
translated and weird characters are handled well.
See https://www.redhat.com/archives/freeipa-devel/2012-October/msg00278.html


Ah yes, I like the idea of a test domain for strings, this is a good 
idea. Not only would it exercise our i18n code more but it could 
insulate the tests from string changes (the test would look for a 
canonical string in the test domain).
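
For illustration, a minimal pseudo-translation of the kind such a test
domain could serve up (a sketch, not FreeIPA's actual test code; the
marker characters are arbitrary):

```python
def pseudo_translate(msgid):
    # Wrap the source string in visible non-ASCII markers so any string
    # that bypasses the i18n machinery, or any code path that mangles
    # non-ASCII characters, stands out immediately in test output.
    return u'\u00bb%s\u00ab' % msgid

# pseudo_translate(u'Add a new user.') -> u'\xbbAdd a new user.\xab'
```

A test would then assert it sees the marked-up canonical string, proving
the text actually went through the translation lookup.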





- Split up huge strings so the entire text doesn't have to be
retranslated each time something changes/is added


Good idea. But one question I have is should we be optimizing for our
programmers time or the translators time? The Transifex tool should make
available to translators similar existing translations (in fact it
might, I seem to recall some functionality in this area). Wouldn't it be
better to address this issue in Transifex where all projects would benefit?

Also the exact same functionality is needed to support release versions.
The strings between releases are often close but not identical. The
Transifex tool should make available a close match from a previous
version to the translator working on a new version (or vice versa). See
your issue below concerning versions.

IMHO this is a Transifex issue which needs to be solved there, not
something we should be investing precious IPA programmers time on. Plus
if it's solved in Transifex it's a *huge* win for *everyone*, not just IPA.


Huh? Splitting the strings provides additional information
(paragraph/context boundaries) that Transifex can't get otherwise. From
what I hear it's a pretty standard technique when working with gettext.


I'm not sure how splitting text into smaller units gives more context 
but I can see the argument for each msgid being a logical paragraph. We 
don't have too many multi-paragraph strings now so it shouldn't be too 
involved.
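
As a sketch of the splitting idea (the helper name is made up for
illustration), each logical paragraph of a multi-paragraph string would
become its own msgid:

```python
def split_into_msgids(doc):
    # One msgid per logical paragraph (blank-line separated): a change
    # in one paragraph then invalidates only that paragraph's
    # translation instead of the whole multi-paragraph text.
    return [p.strip() for p in doc.split('\n\n') if p.strip()]

doc = "First paragraph.\n\nSecond paragraph,\ncontinued."
# split_into_msgids(doc)
#   -> ['First paragraph.', 'Second paragraph,\ncontinued.']
```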




For typos, gettext has the fuzzy functionality that we explicitly turn
off. I think we're on our own here.


Be very afraid of turning on fuzzy matching. Before we moved to TX we 
used the entire gnu tool chain. I discovered a number of our PO files 
were horribly corrupted. With a lot of work I traced this down to fuzzy 
matches. If memory serves me right here is what happened.


When a msgstr was absent a fuzzy match was performed and inserted as a 
candidate msgstr. Somehow the fuzzy candidates got accepted as actual 
msgstr's. I'm not sure if we ever figured out how this happened. The two 
most likely explanations were 1) a known bug in TX that stripped the 
fuzzy flag off the msgstr or 2) a translator who blindly accepted all 
TX suggestions. (A suggestion in TX comes from a fuzzy match).


But the real problem is the fuzzy matching is horribly bad. Most of the 
fuzzy suggestions (primarily on short strings) were wildly incorrect.


I had to go back to a number of PO files and manually locate all fuzzy 
suggestions that had been promoted to legitimate msgstr's. A tedious 
process I hope to never repeat.


BTW, if memory serves me correctly the fuzzy suggestions got into the PO 
files in the first place because we were running the full gnu tool chain 
(sorry off the top of my head I don't recall exactly which component 
inserts the fuzzy suggestion), but I think we've since turned that off, 
for a very good reason.






- Keep a history/repo of the translations, since Transifex only stores
the latest version


We already do keep a history, it's in git.


It's not updated often enough. If I mess something up before a release
and Transifex gets wiped, or if a rogue translator deletes some
translations, the work is gone.


Yes, updating more frequently is an excellent goal.




- Update the source strings on Transifex more often (ideally as soon as
patches are pushed)


Yes, great idea, this would be really useful and is necessary.


- Break Git dependencies: make it possible generate the POT in an
unpacked tarball


Are you talking about the fact our scripts invoke git to determine what
files to process? If so then yes, this would be a good dependency to get
rid of. However it does mean we somehow have to maintain a manifest list
of some sort somewhere.


A directory listing is fine IMO. We use it for more critical things,
like loading plugins, without any trouble.
Also, when run in a Git repo the Makefile can compare the file list with
what Git says and warn accordingly.
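
A hedged sketch of what replacing the Git call with a plain directory
listing might look like (the filtering rules here are assumptions for
illustration, not the project's actual manifest logic):

```python
import os

def translatable_sources(root):
    # Walk the unpacked tarball instead of asking `git ls-files`,
    # so POT generation works outside a Git checkout.
    sources = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip VCS metadata if the tree happens to be a checkout.
        dirnames[:] = [d for d in dirnames if d != '.git']
        for name in filenames:
            if name.endswith('.py'):
                sources.append(os.path.join(dirpath, name))
    return sorted(sources)
```

When run inside a Git repo, the Makefile could diff this listing against
`git ls-files` output and warn about discrepancies, as suggested above.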


How do you

Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread John Dennis

On 01/11/2013 02:44 PM, Jérôme Fenal wrote:

2013/1/11 John Dennis jden...@redhat.com mailto:jden...@redhat.com


Thank you Jérôme for your insights as a translator. We have a lop-sided 
perspective mostly from the developer point of view. We need to better 
understand the translator's perspective.



I'm not sure how splitting text into smaller units gives more
context but I can see the argument for each msgid being a logical
paragraph. We don't have too many multi-paragraph strings now so it
shouldn't be too involved.


One issue also discussed on this list is the problem of 100+ line
strings in man pages generated from __doc__ tags in scripts.
Those are a _real_ pain for translators to maintain when only one line
is changed.


I still think TX should attempt to match the msgid from a previous pot 
with an updated pot and show the *word* differences between the strings 
along with an edit window for the original translation. That would be so 
useful to translators I can't believe TX does not have that feature. All 
you would have to do is make a few trivial edits in the translation and 
save it.


But heck, I'm not a translator and I haven't used the translator's part 
of the TX tool much other than to explore how it works (and that was a 
while ago).




I'd see a few remarks here:
- this massive .po file would grow wildly, especially when a typo is
corrected in huge strings (__doc__), when additional sentences are
added to those, etc.
- breaking down bigger strings in smaller ones will certainly help here
in avoiding duplicated content,



- in Transifex, it is easy to upload a .po onto another branch, and only
untranslated matching strings would be updated. I used it on anaconda
where there are multiple branches between Fedora & RHEL5/6 & master,
that worked easily without breaking anything.


When you say "easy to upload a .po onto another branch" I assume you don't 
mean branch (TX has no such concept) but rather another TX resource. 
Anyway this is good to know; perhaps the way TX handles versions is not 
half as bad as it would appear.



Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread John Dennis

On 01/11/2013 04:00 PM, Jérôme Fenal wrote:

When you say easy to upload a .po onto another branch I assume you
don't mean branch (TX has no such concept) but rather another TX
resource. Anyway this is good to know, perhaps the way TX handles
versions is not half as bad as it would appear.


You're right. See how anaconda is organized, for instance:
https://fedora.transifex.com/projects/p/fedora/language/en/?project=2059


We follow the same model as anaconda, a new TX resource per version.


Re: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework

2013-01-03 Thread John Dennis

On 01/03/2013 08:00 AM, Petr Viktorin wrote:

Hello,

The first patch implements logging-related changes to the admintool
framework and ipa-ldap-updater (the only thing ported to it so far).
The design document is at http://freeipa.org/page/V3/Logging_and_output

John, I decided to go ahead and put an explicit logger attribute on
the tool class rather than adding debug, info, warn, etc. methods
dynamically using log_mgr.get_logger. I believe it's the cleanest solution.
We had a discussion about this in this thread:
https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; I
didn't get a reaction to my conclusion so I'm letting you know in case
you have more to say.


I'm fine with not directly adding the debug, info, warn etc. methods, 
that practice was historical dating back to the days of Jason. However I 
do think it's useful to use a named logger and not the global 
root_logger. I'd prefer we got away from using the root_logger; its 
continued existence is historical as well, and the idea was over time we 
would slowly eliminate its usage. FWIW the log_mgr.get_logger() is 
still useful for what you want to do.


def get_logger(self, who, bind_logger_names=False)

If you don't set bind_logger_names to True (and pass the class instance 
as who) you won't get the offensive debug, info, etc. methods added to 
the class instance. But it still does all the other bookkeeping.


The 'who' in this instance could be either the name of the admin tool or 
the class instance.


Also I'd prefer using the attribute 'log' rather than 'logger'. That 
would make it consistent with code which already uses get_logger() 
passing a class instance, because it adds a 'log' attribute which is 
the logger. Also 'log' is twice as succinct as 'logger' (shorter line 
lengths).


Thus if you do:

  log_mgr.get_logger(self)

I think you'll get exactly what you want. A logger named for the class 
and being able to say


  self.log.debug()
  self.log.error()

inside the class.

In summary, just drop the True from the get_logger() call.
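
log_mgr is FreeIPA's internal log manager, but the pattern being
described maps directly onto the stdlib; a rough equivalent (not the
actual FreeIPA code, class name hypothetical) would be:

```python
import logging

class ReplicaPrepare(object):
    def __init__(self):
        # A named logger per tool class, analogous to
        # log_mgr.get_logger(self) without bind_logger_names: log
        # records carry the class name, and 'self.log' stays short.
        self.log = logging.getLogger(type(self).__name__)

    def run(self):
        self.log.debug('starting %s', type(self).__name__)
```

The class then calls self.log.debug()/self.log.error() internally, and
the emitted records are attributable to the tool by logger name.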





The second patch ports ipa-replica-prepare to the framework. (I chose
ipa-replica-prepare because there was a bug filed against its error
handling, something the framework should take care of.)
As far as Git can tell, it's a complete rewrite, so it might be hard to
do a review. I have several smaller patches that make it easier to see
what gets moved where. Please say I'm wrong, but as I understand, broken
commits aren't allowed in the FreeIPA repo so I can only present the
squashed patch for review.
To get the smaller commits, do `git fetch
git://github.com/encukou/freeipa.git
replica-prepare:pviktori-replica-prepare`.

Part of: https://fedorahosted.org/freeipa/ticket/2652
Fixes: https://fedorahosted.org/freeipa/ticket/3285






[Freeipa-devel] [PATCH 83] Cookie Expires date should be locale insensitive

2012-12-19 Thread John Dennis


From 49213139d31fe402c616ec468d2b593685232ef9 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Wed, 19 Dec 2012 12:47:46 -0500
Subject: [PATCH 83] Cookie Expires date should be locale insensitive
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

The Expires attribute in a cookie is supposed to follow the RFC 822
(superseded by RFC 1123) date format. That format includes a weekday
abbreviation (e.g. Tue) which must be in English according to the
RFC's.

ipapython/cookie.py has methods to parse and format the Expires
attribute but they were based on strptime() and strftime() which
respects the locale. If a non-English locale is in effect the wrong
date string will be produced and/or it won't be able to parse the date
string.

The fix is to use the date parsing and formatting functions from
email.utils which specifically follow the RFC's and are not locale
sensitive.
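
The behavior the commit message describes can be seen directly with
email.utils (shown here against Python's current API; the patch below
targets the Python 2 module of the same name):

```python
import calendar
import datetime
import email.utils

# Formatting: formatdate() always emits English day/month
# abbreviations regardless of the process locale, unlike
# strftime('%a, %d %b %Y %H:%M:%S GMT').
dt = datetime.datetime(2012, 12, 19, 12, 47, 46)
ts = calendar.timegm(dt.utctimetuple())
s = email.utils.formatdate(ts, usegmt=True)
# s == 'Wed, 19 Dec 2012 12:47:46 GMT'

# Parsing: parsedate() returns a time 9-tuple; the first six fields
# are enough to rebuild a naive UTC datetime.
parsed = datetime.datetime(*email.utils.parsedate(s)[0:6])
# parsed == dt
```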

This patch also updates the unit test to use email.utils as well.

The patch should be applied to the following branches:

master, 3.0, 3.1

Ticket: https://fedorahosted.org/freeipa/ticket/3313
---
 ipapython/cookie.py | 42 ++---
 tests/test_ipapython/test_cookie.py | 15 +++--
 2 files changed, 20 insertions(+), 37 deletions(-)

diff --git a/ipapython/cookie.py b/ipapython/cookie.py
index b45cb2b..bf551b5 100644
--- a/ipapython/cookie.py
+++ b/ipapython/cookie.py
@@ -20,6 +20,7 @@
 import re
 import time
 import datetime
+import email.utils
 from urllib2 import urlparse
 from calendar import timegm
 from ipapython.ipa_log_manager import log_mgr
@@ -172,47 +173,26 @@ class Cookie(object):
 if utcoffset is not None and utcoffset.total_seconds() != 0.0:
 raise ValueError(timezone is not UTC)
 
-# At this point we've validated as much as possible the
-# timezone is UTC or GMT but we can't use the %Z timezone
-# format specifier because the timezone in the string must be
-# 'GMT', not something equivalent to GMT, so hardcode the GMT
-# timezone string into the format.
+# Do not use strftime because it respects the locale, instead
+# use the RFC 1123 formatting function which uses only English
 
-return datetime.datetime.strftime(dt, '%a, %d %b %Y %H:%M:%S GMT')
+return email.utils.formatdate(cls.datetime_to_time(dt), usegmt=True)
 
 @classmethod
 def parse_datetime(cls, s):
 '''
-Parse a RFC 822, RFC 1123 date string, return a datetime aware object in UTC.
-Accommodates some non-standard formats found in the wild.
+Parse a RFC 822, RFC 1123 date string, return a datetime naive object in UTC.
 '''
 
-formats = ['%a, %d %b %Y %H:%M:%S',
-   '%a, %d-%b-%Y %H:%M:%S',
-   '%a, %d-%b-%y %H:%M:%S',
-   '%a, %d %b %y %H:%M:%S',
-   ]
 s = s.strip()
 
-# strptime does not read the time zone and generate a tzinfo
-# object to insert in the datetime object so there is little point
-# in specifying a %Z format, instead verify GMT is specified and
-# generate the datetime object as if it were UTC.
+# Do not use strptime because it respects the locale, instead
+# use the RFC 1123 parsing function which uses only English
 
-if not s.endswith(' GMT'):
-raise ValueError(http date string '%s' does not end with GMT time zone % s)
-s = s[:-4]
-
-dt = None
-for format in formats:
-try:
-dt = datetime.datetime(*(time.strptime(s, format)[0:6]))
-break
-except Exception:
-continue
-
-if dt is None:
-raise ValueError(unable to parse expires datetime '%s' % s)
+try:
+dt = datetime.datetime(*email.utils.parsedate(s)[0:6])
+except Exception, e:
+raise ValueError(unable to parse expires datetime '%s': %s % (s, e))
 
 return dt
 
diff --git a/tests/test_ipapython/test_cookie.py b/tests/test_ipapython/test_cookie.py
index f8c5daf..b8a2d36 100644
--- a/tests/test_ipapython/test_cookie.py
+++ b/tests/test_ipapython/test_cookie.py
@@ -20,6 +20,7 @@
 import unittest
 import time
 import datetime
+import email.utils
 import calendar
 from ipapython.cookie import Cookie
 
@@ -129,15 +130,16 @@ class TestExpires(unittest.TestCase):
 # Force microseconds to zero because cookie timestamps only have second resolution
 self.now = datetime.datetime.utcnow().replace(microsecond=0)
 self.now_timestamp = calendar.timegm(self.now.utctimetuple())
-self.now_string = datetime.datetime.strftime(self.now, '%a, %d %b %Y %H:%M:%S GMT')
+self.now_string = email.utils.formatdate(self.now_timestamp, usegmt=True

Re: [Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-12-10 Thread John Dennis

On 12/10/2012 07:30 AM, Petr Viktorin wrote:

Just two issues:

When testing with lite-server listening on localhost, every request
outputs "ipa: ERROR: not sending session cookie, URL mismatch". Is the
message necessary?


Rob asked for this to be changed from a debug message to an error, which 
made sense: in theory we should never get into that situation, and if we 
do something is terribly wrong. However neither of us thought about the 
lite-server case. There are two possible ways to address this.


1) test for the lite server context and don't emit the message. We test 
for lite server elsewhere and treat things differently. But I'm not a 
big fan of this approach, it's a way for mistakes to creep in because 
we're not exercising the same code paths during testing as we do during 
production.


2) Make the domain in the cookie match the domain of the lite-server. 
Currently we read the domain from api.env.host (technically it's the URL 
host). Perhaps there should be a utility to return the URL host 
component for those places that need it which detects which mode the 
server is running in. I'll take a quick look and see if that makes sense.




Replying to a previous mail:

   diff --git a/ipalib/session.py b/ipalib/session.py
   index 36beece..900259a 100644
   --- a/ipalib/session.py
   +++ b/ipalib/session.py
   @@ -955,13 +955,18 @@ class MemcacheSessionManager(SessionManager):
[...]
   +try:
   +session_cookie =
   Cookie.get_named_cookie_from_string(cookie_header,
   self.session_cookie_name)
   +except Exception, e:
   +session_cookie = None
   +else:
   +session_id = session_cookie.value
  
   When the user first accesses the Web UI, session_cookie will be None,
   resulting in an Internal Server Error.


Ah yes I see the mistake now, I was thinking it raised an exception 
instead of returning None. Good catch, thanks!




Re: [Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-12-10 Thread John Dennis

On 12/10/2012 09:00 AM, John Dennis wrote:

On 12/10/2012 07:30 AM, Petr Viktorin wrote:

Just two issues:

When testing with lite-server listening on localhost, every request
outputs ipa: ERROR: not sending session cookie, URL mismatch. Is the
message necessary?


Rob asked for this to be changed from a debug message to an error which
made sense, in theory we should never get into the situation, if we do
something is terribly wrong. However neither of us thought about the
lite-server case. There are two possible ways to address this.

1) test for the lite server context and don't emit the message. We test
for lite server elsewhere and treat things differently. But I'm not a
big fan of this approach, it's a way for mistakes to creep in because
we're not exercising the same code paths during testing as we do during
production.

2) Make the domain in the cookie match the domain of the lite-server.
Currently we read the domain from api.env.host (technically it's the URL
host). Perhaps there should be a utility to return the URL host
component for those places that need it which detects which mode the
server is running in. I'll take a quick look and see if that makes sense.


Much simpler solution, using api.env.host was the wrong source of the 
server domain, it should have been the host value in api.env.xmlrpc_uri 
instead. Changing it to reference xmlrpc_uri fixes the problem without 
any other changes and is the value we should have been using.
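
In other words, something along these lines (a sketch using the modern
urllib.parse; the actual code reads the host out of
api.env.xmlrpc_uri, and the example URI is hypothetical):

```python
from urllib.parse import urlparse

def cookie_url_host(xmlrpc_uri):
    # Derive the cookie's domain from the URI the client actually
    # connects to, so the lite-server case (listening on localhost)
    # matches without special-casing.
    return urlparse(xmlrpc_uri).hostname

# cookie_url_host('https://ipa.example.com/ipa/session/xml')
#   -> 'ipa.example.com'
```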


Patch will follow shortly after some more testing.




Re: [Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-12-08 Thread John Dennis

On 12/07/2012 06:19 PM, Simo Sorce wrote:

On Fri, 2012-12-07 at 16:21 -0500, John Dennis wrote:



I'll send a revised patch with the above mentioned fixes once someone
else puts their eyeballs on the RFC, or maybe we should just remove the
check for the time being.


I think that the algorithm fails to follow the RFC when you do:
   elif url_path.endswith('/'):
   request_path = url_path[:-1]

Point 4 of the RFC doesn't say the path needs to end with a / it says
you need to take everything before the last / wherever it is.

I.e. if the path is /ipa/ui/foo then the path for the cookie is /ipa/ui.
Conversely if the path is /ipa/ui/foo/ the path is /ipa/ui/foo.

Basically these rules treat the last 'leaf' component as not part of
the path and are meant to remove it.


Thank you, yes you're correct. The fundamental misconception, one I've 
stumbled on in the past as have many others, is the significance 
of a trailing slash in a URL path component. The trailing slash is 
quite significant, but many of us get lulled into believing it's not 
because of the common HTTP server behavior of performing a redirect on a 
path without a trailing slash to a directory of the same name. The URLs 
"http://example.com/foo" and "http://example.com/foo/" are *not* the 
same URL. The path in a URL is considered a directory if and only if it 
ends with a trailing slash.


RFC 6265 in Section 4.1.2.4. The Path Attribute clearly states the 
matching is performed on *directory* components. Without a trailing 
slash the leaf component is not a directory and hence must be stripped.


Rob, the above is the answer to your question (and mine). "/ipa" and 
"/ipa/" are *not* the same, and "/ipa" will not match "/ipa" as a cookie 
path component because "/ipa" is not a directory; the directory is "/". 
FWIW the cookie's path attribute is defined to be a directory path and 
does not require the trailing slash (or so I believe).
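
Simo's reading of RFC 6265's default-path computation (section 5.1.4)
can be sketched as:

```python
def default_cookie_path(url_path):
    # RFC 6265, 5.1.4: if the uri-path is empty, does not begin with
    # "/", or contains no "/" beyond the first character, the default
    # path is "/"; otherwise it is everything up to, but not
    # including, the rightmost "/".
    if not url_path.startswith('/') or url_path.count('/') == 1:
        return '/'
    return url_path[:url_path.rindex('/')]

# default_cookie_path('/ipa/ui/foo')  -> '/ipa/ui'
# default_cookie_path('/ipa/ui/foo/') -> '/ipa/ui/foo'
# default_cookie_path('/ipa')         -> '/'
```

This reproduces the two examples above: the leaf component is dropped
whether or not there is a trailing slash.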


I'll send an updated patch shortly with the above fix. I also noticed 
that http_return_ok() omitted the validation for the HttpOnly and Secure 
flags I'll add that too.







Re: [Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-12-07 Thread John Dennis

On 11/12/2012 12:39 PM, Petr Viktorin wrote:

On 11/11/2012 11:18 PM, John Dennis wrote:


In the future please do not quote the entire patch in the email. It 
makes it much too difficult to respond. It took me a long time to remove 
all the unnecessary text for this reply.


New patch following in a moment, as well as the response to your second 
email.



Cookie library issues:
I decided to solve the cookie library problems by writing a minimal
cookie library that does what we need and no more than that. It is a
new module in ipapython shared by both client and server and comes
with a new unit test. The module has plenty of documentation, no need
to repeat it here.


This sucks. Designing, implementing and maintaining our own version of
general code is never good. I'd like to discuss with the other
developers so everyone knows what's happening in this patch.

Also, API design is a hard thing to do. I'd prefer if we made a subset
of an existing API (especially if there's any hope of the standard
library being fixed) rather than invented a new one.


Yes it does suck, no one was more dismayed about this than
myself. Here is bug references and expanded justification.

The inability of Cookie.py to parse an Expires attribute was opened
6/10/2008, approximately 4.5 years ago.

http://bugs.python.org/issue3073

To this day it remains open and unfixed. I discovered the bug report
when I also ran afoul of the failed behavior. It was shortly
thereafter I also discovered Cookie.py could not parse the HttpOnly
and Secure boolean flags. There are only 7 defined items in a cookie
and Cookie.py fails to parse 3 of them, close to a 50% failure
rate. The bug has stayed open 4.5 years without a fix making into the
upstream distribution. If we were dependent on getting Cookie.py fixed
we would have to make the core Python packages in RHEL 5, RHEL 6, RHEL
7, Fedora 16, Fedora 17, and Fedora 18 were patched. Tracking and
making sure the fix gets into all those distributions is a signifcant
effort, but at least we own those distributions and have some control
over them. But wait! We also expect the IPA clients to install and
successfully run on other systems as well, Debian, Ubuntu, Solaris,
etc. Who is going to make sure Cookie.py has been patched on all the
releases of all those systems? Frankly I have no confidence the
various bugs in Cookie.py will be taken seriously by upstream after
it's languished 4.5 years without a fix making it into an upstream
release. Nor is it just these 3 parse bugs. I can demonstrate many
other ways Cookie.py fails in both cookie generation and cookie
parsing.

The amount of work needed to assure patched versions of
Cookie.py was available in all the systems we hope to deploy the IPA
client on would be huge. Compare that to spending less than a day
writing a new module and unit test that works and installs with our
client. To my thinking it's a no-brainer which is the better choice
for the team.

But there is another issue with Cookie.py. Its API is horrible: it's
difficult to use and important features for working with cookies are
completely absent. That's just at the API level; its actual
implementation is frighteningly bad. You ask, why don't we at least make
the new module API compatible so we can swap it out later when
upstream finally gets around to fixing the problems. Emulating a
terrible API and then having to extend it into something usable
doesn't make sense to me, nor does it seem a good use of team
resources nor beneficial to the IPA product.

The bug I opened about the other issues in Cookie.py is:

http://bugs.python.org/issue16611


Please add the following paragraph to the cookie.py docstring, so people
know our implementation is incomplete:


It doesn't have every bell and whistle nor can it
handle all the cookie scenarios found in the wild with random servers
and browsers, it does what we need in an RFC manner (but not all RFC
features).


The implementation is complete because it follows the RFC. I was perhaps 
being too cautious. A lot of the extra code in the other libraries 
exists to accommodate behavior prior to standardization. Those days are 
hopefully long behind us. Plus our use of cookies is very constrained; 
we're only going to connect to a server we control. I'm not really 
worried at this juncture about historical misbehavior. I'm pretty sure 
the implementation will catch non-RFC compliant behavior and reject it, 
which is probably the right thing to do.


The only thing I think might be missing is automatically escaping and 
unescaping reserved characters. But the RFC is actually mum on this 
topic. It defines a legal character set which the parsing code observes. 
The library does not currently validate that all values being assigned to 
a cookie are free of invalid characters. It's currently up to the caller 
to supply valid URL components for the domain and path components which 
might be subject to escaping with HTML entities. Yes, that would be a 
good thing
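
For that kind of validation, the RFC 6265 grammar is concrete enough to
sketch (the regex below is the cookie-octet set from the RFC; how it
would be hooked into the library is an assumption):

```python
import re

# RFC 6265 cookie-octet: %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E,
# i.e. printable US-ASCII minus control characters, whitespace,
# DQUOTE, comma, semicolon and backslash.
_COOKIE_VALUE_RE = re.compile(
    r'^[\x21\x23-\x2B\x2D-\x3A\x3C-\x5B\x5D-\x7E]*$')

def is_valid_cookie_value(value):
    # True if every character of the value is a legal cookie-octet.
    return _COOKIE_VALUE_RE.match(value) is not None

# is_valid_cookie_value('abc123') -> True
# is_valid_cookie_value('a b')    -> False  (space not allowed)
# is_valid_cookie_value('a;b')    -> False  (semicolon not allowed)
```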

Re: [Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-12-07 Thread John Dennis

On 11/13/2012 07:39 AM, Petr Viktorin wrote:

Continuing from yesterday. I tried building the RPMs, installing a
server, running the tests, and checking the Web UI. Each of these steps
failed.


On 11/11/2012 11:18 PM, John Dennis wrote:

Note: This has been tested with both the command line api and the
browser on both Fedora and RHEL-6. It has also been tested to make sure
any cookies stored before an upgrade will work correctly.


freeipa-jdennis-0082-Compliant-client-side-session-cookie-behavior.patch


From 089d69a1e06636bbd2836fcb9072b5a2ffef7ae2 Mon Sep 17 00:00:00 2001
From: John Dennisjden...@redhat.com
Date: Sun, 11 Nov 2012 17:05:32 -0500
Subject: [PATCH 82] Compliant client side session cookie behavior
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit


[...]


Ticket https://fedorahosted.org/freeipa/ticket/3022
---
   ipalib/rpc.py   | 224 +
   ipalib/session.py   |  39 +--
   ipapython/cookie.py | 486 

   ipaserver/rpcserver.py  |   6 +-
   tests/test_ipapython/test_cookie.py | 332 
   5 files changed, 1017 insertions(+), 70 deletions(-)
   create mode 100644 ipapython/cookie.py
   create mode 100644 tests/test_ipapython/test_cookie.py

diff --git a/ipalib/rpc.py b/ipalib/rpc.py
index c555105..b2021c1 100644
--- a/ipalib/rpc.py
+++ b/ipalib/rpc.py
@@ -61,7 +64,8 @@ from ipalib.krb_utils import KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN, 
KRB5KRB_AP_ERR_TKT
KRB5_FCC_PERM, KRB5_FCC_NOFILE, KRB5_CC_FORMAT, 
KRB5_REALM_CANT_RESOLVE
   from ipapython.dn import DN

-COOKIE_NAME = 'ipa_session_cookie:%s'
+COOKIE_NAME = 'ipa_session'


COOKIE_NAME is used in ipa-client-install, which promptly fails when it
does `kernel_keyring.del_key(COOKIE_NAME % host_principal)`.
(There should probably be a helper function to encapsulate this call.)


Fixed. The whole keyring storage mechanism should never have been 
exposed in the first place. Instead it should have been encapsulated in 
some other functions that hide the implementation and instead export a 
logical abstraction. rpc.py now exports these functions:


update_persistent_client_session_data(principal, data)
read_persistent_client_session_data(principal)
delete_persistent_client_session_data(principal)




+KEYRING_COOKIE_NAME = '%s_cookie:%%s' % COOKIE_NAME

   def xml_wrap(value):
   



diff --git a/ipalib/session.py b/ipalib/session.py
index 36beece..900259a 100644
--- a/ipalib/session.py
+++ b/ipalib/session.py
@@ -955,13 +955,18 @@ class MemcacheSessionManager(SessionManager):
 Session id as string or None if not found.
   '''
   session_id = None
-if cookie_header is not None:
-cookie = Cookie.SimpleCookie()
-cookie.load(cookie_header)
-session_cookie = cookie.get(self.session_cookie_name)
-if session_cookie is not None:
-session_id = session_cookie.value
-self.debug('found session cookie_id = %s', session_id)
+try:
+session_cookie = Cookie.get_named_cookie_from_string(cookie_header, self.session_cookie_name)
+except Exception, e:
+session_cookie = None
+else:
+session_id = session_cookie.value


When the user first accesses the Web UI, session_cookie will be None,
resulting in an Internal Server Error.


Hmm... I didn't see this in testing. I think you mean the cookie_header 
will be None, not the session_cookie being None. That case should have 
been caught by the try/except block surrounding 
get_named_cookie_from_string(). But in any event I added a check for the 
cookie_header being None at the top of the function. Or am I 
misunderstanding the problem you saw?





diff --git a/ipapython/cookie.py b/ipapython/cookie.py
new file mode 100644
index 000..0033aed
--- /dev/null
+++ b/ipapython/cookie.py

[...]

+
+@property
+def timestamp(self):


Pylint complains here and in similar places below:
ipapython/cookie.py:313: [E0202, Cookie.timestamp] An attribute affected
in ipapython.cookie line 310 hide this method


This is a documented bug in pylint. The errors have been disabled along 
with a comment listing the two bug reports and a FIXME saying to remove 
the error exclusion once pylint has fixed the bug.





diff --git a/tests/test_ipapython/test_cookie.py 
b/tests/test_ipapython/test_cookie.py
new file mode 100644
index 000..4b3c317
--- /dev/null
+++ b/tests/test_ipapython/test_cookie.py
@@ -0,0 +1,332 @@
+# Authors:
+#   John Dennis jden...@redhat.com
+#
+# Copyright (C) 2012  Red Hat
+# see file 'COPYING' for use and warranty information
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License

Re: [Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-12-07 Thread John Dennis

On 12/07/2012 03:44 PM, Rob Crittenden wrote:

John Dennis wrote:

Revised patch attached.



Why catch exceptions from client_session_keyring_keyname() when it
doesn't raise any?


It may not explicitly raise an exception but one can still be raised if 
either KEYRING_COOKIE_NAME or principal is invalid. It's not likely that 
KEYRING_COOKIE_NAME would be invalid but the principal might be due to 
logic failures earlier.




In store_session_cookie() shouldn't we log an error if a cookie can't be
parsed, not a debug?


Good point. Actually there is another problem there, if None is returned 
we need to stop processing and return. I've fixed both.




In apply_session_cookie() I think we should log Cookie.URLMismatch and
Exception at the error level instead of debug.


Good point, changed that to an error message as well as the catch-all 
handler immediately below it.




My knowledge of cookies is rusty, but I don't understand this bit in
path_valid()

+if not url_path or not url_path.startswith('/'):
+request_path = '/'
+elif url_path.count('/') <= 1:
+request_path = '/'
+elif url_path.endswith('/'):
+request_path = url_path[:-1]
+else:
+request_path = url_path

If my url_path is /ipa isn't this going to treat it as /? That seems
wrong.


I agree, that's confusing; it confused me too, but that's exactly what's 
in the RFC (http://tools.ietf.org/html/rfc6265#page-16). I stared at 
that for a long time myself with exactly the same concerns.


I'd appreciate it if you or someone else would look at the RFC because I 
wonder if I'm not reading it correctly. I tend to agree that check is wrong.


I'll send a revised patch with the above mentioned fixes once someone 
else puts their eyeballs on the RFC, or maybe we should just remove the 
check for the time being.
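For reference, here is my own transcription of the relevant RFC 6265 rules (the default-path computation from section 5.1.4 and the path-match test), not code from the patch. It confirms the counterintuitive case under discussion: a request path of /ipa really does yield a default path of /, because it contains no more than one "/".

```python
def default_path(uri_path):
    # RFC 6265 section 5.1.4: compute the default-path of a request URI.
    if not uri_path or not uri_path.startswith('/'):
        return '/'
    if uri_path.count('/') <= 1:
        return '/'  # e.g. '/ipa' -> '/'
    # Otherwise: everything up to, but not including, the rightmost '/'
    return uri_path[:uri_path.rindex('/')]


def path_match(request_path, cookie_path):
    # RFC 6265 section 5.1.4: does a request path match a cookie path?
    if request_path == cookie_path:
        return True
    if request_path.startswith(cookie_path):
        if cookie_path.endswith('/'):
            return True
        if request_path[len(cookie_path)] == '/':
            return True
    return False
```

Note the path-match rules prevent the obvious false positive: a cookie path of /ipa matches /ipa/xml but not /ipasomething.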



Functionally the patch appears to be fine.

rob




--
John Dennis jden...@redhat.com


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] [PATCH 82] Compliant client side session cookie behavior

2012-11-11 Thread John Dennis
Note: This has been tested with both the command line api and the 
browser on both Fedora and RHEL-6. It has also been tested to make sure 
any cookies stored before an upgrade will work correctly.


--
John Dennis jden...@redhat.com

From 089d69a1e06636bbd2836fcb9072b5a2ffef7ae2 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Sun, 11 Nov 2012 17:05:32 -0500
Subject: [PATCH 82] Compliant client side session cookie behavior
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

In summary this patch does:

* Follow the defined rules for cookies when:

  - receiving a cookie (process the attributes)

  - storing a cookie (store cookie + attributes)

  - sending a cookie

+ validate the cookie domain against the request URL

+ validate the cookie path against the request URL

+ validate the cookie expiration

+ if valid then send only the cookie, no attributes

* Modifies how a request URL is stored during a XMLRPC
  request/response sequence.

* Refactors a bit of the request/response logic to allow for making
  the decision whether to send a session cookie instead of full
  Kerberos auth easier.

* The server now includes expiration information in the session cookie
  it sends to the client. The server always had the information
  available to prevent using an expired session cookie. Now that
  expiration timestamp is returned to the client as well and now the
  client will not send an expired session cookie back to the server.

* Adds a new module and unit test for cookies (see below)

Formerly we were always returning the session cookie no matter what
the domain or path was in the URL. We were also sending the cookie
attributes, which are for the client only (used to determine whether to
return a cookie). The attributes are not meant to be sent to the
server, so the previous behavior was a protocol violation. We also
were not checking the cookie expiration.
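The expiration check described above can be sketched roughly as below. This is an illustrative toy, not the ipapython/cookie.py API: the `StoredCookie` class and `cookie_header_value` helper are my own names. The key points are that only `name=value` pairs go on the wire (attributes stay client side), and anything past its expiration timestamp is silently dropped.

```python
from datetime import datetime


class StoredCookie:
    """Toy stand-in for a stored cookie with an absolute expiration."""

    def __init__(self, name, value, expires=None):
        self.name = name
        self.value = value
        self.expires = expires  # datetime, or None for a session cookie

    def is_expired(self, now=None):
        if self.expires is None:
            return False
        if now is None:
            now = datetime.utcnow()
        return now >= self.expires


def cookie_header_value(cookies, now=None):
    # Send only name=value pairs, never the attributes, and skip
    # any cookie that is past its expiration timestamp.
    valid = [c for c in cookies if not c.is_expired(now)]
    return '; '.join('%s=%s' % (c.name, c.value) for c in valid)
```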

Cookie library issues:

We need a library to create, parse, manipulate and format cookies both
in a client context and a server context. Core Python has two cookie
libraries, Cookie.py and cookielib.py. Why did we add a new cookie
module instead of using either of these two core Python libraries?

Cookie.py is designed for server side generation but can be used to
parse cookies on the client. It's the library we were using in the
server. However when I tried to use it in the client I discovered it
has some serious bugs. There are 7 defined cookie elements, it fails
to correctly parse 3 of the 7 elements which makes it unusable because
we depend on those elements. Since Cookie.py was designed for server
side cookie processing it's not hard to understand why it fails to
correctly parse cookies on the client side, because that's a client side
need. (Cookie.py also has an awkward, baroque API and is missing some
useful functionality we would have to build on top of it.)

cookielib.py is designed for the client side. It's fully featured and obeys
all the RFCs. It would be great to use, however it's tightly coupled
with another core library, urllib2.py. The http request and response
objects must be urllib2 objects. But we don't use urllib2, rather we use
httplib because xmlrpclib uses httplib. I don't see a reason why a
cookie library should be so tightly coupled to a protocol library, but
it is and that means we can't use it (I tried to just pick some isolated
entrypoints for our use but I kept hitting interaction/dependency problems).

I decided to solve the cookie library problems by writing a minimal
cookie library that does what we need and no more than that. It is a
new module in ipapython shared by both client and server and comes
with a new unit test. The module has plenty of documentation, no need
to repeat it here. It doesn't have every bell and whistle nor can it
handle all the cookie scenarios found in the wild with random servers
and browsers, it does what we need in an RFC manner (but not all RFC
features).
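The core of such a minimal client-side library is the Set-Cookie split the standard modules get wrong for this use case. The sketch below is my own illustration of the basic parsing step, not the actual ipapython/cookie.py interface: the first ";"-separated piece is the name=value pair, everything after it is attributes (Domain, Path, Expires, Secure, ...), which must be kept for the client's own decisions but never echoed back to the server.

```python
def parse_set_cookie(header):
    """Parse a Set-Cookie header value into (name, value, attributes).

    Attribute names are lowercased; value-less attributes such as
    Secure and HttpOnly map to an empty string.
    """
    parts = [p.strip() for p in header.split(';')]
    name, _, value = parts[0].partition('=')
    attrs = {}
    for attr in parts[1:]:
        k, _, v = attr.partition('=')
        attrs[k.strip().lower()] = v.strip()
    return name.strip(), value.strip(), attrs
```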

Request URL issues:

We also had problems in rpc.py whereby information from the request
which is needed when we process the response is not available. Most
important was the requesting URL. It turns out that the way the class
and object relationships are structured it's impossible to get this
information. Someone else must have run into the same issue because
there was a routine called reconstruct_url() which attempted to
recreate the request URL from other available
information. Unfortunately reconstruct_url() was not callable from
inside the response handler. So I decided to store the information in
the thread context and when the request is received extract it from
the thread context. It's perhaps not an ideal solution but we do
similar things elsewhere so at least it's consistent. I removed the
reconstruct_url() function because the exact information is now in the
context and trying to apply heuristics to recreate the url is probably
not robust.
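The thread-context technique amounts to a `threading.local` that the request path writes and the response path reads, so no information needs to be reconstructed. A minimal sketch, with hypothetical function names standing in for the rpc.py request/response handlers:

```python
import threading

# Per-thread request context: each thread sees its own attributes,
# so concurrent requests cannot clobber one another's URL.
context = threading.local()


def send_request(url):
    # Stash the exact request URL before issuing the request.
    context.request_url = url


def handle_response():
    # The original request URL is recoverable here without any
    # reconstruct_url()-style heuristics.
    return getattr(context, 'request_url', None)
```

Because the storage is per-thread, a different thread calling `handle_response()` sees nothing, which is exactly the isolation property being relied on.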

Ticket

Re: [Freeipa-devel] apache segfaults

2012-11-10 Thread John Dennis

On 11/10/2012 06:41 AM, Alexander Bokovoy wrote:

* anybody seen something similar?

I do see it, on F18 VMs.

Actually, found it:
https://bugzilla.redhat.com/show_bug.cgi?id=867276



Ah thank you Alexander and Martin, this is exactly the problem I was 
seeing, good to know there is a fix.


--
John Dennis jden...@redhat.com



[Freeipa-devel] apache segfaults

2012-11-09 Thread John Dennis
I'm wondering if anyone else has seen this. I've been running the server 
with debug=True to verify it's behaving properly which means I've been 
reading /var/log/httpd/error_log and what to my wondering eyes did appear?


[core:notice] [pid 9089] AH00052: child pid 9092 exit signal 
Segmentation fault (11)


The server seems to be running normally, except after the above segfault 
notice there are debug messages about loading our plugins which seems to 
imply it's restarting a wsgi process, but it's not clear that's what's 
actually happening. This is on a F18 vm.


The only other information in the log is:

*** glibc detected *** /usr/sbin/httpd: corrupted double-linked list: 
0x7f60952cddc0 ***


So a few questions:

* anybody seen something similar?

* does apache respawn a child process that crashes? (the log seems to 
suggest so)


* does systemd respawn a daemon that crashes?

* how does one get a coredump? (Is that an apache config item? A 
systemd flag?)


* any reason to believe this has anything to do with us?

* any suggestions for diagnosing this?

--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] Dojo and Web UI in 3.2

2012-11-07 Thread John Dennis

On 11/07/2012 03:15 PM, Dmitri Pal wrote:

Does it make sense to everybody else besides me?
If yes let us agree with the plan.


I've been following the discussion but I'm not a web developer and many 
of the technical issues are outside my scope of knowledge. However the 
proposal does seem to be well researched and thought out so my instinct 
is to defer to the experts and proceed based on their advice.


John


--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] [RANT] Patchwork process

2012-11-02 Thread John Dennis

On 11/02/2012 10:56 AM, Simo Sorce wrote:

I do not like the trac approach because it is not automatic, so it is
completely inconsistent, and also because trac is extremely slow.


Factoring out the whole patchwork issue I do have to agree with Simo 
that using trac is painful because it's so slow. I've learned to live 
with it but it is annoying.



--
John Dennis jden...@redhat.com



[Freeipa-devel] beware of abrt.pth

2012-10-30 Thread John Dennis
I've been adding some functionality to python-nss to support IPA. Right 
before I was ready to wrap up the work I upgraded my system and started 
to see failures in things that had previously worked. I finally tracked 
the problem down to the abrt-addon-python package which installs 
abrt.pth into Python's site-packages directory. abrt.pth causes the 
abrt_exception_handler to be loaded into every Python application which 
then pulls in a lot of other modules which execute during initialization 
with the potential for damaging (silent) side effects.
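The injection mechanism itself is standard Python: a line in a site-packages .pth file that begins with "import" is executed, not treated as a path, when the site module processes the directory. The harmless demonstration below reproduces the mechanism with a temp directory and an environment-variable side effect; the file and variable names are made up for the example.

```python
import os
import site
import tempfile

# A .pth line starting with "import" is exec'd by site.py -- this is
# exactly how abrt.pth pulls its exception handler into every Python
# process at interpreter startup.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, 'demo.pth'), 'w') as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# addsitedir() performs the same .pth processing site.py does for the
# real site-packages directories at startup.
site.addsitedir(demo_dir)
```

After `addsitedir()` returns, the side effect has already happened, which is why code injected this way can silently initialize libraries (like NSS via rpm) before the application gets a chance to.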


In particular any application using nss_init() to initialize NSS with an 
NSS database will fail all its PKI operations (which we do in IPA) 
because abrt loads rpm which initializes NSS without a database.

We should be using nss_init_context() instead as explained in this document:

https://wiki.mozilla.org/NSS_Library_Init

The following trac ticket has been opened, #3227

I have filed these bugs against abrt and rpm

https://bugzilla.redhat.com/show_bug.cgi?id=871506
https://bugzilla.redhat.com/show_bug.cgi?id=871485

In the near term we need to be aware that the abrt-addon-python package 
has the potential to cause problems with PKI.


IPA may be immune from the issue because we initialize and shutdown NSS 
multiple times, which may undo the damage done by abrt; yet on the other 
hand if we've shut down NSS and the abrt exception handler runs it may fail.


The initialization of NSS by libraries loaded by us on behalf of 
external agents may explain some of the NSS shutdown problems we've been 
having (mostly because NSS was never designed to support the use of NSS 
by libraries, as explained in the above document). The introduction of 
NSS contexts was grafted onto NSS to mitigate but not fully solve the issue.


--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] beware of abrt.pth

2012-10-30 Thread John Dennis

On 10/30/2012 11:42 AM, Simo Sorce wrote:

Should we make freeipa packages conflict with that module for the time
being ?


I don't know because I don't know how this might affect IPA 
specifically. I discovered the problem while testing a new release of 
python-nss. I can't say I've seen a specific IPA failure due to this, 
rather the potential is there and my intent was to bring awareness to 
the rest of the developers in case they see odd behavior in which case 
the simple workaround is to remove site-packages/abrt.pth.


Personally I don't care for packages which silently inject all sorts of 
unknown code at run time because of the potential problems. Yet on the 
other hand abrt can be quite useful.


--
John Dennis jden...@redhat.com



[Freeipa-devel] obscene spam associated with this list

2012-10-30 Thread John Dennis
Shortly after posting my email about abrt.pth I received 3 obscene 
replies to the email directly to me (not the list) offering a variety of 
sexual favors. Anyone else getting these? I imagine they've got a bot 
subscribed to the list and start spamming people who post.


Is there anything we can do to mitigate this?


--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] Experimental patchwork server

2012-10-23 Thread John Dennis

On 10/23/2012 09:00 AM, Simo Sorce wrote:

I strongly suggest you use git-send-email instead of thunderbird, it
makes everything a lot faster, see the instructions I sent in my
followup email.


I wrote a python script to manage my patch submissions a while ago which 
might be useful to folks, it's attached.


The basic idea is you keep a directory of your patch submissions. Inside 
the directory is also a file that has the next number to use for your 
submission. By default it runs git format-patch selecting the last 
commit. It creates a patch file using the patch submission format 
defined for IPA. If you use the -s option it also sends it to the list. 
It doesn't use git-send-email, rather it builds an email with a mime 
attachment according to our IPA recommendations. I don't recall why I 
didn't use git-send-email, but there was some reason (probably because I 
couldn't get it to follow the IPA conventions, not sure though).


If you have to rework a patch use the -n option to specify which patch 
you're modifying. The script automatically searches the patch directory 
and finds the next revision number for the patch.
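That revision search can be sketched independently of the full script. The regex below follows the IPA filename convention the script uses (project-user-NNNN[-R]-description.patch); the `next_revision` helper is my own simplification of what the script's directory scan effectively computes, not a verbatim excerpt.

```python
import re

# Matches e.g. 'freeipa-jdennis-0075-2-log-dogtag-errors.patch';
# the optional fourth group is the revision number.
ipa_filename_re = re.compile(
    r'^([^-]+)-([^-]+)-(\d+)(?:-(\d+))?-(.*)\.patch$')


def next_revision(filenames, number):
    """Return the next revision to use for patch `number` (1 if unused)."""
    revisions = []
    for name in filenames:
        m = ipa_filename_re.match(name)
        if m and int(m.group(3)) == number:
            # A filename with no revision suffix counts as revision 0.
            revisions.append(int(m.group(4)) if m.group(4) else 0)
    return max(revisions) + 1 if revisions else 1
```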


The config dict at the top will have to be modified to match your 
username, smtp server, etc. look for anything in UPPERCASE and replace 
with your specifics.


I like to use it because I don't have to remember my patch numbers and 
the result will always follow the IPA conventions without any fumbling 
around.


Petr3 will probably complain about using getopt and a config dict 
instead of optparse, but it works and it wasn't worth it to me to port 
it to a different mechanism. Anyone who wants to is more than welcome.



--
John Dennis jden...@redhat.com

#!/usr/bin/python

import getopt
import os
import errno
import sys
import subprocess
import re
import smtplib
import email
import traceback
from cStringIO import StringIO
from email.generator import Generator
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase

#---

prog_name = os.path.basename(sys.argv[0])

config = {
'project_name'  : 'freeipa',
'user_name' : 'USERNAME',
'patch_dir' : '/home/USERNAME/freeipa-patches',
'smtp_server'   : 'SMTP_SERVER',
'email_from_addr'   : 'FULLNAME USERNAME@USER_DOMAIN',
'email_to_addrs': ['freeipa-devel@redhat.com'],
'start_number_basename' : 'StartNumber',
'default_number': 1,
'number': None,
'send'  : False,
'run_format': True,
'dry_run'   : False,
'verbose'   : False,
'revision_range': '-1',
}

signature = '''
--
FULLNAME USERNAME@USER_DOMAIN
'''


#---
git_filename_re = re.compile(r'^(\d+)-(.*).patch$')
ipa_filename_re = re.compile(r'^([^-]+)-([^-]+)-(\d+)(-(\d+))?-(.*)\.patch$')
patch_subject_re = re.compile(r'Subject:\s+\[PATCH\s+(\d+)(-(\d+))?\]')
#---

class CommandError(Exception):
def __init__(self, cmd, msg):
self.cmd = cmd
self.msg = msg

def __str__(self):
return "COMMAND ERROR: cmd='%s'\n%s" % (self.cmd, self.msg)

#---

class PatchInfo:
def __init__(self, project, user, number, revision, description, path=None):
self.project = project
self.user = user
self.number = int(number)
self.revision = int(revision)
self.description = description
self.path = path
self.extension = 'patch'

def __str__(self):
extension = 'patch'

if self.revision:
filename = '%s-%s-%04d-%d-%s.%s' % \
(self.project, self.user, self.number, self.revision, 
self.description, self.extension)
else:
filename = '%s-%s-%04d-%s.%s' % \
(self.project, self.user, self.number, self.description, 
self.extension)

return filename

def __cmp__(self, other):
result = cmp(self.project, other.project)
if result != 0: return result

result = cmp(self.user, other.user)
if result != 0: return result

result = cmp(self.number, other.number)
if result != 0: return result

result = cmp(self.revision, other.revision)
if result != 0: return result

result = cmp(self.description, other.description)
if result != 0: return result

return 0

#---
def run_cmd(cmd):
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
status = os.waitpid

Re: [Freeipa-devel] Experimental patchwork server

2012-10-23 Thread John Dennis

On 10/23/2012 10:17 AM, Simo Sorce wrote:

Does this script send separate emails for each patch ?
Otherwise it will cause issues for the patchwork server.
As I said it can only handle one patch per email, that's why
git-send-email is handy
however git-send-email will not respect the rules we currently have in
freeipa.
I think using patchwork to track patches rather than relying on a
subject line is superior, but I do not want to impose it.


It only sends one email per invocation but it only expects git 
format-patch to generate one patch, not a patch series. It would have to 
be enhanced to handle patch series.



--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] [PATCH 75] log dogtag errors

2012-10-18 Thread John Dennis

On 10/18/2012 05:06 AM, Petr Viktorin wrote:

This looks much better. I found one more issue, though.


+if detail is not None:
+err_msg += ' (%s)' % detail


Here I get TypeError: unsupported operand type(s) for +=: 'Gettext' and
'unicode'.
Until our Gettext class supports addition (part of #3188), please use
`err_msg = u'%s (%s)' % (err_msg, detail)` instead.


Good catch, fixed. New patch attached.
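The failure mode Petr describes is easy to reproduce with a toy class: an object that renders as a string but defines no `__add__`/`__radd__` cannot be concatenated with `+=`, while `%`-formatting works because it only needs `__str__`. The `LazyText` class below is a made-up stand-in for the Gettext lazy-translation object, not IPA code.

```python
class LazyText:
    """Toy stand-in for a lazy-translation object with no __add__."""

    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg


err = LazyText('Unable to communicate with CMS')
try:
    err += ' (detail)'  # no __add__/__radd__ -> TypeError
    combined = str(err)
except TypeError:
    # The workaround: %-formatting stringifies via __str__.
    combined = '%s (%s)' % (err, 'detail')
```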

--
John Dennis jden...@redhat.com

From 503e1e5939bb4fd9af8ecfab2e5a07accf03c3fa Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Wed, 17 Oct 2012 13:29:13 -0400
Subject: [PATCH 75-2] log dogtag errors
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

If we get an error from dogtag we always did raise a
CertificateOperationError exception with a message describing the
problem. Unfortunately that error message did not go into the log,
it was just sent back to the caller. The fix is to format the error message
and send the same message to both the log and use it to initialize the
CertificateOperationError exception. This is done in the utility
method raise_certificate_operation_error().

https://fedorahosted.org/freeipa/ticket/2622
---
 ipaserver/plugins/dogtag.py | 68 -
 1 file changed, 48 insertions(+), 20 deletions(-)

diff --git a/ipaserver/plugins/dogtag.py b/ipaserver/plugins/dogtag.py
index baa41ad..d52bb7e 100644
--- a/ipaserver/plugins/dogtag.py
+++ b/ipaserver/plugins/dogtag.py
@@ -1233,6 +1233,27 @@ class ra(rabase.rabase):
 self.password = ''
 super(ra, self).__init__()
 
+def raise_certificate_operation_error(self, func_name, err_msg=None, detail=None):
+
+:param func_name: function name where error occurred
+
+:param err_msg:   diagnostic error message, if not supplied it will be
+  'Unable to communicate with CMS'
+:param detail:extra information that will be appended to err_msg
+  inside a parenthesis
+
+Raise a CertificateOperationError and log the error message.
+
+
+if err_msg is None:
+err_msg = _('Unable to communicate with CMS')
+
+if detail is not None:
+err_msg = u'%s (%s)' % (err_msg, detail)
+
+self.error('%s.%s(): %s', self.fullname, func_name, err_msg)
+raise CertificateOperationError(error=err_msg)
+
 def _host_has_service(self, host, service='CA'):
 
 :param host: A host which might be a master for a service.
@@ -1376,14 +1397,15 @@ class ra(rabase.rabase):
 
 # Parse and handle errors
 if (http_status != 200):
-raise CertificateOperationError(error=_('Unable to communicate with CMS (%s)') % \
-  http_reason_phrase)
+self.raise_certificate_operation_error('check_request_status',
+   detail=http_reason_phrase)
 
 parse_result = self.get_parse_result_xml(http_body, parse_check_request_result_xml)
 request_status = parse_result['request_status']
 if request_status != CMS_STATUS_SUCCESS:
-raise CertificateOperationError(error='%s (%s)' % \
-  (cms_request_status_to_string(request_status), parse_result.get('error_string')))
+self.raise_certificate_operation_error('check_request_status',
+   cms_request_status_to_string(request_status),
+   parse_result.get('error_string'))
 
 # Return command result
 cmd_result = {}
@@ -1461,14 +1483,15 @@ class ra(rabase.rabase):
 
 # Parse and handle errors
 if (http_status != 200):
-raise CertificateOperationError(error=_('Unable to communicate with CMS (%s)') % \
-  http_reason_phrase)
+self.raise_certificate_operation_error('get_certificate',
+   detail=http_reason_phrase)
 
 parse_result = self.get_parse_result_xml(http_body, parse_display_cert_xml)
 request_status = parse_result['request_status']
 if request_status != CMS_STATUS_SUCCESS:
-raise CertificateOperationError(error='%s (%s)' % \
-  (cms_request_status_to_string(request_status), parse_result.get('error_string')))
+self.raise_certificate_operation_error('get_certificate',
+   cms_request_status_to_string(request_status),
+   parse_result.get('error_string'))
 
 # Return command result
 cmd_result = {}
@@ -1527,15 +1550,17 @@ class ra(rabase.rabase):
  xml='true')
 # Parse and handle errors
 if (http_status != 200

Re: [Freeipa-devel] [PATCH 75] log dogtag errors

2012-10-17 Thread John Dennis

On 10/12/2012 04:35 AM, Petr Viktorin wrote:

On 10/11/2012 06:53 PM, John Dennis wrote:

On 04/28/2012 09:50 AM, John Dennis wrote:

On 04/27/2012 04:45 AM, Petr Viktorin wrote:

On 04/20/2012 08:07 PM, John Dennis wrote:

Ticket #2622

If we get an error from dogtag we always did raise a
CertificateOperationError exception with a message describing the
problem. Unfortunately that error message did not go into the log,
it was just sent back to the caller. The fix is to format the error message
and send the same message to both the log and use it to initialize the
CertificateOperationError exception.



The patch contains five hunks with almost exactly the same code,
applying the same changes in each case.
Wouldn't it make sense to move the _sslget call, parsing, and error
handling to a common method?



Yes it would and ordinarily I would have taken that approach. However on
IRC (or phone?) with Rob we decided not to perturb the code too much for
this particular issue because we intend to refactor the code later. This
was one of the last patches destined for 2.2 which is why we took the
more conservative approach.



I went back and looked at this. It's not practical to collapse
everything into a common subroutine unless you parameterize the heck out
of it. That's because all the patched locations have
subtly different things going on, different parameters to sslget
followed by different result parsing and handling. In retrospect I think
it's clearer to keep things separate rather than one subroutine that
needs a lot of parameters and/or convoluted logic to handle each unique
case.


I don't agree that it's clearer, but I guess that's debatable.


Part of the problem is the dogtag interface. Every command has the
potential to behave differently making it difficult to work with. I
wrote this code originally and got it reduced to as many common parts as
I could. At some point soon we'll be switching to a new dogtag REST
interface which hopefully will allow for greater commonality due to
interface consistency.

In summary: I still stand by the original patch.



However, I see no reason to not use a method such as:

def raise_certop_error(self, method_name, error=None, detail=None):
  Log and raise a CertificateOperationError

  :param method_name: Name of the method in which the error occurred
  :param error: Error string. If None, "Unable to communicate with
  CMS" is used.
  :param detail: Detailed or low-level information. Will be put in
  parentheses.
  

to at least get rid of the repetition this patch is adding - almost the
same format+log+raise sequence is used twice in each of the five hunks.



Added a utility function as suggested. Revised patch attached.
--
John Dennis jden...@redhat.com

From 48f1560745cfa0281a387aacf0737841bb78b065 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Wed, 17 Oct 2012 13:29:13 -0400
Subject: [PATCH 75-1] log dogtag errors
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

If we get an error from dogtag we always did raise a
CertificateOperationError exception with a message describing the
problem. Unfortunately that error message did not go into the log,
it was just sent back to the caller. The fix is to format the error message
and send the same message to both the log and use it to initialize the
CertificateOperationError exception. This is done in the utility
method raise_certificate_operation_error().

https://fedorahosted.org/freeipa/ticket/2622
---
 ipaserver/plugins/dogtag.py | 68 -
 1 file changed, 48 insertions(+), 20 deletions(-)

diff --git a/ipaserver/plugins/dogtag.py b/ipaserver/plugins/dogtag.py
index baa41ad..66fef57 100644
--- a/ipaserver/plugins/dogtag.py
+++ b/ipaserver/plugins/dogtag.py
@@ -1233,6 +1233,27 @@ class ra(rabase.rabase):
 self.password = ''
 super(ra, self).__init__()
 
+def raise_certificate_operation_error(self, func_name, err_msg=None, detail=None):
+
+:param func_name: function name where error occurred
+
+:param err_msg:   diagnostic error message, if not supplied it will be
+  'Unable to communicate with CMS'
+:param detail:extra information that will be appended to err_msg
+  inside a parenthesis
+
+Raise a CertificateOperationError and log the error message.
+
+
+if err_msg is None:
+err_msg = _('Unable to communicate with CMS')
+
+if detail is not None:
+err_msg += ' (%s)' % detail
+
+self.error('%s.%s(): %s', self.fullname, func_name, err_msg)
+raise CertificateOperationError(error=err_msg)
+
 def _host_has_service(self, host, service='CA'):
 
 :param host: A host which might be a master for a service.
@@ -1376,14 +1397,15 @@ class ra

Re: [Freeipa-devel] Unit tests failing on F18

2012-10-15 Thread John Dennis

On 10/15/2012 02:27 AM, Martin Kosek wrote:

On 10/12/2012 06:16 PM, John Dennis wrote:

O.K. I'll take a look at it. I seem to recall Rob looked into something similar
a couple of days ago. Rob, do you have any additional information to share?


Great, with your NSS+Python knowledge this should be a walk in the park :-)
  Any luck with investigation of this issue?


Not yet, it's next on my to-do list.


--
John Dennis jden...@redhat.com



Re: [Freeipa-devel] Unit tests failing on F18

2012-10-12 Thread John Dennis

On 10/12/2012 11:20 AM, Martin Kosek wrote:

Hello,

I was investigating global unit test failure on Fedora 18 for most of today, I
would like to share results I found so far.

Unit test and its related scripts on F18 now reports NSS BUSY exception, just
like this one:

# ./make-testcert
Traceback (most recent call last):
   File ./make-testcert, line 134, in module
 sys.exit(makecert(reqdir))
   File ./make-testcert, line 111, in makecert
 add=True)
   File ./make-testcert, line 68, in run
 result = self.execute(method, *args, **options)
   File /root/freeipa-master2/ipalib/backend.py, line 146, in execute
 raise error #pylint: disable=E0702
ipalib.errors.NetworkError: cannot connect to
'http://vm-042.idm.lab.bos.redhat.com/ipa/session/xml': [Errno -8053]
(SEC_ERROR_BUSY) NSS could not shutdown. Objects are still in use.

Something in F18 must have changed; this worked before. But leaked
NSSConnection objects without a proper close() now end with the exception above.

In case of make-testcert script, the exception is raised because the script
does the following procedure:

1) connect, do one command
2) disconnect
3) connect, do second command

However, during disconnect, NSSConnection is leaked, which makes NSS very
uncomfortable during the second connection attempt (and nss_shutdown()). I
managed to fix this issue with the attached patch. ./make-testcert or
./make-test tests/test_xmlrpc/test_group_plugin.py works fine now.

But a global ./make-test still fails; I think there is some remaining
NSSConnection leak. I suspect there is something wrong with how we use our
context (a threading.local object). It loses a connection, or some other
thread invoked in the ldap2 module may be kicking in. Here is my debug
output:

CONTEXT[xmlclient] = ipalib.request.Connection object at 0x9a1f5ec

Test a simple LDAP bind using ldap2 ... SKIP: No directory manager password in
/root/.ipa/.dmpw
Test the `ipaserver.rpcserver.jsonserver.unmarshal` method. ... ok
tests.test_ipaserver.test_rpcserver.test_session.test_mount ... CONTEXT
150714476: GET languages

CONTEXT[xmlclient] = None

The connection is in the context, but then something happens and it is gone.
Then, unit tests try to connect again and NSS fails.

I would be really glad if somebody with knowledge of NSS or of how threads
in Python/IPA work could give me some advice...


O.K. I'll take a look at it. I seem to recall Rob looked into something 
similar a couple of days ago. Rob, do you have any additional 
information to share?



--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



Re: [Freeipa-devel] [PATCH 75] log dogtag errors

2012-10-11 Thread John Dennis

On 04/28/2012 09:50 AM, John Dennis wrote:

On 04/27/2012 04:45 AM, Petr Viktorin wrote:

On 04/20/2012 08:07 PM, John Dennis wrote:

Ticket #2622

If we get an error from dogtag we always raised a
CertificateOperationError exception with a message describing the
problem. Unfortunately that error message did not go into the log; it was
just sent back to the caller. The fix is to format the error message once
and use the same message both for the log and to initialize the
CertificateOperationError exception.



The patch contains five hunks with almost exactly the same code,
applying the same changes in each case.
Wouldn't it make sense to move the _sslget call, parsing, and error
handling to a common method?



Yes it would and ordinarily I would have taken that approach. However on
IRC (or phone?) with Rob we decided not to perturb the code too much for
this particular issue because we intend to refactor the code later. This
was one of the last patches destined for 2.2 which is why we took the
more conservative approach.



I went back and looked at this. It's not practical to collapse 
everything into a common subroutine unless you parameterize the heck out 
of it. That's because all the patched locations have subtly different 
things going on: different parameters to sslget followed by different 
result parsing and handling. In retrospect I think it's clearer to keep 
things separate rather than have one subroutine that needs a lot of 
parameters and/or convoluted logic to handle each unique case.


Part of the problem is the dogtag interface. Every command has the 
potential to behave differently, making it difficult to work with. I 
wrote this code originally and got it reduced to as many common parts as 
I could. At some point soon we'll be switching to a new dogtag REST 
interface which hopefully will allow for greater commonality due to 
interface consistency.


In summary: I still stand by the original patch.

--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



Re: [Freeipa-devel] [PATCH 78] Ticket #2979 - prevent last admin from being disabled

2012-09-03 Thread John Dennis

On 09/03/2012 07:53 AM, Petr Viktorin wrote:

On 08/26/2012 07:19 PM, John Dennis wrote:

On 08/20/2012 01:37 PM, Petr Viktorin wrote:

(Sorry if you're getting this twice; I didn't send it to the list)

On 08/16/2012 08:38 PM, John Dennis wrote:


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/

freeipa-jdennis-0078-Ticket-2979-prevent-last-admin-from-being-disabled.patch



From c47109c63530e188db76986fdda48c76bf681d10 Mon Sep 17 00:00:00 2001
From: John Dennisjden...@redhat.com
Date: Thu, 16 Aug 2012 20:28:44 -0400
Subject: [PATCH 78] Ticket #2979 - prevent last admin from being
disabled
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

We prevent the last member of the admin group from being deleted. The
same check needs to be performed when disabling a user.

Moved the code in del_user to a common subroutine and call it from
both user_del and user_disable. Note, unlike user_del user_disable
does not have a 'pre' callback therefore the check function is called
in user_disable's execute routine.


This should also prevent disabling all admins if there's more than one:

# ipa user-add admin2 --first=a --last=b
---
Added user admin2
---
...
# ipa group-add-member admins --user=admin2
-
Number of members added 1
-
# ipa user-disable admin2
--
Disabled user account admin2
--
# ipa user-disable admin
--
Disabled user account admin
--
# ipa ping
ipa: ERROR: Server is unwilling to perform: Account inactivated. Contact
system administrator.

Also with one enabled and one disabled admin, it shouldn't be possible
to delete the enabled one.


Please add some tests; you can extend the ones added in commit f8e7b51.


Good catch with respect to disabled users, thank you.

Reworked patch attached, see patch comments.






Works well now, just the error message is incorrect: it mentions only
deleting, not disabling.

$ ipa user-disable admin
ipa: ERROR: admin cannot be deleted because it is the last member of
group admins


Updated the error message to say

... cannot be deleted or disabled because ...


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
From 798069ca293ce98ea62b20a27664ebd78d1e4a4d Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Thu, 16 Aug 2012 20:28:44 -0400
Subject: [PATCH 78-2] prevent last admin from being disabled
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

We prevent the last member of the admin group from being deleted. The
same check needs to be performed when disabling a user.

* Moved the code in del_user to the common subroutine
  check_protected_member() and call it from both user_del and
  user_disable. Note, unlike user_del user_disable does not have a
  'pre' callback therefore the check function is called in
  user_disable's execute routine.

* Make check_protected_member() aware of disabled members. It's not
  sufficient to check which members of the protected group are
  present, one must only consider those members which are enabled.

* Add tests to test_user_plugin.py.

  - verify you cannot delete nor disable the last member of the admin
group

  - verify when the admin group contains disabled users in addition to
enabled users only the enabled users are considered when
determining if the last admin is about to be disabled or deleted.

* Replace duplicated hardcoded values in the tests with variables or
  subroutines, this makes the individual tests a bit more succinct and
  easier to copy/modify.

* Update error msg to reflect either deleting or disabling is an error.

https://fedorahosted.org/freeipa/ticket/2979
---
 ipalib/errors.py  |   6 +-
 ipalib/plugins/user.py|  27 ++-
 tests/test_xmlrpc/test_user_plugin.py | 443 +-
 3 files changed, 303 insertions(+), 173 deletions(-)

diff --git a/ipalib/errors.py b/ipalib/errors.py
index c25560b..1bff2ac 100644
--- a/ipalib/errors.py
+++ b/ipalib/errors.py
@@ -1627,18 +1627,18 @@ class DependentEntry(ExecutionError):
 
 class LastMemberError(ExecutionError):
 
-**4308** Raised when an entry being deleted is last member of a protected group
+**4308** Raised when an entry being deleted or disabled is last member of a protected group
 
 For example:
  raise LastMemberError(key=u'admin', label=u'group', container=u'admins')
 Traceback (most recent call last):
   ...
-LastMemberError: admin cannot be deleted because it is the last member of group admins
+LastMemberError: admin cannot be deleted or disabled because it is the last member of group admins
 
 
 
 errno = 4308
-format = _('%(key)s cannot be deleted because it is the last member of %(label)s

[Freeipa-devel] [PATCH 81] ipa user-find --manager does not find matches

2012-08-31 Thread John Dennis


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
From eda845c33c20259cce39621b707906808b2b9335 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Fri, 31 Aug 2012 08:16:49 -0400
Subject: [PATCH 81] ipa user-find --manager does not find matches
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

The manager LDAP attribute is a dn pointing inside the user
container. When passed on the command it is typically a bare user
uid. The search filter will only succeed if the bare uid is converted
to a full dn because that is what is stored in the value for the
manager attribute.

The search failure is solved by calling _normalize_manager() which
does the conversion to a dn (if not already a dn).

It feels like this type of conversion should be performed in the pre
callback, which allows one to modify the filter. But when the pre
callback is invoked the filter is already a complex string with the
manager attribute inserted. This is because the LDAPSearch.execute()
method processes the options dict and constructs a filter component for
each key/value in the options dict prior to invoking the pre callback. If
we wanted to modify the manager value in the filter in the pre callback
we would have to decompose the filter string, perform dn checking and
then reassemble the filter. It's much cleaner to perform the dn
operations on the manager value before it gets embedded into what
otherwise might be a very complex filter. This is the reason why the
normalization is performed in the execute method as opposed to the pre
callback. Other classes do similar things in their execute methods as
opposed to their callbacks; selinuxusermap_find is one example.

https://fedorahosted.org/freeipa/ticket/2264
---
 ipalib/plugins/user.py | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py
index d0f7da2..c024e85 100644
--- a/ipalib/plugins/user.py
+++ b/ipalib/plugins/user.py
@@ -642,6 +642,13 @@ class user_find(LDAPSearch):
 ),
 )
 
+def execute(self, *args, **options):
+# assure the manager attr is a dn, not just a bare uid
+manager = options.get('manager')
+if manager is not None:
+options['manager'] = self.obj._normalize_manager(manager)
+return super(user_find, self).execute(*args, **options)
+
 def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options):
 assert isinstance(base_dn, DN)
 if options.get('whoami'):
-- 
1.7.11.4

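
The normalization step described in the patch above can be sketched as
follows (a hypothetical standalone helper with an assumed base DN; the real
_normalize_manager() lives on the user object and uses the API's configured
containers):

```python
# Assumed container DN for the example; the real value comes from the API config.
USERS_CONTAINER = 'cn=users,cn=accounts,dc=example,dc=com'

def normalize_manager(value):
    """Convert a bare uid into the full dn stored in the manager attribute.

    A value that already looks like a dn is passed through unchanged.
    """
    if value.lower().startswith('uid='):
        return value
    return 'uid=%s,%s' % (value, USERS_CONTAINER)

# A bare uid becomes a dn; an existing dn is left untouched.
print(normalize_manager('tuser1'))
print(normalize_manager('uid=tuser1,' + USERS_CONTAINER))
```

Running the manager option through such a helper before the filter string is
assembled is what makes the later (manager=...) component match the dn
values actually stored in LDAP.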

Re: [Freeipa-devel] [PATCH 81] ipa user-find --manager does not find matches

2012-08-31 Thread John Dennis
The original patch was missing a unit test to verify the fix. This 
version of the patch now includes a unit test.


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
From a989885f065a6d05eff16f92b259f76c3fceaace Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Fri, 31 Aug 2012 08:16:49 -0400
Subject: [PATCH 81-1] ipa user-find --manager does not find matches
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

The manager LDAP attribute is a dn pointing inside the user
container. When passed on the command it is typically a bare user
uid. The search filter will only succeed if the bare uid is converted
to a full dn because that is what is stored in the value for the
manager attribute.

The search failure is solved by calling _normalize_manager() which
does the conversion to a dn (if not already a dn).

It feels like this type of conversion should be performed in the pre
callback, which allows one to modify the filter. But when the pre
callback is invoked the filter is already a complex string with the
manager attribute inserted. This is because the LDAPSearch.execute()
method processes the options dict and constructs a filter component for
each key/value in the options dict prior to invoking the pre callback. If
we wanted to modify the manager value in the filter in the pre callback
we would have to decompose the filter string, perform dn checking and
then reassemble the filter. It's much cleaner to perform the dn
operations on the manager value before it gets embedded into what
otherwise might be a very complex filter. This is the reason why the
normalization is performed in the execute method as opposed to the pre
callback. Other classes do similar things in their execute methods as
opposed to their callbacks; selinuxusermap_find is one example.

Patch also introduces new unit test to verify.

https://fedorahosted.org/freeipa/ticket/2264
---
 ipalib/plugins/user.py|  7 +++
 tests/test_xmlrpc/test_user_plugin.py | 27 +++
 2 files changed, 34 insertions(+)

diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py
index d0f7da2..c024e85 100644
--- a/ipalib/plugins/user.py
+++ b/ipalib/plugins/user.py
@@ -642,6 +642,13 @@ class user_find(LDAPSearch):
 ),
 )
 
+def execute(self, *args, **options):
+# assure the manager attr is a dn, not just a bare uid
+manager = options.get('manager')
+if manager is not None:
+options['manager'] = self.obj._normalize_manager(manager)
+return super(user_find, self).execute(*args, **options)
+
 def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options):
 assert isinstance(base_dn, DN)
 if options.get('whoami'):
diff --git a/tests/test_xmlrpc/test_user_plugin.py b/tests/test_xmlrpc/test_user_plugin.py
index 5240c8c..f146b34 100644
--- a/tests/test_xmlrpc/test_user_plugin.py
+++ b/tests/test_xmlrpc/test_user_plugin.py
@@ -657,6 +657,33 @@ class test_user(Declarative):
 ),
 ),
 
+dict(
+desc='Search for %s with manager %s' % (user2, user1),
+command=(
+'user_find', [user2], {'manager': user1}
+),
+expected=dict(
+result=[
+dict(
+dn=get_user_dn(user2),
+givenname=[u'Test'],
+homedirectory=[u'/home/tuser2'],
+loginshell=[u'/bin/sh'],
+sn=[u'User2'],
+uid=[user2],
+nsaccountlock=False,
+has_keytab=False,
+has_password=False,
+uidnumber=[fuzzy_digits],
+gidnumber=[fuzzy_digits],
+manager=[user1],
+),
+],
+summary=u'1 user matched',
+count=1,
+truncated=False,
+),
+),
 
 dict(
 desc='Delete %s and %s at the same time' % (user1, user2),
-- 
1.7.11.4


Re: [Freeipa-devel] Paging in Web UI

2012-08-29 Thread John Dennis

On 08/29/2012 10:35 AM, Rich Megginson wrote:

On 08/29/2012 07:34 AM, John Dennis wrote:

On 08/28/2012 02:31 PM, Endi Sukma Dewata wrote:



We can also use Simple Paged Results, but if I understood correctly it
requires the httpd to maintain an open connection to the LDAP server for
each user and for each page.


Not for each user.  In 389-ds-base-1.2.11 you can have multiple simple
paged result searches on a single connection - see
https://fedorahosted.org/389/ticket/260


Well this is the crux of the problem. We do not maintain a connection 
per user. LDAP connections exist for the duration of a single IPA RPC 
call. Those RPC calls may be multiplexed across multiple IPA server 
instances, each of which is its own process.


Our LDAP connections are very short lived and are scattered across 
processes.



This is the problem that VLV and Simple Paged Results are trying to
solve - how to allow users to scroll/page through very large result sets.



--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



Re: [Freeipa-devel] Paging in Web UI

2012-08-28 Thread John Dennis

On 08/28/2012 09:30 AM, Petr Vobornik wrote:

I would like to point out a problem in Web UI related to paging and
suggest a possible solution:

The current implementation of paging in the Web UI is not serving its
purpose, which should be to do operations faster and to be able to find
all users.

How current implementation should work:
   1) get all primary keys (pkeys) of certain object type (users, ...)
   2) load first 20 of them
   3) from 1) we know record count so we can choose other page (different
subset of 20)

How it actually works:
  * get up to 2000 keys; with more items an LDAP limit is reached and the
result is truncated
  * we don't know how many there are, so paging doesn't work as it should

Problem:
Let's say that paging is working (no LDAP limit) and consider this: Our
page size is 20 records. With 400 users it is 20 pages. With 2000 users
(current search limit) it is 200 pages and so on. Personally I wouldn't
like to go through 20 pages. 200 or more pages sounds like a punishment.
There is also a question of whether we even need to do it. I don't see a
benefit of paging here. For a smaller setup an increase of the page size
may help the admin to see more users on a page more quickly (rolling the
mouse wheel is faster than clicking the next-page button).

For large setups, the size of the pkey list must also be taken into
account. For each record there is a pkey and a dn (the dn is not required,
but removing it would need some hacking in the python code). At the moment
the list size grows drastically because the JSON response includes
indenting by spaces and is not compressed (easy to fix, but still there).
With 10 000 or more users this pkey and dn list is pretty big (slow load
on slower networks - VPNs). This list is also loaded on each page change
(record-wise; can be improved by caching).

IMO with hundreds or thousands of users the most common use case is a
search by login or name or something similar. Paging is not required in
this case. It may be required if the result count is greater than the size
limit; in such a case an option to temporarily increase the size limit or
to enable paging would be beneficial. Apart from this case, paging should
be off. It will speed up page load because it executes only one request.

Possible solution:
1) I suggest introducing configuration options for paging. They can be
global (the default for all search pages) or individual per page or per
Web UI user. Per-user configuration can be stored in the browser (to not
pollute LDAP with application data). Global configuration would live on
the server in the config plugin (minor pollution).

Those options should be (per page and global):
   * enable paging
   * page size

Note: when paging is disabled, the page size is basically the --sizelimit option.

2) On search pages we should have controls to enable/disable paging and
to change the sizelimit for that particular moment.
3) Compress JSON responses (on httpd level)

This way the admin can configure defaults for his deployment and users
can adjust them for certain situations.

Btw always including member_xx attributes in find or show commands is
not good for search pages either but that's another topic.

Comments are welcome.


Your possible solution does not address how many results are fetched 
(unless I misunderstood).


I'm not sure how per-user preferences are handled in browsers, but don't 
forget we now have session support in the server. Server session data is 
available for use.


If you don't want to fetch every record in order to implement paging 
smartly, given its performance issues, why not do the query on the 
server, cache the results in the session, and have the RPC return the 
total number of results plus a subset of the result? Another RPC could 
retrieve the next/previous subset of results from the cached result in 
the session.
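
That server-side scheme can be sketched as follows (illustrative only;
PagedCache and the session dict are invented names for this example, not
FreeIPA API):

```python
class PagedCache(object):
    """Cache a full search result in the server session; serve one page per RPC."""

    def __init__(self, results, page_size=20):
        self.results = list(results)
        self.page_size = page_size

    @property
    def total(self):
        return len(self.results)

    def page(self, number):
        """Return (total, subset) for the 1-based page `number`."""
        start = (number - 1) * self.page_size
        return self.total, self.results[start:start + self.page_size]

# The first RPC runs the search once and caches it in the session;
# later page-change RPCs only index into the cached result.
session = {}
session['user_find'] = PagedCache(['user%d' % i for i in range(45)], page_size=20)
total, subset = session['user_find'].page(3)   # 45 total, 5 entries on page 3
```

Only one page of data crosses the wire per RPC, which is also why
compression becomes less pressing under this design.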


I don't think there is any need for pretty printing JSON data with 
indentation in an RPC. Is it an accident or an oversight that we're 
pretty printing it? For large data sets compression would be a win, 
but most of our RPC communication is not large. Compression has an 
overhead. With small data you'll use more cycles compressing and 
uncompressing than you would sending verbose but small data blobs. 
Compression should be an RPC option specified by either side of the 
connection and the receiver should be prepared to conditionally 
uncompress based on a flag value in the RPC. If you use server session 
data to support paging you may not need to introduce compression since 
you would only be passing one page of data at a time.


User specified page size (without limitation) is an absolute necessity. 
I am frequently annoyed by web sites which either do not allow me to 
specify page size or constrain it to ridiculous hard coded limits such 
as 10, 20 or 30.



--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



Re: [Freeipa-devel] Ticket #2866 - referential integrity in IPA

2012-08-27 Thread John Dennis
Just out of curiosity, I saw something this weekend while testing and 
I'm wondering if it's expected behavior or if referential integrity 
would address it.


I was able to add a non-existent user to a group. Shouldn't that have 
been an error? Do we check for that in the ldap pre callback? Do we 
intend for referential integrity to catch those sorts of things?


Or do we allow non-existent users to be members of group for some reason?

--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



Re: [Freeipa-devel] [PATCH 78] Ticket #2979 - prevent last admin from being disabled

2012-08-26 Thread John Dennis

On 08/20/2012 01:37 PM, Petr Viktorin wrote:

(Sorry if you're getting this twice; I didn't send it to the list)

On 08/16/2012 08:38 PM, John Dennis wrote:


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/

freeipa-jdennis-0078-Ticket-2979-prevent-last-admin-from-being-disabled.patch


From c47109c63530e188db76986fdda48c76bf681d10 Mon Sep 17 00:00:00 2001
From: John Dennisjden...@redhat.com
Date: Thu, 16 Aug 2012 20:28:44 -0400
Subject: [PATCH 78] Ticket #2979 - prevent last admin from being disabled
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

We prevent the last member of the admin group from being deleted. The
same check needs to be performed when disabling a user.

Moved the code in del_user to a common subroutine and call it from
both user_del and user_disable. Note, unlike user_del user_disable
does not have a 'pre' callback therefore the check function is called
in user_disable's execute routine.


This should also prevent disabling all admins if there's more than one:

# ipa user-add admin2 --first=a --last=b
---
Added user admin2
---
...
# ipa group-add-member admins --user=admin2
-
Number of members added 1
-
# ipa user-disable admin2
--
Disabled user account admin2
--
# ipa user-disable admin
--
Disabled user account admin
--
# ipa ping
ipa: ERROR: Server is unwilling to perform: Account inactivated. Contact
system administrator.

Also with one enabled and one disabled admin, it shouldn't be possible
to delete the enabled one.


Please add some tests; you can extend the ones added in commit f8e7b51.


Good catch with respect to disabled users, thank you.

Reworked patch attached, see patch comments.




--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
From c635ac107de3b0a3965c596224b0f8c73c9139ac Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Thu, 16 Aug 2012 20:28:44 -0400
Subject: [PATCH 78-1] Ticket #2979 - prevent last admin from being disabled
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Ticket #2979 - prevent last admin from being disabled

We prevent the last member of the admin group from being deleted. The
same check needs to be performed when disabling a user.

* Moved the code in del_user to the common subroutine
  check_protected_member() and call it from both user_del and
  user_disable. Note, unlike user_del user_disable does not have a
  'pre' callback therefore the check function is called in
  user_disable's execute routine.

* Make check_protected_member() aware of disabled members. It's not
  sufficient to check which members of the protected group are
  present, one must only consider those members which are enabled.

* Add tests to test_user_plugin.py.

  - verify you cannot delete nor disable the last member of the admin
group

  - verify when the admin group contains disabled users in addition to
enabled users only the enabled users are considered when
determining if the last admin is about to be disabled or deleted.

* Replace duplicated hardcoded values in the tests with variables or
  subroutines, this makes the individual tests a bit more succinct and
  easier to copy/modify.
---
 ipalib/plugins/user.py|  27 ++-
 tests/test_xmlrpc/test_user_plugin.py | 443 +-
 2 files changed, 300 insertions(+), 170 deletions(-)

diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py
index 529699f..d0f7da2 100644
--- a/ipalib/plugins/user.py
+++ b/ipalib/plugins/user.py
@@ -166,6 +166,24 @@ def normalize_principal(principal):
 return unicode('%s@%s' % (user, realm))
 
 
+def check_protected_member(user, protected_group_name=u'admins'):
+'''
+Ensure the last enabled member of a protected group cannot be deleted or
+disabled by raising LastMemberError.
+'''
+
+# Get all users in the protected group
+result = api.Command.user_find(in_group=protected_group_name)
+
+# Build list of users in the protected group who are enabled
+result = result['result']
+enabled_users = [entry['uid'][0] for entry in result if not entry['nsaccountlock']]
+
+# If the user is the last enabled user raise LastMemberError exception
+if enabled_users == [user]:
+raise errors.LastMemberError(key=user, label=_(u'group'),
+container=protected_group_name)
+
 class user(LDAPObject):
 
 User object.
@@ -550,11 +568,7 @@ class user_del(LDAPDelete):
 
 def pre_callback(self, ldap, dn, *keys, **options):
 assert isinstance(dn, DN)
-protected_group_name = u'admins'
-result = api.Command.group_show(protected_group_name)
-if result['result'].get('member_user', []) == [keys[-1

[Freeipa-devel] [PATCH 80] Ticket #2850 - Ipactl exception not handled well

2012-08-20 Thread John Dennis


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
From fda504233ee46a494b7ed6b85593e7e586739425 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Mon, 20 Aug 2012 16:47:52 -0400
Subject: [PATCH 80] Ticket #2850 - Ipactl exception not handled well
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Ticket #2850 - Ipactl exception not handled well

There were various places in ipactl which initialized IpactlError with
None as the msg. If you called str() on that exception all was well
because ScriptError.__str__() converted a msg with None to the empty
string (IpactlError is subclassed from ScriptError). But a few places
directly access e.msg which will be None if initialized that way. It's
hard to tell from the stack traces but I'm pretty sure it's those
places which use e.msg directly which will cause the problems seen in
the bug report.

I do not believe it is ever correct to initialize an exception message
to None; I don't even understand what that would mean. On the other hand,
initializing to the empty string is sensible and, for that matter, is
the default for the class.

This patch makes two fixes:

1) The ScriptError initializer will now convert a msg parameter of
None to the empty string.

2) All places that initialized IpactlError's msg parameter to None now
omit the None argument, allowing the msg parameter to default to the
empty string.

I don't know how to test the fix for Ticket #2850 because it's not
clear how it got into that state in the first place, but I do believe
initializing the msg value to None is clearly wrong, and the patch should
fix the problem.
---
 install/tools/ipactl   | 10 +-
 ipapython/admintool.py |  4 +++-
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/install/tools/ipactl b/install/tools/ipactl
index e173d10..d4b2c08 100755
--- a/install/tools/ipactl
+++ b/install/tools/ipactl
@@ -186,9 +186,9 @@ def ipa_start(options):
 pass
 if isinstance(e, IpactlError):
 # do not display any other error message
-raise IpactlError(None, e.rval)
+raise IpactlError(rval=e.rval)
 else:
-raise IpactlError(None)
+raise IpactlError()
 
 if len(svc_list) == 0:
 # no service to stop
@@ -235,7 +235,7 @@ def ipa_stop(options):
 # just try to stop it, do not read a result
 dirsrv.stop()
 finally:
-raise IpactlError(None)
+raise IpactlError()
 
 if len(svc_list) == 0:
 # no service to stop
@@ -277,9 +277,9 @@ def ipa_restart(options):
 pass
 if isinstance(e, IpactlError):
 # do not display any other error message
-raise IpactlError(None, e.rval)
+raise IpactlError(rval=e.rval)
 else:
-raise IpactlError(None)
+raise IpactlError()
 
 if len(svc_list) == 0:
 # no service to stop
diff --git a/ipapython/admintool.py b/ipapython/admintool.py
index 1ba8b6b..b644516 100644
--- a/ipapython/admintool.py
+++ b/ipapython/admintool.py
@@ -36,11 +36,13 @@ class ScriptError(StandardError):
 An exception that records an error message and a return value
 
 def __init__(self, msg='', rval=1):
+if msg is None:
+msg = ''
 self.msg = msg
 self.rval = rval
 
 def __str__(self):
-return self.msg or ''
+return self.msg
 
 
 class AdminTool(object):
-- 
1.7.11.4

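
The effect of the fix can be demonstrated with a self-contained version of
the class (a sketch: Exception stands in here for the Python 2
StandardError base used in the patch):

```python
class ScriptError(Exception):
    """An exception that records an error message and a return value."""

    def __init__(self, msg='', rval=1):
        # The fix: coerce None to '' so code reading e.msg directly is safe.
        if msg is None:
            msg = ''
        self.msg = msg
        self.rval = rval

    def __str__(self):
        return self.msg

e = ScriptError(None, 2)
assert e.msg == ''     # direct attribute access no longer yields None
assert str(e) == ''
assert e.rval == 2
```

With the coercion in the initializer, __str__ can simply return self.msg
and every consumer of e.msg sees a string.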

[Freeipa-devel] [PATCH 80] Ticket #2850 - Ipactl exception not handled well

2012-08-20 Thread John Dennis


--
John Dennis jden...@redhat.com

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
From fda504233ee46a494b7ed6b85593e7e586739425 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Mon, 20 Aug 2012 16:47:52 -0400
Subject: [PATCH 80] Ticket #2850 - Ipactl exception not handled well
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Ticket #2850 - Ipactl exception not handled well

There were various places in ipactl which intialized IpactlError with
None as the msg. If you called str() on that exception all was well
because ScriptError.__str__() converted a msg with None to the empty
string (IpactlError is subclassed from ScriptError). But a few places
directly access e.msg which will be None if initialized that way. It's
hard to tell from the stack traces but I'm pretty sure it's those
places which use e.msg directly which will cause the problems seen in
the bug report.

I do not believe it is ever correct to initialize an exception message
to None, I don't even understand what that means. On the other hand
initializing to the empty string is sensible and for that matter is
the default for the class.

This patch makes two fixes:

1) The ScriptError initializer will now convert a msg parameter of
None to the empty string.

2) All places that initialized IpactlError's msg parameter to None
removed the None initializer allowing the msg parameter to default
to the empty string.

I don't know how to test the fix for Ticket #2850 because it's not
clear how it got into that state in the first place, but I do believe
initializing the msg value to None is clearly wrong, and fixing that
should resolve the problem.
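The normalization the patch adds can be illustrated in isolation. Below is a minimal sketch mirroring the patched ScriptError (the real class lives in ipapython/admintool.py and subclasses StandardError on Python 2):

```python
class ScriptError(Exception):
    """Records an error message and a process return value."""

    def __init__(self, msg='', rval=1):
        # Normalize None to '' so callers that read e.msg directly
        # never see None (the root cause of ticket #2850).
        if msg is None:
            msg = ''
        self.msg = msg
        self.rval = rval

    def __str__(self):
        return self.msg
```

With this, ScriptError(None) and ScriptError() both end up with msg == '', so str(e) and direct e.msg access agree.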
---
 install/tools/ipactl   | 10 +-
 ipapython/admintool.py |  4 +++-
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/install/tools/ipactl b/install/tools/ipactl
index e173d10..d4b2c08 100755
--- a/install/tools/ipactl
+++ b/install/tools/ipactl
@@ -186,9 +186,9 @@ def ipa_start(options):
 pass
 if isinstance(e, IpactlError):
 # do not display any other error message
-raise IpactlError(None, e.rval)
+raise IpactlError(rval=e.rval)
 else:
-raise IpactlError(None)
+raise IpactlError()
 
 if len(svc_list) == 0:
 # no service to stop
@@ -235,7 +235,7 @@ def ipa_stop(options):
 # just try to stop it, do not read a result
 dirsrv.stop()
 finally:
-raise IpactlError(None)
+raise IpactlError()
 
 if len(svc_list) == 0:
 # no service to stop
@@ -277,9 +277,9 @@ def ipa_restart(options):
 pass
 if isinstance(e, IpactlError):
 # do not display any other error message
-raise IpactlError(None, e.rval)
+raise IpactlError(rval=e.rval)
 else:
-raise IpactlError(None)
+raise IpactlError()
 
 if len(svc_list) == 0:
 # no service to stop
diff --git a/ipapython/admintool.py b/ipapython/admintool.py
index 1ba8b6b..b644516 100644
--- a/ipapython/admintool.py
+++ b/ipapython/admintool.py
@@ -36,11 +36,13 @@ class ScriptError(StandardError):
 An exception that records an error message and a return value
 
 def __init__(self, msg='', rval=1):
+if msg is None:
+msg = ''
 self.msg = msg
 self.rval = rval
 
 def __str__(self):
-return self.msg or ''
+return self.msg
 
 
 class AdminTool(object):
-- 
1.7.11.4


Re: [Freeipa-devel] Announcing FreeIPA v3.0.0 beta 2 Release

2012-08-20 Thread John Dennis

On 08/20/2012 05:27 PM, Rob Crittenden wrote:

Jérôme Fenal wrote:


2012/8/17 Rob Crittenden rcrit...@redhat.com mailto:rcrit...@redhat.com

 The FreeIPA team is proud to announce version FreeIPA v3.0.0 beta 2.


Hi Rob,

Regarding translations, I don't see yet a 3.0 branch on Transifex. Is it
in the pipeline, so we could have time to work on translations for 3.0 GA?


The master branch in Transifex is equivalent to the master branch in IPA,
and has been updated recently.

We plan to branch for 3.0 in the IPA repo, I suppose we can look into
branching in Transifex at the same time, but for now it should be safe
to work from master. I don't believe Transifex really handles branches
very well, but John knows more about it than I.


Transifex has no concept of branches, much to many developers' dismay. The 
suggested solution in Transifex is to create a new resource for each branch 
(which is what we did with 2.2). Currently in TX we have an ipa resource 
which maps to git's master and a 2.2 resource which maps to git's 2.2 
branch.


It's kind of sucky because there is no connection in TX between 
resources and hence branches; each resource stands on its own. That 
means even though 90% of the strings in one resource are the same as in 
another resource (because they're just different branch versions), 
translators won't have the advantage of that information (unless they know).


I brought the question of software branches up in the TX forums and was 
met with deafening silence.




[Freeipa-devel] [PATCH 79] Ticket #3008: DN objects hash differently depending on case

2012-08-17 Thread John Dennis


From 18182bb02a01718a2fc837670521ab757f58bfd4 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Fri, 17 Aug 2012 15:34:40 -0400
Subject: [PATCH 79] Ticket #3008: DN objects hash differently depending on
 case
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Because the attrs & values in DN's, RDN's and AVA's are comparison case-
insensitive, the hash value of two objects which compare as equal but
differ in case must also be the same. This is critical when
these objects are used as a dict key or in a set, because dicts and sets
use the object's __hash__ value in conjunction with the object's __eq__
method to look up the object.

The defect was that the DN, RDN & AVA objects computed their hash from the case-
preserving string representation, thus two otherwise equal objects
incorrectly yielded different hash values.

The problem manifests itself when one of these objects is used as a key in
a dict, for example a dn.

dn1 = DN(('cn', 'Bob'))
dn2 = DN(('cn', 'bob'))

dn1 == dn2 -- True

hash(dn1) == hash(dn2) -- False

d = {}

d[dn1] = x
d[dn2] = y

len(d) -- 2

The patch fixes the above by lower-casing the string representation of
the object prior to computing its hash.

The patch also corrects a spelling mistake and a bogus return value in
ldapupdate.py which happened to be discovered while researching this
bug.
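The dict behavior described in the commit message can be reproduced with a toy case-insensitive key class (illustrative only, not the real AVA/RDN/DN implementation):

```python
class CIName(object):
    """Toy case-insensitive name: equality and hashing both ignore case."""

    def __init__(self, s):
        self.s = s

    def __eq__(self, other):
        return isinstance(other, CIName) and self.s.lower() == other.s.lower()

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        # Hash the lower-cased form so equal-but-differently-cased
        # objects land in the same dict bucket (the invariant the
        # patch restores: a == b implies hash(a) == hash(b)).
        return hash(self.s.lower())


d = {}
d[CIName('cn=Bob')] = 'x'
d[CIName('cn=bob')] = 'y'   # same key: overwrites 'x' instead of adding
```

Here len(d) is 1; with the pre-patch behavior (hashing the case-preserving string) the two insertions would have produced two entries.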
---
 ipapython/dn.py | 43 +++--
 ipaserver/install/ldapupdate.py |  2 +-
 tests/test_ipapython/test_dn.py | 30 ++--
 3 files changed, 45 insertions(+), 30 deletions(-)

diff --git a/ipapython/dn.py b/ipapython/dn.py
index 6cce40e..834291f 100644
--- a/ipapython/dn.py
+++ b/ipapython/dn.py
@@ -385,7 +385,7 @@ if container_dn in dn:
 # (e.g. cmp()). The general rule is for objects supporting multiple
 # values first their lengths are compared, then if the lengths match
 # the respective components of each are pair-wise compared until one
-# is discovered to be  non-equal. The comparision is case insensitive.
+# is discovered to be  non-equal. The comparison is case insensitive.
 
 Cloning (Object Copy):
 
@@ -544,7 +544,7 @@ class AVA(object):
 
 AVA objects support equality testing and comparsion (e.g. cmp()). When they
 are compared the attr is compared first, if the 2 attr's are equal then the
-values are compared. The comparision is case insensitive (because attr's map
+values are compared. The comparison is case insensitive (because attr's map
 to numeric OID's and their values derive from from the 'name' atribute type
 (OID 2.5.4.41) whose EQUALITY MATCH RULE is caseIgnoreMatch.
 
@@ -648,8 +648,13 @@ class AVA(object):
 (key.__class__.__name__))
 
 def __hash__(self):
-# Hash is computed from AVA's string representation because it's immutable
-return hash(str(self))
+# Hash is computed from AVA's string representation because it's immutable.
+#
+# Because attrs & values are comparison case-insensitive the
+# hash value between two objects which compare as equal but
+# differ in case must yield the same hash value.
+
+return hash(str(self).lower())
 
 def __eq__(self, other):
 '''
@@ -676,7 +681,7 @@ class AVA(object):
 if not isinstance(other, AVA):
 return False
 
-# Perform comparision between objects of same type
+# Perform comparison between objects of same type
 return self._attr_unicode.lower() == other.attr.lower() and \
 self._value_unicode.lower() == other.value.lower()
 
@@ -684,7 +689,7 @@ class AVA(object):
 return not self.__eq__(other)
 
 def __cmp__(self, other):
-'comparision is case insensitive, see __eq__ doc for explanation'
+'comparison is case insensitive, see __eq__ doc for explanation'
 
 if not isinstance(other, AVA):
raise TypeError("expected AVA but got %s" % (other.__class__.__name__))
@@ -803,8 +808,8 @@ class RDN(object):
 the first AVA's properties, programmer beware! Recall the AVA's in the RDN
 are sorted according the to AVA collating semantics.
 
-RDN objects support equality testing and comparision. See AVA for the
-definition of the comparision method.
+RDN objects support equality testing and comparison. See AVA for the
+definition of the comparison method.
 
 RDN objects support concatenation and addition with other RDN's or AVA's
 
@@ -933,7 +938,12 @@ class RDN(object):
 
 def __hash__(self):
 # Hash is computed from RDN's string representation because it's immutable
-return hash(str(self))
+#
+# Because attrs & values are comparison case-insensitive the
+# hash value between two objects which compare as equal but
+# differ

[Freeipa-devel] [PATCH 78] Ticket #2979 - prevent last admin from being disabled

2012-08-16 Thread John Dennis


From c47109c63530e188db76986fdda48c76bf681d10 Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Thu, 16 Aug 2012 20:28:44 -0400
Subject: [PATCH 78] Ticket #2979 - prevent last admin from being disabled
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

We prevent the last member of the admin group from being deleted. The
same check needs to be performed when disabling a user.

Moved the code in user_del to a common subroutine and call it from
both user_del and user_disable. Note: unlike user_del, user_disable
does not have a 'pre' callback, so the check function is called
in user_disable's execute routine.
---
 ipalib/plugins/user.py | 20 ++--
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py
index 529699f..3cc667b 100644
--- a/ipalib/plugins/user.py
+++ b/ipalib/plugins/user.py
@@ -166,6 +166,17 @@ def normalize_principal(principal):
 return unicode('%s@%s' % (user, realm))
 
 
+def check_protected_member(user, protected_group_name=u'admins'):
+'''
+Ensure the last member of a protected group cannot be deleted or
+disabled by raising LastMemberError.
+'''
+
+result = api.Command.group_show(protected_group_name)
+if result['result'].get('member_user', []) == [user]:
+raise errors.LastMemberError(key=user, label=_(u'group'),
+container=protected_group_name)
+
 class user(LDAPObject):
 
 User object.
@@ -550,11 +561,7 @@ class user_del(LDAPDelete):
 
 def pre_callback(self, ldap, dn, *keys, **options):
 assert isinstance(dn, DN)
-protected_group_name = u'admins'
-result = api.Command.group_show(protected_group_name)
-if result['result'].get('member_user', []) == [keys[-1]]:
-raise errors.LastMemberError(key=keys[-1], label=_(u'group'),
-container=protected_group_name)
+check_protected_member(keys[-1])
 return dn
 
 api.register(user_del)
@@ -679,8 +686,9 @@ class user_disable(LDAPQuery):
 def execute(self, *keys, **options):
 ldap = self.obj.backend
 
-dn = self.obj.get_dn(*keys, **options)
+check_protected_member(keys[-1])
 
+dn = self.obj.get_dn(*keys, **options)
 ldap.deactivate_entry(dn)
 
 return dict(
-- 
1.7.11.2


[Freeipa-devel] [PATCH 77] Ticket #2584 - Installation fails when CN is set in, certificate subject base

2012-08-15 Thread John Dennis


From 32cf59ac8963982d2de59562f3f1570e67e92a3e Mon Sep 17 00:00:00 2001
From: John Dennis jden...@redhat.com
Date: Wed, 15 Aug 2012 21:33:15 -0400
Subject: [PATCH 77] Ticket #2584 - Installation fails when CN is set in
 certificate subject base
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

It is illegal to have more than one CN attribute in a certificate
subject. The subject command line arg actually inserts a dn
between a leading RDN with a CN attribute and a suffix. The final
subject must have only one CN attribute, therefore the subject command
line arg must not contain CN. The patch modifies the subject validation
to prohibit CN. It also improves the error messages to clearly indicate
which command line parameter caused the failure and why.

While fixing the above it was discovered that the logic used for subject
validation with an external CA was flawed. DN objects were not being
used when they should be (certificate subject and issuer fields are dn
syntax). That code was also fixed so that the comparisons between
subjects and issuers are performed with DN objects. While fixing this
it was noted that the object type relationship between IPA DN objects and
x509 DN objects is awkward; ticket 3003 was opened to address this.
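The prohibition can be sketched with a hypothetical standalone checker (naive comma split for illustration only; the real code parses the subject with the DN class, which handles escaping and multi-valued RDNs):

```python
# Subset of the allowed attribute types, for illustration only.
VALID_SUBJECT_ATTRS = {'st', 'o', 'ou', 'c', 'l', 'dc'}

def validate_subject_base(subject):
    """Reject a subject base that contains CN or an unknown attribute."""
    for rdn in subject.split(','):
        attr = rdn.split('=', 1)[0].strip().lower()
        if attr == 'cn':
            # The installer itself prepends CN=Certificate Authority,
            # so a CN here would produce an illegal duplicate CN.
            raise ValueError('CN is not allowed in the subject base')
        if attr not in VALID_SUBJECT_ATTRS:
            raise ValueError('invalid attribute: %s' % attr)
    return subject
```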
---
 install/tools/ipa-server-install | 27 +--
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/install/tools/ipa-server-install b/install/tools/ipa-server-install
index d9682bb..16a9e33 100755
--- a/install/tools/ipa-server-install
+++ b/install/tools/ipa-server-install
@@ -70,7 +70,7 @@ pw_name = None
 uninstalling = False
 installation_cleanup = True
 
-VALID_SUBJECT_ATTRS = ['cn', 'st', 'o', 'ou', 'dnqualifier', 'c',
+VALID_SUBJECT_ATTRS = ['st', 'o', 'ou', 'dnqualifier', 'c',
'serialnumber', 'l', 'title', 'sn', 'givenname',
'initials', 'generationqualifier', 'dc', 'mail',
'uid', 'postaladdress', 'postalcode', 'postofficebox',
@@ -82,7 +82,6 @@ def subject_callback(option, opt_str, value, parser):
 
 Make sure the certificate subject base is a valid DN
 
-name = opt_str.replace('--','')
 v = unicode(value, 'utf-8')
 if any(ord(c) < 0x20 for c in v):
 raise OptionValueError("Subject base must not contain control characters")
@@ -92,10 +91,10 @@ def subject_callback(option, opt_str, value, parser):
 dn = DN(v)
 for rdn in dn:
 if rdn.attr.lower() not in VALID_SUBJECT_ATTRS:
-raise OptionValueError('invalid attribute: %s' % rdn.attr)
+raise OptionValueError('%s=%s has invalid attribute: %s' % (opt_str, value, rdn.attr))
 except ValueError, e:
-raise OptionValueError('Invalid subject base format: %s' % str(e))
-parser.values.subject = str(dn) # may as well normalize it
+raise OptionValueError('%s=%s has invalid subject base format: %s' % (opt_str, value, e))
+parser.values.subject = dn
 
 def validate_dm_password(password):
 if len(password) < 8:
@@ -638,9 +637,9 @@ def main():
 print "'%s' is not a valid PEM-encoded certificate." % options.external_cert_file
 sys.exit(1)
 
-certsubject = unicode(extcert.subject)
-wantsubject = unicode(DN(('CN','Certificate Authority'), options.subject))
-if certsubject.lower() != wantsubject.lower():
+certsubject = DN(str(extcert.subject))
+wantsubject = DN(('CN','Certificate Authority'), options.subject)
+if certsubject != wantsubject:
print "Subject of the PKCS#10 certificate is not correct (got %s, expected %s)." % (certsubject, wantsubject)
 sys.exit(1)
 
@@ -653,19 +652,19 @@ def main():
 print "'%s' is not a valid PEM-encoded certificate chain." % options.external_ca_file
 sys.exit(1)
 
-certdict = dict((unicode(cert.subject).lower(), cert) for cert in extchain)
-certissuer = unicode(extcert.issuer)
-if certissuer.lower() not in certdict:
+certdict = dict((DN(str(cert.subject)), cert) for cert in extchain)
+certissuer = DN(str(extcert.issuer))
+if certissuer not in certdict:
 print "The PKCS#10 certificate is not signed by the external CA (unknown issuer %s)." % certissuer
 sys.exit(1)
 
 cert = extcert
 while cert.issuer != cert.subject:
-certissuer = unicode(cert.issuer)
-if certissuer.lower() not in certdict:
+certissuer = DN(str(cert.issuer))
+if certissuer not in certdict:
 print "The external CA chain is incomplete (%s is missing from the chain)." % certissuer
 sys.exit(1)
-cert = certdict[certissuer.lower()]
+cert = certdict[certissuer]
 
 print

Re: [Freeipa-devel] DN patch and documentation

2012-08-10 Thread John Dennis

On 08/10/2012 06:57 AM, Petr Vobornik wrote:


I found another issue while adding external member to a group:



Thank you, the fix was simple: get_trusted_domains() was converting a dn to 
unicode when calling find_entries(). Not sure why, since we've never used 
unicode for dn's (I had searched the code looking for str conversions). 
The fix is in the fedorapeople repo and in the final patch I'm about to send out.


I'll also do another search looking for unicode conversions.




Re: [Freeipa-devel] DN patch and documentation

2012-08-09 Thread John Dennis

On 08/09/2012 03:34 PM, Rich Megginson wrote:

On 08/09/2012 01:31 PM, John Dennis wrote:

On 08/09/2012 02:08 PM, Rob Crittenden wrote:



What is the significance of L here:

'2.16.840.1.113730.3.8.L.8'  : DN,  # ipaTemplateRef


These came from:

install/share/60policyv2.ldif

I didn't notice the .L in the OID which isn't legal (correct?). So
beats me, I can't explain why the OID is specified that way.


hmm - did you see this comment at the top of the file?

# Policy related schema.
# This file should not be loaded.
# Remove this comment and assign right OIDs when time comes to do something
# about this functionality.


Ah, good catch Rich! No, I didn't see that comment. What I did was 
grep for all dn syntax attributes in all our ldif files, so I didn't 
see the comment.


I'll remove these as they are obviously incorrect. Thank you both Rob 
and Rich for the sharp eyes.




Re: [Freeipa-devel] DN patch and documentation

2012-08-08 Thread John Dennis

On 08/08/2012 09:37 AM, Martin Kosek wrote:

I started reviewing the latest state of your DN effort in your git repo. It is
in much better shape than before, but I still found some issues in utilities we
use. I am sending what I have found so far.


Thanks!



1) ipa-managed-entries is broken
# ipa-managed-entries -l
Available Managed Entry Definitions:
[u'UPG Definition']
[u'NGP Definition']

# ipa-managed-entries -e 'UPG Definition' status
Unexpected error
AttributeError: 'LDAPEntry' object has no attribute 'originfilter'


O.K. will investigate


2) ipa-replica-prepare is broken when --ip-address is passed
# ipa-replica-prepare vm-055.idm.lab.bos.redhat.com --ip-address=10.16.78.55
Directory Manager (existing master) password:

Preparing replica for vm-055.idm.lab.bos.redhat.com from
vm-086.idm.lab.bos.redhat.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into
/var/lib/ipa/replica-info-vm-055.idm.lab.bos.redhat.com.gpg
Adding DNS records for vm-055.idm.lab.bos.redhat.com
preparation of replica failed: invalid 'ip_address': Gettext('invalid IP
address format', domain='ipa', localedir=None)
invalid 'ip_address': Gettext('invalid IP address format', domain='ipa',
localedir=None)
   File "/sbin/ipa-replica-prepare", line 464, in <module>
 main()

   File "/sbin/ipa-replica-prepare", line 452, in main
 add_zone(domain)

   File "/usr/lib/python2.7/site-packages/ipaserver/install/bindinstance.py", line 302, in add_zone
 idnsallowtransfer=u'none',)

   File "/usr/lib/python2.7/site-packages/ipalib/frontend.py", line 433, in __call__
 self.validate(**params)

   File "/usr/lib/python2.7/site-packages/ipalib/frontend.py", line 705, in validate
 param.validate(value, self.env.context, supplied=param.name in kw)

   File "/usr/lib/python2.7/site-packages/ipalib/parameters.py", line 879, in validate
 self._validate_scalar(value)

   File "/usr/lib/python2.7/site-packages/ipalib/parameters.py", line 900, in _validate_scalar
 rule=rule,


Yes, I saw the same thing, but I don't think it has anything to do 
with dn's. I even asked about this on IRC yesterday. Are you sure this 
isn't broken on master as well? When I looked at the code it just looked 
wrong, and I didn't touch anything in this area. Can someone do a quick 
check on master and see if the problem exists there too?



3) ipa-replica-manage list is broken:
# ipa-replica-manage list
Failed to get data from 'vm-086.idm.lab.bos.redhat.com':
base=cn=replicas,cn=ipa,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com,
scope=1, filterstr=(objectClass=*)

I think the problem here is that the following code in ipa-replica-manage
returns an exception when no entry in cn=replicas is found (which is ok):

 dn = DN(('cn', 'replicas'), ('cn', 'ipa'), ('cn', 'etc'),
ipautil.realm_to_suffix(realm))
 entries = conn.getList(dn, ldap.SCOPE_ONELEVEL)


O.K. thanks, will investigate, seems like a simple fix.



4) IPA compliance is broken

# ipa-compliance
IPA compliance checking failed:

This is the traceback (some DN was left in string format):
Traceback (most recent call last):
   File "/sbin/ipa-compliance", line 198, in <module>
 main()
   File "/sbin/ipa-compliance", line 179, in main
 check_compliance(tmpdir, options.debug)
   File "/sbin/ipa-compliance", line 121, in check_compliance
 size_limit = -1)
   File "/usr/lib/python2.7/site-packages/ipaserver/plugins/ldap2.py", line 1087, in find_entries
 assert isinstance(base_dn, DN)
AssertionError


O.K. will investigate, seems like a simple fix.



Btw. Petr Vobornik is testing Web UI, so far so good


Great.




Re: [Freeipa-devel] Data source-agnostic parameters

2012-08-06 Thread John Dennis

On 08/06/2012 10:10 AM, Alexander Bokovoy wrote:

On Mon, 06 Aug 2012, Jan Cholasta wrote:

Dne 6.8.2012 15:20, Simo Sorce napsal(a):

On Mon, 2012-08-06 at 10:55 +0200, Jan Cholasta wrote:

Hi,

while thinking about https://fedorahosted.org/freeipa/ticket/2933, I
had an idea how to make loading data from files available for all
parameters:

I think we can use URI-like strings in parameter values that the CLI
would interpret and extract the wanted information from them (similar to
what openssl does in the -pass command line option, see PASS PHRASE
ARGUMENTS in openssl(1)).

So, instead of adding a new parameter as a file-accepting alternative to
any existing parameter (i.e. what is suggested in the ticket), the user
would be able to specify the file in a URI-like string:

(use new parameter --sshpubkeyfile)
$ ipa user-mod --sshpubkey=ssh-rsa  ...
$ ipa user-mod --sshpubkeyfile=.ssh/id_rsa.pub

vs.

(use file URI-like string)
$ ipa user-mod --sshpubkey=ssh-rsa  ...
$ ipa user-mod --sshpubkey=file:.ssh/id_rsa.pub

and the CLI would take care of reading the file and using its contents
as the parameter value.

This could be extended with additional URI(-like) schemes:

- data:<data> - use <data> as the value (useful for escaping values
that look like URIs, but you don't want them to be treated as such)
- base64:<data> - use the value of base64 decoded <data> (useful for
--delattr on ugly raw binary values)
- fd:<num> - read value from file descriptor <num>
- env:<var> - read value from environment variable <var>
- ask: - always prompt interactively for the value
- default: - use default value, never prompt interactively
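A minimal dispatcher for these prefixes might look like the following (a hypothetical sketch in modern Python, not FreeIPA code; the fd: and ask: schemes are omitted for brevity):

```python
import base64
import os

def resolve_param(value):
    """Resolve a CLI parameter value according to its URI-like prefix."""
    if value.startswith('data:'):
        # Literal value; escapes strings that merely look like URIs.
        return value[len('data:'):]
    if value.startswith('base64:'):
        return base64.b64decode(value[len('base64:'):]).decode('utf-8')
    if value.startswith('file:'):
        with open(value[len('file:'):]) as f:
            return f.read()
    if value.startswith('env:'):
        return os.environ[value[len('env:'):]]
    return value  # no recognized prefix: use the value as-is
```

For example, resolve_param('data:actual://uri/string') returns the literal 'actual://uri/string', which is the escaping behavior the proposal relies on.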

Thoughts?


How do you handle values that are actually URI strings, how do you tell
the command to not interpret them ?

Simo.



You can escape them like this: --someparam data:actual://uri/string

As for backward compatibility, this change has the potential to break
things (user scripts which use the CLI, etc.), but AFAIK there is no
parameter in IPA which expects URI string values specifically, so the
impact would be minimal IMHO.


I don't think you need to invent anything here. RFC 2397 has been available
for a long time and already provides an effective way to represent any data
in a URI.

http://tools.ietf.org/html/rfc2397


I think it would be much better to follow RFCs, including the one 
Alexander cited above.


Rather than

$ ipa user-mod --sshpubkey=file:.ssh/id_rsa.pub

shouldn't it be:

$ ipa user-mod --sshpubkey=file:///.ssh/id_rsa.pub


I'm not sure we need or want to pass file descriptors and I wonder about 
the security implications of reading from environment variables. So I'm 
not sure about supporting the fd and env input.


I don't think we need 'default' either; if the parameter does not begin 
with URI syntax then it's processed just as before.


I can see a value in 'ask', but I'd rather see this expressed as a URI 
scheme.
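For comparison, RFC 2397 data URIs are straightforward to consume; a minimal decoder sketch (it ignores the optional mediatype, which a full implementation would return alongside the bytes):

```python
import base64
from urllib.parse import unquote_to_bytes

def decode_data_uri(uri):
    """Decode 'data:[<mediatype>][;base64],<data>' per RFC 2397."""
    if not uri.startswith('data:'):
        raise ValueError('not a data URI')
    header, sep, payload = uri[len('data:'):].partition(',')
    if not sep:
        raise ValueError('malformed data URI: missing comma')
    if header.endswith(';base64'):
        return base64.b64decode(payload)
    # Non-base64 data is percent-encoded per RFC 2396.
    return unquote_to_bytes(payload)
```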




Re: [Freeipa-devel] Data source-agnostic parameters

2012-08-06 Thread John Dennis

On 08/06/2012 10:27 AM, Jan Cholasta wrote:

Dne 6.8.2012 16:10, Alexander Bokovoy napsal(a):

On Mon, 06 Aug 2012, Jan Cholasta wrote:

Dne 6.8.2012 15:20, Simo Sorce napsal(a):

On Mon, 2012-08-06 at 10:55 +0200, Jan Cholasta wrote:

Hi,

while thinking about https://fedorahosted.org/freeipa/ticket/2933, I
had an idea how to make loading data from files available for all
parameters:

I think we can use URI-like strings in parameter values that the CLI
would interpret and extract the wanted information from them
(similar to
what openssl does in the -pass command line option, see PASS PHRASE
ARGUMENTS in openssl(1)).

So, instead of adding a new parameter as a file-accepting
alternative to
any existing parameter (i.e. what is suggested in the ticket), the user
would be able to specify the file in a URI-like string:

(use new parameter --sshpubkeyfile)
$ ipa user-mod --sshpubkey=ssh-rsa  ...
$ ipa user-mod --sshpubkeyfile=.ssh/id_rsa.pub

vs.

(use file URI-like string)
$ ipa user-mod --sshpubkey=ssh-rsa  ...
$ ipa user-mod --sshpubkey=file:.ssh/id_rsa.pub

and the CLI would take care of reading the file and using its contents
as the parameter value.

This could be extended with additional URI(-like) schemes:

- data:<data> - use <data> as the value (useful for escaping values
that look like URIs, but you don't want them to be treated as such)
- base64:<data> - use the value of base64 decoded <data> (useful for
--delattr on ugly raw binary values)
- fd:<num> - read value from file descriptor <num>
- env:<var> - read value from environment variable <var>
- ask: - always prompt interactively for the value
- default: - use default value, never prompt interactively

Thoughts?


How do you handle values that are actually URI strings, how do you tell
the command to not interpret them ?

Simo.



You can escape them like this: --someparam data:actual://uri/string

As for backward compatibility, this change has the potential to break
things (user scripts which use the CLI, etc.), but AFAIK there is no
parameter in IPA which expects URI string values specifically, so the
impact would be minimal IMHO.


I don't think you need to invent anything here. RFC 2397 has been available
for a long time and already provides an effective way to represent any data
in a URI.

http://tools.ietf.org/html/rfc2397



I have considered this, but given that these URI-like strings would be
mainly used directly by users on the command line, I think that
base64:<stuff> is more user friendly than data:;base64,<stuff>.


Hmm... user friendly to me means not having to remember odd proprietary 
exceptions. It's easier to remember what's a standard, because in theory 
it will always be the same no matter what piece of software I'm using.





Re: [Freeipa-devel] [PATCH] 0073 Update translations

2012-07-31 Thread John Dennis

On 07/26/2012 10:57 AM, Petr Viktorin wrote:

On 07/26/2012 03:49 PM, Petr Viktorin wrote:

On 07/26/2012 10:17 AM, Petr Viktorin wrote:

Update the pot to match current source, and pull translations from
Transifex.


Warning: when this patch is pushed, the source strings on Transifex will
update. The old versions will be lost from the site.


The patch is quite large (5MB), so I haven't attached it here (should
I?). Please download it from
https://github.com/encukou/freeipa/commit/fd638306ada102204494ec2e7f1b8d2bb7f6f8b1.patch





NACK. I forgot to validate the messages, so some messages might cause
exceptions when used.




Validation found a few bad messages in Spanish, and one in Tajik. I've
corrected obvious errors; in two cases I wasn't sure how to fix so I've
removed the translation.
Attaching my fixes for reference.

The new patch is here:
https://github.com/encukou/freeipa/commit/4eb5b21fe3bcfec33e07c45050d5a56f4328206c.patch



We should also change these on Transifex.
I assume maintainers can edit all translations. John, could you give me
maintainer rights? My username there is EnCuKou.



ACK, the patch applied cleanly and all the validations passed. Please 
make sure any po files you manually edited have the same edits applied 
on TX; otherwise the next time we pull we'll reintroduce the same problems.


FYI, the message stats after applying the patch are:

ipa.pot has 1872 messages. There are 13 po translation files.
bn_IN:  12/1872   0.6%  1860 untranslated, 0 fuzzy
de: 32/1872   1.7%  1840 untranslated, 0 fuzzy
es:891/1872  47.6%   981 untranslated, 0 fuzzy
fr:   1635/1872  87.3%   237 untranslated, 0 fuzzy
id: 86/1872   4.6%  1786 untranslated, 0 fuzzy
ja: 34/1872   1.8%  1838 untranslated, 0 fuzzy
kn:236/1872  12.6%  1636 untranslated, 0 fuzzy
nl:  1/1872   0.1%  1871 untranslated, 0 fuzzy
pl:452/1872  24.1%  1420 untranslated, 0 fuzzy
ru:368/1872  19.7%  1504 untranslated, 0 fuzzy
tg:105/1872   5.6%  1767 untranslated, 0 fuzzy
uk:   1635/1872  87.3%   237 untranslated, 0 fuzzy
zh_CN: 130/1872   6.9%  1742 untranslated, 0 fuzzy



