Re: [openstack-dev] [keystone] Domain-namespaced user attributes in SAML assertions from Keystone IdPs

2018-09-24 Thread John Dennis
ame}"
   }
 }
   }
 ], "remote":
 [
   {
 "type": "openstack_groups"
   }
 ]
   }
]
(paste: http://paste.openstack.org/show/730622/ )

Alternatively, we could forget about the namespacing problem and simply say we 
only pass group names in the assertion, and if you have ambiguous group names 
you're on your own. We could also try to support both, e.g. have an 
openstack_groups mean a list of group names for simpler use cases, and 
openstack_groups_unique mean the list of encoded group+domain strings for 
advanced use cases.

Finally, whatever we decide for groups we should also apply to openstack_roles 
which currently only supports global roles and not domain-specific roles.

(It's also worth noting, for clarity, that the samlize function does handle 
namespaced projects, but this is because it's retrieving the project from the 
token and therefore there is only ever one project and one project domain so 
there is no ambiguity.)



A few thoughts to help focus the discussion:

* Namespacing is critical; no design should be permitted which allows 
for ambiguous names. Ambiguous names are a security issue and can be 
exploited by an attacker. The SAML designers recognized the importance 
of disambiguating names. In SAML, names are conveyed inside a 
NameIdentifier element which (optionally) includes "name qualifier" 
attributes, which in SAML lingo are namespace names.


* SAML does not define the format of an attribute value. You can use 
anything you want as long as it can be expressed in valid XML and the 
cooperating parties know how to interpret the XML content. But 
herein lies the problem. Very few SAML implementations know how to 
consume an attribute value other than a string. In the real world, 
despite what the SAML spec says is permitted, the practical constraint 
is that attribute values are strings.


* I haven't looked at the pysaml implementation but I'd be surprised if 
it treated attribute values as anything other than a string. In theory 
it could take any Python object (or JSON) and serialize it into XML, 
but you would still be stuck with the receiver being unable to parse 
the attribute value (see the above point).


* You can encode complex data in an attribute value while only using a 
simple string. The only requirement is that the relying party knows how 
to interpret the string value. Note, this is distinctly different from 
using non-string attribute values because of who is responsible for 
parsing the value. If you use a non-string attribute value the SAML 
library needs to know how to parse it, and none or very few will know 
how to process that element. But if it's a string value the SAML 
library will happily pass that string back up to the application, which 
can then interpret it. The easiest way to embed complex data in a 
string is with JSON; we do it all the time, all over the place in 
OpenStack. [1][2]


So my suggestion would be to give the attribute a meaningful name, 
define a JSON schema for the data, and then let the upper layers decode 
the JSON and operate on it. This is no different from any other SAML 
attribute passed as a string: the receiver MUST know how to interpret 
the string value.


[1] We already pass complex data in a SAML attribute string value: we 
permit a comma-separated list of group names to appear in the 'groups' 
mapping rule (although I don't think this feature is documented in our 
mapping rules documentation). The receiver (our mapping engine) has 
hard-coded logic to look for a list of names.


[2] We might want to prepend a format specifier to a string containing 
complex data, e.g. "JSON:{json object}". Our parser could then look for 
a leading format tag and, if it finds one, strip it off and pass the 
rest of the string into the proper parser.
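
To make [1] and [2] concrete, here is a minimal Python sketch of what
the encode/decode round trip could look like. The attribute layout and
the "JSON:" prefix are illustrative only; nothing here is an existing
Keystone API.

import json

FORMAT_TAG = "JSON:"  # hypothetical format specifier, per [2]

def encode_groups(groups):
    """Serialize a list of {name, domain} dicts into a plain string
    suitable for use as a SAML attribute value."""
    return FORMAT_TAG + json.dumps(groups)

def decode_attribute(value):
    """If the string carries a format tag, strip it and parse the rest;
    otherwise fall back to the legacy comma-separated group names."""
    if value.startswith(FORMAT_TAG):
        return json.loads(value[len(FORMAT_TAG):])
    return [{"name": name} for name in value.split(",")]

# Round trip: two groups with the same name in different domains
# remain unambiguous.
value = encode_groups([{"name": "admins", "domain": "default"},
                       {"name": "admins", "domain": "customer-a"}])
print(decode_attribute(value))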


--
John Dennis



Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread John Dennis

On 08/03/2018 12:58 PM, Ben Nemec wrote:

Hi,

Zane has been doing some good work in oslo.service recently and I would 
like to add him to the core team.  I know he's got a lot on his plate 
already, but he has taken the time to propose and review patches in 
oslo.service and has demonstrated an understanding of the code.


Please respond with +1 or any concerns you may have.  Thanks.


+1


--
John Dennis



Re: [openstack-dev] [oslo] proposing Moisés Guimarães for oslo.config core

2018-08-01 Thread John Dennis

On 08/01/2018 09:27 AM, Doug Hellmann wrote:

Moisés Guimarães (moguimar) did quite a bit of work on oslo.config
during the Rocky cycle to add driver support. Based on that work,
and a discussion we have had since then about general cleanup needed
in oslo.config, I think he would make a good addition to the
oslo.config review team.

Please indicate your approval or concerns with +1/-1.


+1


--
John Dennis



Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-31 Thread John Dennis

On 05/30/2018 08:23 PM, Jeremy Stanley wrote:

I think this is orthogonal to the thread. The idea is that we should
avoid nettling contributors over minor imperfections in their
submissions (grammatical, spelling or typographical errors in code
comments and documentation, mild inefficiencies in implementations,
et cetera). Clearly we shouldn't merge broken features, changes
which fail tests/linters, and so on. For me the rule of thumb is,
"will the software be better or worse if this is merged?" It's not
about perfection or imperfection, it's about incremental
improvement. If a proposed change is an improvement, that's enough.
If it's not perfect... well, that's just opportunity for more
improvement later.


I appreciate the sentiment concerning accepting any improvement, yet on 
the other hand waiting for improvements to the patch to occur later is 
folly; it won't happen.


Those of us familiar with working with large bodies of code from 
multiple authors spanning an extended time period will tell you it's 
very confusing when it's obvious most of the code follows certain 
conventions but there are odd exceptions (often without comments). This 
inevitably leads to investing a lot of time trying to understand why 
the exception exists, because "clearly it's there for a reason and I'm 
just missing the rationale." By that point the reason for the 
inconsistency is lost.


At the end of the day it is more important to keep the code base clean 
and consistent for those that follow than it is to coddle in the near term.


--
John Dennis



Re: [openstack-dev] [keystone] Could keystone to keystone federation be deployed on Centos?

2018-04-04 Thread John Dennis

On 04/04/2018 02:52 AM, 何健乐 wrote:

Hi all,
Could keystone to keystone federation be deployed on CentOS? I have 
noticed all the documentation describes deployment on Ubuntu. If it 
can, are there any documents about deploying k2k on CentOS?


Yes, k2k should work on CentOS; there is nothing OS-specific in the 
implementation. There is OpenStack documentation on setting up 
federation and k2k. If there are deficiencies in the docs it would be 
helpful to point them out so we can remedy the situation.


If you need more information on setting up mod_auth_mellon you might 
want to check out the Mellon User Guide I recently wrote and 
contributed to upstream Mellon (it's not part of the OpenStack docs as 
it's more of a SAML SP setup guide, not an OpenStack federation guide): 
https://github.com/UNINETT/mod_auth_mellon/blob/master/doc/user_guide/mellon_user_guide.adoc



--
John



Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread John Dennis

On 10/02/2017 10:13 PM, Jamie Lennox wrote:

Hi All,

I'm really sad to announce that I'll be leaving the OpenStack community 
(at least for a while), I've accepted a new position unrelated to 
OpenStack that'll begin in a few weeks, and am going to be mostly on 
holiday until then.


It's a shame to see you go, Jamie. Aside from the OpenStack community 
you were also a co-worker. I have high regard for your technical skills 
as well as for what a pleasure you are to work with. I wish you all the 
best in your future endeavors and, based on past experience, I expect 
you'll succeed at whatever it is.


--
John



[openstack-dev] [keystone] Additional documentation for mod_auth_mellon

2017-09-06 Thread John Dennis
The existing documentation on setting up mod_auth_mellon 
(https://docs.openstack.org/keystone/latest/advanced-topics/federation/mellon.html) 
is sparse.


Our experience with using mod_auth_mellon, either in the context of 
OpenStack federation or simply as a SAML SP working in conjunction with 
an IdP, is that the process is often fraught with problems of the 
following nature:


* Lack of understanding of SAML concepts and terminology
* Inability to collect relevant data when problems occur
* Inability to diagnose the root cause of problems
* Inability to read and comprehend the content of SAML messages
* Improper use of Mellon configuration directives
* Lack of understanding with regard to SAML metadata: its importance,
  its generation, its consumption, its distribution and its
  synchronization (e.g. consistency).
* Inability to understand how SAML authentication information
  is communicated to web apps (e.g. Keystone and its mapping engine).
* Configuration problems related to proxies, load balancers,
  and other HA issues.
* Improper use of TLS or TLS configuration issues.

I tried to collect every piece of relevant information related to 
deploying mod_auth_mellon such that you get all you need to know but 
nothing you don't. I tried to organize the material so you don't need 
to read it in a linear fashion; you can jump into a topic, and there 
are enough links inside that you can easily navigate to related 
material. I also tried to make the document vendor neutral, with 
callouts for specific operating system concerns.


We are proposing this document be included with upstream Mellon as part 
of its documentation. Hopefully this will be a living document with 
others contributing. The source format is AsciiDoc.


We haven't decided on a final place for the document to live. Red Hat 
will maintain a version of the document in its documentation set. It's 
not clear yet how upstream will offer the document, but they are 
appreciative of the contribution; it will almost certainly be 
incorporated into their GitHub repository, though I'm not sure how a 
"rendered" version would be hosted.


For now you can view the initial version of the document on my personal 
page.


https://jdennis.fedorapeople.org/doc/mellon-doc/mellon.html

Comments, corrections, additions, etc. are welcome and encouraged.
--
John



Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-08 Thread John Dennis

On 08/07/2016 11:16 PM, Adam Young wrote:

On 08/06/2016 08:44 AM, John Dennis wrote:

On 08/05/2016 06:06 PM, Adam Young wrote:

Ah...just noticed the redirect is to :5000, not port :13000 which is
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse">
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>




My guess is HA proxy is not passing on the proper, and the
mod_auth_mellon does not know to rewrite it from 5000 to 13000


You can't change the contents of a SAML AuthnRequest; often they are
signed. Also, the AssertionConsumerServiceURLs and other URLs in
SAML messages are validated to assure they match the metadata
associated with the EntityID (issuer). The addresses used inbound and
outbound have to be correctly handled by the proxy configuration
without modifying the content of the message being passed on the
transport.


Got a little further by tweaking HA proxy settings.  Added in

  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1

which tells HA proxy to translate Location headers (used in redirects)
from http to https.


As of now, it looks good up until the response comes back from the IdP
and mod_mellon rejects it.  I think this is due to Mellon issuing a
request for http://<host>:<port> but it being translated through the
proxy as https://<host>:<port>.


mod_auth_mellon is failing the following check in auth_mellon_handler.c


  url = am_reconstruct_url(r);

  ...

  if (response->parent.Destination) {
      if (strcmp(response->parent.Destination, url)) {
          ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                        "Invalid Destination on Response. Should be: %s",
                        url);
          lasso_login_destroy(login);
          return HTTP_BAD_REQUEST;
      }
  }

It does not spit out the parent.Destination value, but considering I am
seeing http and not https in the error message, I assume that at least
the protocol does not match.  Full error message at the bottom.

Assuming the problem is just that the URL is http and not https,   I
have an approach that should work.  I need to test it out, but want to
record it here, and also get feedback:

I can clone the current 10-keystone_wsgi_main.conf which listens for
straight http on port 5000.  If I make a file
11-keystone_wsgi_main.conf  that listens on port 13000 (not on the
external VIP)  but that enables SSL, I should be able to make HA proxy
talk to that port and re-encrypt traffic, maintaining the 'https://'
protocol.


However, I am not certain that Destination means the SP URL.  It seems
like it should mean the IdP.  Further on in auth_mellon_handler.c

  destination_url = lasso_provider_get_metadata_one(
      provider, "SingleSignOnService HTTP-Redirect");
  if (destination_url == NULL) {
      /* HTTP-Redirect unsupported - try HTTP-POST. */
      http_method = LASSO_HTTP_METHOD_POST;
      destination_url = lasso_provider_get_metadata_one(
          provider, "SingleSignOnService HTTP-POST");
  }

Looking in the metadata, it seems that this value should be:

<md:SingleSignOnService
    Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
    Location="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"/>

So maybe something has rewritten the value used as the URL?


Here is the full error message


Invalid Destination on Response. Should be:
http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse,
referer:
https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml?SAMLRequest=nZJba%2BMwEEb%2FitG7I%2BXi1Igk4OYCge5S0m4f%2BlKEM2lFLcmrGWc3%2F35HDu22D22hIDCMZ%2FTpHGmGxjWtrjp68jv43QFS9tc1HnX%2FYy666HUwaFF74wA11fqm%2BnGlRwOl2xgo1KERb0Y%2BnzCIEMkGL7Ltai4e1LoYq%2FFoXapJWU2GhSouN5vhelpNyqIcX2xEdgcRuX8ueJyHEDvYeiTjiUtqOM1VmavprRppXkVxL7IVM1hvqJ96ImpRS2n34MnSaWBOofOP%2BR6aJqfhhVID4n5pWICMYBqHMrSQEupn%2BQIoE5nIlsEjpODPEOtzk667GPmbW9c2trYksk2INfSm5%2BJgGoTEc81K7BFeK9WLoRTWOYg3EI%2B2hl%2B7q%2F80ryf8AEcXSil5HEvH9eBlG5B2gG06mljMEo3uVcbFd7d0QGZvyMzk291m5%2Bf0k61sV9eBwU8J25kvpKWK3eeHvlVTNB4ty2MdHPZnyRdDrIhiB0IuzpHvH%2B3iHw%3D%3D&RelayState=http%3A%2F%2Fopenstack.ayoung-dell-t1700.test%3A5000%2Fv3%2Fauth%2FOS-FEDERATION%2Fwebsso%2Fsaml2%3Forigin%3Dhttp%3A%2F%2Fopenstack.ayoung-dell-t1700.test%2Fdashboard%2Fauth%2Fwebsso%2F&SigAlg=http%3A%2F%2Fwww.w3.org%2F2000%2F09%2Fxmldsig%23rsa-sha1&Signature=oJzAwE7ma3m0gZtO%2FvPQKCnk18u4OsjKcRQ3wiDu7txUGiPr4Cc9XIzKIGwzSGPSaWi8j1qbN76XwdNICOk!

HI5RsTdeS2Yeufw5Q5Ahol5cJHGEQOKa84iMzxkW9OtWgoYZnnXH3n2SCZkhLebabvJ72wfxskZ9iJ9JlVog

Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-06 Thread John Dennis

On 08/05/2016 06:06 PM, Adam Young wrote:

Ah...just noticed the redirect is to :5000, not port :13000 which is
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse">
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>




My guess is HA proxy is not passing on the proper, and the
mod_auth_mellon does not know to rewrite it from 5000 to 13000


You can't change the contents of a SAML AuthnRequest; often they are 
signed. Also, the AssertionConsumerServiceURLs and other URLs in SAML 
messages are validated to assure they match the metadata associated with 
the EntityID (issuer). The addresses used inbound and outbound have to be 
correctly handled by the proxy configuration without modifying the 
content of the message being passed on the transport.



--
John



Re: [openstack-dev] [Tripleo] X509 Management

2016-06-21 Thread John Dennis

On 06/21/2016 10:55 AM, Ian Cordasco wrote:

-Original Message-
From: Adam Young 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: June 21, 2016 at 09:40:39
To: OpenStack Development Mailing List 
Subject:  [openstack-dev] [Tripleo] X509 Management


When deploying the overcloud with TLS, the current "no additional
technology" approach is to use openssl and self-signed certificates. 
While this works for a proof of concept, it does not make sense if the 
users need to access the resources from remote systems.

It seems to me that the undercloud, as the system of record for
deploying the overcloud, should be responsible for centralizing the
signing of certificates.

When deploying a service, the puppet module should trigger a getcert 
call, which registers the cert with Certmonger. Certmonger is 
responsible for making sure the CSR gets to the signing authority, and 
for fetching the cert.

Certmonger works via helper apps. While there is currently a "self
signed" helper, this does not do much if two or more systems need to
have the same CA sign their certs.

It would be fairly simple to write a certmonger helper program that
sends a CSR from a controller or compute node to the undercloud, has the
Heat instance on the undercloud validate the request, and then pass it
on to the signing application.

I'm not really too clear on how callbacks are done from the
os-collect-config processes to Heat, but I am guessing it is some form
of Rest API that could be reused for this work flow?


I would see this as the lowest level of deployment. We can make use of
Anchor or Dogtag helper apps already. This might also prove a decent
middleground for people that need an automated approach to tie in with a
third party CA, where they need some confirmation from the deployment
process that the data in the CSR is valid and should be signed.


I'm not familiar with TripleO or its use of puppet, but I would
strongly advocate for Anchor (or Dogtag) to be the recommended
solution. OpenStack Ansible has found it a little bit of an annoyance
to generate and distribute self-signed certificates.


Ah, but the idea is that certmonger is a front end to whatever CA you 
choose to use; it provides a consistent interface to a range of CAs as 
well as providing functionality not present in most CAs, for instance 
the ability to detect when certs need renewal. So the idea would be 
certmonger+Dogtag, certmonger+Anchor, or certmonger+XXX.
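
To illustrate, a hedged sketch of the node-side interaction; the CA
helper name, paths and subject are placeholders, not a prescription
for TripleO:

import subprocess

# Hedged sketch: ask certmonger to obtain and then track a certificate.
# Which CA helper ("-c") is available depends on how certmonger is
# configured on the host; the paths and subject below are examples.
subprocess.check_call([
    "getcert", "request",
    "-c", "dogtag-ipa-renew-agent",              # CA helper for the CSR
    "-k", "/etc/pki/tls/private/overcloud.key",  # where the key lands
    "-f", "/etc/pki/tls/certs/overcloud.crt",    # where the cert lands
    "-N", "CN=controller-0.example.com",         # requested subject
])

# certmonger keeps watching the cert and re-requests it before expiry;
# "getcert list" shows the tracking status.
subprocess.check_call(["getcert", "list"])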

--
John



Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-04-21 Thread John Dennis

On 04/18/2016 12:34 PM, Martin Millnert wrote:

(** ECP is a new feature, not supported by all IdP's, that at (second)
best requires reconfiguration of core authentication services at each
customer, and at worst requires customers to change IdP software
completely. This is a varying degree of showstopper for various
customers.)


The majority of the work to support ECP is in the SP, not the IdP. In 
fact IdPs are mostly agnostic with respect to ECP; there is nothing 
ECP-specific an IdP must implement other than supporting the SOAP 
binding for the SingleSignOnService, which is trivial. I've yet to 
encounter an IdP that does not support the SOAP binding.


What IdP are you utilizing which is incapable of receiving an 
AuthnRequest via the SOAP binding?



--
John



Re: [openstack-dev] [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread John Dennis

On 10/14/2015 11:58 AM, Marek Denis wrote:

pretty much - yes! Luckily for you the reference libraries (shibboleth)
are written in Java so it should be easier to integrate with your
application.


Only the Shibboleth IdP is written in Java; Shibboleth the SP is written 
in C++. If you're trying to implement an ECP client you'll probably find 
more support in the C++ SP implementation libraries for what you need.


Actually writing an ECP client is not difficult; you could probably 
cobble one together pretty easily from the standard Java libraries. An 
ECP client only needs to be able to parse and generate XML and 
communicate via HTTP. It does not need to be able to read or generate 
any SAML-specific XML, because an ECP client encapsulates the SAML in 
other XML (e.g. SOAP).
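
For the curious, here is a hedged Python sketch of the three-leg ECP
exchange; all URLs and credentials are placeholders, and a real client
must also handle RelayState and verify the responseConsumerURL before
posting to the assertion consumer service:

import requests

# PAOS header values come from the SAML V2.0 ECP profile.
PAOS_HEADERS = {
    "Accept": "text/html, application/vnd.paos+xml",
    "PAOS": ('ver="urn:liberty:paos:2003-08";'
             '"urn:oasis:names:tc:SAML:2.0:profiles:SSO:ecp"'),
}

session = requests.Session()

# 1. Request a protected resource, announcing ECP support; instead of
#    a browser redirect the SP answers with a SOAP-wrapped AuthnRequest.
sp_resp = session.get("https://sp.example.com/secure", headers=PAOS_HEADERS)

# 2. Relay the envelope to the IdP's SOAP SingleSignOnService endpoint,
#    authenticating however the IdP requires (HTTP basic auth here).
idp_resp = session.post(
    "https://idp.example.com/idp/profile/SAML2/SOAP/ECP",
    data=sp_resp.content,
    auth=("user", "password"),
    headers={"Content-Type": "text/xml"})

# 3. Forward the IdP's SOAP-wrapped Response to the SP's assertion
#    consumer service; a real client extracts this URL from the
#    ecp:Response SOAP header rather than hard-coding it.
acs_resp = session.post(
    "https://sp.example.com/mellon/paosResponse",
    data=idp_resp.content,
    headers={"Content-Type": "application/vnd.paos+xml"})
print(acs_resp.status_code)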


--
John



Re: [openstack-dev] [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread John Dennis

On 10/14/2015 07:10 AM, wyw wrote:

hello, keystoners.  please help me

Here is my use case:
1. use keystone as IDP , supported with SAML
2. keystone integrates with LDAP
3. we use a java application as Service Provider, and to integrate it
with keystone IDP.
4. we use a keystone as Service Provider, and to integrate it with the
keystone IDP.


Keystone is not an identity provider, or at least it's trying to get out 
of that business; the goal is to have Keystone utilize actual IdPs 
for authentication instead.


K2K utilizes a limited subset of the SAML profiles and workflow. 
Keystone is not a general purpose SAML IdP supporting Web SSO.


Keystone implements those portions of various SAMLv2 profiles necessary 
to support federated Keystone and to derive tokens from federated IdPs. 
Note this is distinctly different from Keystone being a federated IdP.



The problems:
in the k2k federation case, keystone service provider requests
authentication info with IDP via Shibboleth ECP.


Nit, "Shibboleth ECP" is a misnomer, ECP (Enhanced Client & Proxy) is a 
SAMLv2 profile, a SAML profile Shibboleth happens to implement, however 
there other SP's and IdP's that also support ECP (e.g. mellon, Ipsilon)



in the java application, we use websso to request IDP, for example:
idp_sso_endpoint = http://10.111.131.83:5000/v3/OS-FEDERATION/saml2/sso
but, the java redirect the sso url , it will return 404 error.
so, if we want to integrate a java application with keystone IDP,
  should we need to support ECP in the java application?


You're misapplying SAML; Keystone is not a traditional IdP. If it were, 
your web application could use SAML HTTP-Redirect or it could also 
function as an ECP client, but not against Keystone. Why? Keystone is 
not a general purpose federated IdP.


--
John



Re: [openstack-dev] [magnum] Difference between certs stored in keystone and certs stored in barbican

2015-09-01 Thread John Dennis

On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:



The reason that is compelling is that you can have Barbican generate,
sign, and store a keypair without transmitting the private key over the
network to the client that originates the signing request. It can be
directly stored, and made available only to the clients that need access
to it.


This is absolutely _not_ how PKI for TLS is supposed to work. Yes, 
Barbican can create keypairs etc. because sometimes that's useful, but 
in the public-private PKI model that TLS expects this is completely 
wrong. Magnum nodes should be creating their own private key and CSR 
and submitting them to some CA for signing.

Now this gets messy because you probably don't want to push keystone
credentials onto each node (that they would use to communicate with
Barbican).

I'm a bit conflicted writing this next bit because I'm not particularly
familiar with the Kubernetes/Magnum architectures and also because I'm 
one of the core developers for Anchor, but here goes...

Have you considered using Anchor for this? It's a pretty lightweight
ephemeral CA that is built to work well in small PKI communities (like 
a Kubernetes cluster). You can configure multiple methods for 
authentication and build pretty simple validation rules for deciding 
if a host should be given a certificate. Anchor is built to provide 
short-lifetime certificates where each node re-requests a certificate 
typically every 12-24 hours; this has some really nice properties like 
"passive revocation" (think revocation that actually works) and strong 
ways to enforce issuing logic on a per host basis.

Anchor or not, I'd like to talk to you more about how you're attempting 
to secure Magnum - I think it's an extremely interesting project that 
I'd like to help out with.

-Rob
(Security Project PTL / Anchor flunkie)


Let's not reinvent the wheel. I can't comment on what Magnum is doing, 
but I do know the members of the Barbican project are PKI experts who 
understand CSRs, key escrow, revocation, etc. Some of the design work 
is being done by engineers who currently contribute to products in use 
by the Dept. of Defense, an agency that takes its PKI infrastructure 
very seriously. They have also been involved with Keystone. I work with 
these engineers on a regular basis.


The Barbican blueprint states:

Barbican supports full lifecycle management including provisioning, 
expiration, reporting, etc. A plugin system allows for multiple 
certificate authority support (including public and private CAs).


Perhaps Anchor would be a great candidate for a Barbican plugin.

What I don't want to see is spinning our wheels, going backward, or 
inventing one-off solutions to a very demanding and complex problem 
space. There have been way too many one-off solutions in the past; we 
want to consolidate the expertise in one project that is designed by 
experts and fully vetted. This is the role of Barbican. Would you like 
to contribute to Barbican? I'm sure your skills would be a tremendous 
asset.



--
John



Re: [openstack-dev] [magnum] Difference between certs stored in keystone and certs stored in barbican

2015-09-01 Thread John Dennis

On 09/01/2015 02:49 AM, Tim Bell wrote:

Will it also be possible to use a different CA ? In some
environments, there is already a corporate certificate authority
server. This would ensure compliance with site security standards.


A configurable CA was one of the original design goals when the Barbican 
work began. I have not tracked the work so I don't know if that is still 
the case; Ade Lee would know for sure.


--
John



Re: [openstack-dev] [Fuel] SSL keys saving

2015-08-24 Thread John Dennis

On 08/21/2015 05:10 AM, Stanislaw Bogatkin wrote:

Hi folks.

Today I want to discuss the way we save SSL keys for Fuel environments.
As you may know we have 2 ways to get a key:
a. Generate it by Fuel (a self-signed certificate will be created in
this case). In this case we will generate the private key, csr and crt
in a pre-deployment hook on the master node and then copy the keypair
to the nodes which need it.

b. Get a pre-generated keypair from the user. In this case the user
should create the keypair by himself and then upload it through the
Fuel UI settings tab. In this case the keypair will be saved in the
nailgun database and then serialized into astute.yaml on cluster
nodes, pulled from it by puppet and saved into a file.

The second way has some flaws:
1. We already have some keys for nodes and we store them on the master
node. Storing keys in different places is bad, because:
1.1. User experience - the user should remember that in some cases keys
will be stored in the FS and in other cases in the DB.
1.2. It brings problems for implementation in other places - for
example, we need to get the certificate to properly run OSTF tests, and
we would have to implement two different ways to deliver that
certificate to the OSTF container. The same goes for fuel-cli - we
would have to somehow get the certificate from the DB and place it in
the FS to use it.
2. astute.yaml is similar for all nodes. Not all nodes need to have the
private key, but now we cannot control this.
3. If keypair data is serialized into astute.yaml, that data will
automatically be fetched when a diagnostic snapshot is created. In some
cases this can lead to a security vulnerability, or we would have to
write another crutch to cut it out of the diagnostic snapshot.


So I propose to get rid of saving the keypair in the nailgun database
and implement a way to always save it to the local FS on the master
node. We need to implement the following items:

- Change the UI logic that saves the keypair into the DB to logic that
will save it to the local FS
- Implement corresponding fixes in fuel-library


I have not been following this thread nor do I know or understand all 
your requirements, but I wanted to bring to your attention the fact 
that OpenStack has a project called Barbican whose primary purpose is 
to safely store keys, plus it has many other features for handling 
keys. Key handling is tricky, so rather than trying to design something 
yourself perhaps it might make sense to leverage the Barbican work.


https://wiki.openstack.org/wiki/Barbican/Incubation

http://www.rackspace.com/blog/keeping-openstack-secrets-safe-with-barbican/

http://www.slideshare.net/jarito030506/barbican-10-open-source-key-management-for-openstack


--
John



Re: [openstack-dev] [all][python3] use of six.iteritems()

2015-06-11 Thread John Dennis

On 06/11/2015 01:46 PM, Mike Bayer wrote:
I am firmly in the "let's use items()" camp.  A 100 ms difference for 
a totally not-real-world case of a dictionary 1M items in size is no 
kind of rationale for the Openstack project - if someone has a 
dictionary that's 1M objects in size, or even 100K, that's a bug in 
and of itself.


the real benchmarks we should be using, if we are to even bother at 
all (which we shouldn't), is to observe if items() vs. iteritems() has 
*any* difference that is at all measurable in terms of the overall 
execution of real-world openstack use cases.   These nano-differences 
in speed are immediately dwarfed by all those operations surrounding 
them long before we even get to the level of RPC overhead.


Lessons learned in the trenches:

* The best code is the simplest [1] and easiest to read.

* Code is write-once, read-many; clarity is a vital part of the read-many.

* Do not optimize until functionality is complete.

* Optimize only after profiling real world use cases.

* Prior assumptions about what needs optimization are almost always 
proven wrong by a profiler.


* I/O latency vastly overwhelms most code optimization making obtuse 
optimization pointless and detrimental to long term robustness.


* The amount of optimization needed is usually minimal, restricted to 
just a few code locations and 80% of the speed increases occur in just 
the first few tweaks after analyzing profile data.


[1] Compilers can optimize simple code best, simple code is easy to 
write and easier to read while at the same time giving the tool chain 
the best chance of turning your simple code into efficient code. (Not 
sure how much this applies to Python, but it's certainly true of other 
compiled languages.)
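
A minimal sketch of that profile-first workflow using the standard
library; the workload function is just a stand-in for a real use case:

import cProfile
import pstats

def workload():
    # Stand-in for a real-world use case; profile the real thing.
    total = 0
    for i in range(100000):
        total += sum(divmod(i, 7))
    return total

cProfile.run("workload()", "profile.out")

# The few entries at the top of the cumulative-time listing are
# typically where the handful of worthwhile tweaks live.
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)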


John



Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-12 Thread John Dennis
On 05/12/2015 08:47 AM, Clint Byrum wrote:
> I agree with all the things above, and I want to add that I think SVG
> is probably the most appropriate candidate as a W3C approved drawing
> format. We can even enforce style rules and use a reformatter so that
> diffs make sense.

+1 for SVG

As already mentioned, blockdiag and its variants are easy-to-use
Python tools whose source input is dead simple text and which produce
lovely SVG diagrams:

http://blockdiag.com/en/

It also does sequence diagrams, activity diagrams and network diagrams.
One could commit both the source diag file and the SVG output into git.

Inkscape (https://inkscape.org/) is the premier open source SVG drawing
tool and an alternative to Adobe Illustrator. It's not hard to learn.

But the important point is the format should be SVG because then it can
easily be edited later by your preferred SVG editing tool. Plus SVG
makes the nicest looking diagrams especially when you need to cater to
different output resolutions (think print).




-- 
John



Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread John Dennis
Can't this be solved with a couple of environment variables? The two
key pieces of information needed are:

1) who authenticated the subject?

2) what authentication method was used?

There is already precedent for AUTH_TYPE: it's used in AJP to
initialize the authType property in a Java Servlet. AUTH_TYPE would
cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps
there could be a convention that AUTH_TYPE could carry extra qualifying
parameters much like HTTP headers do. The first token would be the
primary mechanism, e.g. saml, negotiate, x509, etc. For authentication
types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra
parameter would qualify the actual mechanism used. For SAML that
qualifying extra parameter could be the value from AuthnContextClassRef.

Item 1 could be covered by a new environment variable AUTH_AUTHORITY.

If AUTH_TYPE is negotiate (i.e. kerberos) then the AUTH_AUTHORITY would
be the KDC. For SAML it would probably be taken from the
AuthenticatingAuthority element or the IdP entityID.

I'm not sure I see the need for other layers to receive the full SAML
assertion and validate the signature. One has to trust the server you're
running in. It's the same concept as trusting REMOTE_USER.
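
A hedged sketch of the convention proposed above; the parameter syntax
mirrors HTTP headers and every concrete value is illustrative:

def parse_auth_type(auth_type):
    """Split an AUTH_TYPE value such as
    'saml; class=urn:oasis:names:tc:SAML:2.0:ac:classes:Password'
    into the primary mechanism and its qualifying parameters."""
    parts = [p.strip() for p in auth_type.split(";")]
    mechanism = parts[0].lower()
    params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
    return mechanism, params

# Hypothetical values a web server auth module might set:
environ = {
    "REMOTE_USER": "jdoe",
    "AUTH_TYPE": "saml; class=urn:oasis:names:tc:SAML:2.0:ac:classes:Password",
    "AUTH_AUTHORITY": "https://idp.example.com/idp",
}
mech, params = parse_auth_type(environ["AUTH_TYPE"])
print(mech, params.get("class"), environ["AUTH_AUTHORITY"])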

-- 
John



Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-04 Thread John Dennis
On 11/04/2014 04:19 PM, David Stanek wrote:
> There are probably a few other assumptions, but the main one is that the
> mapper expects the incoming data to be a dictionary where the value is a
> string. If there are multiple values we expect them to be delimited with
> a semicolon in the string.

and ...

any value with a colon will be split regardless of the value's semantic
meaning. In other words, the colon character is illegal in any context
other than as a list separator. And you had better hope that any value
whose semantic meaning is a list uses the colon separator and not
space, tab, comma, etc.

-- 
John



Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-04 Thread John Dennis
On 11/04/2014 02:46 AM, David Chadwick wrote:
> Hi John

Good morning David. I hope you're enjoying Paris and the summit is both
productive and enjoyable. Wish I was there :-)

> Its seems like your objective is somewhat different to what was intended
> with the current mapping rules. You seem to want a general purpose
> mapping engine that can map from any set of attribute values into any
> other set of values, whereas the primary objective of the current
> mapping rules is to map from any set of attribute values into a Keystone
> group, so that we can use the existing Keystone functions to assign
> roles (and hence permissions) to the group members. So the current
> mapping rules provide a means of identifying and then authorising a
> potentially random set of external users, who logically form a coherent
> group of users for authorisation purposes.

O.K., group assignment is the final goal in Keystone. I suppose the
relevant question then is: is the functionality in the current Keystone
mapper sufficiently rich that you can present to it an arbitrary
set of values and yield a group assignment? It's been a while since I
looked at the mapper, things might have changed, but it seemed to me it
had a lot of baked-in assumptions about the data (assertion) it would
receive. As long as those assumptions hold true all is good.

My concern arose from real world experience where I saw a lot of "messy"
data (plus I had a task that had some other requirements). There is
often little consistency over how data is formatted and what data is
included when you receive it from a foreign source. Now combine the
messy data with complex rules dictated by management and you have an
admin with a headache who is asked to make sure the rules are
implemented. An admin might have to implement something like this:

"If the user is a member of domain D and has authenticated with
mechanisms X,Y or Z and has IdP attribute A but does not have suffix S
in their username and is not in a blacklist then assign them group G and
transform their username by stripping the suffix, replacing all hyphens
with underscores and lowercase it."

I'll grant you this example is perhaps a bit contrived but it's not too
far afield from the questions I've seen admins ask when trying to manage
actual RADIUS deployments. BTW, where is that domain information coming
from in the example? Usually it has to be extracted from the username in
any one of a number of formats.
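
Expressed in a general purpose language, the contrived rule above
collapses to a few lines. A hedged sketch; every concrete name (the
domain, the methods, attribute 'A', the suffix, group 'G') is
illustrative:

BLACKLIST = {"eve", "mallory"}               # site-specific table
METHODS = {"x509", "kerberos", "saml"}       # "mechanisms X, Y or Z"

def map_user(username, domain, auth_method, idp_attrs):
    if (domain == "example.com"              # member of domain D
            and auth_method in METHODS
            and "A" in idp_attrs             # has IdP attribute A
            and not username.endswith("_ext")  # does not have suffix S
            and username not in BLACKLIST):
        # Transform: strip any realm, hyphens to underscores, lowercase.
        local = username.split("@", 1)[0].replace("-", "_").lower()
        return {"user": local, "groups": ["G"]}
    return None

print(map_user("John-Doe@example.com", "example.com", "x509", {"A": "1"}))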

It's things like this that motivate me towards a more general purpose
mechanism because at the end of the day the real world isn't pretty :-)

FWIW FreeRADIUS didn't start out with a policy language with
capabilities like this, it grew one out of necessity.

I'm definitely not trying to say Keystone needs to switch mappers;
instead what I'm offering is one approach you might want to consider
before the current mapping syntax becomes entrenched and ugly real world
problems begin to crop up. I don't have any illusions this solution is
ideal; these things are difficult to spec out and write. One advantage
is it's easy to extend in a backwards compatible manner with minimal
effort (basically it's just adding a new function you can "call").

FWIW the ideal mapper in my mind is something written in a general
purpose scripting language where you have virtually no limitations on
how you transform values and enforce conditions, but as I indicated in
my other document managing a script interpreter for this task has its
own problems. Which is the lesser of two evils, a script interpreter or
a custom policy "language"? I don't think I have the answer to that but
came down on the side of the custom policy "language" as being the most
palatable and portable.

> Am I right in assuming that
> you will also want this functionality after your general purpose mapping
> has taken place?

The mapper I designed does give you a lot of flexibility to assign a
foreign identity to a group (or multiple groups) with no additional
steps so I'm not sure I follow the above comment. There is no need for
any extra or multiple steps, it should be self contained.



-- 
John



Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-03 Thread John Dennis
On 11/03/2014 08:50 AM, John Dennis wrote:
> I had a bunch of notes but I can't find them at the moment
> so I'm going from memory here (always a bit risky).

I found my notes; attached is a reStructuredText document that
summarizes the issues I found with the current Keystone mapping, the
basic requirements for mapping based on my prior experience, and the
reasons for selecting the approach I did. I hope this explains the
motivations and rationale and puts things into context.




-- 
John
Federation Mapping
==================

Introduction
------------

With federated authentication a trusted external IdP (Identity
Provider) authenticates an entity and provides attributes associated
with the authenticated entity such as full name, organization,
location, group membership, roles, etc. to the local authorization
system. The remote identity attributes usually do not directly
correlate to the local identity attributes; as such, they must be mapped
or transformed to be consistent with the local identity attributes.

Problems with existing contributed OpenStack federation mapping
----------------------------------------------------------------

A federated mapping implementation was contributed to OpenStack which
is based on static rules. However that mapping system lacks basic
features required in real world deployments. Examples of the
deficiencies are:


* Replacement substitutions are specified by an index, e.g. {0} making
  them difficult to use.

  * If you edit a rule all the indices shift and you have carefully
adjust multiple locations, this is tedious and error prone.

  * Numeric replacements are not friendly, it's difficult to remember what
index maps to which value. Likewise its difficult to read a rule
with a indexed substitutions and understand what is being
replaced, a number provides no context for the reader.

* Impossible to specify how strings containing lists of values are to
  be split.

* Any string containing a colon is automatically split regardless of
  the semantics of the string.

* Impossible to perform tests and perform conditional logic. For
  example you cannot test if the user is a member of a group and if so
  add a role based on the group membership. You cannot test if a list
  is empty, has a certain number of members, if a value starts with a
  prefix or ends with a suffix, etc.

* No mechanism to handle case sensitivity.

* Cannot split direct map value into multiple values, e.g. user@domain
  cannot be returned as ['user', 'domain'], i.e. {0}, {1}. This
  requires more robust regular expression support than is provided.

* Cannot operate on substrings or do replacements. Common operations
  such as replacing hyphens with underscores or stripping off prefixes
  or suffixes are impossible.

* Direct map values cannot be reassigned by later rules.
  e.g. email might be in assertion so it would be direct map,
  but if it was absent one can't synthesize it from other values
  in the assertion, i.e. user + @ + domain

* Difficult to build values by string interpolation or concatenation.

* The local array elements are dicts whose keys can contain 'user' or
  'group' or both, but you can't have more that one user or group in an
  element and elements that define a user more than once produce an
  error.

* Logical OR requires cut-n-paste copies of many rules with only a minor
  difference in each rule, rules quickly become unreadable and difficult
  to ascertain the logic.

* ``not_any_of`` does not work when regex is True, enabling regex
  effectively changes the condition to ``any_one_of``. (This is a bug
  that can be fixed)

Examples of real world tasks the current mapping cannot handle
---------------------------------------------------------------

I've worked with RADIUS for years. RADIUS is often configured to
operate in a federated mode where the attributes supplied to the
RADIUS server have to be manipulated to match local conventions and
policies. Below are some of the most common issues I've seen admins
have to tackle.

* Split the realm from the username. Assign the username and realm
  independently in the result. This is by far the most common issue
  admins raise.

* Strip prefixes and suffixes and/or take a prefix or suffix and map
  it to a new independent value. Believe it or not many organizations
  embed group/role information in their usernames. Usernames such as
  "johndoe_staff" where the username is "johndoe" and role is "staff"
  are depressingly common.

* Behave differently depending on the realm, the IdP, a DNS name or a
  network address.

* Test for membership in a collection. Examples are whitelists,
  blacklists, groups, etc. The result of the membership test modifies
  how the user is ultimately mapped and the privileges they receive.

* Search for and/or extract substrings, usually demands regular
  expression support. The result of the re
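
As a concrete illustration of the first task above, a hedged sketch
that recognizes the two most common realm formats:

import re

# DOMAIN\user (prepended) and user@realm (appended) forms.
REALM_FORMS = [
    re.compile(r"^(?P<realm>[^\\]+)\\(?P<user>.+)$"),
    re.compile(r"^(?P<user>[^@]+)@(?P<realm>.+)$"),
]

def split_realm(name):
    """Return (username, realm); realm is None when absent."""
    for form in REALM_FORMS:
        m = form.match(name)
        if m:
            return m.group("user"), m.group("realm")
    return name, None

print(split_realm("EXAMPLE\\johndoe"))     # ('johndoe', 'EXAMPLE')
print(split_realm("johndoe@example.com"))  # ('johndoe', 'example.com')
print(split_realm("johndoe"))              # ('johndoe', None)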

Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-03 Thread John Dennis
On 11/03/2014 05:52 AM, David Chadwick wrote:
> I agree with Morgan. We designed the current mapping functionality to
> cover all the use cases we were aware of, but if there are more, then we
> would love to hear about them and make the fixes that are necessary.
> Attribute mapping is a critical component of federation, and it should
> be fit for purpose
> 
> regards
> 
> David
> 
> 
> On 03/11/2014 09:08, Morgan Fainberg wrote:
>>
>> On Nov 2, 2014, at 22:21, Dolph Mathews <dolph.math...@gmail.com> wrote:
>>
>>> On Sunday, November 2, 2014, John Dennis <jden...@redhat.com> wrote:
>>>
>>> It was hoped we could simply borrow the Keystone mapping
>>> implementation but it was found to be too limiting and not
>>> sufficiently
>>> expressive. We could not find another alternative so we designed a new
>>> mapper which is described in this PDF.
>>>
>>>
>>> In what way was it too limited? Did you consider extending the
>>> existing grammar and extending the existing mapping engine?
>>
>> I am very interested in knowing the limitations you ran into. I am
>> fairly certain we are willing to update the engine to meet the needs of
>> the deployers, but knowing what those limitations are and what this new
>> proposed engine provides that we don't (for this use case) is important. 

Of course, my apologies, I should have included that information
originally. I had a bunch of notes but I can't find them at the moment
so I'm going from memory here (always a bit risky).

A lot of what motivated me was years of experience with RADIUS, which
in many respects is often deployed in a federated manner, where the
values that show up from foreign sources are often diverse. Admins have
to write policy to recognize these foreign inputs, transform them
into a local canonical format, and operate on them.

One of the most common and pernicious problems was the presence or
absence of realm (i.e. domain) information included with the username.
One has to be able to recognize the various formats (prepended,
appended, diverse separator characters, etc.) and be able to extract the
username from the domain and carry that information forward independently.

Another problem that confronted us in OpenDaylight was the requirement
to assign roles based on information in the assertion. That meant being
able to form a set of roles and assign roles based on arbitrary data in
the assertion (examples include specific usernames, presence or absence
of attributes, presence or absence of a string value in an attribute,
recognizing the *exact* value in the assertion that contains group
membership, what the group separator is and treating the values as a set
with the ability to determine if a value is a member of that set, etc.)


* Hardcoded assumptions concerning values, e.g. any string value
containing a colon is split on the colon.

* Hardcoded assumption that values are strings when in fact you may be
presented with a list or a map of key/value pairs, or a value that needs
to be treated as a number, boolean, or empty value (null).

* Seemed as if the structure forced a lot of redundancy.

* Inability to store partial operations and reference them later. In
other words no variables.

* Inability to perform regular expression matching

* Inability to perform regular expression replacement

* Inability to split a string into components and reference those components

* Inability to perform comparison operations.

* Inability to determine the length of a string, list, or number of keys
in a map, but more importantly inability to test if a value is empty or
has exactly n members.

* Inability to split a string on an arbitrary separator and save the
result in an ordered list.

* Inability to perform case conversions

* Inability to test for membership in a list or dict

* Inability to join components using an arbitrary separator.

* Inability to load site specific tables of information.

HTH,


-- 
John



Re: [openstack-dev] [Keystone] Alternative federation mapping

2014-11-02 Thread John Dennis
On 11/02/2014 03:55 PM, Marek Denis wrote:
> Hi John,
> 
> It indeed looks interesting and  enhancing the mapping engine is on
> ours to-do list for a long time. I'd be happy to talk this through
> during the summit. Do you think you will be able to come for a
> Keystone websso/federation Design Session on Wednesday at 16.30?

Thank you, Marek. I'd love to discuss this in person; however, I'm not
attending this summit. My manager Nathan Kinder, who has been
following this work, will be in Paris, as will Adam Young, who is also
in our group. Of course I'm happy to discuss this electronically, turn
it into a blueprint, or whatever is deemed appropriate.


-- 
John



[openstack-dev] [Keystone] Alternative federation mapping

2014-11-02 Thread John Dennis
While working on federated authentication for a different project
(OpenDaylight) we discovered we needed to map from the assertion
provided by an external federated IdP to local values. This is
essentially the same requirement which exists in Keystone's federated
support. It was hoped we could simply borrow the Keystone mapping
implementation but it was found to be too limiting and not sufficiently
expressive. We could not find another alternative so we designed a new
mapper which is described in this PDF.

https://jdennis.fedorapeople.org/doc/mapping.pdf

The mapper as described in the document has implementations in both Java
and Python. The Java implementation is currently in use in OpenDaylight
(a Java based project). For those interested I can provide a pointer to
OpenDaylight specific documentation on how this mapper is used in
conjunction with the Apache web server providing authentication and SSSD
providing identity attributes to a Java servlet container.

My goal here is to make Keystone developers aware of an alternative
mapper which may provide needed mapping features not currently available
and for which different language implementations already exist. Note,
the mapper is easily extended should a need arise.

Source code and documentation can be found here by cloning this git repo:

git clone git://fedorapeople.org/~jdennis/federated-mapping.git

Note, I put this git repo together quickly by pulling together things
from a variety of sources; as such there may be things needing to be
cleaned up in the repo, and at the moment it's really just meant for
browsing. Over the next few days I'll make sure everything builds and
executes cleanly. I'm posting this now in case folks want to have
conversations at the Paris Summit.

-- 
John



Re: [openstack-dev] [Keystone][Oslo] User mapping and X509 for signing

2014-10-09 Thread John Dennis
On 10/08/2014 04:58 PM, Adam Young wrote:
> When gyee posted his X509 server-side auth plugin patch,  the feedback 
> we gave was that it should be using the mapping code from Federation to 
> transform the environment variables set by the web server to the 
> Keystone userid, username, domain name, and so forth.
> 
> The PKI token format currently allows for a single signing cert.  I have 
> a proposal to allow for multiple signers.  One issue, though, is how to 
> map from the certificates signer-info to the Keystone server that signed 
> the data.   signer-data is part of the CMS message format, and can be 
> used to uniquely identify the certificate that signed the document.  
>  From the signer-data, we can fetch the certificate.
> 
> SO, we could build a system that allowed us to fetch multiple certs for 
> checking signatures.  But then the question is, which cert maps to "the 
> entity authorized to sign for this data."
> 
> OpenStack lacks a way to enumerate the systems, endpoints or otherwise.  
> I'm going to propose that we create a service domain and that any system 
> responsible for signing a document have a user created inside that 
> domain.  I think we want to make the endpoint id match the user ID for 
> endpoints, and probably something comparable for Nova Compute services.
> 
> This means we can use the associated keystone user to determine what 
> Compute node signed a message.  It gives us PKI based, asymetric Oslo 
> message signing.
> 
> This same abstraction should be extended to Kite for symmetric keys.
> 
> In order to convert the certificate data to the Keystone User ID, we can 
> use the Mapping mechanism from Federation, just like we are planning on 
> for the X509 Auth Plugin.
> 
> One thing I want to make explicit, and get some validation on from the 
> community: is it acceptable to say that there needs to be a mappable 
> link between all X509 certificates distributed by a certain CA, for a 
> certain Domain, and the users in there? It seems to me to be comparable 
> to the LDAP constraints. Is this a reasonable assumption? If not, it 
> seems like the X509 mechanism is really not much more than a naked 
> Public Key.

I don't fully understand your proposal, perhaps because a few details
were omitted. But here are my thoughts and let me know where I might
have misunderstood.

The mapping seems pretty straightforward to me, thus I'm not sure I see
the need for an extra service and the associated complexity. You should
be able to extract the signer subject from the signing data as well as
the signer's issuer information. Given that it's a simple lookup whose
mapping can be trivially managed using any of the key/value tables
available to Keystone, one only needs to populate that table in some
authoritative way, but that's mostly a deployment issue.

I also don't follow what you mean by "not much more than a naked Public
Key", can you elaborate?


-- 
John



Re: [openstack-dev] Add a hacking check to not use "Python Source Code Encodings" (PEP0263)

2014-07-22 Thread John Dennis
On 07/21/2014 04:45 AM, Christian Berendt wrote:
> Hello.
>
> There are some files using the Python source code encodings as the first
> line. That's normally not necessary and I want to propose introducing a
> hacking check for the absence of the source code encodings.
>

I assume you mean you want to prohibit the use of the source code
encoding declaration as opposed to mandating its use in every file; if
so, then ...

NAK. This is a very useful and essential feature, at least for the
specific case of UTF-8. Python 3 source files are defined to be UTF-8
encoded; however, the same is not true for Python 2, which requires an
explicit source coding declaration for UTF-8. Properly handling
internationalization, especially in the unit tests, is critical.
Embedding code points outside the ASCII range via obtuse notation is
both cumbersome and impossible to read without referring to a chart, a
definite drawback. OpenStack in general has not had a great track record
with respect to internationalization; let's not make it more difficult,
instead let's embrace those features which promote better
internationalization support.

Given Python 3 has declared the source code encoding is UTF-8, I see no
justification for capriciously declaring Python 2 cannot share the same
encoding. Nor do we want to make it difficult for developers by forcing
them to use unnatural hexadecimal escapes in strings.
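
To make the point concrete, here is a minimal sketch of the declaration
in question (PEP 263) with the readable and unreadable spellings of the
same string:

# -*- coding: utf-8 -*-
# The line above is the PEP 263 declaration; without it Python 2
# rejects the non-ASCII literal below with a SyntaxError.
greeting = u'Grüße'            # readable
greeting2 = u'Gr\xfc\xdfe'     # the obtuse hexadecimal alternative
assert greeting == greeting2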

However, my concerns are strictly limited to UTF-8; I do not think we
should allow any other source code encoding aside from UTF-8. I'd
advocate for a hacking rule to check for any encoding other than UTF-8
(no need to check for ASCII since it's a proper subset of UTF-8). For
example, Latin-1 should be flagged as a definite problem.

Do you have a reason for advancing this restriction?

-- 
John




Re: [openstack-dev] [barbican] Nominating Ade Lee for barbican-core

2014-07-10 Thread John Dennis
On 07/10/2014 12:55 PM, Douglas Mendizabal wrote:
> Hi Everyone,
>
> I would like to nominate Ade Lee for the barbican-core team.
>
>

I'm not a member of the barbican core team but I'd like to endorse Ade
anyway with a +1.

I've worked with Ade for years and consider him an exceptional engineer
whose contributions I'm certain will be outstanding.

-- 
John




Re: [openstack-dev] [Neutron][LBaaS] subjAltName and CN extraction from x509 certificates

2014-06-27 Thread John Dennis
On 06/27/2014 12:21 AM, Carlos Garza wrote:
>   I don't know where we can check in experimental code so I have a
> demonstration of how to extract CNs, subjAltNames, or whatever we want
> from x509 certificates. Later on I plan to use the OpenSSL libraries to
> verify certs coming from barbican are valid and actually do sign the
> private_key it is associated with.
>
> https://github.com/crc32a/ssl_exp.git
>
>
I'm always leery of reinventing the wheel; we already have code to
manage PEM files (maybe this should be in oslo, it was proposed once):

keystone/common/pemutils.py

I'm also leery of folks writing their own ASN.1 parsing as opposed to
using existing libraries. Why? It's really hard to get right so that you
correctly handle all the cases; long-established, robust libraries are
better at this.

python-nss (which is a Python binding to the NSS crypto library) has
easy to use code to extract just about anything from a cert; here is an
example python script using your example pem file. If using NSS isn't an
option I'd rather see us provide the necessary binding in pyopenssl than
handcraft one-off routines. FWIW virtually everything you see in the
cert output below can be accessed Pythonically as Python objects when
using python-nss.

#!/usr/bin/python

import sys
import nss.nss as nss

nss.nss_init_nodb()

filename = sys.argv[1]

# Read the PEM file
try:
    binary_cert = nss.read_der_from_file(filename, True)
except Exception as e:
    print e
    sys.exit(1)
else:
    print "loaded cert from file: %s" % filename

# Create a Certificate object from the binary data
cert = nss.Certificate(binary_cert)

# Dump some basic information
print
print "cert subject: %s " % cert.subject
print "cert CN: %s " % cert.subject_common_name
print "cert validity:"
print "Not Before: %s" % cert.valid_not_before_str
print "Not After: %s" % cert.valid_not_after_str

print
print "\ncert has %d extensions" % len(cert.extensions)

for extension in cert.extensions:
    print "%s (critical: %s)" % (extension.name, extension.critical)

print
extension = cert.get_extension(nss.SEC_OID_X509_SUBJECT_ALT_NAME)
if extension:
    print "Subject Alt Names:"
    for name in nss.x509_alt_name(extension.value):
        print "%s" % name
else:
    print "cert does not have a subject alt name extension"

# Dump entire cert in friendly format
print
print ">>> Entire cert contents <<<"
print cert

sys.exit(0)

Yields this output:

loaded cert from file: cr1.pem

cert subject: CN=www.digicert.com,O="DigiCert, 
Inc.",L=Lehi,ST=Utah,C=US,postalCode=84043,STREET=2600 West Executive 
Parkway,STREET=Suite 
500,serialNumber=5299537-0142,incorporationState=Utah,incorporationCountry=US,businessCategory=Private
 Organization 
cert CN: www.digicert.com 
cert validity:
Not Before: Thu Mar 20 00:00:00 2014 UTC
Not After: Sun Jun 12 12:00:00 2016 UTC


cert has 10 extensions
Certificate Authority Key Identifier (critical: False)
Certificate Subject Key ID (critical: False)
Certificate Subject Alt Name (critical: False)
Certificate Key Usage (critical: True)
Extended Key Usage (critical: False)
CRL Distribution Points (critical: False)
Certificate Policies (critical: False)
Authority Information Access (critical: False)
Certificate Basic Constraints (critical: True)
OID.1.3.6.1.4.1.11129.2.4.2 (critical: False)

Subject Alt Names:
www.digicert.com
content.digicert.com
digicert.com
www.origin.digicert.com
login.digicert.com

>>> Entire cert contents <<<
Data:
Version:   3 (0x2)
Serial Number: 13518267578909330747227050733614153347 
(0xa2b860cca01f45fd7ee63601b1c3e83)
Signature Algorithm:
Algorithm: PKCS #1 SHA-256 With RSA Encryption
Issuer: CN=DigiCert SHA2 Extended Validation Server 
CA,OU=www.digicert.com,O=DigiCert Inc,C=US
Validity:
Not Before: Thu Mar 20 00:00:00 2014 UTC
Not After:  Sun Jun 12 12:00:00 2016 UTC
Subject: CN=www.digicert.com,O="DigiCert, 
Inc.",L=Lehi,ST=Utah,C=US,postalCode=84043,STREET=2600 West Executive 
Parkway,STREET=Suite 
500,serialNumber=5299537-0142,incorporationState=Utah,incorporationCountry=US,businessCategory=Private
 Organization
Subject Public Key Info:
Public Key Algorithm:
Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
a8:89:b3:3b:91:94:57:87:72:09:5b:5f:cb:2c:42:2a:
9e:ed:c2:fd:20:7b:2c:63:7f:dd:07:bf:fb:49:5c:ed:
1c:a2:70:79:75:c2:34:cc:eb:12:f0:40:88:3a:b9:ea:
29:a2:11:8f:53:e1:02:e1:87:04:f6:58:b9:86:b6:7f:
85:5e:0a:58:47:c3:bd:e7:6b:21:07:9d:db:ef:57:8b:
16:ce:38:f1:e3:e2:e4:5a:10:b8:39:bb:0a:ad:ca:c5:
10:85:3a:a1:6f:67:c9:18:c3:5b:b2:4c:a6:01:b6:c3:
50:be:7e:c8:79:ca:3c:53:5e:02:78:ae:96:5f:56:21:
  

Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-23 Thread John Dennis
Using standard formats such as PEM and PKCS12 (most people don't use
PKCS8 directly) is a good approach. Be mindful that some cryptographic
services do not provide *any* direct access to private keys (makes
sense, right?). Private keys are shielded in some hardened container and
the only way to refer to the private key is via some form of name
association. Therefore your design should never depend on having access
to a private key and should permit having the private key stored in some
type of secure key storage.

-- 
John



Re: [openstack-dev] [oslo] Logging exceptions and Python 3

2014-05-21 Thread John Dennis
On 05/21/2014 01:11 PM, Igor Kalnitsky wrote:
>> So, write:
>>
>> LOG.debug(u'Could not do whatever you asked: %s', exc)
>>
>> or just:
>>
>> LOG.debug(exc)
> 
> Actually, that's a bad idea to pass an exception instance to
> some log function: LOG.debug(exc). Let me show you why.
> 
> Here a snippet from logging.py:
> 
> def getMessage(self):
>     if not _unicode:
>         msg = str(self.msg)
>     else:
>         msg = self.msg
>         if not isinstance(msg, basestring):
>             try:
>                 msg = str(self.msg)
>             except UnicodeError:
>                 msg = self.msg  # we keep exception object as it is
>     if self.args:   # this condition is obviously False
>         msg = msg % self.args
>     return msg  # returns an exception object, not a text
> 
> And here is another snippet from the format() method:
> 
> record.message = record.getMessage()
> # ... some time formatting ...
> s = self._fmt % record.__dict__ # FAIL
> 
> the old string formatting will call str(), not unicode() and we will FAIL
> with UnicodeEncodeError.

But that's a bug in the logging implementation. Are we supposed to write
perverse code just to avoid coding mistakes in other modules? Why not
get the fundamental problem fixed?
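
For what it's worth, a minimal Python 2 sketch of the failure mode (and
of the explicit-conversion workaround):

# -*- coding: utf-8 -*-
import logging
import six

logging.basicConfig()
exc = ValueError(u'non-ASCII détail')

logging.error(exc)  # str(exc) inside logging raises UnicodeEncodeError;
                    # the intended message is never emitted
logging.error(u'failed: %s', six.text_type(exc))  # logs cleanly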


-- 
John



Re: [openstack-dev] [oslo] strutils: enhance safe_decode() and safe_encode()

2014-05-21 Thread John Dennis
On 05/15/2014 11:41 AM, Victor Stinner wrote:
> Hi,
> 
> The functions safe_decode() and safe_encode() have been ported to Python 3, 
> and changed more than once. IMO we can still improve these functions to make 
> them more reliable and easier to use.
> 
> 
> (1) My first concern is that these functions try to guess user expectation 
> about encodings. They use "sys.stdin.encoding or sys.getdefaultencoding()" as 
> the default encoding to decode, but this encoding depends on the locale 
> encoding (stdin encoding), on stdin (is stdin a TTY? is stdin mocked?), and 
> on 
> the Python major version.
> 
> IMO the default encoding should be UTF-8 because most OpenStack components 
> expect this encoding.
> 
> Or maybe users want to display data to the terminal, and so the locale 
> encoding should be used? In this case, locale.getpreferredencoding() would be 
> more reliable than sys.stdin.encoding.

The problem is you can't know the correct encoding to use until you know
the encoding of the IO stream, therefore I don't think you can correctly
write generic encode/decode functions. What if you're trying to send
the output to multiple IO streams, potentially with different encodings?
Think that's far fetched? Nope, it's one of the nastiest and most common
problems in Python2. The default encoding differs depending on whether
the IO target is a tty or not. Therefore code that works fine when
written to the terminal blows up with encoding errors when redirected to
a file (because the TTY probably has UTF-8 and all other encodings
default to ASCII due to sys.defaultencoding).

Another problem is that the Python2 default encoding is ASCII but in
Python3 it's UTF-8 (IMHO the default encoding in Python2 should have
been UTF-8; the fact it was set to ASCII is the cause of 99% of the
encoding exceptions in Python2).

Given that you don't know what the encoding of the IO stream is, I don't
think you should base it on the locale nor sys.stdin. Rather I think we
should just agree everything is UTF-8. If that messes up someone's
terminal output I think it's fair to say if you're running OpenStack
you'll need to switch to UTF-8. Anything else requires way more
knowledge than we have available in a generic function. Solving this so
the encodings match for each and every IO stream is very complicated;
note Python3 still punts on this.
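
A minimal sketch (not the actual oslo strutils API) of what that
position implies:

import six

def decode_utf8(value):
    # Assume UTF-8 everywhere: decode bytes, pass text through unchanged.
    if isinstance(value, bytes):
        return value.decode('utf-8')
    return six.text_type(value)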


-- 
John



Re: [openstack-dev] [oslo] Logging exceptions and Python 3

2014-05-19 Thread John Dennis
On 05/16/2014 09:11 AM, Johannes Erdfelt wrote:
> six.text_type(exc) is the recommended solution.

+1

-- 
John



Re: [openstack-dev] [nova] Timestamp formats in the REST API

2014-05-07 Thread John Dennis
On 04/29/2014 09:48 AM, Mark McLoughlin wrote:
> What appeared to be unusual was that the timestamp had both sub-second
> time resolution and timezone information. It was felt that this wasn't a
> valid timestamp format and then some debate about how to 'fix' it:

> My conclusions from all that:
> 
>   1) This sucks
> 
>   2) At the very least, we should be clear in our API samples tests 
>  which of the three formats we expect - we should only change the 
>  format used in a given part of the API after considering any 
>  compatibility considerations
> 
>   3) We should unify on a single format in the v3 API - IMHO, we should 
>  be explicit about use of the UTC timezone and we should avoid 
>  including microseconds unless there's a clear use case. In other 
>  words, we should use the 'isotime' format.
> 
>   4) The 'xmltime' format is just a dumb historical mistake and since 
>  XML support is now firmly out of favor, let's not waste time 
>  improving the timestamp situation in XML.
> 
>   5) We should at least consider moving to a single format in the v2 
>  (JSON) API. IMHO, moving from strtime to isotime for fields like 
>  created_at and updated_at would be highly unlikely to cause any 
>  real issues for API users.

Having dealt with timestamp issues in several other (Python) projects
I've come to the following conclusions:

* datetime values are always stored and transferred in UTC.

* UTC time values are only converted to local time for presentation
purposes.

* time zone info should be explicit (no guessing), therefore all
datetime objects should be "aware" not "naive" and when rendered in
string representation must include the tz offset (not a time zone name,
however, Z for UTC is acceptable).

* sub-second resolution is often useful and should be supported, but is
not mandatory.
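
A minimal sketch of those conventions (using pytz purely for
illustration; it is not something this thread prescribes):

from datetime import datetime

import pytz

now = datetime.now(pytz.utc)               # aware, stored/transferred in UTC
print(now.isoformat())                     # e.g. 2014-05-07T14:03:09.123456+00:00
print(now.strftime('%Y-%m-%dT%H:%M:%SZ'))  # 'isotime' style: explicit Z, no microseconds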


-- 
John



Re: [openstack-dev] Security audit of OpenStack projects

2014-05-02 Thread John Dennis
On 04/07/2014 12:06 PM, Nathan Kinder wrote:
> Hi,
> 
> We don't currently collect high-level security related information about
> the projects for OpenStack releases.  Things like the crypto algorithms
> that are used or how we handle sensitive data aren't documented anywhere
> that I could see.  I did some thinking on how we can improve this.  I
> wrote up my thoughts in a blog post, which I'll link to instead of
> repeating everything here:
> 
>   http://blog-nkinder.rhcloud.com/?p=51
> 
> tl;dr - I'd like to have the development teams for each project keep a
> wiki page updated that collects some basic security information.  Here's
> an example I put together for Keystone for Icehouse:
> 
>   https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
> 
> There would need to be an initial effort to gather this information for
> each project, but it shouldn't be a large effort to keep it updated once
> we have that first pass completed.  We would then be able to have a
> comprehensive overview of this security information for each OpenStack
> release, which is really useful for those evaluating and deploying
> OpenStack.
> 
> I see some really nice benefits in collecting this information for
> developers as well.  We will be able to identify areas of weakness,
> inconsistency, and duplication across the projects.  We would be able to
> use this information to drive security related improvements in future
> OpenStack releases.  It likely would even make sense to have something
> like a cross-project security hackfest once we have taken a pass through
> all of the integrated projects so we can have some coordination around
> security related functionality.
> 
> For this to effort to succeed, it needs buy-in from each individual
> project.  I'd like to gauge the interest on this.  What do others think?
>  Any and all feedback is welcome!

Catching up after having been away for a while.

Excellent write-up Nathan and a good idea.

The only suggestion I have at the moment is that the information
concerning how sensitive data is protected needs more explicit detail.
For example, saying that keys and certs are protected by file system
permissions is not sufficient IMHO.

Earlier this year when I went through the code that generates and stores
certs and keys I was surprised to find a number of mistakes in how the
permissions were set. Yes, they were set, but no, they weren't set
correctly. I'd like to see an explicit listing of the user and group as
well as the modes and SELinux security contexts of directories, files
(including unix sockets). This will not only help other developers
understand best practice but also allow us to understand if we're
following a consistent model across projects.

I realize some may say this falls into the domain of "installers" and
"packaging", but we should get it right ourselves and allow it to serve
as an example for installation scripts that may follow (many of which
just copy the values).
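
As a minimal sketch of the kind of explicitness I mean (the path and
service user here are hypothetical):

import grp
import os
import pwd

key_path = '/etc/keystone/ssl/private/signing_key.pem'  # hypothetical path
uid = pwd.getpwnam('keystone').pw_uid
gid = grp.getgrnam('keystone').gr_gid
os.chown(key_path, uid, gid)   # owner: keystone, group: keystone
os.chmod(key_path, 0o600)      # owner read/write only; nothing for group/other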


-- 
John



Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-29 Thread John Dennis
You may want to take a peek at this recent thread, there may be
information in it you'll find useful.

http://lists.openstack.org/pipermail/openstack-dev/2014-March/029408.html

-- 
John



Re: [openstack-dev] Sprint at Pycon: Port OpenStack to Python 3

2014-04-01 Thread John Dennis
On 04/01/2014 02:08 PM, John Dennis wrote:

>>> My concern is this. The single biggest change in Py2 -> Py3 is
>>> string handling, especially with regards to str vs. unicode. We
>>> have a significant number of bugs in the current code base with
>>> regards to encoding exceptions, I just got done fixing a number of
>>> them, I know there are others.
>>
>> In which OpenStack component?
> 
> For one:
> 
> https://bugs.launchpad.net/keystone/+bug/1292311
> 
> But just looking at a lot of the OpenStack code it's easy to see things
> are going to blow up once you start passing around non-ASCII characters.

Oh almost forgot ...

The openstack log module blows up if you pass a UTF-8 encoded string.

For the LDAP code that limitation meant any logging had to be performed
before encoding, and if any logging had to be done after encoding one
had to be sure values were decoded again before logging. That's very
error prone.

I think the logging module needs to be fixed so that if it receives a
str or bytes object it will assume the encoding is UTF-8 (not default
ASCII) and properly decode it prior to forming the final message.

Not being able to log UTF-8 encoded strings is an accident waiting to
happen (nothing worse than an important message not getting seen because
it got trapped in an encode/decode exception handled (or not handled)
elsewhere).
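
A minimal sketch of the decode step I'm suggesting (a hypothetical
helper, not the actual oslo log code):

def utf8_to_unicode(msg):
    # If handed a Python 2 byte string, assume UTF-8 rather than the
    # default ASCII codec, and never raise on bad input.
    if isinstance(msg, str):
        return msg.decode('utf-8', 'replace')
    return msg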

-- 
John



Re: [openstack-dev] Sprint at Pycon: Port OpenStack to Python 3

2014-04-01 Thread John Dennis
On 04/01/2014 12:28 PM, Victor Stinner wrote:
> Le mardi 1 avril 2014, 09:44:11 John Dennis a écrit :
>>> The goal of the sprint is to port OpenStack components and
>>> OpenStack dependencies to Python 3,
>> 
>> This is a great goal, thank you! But I'm concerned it might be
>> premature.
> 
> The portage is already in progress. There are many components (8
> clients) where the py33 (Python 3.3) gate is voting. We try to keep
> this page up to date:
> 
> https://wiki.openstack.org/wiki/Python3
> 
> There are already a lot of dependencies which are already Python 3
> compatible, and the portage of OpenStack "server" components already
> started.
> 
>> My concern is this. The single biggest change in Py2 -> Py3 is
>> string handling, especially with regards to str vs. unicode. We
>> have a significant number of bugs in the current code base with
>> regards to encoding exceptions, I just got done fixing a number of
>> them, I know there are others.
> 
> In which OpenStack component?

For one:

https://bugs.launchpad.net/keystone/+bug/1292311

But just looking at a lot of the OpenStack code it's easy to see things
are going to blow up once you start passing around non-ASCII characters.

> To be honest, we invest more time on fixing Python 3 issues than on
> adding new tests to check for non-regression. The problem is that
> currently, you cannot even import the Python module, so it's hard to
> run tests and harder to add new tests.


> I hope that it will become easier to run tests on Python 2 and Python
> 3, and to add more tests for non-ASCII data.

Yes, the fact the vast majority of the unit tests only pass ASCII values
is a significant problem.

Most of the problems are data driven. If you don't test with the data
that causes the problems you're not fully testing and that allows a lot
of problems to sneak through.

IMHO code reviews should not permit the inclusion of unit tests which do
not utilize test data containing non-ASCII characters. All existing unit
tests should be updated to use non-ASCII strings.

FWIW my last patch to one of the keystone LDAP unit tests converted
almost all the strings to contain non-ASCII characters. We should be
doing this for a lot of the test code.
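
For example, a minimal sketch of what such a test looks like:

# -*- coding: utf-8 -*-
import unittest

class UserNameTests(unittest.TestCase):
    def test_non_ascii_round_trip(self):
        # Deliberately mix scripts; plain ASCII would hide encoding bugs.
        name = u'J\xf8rgen \u4e16\u754c'
        self.assertEqual(name, name.encode('utf-8').decode('utf-8'))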

> It's not easy to detect Unicode issues using Python 2 since most
> setup are in english, only no test using non-ASCII data right now,
> and Python 2 uses implicit conversion between bytes and Unicode
> strings.

See above.

It's not quite fair to say Py2 implicit conversions mask problems; in
many instances Py2's implicit conversions are the root cause of problems
(mainly because ASCII is the default encoding applied during the
implicit conversion).

> 
> It's much easier to detect Unicode issues using Python 3. I don't
> want to drop Python 2 support, just *add* Python 3 support. The code
> will work on 2.6-3.3.


-- 
John



Re: [openstack-dev] Sprint at Pycon: Port OpenStack to Python 3

2014-04-01 Thread John Dennis
On 04/01/2014 12:15 PM, Victor Stinner wrote:
> Hi,
> 
> Le mardi 1 avril 2014, 09:11:52 John Dennis a écrit :
>> What are the plans for python-ldap? Only a small part of python-ldap is
>> pure python, are you also planning on tackling the CPython code?
> 
> Oh, python-ldap was just an example, I don't have concrete plan for each 
> dependency. We are porting dependencies since some weeks, and many have 
> already a pending patch or pull request:
> https://wiki.openstack.org/wiki/Python3#Dependencies
> 
> I know the Python C API and I know well all the Unicode issues, so I'm not 
> afraid of having to hack python-ldap if it's written in C ;-)
> 
> For your information, I am a Python core developer and I'm fixing Unicode 
> issues in Python since 4 years or more :-) I also wrote a free ebook 
> "Programming with Unicode":
> 
>http://unicodebook.readthedocs.org/

Great! It's wonderful to have someone steering the effort that actually
understands the issues. FWIW, I too have been fixing Python unicode
issues for years as well as using CPython and knowing it intimately.

Your book is good, I've seen it.

My general observation is that i18n is a lot like security: 95% of
developers don't understand it, don't want to deal with it, and have the
mistaken belief they can postpone addressing it until after the "coding
is done" instead of building it in from the very beginning. Security and
internationalization can't be bolted onto the side as an afterthought;
they have to be designed in from the beginning.

Since developers are not going to learn the issues, what I think is
needed is a small set of do's and don'ts. Follow some simple rules and
you'll be mostly O.K.

My simple rules go like this (I think you would concur):

* Every text string *internal* to your code is unicode.

* You encode/decode at the boundaries. Either an API boundary or an I/O
boundary. You must know and understand which encoding will be used at
the boundary and what the boundary requirements are.

* The use of str() should be banned, it's evil. Use six.text_type instead.

O.K. that might be a bit simplistic but it covers a large percentage.
The downside is the existing OpenStack code is nowhere near close to
following even these simple rules.
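
A minimal sketch of these rules in practice:

import io
import six

greeting = u'Gr\xfc\xdfe'          # internal text is always unicode
label = six.text_type(42)          # six.text_type instead of str()

# Encode/decode only at an I/O boundary whose encoding we know (UTF-8 here).
with io.open('/tmp/greeting.txt', 'w', encoding='utf-8') as f:
    f.write(greeting)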



-- 
John



Re: [openstack-dev] Sprint at Pycon: Port OpenStack to Python 3

2014-04-01 Thread John Dennis
On 04/01/2014 09:44 AM, John Dennis wrote:
> FWIW projects that deal with web services, wire protocols, external
> datastores, etc. who have already started porting to Py3 have
> encountered significant pain points with Py3, some of which is just
> being resolved and which have caused on-going changes in Py3. We deal
> with a lot of these same issues in OpenStack.

Oh, almost forgot. One of the significant issues in Py3 string handling
occurs when dealing with the underlying OS, specifically Posix, the
interaction with Posix "objects" such as pathnames, hostnames,
environment values, etc. Virtually any place where in C you would pass a
pointer to char in the Posix API where the intention is you're passing a
character string. Unfortunately Posix does not enforce the concept of a
character or a character string, the pointer to char ends up being a
pointer to octets (e.g. binary data) which means you can end up with
strings that can't be encoded.

Py3 has attempted to deal with this by introducing something called
"surrogate escapes" which attempts to preserve non-encodable binary data
in what is supposed to be a character string so as not to corrupt data
as it transitions between Py3 and a host OS.

OpenStack deals a lot with Posix API's, thus this is another area where
we need to be careful and have clear guidelines. We're going to have to
deal with the whole problem of encoding/decoding in the presence of
surrogates.
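
A quick Python 3 sketch of the mechanism:

# Undecodable bytes from the OS survive a round trip as lone surrogates.
data = b'caf\xff'                               # not valid UTF-8
text = data.decode('utf-8', 'surrogateescape')  # '\udcff' stands in for 0xff
assert text.encode('utf-8', 'surrogateescape') == data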


-- 
John



Re: [openstack-dev] Sprint at Pycon: Port OpenStack to Python 3

2014-04-01 Thread John Dennis
On 04/01/2014 04:40 AM, victor stinner wrote:
> Hi,
> 
> I will organize a sprint to Port OpenStack to Python 3 during 4 days
> at Montreal (Canada) during Pycon Montreal 2014, between April, 14
> (Monday) and April, 17 (Thursday).
> 
> The goal of the sprint is to port OpenStack components and OpenStack
> dependencies to Python 3,

This is a great goal, thank you! But I'm concerned it might be premature.

My concern is this. The single biggest change in Py2 -> Py3 is string
handling, especially with regards to str vs. unicode. We have a
significant number of bugs in the current code base with regards to
encoding exceptions; I just got done fixing a number of them, and I know
there are others. While I was fixing them I searched the OpenStack
coding guidelines to find out what coding practices we're supposed to be
enforcing with regards to non-ASCII strings and discovered there isn't
much; it seems incomplete. Some of it seems based more on speculation
than actual knowledge of defined Python behavior. I'm not sure, but
given we do not have clear guidelines for unicode in Py2, never mind
guidelines that will allow running under both Py2 and Py3, I'm willing
to guess we have little in the gate testing that enforces any string
handling guidelines.

I'm just in the process of finishing up a document to address these
concerns. Unfortunately I'm going to be off-line for several weeks and I
didn't want to start a discussion I couldn't participate in (plus there
are some Py3 issues in the document I need to clean up) so I was going
to wait to post it.

My concern is we need to get our Py2 house in order *before* tackling
Py3 porting. Doing Py3 porting before we have clear guidelines on
unicode, str, bytes, encoding, etc. along with gate tests that enforce
these guidelines is putting the cart before the horse. Whatever patches
come out of a Py3 porting sprint might have to be completely redone.

FWIW projects that deal with web services, wire protocols, external
datastores, etc. which have already started porting to Py3 have
encountered significant pain points, some of which are just now
being resolved and which have caused on-going changes in Py3. We deal
with a lot of these same issues in OpenStack. Before we just start
hacking away I think it would behoove us to first have a very clear and
explicit document on how we're going to address these issues *before* we
start changing code.



-- 
John



Re: [openstack-dev] Sprint at Pycon: Port OpenStack to Python 3

2014-04-01 Thread John Dennis
On 04/01/2014 04:40 AM, victor stinner wrote:
> Hi,
> 
> I will organize a sprint to Port OpenStack to Python 3 during 4 days
> at Montreal (Canada) during Pycon Montreal 2014, between April, 14
> (Monday) and April, 17 (Thursday).
> 
> The goal of the sprint is to port OpenStack components and OpenStack
> dependencies to Python 3, send patches to port as much code as
> possible. If you don't know OpenStack, you may focus more on
> OpenStack dependencies (MySQL-python, python-ldap,  websockify, ...).

What are the plans for python-ldap? Only a small part of python-ldap is
pure python; are you also planning on tackling the CPython code? The
biggest change in Py3 is unicode/str. The biggest pain point in the 2.x
version of python-ldap is unicode <--> utf-8 at the API. Currently with
python-ldap we have to encode most every parameter to utf-8 before
calling python-ldap and then decode the result back from utf-8 to
unicode. I always thought this should have been done inside the
python-ldap binding and that it was a design failure it didn't correctly
handle Python's unicode objects. FWIW the binding relied on CPython's
automatic encoding conversion, which applied the default encoding of
ASCII and caused encoding exceptions; the CPython binding just never
used the correct argument processing in PyArg_ParseTuple() and
PyArg_ParseTupleAndKeywords(), which allows you to specify the desired
encoding (the C API for LDAP specifies UTF-8, as do the RFCs).
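
A minimal Python 2 sketch of the encode-at-the-call-site dance described
above (the server and entries are hypothetical):

import ldap

conn = ldap.initialize('ldap://ldap.example.com')    # hypothetical server
base = u'ou=people,dc=example,dc=com'.encode('utf-8')
flt = u'(cn=Gr\xfc\xdfe*)'.encode('utf-8')           # filter must be UTF-8 bytes too
results = conn.search_s(base, ldap.SCOPE_SUBTREE, flt)
# returned attribute values are UTF-8 byte strings: decode before use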

The Py3 porting work for python-ldap is probably going to have to
address the unicode changes in Py3. If the Py3 port of python-ldap
brings sanity to the unicode <--> utf-8 conversion, that is a
significant API change between the Py2 and Py3 versions of python-ldap,
making calls into the API significantly different between the two. Does
that mean you're also planning on backporting the Py3 changes in
python-ldap to Py2 to keep the API more or less consistent?

FWIW I just spent a long time fixing unicode handling for LDAP and the
patches were just merged. I've also dealt with the unicode issue in
python-ldap in other projects (IPA) and have a lot of familiarity with
the problem. Also, unfortunately for the purpose of this discussion will
be off-line for several weeks starting at the end of the day.

-- 
John



[openstack-dev] Multiple patches in one review

2014-03-24 Thread John Dennis
When a change is complex good practice is to break the change into a
series of smaller individual patches that show the individual
incremental steps needed to get to the final goal. When partitioned into
small steps each change is easier to review and hopefully illustrates
the progression.

In most cases such a series of patches is interdependent and order
dependent; Jenkins cannot run tests on any patch unless the previous
patch has been applied.

I was under the impression gerrit review supported multiple commits. In
fact you can submit multiple commits with a single "git review" command.

But from that point forward it appears as if each commit is handled
independently rather than as an ordered list of commits grouped together
sharing a single review where their relationship is explicit. Also, the
Jenkins tests either need to apply all the commits in sequence and then
run the test, or run the test after applying each commit in the
sequence.

Can someone provide some explanation on how to handle this situation?

Or perhaps I'm just not understanding how the tools work when multiple
commits are submitted.

-- 
John



Re: [openstack-dev] [keystone] 5 unicode unit test failures when building Debian package

2014-03-13 Thread John Dennis
On 03/13/2014 12:31 AM, Thomas Goirand wrote:
> Hi,
> 
> Since Havana, I've been ignoring the 5 unit test failures that I always
> get. Though I think it'd be nice to have them fixed. The log file is
> available over here:
> 
> https://icehouse.dev-debian.pkgs.enovance.com/job/keystone/59/console
> 
> Does anyone know what's going on? It'd be nice if I could solve these.

I've been fixing unicode errors in keystone (not these however). Please
open a bug for these and you can assign the bug to me.


-- 
John



Re: [openstack-dev] testr help

2014-03-12 Thread John Dennis
On 03/12/2014 01:22 PM, Zane Bitter wrote:
> On 10/03/14 20:29, Robert Collins wrote:
>> Which bits look raw? It should only show text/* attachments, non-text
>> should be named but not dumped.
> 
> I was thinking of the:
> 
> pythonlogging:'': {{{
> 
> part.

Yes, this is the primary culprit; its output obscures almost everything
else concerning test results. Sometimes it's essential information.
Therefore you should be able to control whether it's displayed or not.


-- 
John



Re: [openstack-dev] testr help

2014-03-10 Thread John Dennis
On 03/10/2014 02:31 PM, Zane Bitter wrote:
> Fewer logs is hardly ever what you want when debugging a unit test.
> 
> I think what John is looking for is a report at the end of each test run 
> that just lists the tests that failed instead of all the details (like 
> `testr failing --list`), or perhaps the complete list of tests with the 
> coloured pass/fail like IIRC nose does. Since testr's killer feature is 
> to helpfully store all of the results for later, maybe this output is 
> all you need in the first instance (along with a message telling the 
> user what command to run to see the full output, of course), or at least 
> that could be an option.

> It sounds like what John wants to do is pass a filter to something like 
> `testr failing` or `testr last` to only report a subset of the results, 
> in much the same way as it's possible to pass a filter to `testr` to 
> only run a subset of the tests.

Common vocabulary is essential to discuss this. A test result as emitted
by subunit is a metadata collection indexed by keys. To get the list of
failing tests one iterates over the set of results and looks for the
absense of the "successful" key in the result. That's the set of test
failures, as such it's a filter on the test results.

Therefore I see filtering as the act of producing a subset of the test
results (i.e, only those failing, or only those whose names match a
regexp, or the intersection of those). That is a filtered result set.
The filtering is performed by examining the key/value in each result
metadata to yield a subset of the results.

Next you have to display that filtered result set. When each result is
displayed one should be able to specify which pieces of metadata get
displayed. In my mind that's not filtering, it's a display option. One
common display option would be to emit only the test name. Another might
be to display the test name and the captured log data. As it stands now
it seems to display every piece of metadata in the result which is what
is producing the excessive verbosity.
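
A minimal illustration of the filter/display split (plain Python over a
hypothetical list of result dicts, not the actual subunit API):

results = [
    {'id': 'test_a', 'status': 'success'},
    {'id': 'test_b', 'status': 'fail', 'pythonlogging': '...', 'traceback': '...'},
]

failing = [r for r in results if r['status'] != 'success']  # filtering
for r in failing:
    print(r['id'])          # display option: emit only the test name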

Hopefully I'm making sense, yes/no?

-- 
John



Re: [openstack-dev] testr help

2014-03-07 Thread John Dennis
On 03/07/2014 04:30 PM, Clark Boylan wrote:

> Now onto indirectly answering some of the questions you have. If a
> single unittest is producing thousands of lines of log output that is
> probably a bug. The test scope is too large or the logging is too
> verbose or both. To work around this we probably need to make the
> fakeLogger fixture toggleable with configurable log level. Then you
> could do something like `OS_LOG_LEVEL=error tox` and avoid getting all
> of the debug level logs.
> 
> For examining test results you can `testr load $SUBUNIT_LOG_FILE` then
> run commands like `testr last`, `testr failing`, `testr slowest`, and
> `testr stats` against that run (if you want details on the last run
> you don't need an explicit load). There are also a bunch of tools that
> come with python-subunit like subunit-filter, subunit2pyunit,
> subunit-stats, and so on that you can use to do additional processing.

Thanks Clark!! That's great information; it's helping me put the pieces
together.

Here is where I think I'm stuck. The subunit test result stream seems to
have "details" which it attaches to the test result. For the failing
tests it often includes:

pythonlogging
stacktrace

Both very useful pieces of information. I don't want to get rid of them
in the test results because I'll make use of them while fixing problems.
Instead what I want to do is toggle the display of this extra detail
information on or off when I run any of the commands that show me test
results.

What I can't figure out is how to toggle the display of pythonlogging
and stacktrace on or off so that I can either get a simple summary or
full detail (but only when I need it).

I thought subunit-filter might be the right tool, but I don't see anyway
to toggle various pieces of detail data using that tool.

Once again, thanks for your help.

-- 
John



Re: [openstack-dev] testr help

2014-03-07 Thread John Dennis
On 03/07/2014 04:33 PM, Steve Kowalik wrote:
> On 07/03/14 12:56, John Dennis wrote:
>> Question: How do you list just the failing tests? I don't want to see
>> the contents of the logging data stored under the pythonlogging: key.
>> Ideally I'd like to see the name of the test, what the failure was, and
>> possibly the associated stacktrace. Should be simple right? But I can't
>> figure it out.
> 
> "testr failing" or "testr failing --list". See also "testr run --failing".

Thanks, but those are exactly the problematic commands. The issue is the
output contains huge amounts of log data that obscures everything else.

From what I can figure out the test result contains "details" which is a
set of additional information. I see two pieces of detail information

pythonlogging
traceback

It's valuable information but I need to omit it in order to see the list
of failures.

So what I'm trying to figure out is how do I toggle the display of the
detail information so I can just get a summary of the failing tests.


-- 
John



[openstack-dev] testr help

2014-03-07 Thread John Dennis
I've read the following documents as well as the doc for subunit and
testtools but I'm still missing some big picture usage.

https://wiki.openstack.org/wiki/Testr
https://testrepository.readthedocs.org/en/latest/index.html

The biggest problem seems to be whenever tests are listed I get
thousands of lines of logging information and any information about the
test is obscured by the enormous volume of logging data.

From what I can figure out the files in .testrepository are in subunit
version 1 protocol format. It seems to be a set of key/value pairs where
the key is the first word on the line followed by a colon. It seems like
one should be able to list just certain keys.

Question: How do you list just the failing tests? I don't want to see
the contents of the logging data stored under the pythonlogging: key.
Ideally I'd like to see the name of the test, what the failure was, and
possibly the associated stacktrace. Should be simple right? But I can't
figure it out.

Question: Suppose I'm debugging why a test failed. This is the one time
I actually do want to see the pythonlogging data, but only for exactly
the test I'm interested in. How does one do that?

Question: Are there any simple how-tos or any cohesive documentation?
I've read everything I can find but really simple tasks seem to elude
me. The suite is composed of implementations from testtools, subunit and
testr, each of which has decent doc but it's not always clear how these
pieces fit together into one piece of functionality. OpenStack seems to
have added something into the mix with the capture of the logging
stream, something which is not covered anywhere in the testtools,
subunit nor testr doc that I can find. Any hints, suggestions or
pointers would be deeply appreciated.

-- 
John



Re: [openstack-dev] [nova] 'if foo' vs 'if foo is not None'

2014-03-03 Thread John Dennis
On 03/03/2014 10:00 AM, Matthew Booth wrote:
> PEP 8, under 'Programming Recommendations' recommends against implicit
> comparison to None. This isn't just stylistic, either: we were actually
> bitten by it in the VMware driver
> (https://bugs.launchpad.net/nova/+bug/1262288). The bug was hard to
> spot, and had consequences to resource usage.
> 
> However, implicit comparison to None seems to be the default in Nova. Do
> I give up mentioning this in reviews, or is this something we care about?

IMHO testing for None is correct provided you've been disciplined in the
rest of your coding. None has a very specific meaning; it's not just a
truth value [1]. My interpretation is that None represents the
"Undefined Value" [2], which is semantically different from values which
evaluate to False (i.e. False, 0, empty sequence).
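
A minimal sketch of why the distinction matters:

items = []                 # defined, but empty

if items is not None:      # True: the value is defined
    pass
if not items:              # also True: empty sequences are falsy
    pass
# A bare 'if items:' cannot tell "undefined" apart from "defined but empty".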

[1] http://docs.python.org/2/library/stdtypes.html#truth-value-testing

[2] Technically None is not an undefined value, however common
programming paradigms ascribe the undefined semantic to None.
-- 
John



[openstack-dev] [keystone] how to enable logging for unit tests

2014-02-28 Thread John Dennis
I'd like to enable debug logging while running some specific unit tests
and I've not been able to find the right combination of levers to pull
to get logging output on the console.

In keystone/etc/keystone.conf.sample (which is config file loaded for
the unit tests) I've set debug to True, I've verified CONF.debug is true
when the test executes. I've also tried setting log_file and log_dir to
see if I could get logging written to a log file instead, but no luck.

I have noticed when a test fails I'll see all the debug logging emitted
in between

{{{
}}}

which I think is something testtools is doing.

This leads me to the theory testtools is somehow consuming the logging
output. Is that correct?

How do I get the debug logging to show up on the console during a test run?

-- 
John



Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-02-20 Thread John Dennis
On 02/19/2014 08:58 PM, Adam Young wrote:
>> Can you give more detail here? I can see arguments for both ways of
>> doing this but continuing to use ids for ownership is an easier
>> choice. Here is my thinking:
>>
>> 1. all of the projects use ids for ownership currently so it is a
>> smaller change
> That does not change.  It is the hierarchy that is labeled by name.
> 
>> 2. renaming a project in keystone would not invalidate the ownership
>> hierarchy (Note that moving a project around would invalidate the
>> hierarchy in both cases)
>>
> Renaming would not change anything.
> 
> I would say the rule should be this:  Ids are basically uuids, and are
> immutable.  Names a mutable.  Each project has a parent Id.  A project
> can either be referenced directly by ID, oir hierarchically by name.  In
> addition, you can navigate to a project by traversing the set of ids,
> but you need to know where you are going.  THus the array
> 
> ['abcd1234',fedd3213','3e3e3e3e'] would be a way to find a project, but
> the project ID for the lead node would still be just '3e3e3e3e'.

The analogy I see here is the unix file system, which is organized into
a tree structure by inodes; each inode has a name (technically it can
have more than one name). But the fundamental point is the structure is
formed by ids (i.e. inodes); the path name of a file is transitory and
depends only on what name is bound to the id at the moment. It's a very
rich and powerful abstraction. The same concept is used in many database
schemas: an object has a primary key, which is numeric, and a name. You
can change the name easily without affecting any references to the id.
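
A tiny sketch of the idea:

projects = {'3e3e3e3e': {'name': 'dev', 'parent_id': 'fedd3213'}}
projects['3e3e3e3e']['name'] = 'development'  # rename: the immutable id,
                                              # and every reference to it,
                                              # is unchanged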



-- 
John



Re: [openstack-dev] [Neutron] Representing PEM Format file as string

2014-01-27 Thread John Dennis
On 01/26/2014 05:36 PM, rajesh_moh...@dell.com wrote:
> I am working on SSL VPN BP.
> 
> CA certificate is one of the resources. We decided to use PEM formatted 
> certificates. It is multi-line string 
> 
>   1 -----BEGIN CERTIFICATE-----
>   2 MIID3TCCA0agAwIBAgIJAKRWnul3NJnrMA0GCSqGSIb3DQEBBQUAMIGmMQswCQYD
>  
>  21 0vO728pEcn6QtOpU7ZjEv8JLKRHwyq8kwd8gKMflWZRng4R2dj3cdd24oYJxn5HW
>  22 atXnq+N9H9dFgMfw5NNefwJrZ3zAE6mu0bAIoXVsKT2S
>  23 -----END CERTIFICATE-----
> 
> Is there a standard way to represent this as single line string? Maybe there 
> is some other project that passes certificates on command line/url. 
> 
> I am looking for some accepted way to represent PEM formatted file on command 
> line. 
> 
> I am thinking of concatenating all lines into a single string and
> rebuilding the file when the configuration file is generated. Will we
> hit any CLI size limitations if we pass long strings?

In general PEM formatted certificates and other X509 binary data objects
should be exchanged in the original PEM format for interoperability
purposes. For command line tools it's best to pass PEM objects via a
filename.

However, having said that, there is at least one place in OpenStack
which passes PEM data via an HTTP header and/or URL: the Keystone token
id, which is a binary CMS object normally exchanged in PEM format.
Keystone strips the PEM header and footer, strips line endings, and
modifies one of the base64 alphabet characters which was incompatible
with HTTP and URL encoding. However, what keystone was doing was not
correct and in fact did not follow an existing RFC (e.g. URL safe
base64).
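
For reference, a quick sketch of the URL-safe base64 (RFC 4648) behavior
in question:

import base64

der = b'\x30\x82\x03\xdd\x30\x82\x03\x46'  # hypothetical leading DER bytes
token = base64.urlsafe_b64encode(der)      # '+' and '/' become '-' and '_'
assert base64.urlsafe_b64decode(token) == der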

I fixed these problems and in the process wrote two small Python modules,
base64utils and pemutils, to do PEM transformations correctly (plus
general utilities for working with base64 and PEM data). These were
submitted to both keystone and oslo, to Oslo on the assumption they
should be general purpose utilities available to all of OpenStack. I
believe these have languished in review purgatory; because I was pulled
off to work on other issues I haven't had the time to babysit the
reviews.


-- 
John



Re: [openstack-dev] a "common" client library

2014-01-17 Thread John Dennis
>> Keeping them separate is awesome for *us* but really, really, really
>> sucks for users trying to use the system. 
> 
> I agree. Keeping them separate trades user usability for developer
> usability, I think user usability is a better thing to strive for.

I don't understand how multiple independent code bases with a lot of
overlapping code/logic are a win for developers. The more we can move to
single shared code the easier code comprehension and maintenance
become. From a software engineering perspective the amount of
duplicated code/logic in OpenStack is worrisome. Iterating towards
common code seems like a huge developer win as well as greatly enhancing
robustness in the process.

-- 
John



Re: [openstack-dev] Introduction: Rich Megginson - Designate project

2014-01-16 Thread John Dennis
On 01/15/2014 08:24 PM, Rich Megginson wrote:
> Hello.  My name is Rich Megginson.  I am a Red Hat employee interested 
> in working on Designate (DNSaaS), primarily in the areas of integration 
> with IPA DNS, DNSSEC, and authentication (Keystone).
> 
> I've signed up for the openstack/launchpad/gerrit accounts.
> 
> Be seeing you (online).

Welcome aboard Rich! Great to have an excellent developer with
tremendous experience join the crew.


-- 
John



Re: [openstack-dev] Split of the openstack-dev list

2013-11-15 Thread John Dennis
On 11/14/2013 07:54 PM, Caitlin Bestler wrote:
> I would suggest that each *established* project (core or incubator) have 
> its own mailing list, and that "openstack-dev" be reserved for
> topics of potential interest across multiple projects (which new 
> projects would qualify as).

+1

This seems very sensible to me. When I first started on OpenStack I was
quite surprised *all* the diverse projects were jammed into one list
with no easy way to filter what you're working on as a developer.

After joining openstack-dev my email load has ballooned and it's
seriously affecting my productivity. The vast majority of the email
traffic is immediately deleted because I can't follow nuanced technical
discussions on parts of OpenStack I'm likely never to code in.

Having a seperate mailing list for each project seems to offer the best
of all worlds.

* Subscribe only to what you need to
* Subscribe to everything
* Easy filtering (because you can filter on the mailing list)

Keep openstack-dev for issues which cross project domains.

-- 
John



Re: [openstack-dev] Re : welcoming new committers (was Re: When is it okay for submitters to say 'I don't want to add tests' ?)

2013-11-04 Thread John Dennis
On 10/31/2013 10:36 PM, Jeremy Stanley wrote:
> As has been said many times already, OpenStack does not lack
> developers... it lacks reviewers.

In regards to reviews in general and in particular for welcoming new
committers I think we need to be careful about reviewers NAK'ing a
submission for what is essentially bikeshedding [1]. Reviewers should
focus on code correctness and adherence to required guidelines and not
NAK a submission because the submission offends their personal coding
preferences [2].

If a reviewer thinks the code would be better with changes which do not
affect correctness and are more in the vein of "style" modifications
they should make helpful suggestions but give the review a 0 instead of
actually NAK'ing the submission. NAK'ed reviews based on style issues
force the submitter to adhere to someone else's unsubstantiated opinion
and slows down the entire contribution process while submissions are
reworked multiple times without any significant technical change. It's
also demoralizing for submitters to have their contributions NAK'ed for
reasons that are issues of opinion only, the submitter has to literally
submit [3].

[1] http://en.wiktionary.org/wiki/bikeshedding

[2] Despite the best attempts of computer science researchers over the
years software development remains more of a craft than a science with
unambiguous rules yielding exactly one solution. Often there are many
valid approaches to solve a particular coding problem, the selection of
one approach often boils down to the personal preferences of the
craftsperson. This does not diminish the value of coding guidelines
gleaned from years of analyzing software issues, what it does mean is
those guidelines still leave plenty of room for different approaches and
no one is the arbiter of the "one and only correct way".

[3] to give over or yield to the power or authority of another.

-- 
John



Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread John Dennis
On 10/25/2013 03:43 PM, Robert Collins wrote:
> On 26 October 2013 08:40, Dolph Mathews  wrote:
>>
>> On Thu, Oct 24, 2013 at 1:48 PM, Robert Collins 
>> wrote:
>>>
>>>
>>> *) They help casual contributors *more* than long time core
>>> contributors : and those are the folk that are most likely to give up
>>> and walk away. Keeping barriers to entry low is an important part of
>>> making OpenStack development accessible to new participants.
>>
>>
>> This is an interesting point. My reasoning for removing them was that I've
>> never seen *anyone* working to maintain them, or to add them to files where
>> they're missing. However, I suspect that the users benefiting from them
>> simply aren't deeply enough involved with the project to notice or care
>> about the inconsistency?
> 
> Thats my hypothesis too.
> 
>> I'm all for low barriers of entry, so if there's
>> any evidence that this is true, I'd want to make them more prolific.
> 
> I'm not sure how to gather evidence for this, either for or against ;(.

vim and its cousins constitute only a subset of popular editors. Emacs
is quite popular and it requires a different syntax; it also requires
the per-file variables to be on the 1st line (or the 2nd line if there
is a shell interpreter line on the 1st). In Emacs you can also use
"Local Variables" comments at the end of the file (a location many will
not see, or may cause to move during editing). So I don't see how vim
and emacs specifications will coexist nicely and stay that way
consistently.
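
For illustration, the two syntaxes in question:

Emacs (must be on the 1st or 2nd line):
# -*- coding: utf-8; mode: python -*-

vim (recognized near the top or bottom of the file):
# vim: tabstop=4 shiftwidth=4 softtabstop=4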

And what about other editors? Where do you stop?

My personal feeling is you need to have enough awareness to configure
your editor correctly to contribute to a project. It's your
responsibility and our gate tools will hold you to that promise.

Let's just remove the mode lines; they really don't belong in every file.


-- 
John



Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-18 Thread John Dennis
On 10/18/2013 12:57 PM, Doug Hellmann wrote:
> 
> 
> 
> On Thu, Oct 17, 2013 at 2:24 PM, John Dennis wrote:
> 
> On 10/17/2013 12:22 PM,  Luis A. Garcia wrote:
> > On 10/16/2013 1:11 PM, Doug Hellmann wrote:
> >>
> >> [snip]
> >> Option 3 is closer to the new plan for Icehouse, which is to have _()
> >> return a Message, allow Message to work in a few contexts like a
> string
> >> (so that, for example, log calls and exceptions can be left
> alone, even
> >> if they use % to combine a translated string with arguments), but
> then
> >> have the logging and API code explicitly handle the translation of
> >> Message instances so we can always pass unicode objects outside of
> >> OpenStack code (to logging or to web frameworks). Since the
> logging code
> >> is part of Oslo and the API code can be, this seemed to provide
> >> isolation while removing most of the magic.
> >>
> >
> > I think this is exactly what we have right now inherited from Havana.
> > The _() returns a Message that is then translated on-demand by the API
> > or in a special Translation log handler.
> >
> > We just did not make Message look and feel enough like a str() and
> some
> > outside components (jsonifier in Glance and log Formatter all
> over) did
> > not know how to handle non text types correctly when non-ascii
> > characters were present.
> >
> > I think extending from unicode and removing all the implementations in
> > place such that the unicode implementation kick in for all magic
> methods
> > will solve the problems we saw at the end of Havana.
> 
> I'm relatively new to OpenStack so I can't comment on prior OpenStack
> implementations but I'm a long standing veteran of Python i18n issues.
> 
> What you're describing sounds a lot like problems that result from the
> fact Python's default encoding is ASCII as opposed to the more sensible
> UTF-8. I have a long write up on this issue from a few years ago but
> I'll cut to the chase. Python will attempt to automatically encode
> Unicode objects into ASCII during output, which will fail if there are
> non-ASCII code points in the Unicode. Python does this in two
> distinct contexts depending on whether the destination of the output
> is a file or a terminal. If it's a terminal it attempts to use the
> encoding associated with the TTY. Hence you can get different results
> if you output to a TTY or a file handle.
> 
> 
> That was related to the problem we had with logging and Message instances.
>  
> 
> 
> The simple solution to many of the encoding exceptions that Python will
> throw is to override the default encoding and change it to UTF-8. But
> the default encoding is locked by site.py due to internal Python string
> optimizations which cache the default encoded version of the string so
> the encoding happens only once. Changing the default encoding would
> invalidate cached strings and there is no mechanism to deal with that,
> that's why the default encoding is locked. But you can change the
> default encoding using this trick if you do it early enough during the
> module loading process:
> 
> 
> I don't think we want to force the encoding at startup. Setting the
> locale properly through the environment and then using unicode objects
> also solves the issue without any startup timing issues, and allows
> deployers to choose the encoding for output.


Setting the locale only solves some of the problems because the locale
is only respected some of the time. The discrepancies and
inconsistencies in how Unicode conversion occurs in Python2 are
maddening and one of the worst aspects of Python2. It was never
carefully thought out; Unicode in Python2 is basically a bolted-on hack
that only works if every piece of code plays by the exact same rules,
which of course they don't and never will. I can almost guarantee that
unless you attack this problem at the core you'll continue to get
bitten. Either code is encoding-aware and explicitly forces a codec
(presumably utf-8), or the code is encoding-naive and allows the default
encoding to be applied, except when the locale is respected, which
overrides the default encoding for the naive case.
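
To make the distinction concrete, here is a minimal Python2 sketch of
the two styles (the sample string is arbitrary):

# -*- coding: utf-8 -*-
u = u'caf\u00e9'

# Encoding-aware: the codec is chosen explicitly at the boundary, so
# the result is the same no matter where the output goes.
data = u.encode('utf-8')

# Encoding-naive: the implicit conversion falls back to the default
# codec (ASCII in Python2) and blows up on the non-ASCII code point.
try:
    str(u)
except UnicodeEncodeError as e:
    print 'implicit conversion failed: %s' % e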

When Python3 was being worked on, one of the major objectives was to
clean up the horrible state of strings and unicode in Python2. Python3,
to the best of my knowledge, has gotten it right. What's the default
encoding in Python3? UTF-8.

Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-17 Thread John Dennis
On 10/17/2013 12:22 PM,  Luis A. Garcia wrote:
> On 10/16/2013 1:11 PM, Doug Hellmann wrote:
>>
>> [snip]
>> Option 3 is closer to the new plan for Icehouse, which is to have _()
>> return a Message, allow Message to work in a few contexts like a string
>> (so that, for example, log calls and exceptions can be left alone, even
>> if they use % to combine a translated string with arguments), but then
>> have the logging and API code explicitly handle the translation of
>> Message instances so we can always pass unicode objects outside of
>> OpenStack code (to logging or to web frameworks). Since the logging code
>> is part of Oslo and the API code can be, this seemed to provide
>> isolation while removing most of the magic.
>>
> 
> I think this is exactly what we have right now inherited from Havana.
> The _() returns a Message that is then translated on-demand by the API 
> or in a special Translation log handler.
> 
> We just did not make Message look and feel enough like a str() and some 
> outside components (jsonifier in Glance and log Formatter all over) did 
> not know how to handle non text types correctly when non-ascii 
> characters were present.
> 
> I think extending from unicode and removing all the implementations in 
> place such that the unicode implementation kick in for all magic methods 
> will solve the problems we saw at the end of Havana.

I'm relatively new to OpenStack so I can't comment on prior OpenStack
implementations, but I'm a long-standing veteran of Python i18n issues.

What you're describing sounds a lot like problems that result from the
fact Python's default encoding is ASCII as opposed to the more sensible
UTF-8. I have a long write-up on this issue from a few years ago but
I'll cut to the chase. Python will attempt to automatically encode
Unicode objects into ASCII during output, which will fail if there are
non-ASCII code points in the Unicode. Python does this in two distinct
contexts depending on whether the destination of the output is a file or
a terminal. If it's a terminal it attempts to use the encoding
associated with the TTY. Hence you can get different results if you
output to a TTY or a file handle.
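
A small Python2 script makes the two contexts easy to reproduce (the
script name and sample string are mine, purely for illustration):

# -*- coding: utf-8 -*-
# demo.py
import sys

u = u'caf\u00e9'
# On a UTF-8 terminal sys.stdout.encoding is 'UTF-8' and the print
# succeeds; when stdout is a pipe or a file the encoding is None, the
# ASCII default kicks in, and print raises UnicodeEncodeError.
sys.stderr.write('stdout encoding: %r\n' % sys.stdout.encoding)
print u

Running "python demo.py" on a UTF-8 terminal and then
"python demo.py | cat" shows the two behaviors side by side.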

The simple solution to many of the encoding exceptions that Python will
throw is to override the default encoding and change it to UTF-8. But
the default encoding is locked by site.py due to internal Python string
optimizations which cache the default encoded version of the string so
the encoding happens only once. Changing the default encoding would
invalidate cached strings and there is no mechanism to deal with that,
that's why the default encoding is locked. But you can change the
default encoding using this trick if you do it early enough during the
module loading process:

import sys
reload(sys)
sys.setdefaultencoding('utf-8')

This works because site.py deletes setdefaultencoding from the sys
module, but after reloading sys it's available again. One can also use a
tiny CPython module to set the default encoding without having to use
the sys reload trick. The following illustrates the reload trick:

$ python
Python 2.7.3 (default, Aug  9 2012, 17:23:57)
[GCC 4.7.1 20120720 (Red Hat 4.7.1-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.setdefaultencoding('utf-8')
Traceback (most recent call last):
  File "", line 1, in 
AttributeError: 'module' object has no attribute 'setdefaultencoding'
>>> reload(sys)

>>> sys.setdefaultencoding('utf-8')
>>> sys.getdefaultencoding()
'utf-8'


Not fully understanding the role of Python's default encoding, and how
its application differs between terminal and non-terminal output, can
cause a lot of confusion and misunderstanding which can sometimes lead
to false conclusions as to what is going wrong.

If I get a chance I'll try to publicly post my write-up on Python i18n
issues.


-- 
John



Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-06 Thread John Dennis
On 08/06/2013 11:19 AM, Dean Troyer wrote:
> And that is the crux of the problem.  When both can be installed
> side-by-side and sys.path is in control of the order, things work.
> This is such a fundamental problem in the distro that I am beginning
> to think that we need to address it ourselves.  Everything else is
> treating symptoms.

Are you aware of "Software Collections"? It's relatively new.

What Are Software Collections?

Software Collections is a way to concurrently install multiple versions
of specific software on the same system without affecting standard
software packages that are installed on the system with the classic RPM
package manager.


http://developerblog.redhat.com/2013/01/28/software-collections-on-red-hat-enterprise-linux/

http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/Packagers_Guide/chap-Packagers_Guide-Introducing_Dynamic_Software_Collections.html

https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/1/html/Software_Collections_Guide/
-- 
John



Re: [openstack-dev] common codes

2013-07-16 Thread John Dennis
On 07/16/2013 12:56 AM, Zhongyue Luo wrote:
> Gareth,
> 
> https://wiki.openstack.org/wiki/Oslo#Principles
> 
> I believe this link will answer most of your answers.

This is very topical; we've just been having a closely related discussion.

There is another class of common code and we're not sure if the Oslo
principles covers it, let me explain:

Many of the openstack projects are divided into client and server
packages. There is often common code shared between both the client and
server packages. Current practice appears to be cut-n-paste copying of
common code between the client and server components. This is poor
software practice, it makes maintenance difficult. Bug fixes applied to
one side are inadvertently missed on the other side. When making feature
enhancements one must be careful to update all copies of the code.

The oslo principles seem to require code that is generally useful across
projects, which seems to rule out the intra-project common code that is
not shared across projects.

Is oslo the location for intra-project common code or not?

Or is a layout for project XXX like this more appropriate?

XXX
python-XXXclient
XXX-common

John




Re: [openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.

2013-07-02 Thread John Dennis
On 07/01/2013 05:52 PM, Clint Byrum wrote:
> Last week I went to use oslo.config in a utility I am writing called
> os-collect-config[1]...
> 
> While running unit tests on the main() method that is used for the CLI,
> I was surprised to find that my unit tests were picking up values from
> a config file I had created just as a test. The tests can be fixed to
> disable config file lookups, but what was more troublesome was that the
> config file was overriding values I was passing in as sys.argv.
> 
> I have read the thread[2] which suggest that CLI should defer to config
> file because config files are somehow less permanent than the CLI.
> 
> I am writing today to challenge that notion, and also to suggest that even
> if that is the case, it is inappropriate to have oslo.config operate in
> such a profoundly different manner than basically any other config library
> or system software in general use. CLI options are _for config files_
> and if packagers are shipping configurations in systemd unit files,
> upstart jobs, or sysvinits, they are doing so to control the concerns
> of that particular invocation of whatever command they are running,
> and not to configure the software entirely.
> 
> CLI args are by definition ephemeral, even if somebody might make them
> "permanent" in their system, I doubt any packager would then expect that
> these CLI args would be overridden by any config files. This default is
> just wrong, and needs to be fixed.

+1

When I read "Option values in config files override those on the command
line." in the cfg.py docstring I thought surely that must be a typo
because it's the opposite of years of established practice.

I think the following captures the expected behavior

> I was also confused by the ordering in this list, though when I read more 
> carefully it seems to agree with me:
> 
>> - Default value in source code
>> - Overridden by value in config file
>> - Overridden by value in environment variable
>> - Overridden by value given as command line option
> 
> I'd like to rewrite that list as
> 
> - Value given as a command line option
> - Failing that, value in environment variable
> - Failing that, value in config file
> - Failing that, default value in source code 
> 

John





Re: [openstack-dev] The future of run_tests.sh

2013-06-18 Thread John Dennis
On 06/18/2013 04:59 AM, Thierry Carrez wrote:
> The issue here is not really the burden of maintaining an alternate
> testing method in the tree... it's that by default newcomers use the
> alternate method rather than the one used by our CI infrastructure,
> which should be their default choice.
> 
> Personally I would introduce a TESTING file that would describe all
> available methods for running tests, and rename/move run_tests.sh to
> something less discoverable... so that people looking for ways to run
> tests would find TESTING first.

+1

As a developer newly assigned to keystone work I will echo the value of
the run_tests.sh script because of its obvious presence and utility. A
TESTING file should be mandatory as well. I immediately found the
run_tests.sh script but fumbled around for a while until I found all
the information I needed to run the tests (despite already being
familiar with nose). It also took me a while to figure out how to run an
individual test (mostly because I assumed one had to include the tests
directory in either a test pathname or module path), but run_tests.sh
apparently points nose into the test directory. Had there been a TESTING
readme file it would have definitely saved me time and frustration.

I had to learn about tox after a suggestion in IRC.

Coming from a distro perspective I prefer not to see non-distro items
being installed (venv has worked well though). And I definitely like
being able to drop into the debugger; nose has served me well in the
past and I like it.

John


