[openstack-dev] FreeBSD host support

2014-10-18 Thread Roman Bogorodskiy
Hi,

In the discussion of this spec proposal:
https://review.openstack.org/#/c/127827/ Joe Gordon suggested
starting a discussion on the mailing list.

So I'll share my thoughts and a long-term plan for adding FreeBSD host
support to OpenStack.

The ultimate goal is to allow using libvirt/bhyve as a compute driver.
However, I think it would be reasonable to start with libvirt/qemu
support first, as it will help prepare the ground.

High level overview of what needs to be done:

 - Nova
  * linux_net needs to be refactored to allow plugging in FreeBSD
support (that's what the spec linked above is about)
  * nova.virt.disk.mount needs to be extended to support FreeBSD's
mdconfig(8), in a similar way to Linux's losetup (see the sketch
after this list)
 - Glance and Keystone
These components are fairly free of system specifics. Most likely
they will require only small fixes, like the one I made for Glance:
https://review.openstack.org/#/c/94100/
 - Cinder
I didn't look closely at Cinder from a porting perspective, tbh.
Obviously, it'll need a backend driver that works on FreeBSD,
e.g. ZFS; I've seen some ZFS patches floating around, though.
Also, I think it'll need an implementation for FreeBSD's iSCSI
stack, because FreeBSD has its own stack rather than stgt. On the
other hand, Cinder is not required for a minimal installation, so
that could be done after adding support for the other components.
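
To illustrate the mdconfig(8) item above, here is a rough sketch of
what such an image-mapping helper could look like; the class and
method names are hypothetical (not the actual nova.virt.disk.mount
API), and only the mdconfig invocations reflect the real tool:

    # Hypothetical sketch, not the actual nova.virt.disk.mount API.
    import subprocess

    class FreeBSDMdMount(object):
        """Map a disk image to a block device via mdconfig(8),
        analogous to what the Linux loop driver does with losetup."""

        def __init__(self, image_path):
            self.image_path = image_path
            self.device = None

        def get_dev(self):
            # 'mdconfig -a -t vnode -f <image>' attaches the image and
            # prints the md device name, e.g. "md0".
            out = subprocess.check_output(
                ['mdconfig', '-a', '-t', 'vnode', '-f', self.image_path])
            self.device = '/dev/' + out.strip().decode()
            return self.device

        def unget_dev(self):
            # 'mdconfig -d -u <unit>' detaches the memory disk.
            if self.device:
                unit = self.device[len('/dev/md'):]
                subprocess.check_call(['mdconfig', '-d', '-u', unit])
                self.device = None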

Also, it's worth mentioning that a discussion on this topic has already
happened on this mailing list:

http://lists.openstack.org/pipermail/openstack-dev/2014-March/031431.html

Some of the limitations mentioned there have been resolved since then;
specifically, libvirt/bhyve no longer limits the number of disk and
Ethernet devices.

Roman Bogorodskiy




Re: [openstack-dev] [api] API recommendation

2014-10-18 Thread Jay Pipes
Sorry for top-posting, but Salvatore and Dean, you might find my
proposed vNext Compute API interesting, since it does what you describe
you'd like to see with regard to task-based APIs:

http://docs.oscomputevnext.apiary.io/

Specifically, the schemas and endpoints for tasks and subtasks:

http://docs.oscomputevnext.apiary.io/#servertask
http://docs.oscomputevnext.apiary.io/#servertaskitem

When you call POST /servers, you can create multiple servers:

http://docs.oscomputevnext.apiary.io/#server

And the _links JSON-HAL document returned by the POST /servers API call 
contains a describedby hyperlink that points to the JSONSchema 
document for a server object, and a rel/server_tasks hyperlink that 
points to a URL that may be used to determine the tasks and task states 
for any task involved in creating the server.
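
As a rough illustration only (the exact link relations live in the
Apiary document above; the hrefs below are placeholders), the _links
section being described might look something like:

    # Illustrative shape of the _links document, as a Python literal;
    # the hrefs are placeholders, not real endpoints.
    links = {
        "_links": {
            "describedby": {
                "href": "https://example.com/schemas/server.json"
            },
            "rel/server_tasks": {
                "href": "https://example.com/servers/<uuid>/tasks"
            }
        }
    }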


Best,
-jay

On 10/16/2014 05:57 AM, Salvatore Orlando wrote:

In an analysis we recently did for managing the lifecycle of neutron
resources, it also emerged that a task (or operation) API is a very
useful resource. Indeed, several neutron resources introduced the
(in)famous PENDING_XXX operational statuses to note the fact that an
operation is in progress and its status is changing.

This could have been easily avoided if a facility for querying active
tasks through the API had been available.

From an API guideline viewpoint, I understand that
https://review.openstack.org/#/c/86938/ proposes the introduction of
a rather simple endpoint to query active tasks and filter them by
resource uuid or state, for example. While this is hardly
questionable, I wonder if it might be worth typifying the task, i.e.
adding a resource_type attribute, and/or allowing active tasks to be
retrieved as a child resource of an object, e.g. GET
/servers/server_id/tasks?state=running or, just for running tasks,
GET /servers/server_id/active_tasks
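
A client-side sketch of the sort of query I have in mind (the URL
shapes follow the examples above; none of this is an existing API, and
the endpoint, token, and response fields are assumptions):

    # Hypothetical client sketch for the proposed task-query endpoint.
    import requests

    BASE = 'http://compute.example.com/v3'         # placeholder endpoint
    HEADERS = {'X-Auth-Token': 'placeholder-token'}

    def running_tasks(server_id):
        # GET /servers/<server_id>/tasks?state=running, as proposed above
        resp = requests.get('%s/servers/%s/tasks' % (BASE, server_id),
                            params={'state': 'running'}, headers=HEADERS)
        resp.raise_for_status()
        return resp.json().get('tasks', [])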

The proposed approach for the multiple-server create case also makes
sense to me. Beyond bulk operations, there are indeed cases where a
single API operation needs to perform multiple tasks. For instance,
in Neutron, creating a port implies L2 wiring, setting up DHCP info,
and securing it on the compute node by enforcing anti-spoofing rules
and security groups. This means there will be 3 or 4 active tasks.
For this reason I wonder whether it might be worth differentiating
between the concepts of operation and task, where the former is the
activity explicitly initiated by the API consumer, and the latter are
the activities which need to complete to fulfil it. This is where we
might leverage the already-proposed request_id attribute of the task
data structure.

Finally, a note on persistence. How long should a completed task,
successful or not, be stored? Do we want to store tasks until the
resource they operated on is deleted? I don't think it's a great idea
to store them indefinitely in the DB. Tying their lifespan to
resources is probably a decent idea, but time-based cleanup policies
might also be considered (e.g. destroy a task record 24 hours after
its completion).
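
For the time-based option, a periodic job along these lines would
suffice (the tasks table and its columns are made up for illustration;
this is just a sketch of the 24-hour policy):

    # Sketch of a 24-hour cleanup policy; 'tasks' and its columns are
    # hypothetical, not an existing schema.
    import datetime
    import sqlite3

    def purge_completed_tasks(conn, max_age_hours=24):
        # e.g. conn = sqlite3.connect('tasks.db')
        cutoff = (datetime.datetime.utcnow()
                  - datetime.timedelta(hours=max_age_hours)).isoformat(' ')
        conn.execute(
            "DELETE FROM tasks WHERE state IN ('SUCCESS', 'FAILURE') "
            "AND completed_at < ?", (cutoff,))
        conn.commit()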

Salvatore


On 16 October 2014 08:38, Christopher Yeoh <cbky...@gmail.com> wrote:

On Thu, Oct 16, 2014 at 7:19 AM, Kevin L. Mitchell
<kevin.mitch...@rackspace.com> wrote:

On Wed, 2014-10-15 at 12:39 -0400, Andrew Laski wrote:

On 10/15/2014 11:49 AM, Kevin L. Mitchell wrote:

Now that we have an API working group forming, I'd like to kick off
some discussion over one point I'd really like to see our APIs using
(and I'll probably drop it in to the repo once that gets fully set
up): the difference between synchronous and asynchronous operations.
Using nova as an example—right now, if you kick off a long-running
operation, such as a server create or a reboot, you watch the resource
itself to determine the status of the operation.  What I'd like to
propose is that future APIs use a separate operation resource to track
status information on the particular operation.  For instance, if we
were to rebuild the nova API with this idea in mind, booting a new
server would give you a server handle and an operation handle;
querying the server resource would give you summary information about
the state of the server (running, not running) and pending operations,
while querying the operation would give you detailed information about
the status of the operation.  As another example, issuing a reboot
would give you the operation handle; you'd see the operation in a
queue on the server resource, but the actual state of the operation
itself would be listed on that operation.  As a side effect, this
would allow us (not require, though) to queue up operations on a
resource, and allow us to cancel an operation that has not yet been
started.

Thoughts?
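
To make the proposed flow concrete, a client interaction might look
roughly like this (every URL and field name here is hypothetical, just
restating the proposal in code, not an existing nova API):

    # Hypothetical: a reboot returns an operation handle which is then
    # polled separately from the server resource itself.
    import requests

    BASE = 'http://compute.example.com/v3'   # placeholder

    resp = requests.post('%s/servers/srv-uuid/action' % BASE,
                         json={'reboot': {}})
    op_url = resp.json()['operation']['href']   # assumed response shape

    # The server resource would only show the queued operation; the
    # detailed state lives on the operation resource.
    op_state = requests.get(op_url).json()['status']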


Something like https://review.openstack.org/#/c/86938/ ?

I know that Jay has proposed a similar thing before as well. I would
love to get some feedback from others on this as it's

Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2014-10-18 Thread Nathan Kinder


On 10/18/2014 08:43 AM, lohit.valleru wrote:
 Hello,
 
 Thank you for posting this issue to openstack-dev. I had posted this on the
 openstack general user list and was waiting for a response.
 
 May I know if there has been any progress regarding this issue?
 
 I am trying to use external HTTPD authentication with Kerberos and an LDAP
 identity backend, in Havana.
 
 I think a few things have changed with the OpenStack Icehouse release and
 Keystone 0.9.0 on CentOS 6.5.
 
 Currently I face a similar issue to yours: I get a full username with
 domain as REMOTE_USER from apache, and keystone tries to search LDAP along
 with my domain name. (I have not specified any domain information to
 keystone; I assume it is called 'default', while my domain is example.com.)
 
 I see that External Default and External Domain are no longer supported
 by keystone; instead,
 
 keystone.auth.plugins.external.DefaultDomain or
 external=keystone.auth.plugins.external.Domain are valid as of now.
 
 I also tried using keystone.auth.plugins.external.kerberos after checking
 the code, but it does not make any difference.
 
 For example:
 
 If I authenticate using Kerberos as lohit.vall...@example.com, I see the
 following in the logs:
 
 DEBUG keystone.common.ldap.core [-] LDAP search:
 dn=ou=People,dc=example,dc=com, scope=1,
 query=(&(uid=lohit.vall...@example.com)(objectClass=posixAccount)),
 attrs=['mail', 'userPassword', 'enabled', 'uid'] search_s
 /usr/lib/python2.6/site-packages/keystone/common/ldap/core.py:807
 2014-10-18 02:34:36.459 5592 DEBUG keystone.common.ldap.core [-] LDAP unbind
 unbind_s /usr/lib/python2.6/site-packages/keystone/common/ldap/core.py:777
 2014-10-18 02:34:36.460 5592 WARNING keystone.common.wsgi [-] Authorization
 failed. Unable to lookup user lohit.vall...@example.com from 172.31.41.104
 
 Also, I see that keystone always searches with uid, no matter what I enter
 as the mapping value for userid/username in keystone.conf. I do not
 understand if this is a bug or a limitation. (The above logs show that it
 cannot find a uid matching lohit.vall...@example.com, since LDAP contains
 uids without the domain name.)

Do you have more details on how your mapping is configured?  There
have been some changes around this area in Juno, but it's still possible
that there is some sort of bug here.
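
For reference, the LDAP identity settings I'd expect in keystone.conf
look something like this (the option names are the standard [ldap]
ones; all values are placeholders for your environment):

    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [ldap]
    url = ldap://ldap.example.com
    user_tree_dn = ou=People,dc=example,dc=com
    user_objectclass = posixAccount
    user_id_attribute = uid
    user_name_attribute = uid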
 
 May I know how I can get keystone to split REMOTE_USER? Do I need to
 specify a default domain and sync with the database for this to work?

REMOTE_USER is set to the full user principal name, which includes the
kerberos realm.  Are you using mod_auth_kerb?  If so, you can set the
following in your httpd config to split the realm off of the user principal:

  KrbLocalUserMapping On
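
For context, a minimal mod_auth_kerb location block using that
directive might look like the following (the realm, keytab path, and
location are placeholders for your environment):

  <Location /keystone>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbAuthRealms EXAMPLE.COM
    Krb5KeyTab /etc/httpd/httpd.keytab
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    # Strip @EXAMPLE.COM so REMOTE_USER is the bare username that
    # keystone will look up in LDAP.
    KrbLocalUserMapping On
    Require valid-user
  </Location>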

 
 Also, may I know what modifications I need to make to Havana to disable
 username and password authentication and instead use external
 authentication such as Kerberos/REMOTE_USER?
 
 Is anyone working on these scenarios? or do we have any better solutions?

There is work going on to make Kerberos a more practical solution,
including a Kerberos auth plugin for Keystone:

  https://review.openstack.org/123614
 
 I have read about Federation and Shibboleth authentication, but I believe
 that is not the same as REMOTE_USER/Kerberos authentication.

SAML federation uses REMOTE_USER, but it's quite a bit different from
what you are trying to do, since you still need to look up user
information via LDAP (it's all provided as part of the SAML assertion
in the federation case).  There are efforts going on in this area, but I
think it's still a release out (Kilo, hopefully).

Thanks,
-NGK

 
 Thank you,
 
 Lohit
 
 
 
 
 
 



[openstack-dev] [cinder] Kilo Summit Topic Proposals

2014-10-18 Thread Mike Perez
We will be discussing proposals [1] at the next Cinder meeting [2],
October 22nd at 16:00 UTC.

You may add proposals to the etherpad at this time. If you have
something proposed, please be present at the meeting to answer any
questions about your proposal.

--
Mike Perez

[1] - https://etherpad.openstack.org/p/kilo-cinder-summit-topics
[2] - https://wiki.openstack.org/wiki/CinderMeetings
