Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-14 Thread Sylvain Bauza


On 14/10/2014 01:46, Adam Lawson wrote:


I think Adam is talking about this bp:
https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically


Correct - yes. Sorry about that. ; )

So it would seem the question is not whether to support auto-evac but 
how it should be handled. If not handled by Nova, it gets complicated. 
Asking a user to configure a custom Nagios trigger/action... not sure 
if we'd recommend that as our definition of ideal.


  * I can foresee Congress being used to control whether auto-evac is
required and what other policies come into play by virtue of an
unplanned host removal from service. But that seems like a bit of
overkill.
  * I can foresee Nova/scheduler being used to perform the evac
itself. Are they still pushing back?
  * I can foresee Ceilometer being used to capture service state and
define how long a node should be inaccessible before it's
considered offline. But that seems a bit out of scope for what
Ceilometer was meant to do.



Well, IMHO Gantt should just enforce policies (possibly defined by 
Congress or whatever else), so if a condition is not met (here, HA on a 
VM), it should issue a reschedule. That said, Gantt is not responsible 
for polling all the events and updating its internal view; that's the job 
of another project, which should send those metrics to it.


I don't have a preference between Heat, Ceilometer or whatever else 
for notifying Gantt. I even think that whatever the solution is 
(even a Nagios handler), in the end it is Gantt that would trigger the 
evacuation by calling Nova to fence that compute node and move the VM to 
another host (as rescheduling already does, but in a manual way).
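
(For illustration, here is roughly what that last step could look like from 
whatever service ends up driving it, using python-novaclient. The credentials, 
host names and shared-storage flag are all placeholders, and fencing itself is 
not a Nova API call, so it would have to happen out of band.)

    # Rough sketch only: move every instance off a failed compute node with
    # python-novaclient, after that node has already been fenced out of band.
    from novaclient import client

    # Placeholder credentials/endpoint.
    nova = client.Client("2", "admin", "secret", "admin",
                         "http://keystone.example.com:5000/v2.0")

    failed_host = "compute-03"    # placeholder: the fenced node
    target_host = "compute-04"    # placeholder: picked by the scheduler/Gantt

    # Admin-only listing of every instance scheduled on the dead host.
    servers = nova.servers.list(search_opts={"host": failed_host,
                                             "all_tenants": 1})
    for server in servers:
        # Assumes instances live on shared storage so disks survive the rebuild.
        nova.servers.evacuate(server, target_host, on_shared_storage=True)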



-Sylvain


I'm all for making a simple task like this super easy to do, though, at 
least so the settings are all defined in one place. Nova seems logical 
but I'm wondering if there is still resistance.


So, curious: how are these higher-level discussions 
initiated/facilitated? By the TC?


I proposed a cross-project session at the Paris Summit about scheduling 
and Gantt (yet to be accepted); that use case could be discussed there.


-Sylvain



Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Mon, Oct 13, 2014 at 3:21 PM, Russell Bryant rbry...@redhat.com wrote:


On 10/13/2014 06:18 PM, Jay Lau wrote:
 This is also a use case for Congress, please check use case 3 in the
 following link.



https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit#

Wow, really?  That honestly makes me very worried about the scope of
Congress being far too big (so early, and maybe period).

--
Russell Bryant



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-10-14 Thread Xu Han Peng
I was reminded that scapy is under the GPLv2 license, so we cannot make it 
a dependency of Neutron.


There are also some IPv6 utilities in ryu.lib.packet which could be 
leveraged to send out the neighbor advertisement. Is it OK to make ryu a 
dependency, build this as a separate binary, and call it from within the 
namespace?
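
(For reference, a rough sketch of what that could look like with 
ryu.lib.packet. It assumes ryu's icmpv6.nd_neighbor/nd_option_tla classes and 
a plain AF_PACKET socket for sending; the addresses and interface name are 
placeholders matching the variables in the scapy snippet quoted below.)

    # Rough sketch only: build and send an unsolicited neighbor advertisement
    # with ryu.lib.packet instead of scapy.  Needs root and must be run inside
    # the target namespace (e.g. via rootwrap ip netns exec).
    import socket

    from ryu.lib.packet import ethernet, icmpv6, ipv6, packet
    from ryu.ofproto import ether, inet

    mac_address = 'fa:16:3e:00:00:01'        # placeholder values, matching
    source = 'fe80::f816:3eff:fe00:1'        # the scapy snippet below
    target = source
    interface_name = 'qg-12345678-9a'

    pkt = packet.Packet()
    pkt.add_protocol(ethernet.ethernet(dst='33:33:00:00:00:01',   # all-nodes
                                       src=mac_address,
                                       ethertype=ether.ETH_TYPE_IPV6))
    pkt.add_protocol(ipv6.ipv6(src=source, dst='ff02::1', hop_limit=255,
                               nxt=inet.IPPROTO_ICMPV6))
    # res=5 encodes the R=1, S=0, O=1 flags, as in the scapy example.
    pkt.add_protocol(icmpv6.icmpv6(
        type_=icmpv6.ND_NEIGHBOR_ADVERT,
        data=icmpv6.nd_neighbor(res=5, dst=target,
                                option=icmpv6.nd_option_tla(hw_src=mac_address))))
    pkt.serialize()

    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    sock.bind((interface_name, 0))
    sock.send(pkt.data)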



Thanks,
Xu Han


On 09/26/2014 10:08 AM, Vishvananda Ishaya wrote:

You are going to have to make this a separate binary and call it
via rootwrap ip netns exec. While it is possible to change network
namespaces in Python, you aren't going to be able to do this consistently
without root access, so it will need to be guarded by rootwrap anyway.

Vish
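
(As a rough illustration of the split Vish describes: the agent side would only 
build the ip netns exec command line and hand it to the existing 
rootwrap-guarded execute helper. The binary name and its options below are 
purely hypothetical, and the utils.execute signature is the Juno-era one.)

    # Rough sketch only: the L3 agent shells out to a standalone NA-sending
    # binary inside the router namespace, guarded by rootwrap.
    from neutron.agent.linux import utils


    def send_unsolicited_na(ns_name, interface_name, address, root_helper):
        cmd = ['ip', 'netns', 'exec', ns_name,
               'neutron-send-unsolicited-na',   # hypothetical binary name
               '--interface', interface_name,
               '--address', address]
        # utils.execute() runs the command through the configured root helper
        # (rootwrap), as the agents already do for other namespace commands.
        utils.execute(cmd, root_helper=root_helper)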

On Sep 25, 2014, at 7:00 PM, Xu Han Peng pengxu...@gmail.com wrote:



Sending unsolicited NA by scapy is like this:

from scapy.all import send, IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr

target_ll_addr = ICMPv6NDOptDstLLAddr(lladdr=mac_address)
unsolicited_na = ICMPv6ND_NA(R=1, S=0, O=1, tgt=target)
packet = IPv6(src=source)/unsolicited_na/target_ll_addr
send(packet, iface=interface_name, count=10, inter=0.2)

It's not actually a python script but a python method. Any ideas?

On 09/25/2014 06:20 PM, Kevin Benton wrote:

Does running the python script with ip netns exec not work correctly?

On Thu, Sep 25, 2014 at 2:05 AM, Xu Han Peng pengxu...@gmail.com wrote:

Hi,

As we discussed in the last IPv6 sub-team meeting, I was able to construct and send
an IPv6 unsolicited neighbor advertisement for the external gateway interface with
the Python tool scapy:

http://www.secdev.org/projects/scapy/

http://www.idsv6.de/Downloads/IPv6PacketCreationWithScapy.pdf


However, I am having trouble sending this unsolicited neighbor advertisement
in a given namespace. All the current namespace operations leverage ip netns
exec and shell commands, but we cannot do this with scapy since it's Python
code. Can anyone advise me on this?

Thanks,
Xu Han


On 09/05/2014 05:46 PM, Xu Han Peng wrote:

Carl,

Seems so. I think the internal router interface and external gateway port GARPs
are taken care of by keepalived during failover. And if HA is not enabled,
_send_gratuitous_arp is called to send out the GARP.

I think we will need to take care of IPv6 for both cases since keepalived 1.2.0
supports IPv6. It may need a separate BP. For the case where HA is enabled, we still
need the unsolicited neighbor advertisement for external gateway failover. But
for the internal router interface, since a Router Advertisement is automatically
sent out by RADVD after failover, we don't need to send out a neighbor
advertisement anymore.

Xu Han


On 09/05/2014 03:04 AM, Carl Baldwin wrote:

Hi Xu Han,

Since I sent my message yesterday there has been some more discussion
in the review on that patch set.  See [1] again.  I think your
assessment is likely correct.

Carl

[1]https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng pengxu...@gmail.com wrote:

Carl,

Thanks a lot for your reply!

If I understand correctly, in the VRRP case keepalived will be responsible for
sending out GARPs? Checking the code you provided, I can see that all the
_send_gratuitous_arp_packet calls are wrapped by the if not is_ha condition.

Xu Han



On 09/04/2014 06:06 AM, Carl Baldwin wrote:

It should be noted that send_arp_for_ha is a configuration option
that preceded the more recent in-progress work to add VRRP controlled
HA to Neutron's router.  The option was added, I believe, to cause the
router to send (default) 3 GARPs to the external gateway if the router
was removed from one network node and added to another by some
external script or manual intervention.  It did not send anything on
the internal network ports.

VRRP is a different story and the code in review [1] sends GARPs on
internal and external ports.

Hope this helps avoid confusion in this discussion.

Carl

[1]https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng pengxu...@gmail.com wrote:

Anthony,

Thanks for your reply.

If an HA method like VRRP is used for the IPv6 router then, according to the VRRP RFC
with IPv6 included, the servers should be auto-configured with the active
router's LLA as the default route before the failover happens and should still
have that route after the failover. In other words, there should be no need
to use two LLAs for the default route of a subnet unless load balancing is
required.

When the backup router becomes the master router, the backup router should be
responsible for immediately sending out an unsolicited ND neighbor advertisement
with the associated LLA (the previous master's LLA) to update the bridge learning
state, and for sending out a router advertisement with the same options as the
previous master to maintain the route and bridge learning.

This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
actions the backup router should take after failover are documented here:

Re: [openstack-dev] [Fuel] Propose adding Igor K. to core reviewers for fuel-web projects

2014-10-14 Thread Aleksey Kasatkin
+1

Aleksey Kasatkin


Hi everyone!

 I would like to propose Igor Kalnitsky as a core reviewer on the
 Fuel-web team. Igor has been working on openstack patching,
 nailgun, and fuel upgrade, and has provided a lot of good reviews [1]. In
 addition, he's also very active on IRC and the mailing list.

 Can the other core team members please reply with your votes
 if you agree or disagree.

 Thanks!


 [1]

 http://stackalytics.com/?project_type=stackforge&release=juno&module=fuel-web

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon]Blueprint- showing a small message to the user for browser incompatibility

2014-10-14 Thread Aggarwal, Nikunj
Hi Everyone,
I have submitted a blueprint which targets the issues end users face when 
they use Horizon in old browsers. The blueprint proposes to 
overcome this problem by showing a small message on the Horizon login page.

I urge the Horizon community to take a look and share your views.

https://blueprints.launchpad.net/horizon/+spec/detecting-browser



Regards,
Nikunj
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Strategy for testing and merging the merge.py-free templates

2014-10-14 Thread Tomas Sedovic

Hi everyone,

As outlined in the Remove merge.py[1] spec, Peter Belanyi and I have 
built the templates for controller, nova compute, swift and cinder nodes 
that can be deployed directly to Heat (i.e. no merge.py pass is necessary).


The patches:

https://review.openstack.org/#/c/123100/
https://review.openstack.org/#/c/123713/

I'd like to talk about testing and merging them.

Both Peter and myself have successfully run them through devtest 
multiple times. The Tuskar and TripleO UI folks have managed to hook 
them up to the UI and make things work, too.


That said, there are a number of limitations which don't warrant making 
them the new default just yet:


* Juno Heat is required
* python-heatclient version 0.2.11 is required to talk to Heat
* There is currently no way in Heat to drop specific nodes from a 
ResourceGroup (say because of a hardware failure), so the elision 
feature from merge.py is not supported yet
* I haven't looked into this yet, but I'm not very optimistic about an 
upgrade path from the merge.py-based templates to the heat-native ones


On the other hand, it would be great if we could add them to devtest as 
an alternative and to have them exercised by the CI. It would make it 
easier to keep them in sync and to iron out any inconsistencies.



James Slagle proposed something like this when I talked to him on IRC:

1. Teach devtest about the new templates, driven by an 
OVERCLOUD_USE_MERGE_PY switch (defaulting to the merge.py-based templates)

2. Do a CI run of the new template patches, merge them
3. Add a (initially non-voting?) job to test the heat-only templates
4. When we've resolved all the issues stopping us from making the switch, 
make the native templates the default and deprecate the merge.py ones



This makes sense to me. Any objections/ideas?

Thanks,
Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-14 Thread David Chadwick


On 14/10/2014 01:25, Nathan Kinder wrote:
 
 
 On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
 Description of the problem: Without attempting an action on an
 endpoint with a current scoped token, it is impossible to know what
 actions are available to a user.
 

This is not unusual in the physical world. If you think about all the
authz tokens you carry around in your pocket (as plastic cards), very
few of them (if any) list what you are entitled to do with them. This
gives the issuers and SPs flexibility to dynamically change your
access rights without changing your authorisation. What you can do, in
general terms, may be written in policy documents that you can consult
if you wish. So you may wish to introduce a service that is equivalent
to this (i.e. the user may optionally consult some policy advice service).

If you introduce a service to allow a user to dynamically determine his
access rights (absolutely), you have to decide what to do about the
dynamics of this service compared to the lifetime of the keystone token,
as the rights may change more quickly than the token's lifetime.

 
 Horizon makes some attempts to solve this issue by sourcing all of
 the policy files from all of the services to determine what a user
 can accomplish with a given role. This is highly inefficient as it
 requires processing the various policy.json files for each request
 in multiple places and presents a mechanism that is not really
 scalable to understand what a user can do with the current
 authorization. Horizon may not be the only service that (in the
 long term) would want to know what actions a token can take.
 
 This is also extremely useful for being able to actually support
 more restricted tokens as well.  If I as an end user want to request
 a token that only has the roles required to perform a particular
 action, I'm going to need to have a way of knowing what those roles
 are.  I think that is one of the main things missing to allow the
 role-filtered tokens option that I wrote up after the last Summit
 to be a viable approach:
 
 https://blog-nkinder.rhcloud.com/?p=101
 
 
 I would like to start a discussion on how we should improve our
 policy implementation (OpenStack wide) to help make it easier to
 know what is possible with a current authorization context
 (Keystone token). The key feature should be that whatever the
 implementation is, it doesn’t require another round-trip to a third
 party service to “enforce” the policy which avoids another scaling
 point like UUID Keystone token validation.

Presumably this does not rule out the user, at his option, calling
another service to ask for advice ("what can I do with this token?"),
bearing in mind that the response will be advice and not a definite
answer (since the PDP will always be the one to provide the definitive
answer).



 
 Here are a couple of ideas that we’ve discussed over the last few
 development cycles (and none of this changes the requirements to
 manage scope of authorization, e.g. project, domain, trust, ...):
 
 1. Keystone is the holder of all policy files. Each service gets
 it’s policy file from Keystone and it is possible to validate the
 policy (by any other service) against a token provided they get the
 relevant policy file from the authoritative source (Keystone).

Can I suggest that this is made more abstract, to say, there is a
central policy administration service that stores all policies and
allows them to be updated, deleted, created, inherited etc.

Whether this service is combined with keystone or not in the
implementation is a separate issue. Conceptually it is a new type of
policy administration service for OpenStack.

 
 Pros: This is nearly completely compatible with the current policy
 system. The biggest change is that policy files are published to
 Keystone instead of to a local file on disk. This also could open
 the door to having keystone build “stacked” policies
 (user/project/domain/endpoint/service specific) where the deployer
 could layer policy definitions (layering would allow for stricter
 enforcement at more specific levels, e.g. users from project X
 can’t terminate any VMs).
 
 I think that there are a some additional advantages to centralizing 
 policy storage (not enforcement).
 
 - The ability to centralize management of policy would be very nice.
 If I want to update the policy for all of my compute nodes, I can do
 it in one location without the need for external configuration
 management solutions.
 
 - We could piggy-back on Keystone's signing capabilities to allow
 policy to be signed, providing protection against policy tampering on
 an individual endpoint.
 
 
 Cons: This doesn’t ease up the processing requirement or the need
 to hold (potentially) a significant number of policy files for each
 service that wants to evaluate what actions a token can do.

if you separate out an optional advice service from the
decision/enforcement service, this might provide the flexibility and
performance 

Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Alex Xu

On 2014-10-14 12:52, Christopher Yeoh wrote:

On Mon, 13 Oct 2014 22:20:32 -0400
Jay Pipes jaypi...@gmail.com wrote:


On 10/13/2014 07:11 PM, Christopher Yeoh wrote:

On Mon, 13 Oct 2014 10:52:26 -0400
Jay Pipes jaypi...@gmail.com wrote:


On 10/10/2014 02:05 AM, Christopher Yeoh wrote:

I agree with what you've written on the wiki page. I think our
priority needs to be to flesh out
https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
so we have something to reference when reviewing specs. At the
moment I see that document as something anyone should be able to
document a project's API convention even if they conflict with
another project for the moment. Once we've got a fair amount of
content we can start as a group resolving
any conflicts.

Agreed that we should be fleshing out the above wiki page. How
would you like us to do that? Should we have an etherpad to discuss
individual topics? Having multiple people editing the wiki page
offering commentary seems a bit chaotic, and I think we would do
well to have the Gerrit review process in place to handle proposed
guidelines and rules for APIs. See below for specifics on this...

Honestly I don't think we have enough content yet to have much of a
discussion. I started the wiki page

https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

in the hope that people from other projects would start adding
conventions that they use in their projects. I think its fine for
the moment if its contradictory, we just need to gather what
projects currently do (or want to do) in one place so we can start
discussing any contradictions.

Actually, I don't care all that much about what projects *currently*
do. I want this API working group to come up with concrete guidelines
and rules/examples of what APIs *should* look like.

What projects currently do gives us a baseline to work from. It also
should expose where we have currently have inconsistencies between
projects.

And whilst I don't have a problem with having some guidelines which
suggest a future standard for APIs, I don't think we should be
requiring any type of feature which has not yet been implemented in
at least one, preferably two openstack projects and released and tested
for a cycle. Eg standards should be lagging rather than leading.


There is one reason to think about what projects *currently* do: when we 
choose which convention we want.
For example, with CamelCase versus snake_case, if most projects use 
snake_case, then choosing the snake_case style will be the right call.




So I'd again encourage anyone interested in APIs from the various
projects to just start dumping their project viewpoint in there.

I went ahead and just created a repository that contained all the
stuff that should be pretty much agreed-to, and a bunch of stub topic
documents that can be used to propose specific ideas (and get
feedback on) here:

http://github.com/jaypipes/openstack-api

Hopefully, you can give it a look and get a feel for why I think the
code review process will be better than the wiki for controlling the
deliverables produced by this team...

I think it will be better in git (but we also need it in gerrit) when
it comes to resolving conflicts and after we've established a decent
document (eg when we have more content). I'm just looking to make it
as easy as possible for anyone to add any guidelines now. Once we've
actually got something to discuss then we use git/gerrit with patches
proposed to resolve conflicts within the document.


I like the idea of a repo and using Gerrit for discussions to
resolve issues. I don't think it works so well when people are
wanting to dump lots of information in initially.  Unless we agree
to just merge anything vaguely reasonable and then resolve the
conflicts later when we have a reasonable amount of content.
Otherwise stuff will get lost in gerrit history comments and
people's updates to the document will overwrite each other.

I guess we could also start fleshing out in the repo how we'll work
in practice too (eg once the document is stable what process do we
have for making changes - two +2's is probably not adequate for
something like this).

We can make it work exactly like the openstack/governance repo, where
ttx has the only ability to +2/+W approve a patch for merging, and he
tallies a majority vote from the TC members, who vote -1 or +1 on a
proposed patch.

Instead of ttx, though, we can have an API working group lead
selected from the set of folks currently listed as committed to the
effort?

Yep, that sounds fine, though I don't think a simple majority is
sufficient for something like api standards. We either get consensus
or we don't include it in the final document.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [bashate] towards inbox zero on bashate changes, release?

2014-10-14 Thread Christian Berendt
On 10/14/2014 07:03 AM, Ian Wienand wrote:
 1) changes for auto-detection.  IMO, we should drop all these and just
leave bashate as taking a list of files to check, and let
test-harnesses fix it.  Everyone using it at the moment seems fine
without them

I am fine with this and abandoned the "Introduce directories as possible
arguments" review request. This way we can use already existing tools
like find and do not have to re-implement this.

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Private external network

2014-10-14 Thread Édouard Thuleau
Hi Salvatore,

I would like to propose a blueprint for the next Neutron release that permits
dedicating an external network to a tenant. For that I thought to rethink
the conjunction of the two attributes `shared`
and `router:external` of the network resource.

I saw that you already initiated work on that topic [1] and [2], but the bp
was un-targeted in favour of an alternative approach which might be more complete.
Was that alternative released, or is it still a work in progress? I want to be sure
not to duplicate work/effort.

[1]
https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks
[2]
https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Proposing Bogdan Dobrelia and Sergii Golovatiuk as core-reviewers for Fuel Library project

2014-10-14 Thread Tomasz Napierala
On 10 Oct 2014, at 11:35, Vladimir Kuklin vkuk...@mirantis.com wrote:

 Hi, Fuelers
 
 As you may have noticed our project is growing continuously. And this imposes 
 a requirement of increasing amount of core reviewers. I would like to propose 
 Bogdan Dobrelia(bogdando) and Sergii Golovatiuk(holser) as core reviewers. As 
 you know, they have been participating actively in development, design and 
 code review of the majority of project components as long as being our 
 topmost reviewers and contributors (#2 and #3) [1 and 2] (not to mention 
 being just brilliant engineers and nice people).
 
 Please, reply to my message if you agree or disagree separately for Bogdan 
 and Sergii (this is mandatory for existing core-reviewers).
 
 [1] http://stackalytics.com/report/contribution/fuel-library/90 
 [2] http://stackalytics.com/report/contribution/fuel-library/180

Strong +1 for both.

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Chris Dent

On Tue, 14 Oct 2014, Angus Lees wrote:


2. I think we should separate out run the server from do once-off setup.


Yes! Otherwise it feels like the entire point of using containers
and dockerfiles is rather lost.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Proposing Bogdan Dobrelia and Sergii Golovatiuk as core-reviewers for Fuel Library project

2014-10-14 Thread Mike Scherbakov
Thanks all.
Bogdan, Sergii - you were given +2 rights.
To merge the patch, you should do +2 and +1 Approved in gerrit.

On Tue, Oct 14, 2014 at 2:35 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:

 On 10 Oct 2014, at 11:35, Vladimir Kuklin vkuk...@mirantis.com wrote:

  Hi, Fuelers
 
  As you may have noticed our project is growing continuously. And this
 imposes a requirement of increasing amount of core reviewers. I would like
 to propose Bogdan Dobrelia(bogdando) and Sergii Golovatiuk(holser) as core
 reviewers. As you know, they have been participating actively in
 development, design and code review of the majority of project components
 as long as being our topmost reviewers and contributors (#2 and #3) [1 and
 2] (not to mention being just brilliant engineers and nice people).
 
  Please, reply to my message if you agree or disagree separately for
 Bogdan and Sergii (this is mandatory for existing core-reviewers).
 
  [1] http://stackalytics.com/report/contribution/fuel-library/90
  [2] http://stackalytics.com/report/contribution/fuel-library/180

 Strong +1 for both.

 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for testing and merging the merge.py-free templates

2014-10-14 Thread Tomas Sedovic

On 10/14/2014 12:43 PM, Steven Hardy wrote:

On Tue, Oct 14, 2014 at 10:55:30AM +0200, Tomas Sedovic wrote:

Hi everyone,

As outlined in the Remove merge.py[1] spec, Peter Belanyi and I have built
the templates for controller, nova compute, swift and cinder nodes that can
be deployed directly to Heat (i.e. no merge.py pass is necessary).

The patches:

https://review.openstack.org/#/c/123100/
https://review.openstack.org/#/c/123713/

I'd like to talk about testing and merging them.

Both Peter and myself have successfully run them through devtest multiple
times. The Tuskar and TripleO UI folks have managed to hook them up to the
UI and make things work, too.

That said, there are a number of limitations which don't warrant making them
the new default just yet:

* Juno Heat is required
* python-heatclient version 0.2.11 is required to talk to Heat
* There is currently no way in Heat to drop specific nodes from a
ResourceGroup (say because of a hardware failure), so the elision feature
from merge.py is not supported yet


FYI, I saw that comment from Clint in 123713, and have been looking
into ways to add this feature to Heat - hopefully will have some code to
post soon.


Oh, cool! It was on my todo list of things to tackle next. If I can help 
in any way (e.g. testing), let me know.




Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] allow-mac-to-be-updated

2014-10-14 Thread Gary Kotton
Hi,
I am really in favor of this. The implementation looks great! Nova can
surely benefit from this and we can make Neutron allocations at the API
level and save a ton of complexity at the compute level.
Kudos!
Thanks
Gary

On 10/13/14, 11:31 PM, Chuck Carlino chuckjcarl...@gmail.com wrote:

Hi,

Is anyone working on this blueprint[1]?  I have an implementation [2]
and would like to write up a spec.

Thanks,
Chuck

[1] https://blueprints.launchpad.net/neutron/+spec/allow-mac-to-be-updated
[2] https://review.openstack.org/#/c/112129/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] providing liaisons to the other cross-project teams

2014-10-14 Thread Doug Hellmann
The QA and Documentation teams have adopted a liaison program similar to what 
we did for Oslo during Juno. That means we need to provide someone from our 
team to work with each of their teams. If you would like to volunteer for one 
of the positions, put your name in the appropriate table on the wiki page: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons

Thanks,
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Jay Pipes

On 10/14/2014 12:52 AM, Christopher Yeoh wrote:

On Mon, 13 Oct 2014 22:20:32 -0400
Jay Pipes jaypi...@gmail.com wrote:


On 10/13/2014 07:11 PM, Christopher Yeoh wrote:

On Mon, 13 Oct 2014 10:52:26 -0400
Jay Pipes jaypi...@gmail.com wrote:


On 10/10/2014 02:05 AM, Christopher Yeoh wrote:

I agree with what you've written on the wiki page. I think our
priority needs to be to flesh out
https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
so we have something to reference when reviewing specs. At the
moment I see that document as something anyone should be able to
document a project's API convention even if they conflict with
another project for the moment. Once we've got a fair amount of
content we can start as a group resolving
any conflicts.


Agreed that we should be fleshing out the above wiki page. How
would you like us to do that? Should we have an etherpad to discuss
individual topics? Having multiple people editing the wiki page
offering commentary seems a bit chaotic, and I think we would do
well to have the Gerrit review process in place to handle proposed
guidelines and rules for APIs. See below for specifics on this...


Honestly I don't think we have enough content yet to have much of a
discussion. I started the wiki page

https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

in the hope that people from other projects would start adding
conventions that they use in their projects. I think its fine for
the moment if its contradictory, we just need to gather what
projects currently do (or want to do) in one place so we can start
discussing any contradictions.


Actually, I don't care all that much about what projects *currently*
do. I want this API working group to come up with concrete guidelines
and rules/examples of what APIs *should* look like.


What projects currently do gives us a baseline to work from. It also
should expose where we have currently have inconsistencies between
projects.


Sure.


And whilst I don't have a problem with having some guidelines which
suggest a future standard for APIs, I don't think we should be
requiring any type of feature which has not yet been implemented in
at least one, preferably two openstack projects and released and tested
for a cycle. Eg standards should be lagging rather than leading.


What about features in some of our APIs that are *not* preferable? For 
instance: API extensions.


I think we've seen where API extensions leads us. And it isn't pretty. 
Would you suggest we document what a Nova API extension or a Neutron API 
extension looks like and then propose, for instance, not to ever do it 
again in future APIs and instead use schema discoverability?



So I'd again encourage anyone interested in APIs from the various
projects to just start dumping their project viewpoint in there.


I went ahead and just created a repository that contained all the
stuff that should be pretty much agreed-to, and a bunch of stub topic
documents that can be used to propose specific ideas (and get
feedback on) here:

http://github.com/jaypipes/openstack-api

Hopefully, you can give it a look and get a feel for why I think the
code review process will be better than the wiki for controlling the
deliverables produced by this team...


I think it will be better in git (but we also need it in gerrit) when
it comes to resolving conflicts and after we've established a decent
document (eg when we have more content). I'm just looking to make it
as easy as possible for anyone to add any guidelines now. Once we've
actually got something to discuss then we use git/gerrit with patches
proposed to resolve conflicts within the document.


Of course it would be in Gerrit. I just put it up on GitHub first 
because I can't just add a repo into the openstack/ code namespace... :)



I like the idea of a repo and using Gerrit for discussions to
resolve issues. I don't think it works so well when people are
wanting to dump lots of information in initially.  Unless we agree
to just merge anything vaguely reasonable and then resolve the
conflicts later when we have a reasonable amount of content.
Otherwise stuff will get lost in gerrit history comments and
people's updates to the document will overwrite each other.

I guess we could also start fleshing out in the repo how we'll work
in practice too (eg once the document is stable what process do we
have for making changes - two +2's is probably not adequate for
something like this).


We can make it work exactly like the openstack/governance repo, where
ttx has the only ability to +2/+W approve a patch for merging, and he
tallies a majority vote from the TC members, who vote -1 or +1 on a
proposed patch.

Instead of ttx, though, we can have an API working group lead
selected from the set of folks currently listed as committed to the
effort?


Yep, that sounds fine, though I don't think a simple majority is
sufficient for something like api standards. We either get consensus
or we don't include it in 

Re: [openstack-dev] Starting iSCSI initiator service iscsid failed while installing Devstack

2014-10-14 Thread Jesse Cook
Maybe this: https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/306693

Might want to manually fix runlevels to be 0 and 6 (not 1) using update-rc.d. 
You can look at the LSB headers (comments at top of init script) in /etc.


On 10/11/14, 12:58 PM, Nitika nitikaagarwa...@gmail.com wrote:

Hi,

I'm trying to install devstack on Ubuntu but getting the following error :
Please note that I'm not using any proxy to access the internet.


After this operation, 0 B of additional disk space will be used.
Setting up open-iscsi (2.0.871-0ubuntu9.12.04.2) ...
update-rc.d: warning: open-iscsi stop runlevel arguments (0 1 6) do not match 
LSB Default-Stop values (0 6)
 * Starting iSCSI initiator service iscsid  
  [fail]
 * Setting up iSCSI targets 
  [ OK ]
invoke-rc.d: initscript open-iscsi, action start failed.
dpkg: error processing open-iscsi (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 open-iscsi
E: Sub-process /usr/bin/dpkg returned an error code (1)
+ exit_trap
+ local r=100
++ jobs -p
+ jobs=
+ [[ -n '' ]]
+ kill_spinner
+ '[' '!' -z '' ']'
+ [[ 100 -ne 0 ]]
+ echo 'Error on exit'
Error on exit
+ [[ -z '' ]]
+ /home/user21/devstack/tools/worlddump.py
World dumping... see ./worlddump-2014-10-11-171333.txt for details
+ exit 100



See the output of worlddump.py file :


File System Summary
===

Filesystem  Size  Used Avail Use% Mounted on
/dev/simfs   10G  1.9G  8.2G  19% /
none 52M  1.1M   51M   2% /run
none5.0M 0  5.0M   0% /run/lock
none256M 0  256M   0% /run/shm


Process Listing
===

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root 1  0.0  0.3  24164  2084 ?Ss   May22   0:00 init
root 2  0.0  0.0  0 0 ?SMay22   0:00 
[kthreadd/14143]
root 3  0.0  0.0  0 0 ?SMay22   0:00 [khelper/14143]
mongodb297  0.2  2.8 984652 14964 ?Ssl  May22 607:00 
/usr/bin/mongod --config /etc/mongodb.conf
bind   454  0.0  6.3 1321880 33296 ?   Ssl  May22   3:27 
/usr/sbin/named -u bind
root 17882  0.0  0.5  49996  2952 ?Ss   07:05   0:00 /usr/sbin/sshd 
-D
root 18276  0.0  0.1  19072  1044 ?Ss   Sep25   0:01 cron
root 18283  0.0  0.0  17192   524 ?SSep25   0:00 
upstart-udev-bridge --daemon
root 18287  0.0  0.1  15148   604 ?SSep25   0:00 
upstart-socket-bridge --daemon
root 18296  0.0  0.2  21556  1324 ?Ss   Sep25   0:00 /sbin/udevd 
--daemon
101  18297  0.0  0.2  23776  1284 ?Ss   Sep25   0:00 dbus-daemon 
--system --fork --activation=upstart
mysql18309  0.0  8.0 523244 41948 ?Ssl  Sep25   4:26 
/usr/sbin/mysqld
syslog   19719  0.0  0.1  12712   864 ?Ss   Sep26   0:03 /sbin/syslogd 
-u syslog
root 21277  0.0  0.6  73400  3632 ?Ss   21:12   0:00 sshd: user21 
[priv]
user2121289  0.0  0.3  73400  1660 ?S21:12   0:00 sshd: 
user21@pts/0
user2121290  0.0  0.4  18176  2200 pts/0Ss   21:12   0:00 -bash
user2121305  0.6  1.0  12920  5292 pts/0S+   21:12   0:00 bash 
./stack.sh
user2121409  0.0  0.3  10648  1888 pts/0S+   21:12   0:00 bash 
./stack.sh
user2121410  0.0  1.2  34532  6632 pts/0S+   21:12   0:00 python 
/home/user21/work/devstack/tools/outfilter.py -v
user2122030  0.0  1.2  34540  6624 pts/0S+   21:13   0:00 python 
/home/user21/work/devstack/tools/worlddump.py
user2122033  0.0  0.1   4360   636 pts/0S+   21:13   0:00 sh -c ps auxw
user2122034  0.0  0.2  15236  1140 pts/0R+   21:13   0:00 ps auxw




I didn't find any best possible solution to resolve this issue.

Please help me resolve this issue.


Thanks,
Nitika



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting

2014-10-14 Thread Peter Pouliot
Hi All,

Some of us are travelling this week so we'll need to cancel the hyper-v meeting 
for today.
We will resume next week at the usual time.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-14 Thread Tim Hinrichs
First, some truth in advertising: I work on Congress (policy as a service), so 
I’ve mostly given thought to this problem in that context.

1) I agree with the discussion below about creating a token that encodes all 
the permitted actions for the user.  The cons seem substantial.  

(i) The token will get stale, requiring us to either revoke it when 
policy/roles change or to live with incorrect access control enforcement until 
the token expires.  

(ii) The token could become large, complex, or both.  Suppose the policy is 
granular enough to put restrictions on the arguments a user is permitted to 
provide to an action.  The token might end up encoding a significant portion of 
the policy itself.  Checking if the token permits a given action could be 
similar computationally to checking the original policy.json file.  


2) I like the idea of an out-of-band service that caches a copy of all the 
policy.jsons and allows users to interrogate/edit them.  I’ve definitely talked 
to operators who would like this kind of thing.  This would be a low-risk, 
low-friction solution to the problem because nothing about OpenStack today 
would need to change.  We’d just add an extra service and tell people to use 
it—sort of a new UI for the policy.json files.  And we could add interesting 
functionality, e.g. hypothetical queries such as “if I were to add role X, what 
changes would that make in my rights?”
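
(As a toy illustration of the "interrogate the cached policy files" idea: the 
directory layout and the role-matching below are assumptions, and real policy 
rules would need oslo's policy-language parser rather than a substring check.)

    # Rough sketch only: naive interrogation of cached policy.json files to
    # list the actions whose rule mentions one of the user's roles.  Real
    # rules (and/or/not, rule references) need a proper policy parser.
    import glob
    import json


    def actions_mentioning_roles(policy_dir, roles):
        matches = {}
        for path in glob.glob('%s/*.policy.json' % policy_dir):  # assumed layout
            with open(path) as f:
                rules = json.load(f)
            hits = [action for action, rule in rules.items()
                    if any('role:%s' % role in json.dumps(rule) for role in roles)]
            if hits:
                matches[path] = sorted(hits)
        return matches


    print(actions_mentioning_roles('/var/cache/policy', ['admin', 'member']))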

Perhaps some more context about why users want to know all of the actions they 
are permitted to execute might help.

Tim


 
On Oct 14, 2014, at 1:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 
 
 On 14/10/2014 01:25, Nathan Kinder wrote:
 
 
 On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
 Description of the problem: Without attempting an action on an
 endpoint with a current scoped token, it is impossible to know what
 actions are available to a user.
 
 
 This is not unusual in the physical world. If you think about all the
 authz tokens you carry around in your pocket (as plastic cards), very
 few of them (if any) list what you are entitled to do with them. This
 gives the issuers and SPs flexibility to dynamically change your
 accesses rights without changing your authorisation. What you can do, in
 general terms, may be written in policy documents that you can consult
 if you wish. So you may wish to introduce a service that is equivalent
 to this (i.e. user may optionally consult some policy advice service).
 
 If you introduce a service to allow a user to dynamically determine his
 access rights (absolutely), you have to decide what to do about the
 dynamics of this service compared to the lifetime of the keystone token,
 as the rights may change more quickly than the token's lifetime.
 
 
 Horizon makes some attempts to solve this issue by sourcing all of
 the policy files from all of the services to determine what a user
 can accomplish with a given role. This is highly inefficient as it
 requires processing the various policy.json files for each request
 in multiple places and presents a mechanism that is not really
 scalable to understand what a user can do with the current
 authorization. Horizon may not be the only service that (in the
 long term) would want to know what actions a token can take.
 
 This is also extremely useful for being able to actually support
 more restricted tokens as well.  If I as an end user want to request
 a token that only has the roles required to perform a particular
 action, I'm going to need to have a way of knowing what those roles
 are.  I think that is one of the main things missing to allow the
 role-filtered tokens option that I wrote up after the last Summit
 to be a viable approach:
 
 https://blog-nkinder.rhcloud.com/?p=101
 
 
 I would like to start a discussion on how we should improve our
 policy implementation (OpenStack wide) to help make it easier to
 know what is possible with a current authorization context
 (Keystone token). The key feature should be that whatever the
 implementation is, it doesn’t require another round-trip to a third
 party service to “enforce” the policy which avoids another scaling
 point like UUID Keystone token validation.
 
 Presumably this does not rule out the user, at his option, calling
 another service to ask for advice what can I do with this token,
 bearing in mind that the response will be advice and not a definite
 answer (since the PDP will always be the one to provide the definitive
 answer).
 
 
 
 
 Here are a couple of ideas that we’ve discussed over the last few
 development cycles (and none of this changes the requirements to
 manage scope of authorization, e.g. project, domain, trust, ...):
 
 1. Keystone is the holder of all policy files. Each 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Lars Kellogg-Stedman
On Tue, Oct 14, 2014 at 02:51:15PM +1100, Angus Lees wrote:
 1. It would be good if the interesting code came from python sdist/bdists 
 rather than rpms.

I agree in principle, although starting from packages right now lets
us ignore a whole host of issues.  Possibly we'll hit that change down
the road.

 2. I think we should separate out run the server from do once-off setup.
 
 Currently the containers run a start.sh that typically sets up the database, 
 runs the servers, creates keystone users and sets up the keystone catalog.  
 In 
 something like k8s, the container will almost certainly be run multiple times 
 in parallel and restarted numerous times, so all those other steps go against 
 the service-oriented k8s ideal and are at-best wasted.

All the existing containers [*] are designed to be idempotent, which I
think is not a bad model.  Even if we move initial configuration out
of the service containers I think that is a goal we want to preserve.

I pursued exactly the model you suggest on my own when working on an
ansible-driven workflow for setting things up:

  https://github.com/larsks/openstack-containers

Ansible made it easy to support one-off batch containers which, as
you say, aren't exactly supported in Kubernetes.  I like your
(ab?)use of restartPolicy; I think that's worth pursuing.

[*] That work, which includes rabbitmq, mariadb, keystone, and glance.

 I'm open to whether we want to make these as lightweight/independent as 
 possible (every daemon in an individual container), or limit it to one per 
 project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one 
 container).

My goal is one-service-per-container, because that generally makes the
question of process supervision and log collection a *host* problem
rather than a *container* problem. It also makes it easier to scale an
individual service, if that becomes necessary.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Steven Dake

Angus,

On 10/13/2014 08:51 PM, Angus Lees wrote:

I've been reading a bunch of the existing Dockerfiles, and I have two humble
requests:


1. It would be good if the interesting code came from python sdist/bdists
rather than rpms.

This will make it possible to rebuild the containers using code from a private
branch or even unsubmitted code, without having to go through a redhat/rpm
release process first.

I care much less about where the python dependencies come from. Pulling them
from rpms rather than pip/pypi seems like a very good idea, given the relative
difficulty of caching pypi content and we also pull in the required C, etc
libraries for free.


With this in place, I think I could drop my own containers and switch to
reusing kolla's for building virtual testing environments.  This would make me
happy.

I've captured this requirement here:
https://blueprints.launchpad.net/kolla/+spec/run-from-master

I also believe it would be interesting to run from master or a stable 
branch for CD.  Unfortunately I'm still working on the nova-compute 
docker code, but if someone comes along and picks up that blueprint, I 
expect it will get implemented :)  Maybe that could be you.




2. I think we should separate out run the server from do once-off setup.

Currently the containers run a start.sh that typically sets up the database,
runs the servers, creates keystone users and sets up the keystone catalog.  In
something like k8s, the container will almost certainly be run multiple times
in parallel and restarted numerous times, so all those other steps go against
the service-oriented k8s ideal and are at-best wasted.

I suggest making the container contain the deployed code and offer a few thin
scripts/commands for entrypoints.  The main replicationController/pod _just_
starts the server, and then we have separate pods (or perhaps even non-k8s
container invocations) that do initial database setup/migrate, and post-
install keystone setup.
The server may not start before the configuration of the server is 
complete.  I guess I don't quite understand what you mean here when 
you say we have separate pods that do the initial database setup/migrate.  
Do you mean to have dependencies in some way, or, for example:


glance-registry-setup-pod.yaml - the glance registry pod descriptor 
which sets up the db and keystone
glance-registry-pod.yaml - the glance registry pod descriptor which 
starts the application and waits for db/keystone setup


and start these two pods as part of the same selector (glance-registry)?

That idea sounds pretty appealing although probably won't be ready to go 
for milestone #1.


Regards,
-steve


I'm open to whether we want to make these as lightweight/independent as
possible (every daemon in an individual container), or limit it to one per
project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one
container).   I think the differences are run-time scalability and resource-
attribution vs upfront coding effort and are not hugely significant either way.

Post-install catalog setup we can combine into one cross-service setup like
tripleO does[1].  Although k8s doesn't have explicit support for batch tasks
currently, I'm doing the pre-install setup in restartPolicy: onFailure pods
currently and it seems to work quite well[2].

(I'm saying post install catalog setup, but really keystone catalog can
happen at any point pre/post aiui.)

[1] 
https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-endpoints
[2] 
https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Everett Toews
On Oct 14, 2014, at 8:57 AM, Jay Pipes jaypi...@gmail.com wrote:

 I personally think proposing patches to an openstack-api repository is the 
 most effective way to make those proposals. Etherpads and wiki pages are fine 
 for dumping content, but IMO, we don't need to dump content -- we already 
 have plenty of it. We need to propose guidelines for *new* APIs to follow.

+1

I’m all for putting a stake in the ground (in the form of docs in a repo) and 
having people debate that. I think it results in a much more focused discussion 
as opposed to dumping content into an etherpad/wiki page and trying to wade 
through it. If people want to dump content somewhere and use that to help 
inform their contributions to the repo, that’s okay too.

Another big benefit of putting things in a repo is provenance. Guidelines like 
these can be...contentious...at times. Having a clear history of how a 
guideline got into a repo is very valuable as you can link newcomers who 
challenge a guideline to the history of how it got there, who approved it, and 
the tradeoffs that were considered during the review process. Of course, this 
is possible with etherpad/wiki too but it’s more difficult to reconstruct the 
history.

Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Lance Bragstad
I found a couple of free times available for a weekly meeting if people are
interested:

https://review.openstack.org/#/c/128332/2

Not sure if a meeting time has been hashed out already or not, and if it
has I'll change the patch accordingly. If not, we can iterate on possible
meeting times in the review if needed. This was to just get the ball
rolling if we want a weekly meeting. I proposed one review for Thursdays at
2100 and a second patch set for 2000, UTC. That can easily change, but
those were two times that didn't conflict with the existing meeting
schedules in #openstack-meeting.



On Tue, Oct 14, 2014 at 3:55 AM, Thierry Carrez thie...@openstack.org
wrote:

 Jay Pipes wrote:
  On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
  I guess we could also start fleshing out in the repo how we'll work in
  practice too (eg once the document is stable what process do we have
  for making changes - two +2's is probably not adequate for something
  like this).
 
  We can make it work exactly like the openstack/governance repo, where
  ttx has the only ability to +2/+W approve a patch for merging, and he
  tallies a majority vote from the TC members, who vote -1 or +1 on a
  proposed patch.
 
  Instead of ttx, though, we can have an API working group lead selected
  from the set of folks currently listed as committed to the effort?

 Yes, the working group should select a chair who would fill the same
 role I do for the TC (organize meetings, push agenda, tally votes on
 reviews, etc.)

 That would be very helpful in keeping that diverse group on track. Now
 you just need some volunteer :)

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Juno RC3 available

2014-10-14 Thread Thierry Carrez
Hello everyone,

Due to two last-minute issues discovered in testing the published
Ceilometer 2014.2 RC2, we generated a new Juno release candidate. You
can find the list of bugfixes in this RC and a link to a source tarball at:

https://launchpad.net/ceilometer/juno/juno-rc3

At this point, only show-stoppers would warrant a release candidate
respin, so this RC3 is very likely to be formally released as the final
Ceilometer 2014.2 on Thursday. You are therefore strongly encouraged to
give it a last-minute test round and validate this tarball!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/ceilometer/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/ceilometer/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Travels tips for the Paris summit

2014-10-14 Thread Adrien Cunin
Hi everyone,

Inspired by the travel tips published for the HK summit, the French
OpenStack user group wrote a similar wiki page for Paris:

https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips

Also note that if you want some local information or want to talk about
user groups during the summit, we will have a booth in the marketplace
expo hall (location: E47).

Adrien,
On behalf of OpenStack-fr



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Heat templates for kubernetes + docker

2014-10-14 Thread Lars Kellogg-Stedman
This came up briefly on the meeting yesterday, but I wanted to bring
it to a wider audience.

I know some folks out there are using the Heat templates I put
together for setting up a simple kubernetes environment.  I have
recently added support for the Gluster shared filesystem; you'll find
it in the feature/gluster branch:

  https://github.com/larsks/heat-kubernetes/tree/feature/gluster

Once everything is booted, you can create a volume:

  # gluster volume create mariadb replica 2 \
192.168.113.5:/bricks/mariadb 192.168.113.4:/bricks/mariadb
  volume create: mariadb: success: please start the volume to access data
  # gluster volume start mariadb
  volume start: mariadb: success

And then immediately access that volume under the /gluster autofs
mountpoint (e.g., /gluster/mariadb).  You can use this in
combination with Kubernetes volumes to allocate storage to containers
that will be available on all of the minions.  For example, you could
use a pod configuration like this:

  desiredState:
manifest:
  volumes:
- name: mariadb-data
  source:
hostDir:
  path: /gluster/mariadb
  containers:
  - env:
- name: DB_ROOT_PASSWORD
  value: password
image: kollaglue/fedora-rdo-mariadb
name: mariadb
ports:
- containerPort: 3306
volumeMounts:
  - name: mariadb-data
mountPath: /var/lib/mysql
  id: mariadb-1
  version: v1beta1
  id: mariadb
  labels:
name: mariadb

With this configuration, you could kill the mariadb container, have it
recreated on another minion, and you would still have access to all the
data.

This is meant simply as a way to experiment with storage and
kubernetes.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Susanne Balle
Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:

 Diagrams in jpeg format..

 On 10/12/14 10:06 PM, Phillip Toohill phillip.tooh...@rackspace.com
 wrote:

 Hello all,
 
  Here's some additional diagrams and docs. Not incredibly detailed, but
 should get the point across.
 
 Feel free to edit if needed.
 
 Once we come to some kind of agreement and understanding I can rewrite
 these more to be thorough and get them in a more official place. Also, I
  understand there are other use cases not shown in the initial docs, so this
 is a good time to collaborate to make this more thought out.
 
 Please feel free to ping me with any questions,
 
 Thank you
 
 
 Google DOCS link for FLIP folder:
 
  https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing
 
 -diagrams are draw.io based and can be opened from within Drive by
 selecting the appropriate application.
 
 On 10/7/14 2:25 PM, Brandon Logan brandon.lo...@rackspace.com wrote:
 
 I'll add some more info to this as well:
 
 Neutron LBaaS creates the neutron port for the VIP in the plugin layer
 before drivers ever have any control.  In the case of an async driver,
 it will then call the driver's create method, and then return to the
 user the vip info.  This means the user will know the VIP before the
 driver even finishes creating the load balancer.
 
 So if Octavia is just going to create a floating IP and then associate
 that floating IP to the neutron port, there is the problem of the user
  not ever seeing the correct VIP (which would be the floating IP).
 
 So really, we need to have a very detailed discussion on what the
 options are for us to get this to work for those of us intending to use
 floating ips as VIPs while also working for those only requiring a
 neutron port.  I'm pretty sure this will require changing the way V2
 behaves, but there's more discussion points needed on that.  Luckily, V2
 is in a feature branch and not merged into Neutron master, so we can
 change it pretty easily.  Phil and I will bring this up in the meeting
 tomorrow, which may lead to a meeting topic in the neutron lbaas
 meeting.
 
 Thanks,
 Brandon
 
 
 On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
  Hello All,
 
  I wanted to start a discussion on floating IP management and ultimately
  decide how the LBaaS group wants to handle the association.
 
   There is a need to utilize floating IPs (FLIPs) and their API calls to
   associate a FLIP to the neutron port that we currently spin up.
 
  See DOCS here:
 
  
 
  http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
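  
   For reference, a rough sketch of what that association looks like through
   python-neutronclient follows; the IDs and credentials are placeholders and
   error handling is omitted, so treat it as an illustration only:
  
      from neutronclient.v2_0 import client as neutron_client
  
      neutron = neutron_client.Client(username=USER, password=PASSWORD,
                                      tenant_name=TENANT, auth_url=AUTH_URL)
  
      # Allocate a floating IP from the external network...
      flip = neutron.create_floatingip(
          {'floatingip': {'floating_network_id': EXT_NET_ID}})['floatingip']
  
      # ...and point it at the load balancer's VIP port.
      neutron.update_floatingip(
          flip['id'], {'floatingip': {'port_id': VIP_PORT_ID}})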
 
  Currently, LBaaS will make internal service calls (clean interface :/)
 to create and attach a Neutron port.
  The VIP from this port is added to the Loadbalancer object of the Load
 balancer configuration and returned to the user.
 
  This creates a bit of a problem if we want to associate a FLIP with the
 port and display the FLIP to the user instead of
  the ports VIP because the port is currently created and attached in the
 plugin and there is no code anywhere to handle the FLIP
  association.
 
  To keep this short and to the point:
 
  We need to discuss where and how we want to handle this association. I
 have a few questions to start it off.
 
  Do we want to add logic in the plugin to call the FLIP association API?
 
   If we have logic in the plugin, should we have configuration that
  identifies whether to use/return the FLIP instead of the port VIP?
 
  Would we rather have logic for FLIP association in the drivers?
 
  If logic is in the drivers would we still return the port VIP to the
 user then later overwrite it with the FLIP?
  Or would we have configuration to not return the port VIP initially,
 but an additional query would show the associated FLIP.
 
 
  Is there an internal service call for this, and if so would we use it
 instead of API calls?
 
 
   There's plenty of other thoughts and questions to be asked and discussed
  in regards to FLIP handling;
   hopefully this will get us going. I'm certain I may not be completely
  understanding this, and
   it is the hope of this email to clarify any uncertainties.
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Fox, Kevin M
Absolutely this needs splitting out. I ran into an issue a few years ago with 
this antipattern with the mythtv folks. The myth client on my laptop got 
upgraded and it was overly helpful in that it connected directly to the 
database and upgraded the schema for me, breaking the server, and all the other 
clients, and forcing an unplanned upgrade of everything else. :/


From: Chris Dent [chd...@redhat.com]
Sent: Tuesday, October 14, 2014 3:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

On Tue, 14 Oct 2014, Angus Lees wrote:

 2. I think we should separate out run the server from do once-off setup.

Yes! Otherwise it feels like the entire point of using containers
and dockerfiles is rather lost.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Question about the OVS_PHYSICAL_BRIDGE attribute defined in localrc

2014-10-14 Thread Danny Choi (dannchoi)
Hi,

When I have OVS_PHYSICAL_BRIDGE=br-p1p1 defined in localrc, devstack creates 
the OVS bridge br-p1p1.

localadmin@qa4:~/devstack$ sudo ovs-vsctl show
5f845d2e-9647-47f2-b92d-139f6faaf39e
Bridge br-p1p1 
Port phy-br-p1p1
Interface phy-br-p1p1
type: patch
options: {peer=int-br-p1p1}
Port br-p1p1
Interface br-p1p1
type: internal

However, no physical port is added to it.  I have to manually do it.

localadmin@qa4:~/devstack$ sudo ovs-vsctl add-port br-p1p1 p1p1
localadmin@qa4:~/devstack$ sudo ovs-vsctl show
5f845d2e-9647-47f2-b92d-139f6faaf39e
Bridge br-p1p1
Port phy-br-p1p1
Interface phy-br-p1p1
type: patch
options: {peer=int-br-p1p1}
Port br-p1p1
Interface br-p1p1
type: internal
Port "p1p1"
Interface "p1p1"


Is this expected behavior?

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon]Blueprint- showing a small message to the user for browser incompatibility

2014-10-14 Thread Solly Ross
I'm not sure User Agent detection is the best way to go.

Suppose you do UA sniffing and say "show the message unless the UA is one of 
X".  Then, if there's a browser which fully supports your feature set, but 
doesn't have a known UA (or someone set a custom UA on their browser), the 
message will still show, which could be confusing to users.

On the other hand, if you do UA sniffing and say "show the message if the UA is 
one of X", then a browser that didn't support the features, but didn't have a 
matching User Agent, wouldn't show the message.

If you go with User Agent sniffing, I'd say the latter way (a blacklist) 
preferable, since it's probably easier to come up with currently unsupported 
browsers than it is to predict future browsers.

Have you identified which specific browser features are needed?  You could use 
Modernizr and then warn if the requisite feature set is not implemented.  This 
way, you simply check for the feature set required.

Best Regards,
Solly Ross

- Original Message -
 From: Nikunj Aggarwal nikunj.aggar...@hp.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, October 14, 2014 4:53:42 AM
 Subject: [openstack-dev] [horizon]Blueprint- showing a small message to the 
 user for browser incompatibility
 
 
 
 Hi Everyone,
 
 
 
 I have submitted a blueprint which targets the issues end-users face
 when they are using Horizon in old browsers. So, this blueprint aims
 to overcome this problem by showing a small message on the Horizon login
 page.
 
 
 
 I urge all of the Horizon community to take a look and share your views.
 
 
 
 https://blueprints.launchpad.net/horizon/+spec/detecting-browser
 
 
 
 
 
 
 
 Regards,
 Nikunj
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-14 Thread Anita Kuno
On 10/14/2014 11:35 AM, Adrien Cunin wrote:
 Hi everyone,
 
 Inspired by the travels tips published for the HK summit, the
 French OpenStack user group wrote a similar wiki page for Paris:
 
 https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips
 
 Also note that if you want some local informations or want to talk
 about user groups during the summit we will have a booth in the
 market place expo hall (location: E47).
 
 Adrien, On behalf of OpenStack-fr
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
This is awesome, thanks Adrien.

I have a request. Is there any way to expand the Food section to
include how to find vegetarian restaurants? Any help here appreciated.

Thanks so much for creating this wikipage,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Ian Cordasco


On 10/14/14, 10:22 AM, Everett Toews everett.to...@rackspace.com wrote:

On Oct 14, 2014, at 8:57 AM, Jay Pipes jaypi...@gmail.com wrote:

 I personally think proposing patches to an openstack-api repository is
the most effective way to make those proposals. Etherpads and wiki pages
are fine for dumping content, but IMO, we don't need to dump content --
we already have plenty of it. We need to propose guidelines for *new*
APIs to follow.

+1

I’m all for putting a stake in the ground (in the form of docs in a repo)
and having people debate that. I think it results in a much more focused
discussion as opposed to dumping content into an etherpad/wiki page and
trying to wade through it. If people want to dump content somewhere and
use that to help inform their contributions to the repo, that’s okay too.

Another big benefit of putting things in a repo is provenance. Guidelines
like these can be...contentious...at times. Having a clear history of how
a guideline got into a repo is very valuable as you can link newcomers
who challenge a guideline to the history of how it got there, who
approved it, and the tradeoffs that were considered during the review
process. Of course, this is possible with etherpad/wiki too but it’s more
difficult to reconstruct the history.

Everett

Also, I don’t think the first version of the standards is going to get
everything right, so setting something as the starting point seems the most
reasonable. Waiting entire cycles to include a new piece of the standard
seems a bit impractical.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-10-14 Thread Salvatore Orlando
Hi Doug,

do you know if the existing quota oslo-incubator module already has some
active consumers?
In the meanwhile I've pushed a spec to neutron-specs for improving quota
management there [1]

Now, I can either work on the oslo-incubator module and leverage it in
Neutron, or develop the quota module in Neutron, and move it to
oslo-incubator once we validate it with Neutron. The latter approach seems
easier from a workflow perspective, as it avoids the intermediate step of
moving code from oslo-incubator to Neutron. On the other hand it will delay
adoption in oslo-incubator.

What's your opinion?

Regards,
Salvatore

[1] https://review.openstack.org/#/c/128318/
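
To make the discussion a bit more concrete, here is a minimal sketch of what
such a shared enforcement helper could look like. The names and semantics are
illustrative only, not the existing oslo-incubator module or a proposed API:

    class OverQuota(Exception):
        def __init__(self, resource, limit, in_use, requested):
            msg = ("Quota exceeded for %s: limit=%d, in use=%d, requested=%d"
                   % (resource, limit, in_use, requested))
            super(OverQuota, self).__init__(msg)

    class QuotaEngine(object):
        """Local quota enforcement; limits and usage counting are pluggable."""

        def __init__(self, limits, count_usage):
            self._limits = limits            # e.g. {'port': 50, 'network': 10}
            self._count_usage = count_usage  # callable(resource, tenant_id) -> int

        def check(self, tenant_id, deltas):
            """Raise OverQuota if the request would exceed any limit."""
            for resource, requested in deltas.items():
                limit = self._limits.get(resource)
                if limit is None or limit < 0:
                    continue  # treat missing/negative limits as unlimited
                in_use = self._count_usage(resource, tenant_id)
                if in_use + requested > limit:
                    raise OverQuota(resource, limit, in_use, requested)

The interesting design questions (reservations vs. simple counting, where the
limits are stored, how usage is counted efficiently) are exactly what the spec
and the summit session should settle.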

On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com wrote:

  Salvatore, Joe,
 
  We do have this at the moment:
 
 
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims

 If someone wants to drive creating a useful library during kilo, please
 consider adding the topic to the etherpad we’re using to plan summit
 sessions and then come participate in the Oslo meeting this Friday 16:00
 UTC.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 Doug

 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
 
  Keeping the enforcement local (same way policy works today) helps
 limit
  the fragility, big +1 there.
 
  I also agree with Vish, we need a uniform way to talk about quota
  enforcement similar to how we have a uniform policy language /
 enforcement
  model (yes I know it's not perfect, but it's far closer to uniform
 than
  quota management is).
 
 
  It sounds like maybe we should have an oslo library for quotas?
 Somewhere
  where we can share the code,but keep the operations local to each
 service.
 
 
  This is what I had in mind as well. A simple library for quota
 enforcement
  which can be used regardless of where and how you do it, which might
 depend
  on the application business logic, the WSGI framework in use, or other
  factors.
 
 
 
 
  If there is still interest of placing quota in keystone, let's talk
 about
  how that will work and what will be needed from Keystone . The
 previous
  attempt didn't get much traction and stalled out early in
 implementation. If
  we want to revisit this lets make sure we have the resources needed
 and
  spec(s) in progress / info on etherpads (similar to how the
 multitenancy
  stuff was handled at the last summit) as early as possible.
 
 
  Why not centralize quota management via the python-openstackclient,
 what
  is the benefit of getting keystone involved?
 
 
  Providing this through the openstack client in my opinion has the
  disadvantage that users which either use the REST API direct or write
 their
  own clients won't leverage it. I don't think it's a reasonable
 assumption
  that everybody will use python-openstackclient, is it?
 
  Said that, storing quotas in keystone poses a further challenge to the
  scalability of the system, which we shall perhaps address by using
  appropriate caching strategies and leveraging keystone notifications.
 Until
  we get that, I think that the openstack client will be the best way of
  getting a unified quota management experience.
 
  Salvatore
 
 
 
  Cheers,
  Morgan
 
  Sent via mobile
 
 
  On Friday, October 3, 2014, Salvatore Orlando sorla...@nicira.com
  wrote:
 
  Thanks Vish,
 
  this seems a very reasonable first step as well - and since most
  projects would be enforcing quotas in the same way, the shared
 library would
  be the logical next step.
  After all this is quite the same thing we do with authZ.
 
  Duncan is expressing valid concerns which in my opinion can be
 addressed
  with an appropriate design - and a decent implementation.
 
  Salvatore
 
  On 3 October 2014 18:25, Vishvananda Ishaya vishvana...@gmail.com
  wrote:
 
  The proposal in the past was to keep quota enforcement local, but to
  put the resource limits into keystone. This seems like an obvious
 first
  step to me. Then a shared library for enforcing quotas with decent
  performance should be next. The quota calls in nova are extremely
  inefficient right now and it will only get worse when we try to add
  hierarchical projects and quotas.
 
  Vish
 
  On Oct 3, 2014, at 7:53 AM, Duncan Thomas duncan.tho...@gmail.com
  wrote:
 
  Taking quota out of the service / adding remote calls for quota
  management is going to make things fragile - you've somehow got to
  deal with the cases where your quota manager is slow, goes away,
  hiccups, drops connections etc. You'll also need some way of
  reconciling actual usage against quota usage periodically, to
 detect
  problems.
 
  On 3 October 2014 15:03, Salvatore Orlando sorla...@nicira.com
  wrote:
 

Re: [openstack-dev] [horizon]Blueprint- showing a small message to the user for browser incompatibility

2014-10-14 Thread Aggarwal, Nikunj
Hi Solly,

You are right with your questions about users setting a custom UA on their 
browser. During my discussion with other Horizon community members on IRC, 
we decided that Horizon should not care about users setting a custom UA for 
their browser, because it is not our job to identify and fix that. If a user 
sets a custom UA for their browser, it means they want that irrespective of 
the outcome.

We also discussed using Modernizr and implementing graceful degradation, but 
this works best for bigger features like canvas or SVG; for smaller CSS 
features that break in older versions of IE, like IE 9, it would be a huge 
change, and I personally think it would be a waste of resources and would 
make the code more complicated.

Instead, the Horizon folks came to the conclusion that we will identify the browser 
type and version to deal with legacy browsers like older IE or Firefox or any 
other browser, and for other major features we can use feature detection with 
Modernizr.

We are targeting all major browsers which are listed on this page: 
https://wiki.openstack.org/wiki/Horizon/BrowserSupport

I also think that by going with this approach we will minimize the code 
complexity, and this small feature will greatly improve the UX 
for end-users.
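
For illustration only, the server-side part of such a blacklist check could be
as small as a Django context processor along these lines; the UA patterns and
the template variable name are placeholders, not the proposed implementation:

    import re

    # Known-legacy browsers we want to warn about (illustrative patterns).
    LEGACY_UA_PATTERNS = [
        re.compile(r"MSIE [1-8]\."),                 # IE 8 and older
        re.compile(r"Firefox/(?:[1-9]|1\d|2\d)\."),  # very old Firefox builds
    ]

    def browser_support(request):
        """Expose a flag the login template can use to show a warning banner."""
        ua = request.META.get("HTTP_USER_AGENT", "")
        unsupported = any(p.search(ua) for p in LEGACY_UA_PATTERNS)
        return {"browser_unsupported": unsupported}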

Thank you for the reply and your views. Also, can I put the contents of your 
reply as a comment on the Blueprint page? I think many others will also have 
the same kind of questions.

 
Thanks & Regards,
Nikunj

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Tuesday, October 14, 2014 9:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon]Blueprint- showing a small message to the 
user for browser incompatibility

I'm not sure User Agent detection is the best way to go.

Suppose you do UA sniffing and say show the message unless the UA is one of 
X.  Then, if there's a browser which fully supports your feature set, but 
doesn't have a known UA (or someone set a custom UA on their browser), the 
message will still show, which could be confusing to users.

On the other hand, if you do UA sniffing and say show the message if the UA is 
one of X, then a browser that didn't support the features, but didn't have a 
matching User Agent, wouldn't show the message.

If you go with User Agent sniffing, I'd say the latter way (a blacklist) 
preferable, since it's probably easier to come up with currently unsupported 
browsers than it is to predict future browsers.

Have you identified which specific browser features are needed?  You could use 
Modernizr and then warn if the requisite feature set it not implemented.  This 
way, you simply check for the feature set required.

Best Regards,
Solly Ross

- Original Message -
 From: Nikunj Aggarwal nikunj.aggar...@hp.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, October 14, 2014 4:53:42 AM
 Subject: [openstack-dev] [horizon]Blueprint- showing a small message 
 to the user for browser incompatibility
 
 
 
 Hi Everyone,
 
 
 
 I have submitted a blueprint which targets the issues end-users are 
 faces when they are using Horizon in the old browsers. So, this 
 blueprint targets to overcome this problem by showing a small message 
 on the Horizon login page.
 
 
 
 I urge to all the Horizon community to take a look and share your views.
 
 
 
 https://blueprints.launchpad.net/horizon/+spec/detecting-browser
 
 
 
 
 
 
 
 Regards,
 Nikunj
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Jay Pipes

On 10/14/2014 10:49 AM, Lars Kellogg-Stedman wrote:

On Tue, Oct 14, 2014 at 02:51:15PM +1100, Angus Lees wrote:

1. It would be good if the interesting code came from python sdist/bdists
rather than rpms.


I agree in principle, although starting from packages right now lets
us ignore a whole host of issues.  Possibly we'll hit that change down
the road.


2. I think we should separate out run the server from do once-off setup.

Currently the containers run a start.sh that typically sets up the database,
runs the servers, creates keystone users and sets up the keystone catalog.  In
something like k8s, the container will almost certainly be run multiple times
in parallel and restarted numerous times, so all those other steps go against
the service-oriented k8s ideal and are at-best wasted.


All the existing containers [*] are designed to be idempotent, which I
think is not a bad model.  Even if we move initial configuration out
of the service containers I think that is a goal we want to preserve.

I pursued exactly the model you suggest on my own when working on an
ansible-driven workflow for setting things up:

   https://github.com/larsks/openstack-containers

Ansible made it easy to support one-off batch containers which, as
you say, aren't exactly supported in Kubernetes.  I like your
(ab?)use of restartPolicy; I think that's worth pursuing.


I agree that Ansible makes it easy to support one-off batch containers. 
Ansible rocks.


Which brings me to my question (admittedly, I am a Docker n00b, so 
please forgive me for the question)...


Can I use your Dockerfiles to build Ubuntu/Debian images instead of only 
Fedora images? Seems to me that the image-based Docker system makes the 
resulting container quite brittle -- since a) you can't use 
configuration management systems like Ansible to choose which operating 
system or package management tools you wish to use, and b) any time you 
make a change to the image, you need to regenerate the image from a new 
Dockerfile and, presumably, start a new container with the new image, 
shut down the old container, and then change all the other containers 
that were linked with the old container to point to the new one. All of 
this would be a simple "apt-get upgrade -y" for things like security 
updates, which for the most part, wouldn't require any such container 
rebuilds.


So... what am I missing with this? What makes Docker images more ideal 
than straight up LXC containers and using Ansible to control 
upgrades/changes to configuration of the software on those containers?


Again, sorry for the n00b question!

Best,
-jay


[*] That work, which includes rabbitmq, mariadb, keystone, and glance.


I'm open to whether we want to make these as lightweight/independent as
possible (every daemon in an individual container), or limit it to one per
project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one
container).


My goal is one-service-per-container, because that generally makes the
question of process supervision and log collection a *host* problem
rather than a *container* problem. It also makes it easier to scale an
individual service, if that becomes necessary.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-14 Thread Sylvain Bauza


Le 14/10/2014 18:29, Anita Kuno a écrit :

On 10/14/2014 11:35 AM, Adrien Cunin wrote:

Hi everyone,

Inspired by the travels tips published for the HK summit, the
French OpenStack user group wrote a similar wiki page for Paris:

https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips

Also note that if you want some local informations or want to talk
about user groups during the summit we will have a booth in the
market place expo hall (location: E47).

Adrien, On behalf of OpenStack-fr



___ OpenStack-dev
mailing list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


This is awesome, thanks Adrien.

I have a request. Is there any way to expand the Food section to
include how to find vegetarian restaurants? Any help here appreciated.


Well, this is a tough question. We usually make use of TripAdvisor or 
other French rating websites for finding good places to eat, but some 
small restaurants don't provide this kind of information. There is no 
official requirement to provide these details, for example.


What I can suggest is looking at the menu (restaurants are required to post 
it outside) and checking for the word 'Végétarien'.


Will amend the wiki tho with these details.

-Sylvain

Thanks so much for creating this wikipage,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Ed Leafe

On 10/14/2014 10:22 AM, Everett Toews wrote:
 I personally think proposing patches to an openstack-api repository is the 
 most effective way to make those proposals. Etherpads and wiki pages are 
 fine for dumping content, but IMO, we don't need to dump content -- we 
 already have plenty of it. We need to propose guidelines for *new* APIs to 
 follow.
 +1
 
 I’m all for putting a stake in the ground (in the form of docs in a repo) and 
 having people debate that. I think it results in a much more focused 
 discussion as opposed to dumping content into an etherpad/wiki page and 
 trying to wade through it. If people want to dump content somewhere and use 
 that to help inform their contributions to the repo, that’s okay too.

+1 from me too. Etherpads for active discussions tend to devolve into a
mess.


-- Ed Leafe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-14 Thread Jay Pipes

On 10/13/2014 05:59 PM, Russell Bryant wrote:

Nice timing.  I was working on a blog post on this topic.

On 10/13/2014 05:40 PM, Fei Long Wang wrote:

I think Adam is talking about this bp:
https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically

For now, we're using Nagios probe/event to trigger the Nova evacuate
command, but I think it's possible to do that in Nova if we can find a
good way to define the trigger policy.


I actually think that's the right way to do it.


+1. Not everything needs to be built-in to Nova. This very much sounds 
like something that should be handled by PaaS-layer things that can 
react to a Nagios notification (or any other event) and take some sort 
of action, possibly using administrative commands like nova evacuate.


 There are a couple of

other things to consider:

1) An ideal solution also includes fencing.  When you evacuate, you want
to make sure you've fenced the original compute node.  You need to make
absolutely sure that the same VM can't be running more than once,
especially when the disks are backed by shared storage.

Because of the fencing requirement, another option would be to use
Pacemaker to orchestrate this whole thing.  Historically Pacemaker
hasn't been suitable to scale to the number of compute nodes an
OpenStack deployment might have, but Pacemaker has a new feature called
pacemaker_remote [1] that may be suitable.

2) Looking forward, there is a lot of demand for doing this on a per
instance basis.  We should decide on a best practice for allowing end
users to indicate whether they would like their VMs automatically
rescued by the infrastructure, or just left down in the case of a
failure.  It could be as simple as a special tag set on an instance [2].


Please note that server instance tagging (thanks for the shout-out, BTW) 
is intended for only user-defined tags, not system-defined metadata 
which is what this sounds like...


Of course, one might implement some external polling/monitoring system 
using server instance tags, which might do a nova list --tag $TAG --host 
$FAILING_HOST, and initiate a migrate for each returned server instance...
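
As a rough sketch (not a proposal), such an external monitor might look
something like the following with python-novaclient. The tag filter Jay
mentions is hypothetical, credentials are placeholders, and the exact client
signatures vary between novaclient releases:

    from novaclient import client as nova_client

    def evacuate_host(failing_host, target_host=None):
        # Admin credentials are assumed; error handling and fencing omitted.
        nova = nova_client.Client('2', USER, PASSWORD, TENANT,
                                  auth_url=AUTH_URL)
        servers = nova.servers.list(
            search_opts={'host': failing_host, 'all_tenants': 1})
        for server in servers:
            # on_shared_storage=True assumes instance disks live on shared
            # storage; otherwise rebuild-from-image semantics apply.
            nova.servers.evacuate(server, host=target_host,
                                  on_shared_storage=True)

Fencing the failed host before calling anything like this (as Russell points
out above) is the part such a sketch deliberately leaves out.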


Best,
-jay


[1]
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Remote/
[2] https://review.openstack.org/#/c/127281/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Private external network

2014-10-14 Thread Salvatore Orlando
The blueprint was untargeted mostly because the analysis indicated that
there was no easy solution, and that what we needed was a solution to do
some RBAC on neutron resources.

I think this would be a good addition to the Neutron resource model, and it
would be great if you could start the discussion on the mailing list
exposing your thoughts.

Salvatore

On 14 October 2014 11:50, Édouard Thuleau thul...@gmail.com wrote:

 Hi Salvatore,

  I'd like to propose a blueprint for the next Neutron release that permits
  dedicating an external network to a tenant. For that I thought to rethink
  the conjunction of the two attributes `shared`
  and `router:external` of the network resource.

  I saw that you already initiated work on that topic [1] and [2], but the
  bp was un-targeted in favour of an alternative approach which might be more
  complete. Was that alternative released, or is it still work in progress? I
  want to be sure not to duplicate work/effort.

 [1]
 https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks
 [2]
 https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks

 Regards,
 Édouard.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-14 Thread Anita Kuno
On 10/14/2014 12:40 PM, Sylvain Bauza wrote:
 
 Le 14/10/2014 18:29, Anita Kuno a écrit :
 On 10/14/2014 11:35 AM, Adrien Cunin wrote:
 Hi everyone,

 Inspired by the travels tips published for the HK summit, the
 French OpenStack user group wrote a similar wiki page for Paris:

 https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips

 Also note that if you want some local informations or want to talk
 about user groups during the summit we will have a booth in the
 market place expo hall (location: E47).

 Adrien, On behalf of OpenStack-fr



 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 This is awesome, thanks Adrien.

 I have a request. Is there any way to expand the Food section to
 include how to find vegetarian restaurants? Any help here appreciated.
 
 Well, this is a tough question. We usually make use of TripAdvisor or
 other French noting websites for finding good places to eat, but some
 small restaurant don't provide this kind of information. There is no
 official requirement to provide these details for example.
 
 What I can suggest is when looking at the menu (this is mandatory to put
 it outside of the restaurant) and check for the word 'Végétarien'.
 
 Will amend the wiki tho with these details.
 
 -Sylvain
Thanks Sylvain, I appreciate the pointers. Will wander around and look
at menus outside restaurants. Not hard to do since I love wandering
around the streets of Paris, so easy to walk, nice wide sidewalks.

I'll also check back on the wikipage after you have edited.

Thank you!
Anita.
 Thanks so much for creating this wikipage,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-14 Thread Nathan Kinder


On 10/14/2014 07:42 AM, Tim Hinrichs wrote:
 First, some truth in advertising: I work on Congress (policy as a service), 
 so I’ve mostly given thought to this problem in that context.
 
 1) I agree with the discussion below about creating a token that encodes all 
 the permitted actions for the user.  The cons seem substantial.  

+1

 
 (i) The token will get stale, requiring us to either revoke it when 
 policy/roles change or to live with incorrect access control enforcement 
 until the token expires.

This is a very good point.

 
 (ii) The token could become large, complex, or both.  Suppose the policy is 
 granular enough to put restrictions on the arguments a user is permitted to 
 provide to an action.  The token might end up encoding a significant portion 
 of the policy itself.  Checking if the token permits a given action could be 
 similar computationally to checking the original policy.json file.  
 
 
 2) I like the idea of an out-of-band service that caches a copy of all the 
 policy.jsons and allows users to interrogate/edit them.  I’ve definitely 
 talked to operators who would like this kind of thing.  This would be a 
 low-risk, low-friction solution to the problem because nothing about 
 OpenStack today would need to change.  We’d just add an extra service and 
 tell people to use it—sort of a new UI for the policy.json files.  And we 
 could add interesting functionality, e.g. hypothetical queries such as “if I 
 were to add role X, what changes would that make in my rights?
 
 Perhaps some more context about why users want to know all of the actions 
 they are permitted to execute might help.

I think that there are two questions that a user may have here:

1) What actions can I perform using a particular token?

2) What role(s) do I need to perform a particular action?

For me, the second question is more interesting.  A user likely already
has an idea of a task that they want to perform.  With question number
1, what do I do as a user if the response says that I'm not allowed to
perform the task I'm trying to accomplish?  The answer really doesn't
give me a way to move forward and perform my task.

With question 2, I'm able to find out what exact roles are needed to
perform a specific action.  With this information, I could request a
Keystone token with a subset of my roles that is authorized to perform
the task while leaving out roles that might have a higher level of
authorization.  For instance, why should I need to send a token with the
'admin' role to Nova just to launch an instance if '_member_' is all
that's required?

Another real use case is determining what roles are needed when creating
a trust in Keystone.  If I want to use a trust to allow a service like
Heat or Neutron's LBaaS to perform an action on my behalf, I want to
minimize the authorization that I'm delegating to those services.
Keystone trusts already have the ability to explicitly define the roles
that will be present in the issues trust tokens, but I have no way of
knowing what roles are required to perform a particular action without
consulting the policy.
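
To make question 2 concrete: today the only way to answer it is to read the
service's policy.json yourself. A naive sketch of that follows (illustrative
only; real rules can nest, alias other rules, and reference target attributes,
so this is nowhere near a full policy engine):

    import json
    import re

    def roles_for_action(policy_path, action):
        """Return the role names mentioned in the rule guarding an action."""
        with open(policy_path) as f:
            policy = json.load(f)
        rule = policy.get(action, "")
        # Rules look like "role:admin or rule:owner"; pull out the role atoms.
        return set(re.findall(r"role:([\w-]+)", str(rule)))

    # e.g. roles_for_action("/etc/nova/policy.json", "compute:start")

Something like this is exactly what we should not have to reimplement in every
client, which is why exposing it as a queryable service is appealing.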

-NGK

 
 Tim
 
 
  
 On Oct 14, 2014, at 1:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 


 On 14/10/2014 01:25, Nathan Kinder wrote:


 On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
 Description of the problem: Without attempting an action on an
 endpoint with a current scoped token, it is impossible to know what
 actions are available to a user.


 This is not unusual in the physical world. If you think about all the
 authz tokens you carry around in your pocket (as plastic cards), very
 few of them (if any) list what you are entitled to do with them. This
 gives the issuers and SPs flexibility to dynamically change your
 accesses rights without changing your authorisation. What you can do, in
 general terms, may be written in policy documents that you can consult
 if you wish. So you may wish to introduce a service that is equivalent
 to this (i.e. user may optionally consult some policy advice service).

 If you introduce a service to allow a user to dynamically determine his
 access rights (absolutely), you have to decide what to do about the
 dynamics of this service compared to the lifetime of the keystone token,
 as the rights may change more quickly than the token's lifetime.


 Horizon makes some attempts to solve this issue by sourcing all of
 the policy files from all of the services to determine what a user
 can accomplish with a given role. This is highly inefficient as it
 requires processing the various policy.json files for each request
 in multiple places and presents a mechanism that is not really
 scalable to understand what a user can do with the current
 authorization. Horizon may not be the only service that (in the
 long term) would want to know what actions a token can take.

 This is also extremely useful for being able to actually support
 more restricted tokens as well.  If I as an end user want to request
 a 

[openstack-dev] [Heat] image requirements for Heat software config

2014-10-14 Thread Thomas Spatzier

Hi all,

I have been experimenting a lot with Heat software config to  check out
what works today, and to think about potential next steps.
I've also worked on an internal project where we are leveraging software
config as of the Icehouse release.

I think what we can do now from a user's perspective in a HOT template is
really nice and resonates well also with customers I've talked to.
One of the points where we are constantly having issues, and also got some
push back from customers, are the requirements on the in-instance tools and
the process of building base images.
One observation is that building a base image with all the right stuff
inside sometimes is a brittle process; the other point is that a lot of
customers do not like a lot of requirements on their base images. They want
to maintain one set of corporate base images, with as little modification
on top as possible.

Regarding the process of building base images, the currently documented way
[1] of using diskimage-builder turns out to be a bit unstable sometimes.
Not because diskimage-builder is unstable, but probably because it pulls in
components from a couple of sources:
#1 we have a dependency on implementation of the Heat engine of course (So
this is not pulled in to the image building process, but the dependency is
there)
#2 we depend on features in python-heatclient (and other python-* clients)
#3 we pull in implementation from the heat-templates repo
#4 we depend on tripleo-image-elements
#5 we depend on os-collect-config, os-refresh-config and os-apply-config
#6 we depend on diskimage-builder itself

Heat itself and python-heatclient are reasonably well in synch because
there is a release process for both, so we can tell users with some
certainty that a feature will work with release X of OpenStack and Heat and
version x.z.y of python-heatclient. For the other 4 sources, success
sometimes depends on the time of day when you try to build an image
(depending on what changes are currently included in each repo). So
basically there does not seem to be a consolidated release process across
all that is currently needed for software config.

The ideal solution would be to have one self-contained package that is easy
to install on various distributions (an rpm, deb, MSI ...).
Secondly, it would be ideal to not have to bake additional things into the
image but doing bootstrapping during instance creation based on an existing
cloud-init enabled image. For that we would have to strip requirements down
to a bare minimum required for software config. One thing that comes to my
mind is the cirros software config example [2] that Steven Hardy created.
It is admittedly not up to what one could do with an image built according
to [1], but on the other hand it is really slick, whereas [1] installs a whole
set of things into the image (some of which do not really seem to be needed
for software config).

Another issue that comes to mind: what about operating systems not
supported by diskimage-builder (Windows), or other hypervisor platforms?

Anyway, these are not really suggestions from my side but more observations and
thoughts. I wanted to share them and raise some discussion on possible
options.

Regards,
Thomas

[1]
https://github.com/openstack/heat-templates/blob/master/hot/software-config/elements/README.rst
[2]
https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates/cirros-example


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Packaging Sinon.JS as xstatic

2014-10-14 Thread Timur Sufiev
Adding yesterday discussion of JS libs for unit-testing in
#openstack-horizon: http://paste2.org/B9xN1yI4

On Mon, Oct 13, 2014 at 5:01 PM, Timur Sufiev tsuf...@mirantis.com wrote:

 Hello folks!

 Discussing the proposed Sinon.js dependency to Horizon on the last meeting
 has brought quite an expected question: why should we add it when there is
 already such wonderful testing framework as Jasmine? And if you need some
 testing feature present in Jasmine, why not rewrite your QUnit test in
 Jasmine? I was not ready to answer the question at that moment so I took a
 pause to learn more about Jasmine capabilities compared to Sinon.JS.

 First of all, I googled to see if someone did this investigation before.
 Unfortunately, I haven't found much: judging from the links [1], [2] both
 Jasmine and Sinon.JS provide the same functionality, while Sinon.JS is a
 bit more flexible and could be more convenient in some cases (I guess those
 cases are specific to the project being tested).

 Then I had studied Jasmine/Sinon.JS docs and repos myself and have found
 that:
 * both project repos have lots of contributors and fresh commits
 * indeed, they provide roughly the same functionality: matchers/testing
 spies/stubs/mocks/faking timers/AJAX mocking, but
 * to use AJAX mocking in Jasmine, you need to use a separate library [5],
 which I guess means another xstatic dependency besides xstatic-jasmine if
 you want to mock AJAX calls via Jasmine
 * Sinon.JS has a much more comprehensive documentation [6] than Jasmine
 [3], [4]

 So, while Horizon doesn't have too many QUnit tests meaning that they
 could be rewritten in Jasmine in a relatively short time, it seems that in
 order to mock AJAX requests (the reason I looked to the Sinon.JS) in
 Jasmine another xstatic dependency should be added (Radomir Dopieralski
 could correct me here if I'm wrong). Also, I've found quite an interesting
  feature in Sinon.JS's AJAX mocks: it is possible to mock only a filtered
  set of server calls and let others pass through [7] - I didn't find such a
  feature in the Jasmine ajax.js docs. On the other hand, reducing all JS
  unit-tests to one framework is a good thing also, and given that Jasmine is
  officially used for Angular.js testing, I'd rather see Jasmine as the 'only
  Horizon JS unit-testing framework' than QUnit. But then again: wanting to
  have AJAX mocks means adding a 'jasmine-ajax' dependency to the already
  existing 'jasmine' (why not add Sinon.JS then?).

 Summarizing all the things I've written so far, I would:
 * replace QUnit with Jasmine (=remove QUnit dependency)
 * add Sinon.JS just to have its AJAX-mocking features.

 [1] http://stackoverflow.com/questions/15002541/does-jasmine-need-sinon-js
 [2]
 http://stackoverflow.com/questions/12216053/whats-the-advantage-of-using-sinon-js-over-jasmines-built-in-spys
 [3] http://jasmine.github.io/1.3/introduction.html
 [4] http://jasmine.github.io/2.0/ajax.html
 [5] https://github.com/pivotal/jasmine-ajax
 [6] http://sinonjs.org/docs/
 [7] http://sinonjs.org/docs/#server search for 'Filtered requests'

 On Tue, Oct 7, 2014 at 1:19 PM, Timur Sufiev tsuf...@mirantis.com wrote:

 Hello all!

 Recently I've stumbled upon wonderful Sinon.JS library [1] for stubs and
 mocks in JS unit tests and found that it can be used for simplifying unit
 test I've made in [2] and speeding it up. Just before wrapping it as
 xstatic package I'd like to clarify 2 questions regarding Sinon.JS:

 * Are Horizon folks fine with adding this dependency? Right now it will
 be used just for one test, but it would be useful for anybody who wants to
 mock AJAX requests in tests or emulate timeout events being fired up
 (again, see very brief and concise examples at [1]).
 * Is it okay to include QUnit and Jasmine adapters for Sinon.JS in the
 same xstatic package? Well, personally I'd vote for including QUnit adapter
 [3] since it has very little code and allows for seamless integration of
 Sinon.JS with QUnit testing framework. If someone is interested in using
 Jasmine matchers for Sinon [4], please let me know.

 [1] http://sinonjs.org/
 [2]
 https://review.openstack.org/#/c/113855/2/horizon/static/horizon/tests/tables.js
 [3] http://sinonjs.org/qunit/
 [4] https://github.com/froots/jasmine-sinon

 --
 Timur Sufiev




 --
 Timur Sufiev




-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for testing and merging the merge.py-free templates

2014-10-14 Thread Steven Hardy
On Tue, Oct 14, 2014 at 01:27:20PM +0200, Tomas Sedovic wrote:
 On 10/14/2014 12:43 PM, Steven Hardy wrote:
 On Tue, Oct 14, 2014 at 10:55:30AM +0200, Tomas Sedovic wrote:
 Hi everyone,
 
 As outlined in the Remove merge.py[1] spec, Peter Belanyi and I have built
 the templates for controller, nova compute, swift and cinder nodes that can
 be deploying directly to Heat (i.e. no merge.py pass is necessary).
 
 The patches:
 
 https://review.openstack.org/#/c/123100/
 https://review.openstack.org/#/c/123713/
 
 I'd like to talk about testing and merging them.
 
 Both Peter and myself have successfully run them through devtest multiple
 times. The Tuskar and TripleO UI folks have managed to hook them up to the
 UI and make things work, too.
 
 That said, there is a number of limitations which don't warrant making them
 the new default just yet:
 
 * Juno Heat is required
 * python-heatclient version 0.2.11 is required to talk to Heat
 * There is currently no way in Heat to drop specific nodes from a
  ResourceGroup (say because of a hardware failure) so the elision feature
 from merge.py is not supported yet
 
 FYI, I saw that comment from Clint in 123713, and have been looking
 into ways to add this feature to Heat - hopefully will have some code to
 post soon.
 
 Oh, cool! It was on my todo list of things to tackle next. If I can help in
 any way (e.g. testing), let me know.

I've posted an initial patch for discussion:

https://review.openstack.org/#/c/128365/

This is pretty much the simplest way I could think of to solve (my
interpretation of) the requirement - feedback welcome re if I'm on the
right track.

The proposed solution is to do a stack update with force_remove specifying
the index of ResourceGroup members you wish to drop out of the group (if
the count is unchanged, this will build a new resource and delete the one
specified).
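
Purely as a sketch of the intended workflow (the parameter name, its format,
and the client-side support all depend on how the patch under review lands),
driving this from python-heatclient might look roughly like:

    from heatclient.client import Client

    # Endpoint/token are placeholders; normally built from keystone auth.
    heat = Client('1', HEAT_ENDPOINT, token=AUTH_TOKEN)

    # Ask the ResourceGroup to rebuild, dropping member index 3 (e.g. a failed
    # compute node) while keeping the overall count unchanged.
    heat.stacks.update(
        'overcloud',
        template=open('overcloud.yaml').read(),
        parameters={'force_remove': '3'},   # hypothetical parameter
    )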

Please let me know if this meets the requirement for replacement of the
elide functionality, thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Lars Kellogg-Stedman
On Tue, Oct 14, 2014 at 12:33:42PM -0400, Jay Pipes wrote:
 Can I use your Dockerfiles to build Ubuntu/Debian images instead of only
 Fedora images?

Not easily, no.

 Seems to me that the image-based Docker system makes the
 resulting container quite brittle -- since a) you can't use configuration
 management systems like Ansible to choose which operating system or package
 management tools you wish to use...

While that's true, it seems like a non-goal.  You're not starting with
a virtual machine and a blank disk here, you're starting from an
existing filesystem.

I'm not sure I understand your use case enough to give you a more
useful reply.

 So... what am I missing with this? What makes Docker images more ideal than
 straight up LXC containers and using Ansible to control upgrades/changes to
 configuration of the software on those containers?

I think that in general that Docker images are more share-able, and
the layered model makes building components on top of a base image
both easy and reasonably efficient in terms of time and storage.

I think that Ansible makes a great tool for managing configuration
inside Docker containers, and you could easily use it as part of the
image build process.  Right now, people using Docker are basically
writing shell scripts to perform system configuration, which is like a
20 year step back in time.  Using a more structured mechanism for
doing this is a great idea, and one that lots of people are pursuing.
I have looked into using Puppet as part of both the build and runtime
configuration process, but I haven't spent much time on it yet.

A key goal for Docker images is generally that images are immutable,
or at least stateless.  You don't "yum upgrade" or "apt-get upgrade"
in a container; you generate a new image with new packages/code/etc.
This makes it trivial to revert to a previous version of a deployment,
and clearly separates the build the image process from the run the
application process.

I like this model.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-14 Thread Morgan Fainberg
On Tuesday, October 14, 2014, Nathan Kinder nkin...@redhat.com wrote:



 On 10/14/2014 07:42 AM, Tim Hinrichs wrote:
  First, some truth in advertising: I work on Congress (policy as a
 service), so I’ve mostly given thought to this problem in that context.
 
  1) I agree with the discussion below about creating a token that encodes
 all the permitted actions for the user.  The cons seem substantial.

 +1


  (i) The token will get stale, requiring us to either revoke it when
 policy/roles change or to live with incorrect access control enforcement
 until the token expires.

 This is a very good point.


Totally valid point. Worth avoiding making this problem worse than today.
We have this to a minor extent because roles are static within an issued
token (policy file could be changed to mitigate).



  (ii) The token could become large, complex, or both.  Suppose the policy
 is granular enough to put restrictions on the arguments a user is permitted
 to provide to an action.  The token might end up encoding a significant
 portion of the policy itself.  Checking if the token permits a given action
 could be similar computationally to checking the original policy.json file.
 
 
  2) I like the idea of an out-of-band service that caches a copy of all
 the policy.jsons and allows users to interrogate/edit them.  I’ve
 definitely talked to operators who would like this kind of thing.  This
 would be a low-risk, low-friction solution to the problem because nothing
 about OpenStack today would need to change.  We’d just add an extra service
 and tell people to use it—sort of a new UI for the policy.json files.  And
 we could add interesting functionality, e.g. hypothetical queries such as
 “if I were to add role X, what changes would that make in my rights?
 
  Perhaps some more context about why users want to know all of the
 actions they are permitted to execute might help.

 I think that there are two questions that a user may have here:

 1) What actions can I perform using a particular token?



This is an important question to answer for tools like Horizon: we don't
want to show capabilities that won't work, or lock out ones that will, so we
shouldn't need to try an action just to know if it will succeed. This is
purely UX.



 2) What role(s) do I need to perform a particular action?

 For me, the second question is more interesting.  A user likely already
 has an idea of a task that they want to perform.  With question number
 1, what do I do as a user if the response says that I'm not allowed to
 perform the task I'm trying to accomplish?  The answer really doesn't
 give me a way to move forward and perform my task.

 With question 2, I'm able to find out what exact roles are needed to
 perform a specific action.  With this information, I could request a
 Keystone token with a subset of my roles that is authorized to perform
 the task while leaving out roles that might have a higher level of
 authorization.  For instance, why should I need to send a token with the
 'admin' role to Nova just to launch an instance if '_member_' is all
 that's required?

 Another real use case is determining what roles are needed when creating
 a trust in Keystone.  If I want to use a trust to allow a service like
 Heat or Neutron's LBaaS to perform an action on my behalf, I want to
 minimize the authorization that I'm delegating to those services.
 Keystone trusts already have the ability to explicitly define the roles
 that will be present in the issues trust tokens, but I have no way of
 knowing what roles are required to perform a particular action without
 consulting the policy


This sums up the larger part of the reason for starting this conversation.


-NGK

 
  Tim
 
 
 
  On Oct 14, 2014, at 1:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 
 
  On 14/10/2014 01:25, Nathan Kinder wrote:
 
 
  On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
  Description of the problem: Without attempting an action on an
  endpoint with a current scoped token, it is impossible to know what
  actions are available to a user.
 
 
  This is not unusual in the physical world. If you think about all the
  authz tokens you carry around in your pocket (as plastic cards), very
  few of them (if any) list what you are entitled to do with them. This
  gives the issuers and SPs flexibility to dynamically change your
  accesses rights without changing your authorisation. What you can do, in
  general terms, may be written in policy documents that you can consult
  if you wish. So you may wish to introduce a service that is equivalent
  to this (i.e. user may optionally consult some policy advice service).
 
  If you introduce a service to allow a user to dynamically determine his
  access rights (absolutely), you have to decide what to do about the
  dynamics of this service compared to the lifetime of the keystone token,
  as the rights may change more quickly than the token's lifetime.
 
 
  Horizon makes some attempts to solve this issue by sourcing all of
  the 

[openstack-dev] [Group-based Policy] Review of patches

2014-10-14 Thread Sumit Naiksatam
Hi, We are meeting in the #openstack-gbp channel today (10/14) 18.00 UTC to
jointly review some of the pending patches:

https://review.openstack.org/#/q/status:open+project:stackforge/group-based-policy+branch:master,n,z

Please join if you would like to provide feedback.

Thanks,
~Sumit.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for testing and merging the merge.py-free templates

2014-10-14 Thread Gregory Haynes
Excerpts from Tomas Sedovic's message of 2014-10-14 08:55:30 +:
 James Slagle proposed something like this when I talked to him on IRC:
 
 1. teach devtest about the new templates, driven by an 
 OVERCLOUD_USE_MERGE_PY switch (defaulting to the merge.py-based templates)
 2. Do a CI run of the new template patches, merge them
 3. Add a (initially non-voting?) job to test the heat-only templates
 4. When we've resolved all the issues blocking the switch, make 
 the native templates default, deprecate the merge.py ones

This sounds good to me. I would support even making it voting from the
start if it is on-par with pass rates of our other jobs (which seems
like it should be). The main question here is whether we have the
capacity for this (derekh?) as I know we tend to run close to our
capacity limit worth of jobs.

Cheers,
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Private external network

2014-10-14 Thread A, Keshava
Hi,
Across these private external networks/tenants, can floating IPs be shared?


Keshava


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Tuesday, October 14, 2014 10:33 PM
To: Édouard Thuleau
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [neutron] Private external network

The blueprint was untargeted mostly because the analysis indicated that there 
was no easy solution, and that what we needed was a solution to do some RBAC on 
neutron resources.

I think this would be a good addition to the Neutron resource model, and it 
would be great if you could start the discussion on the mailing list exposing 
your thoughts.

Salvatore

On 14 October 2014 11:50, Édouard Thuleau 
thul...@gmail.commailto:thul...@gmail.com wrote:
Hi Salvatore,

I'd like to propose a blueprint for the next Neutron release that permits 
dedicating an external network to a tenant. For that I thought to rethink the 
conjunction of the two attributes `shared`
and `router:external` of the network resource.

I saw that you already initiated work on that topic [1] and [2], but the bp was 
un-targeted in favour of an alternative approach which might be more complete. Was 
that alternative released, or is it a work in progress? I'd like to be sure not to 
duplicate work/effort.

[1] 
https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks
[2] https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks

Regards,
Édouard.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Cannot start the VM console when VM is launched at Compute node

2014-10-14 Thread Danny Choi (dannchoi)
Hi,

I used devstack to deploy multi-node OpenStack, with Controller + nova-compute 
+ Network on one physical node (qa4),
and Compute on a separate physical node (qa5).

When I launch a VM which spun up on the Compute node (qa5), I cannot launch the 
VM console, in both CLI and Horizon.


localadmin@qa4:~/devstack$ nova hypervisor-servers q

+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | instance-000e | 1             | qa4                 |
| 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | instance-000f | 1             | qa4                 |
| 056d4ad2-e081-4706-b7d1-84ee281e65fc | instance-0010 | 2             | qa5                 |
+--------------------------------------+---------------+---------------+---------------------+

localadmin@qa4:~/devstack$ nova list

+--------------------------------------+------+--------+------------+-------------+---------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                        |
+--------------------------------------+------+--------+------------+-------------+---------------------------------+
| 3eadf190-465b-4e90-ba49-7bc8ce7f12b9 | vm1  | ACTIVE | -          | Running     | private=10.0.0.17               |
| 48b16e7c-0a17-42f8-9439-3146f26b4cd8 | vm2  | ACTIVE | -          | Running     | private=10.0.0.16, 172.29.173.4 |
| 056d4ad2-e081-4706-b7d1-84ee281e65fc | vm3  | ACTIVE | -          | Running     | private=10.0.0.18, 172.29.173.5 |
+--------------------------------------+------+--------+------------+-------------+---------------------------------+

localadmin@qa4:~/devstack$ nova get-vnc-console vm3 novnc

ERROR (CommandError): No server with a name or ID of 'vm3' exists.  
[ERROR]


This does not happen if the VM resides on the Controller (qa4).


localadmin@qa4:~/devstack$ nova get-vnc-console vm2 novnc

+-------+--------------------------------------------------------------------------------------+
| Type  | Url                                                                                  |
+-------+--------------------------------------------------------------------------------------+
| novnc | http://172.29.172.161:6080/vnc_auto.html?token=f556dea2-125d-49ed-bfb7-55a9a7714b2e |
+-------+--------------------------------------------------------------------------------------+

Is this expected behavior?

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-10-14 Thread Adam Young

On 10/13/2014 06:21 PM, Preston L. Bannister wrote:
Too-short token expiration times are one of my concerns, in my current 
exercise.


Working on a replacement for Nova backup. Basically creating backup 
jobs, writing the jobs into a queue, with a background worker that 
reads jobs from the queue. Tokens could expire while the jobs are in 
the queue (not too likely). Tokens could expire during the execution 
of a backup (which can be very long running, in some cases).


Had not run into mention of trusts before. Is the intent to cover 
this sort of use-case?
Keystone trusts are User to User delegations appropriate for long 
running tasks.  So, yeah, should work for these use cases.
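
A rough sketch of what that delegation could look like with 
python-keystoneclient's v3 trusts API (assuming an already-authenticated 
`keystone` client object; the IDs and role names below are placeholders):

    # Delegate only the roles the backup worker needs, instead of handing
    # it a long-lived token.  IDs and roles are placeholders.
    trust = keystone.trusts.create(
        trustor_user=my_user_id,             # owner of the data
        trustee_user=backup_worker_user_id,  # service user doing the work
        project=project_id,
        role_names=['Member'],               # just what the job requires
        impersonation=True)
    # The worker later authenticates with trust_id=trust.id to get a
    # trust-scoped token whenever it needs one, so each individual token
    # can stay short-lived while the delegation outlives them all.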




(Pulled up what I could find on trusts. Need to chew on this a bit, 
as it is not immediately clear if this fits.)








On Wed, Oct 1, 2014 at 6:53 AM, Adam Young ayo...@redhat.com 
mailto:ayo...@redhat.com wrote:


On 10/01/2014 04:14 AM, Steven Hardy wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What is keeping us from dropping the (scoped) token
duration to 5 minutes?


If we could keep their lifetime as short as network skew
lets us, we would
be able to:

Get rid of revocation checking.
Get rid of persisted tokens.

OK,  so that assumes we can move back to PKI tokens, but
we're working on
that.

What are the uses that require long lived tokens? Can they
be replaced with
a better mechanism for long term delegation (OAuth or
Keystone trusts) as
Heat has done?

FWIW I think you're misrepresenting Heat's usage of Trusts
here - 2 minute
tokens will break Heat just as much as any other service:

https://bugs.launchpad.net/heat/+bug/1306294


http://lists.openstack.org/pipermail/openstack-dev/2014-September/045585.html


Summary:

- Heat uses the request token to process requests (e.g stack
create), which
   may take an arbitrary amount of time (default timeout one
hour).

- Some use-cases demand timeout of more than one hour
(specifically big
   TripleO deployments), heat breaks in these situations atm,
folks are
   working around it by using long (several hour) token expiry
times.

- Trusts are only used of asynchronous signalling, e.g
Ceilometer signals
   Heat, we switch to a trust scoped token to process the
response to the
   alarm (e.g launch more instances on behalf of the user for
autoscaling)

My understanding, ref notes in that bug, is that using Trusts
while
servicing a request to effectively circumvent token expiry was
not legit
(or at least yukky and to be avoided).  If you think otherwise
then please
let me know, as that would be the simplest way to fix the bug
above (switch
to a trust token while doing the long-running create operation).

Using trusts to circumvent timeout is OK.  There are two issues in
tension here:

1.  A user needs to be able to maintain control of their own data.

2.  We want to limit the attack surface provided by tokens.

Since tokens are currently blanket access to the user's data, there
really is no lessening of control by using trusts in a wider
context.  I'd argue that using trusts would actually reduce the
capability for abuse, if coupled with short lived tokens. With long
lived tokens, anyone can reuse the token. With a trust, only the
trustee would be able to create a new token.


Could we start by identifying the set of operations that are
currently timing out due to the one hour token duration and add an
optional trustid on those operations?




Trusts is not really ideal for this use-case anyway, as it
requires the
service to have knowledge of the roles to delegate (or that
the user
provides a pre-created trust), ref bug #1366133.  I suppose we
could just
delegate all the roles we find in the request scope and be
done with it,
given that bug has been wontfixed.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [horizon]Blueprint- showing a small message to the user for browser incompatibility

2014-10-14 Thread Solly Ross
Sure, feel free to put my response in the Blueprint page.  Thanks for the quick 
answer.

Best Regards,
Solly Ross

- Original Message -
 From: Nikunj Aggarwal nikunj.aggar...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: sr...@redhat.com
 Sent: Tuesday, October 14, 2014 12:30:13 PM
 Subject: RE: [openstack-dev] [horizon]Blueprint- showing a small message to 
 the user for browser incompatibility
 
 Hi Solly,
 
 You are right with your questions about users setting a custom UA on their
 browser. During my discussion with other Horizon community members on IRC,
 we decided that Horizon should not care about users setting a custom UA
 for their browser, because it is not our job to identify and fix that. If a
 user sets a custom UA for their browser, it means they want that,
 irrespective of the outcome.
 
 We also discussed using Modernizr and implementing graceful degradation, but
 while that will work for bigger features like canvas or svg, handling the
 smaller CSS features that break in older versions of IE, like IE 9, would be
 a huge change, and I personally think it would be a waste of resources and
 would make the code more complicated.
 
 Instead, the Horizon folks came to the conclusion that we will identify the
 browser type and version to deal with legacy browsers like older IE, Firefox
 or any other browser, and for other major features we can use feature
 detection with Modernizr.
 
 We are targeting all major browsers which are listed in this page  -
 https://wiki.openstack.org/wiki/Horizon/BrowserSupport
 
 I also think that by going with this approach we will minimize the code
 complexity, and this small feature will greatly improve the UX
 for end-users.
 
 Thank you for the reply and your views. Also can I put the contents of your
 reply as a comment into the Blueprint page? Because I think many others will
 also have same kind of questions.
 
  
 Thanks  Regards,
 Nikunj
 
 -Original Message-
 From: Solly Ross [mailto:sr...@redhat.com]
 Sent: Tuesday, October 14, 2014 9:37 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [horizon]Blueprint- showing a small message to
 the user for browser incompatibility
 
 I'm not sure User Agent detection is the best way to go.
 
 Suppose you do UA sniffing and say show the message unless the UA is one of
 X.  Then, if there's a browser which fully supports your feature set, but
 doesn't have a known UA (or someone set a custom UA on their browser), the
 message will still show, which could be confusing to users.
 
 On the other hand, if you do UA sniffing and say show the message if the UA
 is one of X, then a browser that didn't support the features, but didn't
 have a matching User Agent, wouldn't show the message.
 
 If you go with User Agent sniffing, I'd say the latter way (a blacklist) is
 preferable, since it's probably easier to come up with currently unsupported
 browsers than it is to predict future browsers.
 
 Have you identified which specific browser features are needed?  You could
 use Modernizr and then warn if the requisite feature set is not implemented.
 This way, you simply check for the feature set required.
 
 Best Regards,
 Solly Ross
 
 - Original Message -
  From: Nikunj Aggarwal nikunj.aggar...@hp.com
  To: openstack-dev@lists.openstack.org
  Sent: Tuesday, October 14, 2014 4:53:42 AM
  Subject: [openstack-dev] [horizon]Blueprint- showing a small message
  to the user for browser incompatibility
  
  
  
  Hi Everyone,
  
  
  
  I have submitted a blueprint which targets the issues end-users are
  faces when they are using Horizon in the old browsers. So, this
  blueprint targets to overcome this problem by showing a small message
  on the Horizon login page.
  
  
  
  I urge to all the Horizon community to take a look and share your views.
  
  
  
  https://blueprints.launchpad.net/horizon/+spec/detecting-browser
  
  
  
  
  
  
  
  Regards,
  Nikunj
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-14 Thread Ryan Brown
inline responses

On 10/14/2014 01:13 PM, Thomas Spatzier wrote:
 
 Hi all,
 
 I have been experimenting a lot with Heat software config to  check out
 what works today, and to think about potential next steps.
 I've also worked on an internal project where we are leveraging software
 config as of the Icehouse release.
 
 I think what we can do now from a user's perspective in a HOT template is
 really nice and resonates well also with customers I've talked to.
 One of the points where we are constantly having issues, and also got some
 push back from customers, are the requirements on the in-instance tools and
 the process of building base images.
 One observation is that building a base image with all the right stuff
 inside sometimes is a brittle process; the other point is that a lot of
 customers do not like a lot of requirements on their base images. They want
 to maintain one set of corporate base images, with as little modification
 on top as possible.
 
 Regarding the process of building base images, the currently documented way
 [1] of using diskimage-builder turns out to be a bit unstable sometimes.
 Not because diskimage-builder is unstable, but probably because it pulls in
 components from a couple of sources:
 #1 we have a dependency on implementation of the Heat engine of course (So
 this is not pulled in to the image building process, but the dependency is
 there)
 #2 we depend on features in python-heatclient (and other python-* clients)
 #3 we pull in implementation from the heat-templates repo
 #4 we depend on tripleo-image-elements
 #5 we depend on os-collect-config, os-refresh-config and os-apply-config
 #6 we depend on diskimage-builder itself
 
 Heat itself and python-heatclient are reasonably well in synch because
 there is a release process for both, so we can tell users with some
 certainty that a feature will work with release X of OpenStack and Heat and
 version x.z.y of python-heatclient. For the other 4 sources, success
 sometimes depends on the time of day when you try to build an image
 (depending on what changes are currently included in each repo). So
 basically there does not seem to be a consolidated release process across
 all that is currently needed for software config.
 
 The ideal solution would be to have one self-contained package that is easy
 to install on various distributions (an rpm, deb, MSI ...).

It would be simple enough to make an RPM metapackage that just installs
the deps. The definition of self-contained I'm using here is one
install command, not something that ships its own vendored Python and every module.

 Secondly, it would be ideal to not have to bake additional things into the
 image but doing bootstrapping during instance creation based on an existing
 cloud-init enabled image. For that we would have to strip requirements down
 to a bare minimum required for software config. One thing that comes to my
 mind is the cirros software config example [2] that Steven Hardy created.
 It is admittedly not up to what one could do with an image built according
 to [1] but on the other hand is really slick, whereas [1] installs a whole
 set of things into the image (some of which do not really seem to be needed
 for software config).

I like this option much better, actually. I doubt many deployers would
have complaints since cloud-init is pretty much standard. The downside
here is that it wouldn't be all that feasible to include bootstrap
scripts for every platform.

Maybe it would be enough to have the ability to bootstrap one or two
popular distros (Ubuntu, Fedora, Cent, etc) and accept patches for other
platforms.

 
 Another issue that comes to mind: what about operating systems not
 supported by diskimage-builder (Windows), or other hypervisor platforms?
 
 Anyway, not really suggestions from my side but more observations and
 thoughts. I wanted to share those and raise some discussion on possible
 options.
 
 Regards,
 Thomas
 
 [1]
 https://github.com/openstack/heat-templates/blob/master/hot/software-config/elements/README.rst
 [2]
 https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates/cirros-example
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Jay Pipes

On 10/14/2014 01:28 PM, Lars Kellogg-Stedman wrote:

On Tue, Oct 14, 2014 at 12:33:42PM -0400, Jay Pipes wrote:

Can I use your Dockerfiles to build Ubuntu/Debian images instead of only
Fedora images?


Not easily, no.


Seems to me that the image-based Docker system makes the
resulting container quite brittle -- since a) you can't use configuration
management systems like Ansible to choose which operating system or package
management tools you wish to use...


While that's true, it seems like a non-goal.  You're not starting with
a virtual machine and a blank disk here, you're starting from an
existing filesystem.

I'm not sure I understand your use case enough to give you a more
useful reply.


Sorry, I'm trying hard to describe some of this. I have limited 
vocabulary to use in this new space :)


I guess what I am saying is that there is a wealth of existing 
configuration management provenance that installs and configures 
application packages. These configuration management modules/tools are 
written to entirely take away the multi-operating-system, 
multi-package-manager problems.


Instead of having two Dockerfiles, one that only works on Fedora that 
does something like:


FROM fedora20
RUN yum -y install python-pbr

and one that only works on Debian:

FROM debian:wheezy
RUN apt-get install -y python-pbr

Configuration management tools like Ansible already work cross-operating 
system, and allow you to express what gets installed regardless of the 
operating system of the disk image:


tasks:
  - name: install PBR Debian
apt: name=python-pbr state=present
when: ansible_os_family == Debian
  - name: install PBR RH
yum: name=python-pbr state=present
when: ansible_os_family == RedHat

Heck, in Chef, you wouldn't even need the when: switch logic, since Chef 
knows which package management system to use depending on the operating 
system.


With Docker, you are limited to the operating system of whatever the 
image uses.


This means, for things like your openstack-containers Ansible+Docker 
environment (which is wicked cool, BTW), you have containers running 
Fedora20 for everything except MySQL, which due to the fact that you are 
using the official [1] MySQL image on dockerhub, is only a 
Debian:Wheezy image.


This means you now have to know the system administrative commands and 
setup for two operating systems ... or go find a Fedora20 image for 
mysql somewhere.


It just seems to me that Docker is re-inventing a whole bunch of stuff 
that configuration management tools like Ansible, Puppet, Chef, and 
Saltstack have gotten good at over the years.


[1] Is there an official MySQL docker image? I found 553 Dockerhub 
repositories for MySQL images...



So... what am I missing with this? What makes Docker images more ideal than
straight up LXC containers and using Ansible to control upgrades/changes to
configuration of the software on those containers?


I think that in general that Docker images are more share-able, and
the layered model makes building components on top of a base image
both easy and reasonably efficient in terms of time and storage.


By layered model, are you referring to the bottom layer being the Docker 
image and then upper layers being stuff managed by a configuration 
management system?



I think that Ansible makes a great tool for managing configuration
inside Docker containers, and you could easily use it as part of the
image build process.  Right now, people using Docker are basically
writing shell scripts to perform system configuration, which is like a
20 year step back in time.


Right, I've noticed that :)

  Using a more structured mechanism for

doing this is a great idea, and one that lots of people are pursuing.
I have looked into using Puppet as part of both the build and runtime
configuration process, but I haven't spent much time on it yet.


Oh, I don't think Puppet is any better than Ansible for these things.


A key goal for Docker images is generally that images are immutable,
or at least stateless.  You don't yum upgrade or apt-get upgrade
in a container; you generate a new image with new packages/code/etc.
This makes it trivial to revert to a previous version of a deployment,
and clearly separates the build the image process from the run the
application process.


OK, so bear with me more on this, please... :)

Let's say I build Docker images for, say, my nova-conductor container, 
and I build the image by doing a devstack-style install from git repos 
method. Then, I build another nova-conductor container from a newer 
revision in source control.


How would I go about essentially transferring the ownership of the RPC 
exchanges that the original nova-conductor container managed to the new 
nova-conductor container? Would it be as simple as shutting down the old 
container and starting up the new nova-conductor container using things 
like --link rabbitmq:rabbitmq in the startup docker line?


Genuinely curious and inspired by this 

Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-14 Thread Tim Hinrichs
That was really helpful background.  Thanks!

I’d be happy to look into using Congress to implement what we’ve discussed: 
caching policy.json files, updating them periodically, and answering queries 
about the roles required to be granted access to a certain kind of action.  I 
think we have the right algorithms sitting around.
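
As a strawman, the kind of query such a cache could answer boils down to 
something like the tiny sketch below (it only understands the simple 
“role:x or rule:y” string syntax found in policy.json and ignores every 
other kind of check; the function name and path are made up):

    import json
    import re

    def roles_for_action(policy_path, action):
        # Collect the role names mentioned (directly or via rule: aliases)
        # in the policy entry for one action.  Non-role checks are ignored.
        with open(policy_path) as f:
            policy = json.load(f)

        roles, seen = set(), set()

        def walk(rule):
            if not isinstance(rule, str) or rule in seen:
                return
            seen.add(rule)
            for kind, value in re.findall(r'([\w@]+):([\w%()._-]+)', rule):
                if kind == 'role':
                    roles.add(value)
                elif kind == 'rule':
                    walk(policy.get(value, ''))

        walk(policy.get(action, ''))
        return roles

    # e.g. roles_for_action('/etc/nova/policy.json', 'compute:create')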

Let me know if that would help or if you’d prefer a different approach.

Tim



On Oct 14, 2014, at 10:31 AM, Morgan Fainberg 
morgan.fainb...@gmail.commailto:morgan.fainb...@gmail.com wrote:



2) What role(s) do I need to perform a particular action?

For me, the second question is more interesting.  A user likely already
has an idea of a task that they want to perform.  With question number
1, what do I do as a user if the response says that I'm not allowed to
perform the task I'm trying to accomplish?  The answer really doesn't
give me a way to move forward and perform my task.

With question 2, I'm able to find out what exact roles are needed to
perform a specific action.  With this information, I could request a
Keystone token with a subset of my roles that is authorized to perform
the task while leaving out roles that might have a higher level of
authorization.  For instance, why should I need to send a token with the
'admin' role to Nova just to launch an instance if '_member_' is all
that's required?

Another real use case is determining what roles are needed when creating
a trust in Keystone.  If I want to use a trust to allow a service like
Heat or Neutron's LBaaS to perform an action on my behalf, I want to
minimize the authorization that I'm delegating to those services.
Keystone trusts already have the ability to explicitly define the roles
that will be present in the issues trust tokens, but I have no way of
knowing what roles are required to perform a particular action without
consulting the policy

This sums up the larger part of the reason for starting this conversation.


-NGK


 Tim



 On Oct 14, 2014, at 1:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:



 On 14/10/2014 01:25, Nathan Kinder wrote:


 On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
 Description of the problem: Without attempting an action on an
 endpoint with a current scoped token, it is impossible to know what
 actions are available to a user.


 This is not unusual in the physical world. If you think about all the
 authz tokens you carry around in your pocket (as plastic cards), very
 few of them (if any) list what you are entitled to do with them. This
 gives the issuers and SPs flexibility to dynamically change your
 accesses rights without changing your authorisation. What you can do, in
 general terms, may be written in policy documents that you can consult
 if you wish. So you may wish to introduce a service that is equivalent
 to this (i.e. user may optionally consult some policy advice service).

 If you introduce a service to allow a user to dynamically determine his
 access rights (absolutely), you have to decide what to do about the
 dynamics of this service compared to the lifetime of the keystone token,
 as the rights may change more quickly than the token's lifetime.


 Horizon makes some attempts to solve this issue by sourcing all of
 the policy files from all of the services to determine what a user
 can accomplish with a given role. This is highly inefficient as it
 requires processing the various policy.json files for each request
 in multiple places and presents a mechanism that is not really
 scalable to understand what a user can do with the current
 authorization. Horizon may not be the only service that (in the
 long term) would want to know what actions a token can take.

 This is also extremely useful for being able to actually support
 more restricted tokens as well.  If I as an end user want to request
 a token that only has the roles required to perform a particular
 action, I'm going to need to have a way of knowing what those roles
 are.  I think that is one of the main things missing to allow the
 role-filtered tokens option that I wrote up after the last Summit
 to be a viable approach:

 https://blog-nkinder.rhcloud.com/?p=101


 I would like to start a discussion on how we should improve our
 policy implementation (OpenStack wide) to help make it easier to
 know what is possible with a current authorization context
 (Keystone token). The key feature should be that whatever the
 implementation is, it doesn’t require another round-trip to a third
 party service to “enforce” the policy which avoids another scaling
 point like UUID Keystone token validation.

 Presumably this does not rule out the user, at his option, calling
 another service to ask for advice what can I do with this token,
 bearing 

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-14 Thread Tim Bell
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 14 October 2014 19:01
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Automatic evacuate
 
 On 10/13/2014 05:59 PM, Russell Bryant wrote:
  Nice timing.  I was working on a blog post on this topic.
 
  On 10/13/2014 05:40 PM, Fei Long Wang wrote:
  I think Adam is talking about this bp:
  https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automat
  ically
 
  For now, we're using Nagios probe/event to trigger the Nova evacuate
  command, but I think it's possible to do that in Nova if we can find
  a good way to define the trigger policy.
 
  I actually think that's the right way to do it.
 
 +1. Not everything needs to be built-in to Nova. This very much sounds
 like something that should be handled by PaaS-layer things that can react to a
 Nagios notification (or any other event) and take some sort of action, 
 possibly
 using administrative commands like nova evacuate.
 

Nova is also not the right place to do the generic solution as many other parts 
could be involved... neutron and cinder come to mind. Nova needs to provide the 
basic functions but it needs something outside to make it all happen 
transparently.

I would really like a shared solution rather than each deployment doing their 
own and facing identical problems. A best-of-breed solution, which can be 
incrementally improved as we find problems, is needed: get the hypervisor down 
event, force detach of boot volumes, restart elsewhere, and reconfigure floating 
IPs, all without race conditions.

Some standards for tagging is good but we also need some code :-)

Tim

   There are a couple of
  other things to consider:
 
  1) An ideal solution also includes fencing.  When you evacuate, you
  want to make sure you've fenced the original compute node.  You need
  to make absolutely sure that the same VM can't be running more than
  once, especially when the disks are backed by shared storage.
 
  Because of the fencing requirement, another option would be to use
  Pacemaker to orchestrate this whole thing.  Historically Pacemaker
  hasn't been suitable to scale to the number of compute nodes an
  OpenStack deployment might have, but Pacemaker has a new feature
  called pacemaker_remote [1] that may be suitable.
 
  2) Looking forward, there is a lot of demand for doing this on a per
  instance basis.  We should decide on a best practice for allowing end
  users to indicate whether they would like their VMs automatically
  rescued by the infrastructure, or just left down in the case of a
  failure.  It could be as simple as a special tag set on an instance [2].
 
 Please note that server instance tagging (thanks for the shout-out, BTW) is
 intended for only user-defined tags, not system-defined metadata which is what
 this sounds like...
 
 Of course, one might implement some external polling/monitoring system using
 server instance tags, which might do a nova list --tag $TAG --host
 $FAILING_HOST, and initiate a migrate for each returned server instance...
 
 Best,
 -jay
 
  [1]
  http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_R
  emote/ [2] https://review.openstack.org/#/c/127281/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cinder/Neutron plugins on UI

2014-10-14 Thread Mike Scherbakov
+1 for doing now:
 we are going to implement something really simple, like updating plugin
attributes directly via api.
Then we can have discussions in parallel how we plan to evolve it.

Please confirm that we went this path.

Thanks,


On Mon, Oct 13, 2014 at 7:31 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 We've discussed what we will be able to do for the current release and
 what we will not be able to implement.
 We have not only technical problems, but we also don't have a lot of time
 for implementation. We were trying to find a solution which will work well
 enough
 with all of the constraints.
 For the current release we want to implement the approach which was suggested
 by Mike.
 We are going to generate a checkbox for the UI which defines whether a plugin
 is set for deployment. In nailgun we'll be able to parse the generated
 checkboxes and remove or add the relation between the plugin and cluster
 models.
 With this relation we'll be able to identify whether a plugin is used; that
 will allow us to remove a plugin if it's unused (in the future), or to know
 if we need to pass its tasks to the orchestrator. Also, in the POC, we are
 going to implement something really simple, like updating plugin attributes
 directly via api.

 Thanks,

 On Thu, Oct 9, 2014 at 8:13 PM, Dmitry Borodaenko 
 dborodae...@mirantis.com wrote:

 Notes from the architecture review meeting on plugins UX:

 - separate page for plugins management
 - user installs the plugin on the master
 - global master node configuration across all environments:
   - user can see a list of plugins on Plugins tab (plugins description)
   - Enable/Disable plugin
     - should we enable/disable plugins globally, or only per environment?
       - yes, we need a global plugins management page, it will later be
         extended to upload or remove plugins
     - if a plugin is used in a deployed environment, options to globally
       disable or remove that plugin are blocked
   - show which environments (or a number of environments) have a specific
     plugin enabled
   - global plugins page is a Should in 6.0 (but easy to add)
   - future: a plugin like ostf should have a deployable flag set to false,
     so that it doesn't show up as an option per env
 - user creates new environment
   - in setup wizard on the releases page (1st step), a list of checkboxes
     for all plugins is offered (same page as releases?)
     - all globally enabled plugins are checked (enabled) by default
     - changes in selection of plugins will trigger regeneration of
       subsequent setup wizard steps
   - plugin may include a yaml mixin for settings page options in
     openstack.yaml format
     - in future releases, it will support describing setup wizard
       (disk configuration, network settings etc.) options in the same way
     - what is the simplest case? does plugin writer have to define
       the plugin enable/disable checkbox, or is it autogenerated?
       - if plugin does not define any configuration options: a checkbox is
         automatically added into Additional Services section of the
         settings page (disabled by default)
       - *problem:* if a plugin is enabled by default, but the option to
         deploy it is disabled by default, such environment would count
         against the plugin (and won't allow to remove this plugin globally)
         even though it actually wasn't deployed
     - manifest of plugins enabled/used for an environment?


 We ended the discussion on the problem highlighted in bold above: what's
 the best way to detect which plugins are actually used in an environment?


 On Thu, Oct 9, 2014 at 6:42 AM, Vitaly Kramskikh vkramsk...@mirantis.com
  wrote:

 Evgeniy,

 Yes, the plugin management page should be a separate page. As for
 dependency on releases, I meant that some plugin can work only on Ubuntu
 for example, so for different releases different plugins could be available.

 And please confirm that you also agree with the flow: the user install a
 plugin, then he enables it on the plugin management page, and then he
 creates an environment and on the first step he can uncheck some plugins
 which he doesn't want to use in that particular environment.

 2014-10-09 20:11 GMT+07:00 Evgeniy L e...@mirantis.com:

 Hi,

 Vitaly, I like the idea of having a separate page, but I'm not sure if it
 should be on the releases page.
 Usually a plugin is not release specific, usually it's environment
 specific, and you can have a
 different set of plugins for different environments.

 Also, I don't think that we should enable plugins by default; the user
 should enable a plugin if he wants
 it to be installed.

 Thanks,

 On Thu, Oct 9, 2014 at 3:34 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 Let me propose another approach. I agree with most of Dmitry's
 statements and it seems in MVP we need plugin 

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-14 Thread Russell Bryant
On 10/14/2014 01:01 PM, Jay Pipes wrote:
 2) Looking forward, there is a lot of demand for doing this on a per
 instance basis.  We should decide on a best practice for allowing end
 users to indicate whether they would like their VMs automatically
 rescued by the infrastructure, or just left down in the case of a
 failure.  It could be as simple as a special tag set on an instance [2].
 
 Please note that server instance tagging (thanks for the shout-out, BTW)
 is intended for only user-defined tags, not system-defined metadata
 which is what this sounds like...

I was envisioning the tag being set by the end user to say please keep
my VM running until I say otherwise, or something like auto-recover
for short.

So, it's specified by the end user, but potentially acted upon by the
system (as you say below).

 Of course, one might implement some external polling/monitoring system
 using server instance tags, which might do a nova list --tag $TAG --host
 $FAILING_HOST, and initiate a migrate for each returned server instance...

Yeah, that's what I was thinking.  Whatever system you use to react to a
failing host could use the tag as part of the criteria to figure out
which instances to evacuate and which to leave as dead.
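
To make that concrete, here's a bare-bones sketch of such an external 
monitor using python-novaclient (assuming an already-authenticated `nova` 
client object; since the tags API is only proposed at this point, instance 
metadata stands in for the opt-in flag, and older evacuate APIs may require 
an explicit target host):

    # Evacuate opted-in instances away from compute services that are down.
    # Assumes `nova` is an authenticated novaclient Client, the failed host
    # has already been fenced, and instances are on shared storage.
    def evacuate_failed_hosts(nova, opt_in_key='auto-recover'):
        for svc in nova.services.list(binary='nova-compute'):
            if svc.state != 'down':
                continue
            servers = nova.servers.list(
                search_opts={'host': svc.host, 'all_tenants': 1})
            for server in servers:
                # Stand-in for the proposed instance-tag check.
                if server.metadata.get(opt_in_key) != 'true':
                    continue
                # Let the scheduler pick the target host.
                nova.servers.evacuate(server, on_shared_storage=True)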

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-14 Thread Mathieu Gagné

On 2014-10-14 2:49 PM, Tim Bell wrote:


Nova is also not the right place to do the generic solution as many other parts 
could be involved... neutron and cinder come to mind. Nova needs to provide the 
basic functions but it needs something outside to make it all happen 
transparently.

I would really like a shared solution rather than each deployment doing their 
own and facing identical problems. A best of breed solution which can be 
incrementally improved as we find problems to dget the hypervisor down event, 
to force detach of boot volumes, restart elsewhere and reconfigure floating ips 
with race conditions is needed.

Some standards for tagging is good but we also need some code :-)



I agree with Tim. Nova does not have all the required information to 
make a proper decision which could imply other OpenStack (and 
non-OpenStack) services. Furthermore, evacuating a node might imply 
fencing which Nova might not be able to do properly or have the proper 
tooling. What about non-shared storage backend in Nova? You can't 
evacuate those without data loss.


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Lars Kellogg-Stedman
On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:
 With Docker, you are limited to the operating system of whatever the image
 uses.

See, that's the part I disagree with.  What I was saying about ansible
and puppet in my email is that I think the right thing to do is take
advantage of those tools:

  FROM ubuntu

  RUN apt-get install ansible
  COPY my_ansible_config.yaml /my_ansible_config.yaml
  RUN ansible /my_ansible_config.yaml

Or:

  FROM Fedora

  RUN yum install ansible
  COPY my_ansible_config.yaml /my_ansible_config.yaml
  RUN ansible /my_ansible_config.yaml

Put the minimal instructions in your dockerfile to bootstrap your
preferred configuration management tool. This is exactly what you
would do when booting, say, a Nova instance into an openstack
environment: you can provide a shell script to cloud-init that would
install whatever packages are required to run your config management
tool, and then run that tool.

Once you have bootstrapped your cm environment you can take advantage
of all those distribution-agnostic cm tools.

In other words, using docker is no more limiting than using a vm or
bare hardware that has been installed with your distribution of
choice.

 [1] Is there an official MySQL docker image? I found 553 Dockerhub
 repositories for MySQL images...

Yes, it's called mysql.  It is in fact one of the official images
highlighted on https://registry.hub.docker.com/.

 I have looked into using Puppet as part of both the build and runtime
 configuration process, but I haven't spent much time on it yet.
 
 Oh, I don't think Puppet is any better than Ansible for these things.

I think it's pretty clear that I was not suggesting it was better than
ansible.  That is hardly relevant to this discussion.  I was only
saying that is what *I* have looked at, and I was agreeing that *any*
configuration management system is probably better than writing shell
scripts.

 How would I go about essentially transferring the ownership of the RPC
 exchanges that the original nova-conductor container managed to the new
 nova-conductor container? Would it be as simple as shutting down the old
 container and starting up the new nova-conductor container using things like
 --link rabbitmq:rabbitmq in the startup docker line?

I think that you would not necessarily rely on --link for this sort of
thing.  Under kubernetes, you would use a service definition, in
which kubernetes maintains a proxy that directs traffic to the
appropriate place as containers are created and destroyed.

Outside of kubernetes, you would use some other service discovery
mechanism; there are many available (etcd, consul, serf, etc).

But this isn't particularly a docker problem.  This is the same
problem you would face running the same software on top of a cloud
environment in which you cannot predict things like ip addresses a
priori.
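
For instance, a tiny sketch of the etcd flavour of this, using python-etcd
(the key layout and addresses are made up for illustration):

    import etcd

    client = etcd.Client(host='127.0.0.1', port=4001)

    # Whatever starts the rabbitmq container publishes its address,
    # refreshing the TTL while the container is alive.
    client.write('/services/rabbitmq', '10.0.0.5:5672', ttl=60)

    # A new nova-conductor container resolves the endpoint at startup
    # instead of relying on --link or a hard-coded address.
    rabbit_endpoint = client.read('/services/rabbitmq').value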

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-14 Thread Adam Young

There are two distinct permissions to be managed:

1.  What can the user do.
2.  What actions can this token be used to do.

2. is a subset of 1.


Just because I, Adam Young, have the ability to destroy the golden image 
I have up on glance does not mean that I want to delegate that ability 
every time I use a token.


But that is exactly the mechanism we have today.

As a user, I should not be locked in to only delegating roles. A role 
may say you can read or modify an image but I want to only delegate 
the Read part when creating a new VM:  I want Nova to be able to read 
the image I specify.



Hence, I started a spec around capabilities, which are, I think, a 
different check than RBAC.


https://review.openstack.org/#/c/123726/





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Jay Pipes

On 10/14/2014 03:10 PM, Lars Kellogg-Stedman wrote:

On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:

With Docker, you are limited to the operating system of whatever the image
uses.


See, that's the part I disagree with.  What I was saying about ansible
and puppet in my email is that I think the right thing to do is take
advantage of those tools:

   FROM ubuntu

   RUN apt-get install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible /my_ansible_config.yaml

Or:

   FROM Fedora

   RUN yum install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible /my_ansible_config.yaml

Put the minimal instructions in your dockerfile to bootstrap your
preferred configuration management tool. This is exactly what you
would do when booting, say, a Nova instance into an openstack
environment: you can provide a shell script to cloud-init that would
install whatever packages are required to run your config management
tool, and then run that tool.


I think the above strategy is spot on. Unfortunately, that's not how the 
Docker ecosystem works. Everything is based on images, and you get the 
operating system that the image is built for. I see you didn't respond 
to my point that in your openstack-containers environment, you end up 
with Debian *and* Fedora images, since you use the official MySQL 
dockerhub image. And therefore you will end up needing to know sysadmin 
specifics (such as how network interfaces are set up) on multiple 
operating system distributions.



Once you have bootstrapped your cm environment you can take advantage
of all those distribution-agnostic cm tools.

In other words, using docker is no more limiting than using a vm or
bare hardware that has been installed with your distribution of
choice.


Sure, Docker isn't any more limiting than using a VM or bare hardware, 
but if you use the official Docker images, it is more limiting, no?



How would I go about essentially transferring the ownership of the RPC
exchanges that the original nova-conductor container managed to the new
nova-conductor container? Would it be as simple as shutting down the old
container and starting up the new nova-conductor container using things like
--link rabbitmq:rabbitmq in the startup docker line?


I think that you would not necessarily rely on --link for this sort of
thing.  Under kubernetes, you would use a service definition, in
which kubernetes maintains a proxy that directs traffic to the
appropriate place as containers are created and destroyed.

Outside of kubernetes, you would use some other service discovery
mechanism; there are many available (etcd, consul, serf, etc).

But this isn't particularly a docker problem.  This is the same
problem you would face running the same software on top of a cloud
environment in which you cannot predict things like ip addresses a
priori.


Gotcha. I'm still reading through the Kubernetes docs to better 
understand the role that it plays in relation to Docker. Hopefully I'll 
have a better grip on this stuff by summit time :) Thanks again for your 
very helpful replies.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Chris Dent

On Tue, 14 Oct 2014, Jay Pipes wrote:

This means you now have to know the system administrative commands and setup 
for two operating systems ... or go find a Fedora20 image for mysql 
somewhere.


For sake of conversation and devil's advocacy let me ask, in
response to this paragraph, why [do you] have to know [...]?

If you've got a docker container that is running mysql, IME that's _all_
it should be doing and any (post-setup) management you are doing of it
should be happening by creating a new container and trashing the one
that is not right, not manipulating the existing container. The
operating system should be as close to invisible as you can get it.

Everything in the Dockerfile should be about getting the service to
a fully installed and configured state and it should close with the
one command or entrypoint it is going to run/use, in the mysql case
that's one of the various ways to get mysqld happening.

If the goal is to use Docker and to have automation I think it would
be better to automate the generation of suitably
layered/hierarchical Dockerfiles (using whatever tool of choice),
not having Dockerfiles which then use config tools to populate out
the image. If such config tools are necessary in the container it is
pretty much guaranteed that the container is being overburdened with
too many responsibilities.
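
As a toy illustration of that generation approach (the distro map and
package names below are just examples, not a real tool):

    # Render one Dockerfile per base image from a single service
    # definition, so distro-specific knowledge lives in the generator
    # rather than in a config management run inside the container.
    PKG_INSTALL = {
        'fedora:20': 'yum -y install',
        'debian:wheezy': 'apt-get update && apt-get install -y',
    }

    def render_dockerfile(base, packages, command):
        return '\n'.join([
            'FROM %s' % base,
            'RUN %s %s' % (PKG_INSTALL[base], ' '.join(packages)),
            'CMD %s' % command,
        ]) + '\n'

    print(render_dockerfile('fedora:20', ['python-pbr'], '/bin/bash'))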

[1] Is there an official MySQL docker image? I found 553 Dockerhub 
repositories for MySQL images...


  https://github.com/docker-library/mysql

Docker-library appears to be the place for official things.

By layered model, are you referring to the bottom layer being the Docker 
image and then upper layers being stuff managed by a configuration management 
system?


I assume it's the layering afforded by union file systems: Makes
building images based on other images cheap and fast. The cheapness
and fastness is one of the reasons why expressive Dockerfiles[1] are
important: each line is a separate checkpointed image.


[1] That is, Dockerfiles which do the install and config work rather
than calling on other stuff to do the work.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Fox, Kevin M
Same thing works with cloud init too...


I've been waiting on systemd working inside a container for a while. It seems 
to work now.

The idea being it's hard to write a shell script to get everything up and 
running with all the interactions that may need to happen. The init system's 
already designed for that. Take a nova-compute docker container for example: 
you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the 
ceilometer-agent all baked in. Writing a shell script to get it all started 
and shut down properly would be really ugly.

You could split it up into 4 containers and try and ensure they are coscheduled 
and all the pieces are able to talk to each other, but why? Putting them all in 
one container with systemd starting the subprocesses is much easier and 
shouldn't have many drawbacks. The components code is designed and tested 
assuming the pieces are all together.

You can even add an ssh server in there easily too and then ansible in to do 
whatever other stuff you want to do to the container, like adding other monitoring 
and such.

Ansible or puppet or whatever should work better in this arrangement too since 
existing code assumes you can just systemctl start foo;

Kevin

From: Lars Kellogg-Stedman [l...@redhat.com]
Sent: Tuesday, October 14, 2014 12:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:
 With Docker, you are limited to the operating system of whatever the image
 uses.

See, that's the part I disagree with.  What I was saying about ansible
and puppet in my email is that I think the right thing to do is take
advantage of those tools:

  FROM ubuntu

  RUN apt-get install ansible
  COPY my_ansible_config.yaml /my_ansible_config.yaml
  RUN ansible /my_ansible_config.yaml

Or:

  FROM Fedora

  RUN yum install ansible
  COPY my_ansible_config.yaml /my_ansible_config.yaml
  RUN ansible /my_ansible_config.yaml

Put the minimal instructions in your dockerfile to bootstrap your
preferred configuration management tool. This is exactly what you
would do when booting, say, a Nova instance into an openstack
environment: you can provide a shell script to cloud-init that would
install whatever packages are required to run your config management
tool, and then run that tool.

Once you have bootstrapped your cm environment you can take advantage
of all those distribution-agnostic cm tools.

In other words, using docker is no more limiting than using a vm or
bare hardware that has been installed with your distribution of
choice.

 [1] Is there an official MySQL docker image? I found 553 Dockerhub
 repositories for MySQL images...

Yes, it's called mysql.  It is in fact one of the official images
highlighted on https://registry.hub.docker.com/.

 I have looked into using Puppet as part of both the build and runtime
 configuration process, but I haven't spent much time on it yet.

 Oh, I don't think Puppet is any better than Ansible for these things.

I think it's pretty clear that I was not suggesting it was better than
ansible.  That is hardly relevant to this discussion.  I was only
saying that is what *I* have looked at, and I was agreeing that *any*
 configuration management system is probably better than writing shell
 scripts.

 How would I go about essentially transferring the ownership of the RPC
 exchanges that the original nova-conductor container managed to the new
 nova-conductor container? Would it be as simple as shutting down the old
 container and starting up the new nova-conductor container using things like
 --link rabbitmq:rabbitmq in the startup docker line?

I think that you would not necessarily rely on --link for this sort of
thing.  Under kubernetes, you would use a service definition, in
which kubernetes maintains a proxy that directs traffic to the
appropriate place as containers are created and destroyed.

Outside of kubernetes, you would use some other service discovery
mechanism; there are many available (etcd, consul, serf, etc).

But this isn't particularly a docker problem.  This is the same
problem you would face running the same software on top of a cloud
environment in which you cannot predict things like ip addresses a
priori.

--
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Lars Kellogg-Stedman
On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
 I think the above strategy is spot on. Unfortunately, that's not how the
 Docker ecosystem works.

I'm not sure I agree here, but again nobody is forcing you to use this
tool.

 operating system that the image is built for. I see you didn't respond to my
 point that in your openstack-containers environment, you end up with Debian
 *and* Fedora images, since you use the official MySQL dockerhub image. And
 therefore you will end up needing to know sysadmin specifics (such as how
 network interfaces are set up) on multiple operating system distributions.

I missed that part, but ideally you don't *care* about the
distribution in use.  All you care about is the application.  Your
container environment (docker itself, or maybe a higher level
abstraction) sets up networking for you, and away you go.

If you have to perform system administration tasks inside your
containers, my general feeling is that something is wrong.

 Sure, Docker isn't any more limiting than using a VM or bare hardware, but
 if you use the official Docker images, it is more limiting, no?

No more so than grabbing a virtual appliance rather than building a
system yourself.  

In other words: sure, it's less flexible, but possibly it's faster to
get started, which is especially useful if your primary goal is not to
be a database administrator but to actually write an application that
uses a database backend.

I think there are use cases for both official and customized
images.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Jay Pipes

On 10/14/2014 03:50 PM, Lars Kellogg-Stedman wrote:

On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:

I think the above strategy is spot on. Unfortunately, that's not how the
Docker ecosystem works.


I'm not sure I agree here, but again nobody is forcing you to use this
tool.


I know that. I'm not slamming Docker. I'm trying to better understand 
the Docker and Kubernetes systems.



operating system that the image is built for. I see you didn't respond to my
point that in your openstack-containers environment, you end up with Debian
*and* Fedora images, since you use the official MySQL dockerhub image. And
therefore you will end up needing to know sysadmin specifics (such as how
network interfaces are set up) on multiple operating system distributions.


I missed that part, but ideally you don't *care* about the
distribution in use.  All you care about is the application.  Your
container environment (docker itself, or maybe a higher level
abstraction) sets up networking for you, and away you go.

If you have to perform system administration tasks inside your
containers, my general feeling is that something is wrong.


I understand that general feeling, but system administration tasks like 
debugging networking issues or determining and grepping log file 
locations or diagnosing packaging issues for OpenStack services or 
performing database logfile maintenance and backups don't just go away 
because you're using containers, right? If there are multiple operating 
systems in use in these containers, it makes the life of an admin more 
cumbersome, IMO.


I guess all I'm saying is that, from what I can tell, Docker and 
Kubernetes and all the application/service-centric worldviews are cool 
and all, but they very much seem to be developed from the point of view 
of application developers, and not so much from the point of view of 
operators who need to maintain and support those applications.


I'm still super-interested in the topic, just thought I'd point that out.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Clint Byrum
Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 -0700:
 On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
  I think the above strategy is spot on. Unfortunately, that's not how the
  Docker ecosystem works.
 
 I'm not sure I agree here, but again nobody is forcing you to use this
 tool.
 
  operating system that the image is built for. I see you didn't respond to my
  point that in your openstack-containers environment, you end up with Debian
  *and* Fedora images, since you use the official MySQL dockerhub image. And
  therefore you will end up needing to know sysadmin specifics (such as how
  network interfaces are set up) on multiple operating system distributions.
 
 I missed that part, but ideally you don't *care* about the
 distribution in use.  All you care about is the application.  Your
 container environment (docker itself, or maybe a higher level
 abstraction) sets up networking for you, and away you go.
 
 If you have to perform system administration tasks inside your
 containers, my general feeling is that something is wrong.
 

Speaking as a curmudgeon ops guy from back in the day.. the reason
I choose the OS I do is precisely because it helps me _when something
is wrong_. And the best way an OS can help me is to provide excellent
debugging tools, and otherwise move out of the way.

When something _is_ wrong and I want to attach GDB to mysqld in said
container, I could build a new container with debugging tools installed,
but that may lose the very system state that I'm debugging. So I need to
run things inside the container like apt-get or yum to install GDB.. and
at some point you start to realize that having a whole OS is actually a
good thing even if it means needing to think about a few more things up
front, such as "which OS will I use?" and "what tools do I need installed
in my containers?"

What I mean to say is, just grabbing off the shelf has unstated
consequences.

  Sure, Docker isn't any more limiting than using a VM or bare hardware, but
  if you use the official Docker images, it is more limiting, no?
 
 No more so than grabbing a virtual appliance rather than building a
 system yourself.  
 
 In other words: sure, it's less flexible, but possibly it's faster to
 get started, which is especially useful if your primary goal is not
 be a database administrator but is actually write an application
 that uses a database backend.
 
 I think there are uses cases for both official and customized
 images.
 

In the case of Kolla, we're deploying OpenStack, not just some new
application that uses a database backend. I think the bar is a bit
higher for operations than end-user applications, since it sits below
the abstractions, much closer to the metal.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Lars Kellogg-Stedman
On Tue, Oct 14, 2014 at 04:06:22PM -0400, Jay Pipes wrote:
 I understand that general feeling, but system administration tasks like
 debugging networking issues or determining and grepping log file locations
 or diagnosing packaging issues for OpenStack services or performing database
 logfile maintenance and backups don't just go away because you're using
 containers, right?

They don't go away, but they're not necessarily things that you would
do inside your container.

Any state (e.g., database tables) that has a lifetime different from
that of your container should be stored outside of the container
proper.  In docker, this would be a volume (in a cloud environment,
this would be something like EBS or a Cinder volume).

Ideally, your container-optimized application logs to stdout/stderr.
If you have multiple processes, they each run in a separate container.

Backups take advantage of the data volumes you've associated with your
container.  E.g., spawning a new container using the docker
--volumes-from option to access that data for backup purposes.
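
Something like the following (container name and the data path here assume
the official mysql image):

  # throwaway container mounts the running container's volumes and tars them up
  docker run --rm --volumes-from mysql -v $(pwd):/backup ubuntu \
      tar czf /backup/mysql-data.tar.gz /var/lib/mysql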

If you really need to get inside a container for diagnostic purposes,
then you use something like nsenter, nsinit, or the forthcoming
docker exec.
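
For example (container name hypothetical):

  # with docker exec, once it lands:
  docker exec -it nova-conductor /bin/bash

  # or with nsenter against the container's init process:
  nsenter --target $(docker inspect --format '{{.State.Pid}}' nova-conductor) \
      --mount --uts --ipc --net --pid /bin/bash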


 they very much seem to be developed from the point of view of application
 developers, and not so much from the point of view of operators who need to
 maintain and support those applications.

I think it's entirely accurate to say that they are
application-centric, much like services such as Heroku, OpenShift,
etc.

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday October 14th at 19:00 UTC

2014-10-14 Thread Elizabeth K. Joseph
On Mon, Oct 13, 2014 at 3:01 PM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting on Tuesday October 14th, at 19:00 UTC in #openstack-meeting

Meeting minutes and log are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-10-14-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-10-14-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-10-14-19.03.log.html

Thanks everyone (and welcome back jeblair!)

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Christopher Yeoh
On Tue, 14 Oct 2014 09:57:01 -0400
Jay Pipes jaypi...@gmail.com wrote:
 On 10/14/2014 05:04 AM, Alex Xu wrote:
  There is one reason to think about what projects *currently* do.
  When we choice which convention we want.
  For example, the CamelCase and snake_case, if the most project use
  snake_case, then choice snake_case style
  will be the right.
 
 I would posit that the reason we have such inconsistencies in our 
 project's APIs is that we haven't taken a stand and said this is the 
 way it must be.

Not only have we not taken a stand, but we haven't actually told anyone
what they should do in the first place. I don't think it's defiance on
the part of the developers of the API; it's just that they don't know,
and without any guidance they just do what they think is reasonable. And
a lot of the decisions (eg project vs tenant or instance vs server) are
fairly arbitrary - we just need to choose one.

 
 There's lots of examples of inconsistencies out in the OpenStack
 APIs. We can certainly use a wiki or etherpad page to document those 
 inconsistencies. But, eventually, this working group should produce 
 solid decisions that should be enforced across *future* OpenStack
 APIs. And that guidance should be forthcoming in the next month or
 so, not in one or two release cycles.

I agree.

 
 I personally think proposing patches to an openstack-api repository
 is the most effective way to make those proposals. Etherpads and wiki
 pages are fine for dumping content, but IMO, we don't need to dump
 content -- we already have plenty of it. We need to propose
 guidelines for *new* APIs to follow.

OK, I'm happy to go that way - I guess I disagree about us already
having a lot of content. I haven't yet seen (maybe I've missed them)
API guideline suggestions from most of the openstack projects. But
let's just get started :-)

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Ian Main
Angus Lees wrote:
 I've been reading a bunch of the existing Dockerfiles, and I have two humble 
 requests:
 
 
 1. It would be good if the interesting code came from python sdist/bdists 
 rather than rpms.
 
 This will make it possible to rebuild the containers using code from a 
 private 
 branch or even unsubmitted code, without having to go through a redhat/rpm 
 release process first.
 
 I care much less about where the python dependencies come from. Pulling them 
 from rpms rather than pip/pypi seems like a very good idea, given the 
 relative 
 difficulty of caching pypi content and we also pull in the required C, etc 
 libraries for free.
 
 
 With this in place, I think I could drop my own containers and switch to 
 reusing kolla's for building virtual testing environments.  This would make 
 me 
 happy.
 
 
 2. I think we should separate out "run the server" from "do once-off setup".
 
 Currently the containers run a start.sh that typically sets up the database, 
 runs the servers, creates keystone users and sets up the keystone catalog.  
 In 
 something like k8s, the container will almost certainly be run multiple times 
 in parallel and restarted numerous times, so all those other steps go against 
 the service-oriented k8s ideal and are at-best wasted.
 
 I suggest making the container contain the deployed code and offer a few thin 
 scripts/commands for entrypoints.  The main replicationController/pod _just_ 
 starts the server, and then we have separate pods (or perhaps even non-k8s 
 container invocations) that do initial database setup/migrate, and post-
 install keystone setup.
 
 I'm open to whether we want to make these as lightweight/independent as 
 possible (every daemon in an individual container), or limit it to one per 
 project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one 
 container).   I think the differences are run-time scalability and resource-
 attribution vs upfront coding effort and are not hugely significant either 
 way.
 
 Post-install catalog setup we can combine into one cross-service setup like 
 tripleO does[1].  Although k8s doesn't have explicit support for batch tasks 
 currently, I'm doing the pre-install setup in restartPolicy: onFailure pods 
 currently and it seems to work quite well[2].
 
 (I'm saying post install catalog setup, but really keystone catalog can 
 happen at any point pre/post aiui.)
 
 [1] 
 https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-endpoints
 [2] 
 https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml
 
 -- 
  - Gus

One thing I've learned is to not perform software updates within a container.
A number of the containers I've seen do software updates on startup but I've
seen this break dependencies in containers a few times making them unusable.
This detracts from the ability to have a completely controlled environment
within a container with proven software versions that play nicely together.

Ian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Christopher Yeoh
On Tue, 14 Oct 2014 10:29:34 -0500
Lance Bragstad lbrags...@gmail.com wrote:

 I found a couple of free times available for a weekly meeting if
 people are interested:
 
 https://review.openstack.org/#/c/128332/2
 
 Not sure if a meeting time has been hashed out already or not, and if
 it has I'll change the patch accordingly. If not, we can iterate on
 possible meeting times in the review if needed. This was to just get
 the ball rolling if we want a weekly meeting. I proposed one review
 for Thursdays at 2100 and a second patch set for 2000, UTC. That can
 easily change, but those were two times that didn't conflict with the
 existing meeting schedules in #openstack-meeting.

So UTC 2000 is 7am Sydney, 5am Tokyo and 4am Beijing time which is
pretty early in the day. I'd suggest UTC  if that's not too late for
others who'd like to participate.

Chris


 
 
 
 On Tue, Oct 14, 2014 at 3:55 AM, Thierry Carrez
 thie...@openstack.org wrote:
 
  Jay Pipes wrote:
   On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
   I guess we could also start fleshing out in the repo how we'll
   work in practice too (eg once the document is stable what
   process do we have for making changes - two +2's is probably not
   adequate for something like this).
  
   We can make it work exactly like the openstack/governance repo,
   where ttx has the only ability to +2/+W approve a patch for
   merging, and he tallies a majority vote from the TC members, who
   vote -1 or +1 on a proposed patch.
  
   Instead of ttx, though, we can have an API working group lead
   selected from the set of folks currently listed as committed to
   the effort?
 
  Yes, the working group should select a chair who would fill the
  same role I do for the TC (organize meetings, push agenda, tally
  votes on reviews, etc.)
 
  That would be very helpful in keeping that diverse group on track.
  Now you just need some volunteer :)
 
  --
  Thierry Carrez (ttx)
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-14 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2014-10-14 10:13:27 -0700:
 
 Hi all,
 
 I have been experimenting a lot with Heat software config to  check out
 what works today, and to think about potential next steps.
 I've also worked on an internal project where we are leveraging software
 config as of the Icehouse release.
 
 I think what we can do now from a user's perspective in a HOT template is
 really nice and resonates well also with customers I've talked to.
 One of the points where we are constantly having issues, and also got some
 push back from customers, are the requirements on the in-instance tools and
 the process of building base images.
 One observation is that building a base image with all the right stuff
 inside sometimes is a brittle process; the other point is that a lot of
 customers do not like a lot of requirements on their base images. They want
 to maintain one set of corporate base images, with as little modification
 on top as possible.
 
 Regarding the process of building base images, the currently documented way
 [1] of using diskimage-builder turns out to be a bit unstable sometimes.
 Not because diskimage-builder is unstable, but probably because it pulls in
 components from a couple of sources:
 #1 we have a dependency on implementation of the Heat engine of course (So
 this is not pulled in to the image building process, but the dependency is
 there)
 #2 we depend on features in python-heatclient (and other python-* clients)
 #3 we pull in implementation from the heat-templates repo
 #4 we depend on tripleo-image-elements
 #5 we depend on os-collect-config, os-refresh-config and os-apply-config
 #6 we depend on diskimage-builder itself
 
 Heat itself and python-heatclient are reasonably well in synch because
 there is a release process for both, so we can tell users with some
 certainty that a feature will work with release X of OpenStack and Heat and
 version x.z.y of python-heatclient. For the other 4 sources, success
 sometimes depends on the time of day when you try to build an image
 (depending on what changes are currently included in each repo). So
 basically there does not seem to be a consolidated release process across
 all that is currently needed for software config.
 

I don't really understand why a "consolidated release process across
all" would be desired or needed.

#3 is pretty odd. You're pulling in templates from the examples repo?

For #4-#6, those are all on pypi and released on a regular basis. Build
yourself a bandersnatch mirror and you'll have locally controlled access
to them which should eliminate any reliability issues.
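
Something along these lines (the mirror directory comes from bandersnatch's
own config file, so treat the path below as a placeholder):

  pip install bandersnatch
  # pulls everything into the mirror directory named in its config
  bandersnatch mirror
  pip install --index-url file:///srv/pypi/web/simple os-collect-config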

 The ideal solution would be to have one self-contained package that is easy
 to install on various distributions (an rpm, deb, MSI ...).
 Secondly, it would be ideal to not have to bake additional things into the
 image but doing bootstrapping during instance creation based on an existing
 cloud-init enabled image. For that we would have to strip requirements down
 to a bare minimum required for software config. One thing that comes to my
 mind is the cirros software config example [2] that Steven Hardy created.
 It is admittedly not up to what one could do with an image built according
 to [1] but on the other hand is really slick, whereas [1] installs a whole
 set of things into the image (some of which do not really seem to be needed
 for software config).

The agent problem is one reason I've been drifting away from Heat
for software configuration, and toward Ansible. Mind you, I wrote
os-collect-config to have as few dependencies as possible as one attempt
around this problem. Still it isn't capable enough to do the job on its
own, so you end up needing os-apply-config and then os-refresh-config
to tie the two together.

Ansible requires sshd, and python, with a strong recommendation for
sudo. These are all things that pretty much every Linux distribution is
going to have available.
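
i.e. something as simple as this works against a stock image (hostname and
playbook are hypothetical):

  echo node1.example.com > inventory
  ansible all -i inventory -m ping          # verify ssh + python are enough
  ansible-playbook -i inventory site.yml    # then apply the actual config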

 
 Another issue that comes to mind: what about operating systems not
 supported by diskimage-builder (Windows), or other hypervisor platforms?
 

There is a windows-diskimage-builder:

https://git.openstack.org/cgit/stackforge/windows-diskimage-builder

diskimage-builder can produce raw images, so that should be convertible
to pretty much any other hypervisor's preferred disk format.
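
e.g. converting a raw build for another hypervisor is a one-liner
(filenames hypothetical):

  qemu-img convert -f raw -O vmdk \
      fedora-software-config.raw fedora-software-config.vmdk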

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-14 Thread Steve Baker

On 15/10/14 06:13, Thomas Spatzier wrote:

Hi all,

I have been experimenting a lot with Heat software config to  check out
what works today, and to think about potential next steps.
I've also worked on an internal project where we are leveraging software
config as of the Icehouse release.

I think what we can do now from a user's perspective in a HOT template is
really nice and resonates well also with customers I've talked to.
One of the points where we are constantly having issues, and also got some
push back from customers, are the requirements on the in-instance tools and
the process of building base images.
One observation is that building a base image with all the right stuff
inside sometimes is a brittle process; the other point is that a lot of
customers do not like a lot of requirements on their base images. They want
to maintain one set of corporate base images, with as little modification
on top as possible.

Regarding the process of building base images, the currently documented way
[1] of using diskimage-builder turns out to be a bit unstable sometimes.
Not because diskimage-builder is unstable, but probably because it pulls in
components from a couple of sources:
#1 we have a dependency on implementation of the Heat engine of course (So
this is not pulled in to the image building process, but the dependency is
there)
#2 we depend on features in python-heatclient (and other python-* clients)
#3 we pull in implementation from the heat-templates repo
#4 we depend on tripleo-image-elements
#5 we depend on os-collect-config, os-refresh-config and os-apply-config
#6 we depend on diskimage-builder itself

Heat itself and python-heatclient are reasonably well in synch because
there is a release process for both, so we can tell users with some
certainty that a feature will work with release X of OpenStack and Heat and
version x.z.y of python-heatclient. For the other 4 sources, success
sometimes depends on the time of day when you try to build an image
(depending on what changes are currently included in each repo). So
basically there does not seem to be a consolidated release process across
all that is currently needed for software config.

The ideal solution would be to have one self-contained package that is easy
to install on various distributions (an rpm, deb, MSI ...).
Secondly, it would be ideal to not have to bake additional things into the
image but doing bootstrapping during instance creation based on an existing
cloud-init enabled image. For that we would have to strip requirements down
to a bare minimum required for software config. One thing that comes to my
mind is the cirros software config example [2] that Steven Hardy created.
It is admittedly not up to what one could do with an image built according
to [1] but on the other hand is really slick, whereas [1] installs a whole
set of things into the image (some of which do not really seem to be needed
for software config).


Building an image from git repos was the best chance of having a single 
set of instructions which works for most cases, since the tools were not 
packaged for debian derived distros. This seems to be improving though; 
the whole build stack is now packaged for Debian Unstable, Testing and 
also Ubuntu Utopic (which isn't released yet). Another option is 
switching the default instructions to installing from pip rather than 
git, but that still gets into distro-specific quirks which complicate 
the instructions. Until these packages are on the recent releases of 
common distros then we'll be stuck in this slightly awkward situation.


I wrote a cloud-init boot script to install the agents from packages 
from a pristine Fedora 20 [3] and it seems like a reasonable approach 
for when building a custom image isn't practical. Somebody submitting 
the equivalent for Debian and Ubuntu would be most welcome. We need to 
decide whether *everything* should be packaged or if some things can be 
delivered by cloud-init on boot (os-collect-config.conf template, 
55-heat-config, the actual desired config hook...)
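
Roughly, the boot-time install reduces to something like the following;
the package and unit names are an assumption based on what Fedora shipped
at the time, not a tested script:

  #!/bin/bash
  # install the software-config agents from distro packages at first boot
  yum install -y os-collect-config os-apply-config os-refresh-config \
      python-heatclient
  systemctl enable os-collect-config
  systemctl start os-collect-config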


I'm all for there being documentation for the different ways of getting 
the agent and hooks onto a running server for a given distro. I think 
the hot-guide would be the best place to do that, and I've been making a 
start on that recently [4][5] (help welcome!). The README in [1] should 
eventually refer to the hot-guide once it is published so we're not 
maintaining multiple build instructions.



Another issue that comes to mind: what about operating systems not
supported by diskimage-builder (Windows), or other hypervisor platforms?

The Cloudbase folk have contributed some useful cloudbase-init templates 
this cycle [6], so that is a start.  I think there is interest in 
porting os-*-config to Windows as the way of enabling deployment 
resources (help welcome!).

Anyway, not really suggestions from my side but more observations and
thoughts. I wanted to share those and raise some discussion on possible
options.


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Lance Bragstad
On Tue, Oct 14, 2014 at 4:29 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Tue, 14 Oct 2014 10:29:34 -0500
 Lance Bragstad lbrags...@gmail.com wrote:

  I found a couple of free times available for a weekly meeting if
  people are interested:
 
  https://review.openstack.org/#/c/128332/2
 
  Not sure if a meeting time has been hashed out already or not, and if
  it has I'll change the patch accordingly. If not, we can iterate on
  possible meeting times in the review if needed. This was to just get
  the ball rolling if we want a weekly meeting. I proposed one review
  for Thursdays at 2100 and a second patch set for 2000, UTC. That can
  easily change, but those were two times that didn't conflict with the
  existing meeting schedules in #openstack-meeting.

 So UTC 2000 is 7am Sydney, 5am Tokyo and 4am Beijing time which is
 pretty early in the day. I'd suggest UTC  if that's not too late for
 others who'd like to participate.

 Chris


Unfortunately, we conflict with the I18N Team Meeting at UTC  (in
#openstack-meeting). I updated the review for UTC 1000. An alternating
schedule would probably work well given the attendance from different time
zones. I'll snoop around for another meeting time to alternate with, unless
someone has one in mind.




 
 
 
  On Tue, Oct 14, 2014 at 3:55 AM, Thierry Carrez
  thie...@openstack.org wrote:
 
   Jay Pipes wrote:
On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
I guess we could also start fleshing out in the repo how we'll
work in practice too (eg once the document is stable what
process do we have for making changes - two +2's is probably not
adequate for something like this).
   
We can make it work exactly like the openstack/governance repo,
where ttx has the only ability to +2/+W approve a patch for
merging, and he tallies a majority vote from the TC members, who
vote -1 or +1 on a proposed patch.
   
Instead of ttx, though, we can have an API working group lead
selected from the set of folks currently listed as committed to
the effort?
  
   Yes, the working group should select a chair who would fill the
   same role I do for the TC (organize meetings, push agenda, tally
   votes on reviews, etc.)
  
   That would be very helpful in keeping that diverse group on track.
   Now you just need some volunteer :)
  
   --
   Thierry Carrez (ttx)
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread David Vossel


- Original Message -
 Same thing works with cloud init too...
 
 
 I've been waiting on systemd working inside a container for a while. it seems
 to work now.

oh no...

 The idea being it's hard to write a shell script to get everything up and
 running with all the interactions that may need to happen. The init system's
 already designed for that. Take a nova-compute docker container for example,
 you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the
 ceilometer-agent all baked in. Writing a shell script to get it all started
 and shut down properly would be really ugly.

 You could split it up into 4 containers and try and ensure they are
 coscheduled and all the pieces are able to talk to each other, but why?
 Putting them all in one container with systemd starting the subprocesses is
 much easier and shouldn't have many drawbacks. The components code is
 designed and tested assuming the pieces are all together.

What you need is a dependency model that is enforced outside of the containers. 
Something
that manages the order containers are started/stopped/recovered in. This allows
you to isolate your containers with 1 service per container, yet still express 
that
container with service A needs to start before container with service B.

Pacemaker does this easily. There's even a docker resource-agent for Pacemaker 
now.
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker
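
A minimal sketch of that approach (image names are hypothetical, and the
docker resource-agent's parameters may differ between resource-agents
releases):

  pcs resource create rabbitmq ocf:heartbeat:docker image=rabbitmq
  pcs resource create nova-conductor ocf:heartbeat:docker \
      image=kollaglue/nova-conductor
  # one service per container, ordering enforced outside the containers
  pcs constraint order start rabbitmq then start nova-conductor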

-- Vossel

ps. don't run systemd in a container... If you think you should, talk to me 
first.

 
 You can even add a ssh server in there easily too and then ansible in to do
 whatever other stuff you want to do to the container like add other
 monitoring and such
 
 Ansible or puppet or whatever should work better in this arrangement too
 since existing code assumes you can just systemctl start foo;
 
 Kevin
 
 From: Lars Kellogg-Stedman [l...@redhat.com]
 Sent: Tuesday, October 14, 2014 12:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns
 
 On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:
  With Docker, you are limited to the operating system of whatever the image
  uses.
 
 See, that's the part I disagree with.  What I was saying about ansible
 and puppet in my email is that I think the right thing to do is take
 advantage of those tools:
 
   FROM ubuntu
 
   RUN apt-get install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible /my_ansible_config.yaml
 
 Or:
 
   FROM Fedora
 
   RUN yum install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible /my_ansible_config.yaml
 
 Put the minimal instructions in your dockerfile to bootstrap your
 preferred configuration management tool. This is exactly what you
 would do when booting, say, a Nova instance into an openstack
 environment: you can provide a shell script to cloud-init that would
 install whatever packages are required to run your config management
 tool, and then run that tool.
 
 Once you have bootstrapped your cm environment you can take advantage
 of all those distribution-agnostic cm tools.
 
 In other words, using docker is no more limiting than using a vm or
 bare hardware that has been installed with your distribution of
 choice.
 
  [1] Is there an official MySQL docker image? I found 553 Dockerhub
  repositories for MySQL images...
 
 Yes, it's called mysql.  It is in fact one of the official images
 highlighted on https://registry.hub.docker.com/.
 
  I have looked into using Puppet as part of both the build and runtime
  configuration process, but I haven't spent much time on it yet.
 
  Oh, I don't think Puppet is any better than Ansible for these things.
 
 I think it's pretty clear that I was not suggesting it was better than
 ansible.  That is hardly relevant to this discussion.  I was only
 saying that is what *I* have looked at, and I was agreeing that *any*
 configuration management system is probably better than writing a shell
 script.
 
  How would I go about essentially transferring the ownership of the RPC
  exchanges that the original nova-conductor container managed to the new
  nova-conductor container? Would it be as simple as shutting down the old
  container and starting up the new nova-conductor container using things
  like
  --link rabbitmq:rabbitmq in the startup docker line?
 
 I think that you would not necessarily rely on --link for this sort of
 thing.  Under kubernetes, you would use a service definition, in
 which kubernetes maintains a proxy that directs traffic to the
 appropriate place as containers are created and destroyed.
 
 Outside of kubernetes, you would use some other service discovery
 mechanism; there are many available (etcd, consul, serf, etc).
 
 But this isn't particularly a docker problem.  This is the same
 problem you would face running the same software on top of a cloud
 

Re: [openstack-dev] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-14 Thread Vadivel Poonathan
Agreed on the requirement of test results to qualify a vendor plugin to
be listed in the upstream docs.
Is there any procedure/infrastructure currently available for this
purpose? Please forward any links/pointers to that info.

Thanks,
Vad
--

On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki amot...@gmail.com wrote:

 I agree with Kevin and Kyle. Even if we decided to use separate tree for
 neutron
 plugins and drivers, they still will be regarded as part of the upstream.
 These plugins/drivers need to prove they are well integrated with Neutron
 master
 in some way and gating integration proves it is well tested and integrated.
 I believe it is a reasonable assumption and requirement that a vendor
 plugin/driver
 is listed in the upstream docs. This is a same kind of question as
 what vendor plugins
 are tested and worth documented in the upstream docs.
 I hope you work with the neutron team and run the third party requirements.

 Thanks,
 Akihiro

 On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery mest...@mestery.com
 wrote:
  On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com wrote:
 The OpenStack dev and docs team dont have to worry about
  gating/publishing/maintaining the vendor specific plugins/drivers.
 
  I disagree about the gating part. If a vendor wants to have a link that
  shows they are compatible with openstack, they should be reporting test
  results on all patches. A link to a vendor driver in the docs should
 signify
  some form of testing that the community is comfortable with.
 
  I agree with Kevin here. If you want to play upstream, in whatever
  form that takes by the end of Kilo, you have to work with the existing
  third-party requirements and team to take advantage of being a part of
  things like upstream docs.
 
  Thanks,
  Kyle
 
  On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi,
 
  If the plan is to move ALL existing vendor specific plugins/drivers
  out-of-tree, then having a place-holder within the OpenStack domain
 would
  suffice, where the vendors can list their plugins/drivers along with
 their
  documentation as how to install and use etc.
 
  The main Openstack Neutron documentation page can explain the plugin
  framework (ml2 type drivers, mechanism drivers, service plugin and so
 on)
  and its purpose/usage etc, then provide a link to refer the currently
  supported vendor specific plugins/drivers for more details.  That way
 the
  documentation will be accurate to what is in-tree and limit the
  documentation of external plugins/drivers to have just a reference
 link. So
  its now vendor's responsibility to keep their  driver's up-to-date and
 their
  documentation accurate. The OpenStack dev and docs team dont have to
 worry
  about gating/publishing/maintaining the vendor specific
 plugins/drivers.
 
  The built-in drivers such as LinuxBridge or OpenVSwitch etc can
 continue
  to be in-tree and their documentation will be part of main Neutron's
 docs.
  So the Neutron is guaranteed to work with built-in plugins/drivers as
 per
  the documentation and the user is informed to refer the external
 vendor
  plug-in page for additional/specific plugins/drivers.
 
 
  Thanks,
  Vad
  --
 
 
  On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com
 wrote:
 
  I think you will probably have to wait until after the summit so we
 can
  see the direction that will be taken with the rest of the in-tree
  drivers/plugins. It seems like we are moving towards removing all of
 them so
  we would definitely need a solution to documenting out-of-tree
 drivers as
  you suggested.
 
  However, I think the minimum requirements for having a driver being
  documented should be third-party testing of Neutron patches.
 Otherwise the
  docs will become littered with a bunch of links to drivers/plugins
 with no
  indication of what actually works, which ultimately makes Neutron
 look bad.
 
 
  This is my line of thinking as well, expanded to ultimately makes
  OpenStack docs look bad -- a perception I want to avoid.
 
  Keep the viewpoints coming. We have a crucial balancing act ahead:
 users
  need to trust docs and trust the drivers. Ultimately the
 responsibility for
  the docs is in the hands of the driver contributors so it seems those
 should
  be on a domain name where drivers control publishing and OpenStack
 docs are
  not a gatekeeper, quality checker, reviewer, or publisher.
 
  We have documented the status of hypervisor drivers on an OpenStack
 wiki
  page. [1] To me, that type of list could be maintained on the wiki
 page
  better than in the docs themselves. Thoughts? Feelings? More
 discussion,
  please. And thank you for the responses so far.
  Anne
 
  [1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix
 
 
 
  On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi Anne,
 
  

Re: [openstack-dev] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-14 Thread Anne Gentle
On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan 
vadivel.openst...@gmail.com wrote:

 Agreed on the requirement of test results to qualify a vendor plugin to
 be listed in the upstream docs.
 Is there any procedure/infrastructure currently available for this
 purpose? Please forward any links/pointers to that info.


Here's a link to the third-party testing setup information.

http://ci.openstack.org/third_party.html

Feel free to keep asking questions as you dig deeper.
Thanks,
Anne


 Thanks,
 Vad
 --

 On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki amot...@gmail.com
 wrote:

 I agree with Kevin and Kyle. Even if we decided to use separate tree for
 neutron
 plugins and drivers, they still will be regarded as part of the upstream.
 These plugins/drivers need to prove they are well integrated with Neutron
 master
 in some way and gating integration proves it is well tested and
 integrated.
 I believe it is a reasonable assumption and requirement that a vendor
 plugin/driver
 is listed in the upstream docs. This is a same kind of question as
 what vendor plugins
 are tested and worth documented in the upstream docs.
 I hope you work with the neutron team and run the third party
 requirements.

 Thanks,
 Akihiro

 On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery mest...@mestery.com
 wrote:
  On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com
 wrote:
 The OpenStack dev and docs team dont have to worry about
  gating/publishing/maintaining the vendor specific plugins/drivers.
 
  I disagree about the gating part. If a vendor wants to have a link that
  shows they are compatible with openstack, they should be reporting test
  results on all patches. A link to a vendor driver in the docs should
 signify
  some form of testing that the community is comfortable with.
 
  I agree with Kevin here. If you want to play upstream, in whatever
  form that takes by the end of Kilo, you have to work with the existing
  third-party requirements and team to take advantage of being a part of
  things like upstream docs.
 
  Thanks,
  Kyle
 
  On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi,
 
  If the plan is to move ALL existing vendor specific plugins/drivers
  out-of-tree, then having a place-holder within the OpenStack domain
 would
  suffice, where the vendors can list their plugins/drivers along with
 their
  documentation as how to install and use etc.
 
  The main Openstack Neutron documentation page can explain the plugin
  framework (ml2 type drivers, mechanism drivers, service plugin and so
 on)
  and its purpose/usage etc, then provide a link to refer the currently
  supported vendor specific plugins/drivers for more details.  That way
 the
  documentation will be accurate to what is in-tree and limit the
  documentation of external plugins/drivers to have just a reference
 link. So
  its now vendor's responsibility to keep their  driver's up-to-date
 and their
  documentation accurate. The OpenStack dev and docs team dont have to
 worry
  about gating/publishing/maintaining the vendor specific
 plugins/drivers.
 
  The built-in drivers such as LinuxBridge or OpenVSwitch etc can
 continue
  to be in-tree and their documentation will be part of main
 Neutron's docs.
  So the Neutron is guaranteed to work with built-in plugins/drivers as
 per
  the documentation and the user is informed to refer the external
 vendor
  plug-in page for additional/specific plugins/drivers.
 
 
  Thanks,
  Vad
  --
 
 
  On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com
 wrote:
 
  I think you will probably have to wait until after the summit so we
 can
  see the direction that will be taken with the rest of the in-tree
  drivers/plugins. It seems like we are moving towards removing all
 of them so
  we would definitely need a solution to documenting out-of-tree
 drivers as
  you suggested.
 
  However, I think the minimum requirements for having a driver being
  documented should be third-party testing of Neutron patches.
 Otherwise the
  docs will become littered with a bunch of links to drivers/plugins
 with no
  indication of what actually works, which ultimately makes Neutron
 look bad.
 
 
  This is my line of thinking as well, expanded to ultimately makes
  OpenStack docs look bad -- a perception I want to avoid.
 
  Keep the viewpoints coming. We have a crucial balancing act ahead:
 users
  need to trust docs and trust the drivers. Ultimately the
 responsibility for
  the docs is in the hands of the driver contributors so it seems
 those should
  be on a domain name where drivers control publishing and OpenStack
 docs are
  not a gatekeeper, quality checker, reviewer, or publisher.
 
  We have documented the status of hypervisor drivers on an OpenStack
 wiki
  page. [1] To me, that type of list could be maintained on the wiki
 page
  better than in the docs themselves. 

Re: [openstack-dev] Treating notifications as a contract

2014-10-14 Thread Doug Hellmann

On Oct 13, 2014, at 5:49 PM, Chris Dent chd...@redhat.com wrote:

 On Tue, 7 Oct 2014, Sandy Walsh wrote:
 
 Haven't had any time to get anything written down (pressing deadlines
 with StackTach.v3) but open to suggestions. Perhaps we should just add
 something to the olso.messaging etherpad to find time at the summit to
 talk about it?
 
 Have you got a link for that?

It might be more appropriate to put it on the cross-project session list: 
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

Doug

 
 Another topic that I think is at least somewhat related to the
 standardizing/contractualizing notifications topic is deprecating
 polling (to get metrics/samples).
 
 In the ceilometer side of the telemetry universe, if samples can't
 be gathered via notifications then somebody writes a polling plugin
 or agent and sticks it in the ceilometer tree where it is run as
 either an independent agent (c.f. the new ipmi-agent) or a plugin
 under the compute-agent or a plugin under the central-agent.
 
 This is problematic in a few ways (at least to me):
 
 * Those plugins distract from the potential leanness of a core
  ceilometer system.
 
 * The meters created by those plugins are produced for ceilometer
  rather than for telemetry. Yes, of course you can re-publish the
  samples in all sorts of ways.
 
 * The services aren't owning the form and publication of information
  about themselves.
 
 There are solid arguments against each of these problems individually
 but as a set I find them saying services should make more
 notifications pretty loud and clear and obviously to make that work
 we need tidy notifications with good clean semantics.
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Treating notifications as a contract

2014-10-14 Thread Sandy Walsh
From: Doug Hellmann [d...@doughellmann.com] Tuesday, October 14, 2014 7:19 PM

 It might be more appropriate to put it on the cross-project session list: 
 https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

Done ... thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-10-14 Thread Doug Hellmann

On Oct 14, 2014, at 12:31 PM, Salvatore Orlando sorla...@nicira.com wrote:

 Hi Doug,
 
 do you know if the existing quota oslo-incubator module already has some
 active consumers?
 In the meanwhile I've pushed a spec to neutron-specs for improving quota 
 management there [1]

It looks like a lot of projects are syncing the module:

$ grep policy */openstack-common.conf
barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
ceilometer/openstack-common.conf:module=policy
cinder/openstack-common.conf:module=policy
designate/openstack-common.conf:module=policy
gantt/openstack-common.conf:module=policy
glance/openstack-common.conf:module=policy
heat/openstack-common.conf:module=policy
horizon/openstack-common.conf:module=policy
ironic/openstack-common.conf:module=policy
keystone/openstack-common.conf:module=policy
manila/openstack-common.conf:module=policy
neutron/openstack-common.conf:module=policy
nova/openstack-common.conf:module=policy
trove/openstack-common.conf:module=policy
tuskar/openstack-common.conf:module=policy

I’m not sure how many are actively using it, but I wouldn’t expect them to copy 
it in if they weren’t using it at all.
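
For what it's worth, consuming an incubated module is just the usual sync
from an oslo-incubator checkout -- a sketch, with an illustrative target
path:

  echo "module=quota" >> ../neutron/openstack-common.conf
  python update.py ../neutron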

 
 Now, I can either work on the oslo-incubator module and leverage it in 
 Neutron, or develop the quota module in Neutron, and move it to 
 oslo-incubator once we validate it with Neutron. The latter approach seems 
 easier from a workflow perspective - as it avoid the intermediate steps of 
 moving code from oslo-incubator to neutron. On the other hand it will delay 
 adoption in oslo-incubator.

The policy module is up for graduation this cycle. It may end up in its own 
library, to allow us to build a review team for the code more easily than if we 
put it in with some of the other semi-related modules like the server code. 
We’re still working that out [1], and if you expect to make a lot of 
incompatible changes we should delay graduation to make that simpler.

Either way, since we have so many consumers, I think it would be easier to have 
the work happen in Oslo somewhere so we can ensure those changes are useful to 
and usable by all of the existing consumers.

Doug

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals

 
 What's your opinion?
 
 Regards,
 Salvatore
 
 [1] https://review.openstack.org/#/c/128318/
 
 On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:
 
 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com wrote:
 
  Salvatore, Joe,
 
  We do have this at the moment:
 
  https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims
 
 If someone wants to drive creating a useful library during kilo, please 
 consider adding the topic to the etherpad we’re using to plan summit sessions 
 and then come participate in the Oslo meeting this Friday 16:00 UTC.
 
 https://etherpad.openstack.org/p/kilo-oslo-summit-topics
 
 Doug
 
 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando sorla...@nicira.com 
  wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
 
  Keeping the enforcement local (same way policy works today) helps limit
  the fragility, big +1 there.
 
  I also agree with Vish, we need a uniform way to talk about quota
  enforcement similar to how we have a uniform policy language / 
  enforcement
  model (yes I know it's not perfect, but it's far closer to uniform than
  quota management is).
 
 
  It sounds like maybe we should have an oslo library for quotas? Somewhere
  where we can share the code, but keep the operations local to each service.
 
 
  This is what I had in mind as well. A simple library for quota enforcement
  which can be used regardless of where and how you do it, which might depend
  on the application business logic, the WSGI framework in use, or other
  factors.
 
 
 
 
  If there is still interest of placing quota in keystone, let's talk about
  how that will work and what will be needed from Keystone . The previous
  attempt didn't get much traction and stalled out early in 
  implementation. If
  we want to revisit this lets make sure we have the resources needed and
  spec(s) in progress / info on etherpads (similar to how the multitenancy
  stuff was handled at the last summit) as early as possible.
 
 
  Why not centralize quota management via the python-openstackclient, what
  is the benefit of getting keystone involved?
 
 
  Providing this through the openstack client in my opinion has the
  disadvantage that users which either use the REST API direct or write their
  own clients won't leverage it. I don't think it's a reasonable assumption
  that everybody will use python-openstackclient, is it?
 
  Said that, storing quotas in keystone poses a further challenge to the
  scalability of the system, which we shall perhaps address by using
  appropriate caching strategies and 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Fox, Kevin M
Ok, why are you so down on running systemd in a container?

Pacemaker works, but it's kind of a pain to set up compared to just yum
installing a few packages and setting init to systemd. There are some benefits
for sure, but if you have to force all the docker components onto the same
physical machine anyway, why bother with the extra complexity?

Thanks,
Kevin


From: David Vossel [dvos...@redhat.com]
Sent: Tuesday, October 14, 2014 3:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

- Original Message -
 Same thing works with cloud init too...


 I've been waiting on systemd working inside a container for a while. it seems
 to work now.

oh no...

 The idea being it's hard to write a shell script to get everything up and
 running with all the interactions that may need to happen. The init system's
 already designed for that. Take a nova-compute docker container for example,
 you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the
 ceilometer-agent all baked in. Writing a shell script to get it all started
 and shut down properly would be really ugly.

 You could split it up into 4 containers and try and ensure they are
 coscheduled and all the pieces are able to talk to each other, but why?
 Putting them all in one container with systemd starting the subprocesses is
 much easier and shouldn't have many drawbacks. The components code is
 designed and tested assuming the pieces are all together.

What you need is a dependency model that is enforced outside of the containers. 
Something
that manages the order containers are started/stopped/recovered in. This allows
you to isolate your containers with 1 service per container, yet still express 
that
container with service A needs to start before container with service B.

Pacemaker does this easily. There's even a docker resource-agent for Pacemaker 
now.
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker

-- Vossel

ps. don't run systemd in a container... If you think you should, talk to me 
first.


 You can even add a ssh server in there easily too and then ansible in to do
 whatever other stuff you want to do to the container like add other
 monitoring and such

 Ansible or puppet or whatever should work better in this arrangement too
 since existing code assumes you can just systemctl start foo;

 Kevin
 
 From: Lars Kellogg-Stedman [l...@redhat.com]
 Sent: Tuesday, October 14, 2014 12:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

 On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:
  With Docker, you are limited to the operating system of whatever the image
  uses.

 See, that's the part I disagree with.  What I was saying about ansible
 and puppet in my email is that I think the right thing to do is take
 advantage of those tools:

   FROM ubuntu

   RUN apt-get install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible /my_ansible_config.yaml

 Or:

   FROM Fedora

   RUN yum install ansible
   COPY my_ansible_config.yaml /my_ansible_config.yaml
   RUN ansible /my_ansible_config.yaml

 Put the minimal instructions in your dockerfile to bootstrap your
 preferred configuration management tool. This is exactly what you
 would do when booting, say, a Nova instance into an openstack
 environment: you can provide a shell script to cloud-init that would
 install whatever packages are required to run your config management
 tool, and then run that tool.

 Once you have bootstrapped your cm environment you can take advantage
 of all those distribution-agnostic cm tools.

 In other words, using docker is no more limiting than using a vm or
 bare hardware that has been installed with your distribution of
 choice.

  [1] Is there an official MySQL docker image? I found 553 Dockerhub
  repositories for MySQL images...

 Yes, it's called mysql.  It is in fact one of the official images
 highlighted on https://registry.hub.docker.com/.

  I have looked into using Puppet as part of both the build and runtime
  configuration process, but I haven't spent much time on it yet.
 
  Oh, I don't think Puppet is any better than Ansible for these things.

 I think it's pretty clear that I was not suggesting it was better than
 ansible.  That is hardly relevant to this discussion.  I was only
 saying that is what *I* have looked at, and I was agreeing that *any*
 configuration management system is probably better than writing shell
 scripts.

  How would I go about essentially transferring the ownership of the RPC
  exchanges that the original nova-conductor container managed to the new
  nova-conductor container? Would it be as simple as shutting down the old
  container and starting up the new nova-conductor container using things
  like
  

Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Christopher Yeoh
On Tue, 14 Oct 2014 09:45:44 -0400
Jay Pipes jaypi...@gmail.com wrote:

 On 10/14/2014 12:52 AM, Christopher Yeoh wrote:
  On Mon, 13 Oct 2014 22:20:32 -0400
  Jay Pipes jaypi...@gmail.com wrote:
 
  On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
  On Mon, 13 Oct 2014 10:52:26 -0400
  And whilst I don't have a problem with having some guidelines which
  suggest a future standard for APIs, I don't think we should be
  requiring any type of feature which has not yet been implemented in
  at least one, preferably two openstack projects and released and
  tested for a cycle. Eg standards should be lagging rather than
  leading.
 
 What about features in some of our APIs that are *not* preferable?
 For instance: API extensions.
 
 I think we've seen where API extensions leads us. And it isn't
 pretty. Would you suggest we document what a Nova API extension or a
 Neutron API extension looks like and then propose, for instance, not
 to ever do it again in future APIs and instead use schema
 discoverability?

So if we had standards leading development rather than lagging in the
past then API extensions would have ended up in the standard because we
once thought they were a good idea.

Perhaps we should distinguish in the documentation between
recommendations (future looking) and standards (proven it works well
for us). The latter would be potentially enforced a lot more strictly
than the former.

In the case of extensions I think we should have a section documenting
why we think they're a bad idea and new projects certainly shouldn't
use them. But also give some advice around if they are used what
features they should have (eg version numbers!). Given the time that it
takes to make major API infrastructure changes it is inevitable that
there will be api extensions added in the short to medium term. Because
API development will not just stop while API infrastructure is improved.

  I think it will be better in git (but we also need it in gerrit)
  when it comes to resolving conflicts and after we've established a
  decent document (eg when we have more content). I'm just looking to
  make it as easy as possible for anyone to add any guidelines now.
  Once we've actually got something to discuss then we use git/gerrit
  with patches proposed to resolve conflicts within the document.
 
 Of course it would be in Gerrit. I just put it up on GitHub first 
 because I can't just add a repo into the openstack/ code
 namespace... :)

I've submitted a patch to add an api-wg project using your repository
as the initial content for the git repository.

https://review.openstack.org/#/c/128466/

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Doug Hellmann

On Oct 14, 2014, at 6:00 PM, Lance Bragstad lbrags...@gmail.com wrote:

 
 
 On Tue, Oct 14, 2014 at 4:29 PM, Christopher Yeoh cbky...@gmail.com wrote:
 On Tue, 14 Oct 2014 10:29:34 -0500
 Lance Bragstad lbrags...@gmail.com wrote:
 
  I found a couple of free times available for a weekly meeting if
  people are interested:
 
  https://review.openstack.org/#/c/128332/2
 
  Not sure if a meeting time has been hashed out already or not, and if
  it has I'll change the patch accordingly. If not, we can iterate on
  possible meeting times in the review if needed. This was to just get
  the ball rolling if we want a weekly meeting. I proposed one review
  for Thursdays at 2100 and a second patch set for 2000, UTC. That can
  easily change, but those were two times that didn't conflict with the
  existing meeting schedules in #openstack-meeting.
 
 So UTC 2000 is 7am Sydney, 5am Tokyo and 4am Beijing time which is
 pretty early in the day. I'd suggest UTC  if that's not too late for
 others who'd like to participate.
 
 Chris
 
 Unfortunately, we conflict with the I18N Team Meeting at UTC  (in 
 #openstack-meeting). I updated the review for UTC 1000. An alternating 
 schedule would probably work well given the attendance from different time 
 zones. I'll snoop around for another meeting time to alternate with, unless 
 someone has one in mind. 

Don’t forget about #openstack-meeting-alt and #openstack-meeting-3 as 
alternative rooms for the meetings. Those have the meeting bot and are managed 
on the same wiki page for scheduling.

Doug

  
 
 
 
 
 
  On Tue, Oct 14, 2014 at 3:55 AM, Thierry Carrez
  thie...@openstack.org wrote:
 
   Jay Pipes wrote:
On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
I guess we could also start fleshing out in the repo how we'll
work in practice too (eg once the document is stable what
process do we have for making changes - two +2's is probably not
adequate for something like this).
   
We can make it work exactly like the openstack/governance repo,
where ttx has the only ability to +2/+W approve a patch for
merging, and he tallies a majority vote from the TC members, who
vote -1 or +1 on a proposed patch.
   
Instead of ttx, though, we can have an API working group lead
selected from the set of folks currently listed as committed to
the effort?
  
   Yes, the working group should select a chair who would fill the
   same role I do for the TC (organize meetings, push agenda, tally
   votes on reviews, etc.)
  
   That would be very helpful in keeping that diverse group on track.
   Now you just need some volunteer :)
  
   --
   Thierry Carrez (ttx)
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-10-14 Thread Salvatore Orlando
Doug,

I totally agree with your findings on the policy module.
Neutron already has some customizations there and we already have a few
contributors working on syncing it back with oslo-incubator during the Kilo
release cycle.

However, my query was about the quota module.
From what I gather it seems not a lot of projects use it:

$ find . -name openstack-common.conf | xargs grep quota
$

Salvatore

On 15 October 2014 00:34, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 14, 2014, at 12:31 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Hi Doug,

 do you know if the existing quota oslo-incubator module has already some
 active consumers?
 In the meanwhile I've pushed a spec to neutron-specs for improving quota
 management there [1]


 It looks like a lot of projects are syncing the module:

 $ grep policy */openstack-common.conf

 barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
 ceilometer/openstack-common.conf:module=policy
 cinder/openstack-common.conf:module=policy
 designate/openstack-common.conf:module=policy
 gantt/openstack-common.conf:module=policy
 glance/openstack-common.conf:module=policy
 heat/openstack-common.conf:module=policy
 horizon/openstack-common.conf:module=policy
 ironic/openstack-common.conf:module=policy
 keystone/openstack-common.conf:module=policy
 manila/openstack-common.conf:module=policy
 neutron/openstack-common.conf:module=policy
 nova/openstack-common.conf:module=policy
 trove/openstack-common.conf:module=policy
 tuskar/openstack-common.conf:module=policy

 I’m not sure how many are actively using it, but I wouldn’t expect them to
 copy it in if they weren’t using it at all.


 Now, I can either work on the oslo-incubator module and leverage it in
 Neutron, or develop the quota module in Neutron, and move it to
 oslo-incubator once we validate it with Neutron. The latter approach seems
 easier from a workflow perspective - as it avoids the intermediate steps of
 moving code from oslo-incubator to neutron. On the other hand it will delay
 adoption in oslo-incubator.


 The policy module is up for graduation this cycle. It may end up in its
 own library, to allow us to build a review team for the code more easily
 than if we put it in with some of the other semi-related modules like the
 server code. We’re still working that out [1], and if you expect to make a
 lot of incompatible changes we should delay graduation to make that simpler.

 Either way, since we have so many consumers, I think it would be easier to
 have the work happen in Oslo somewhere so we can ensure those changes are
 useful to and usable by all of the existing consumers.

 Doug

 [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals


 What's your opinion?

 Regards,
 Salvatore

 [1] https://review.openstack.org/#/c/128318/

 On 8 October 2014 18:52, Doug Hellmann d...@doughellmann.com wrote:


 On Oct 8, 2014, at 7:03 AM, Davanum Srinivas dava...@gmail.com wrote:

  Salvatore, Joe,
 
  We do have this at the moment:
 
 
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
 
  — dims

 If someone wants to drive creating a useful library during kilo, please
 consider adding the topic to the etherpad we’re using to plan summit
 sessions and then come participate in the Oslo meeting this Friday 16:00
 UTC.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 Doug

 
  On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
 
  On 8 October 2014 04:13, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
 
  Keeping the enforcement local (same way policy works today) helps
 limit
  the fragility, big +1 there.
 
  I also agree with Vish, we need a uniform way to talk about quota
  enforcement similar to how we have a uniform policy language /
 enforcement
  model (yes I know it's not perfect, but it's far closer to uniform
 than
  quota management is).
 
 
  It sounds like maybe we should have an oslo library for quotas?
 Somewhere
  where we can share the code,but keep the operations local to each
 service.
 
 
  This is what I had in mind as well. A simple library for quota
 enforcement
  which can be used regardless of where and how you do it, which might
 depend
  on the application business logic, the WSGI framework in use, or other
  factors.
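
(For what it's worth, the kind of enforcement-only interface I have in mind
would look roughly like the sketch below; every name here is hypothetical,
not an existing oslo API.)

    class OverQuota(Exception):
        pass


    class QuotaEnforcer(object):
        """Enforcement stays local to each service, like policy checks do;
        only the calling convention needs to be uniform across projects."""

        def __init__(self, limits):
            # e.g. {'port': 50, 'network': 10}, however the service loads them
            self.limits = limits

        def enforce(self, resource, requested, used):
            limit = self.limits.get(resource)
            if limit is not None and used + requested > limit:
                raise OverQuota('%s: %s requested, %s in use, limit is %s' %
                                (resource, requested, used, limit))


    # The service keeps counting usage itself (business logic, DB, whatever);
    # only the enforcement call is shared.
    enforcer = QuotaEnforcer({'port': 50})
    enforcer.enforce('port', requested=1, used=49)   # passes
    try:
        enforcer.enforce('port', requested=2, used=49)
    except OverQuota:
        pass  # translate to an API-level error in the service
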
 
 
 
 
  If there is still interest of placing quota in keystone, let's talk
 about
  how that will work and what will be needed from Keystone . The
 previous
  attempt didn't get much traction and stalled out early in
 implementation. If
  we want to revisit this lets make sure we have the resources needed
 and
  spec(s) in progress / info on etherpads (similar to how the
 multitenancy
  stuff was handled at the last summit) as early as possible.
 
 
  Why not centralize quota management via the python-openstackclient,
 what
  is the benefit of getting 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread David Vossel


- Original Message -
 Ok, why are you so down on running systemd in a container?

It goes against the grain.

From a distributed systems view, we gain quite a bit of control by maintaining
one service per container. Containers can be re-organised and re-purposed 
dynamically.
If we have systemd trying to manage an entire stack of resources within a 
container,
we lose this control.

From my perspective a containerized application stack needs to be managed 
externally
by whatever is orchestrating the containers to begin with. When we take a step 
back
and look at how we actually want to deploy containers, systemd doesn't make 
much sense.
It actually limits us in the long run.

Also... recovery. Using systemd to manage a stack of resources within a single 
container
makes it difficult for whatever is externally enforcing the availability of 
that container
to detect the health of the container.  As it is now, the actual service is pid 
1 of a
container. If that service dies, the container dies. If systemd is pid 1, there 
can
be all kinds of chaos occurring within the container, but the external 
distributed
orchestration system won't have a clue (unless it invokes some custom health 
monitoring
tools within the container itself, which will likely be the case someday.)

-- Vossel


 Pacemaker works, but it's kind of a pain to set up compared to just yum installing
 a few packages and setting init to systemd. There are some benefits for
 sure, but if you have to force all the docker components onto the same
 physical machine anyway, why bother with the extra complexity?

 Thanks,
 Kevin
 
 
 From: David Vossel [dvos...@redhat.com]
 Sent: Tuesday, October 14, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns
 
 - Original Message -
  Same thing works with cloud init too...
 
 
  I've been waiting on systemd working inside a container for a while. it
  seems
  to work now.
 
 oh no...
 
  The idea being its hard to write a shell script to get everything up and
  running with all the interactions that may need to happen. The init
  system's
  already designed for that. Take a nova-compute docker container for
  example,
  you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the
  ceilometer-agent all baked in. Writing a shell script to get it all
  started
  and shut down properly would be really ugly.
 
  You could split it up into 4 containers and try and ensure they are
  coscheduled and all the pieces are able to talk to each other, but why?
  Putting them all in one container with systemd starting the subprocesses is
  much easier and shouldn't have many drawbacks. The components code is
  designed and tested assuming the pieces are all together.
 
 What you need is a dependency model that is enforced outside of the
 containers. Something
 that manages the order containers are started/stopped/recovered in. This
 allows
 you to isolate your containers with 1 service per container, yet still
 express that
 container with service A needs to start before container with service B.
 
 Pacemaker does this easily. There's even a docker resource-agent for
 Pacemaker now.
 https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker
 
 -- Vossel
 
 ps. don't run systemd in a container... If you think you should, talk to me
 first.
 
 
  You can even add a ssh server in there easily too and then ansible in to do
  whatever other stuff you want to do to the container like add other
  monitoring and such
 
  Ansible or puppet or whatever should work better in this arrangement too
  since existing code assumes you can just systemctl start foo;
 
  Kevin
  
  From: Lars Kellogg-Stedman [l...@redhat.com]
  Sent: Tuesday, October 14, 2014 12:10 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns
 
  On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:
   With Docker, you are limited to the operating system of whatever the
   image
   uses.
 
  See, that's the part I disagree with.  What I was saying about ansible
  and puppet in my email is that I think the right thing to do is take
  advantage of those tools:
 
FROM ubuntu
 
RUN apt-get install ansible
COPY my_ansible_config.yaml /my_ansible_config.yaml
RUN ansible /my_ansible_config.yaml
 
  Or:
 
FROM Fedora
 
RUN yum install ansible
COPY my_ansible_config.yaml /my_ansible_config.yaml
RUN ansible /my_ansible_config.yaml
 
  Put the minimal instructions in your dockerfile to bootstrap your
  preferred configuration management tool. This is exactly what you
  would do when booting, say, a Nova instance into an openstack
  environment: you can provide a shell script to cloud-init that would
  install whatever packages are required to run 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Charles Crouch


- Original Message -
 Ok, why are you so down on running systemd in a container?
 
 Pacemaker works, but it's kind of a pain to set up compared to just yum installing
 a few packages and setting init to systemd. There are some benefits for
 sure, but if you have to force all the docker components onto the same
 physical machine anyway, why bother with the extra complexity?

The other benefit of running systemd in a container is that, at least
conceptually, it makes the migration path from VMs to docker containers
that much easier. If you've got an existing investment in using system
tools to run your services, it would be awfully nice to be able to keep
using that, at least in the short term, as you get your feet wet with
docker. Once you've seen the light and decided that pacemaker is a much
better choice than systemd then you can go port everything over.

I know that using systemd is not the docker-way but it is a way. 
And I think enabling people to gain some benefits of using containers without 
making them completely re-architect how they manage their services
has some value. 

To me this is akin to letting people run their legacy/non-cloudy apps on 
an IaaS. Are you going to get all the benefits that come from re-architecting 
your app to be autoscaleable and robust in the face of instance outages?
Nope. But are you going to get some benefits? Yep

Cheers
Charles

 
 Thanks,
 Kevin
 
 
 From: David Vossel [dvos...@redhat.com]
 Sent: Tuesday, October 14, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns
 
 - Original Message -
  Same thing works with cloud init too...
 
 
  I've been waiting on systemd working inside a container for a while. it
  seems
  to work now.
 
 oh no...
 
  The idea being its hard to write a shell script to get everything up and
  running with all the interactions that may need to happen. The init
  system's
  already designed for that. Take a nova-compute docker container for
  example,
  you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the
  ceilometer-agent all baked in. Writing a shell script to get it all
  started
  and shut down properly would be really ugly.
 
  You could split it up into 4 containers and try and ensure they are
  coscheduled and all the pieces are able to talk to each other, but why?
  Putting them all in one container with systemd starting the subprocesses is
  much easier and shouldn't have many drawbacks. The components code is
  designed and tested assuming the pieces are all together.
 
 What you need is a dependency model that is enforced outside of the
 containers. Something
 that manages the order containers are started/stopped/recovered in. This
 allows
 you to isolate your containers with 1 service per container, yet still
 express that
 container with service A needs to start before container with service B.
 
 Pacemaker does this easily. There's even a docker resource-agent for
 Pacemaker now.
 https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker
 
 -- Vossel
 
 ps. don't run systemd in a container... If you think you should, talk to me
 first.
 
 
  You can even add a ssh server in there easily too and then ansible in to do
  whatever other stuff you want to do to the container like add other
  monitoring and such
 
  Ansible or puppet or whatever should work better in this arrangement too
  since existing code assumes you can just systemctl start foo;
 
  Kevin
  
  From: Lars Kellogg-Stedman [l...@redhat.com]
  Sent: Tuesday, October 14, 2014 12:10 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns
 
  On Tue, Oct 14, 2014 at 02:45:30PM -0400, Jay Pipes wrote:
   With Docker, you are limited to the operating system of whatever the
   image
   uses.
 
  See, that's the part I disagree with.  What I was saying about ansible
  and puppet in my email is that I think the right thing to do is take
  advantage of those tools:
 
FROM ubuntu
 
RUN apt-get install ansible
COPY my_ansible_config.yaml /my_ansible_config.yaml
RUN ansible /my_ansible_config.yaml
 
  Or:
 
FROM Fedora
 
RUN yum install ansible
COPY my_ansible_config.yaml /my_ansible_config.yaml
RUN ansible /my_ansible_config.yaml
 
  Put the minimal instructions in your dockerfile to bootstrap your
  preferred configuration management tool. This is exactly what you
  would do when booting, say, a Nova instance into an openstack
  environment: you can provide a shell script to cloud-init that would
  install whatever packages are required to run your config management
  tool, and then run that tool.
 
  Once you have bootstrapped your cm environment you can take advantage
  of all those distribution-agnostic cm tools.
 
  In other words, using 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Vijay B
Hi Phillip,


Adding my thoughts below. I'll first answer the questions you raised with
what I think should be done, and then give the reasoning behind those
views.



1. Do we want to add logic in the plugin to call the FLIP association API?


  We should implement the logic in the new v2 extension and the plugin
layer as a single API call. We would need to add to the existing v2 API to
be able to do this. The best place to add this option of passing the FLIP
info/request to the VIP is in the VIP create and update API calls via new
parameters.


2. If we have logic in the plugin should we have configuration that
identifies whether to use/return the FLIP instead of the port VIP?


  Yes and no, in that we should return the complete result of the VIP
create/update/list/show API calls, in which we show the VIP internal IP,
but we also show the FLIP either as empty or having a FLIP uuid. External
users will anyway use only the FLIP, else they wouldn’t be able to reach
the LB and the VIP IP, but the APIs need to show both fields.
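
To make that concrete, a VIP show result under this scheme might look
roughly like the dict below (the floating IP field name is made up for
illustration; the other fields mirror the existing ones):

    vip = {
        'id': 'VIP-UUID',
        'name': 'web-vip',
        'address': '10.0.0.12',          # internal VIP IP on the tenant subnet
        'port_id': 'NEUTRON-PORT-UUID',  # neutron port backing the VIP
        'floatingip_id': None,           # empty until a FLIP is associated
        'status': 'ACTIVE',
    }
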


3. Would we rather have logic for FLIP association in the drivers?


  This is the hardest part to decide. To understand this, we need to look
at two important drivers of LBaaS design:


 I)  The Neutron core plugin we’re using.

II) The different types of LB devices - physical, virtual standalone, and
virtual controlled by a management plane. This leads to different kinds of
LBaaS drivers and different kinds of interaction or the lack of it between
them and the core neutron plugin.


The reason we need to take into account both these is that port
provisioning as well as NATing for the FLIP to internal VIP IP will be
configured differently by the different network management/backend planes
that the plugins use, and the way drivers configure LBs can be highly
impacted by this.


For example, we can have an NSX infrastructure that will implement the FLIP
to internal IP conversion in the logical router module which sits pretty
much outside of Openstack’s realm, using openflow. Or we can use lighter
solutions directly on the hypervisor that still employ open flow entries
without actually having a dedicated logical routing module. Neither will
matter much if we are in a position to have them deploy our networking for
us, i.e., in the cases of us using virtual LBs sitting on compute nodes.
But if we have a physical LB, the neutron plugins cannot do much of the
network provisioning work for us, typically because physical LBs usually
sit outside of the cloud, and are among the earliest points of contact from
the external world.


This already nudges us to consider putting the FLIP provisioning
functionality in the driver. However, consider again more closely the major
ways in which LBaaS drivers talk to LB solutions today depending on II) :


 a) LBaaS drivers that talk to a virtual LB device on a compute node,
directly.

b) LBaaS drivers that talk to a physical LB device (or a virtual LB sitting
outside the cloud) directly.

c) LBaaS drivers that talk to a management plane like F5’s BigIQ, or
Netscaler’s NCC, or as in our case, Octavia, that try to provide tenant
based provisioning of virtual LBs.

d) The HAProxy reference namespace driver.


d) is really a PoC use case, and we can forget it. Let’s consider a), b)
and c).


If we use a) or b), we must assume that the required routing for the
virtual LB has been set up correctly, either already through nova or
manually. So we can afford to do our FLIP plumbing in the neutron plugin
layer, but, driven by the driver - how? - typically, after the VIP is
successfully created on the LB, and just before the driver updates the
VIP’s status as ACTIVE, it can create the FLIP. Of course, if the FLIP
provisioning fails for any reason, the VIP still stands. It’ll be empty in
the result, and the API will error out saying “VIP created but FLIP
creation failed”. It must be manually deleted by another delete VIP call.
We can’t afford to provision a FLIP before a VIP is active, for external
traffic shouldn’t be taken while the VIP isn’t up yet. If the lines are
getting hazy right now because of this callback model, let’s just focus on
the point that we’re initiating FLIP creation in the driver layer while the
code sits in the plugin layer because it will need to update the database.
But in absolute terms, we’re doing it in the driver.
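
As a rough sketch of that driver-completion step (the python-neutronclient
calls are the standard ones; the function name, IDs and credentials below
are made up for illustration):

    from neutronclient.v2_0 import client as neutron_client


    def associate_flip(neutron, vip_port_id, external_net_id):
        """Create a FLIP on the external network and NAT it to the neutron
        port backing the now-ACTIVE VIP."""
        body = {'floatingip': {'floating_network_id': external_net_id,
                               'port_id': vip_port_id}}
        flip = neutron.create_floatingip(body)['floatingip']
        return flip['id'], flip['floating_ip_address']


    # Called right before the driver flips the VIP status to ACTIVE; if this
    # raises, the VIP still stands and the API reports the FLIP failure.
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://127.0.0.1:5000/v2.0')
    flip_id, flip_address = associate_flip(neutron,
                                           vip_port_id='VIP-PORT-UUID',
                                           external_net_id='EXT-NET-UUID')
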


It is use case c) that is interesting. In this case, we should do all
neutron-based provisioning neither in the driver nor in the plugin in
neutron; rather, we should do this in Octavia, and in the Octavia
controller to be specific. This is very important to note, because if
customers are using this deployment (which today has the potential to be
way greater in the near future than any other model simply because of the
sheer existing customer base), we can’t be creating the FLIP in the plugin
layer and have the controller reattempt it. Indeed, the controllers can
change their code to not attempt this, but 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Vijay Venkatachalam
Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Here are some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand there are other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWM&usp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating IP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there's more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
 Hello All,

 I wanted to start a discussion on floating IP management and ultimately
 decide how the LBaaS group wants to handle the association.

 There is a need to utilize floating IPs(FLIP) and its API calls to
 associate a FLIP to the neutron port that we currently spin up.

 See DOCS here:

 
http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_c
r
eate.html

 Currently, LBaaS will make internal service calls (clean interface :/)
to create and attach a Neutron port.
 The VIP from this port is added to the Loadbalancer object of the Load
balancer configuration and returned to the user.

 This creates a bit of a problem if we want to associate a FLIP with the
port and display the FLIP to the user instead of
 the port's VIP because the port is currently created and attached in the
plugin and there is no code anywhere to handle the FLIP
 association.

 To keep this short and to the point:

 We need to discuss where and how we want to handle this association. I
have a few questions to start it off.

 Do we want to add logic in the plugin to call the FLIP association API?

 If we have logic in the plugin should we have configuration that
 identifies whether to use/return the FLIP instead of the port VIP?

 Would we rather have logic for FLIP association in the drivers?

 If logic is in the drivers would we still return the port VIP to the
user then later overwrite it with the FLIP?
 Or would we have configuration to not return the port VIP initially,
but an additional query would show the associated FLIP.


 Is there an internal service call for this, and if so would we use it
instead of API calls?


 Theres plenty of other thoughts and questions to be asked and discussed
in regards to FLIP handling,
 hopefully this will get us going. I'm certain I may not be completely
understanding this and
 is the hopes of this email to clarify any uncertainties.





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-14 Thread Adam Lawson

 Nova is also not the right place to do the generic solution as many other
 parts could be involved... neutron and cinder come to mind. Nova needs to
 provide the basic functions but it needs something outside to make it all
 happen transparently.
 I would really like a shared solution rather than each deployment doing
 their own and facing identical problems. A best of breed solution which can
 be incrementally improved as we find problems to get the hypervisor down
 event, to force detach of boot volumes, restart elsewhere and reconfigure
 floating ips with race conditions is needed.
 Some standards for tagging is good but we also need some code :-)


I think this would actually be a worthwhile cross-project effort but
getting it done would require some higher-level guidance to keep it on
track.

I also do not believe Nova *contains* all of the data to perform auto-evac,
but it has *access* to the data, right? Or could, anyway. I think Cinder
would definitely play a role, and Neutron for sure.

And as far as scope is concerned, I personally think something like this
should only support VMs with shared storage. Otherwise, phase 1 gets
overly complex and gets into something akin to VMware's DRS which I DO
think could be another step, but the first step needs to be clean to ensure
it gets done.


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Tue, Oct 14, 2014 at 12:01 PM, Mathieu Gagné mga...@iweb.com wrote:

 On 2014-10-14 2:49 PM, Tim Bell wrote:


 Nova is also not the right place to do the generic solution as many other
 parts could be involved... neutron and cinder come to mind. Nova needs to
 provide the basic functions but it needs something outside to make it all
 happen transparently.

 I would really like a shared solution rather than each deployment doing
 their own and facing identical problems. A best of breed solution which can
 be incrementally improved as we find problems to get the hypervisor down
 event, to force detach of boot volumes, restart elsewhere and reconfigure
 floating ips with race conditions is needed.

 Some standards for tagging is good but we also need some code :-)


 I agree with Tim. Nova does not have all the required information to make
 a proper decision which could imply other OpenStack (and non-OpenStack)
 services. Furthermore, evacuating a node might imply fencing which Nova
 might not be able to do properly or have the proper tooling. What about
 non-shared storage backend in Nova? You can't evacuate those without data
 loss.

 --
 Mathieu


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Zhipeng Huang
Hi all, to my understanding we certainly don't want developers to
arbitrarily extend APIs, which would lead to a lot of mess; however, we
still need to find a way to standardize how we augment existing APIs, since
they are not perfect either.

On Wed, Oct 15, 2014 at 7:02 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Oct 14, 2014, at 6:00 PM, Lance Bragstad lbrags...@gmail.com wrote:



 On Tue, Oct 14, 2014 at 4:29 PM, Christopher Yeoh cbky...@gmail.com
 wrote:

 On Tue, 14 Oct 2014 10:29:34 -0500
 Lance Bragstad lbrags...@gmail.com wrote:

  I found a couple of free times available for a weekly meeting if
  people are interested:
 
  https://review.openstack.org/#/c/128332/2
 
  Not sure if a meeting time has been hashed out already or not, and if
  it has I'll change the patch accordingly. If not, we can iterate on
  possible meeting times in the review if needed. This was to just get
  the ball rolling if we want a weekly meeting. I proposed one review
  for Thursdays at 2100 and a second patch set for 2000, UTC. That can
  easily change, but those were two times that didn't conflict with the
  existing meeting schedules in #openstack-meeting.

 So UTC 2000 is 7am Sydney, 5am Tokyo and 4am Beijing time which is
 pretty early in the day. I'd suggest UTC  if that's not too late for
 others who'd like to participate.

 Chris


 Unfortunately, we conflict with the I18N Team Meeting at UTC  (in
 #openstack-meeting). I updated the review for UTC 1000. An alternating
 schedule would probably work well given the attendance from different time
 zones. I'll snoop around for another meeting time to alternate with, unless
 someone has one in mind.


 Don’t forget about #openstack-meeting-alt and #openstack-meeting-3 as
 alternative rooms for the meetings. Those have the meeting bot and are
 managed on the same wiki page for scheduling.

 Doug





 
 
 
  On Tue, Oct 14, 2014 at 3:55 AM, Thierry Carrez
  thie...@openstack.org wrote:
 
   Jay Pipes wrote:
On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
I guess we could also start fleshing out in the repo how we'll
work in practice too (eg once the document is stable what
process do we have for making changes - two +2's is probably not
adequate for something like this).
   
We can make it work exactly like the openstack/governance repo,
where ttx has the only ability to +2/+W approve a patch for
merging, and he tallies a majority vote from the TC members, who
vote -1 or +1 on a proposed patch.
   
Instead of ttx, though, we can have an API working group lead
selected from the set of folks currently listed as committed to
the effort?
  
   Yes, the working group should select a chair who would fill the
   same role I do for the TC (organize meetings, push agenda, tally
   votes on reviews, etc.)
  
   That would be very helpful in keeping that diverse group on track.
   Now you just need some volunteer :)
  
   --
   Thierry Carrez (ttx)
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Zhipeng Huang
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402
OpenStack, OpenDaylight, OpenCompute aficionado
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Fox, Kevin M
I'm not arguing that everything should be managed by one systemd, I'm just 
saying, for certain types of containers, a single docker container with systemd 
in it might be preferable to trying to slice it unnaturally into several 
containers.

Systemd has invested a lot of time/effort to be able to relaunch failed 
services, support spawning and maintaining unix sockets and services across 
them, etc, that you'd have to push out of and across docker containers. All of 
that can be done, but why reinvent the wheel? Like you said, pacemaker can be 
made to make it all work, but I have yet to see a way to deploy pacemaker 
services anywhere near as easily as systemd+yum makes it. (Thanks be to redhat. :)

The answer seems to be that it's not dockerish. That's ok. I just wanted to 
understand the issue for what it is: whether there is a really good reason for 
not wanting to do it, or whether it's just not the way things are done. I've had 
kind of the opposite feeling regarding docker containers. Docker used to do very 
bad things when killing the container; nasty if you wanted your database not to 
go corrupt. Killing pid 1 and then forcing the container down after 10 seconds 
was particularly bad. Having something like systemd in place allows the database 
to be notified, then shut down properly. Sure, you can script up enough shell to 
make this work, but you have to write some difficult code, over and over 
again... Docker has gotten better more recently, but it still makes me a bit 
nervous using it for stateful things.

As for recovery, systemd can do the recovery too. I'd argue that at this point 
in time systemd recovery will probably work better than some custom shell 
scripts when it comes to doing the right thing at bring up. The other thing is, 
recovery is not just about pid 1 going away; often it sticks around while other 
badness is going on. It's a way to know things are bad, but you can't 
necessarily rely on it to know the container is healthy. You need more robust 
checks for that.

Thanks,
Kevin


From: David Vossel [dvos...@redhat.com]
Sent: Tuesday, October 14, 2014 4:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

- Original Message -
 Ok, why are you so down on running systemd in a container?

It goes against the grain.

From a distributed systems view, we gain quite a bit of control by maintaining
one service per container. Containers can be re-organised and re-purposed 
dynamically.
If we have systemd trying to manage an entire stack of resources within a 
container,
we lose this control.

From my perspective a containerized application stack needs to be managed 
externally
by whatever is orchestrating the containers to begin with. When we take a step 
back
and look at how we actually want to deploy containers, systemd doesn't make 
much sense.
It actually limits us in the long run.

Also... recovery. Using systemd to manage a stack of resources within a single 
container
makes it difficult for whatever is externally enforcing the availability of 
that container
to detect the health of the container.  As it is now, the actual service is pid 
1 of a
container. If that service dies, the container dies. If systemd is pid 1, there 
can
be all kinds of chaos occurring within the container, but the external 
distributed
orchestration system won't have a clue (unless it invokes some custom health 
monitoring
tools within the container itself, which will likely be the case someday.)

-- Vossel


 Pacemaker works, but it's kind of a pain to set up compared to just yum installing
 a few packages and setting init to systemd. There are some benefits for
 sure, but if you have to force all the docker components onto the same
 physical machine anyway, why bother with the extra complexity?

 Thanks,
 Kevin

 
 From: David Vossel [dvos...@redhat.com]
 Sent: Tuesday, October 14, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] on Dockerfile patterns

 - Original Message -
  Same thing works with cloud init too...
 
 
  I've been waiting on systemd working inside a container for a while. it
  seems
  to work now.

 oh no...

  The idea being its hard to write a shell script to get everything up and
  running with all the interactions that may need to happen. The init
  system's
  already designed for that. Take a nova-compute docker container for
  example,
  you probably need nova-compute, libvirt, neutron-openvswitch-agent, and the
  ceilometer-agent all baked in. Writing a shell script to get it all
  started
  and shut down properly would be really ugly.
 
  You could split it up into 4 containers and try and ensure they are
  coscheduled and all the pieces are able to talk to each other, but why?
  Putting them all in one container with systemd starting 

Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Angus Lees
On Tue, 14 Oct 2014 07:51:54 AM Steven Dake wrote:
 Angus,
 
 On 10/13/2014 08:51 PM, Angus Lees wrote:
  I've been reading a bunch of the existing Dockerfiles, and I have two
  humble requests:
  
  
  1. It would be good if the interesting code came from python
  sdist/bdists
  rather than rpms.
  
  This will make it possible to rebuild the containers using code from a
  private branch or even unsubmitted code, without having to go through a
  redhat/rpm release process first.
  
  I care much less about where the python dependencies come from. Pulling
  them from rpms rather than pip/pypi seems like a very good idea, given
  the relative difficulty of caching pypi content and we also pull in the
  required C, etc libraries for free.
  
  
  With this in place, I think I could drop my own containers and switch to
  reusing kolla's for building virtual testing environments.  This would
  make me happy.
 
 I've captured this requirement here:
 https://blueprints.launchpad.net/kolla/+spec/run-from-master
 
 I also believe it would be interesting to run from master or a stable
 branch for CD.  Unfortunately I'm still working on the nova-compute
 docker code, but if someone comes along and picks up that blueprint, i
 expect it will get implemented :)  Maybe that could be you.

Yeah I've already got a bunch of working containers that pull from master[1], 
but I've been thinking I should change that to use an externally supplied 
bdist.  The downside is you quickly end up wanting a docker container to build 
your deployment docker container.  I gather this is quite a common thing to 
do, but I haven't found the time to script it up yet.

[1] https://github.com/anguslees/kube-openstack/tree/master/docker

I could indeed work on this, and I guess I was gauging the level of enthusiasm 
within kolla for such a change.  I don't want to take time away from the 
alternative I have that already does what I need only to push uphill to get it 
integrated :/

  2. I think we should separate out run the server from do once-off
  setup.
  
  Currently the containers run a start.sh that typically sets up the
  database, runs the servers, creates keystone users and sets up the
  keystone catalog.  In something like k8s, the container will almost
  certainly be run multiple times in parallel and restarted numerous times,
  so all those other steps go against the service-oriented k8s ideal and
  are at-best wasted.
  
  I suggest making the container contain the deployed code and offer a few
  thin scripts/commands for entrypoints.  The main
  replicationController/pod _just_ starts the server, and then we have
  separate pods (or perhaps even non-k8s container invocations) that do
  initial database setup/migrate, and post- install keystone setup.
 
 The server may not start before the configuration of the server is
 complete.  I guess I don't quite understand what you indicate here when
 you say we have separate pods that do initial database setup/migrate.
 Do you mean have dependencies in some way, or for eg:
 
 glance-registry-setup-pod.yaml - the glance registry pod descriptor
 which sets up the db and keystone
 glance-registry-pod.yaml - the glance registry pod descriptor which
 starts the application and waits for db/keystone setup
 
 and start these two pods as part of the same selector (glance-registry)?
 
 That idea sounds pretty appealing although probably won't be ready to go
 for milestone #1.

So the way I do it now, I have a replicationController that starts/manages 
(eg) nova-api pods[2].  I separately have a nova-db-sync pod[3] that basically 
just runs nova-manage db sync.

I then have a simple shell script[4] that starts them all at the same time.  
The nova-api pods crash and get restarted a few times until the database has 
been appropriately configured by the nova-db-sync pod, and then they're fine 
and 
start serving.

When nova-db-sync exits successfully, the pod just sits in state terminated 
thanks to restartPolicy: onFailure.  Sometime later I can delete the 
terminated nova-db-sync pod, but it's also harmless if I just leave it or even 
if it gets occasionally re-run as part of some sort of update.


[2] 
https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-api-repcon.yaml
[3] 
https://github.com/anguslees/kube-openstack/blob/master/kubernetes-in/nova-db-sync-pod.yaml
[4] https://github.com/anguslees/kube-openstack/blob/master/kubecfg-create.sh


  I'm open to whether we want to make these as lightweight/independent as
  possible (every daemon in an individual container), or limit it to one per
  project (eg: run nova-api, nova-conductor, nova-scheduler, etc all in one
  container).   I think the differences are run-time scalability and
  resource- attribution vs upfront coding effort and are not hugely
  significant either way.
  
  Post-install catalog setup we can combine into one cross-service setup
  like
  tripleO does[1].  Although k8s doesn't have explicit support for 

Re: [openstack-dev] [neutron][all] Naming convention for unused variables

2014-10-14 Thread Angus Lees
On Tue, 14 Oct 2014 12:28:29 PM Angus Lees wrote:
 (Context: https://review.openstack.org/#/c/117418/)
 
 I'm looking for some rough consensus on what naming conventions we want for
 unused variables in Neutron, and across the larger OpenStack python codebase
 since there's no reason for Neutron to innovate here.

So after carefully collecting and summarising all one opinion[1], we're going 
with:

 __ (double-underscore) and
 _foo (leading underscore)
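
Applied to the examples from the original mail, that gives something like:

    import subprocess

    path = '/var/log/nova/nova-api.log'

    # 1. The "I just don't care" variable: double-underscore
    __, __, filename = path.rpartition('/')

    # 2. Known-unused, but the name still serves as documentation:
    #    leading underscore
    out, _err = subprocess.Popen(['df', path],
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE).communicate()
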


[1] Next time I'll be sure to mention docker in the subject line ;)

 - Gus

 As far as I can see, there are two cases:
 
 
 1.  The I just don't care variable
 
 Eg:_, _, filename = path.rpartition('/')
 
 In python this is very commonly '_', but this conflicts with the gettext
 builtin so we should avoid it in OpenStack.
 
 Possible candidates include:
 
 a.  'x'
 b. '__'  (double-underscore)
 c. No convention
 
 
 2.  I know it is unused, but the name still serves as documentation
 
 Note this turns up as two cases: as a local, and as a function parameter.
 
 Eg:   out, _err = execute('df', path)
 
 Eg:   def makefile(self, _mode, _other):
 return self._buffer
 
 I deliberately chose that second example to highlight that the leading-
 underscore convention collides with its use for private properties.
 
 Possible candidates include:
 
 a. _foo   (leading-underscore, note collides with private properties)
 b. unused_foo   (suggested in the Google python styleguide)
 c. NOQA_foo   (as suggested in c/117418)
 d. No convention  (including not indicating that variables are known-unused)
 
 
 As with all style discussions, everyone feels irrationally attached to their
 favourite, but the important bit is to be consistent to aid readability 
 (and in this case, also to help the mechanical code checkers).
 
 Vote / Discuss / Suggest additional alternatives.

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Alex Xu

On 14 October 2014 21:57, Jay Pipes wrote:

On 10/14/2014 05:04 AM, Alex Xu wrote:

There is one reason to think about what projects *currently* do: when we
choose which convention we want.
For example, with CamelCase and snake_case, if most projects use
snake_case, then choosing the snake_case style
will be the right call.


I would posit that the reason we have such inconsistencies in our 
project's APIs is that we haven't taken a stand and said this is the 
way it must be.


There's lots of examples of inconsistencies out in the OpenStack APIs. 
We can certainly use a wiki or etherpad page to document those 
inconsistencies. But, eventually, this working group should produce 
solid decisions that should be enforced across *future* OpenStack 
APIs. And that guidance should be forthcoming in the next month or so, 
not in one or two release cycles.


I personally think proposing patches to an openstack-api repository is 
the most effective way to make those proposals. Etherpads and wiki 
pages are fine for dumping content, but IMO, we don't need to dump 
content -- we already have plenty of it. We need to propose guidelines 
for *new* APIs to follow.



+1 for the next month; let's stop adding more inconsistencies.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-14 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2014-10-14 17:40:16 -0700:
 I'm not arguing that everything should be managed by one systemd, I'm
 just saying, for certain types of containers, a single docker container
 with systemd in it might be preferable to trying to slice it unnaturally
 into several containers.
 

Can you be more concrete? Most of the time things that need to be in
the same machine tend to have some kind of controller already. Meanwhile
it is worth noting that you can have one _image_, but several containers
running from that one image. So if you're trying to run a few pieces of
Neutron, for instance, you can have multiple containers each from that
one neutron image.

 Systemd has invested a lot of time/effort to be able to relaunch failed
 services, support spawning and maintaining unix sockets and services
 across them, etc, that you'd have to push out of and across docker
 containers. All of that can be done, but why reinvent the wheel? Like you
 said, pacemaker can be made to make it all work, but I have yet to see
 a way to deploy pacemaker services anywhere near as easy as systemd+yum
 makes it. (Thanks be to redhat. :)
 

There are some of us who are rather annoyed that systemd tries to do
this in such a naive way and assumes everyone will want that kind of
management. It's the same naiveté that leads people to think if they
make their app server systemd service depend on their mysql systemd
service that this will eliminate startup problems. Once you have more
than one server, it doesn't work.

Kubernetes adds a distributed awareness of the containers that makes it
uniquely positioned to do most of those jobs much better than systemd
can.

 The answer seems to be that it's not dockerish. That's ok. I just wanted to
 understand the issue for what it is: whether there is a really good reason
 for not wanting to do it, or whether it's just not the way things are done.
 I've had kind of the opposite feeling regarding docker containers. Docker
 used to do very bad things when killing the container; nasty if you wanted
 your database not to go corrupt. Killing pid 1 and then forcing the
 container down after 10 seconds was particularly bad. Having something like
 systemd in place allows the database to be notified, then shut down
 properly. Sure, you can script up enough shell to make this work, but you
 have to write some difficult code, over and over again... Docker has gotten
 better more recently, but it still makes me a bit nervous using it for
 stateful things.
 

What I think David was saying was that the process you want to run under
systemd is the pid 1 of the container. So if killing that would be bad,
it would also be bad to stop the systemd service, which would do the
same thing: send it SIGTERM. If that causes all hell to break loose, the
stateful thing isn't worth a dime, because it isn't crash safe.

 As for recovery, systemd can do the recovery too. I'd argue that at this
 point in time systemd recovery will probably work better than some custom
 shell scripts when it comes to doing the right thing at bring up. The other
 thing is, recovery is not just about pid 1 going away; often it sticks
 around while other badness is going on. It's a way to know things are bad,
 but you can't necessarily rely on it to know the container is healthy. You
 need more robust checks for that.

I think one thing people like about Kubernetes is that when a container
crashes, and needs to be brought back up, it may actually be brought
up on a different, less busy, more healthy host. I could be wrong, or
that might be in the FUTURE section. But the point is, recovery and
start-up are not things that always want to happen on the same box.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] Naming convention for unused variables

2014-10-14 Thread Kevin Benton
One cannot simply prevent bike-shedding by asking people to do it up front
on the mailing list.
You'll have to wait until review time like everyone else. ;-)

On Tue, Oct 14, 2014 at 6:04 PM, Angus Lees g...@inodes.org wrote:

 On Tue, 14 Oct 2014 12:28:29 PM Angus Lees wrote:
  (Context: https://review.openstack.org/#/c/117418/)
 
  I'm looking for some rough consensus on what naming conventions we want
 for
  unused variables in Neutron, and across the larger OpenStack python
 codebase
  since there's no reason for Neutron to innovate here.

 So after carefully collecting and summarising all one opinion[1], we're
 going with:

  __ (double-underscore) and
  _foo (leading underscore)
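
(For the record, a purely illustrative sketch of the two agreed conventions
side by side, reusing the examples from the original post:)

    # 1. The "I just don't care" variable: double-underscore
    __, __, filename = path.rpartition('/')

    # 2. Known-unused, but the name still documents intent: leading underscore
    out, _err = execute('df', path)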


 [1] Next time I'll be sure to mention docker in the subject line ;)

  - Gus

  As far as I can see, there are two cases:
 
 
  1.  The "I just don't care" variable
 
  Eg:   _, _, filename = path.rpartition('/')
 
  In python this is very commonly '_', but this conflicts with the gettext
  builtin so we should avoid it in OpenStack.
 
  Possible candidates include:
 
  a.  'x'
  b. '__'  (double-underscore)
  c. No convention
 
 
  2.  The "I know it is unused, but the name still serves as documentation" variable
 
  Note this turns up as two cases: as a local, and as a function parameter.
 
  Eg:   out, _err = execute('df', path)
 
  Eg:   def makefile(self, _mode, _other):
  return self._buffer
 
  I deliberately chose that second example to highlight that the leading-
  underscore convention collides with its use for private properties.
 
  Possible candidates include:
 
  a. _foo   (leading-underscore, note collides with private properties)
  b. unused_foo   (suggested in the Google python styleguide)
  c. NOQA_foo   (as suggested in c/117418)
  d. No convention  (including not indicating that variables are
 known-unused)
 
 
  As with all style discussions, everyone feels irrationally attached to
 their
  favourite, but the important bit is to be consistent to aid readability
  (and in this case, also to help the mechanical code checkers).
 
  Vote / Discuss / Suggest additional alternatives.

 --
  - Gus

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] [all] config repository rename to system-config

2014-10-14 Thread Elizabeth K. Joseph
Hi everyone,

A couple weeks ago we split the infrastructure project-config repo out
from the core config repo[0].

The second phase of our restructuring is the rename of config to
system-config, which we've scheduled for Friday, October 17th at 21:00
UTC.

Gerrit will be down for about 30 minutes while we do this rename; we
will also be doing a couple of other outstanding project renames
during this maintenance window.

Since this is a rename, you will not have to re-propose outstanding
config patches to the system-config repository; they will follow the
project through the rename. However, you will have to update your
local repositories to point to the new location of the project.

For more about this restructuring, see our spec:
http://specs.openstack.org/openstack-infra/infra-specs/specs/config-repo-split.html

Feel free to reply to this email or join #openstack-infra if you have
any questions.

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047207.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread GHANSHYAM MANN
On Wed, Oct 15, 2014 at 7:44 AM, Christopher Yeoh cbky...@gmail.com wrote:

 On Tue, 14 Oct 2014 09:45:44 -0400
 Jay Pipes jaypi...@gmail.com wrote:

  On 10/14/2014 12:52 AM, Christopher Yeoh wrote:
   On Mon, 13 Oct 2014 22:20:32 -0400
   Jay Pipes jaypi...@gmail.com wrote:
  
   On 10/13/2014 07:11 PM, Christopher Yeoh wrote:
   On Mon, 13 Oct 2014 10:52:26 -0400
   And whilst I don't have a problem with having some guidelines which
   suggest a future standard for APIs, I don't think we should be
   requiring any type of feature which has not yet been implemented in
   at least one, preferably two openstack projects and released and
   tested for a cycle. Eg standards should be lagging rather than
   leading.
 
  What about features in some of our APIs that are *not* preferable?
  For instance: API extensions.
 
  I think we've seen where API extensions leads us. And it isn't
  pretty. Would you suggest we document what a Nova API extension or a
  Neutron API extension looks like and then propose, for instance, not
  to ever do it again in future APIs and instead use schema
  discoverability?

 So if we had had standards leading development rather than lagging in the
 past, then API extensions would have ended up in the standard, because we
 once thought they were a good idea.

 Perhaps we should distinguish in the documentation between
 recommendations (future looking) and standards (proven it works well
 for us). The latter would be potentially enforced a lot more strictly
 than the former.


It would be great to have a classification in the guidelines (strict,
recommended, etc.), and step by step those guidelines could be moved to a
higher classification as projects start consuming them.


 In the case of extensions I think we should have a section documenting
 why we think they're a bad idea and new projects certainly shouldn't
 use them. But also give some advice around if they are used what
 features they should have (eg version numbers!). Given the time that it
 takes to make major API infrastructure changes it is inevitable that
 there will be api extensions added in the short to medium term. Because
 API development will not just stop while API infrastructure is improved.

   I think it will be better in git (but we also need it in gerrit)
   when it comes to resolving conflicts and after we've established a
   decent document (eg when we have more content). I'm just looking to
   make it as easy as possible for anyone to add any guidelines now.
   Once we've actually got something to discuss then we use git/gerrit
   with patches proposed to resolve conflicts within the document.
 
  Of course it would be in Gerrit. I just put it up on GitHub first
  because I can't just add a repo into the openstack/ code
  namespace... :)

 I've submitted a patch to add an api-wg project using your repository
 as the initial content for the git repository.

 https://review.openstack.org/#/c/128466/

 Regards,

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks  Regards
Ghanshyam Mann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Get keystone auth token via Horizon URL

2014-10-14 Thread Ed Lima
I'm in the very early stages of developing an Android app to manage
OpenStack services, and I would like to get the user credentials/tokens
from Keystone so I can fetch data and execute commands via the Horizon URL.
I'm using Icehouse on Ubuntu 14.04.

In my particular use case I have Keystone running on my internal server at
http://localhost:5000/v3/auth/tokens, which allows my app to use JSON to
get information from other services and execute commands; however, I'd
have to be on the same network as my server for it to work.
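(For reference, that token request looks roughly like the following; shown
in Python with the requests library for brevity, and with placeholder
credentials:)

    import json
    import requests

    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "demo",               # placeholder user
                        "domain": {"id": "default"},
                        "password": "secret",         # placeholder password
                    }
                }
            }
        }
    }
    resp = requests.post("http://localhost:5000/v3/auth/tokens",
                         data=json.dumps(body),
                         headers={"Content-Type": "application/json"})
    # Keystone v3 returns the token in the X-Subject-Token response header.
    token = resp.headers["X-Subject-Token"]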

On the other hand, I have my Horizon URL published externally on the
internet at https://openstack.domain.com/horizon, which is reachable from
anywhere and gives me access to my OpenStack services via a browser on a
desktop. I'd like to do the same from Android; would that be possible? Is
there a way for my app to send JSON requests to Horizon at
https://openstack.domain.com/horizon and get the authentication tokens
from Keystone indirectly?

I should mention I'm not a very experienced developer and any help would be
amazing! Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

