[openstack-dev] [Ceilometer] [Bug #1497073] The return sample body of sample-list is different when using -m and not

2015-11-04 Thread Lin Juan IX Xia
Hi,

Here is an open bug: https://bugs.launchpad.net/ceilometer/+bug/1497073
Is it a bug or not?

For the command "ceilometer sample-list --meter cpu", the client calls the "/v2/meter" API and returns OldSample objects, whose body is different from what "ceilometer sample-list --query 'meter=cpu'" returns.

To fix this inconsistency, we could either deprecate the -m/--meter form of the command or fix it to return the same body as the sample-list command with --query.

Best Regards,
Xia Linjuan
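For illustration, a minimal sketch with the requests library of the two calls whose bodies differ (the endpoint paths, port, and field names follow the Telemetry v2 API as described above, but treat them as assumptions; the token is a placeholder):

import requests

CEILOMETER = "http://localhost:8777"                 # placeholder endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>"}       # placeholder token

# What "ceilometer sample-list --meter cpu" hits: the per-meter resource,
# which returns OldSample objects (counter_name, counter_volume, ...).
old = requests.get(CEILOMETER + "/v2/meters/cpu", headers=HEADERS).json()

# What "ceilometer sample-list --query 'meter=cpu'" hits: the generic
# samples resource, which returns Sample objects (meter, volume, id, ...).
params = {"q.field": "meter", "q.op": "eq", "q.value": "cpu"}
new = requests.get(CEILOMETER + "/v2/samples", headers=HEADERS, params=params).json()

# Printing the keys of one element from each list shows the mismatch
# reported in the bug.
print(sorted(old[0]) if old else [])
print(sorted(new[0]) if new else [])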


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Planning and prioritizing session for Mitaka

2015-11-04 Thread Renat Akhmerov
Team,

We did a great job at the summit discussing our hottest topics within the
project, and a lot of important decisions were made. I would still like to have
one more session in IRC to wrap this up by going over all the BPs/bugs we
created, in order to scope and prioritize them.

I'm proposing next Monday, 9 Nov, at 7.00 UTC. If you would prefer other time
options, let's discuss.

Thanks

Renat Akhmerov
@ Mirantis Inc.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] external-network-connectivity

2015-11-04 Thread Vikas Choudhary
Hi All,

Would appreciate your views on
https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity .



-Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-04 Thread Vikas Choudhary
Hi All,

I would appreciate inputs on the following queries:
1. Are we assuming nova bm (baremetal) nodes to be the docker hosts for now?

If Not:
 - Assuming a nova vm as the docker host and ovs as the networking plugin:
This line is from the etherpad[1]: "Each driver would have an
executable that receives the name of the veth pair that has to be bound to
the overlay".
Query 1:  As per the current ovs binding proposals by Feisky[2] and
Diga[3], the vif seems to be bound to br-int on the vm. I am unable to
understand how the overlay will work. AFAICT, neutron will configure the
br-tun of the compute machines' ovs only. How will the overlay (br-tun)
configuration happen inside the vm?

 Query 2: Are we having double encapsulation (both at the vm and the
compute host)? Isn't it possible to bind the vif to the compute host's br-int?

 Query 3: I did not see subnet tags for the network plugin being
passed in any of the binding patches[2][3][4]. Don't we need that?


[1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
[2]  https://review.openstack.org/#/c/241558/
[3]  https://review.openstack.org/#/c/232948/1
[4]  https://review.openstack.org/#/c/227972/


-Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-04 Thread Fawad Khaliq
On Thu, Nov 5, 2015 at 1:28 AM, Edgar Magana 
wrote:

> Dear Colleagues,
>
> I have been part of this community from the very beginning, when in Santa
> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
> networking project.
> Neutron has become a very unique piece of code and it requires an
> approval team that will always be on top of everything; this is why I
> would like to communicate to you that I have decided to step down as Neutron Core.
>
> This is not breaking news for many of you because I shared this thought
> during the summit in Tokyo, and now it is a commitment. I want to let you
> know that I learnt a lot from you and I hope my comments and reviews never
> offended you.
>
> I will be around of course. I will continue my work on code reviews and
> coordination on the Networking Guide.
>
> Thank you all for your support and good feedback,
>

It has been great working with you since the early days. Have a nice,
well-deserved break. Enjoy!


>
> Edgar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-04 Thread Nader Lahouti
Hi,

I'm seeing the below warning message continuously:

2015-11-04 21:09:38  WARNING [oslo_messaging.server] wait() should have
been called after stop() as wait() waits for existing messages to finish
processing, it has been 692.98 seconds and stop() still has not been called

How can I avoid this warning message? Does anything need to be changed when
using the notification API with the latest oslo_messaging?

Thanks,
Nader.
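The warning means wait() is blocking because stop() was never called on the server/listener. A minimal sketch (transport URL and topic are placeholders) of the shutdown order a notification listener expects:

import time

from oslo_config import cfg
import oslo_messaging


class Endpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Handle the notification here.
        return oslo_messaging.NotificationResult.HANDLED


transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")   # placeholder URL
targets = [oslo_messaging.Target(topic="notifications")]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [Endpoint()])

listener.start()
try:
    time.sleep(60)      # stand-in for the application's real work
finally:
    listener.stop()     # stop accepting new messages first ...
    listener.wait()     # ... then wait for in-flight messages to finish.
                        # Calling wait() without stop(), or stop() without a
                        # following wait(), is what triggers the warning above.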
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] network control plane (libkv role)

2015-11-04 Thread Vikas Choudhary
Hi all,

By network control plane I specifically mean sharing network state
across docker daemons sitting on different hosts/nova VMs in multi-host
networking.

libnetwork provides flexibility here: vendors can choose whether the network
control plane is handled by libnetwork (libkv) or by the remote driver itself,
out of band. A vendor can "mute" libnetwork/libkv by advertising the remote
driver capability as "local".

"local" is our current default "capability" configuration in kuryr.

I have the following queries:
1. Does it mean Kuryr is taking responsibility for sharing network state
across docker daemons? If yes, a network created on one docker host should be
visible in "docker network ls" on the other hosts. To achieve this, I guess the
kuryr driver will need the help of some distributed data-store like consul etc.,
so that the kuryr drivers on the other hosts can create the network in docker
there. Is this correct?

2. Why can't we set the default scope to "global" and let libkv do the
network state sync work?

Thoughts?

Regards
-Vikas Choudhary
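For reference, the "capability" in question is just the remote driver's answer to libnetwork's /NetworkDriver.GetCapabilities call; a minimal Flask sketch (handler shape is illustrative, not Kuryr's actual code) of advertising one scope or the other:

from flask import Flask, jsonify

app = Flask(__name__)

# "local": libnetwork/libkv stays out of network-state sync; the driver
# (kuryr) has to make networks visible on other hosts itself, e.g. via
# Neutron or an external data store such as consul.
# "global": libnetwork uses libkv to propagate network state across
# docker daemons, so "docker network ls" shows the network everywhere.
DRIVER_SCOPE = "local"


@app.route("/NetworkDriver.GetCapabilities", methods=["POST"])
def get_capabilities():
    return jsonify({"Scope": DRIVER_SCOPE})


if __name__ == "__main__":
    app.run()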
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin] Mitaka summit meetup - a summary

2015-11-04 Thread Qiming Teng
Hi,

Thanks for joining the senlin meetup last week at Tokyo summit. We know
some of you were not able to make it for various reasons. I'm trying to
summarize things we discussed during the meetup and some preliminary
conclusions we got. Please feel free to reply to this email or find the
team on #senlin channel if you have questions/suggestions.


Short Version
-

- Senlin will focus more on two things during Mitaka cycle: 1)
  stability regarding API and engine; 2) Heat resource type support.

- Senlin engine won't do "convergence" as suggested by some people;
  however, the engine should be responsible for managing the lifecycles of
  the objects it creates on behalf of users.

- Team will revise the APIs according to the recent guidelines from
  api-wg and make the first version released as stable as possible.
  Before having a versioning scheme in place, we won't bump the API
  versions in ad-hoc ways.

- Senlin will NOT introduce complicated monitoring mechanisms into the
  engine, although we will strive to support cluster/node status checks.
  We opt to rely on whatever external monitoring services are available and
  leave that as an option for users.

- We will continue working with TOSCA team to polish policy definitions.

- We will document guidelines on how policy decisions are passed from
  one policy to another.

- We are interested in building baremetal clusters, but we will keep it
  in the pipeline until there are: 1) real requests, and 2) resources to
  get it done.

- As part of the API stabilization effort, we will generalize the
  concept of 'webhook' into 'receiver'.


Long Version (TL;DR)


* Stability vs. Features

We had some feature requests like managing container clusters, doing
smart scheduling, running scripts on a cluster of servers, supporting
clusters of non-compute resources, etc. These are all good ideas.
However, Senlin is not aiming to become a service of everything. We have
to resist the temptation of too wide a scope. There are millions
of things we could do, but the first priority at this stage is
stability. Making Senlin usable and stable before adding fancy features:
that was the consensus we reached during the meetup. We will stick to
that during the Mitaka cycle.

* Heat Resource Type Support

The team had a discussion with the Heat team during a design summit slot. The
basic vision remains the same: let senlin do autoscaling and deprecate
heat autoscaling when senlin is stable. There are quite a few details
to be figured out. The first thing we will do is land the senlin
cluster, node and profile resource types in Heat and build an
auto-scaling end-to-end solution comparable to the existing one. Then the
two teams will decide how to make the transition smooth for
both developers and users.

* Convergence or Not

There were suggestions to define a 'desired' state and an 'observed' state
for clusters and have the senlin engine do the convergence. After some
closer examination of the use case, we decided not to do it. The
'desired' state of a node is obvious (i.e. ACTIVE). The 'desired' state
of a cluster is a little bit vague. It boils down to whether we would
allow 'partial success' when creating a cluster of 1,000 nodes. Failures
are unavoidable, thus something we have to live with. However, we are
very cautious about making decisions for users. Say we have 90% of the nodes
ACTIVE in a cluster: should we label the cluster with an 'ERROR' state, a
'WARNING' state, or just 'ACTIVE'? We tend to leave this decision to
users, who are smart people too. To avoid putting too much burden on users,
we will add some defaults that can be set by operators.

There are cases where senlin engine creates objects when enforcing a
policy, e.g. the load-balancing policy. The engine should do a good job
managing the status of those objects.

* API Design

Senlin already has an API design which is documented. Before doing a
version 1.0 release, we need to hammer on it further. Most of these
revisions will be related to guidelines from the api-wg. For example, the
following changes are expected to land during Mitaka:

 - return 202 instead of 200 for asynchronous operations
 - better align with the proposed change to 'action' APIs
 - sorting keys and directions
 - returning 400 or 404 for resources not found
 - add location headers where appropriate
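
To make the first and last items above concrete, a minimal client-side sketch (paths, payloads, and status values here are placeholders, not Senlin's final API) of treating a 202 response plus a Location header as "accepted, poll until done":

import time

import requests


def call_async(session, url, body):
    """POST an asynchronous operation and poll the returned action until done."""
    resp = session.post(url, json=body)
    # First item: asynchronous operations answer 202 (accepted), not 200.
    if resp.status_code != 202:
        raise RuntimeError("expected 202, got %s" % resp.status_code)
    # Last item: the Location header points at the thing to poll.
    action_url = resp.headers["Location"]
    while True:
        action = session.get(action_url).json()
        if action.get("status") in ("SUCCEEDED", "FAILED"):   # placeholder values
            return action
        time.sleep(2)

# Usage sketch: session would be a requests.Session with auth headers set,
# and url something like "<senlin-endpoint>/v1/clusters/<id>/actions".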

Another change to the current API will be about webhooks. We got
suggestions related to receiving notifications from channels other
than webhooks, e.g. message queues or external monitoring services. To avoid
disruptive changes to the APIs in the future, we decided to generalize the
webhook APIs into 'receivers'. This is important work even if webhooks
remain the only type of receiver we support. We don't want to see webhook APIs
provided and then soon replaced/deprecated.

* Relying on External Monitoring

There used to be some interest in doing status polling on cluster
nodes so that the engine would know whether nodes are healthy or not.
This idea was rejected during 

Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Ian Wienand

On 11/05/2015 01:43 AM, Matthew Thode wrote:

python wheel repo could help maybe?


So I think we've (i.e. greghaynes) got that mostly in place, we just
got a bit side-tracked.

[1] adds mirror slaves, that build the wheels using pypi-mirror [2],
and then [3] adds the jobs.

This should give us wheels of everything in requirements.

I think this could be enhanced by using bindep [4] to install
build-requirements on the mirrors; in chat we tossed around some ideas
of making this a puppet provider, etc.

-i

[1] https://review.openstack.org/165240
[2] https://git.openstack.org/cgit/openstack-infra/pypi-mirror
[3] https://review.openstack.org/164927
[4] https://git.openstack.org/cgit/openstack-infra/bindep
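
For anyone unfamiliar with the mirror-slave step, the wheel build itself boils down to roughly the following (a sketch driving pip's public CLI from Python; the real jobs use pypi-mirror [2], so the paths and file names here are illustrative only):

import subprocess

# Build wheels for everything in the requirements list into a directory the
# mirror can publish next to the sdists. Paths and file names are illustrative.
subprocess.check_call([
    "pip", "wheel",
    "--wheel-dir", "/srv/mirror/wheels",
    "-r", "global-requirements.txt",
])

# bindep [4] would be run first on the mirror slave so the C build
# dependencies (libxml2, libffi, ...) are present for the compile step.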

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Vikas Choudhary
If we see it from the angle of the contributor whose approach turns out not to
be better than the competing one, it will be far easier for them to accept the
logic at the discussion stage rather than after weeks of tracking the review
request and addressing review comments.
On 5 Nov 2015 08:24, "Vikas Choudhary"  wrote:

> @Toni ,
>
> In scenarios where two developers, with different implementation
> approaches, are not able to reach any consensus over gerrit or ml, IMO,
> other core members can hold a vote or discussion and then the PTL should take a
> call on which one to accept and allow for implementation. Anyway, the community
> has to make a call even after the implementations, so why unnecessarily waste
> effort on implementation?
> WDYT?
> On 4 Nov 2015 19:35, "Baohua Yang"  wrote:
>
>> Sure, thanks!
>> And I suggest adding the time and channel information to the kuryr wiki page.
>>
>>
>> On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon <
>> toni+openstac...@midokura.com> wrote:
>>
>>>
>>>
>>> On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang 
>>> wrote:
>>>
 +1, Antoni!
 btw, is our weekly meeting still on meeting-4 channel?
 Not found it there yesterday.

>>>
>>> Yes, it is still on openstack-meeting-4, but this week we skipped it,
>>> since some of us were
>>> traveling and we already held the meeting on Friday. Next Monday it will
>>> be held as usual
>>> and the following week we start alternating (we have yet to get a room
>>> for that one).
>>>

 On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
 toni+openstac...@midokura.com> wrote:

> Hi Kuryrs,
>
> Last Friday, as part of the contributors meetup, we discussed also
> code contribution etiquette. Like other OpenStack projects (Magnum comes to
> mind), the etiquette for what to do when there is disagreement on the way
> to code a blueprint or fix a bug is as follows:
>
> 1.- Try to reach out so that the original implementation gets closer
> to a compromise by having the discussion in gerrit (and Mailing list if it
> requires a wider range of arguments).
> 2.- If a compromise can't be reached, feel free to make a separate
> implementation arguing well its difference, virtues and comparative
> disadvantages. We trust the whole community of reviewers to be able to
> judge which is the best implementation and I expect that often the
> reviewers will steer both submissions closer than they originally were.
> 3.- If both competing implementations get the necessary support, the
> core reviewers will take a specific decision on which to take based on
> technical merit. Important factors are:
> * conciseness,
> * simplicity,
> * loose coupling,
> * logging and error reporting,
> * test coverage,
> * extensibility (when an immediate pending and blueprinted feature
> can better be built on top of it).
> * documentation,
> * performance.
>
> It is important to remember that technical disagreement is a healthy
> thing and should be tackled with civility. If we follow the rules above, 
> it
> will lead to a healthier project and a more friendly community in which
> everybody can propose their vision with equal standing. Of course,
> sometimes there may be a feeling of duplicity, but even in the case where
> one's solution is not selected (and I can assure you I've been there 
> and
> know how it can feel awkward) it usually still enriches the discussion and
> constitutes a contribution that improves the project.
>
> Regards,
>
> Toni
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Best wishes!
 Baohua


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
_

Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Vikas Choudhary
@Toni ,

In scenarios where two developers, with different implementation
approaches, are not able to reach any consensus over gerrit or ml, IMO,
other core members can hold a vote or discussion and then the PTL should take a
call on which one to accept and allow for implementation. Anyway, the community
has to make a call even after the implementations, so why unnecessarily waste
effort on implementation?
WDYT?
On 4 Nov 2015 19:35, "Baohua Yang"  wrote:

> Sure, thanks!
> And I suggest adding the time and channel information to the kuryr wiki page.
>
>
> On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang  wrote:
>>
>>> +1, Antoni!
>>> btw, is our weekly meeting still on meeting-4 channel?
>>> Not found it there yesterday.
>>>
>>
>> Yes, it is still on openstack-meeting-4, but this week we skipped it,
>> since some of us were
>> traveling and we already held the meeting on Friday. Next Monday it will
>> be held as usual
>> and the following week we start alternating (we have yet to get a room
>> for that one).
>>
>>>
>>> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
>>> toni+openstac...@midokura.com> wrote:
>>>
 Hi Kuryrs,

 Last Friday, as part of the contributors meetup, we discussed also code
 contribution etiquette. Like other OpenStack projects (Magnum comes to
 mind), the etiquette for what to do when there is disagreement on the way
 to code a blueprint or fix a bug is as follows:

 1.- Try to reach out so that the original implementation gets closer to
 a compromise by having the discussion in gerrit (and Mailing list if it
 requires a wider range of arguments).
 2.- If a compromise can't be reached, feel free to make a separate
 implementation arguing well its difference, virtues and comparative
 disadvantages. We trust the whole community of reviewers to be able to
 judge which is the best implementation and I expect that often the
 reviewers will steer both submissions closer than they originally were.
 3.- If both competing implementations get the necessary support, the
 core reviewers will take a specific decision on which to take based on
 technical merit. Important factors are:
 * conciseness,
 * simplicity,
 * loose coupling,
 * logging and error reporting,
 * test coverage,
 * extensibility (when an immediate pending and blueprinted feature
 can better be built on top of it).
 * documentation,
 * performance.

 It is important to remember that technical disagreement is a healthy
 thing and should be tackled with civility. If we follow the rules above, it
 will lead to a healthier project and a more friendly community in which
 everybody can propose their vision with equal standing. Of course,
 sometimes there may be a feeling of duplicity, but even in the case where
 one's solution is not selected (and I can assure you I've been there and
 know how it can feel awkward) it usually still enriches the discussion and
 constitutes a contribution that improves the project.

 Regards,

 Toni


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Best wishes!
>>> Baohua
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-04 Thread Tony Breeds
On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
> Hi All,
> Around the middle of October a spec [1] was uploaded to add pagination
> support to the os-hypervisors API.  While I recognize the use case it seemed
> like adding another pagination implementation wasn't an awesome idea.
> 
> Today I see 3 more requests to add pagination to APIs [2]
> 
> Perhaps I'm overthinking it, but should we do something more strategic rather
> than scattering "add pagination here" everywhere.
> 
> It looks to me like we have at least 3 parties interested in this.
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/#/c/234038
> [2] 
> https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z

Sorry about the send without complete subject.

Yours Tony.


pgplqNDkgelpr.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][api]

2015-11-04 Thread Tony Breeds
Hi All,
Around the middle of October a spec [1] was uploaded to add pagination
support to the os-hypervisors API.  While I recognize the use case it seemed
like adding another pagination implementation wasn't an awesome idea.

Today I see 3 more requests to add pagination to APIs [2]

Perhaps I'm overthinking it, but should we do something more strategic rather
than scattering "add pagination here" everywhere.

It looks to me like we have at least 3 parties interested in this.

Yours Tony.

[1] https://review.openstack.org/#/c/234038
[2] 
https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z
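
For reference, what all three requests boil down to is the usual limit/marker paging pattern; a minimal client-side sketch (the os-hypervisors parameters and response keys are what the spec proposes, so treat them as assumptions until it merges):

def list_all_hypervisors(session, nova_url, page_size=100):
    """Walk a paginated collection with the usual limit/marker parameters."""
    items, marker = [], None
    while True:
        params = {"limit": page_size}
        if marker:
            params["marker"] = marker
        page = session.get(nova_url + "/os-hypervisors", params=params).json()
        chunk = page.get("hypervisors", [])
        items.extend(chunk)
        if len(chunk) < page_size:          # short page: nothing left to fetch
            return items
        marker = chunk[-1]["id"]            # last id becomes the next marker

# Usage sketch: session would be a requests.Session (or keystoneauth session)
# carrying the auth token; nova_url is the compute endpoint for the tenant.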


pgpQ4Af2zQVVu.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-04 Thread Anita Kuno
On 11/04/2015 07:28 PM, Edgar Magana wrote:
> Dear Colleagues,
> 
> I have been part of this community from the very beginning, when in Santa
> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
> networking project.
> Neutron has become a very unique piece of code and it requires an approval
> team that will always be on top of everything; this is why I would like
> to communicate to you that I have decided to step down as Neutron Core.
> 
> This is not breaking news for many of you because I shared this thought
> during the summit in Tokyo, and now it is a commitment. I want to let you know
> that I learnt a lot from you and I hope my comments and reviews never
> offended you.
> 
> I will be around of course. I will continue my work on code reviews and 
> coordination on the Networking Guide.
> 
> Thank you all for your support and good feedback,
> 
> Edgar
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thanks so much for all your hard work, Edgar. I really appreciate
working with you and your dedication to the task. You showed up and did what
you said you would, and I really admire that quality.

I'm glad you are taking a break; you have worked hard and need to take a
bit of time for yourself. Core reviewer responsibilities are time consuming.

I look forward to continuing to work with you on the Networking Guide.
See you at the next thing.

Thanks Edgar,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack-barbican-authenticate-keystone-barbican-command

2015-11-04 Thread Dave McCowan (dmccowan)
Hi Arif--
Maybe using the OpenStack client would be easier for you.  It will take 
care of authenticating with Keystone, setting the HTTP headers, and providing 
reasonable defaults.
It looks like you have installed OpenStack with DevStack.  If this is the 
case:

$ cd ~/devstack
$ source openrc admin admin
$ openstack secret store -p "super secret data"
# An HREF is returned in the response when the secret has been stored
$ openstack secret get   -p
# Your secret is returned

   Drop by our IRC channel at #openstack-barbican on freenode if you have more 
questions, or if this suggestion doesn't work with your deployment.
--Dave

From: OpenStack Mailing List Archive <cor...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, October 26, 2015 at 1:46 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] 
openstack-barbican-authenticate-keystone-barbican-command

Link: https://openstack.nimeyo.com/62868/?show=63238#c63238
From: marif82 <mari...@gmail.com>


Hi Dave,

Thanks for your response.
I am a beginner in OpenStack, so I didn't know how to get a Keystone token; I
searched and found "admin_token = a682f596-76f3-11e3-b3b2-e716f9080d50" in the
keystone.conf file. As you suggested, I have removed the project id from the
curl command and set the X-Auth-Token header as per your suggestion, and I am
still getting the same error, please see below:

bash-4.2$ curl -X POST -H 'content-type:application/json' -H 
'X-Auth-Token:a682f59676f311e3b3b2e716f9080d50' -H 'X-Project-Id:12345' -d 
'{"payload": "my-secret-here", "payloadcontenttype": "text/plain"}' 
http://localhost:9311/v1/secrets -v
* About to connect() to localhost port 9311 
(#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9311 
(#0)

POST /v1/secrets HTTP/1.1
User-Agent: curl/7.29.0
Host: localhost:9311
Accept: */*
content-type:application/json
X-Auth-Token:a682f59676f311e3b3b2e716f9080d50
X-Project-Id:12345
Content-Length: 67

* upload completely sent off: 67 out of 67 bytes
< HTTP/1.1 401 Unauthorized
< Content-Type: text/html; charset=UTF-8
< Content-Length: 23
< WWW-Authenticate: Keystone uri='http://localhost:35357'
< Connection: close
<
* Closing connection 0
Authentication requiredbash-4.2$

Please help me.

Regards,
Arif
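
Since the token above comes from pasting keystone.conf's admin_token rather than from a real token request, a minimal sketch of fetching a proper token with keystoneauth1 and repeating the secret-store call (URLs, credentials, and the 9311 endpoint are DevStack defaults and may differ in your deployment):

import requests
from keystoneauth1.identity import v3
from keystoneauth1.session import Session

# DevStack-style credentials and endpoints; replace with your own.
auth = v3.Password(auth_url="http://localhost:5000/v3",
                   username="admin", password="secret",
                   project_name="admin",
                   user_domain_id="default", project_domain_id="default")
sess = Session(auth=auth)
token = sess.get_token()        # a real, scoped Keystone token (not admin_token)

resp = requests.post(
    "http://localhost:9311/v1/secrets",
    headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    json={"payload": "my-secret-here", "payload_content_type": "text/plain"},
)
print(resp.status_code, resp.json())   # expect 201 and a secret_ref on success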
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Joshua Harlow

Shraddha Pandhe wrote:



On Wed, Nov 4, 2015 at 3:12 PM, Kevin Benton <blak...@gmail.com> wrote:

>If we add our own database for internal stuff, we go back to the
same problem of allowing bad design.

I'm not sure I understand what you are saying here. A JSON blob that
only one driver knows how to interpret is worse than a vendor table.

I am only talking about the IPAM tables here. The reference
implementation doesn't need to play with JSON blob at all. Rather I
would say, it shouldn't. It can be left up to the vendors/users to
manage that blob responsibly. I can create my own database and point my
IPAM module to that, but then IPAM tables are practically useless for
me. The only reason for suggesting the blob is flexibility, which is the
main reason for pluggability of IPAM.

They both are specific to one driver but at least with a vendor
table you can have DB migrations, integrity, column queries, etc.
Additionally, the vendor table with extra features exposed via an
API extension makes it more clear to the API caller what is vendor
specific.


I agree that that's a huge advantage of having a db. But sometimes, it
may not be absolutely necessary to have an extra DB.

e.g. For multiple gateways support, a separate database would probably
add more overhead than required. All I want is to be able to fetch those
IPs.

The user can take a responsible decision whether to use the blob or the
database depending on the requirement, if they have the flexibility.

Can you elaborate what you mean by bad design?

When we are working on internal features, we have to follow different
timelines. Having an arbitrary blob can sometimes make us use that by
default, especially under pressing deadlines, instead of consulting with
broader audience and finding the right solution.


Just my 2 cents, and I know this since I'm on the team Shraddha is on, 
but the above isn't really that great of an excuse for having/adding 
arbitrary blob(s); thinking long-term and figuring out what is really 
required (and perhaps that ends up being a structured format vs a json 
blob) is usually the better way of dealing with these types of issues 
(knowing full well that it is not always possible).


Everyone in every company has different timelines and that (IMHO) 
shouldn't be a 'way out' of consulting with a broader audience and 
finding the right solution...




On Nov 4, 2015 3:58 PM, "Shraddha Pandhe"
<spandhe.openst...@gmail.com>
wrote:



On Wed, Nov 4, 2015 at 1:38 PM, Armando M. <arma...@gmail.com> wrote:



On 4 November 2015 at 13:21, Shraddha Pandhe
<spandhe.openst...@gmail.com> wrote:

Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary
JSON blobs will make IPAM much more powerful. Some other
projects already do things like this.

e.g. In Ironic, node has driver_info, which is JSON. it
also has an 'extras' arbitrary JSON field. This allows
us to put any information in there that we think is
important for us.


I personally feel that relying on json blobs is not only
dangerously affecting portability, but it also causes us to bloat
the business logic and forces us to be less
efficient when querying/filtering data.


Most importantly though, I feel it's like abdicating our
responsibility to do a good design job.

How does it affect portability?

I don't think it forces us to do anything. 'Allows'? Maybe. But
that can be solved. Before making any design decisions for
internal feature-requests, we should first check with the
community if it's a wider use-case. If it is a wider use-case, we
should collaborate and fix it upstream the right way.

I feel that it's impossible for the community to know all the
use-cases. Even if they knew, it would be impossible to
incorporate all of them. I filed a bug few months ago about
multiple gateway support for subnets.

https://bugs.launchpad.net/neutron/+bug/1464361
It was marked as 'Wont fix' because nobody else had this
use-case. Adding and maintaining a patch to support this is
super risky as it breaks the APIs. A JSON blob would have helped
me here.

I have another use-case. For multi-ip support for Ironic, we
want to divide the IP allocation ranges into two: Static IPs and
extra IPs. The static IPs are pre-configured IPs for Ironic
inventory whereas extra IPs are the multi-ips. Nobody else in
the community has this use-case.

If we add our own database for internal stuff, we go back to the
same problem of allowing bad design.

Ultimately, we should be able to identify how to

[openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-04 Thread Edgar Magana
Dear Colleagues,

I have been part of this community from the very beginning, when in Santa Clara, 
CA back in 2011 a bunch of us crazy people decided to work on this networking 
project.
Neutron has become a very unique piece of code and it requires an approval 
team that will always be on top of everything; this is why I would like to 
communicate to you that I have decided to step down as Neutron Core.

This is not breaking news for many of you because I shared this thought 
during the summit in Tokyo, and now it is a commitment. I want to let you know 
that I learnt a lot from you and I hope my comments and reviews never offended 
you.

I will be around of course. I will continue my work on code reviews and 
coordination on the Networking Guide.

Thank you all for your support and good feedback,

Edgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Robert Collins
On 5 November 2015 at 13:06, Boris Pavlovic  wrote:
> Robert,
>
> I don't have the exact numbers, but during real testing of real
> deployments I saw the impact of polling resources; this is one of the reasons
> why we had to add a quite big sleep() during polling in Rally, to reduce the
> amount of GET requests and avoid DDoSing OpenStack.
>
> In any case it doesn't seem like a hard task to collect the numbers.

Please do!

But for clarity - in case the sub-thread wasn't clear - I was talking
about the numbers for a websocket based push thing, not polling.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Gabriel Bezerra

On 04.11.2015 17:19, Jim Rollenhagen wrote:

On Wed, Nov 04, 2015 at 02:55:49PM -0500, Sean Dague wrote:
On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
> On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
>> On 04.11.2015 11:32, Jim Rollenhagen wrote:
>>> On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
>>> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
 Hi,

 The change in https://review.openstack.org/237122 touches a feature from
 ironic that has not been released in any tag yet.

 At first, we from the team who has written the patch thought that, as it
 has not been part of any release, we could do backwards incompatible
 changes on that part of the code. As it turned out from discussing with
 the community, ironic commits to keeping the master branch backwards
 compatible and a deprecation process is needed in that case.

 That stated, the question at hand is: How long should this deprecation
 process last?

 This spec specifies the deprecation policy we should follow:
 
https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst


 As from its excerpt below, the minimum obsolescence period must be
 max(next_release, 3 months).

 """
 Based on that data, an obsolescence date will be set. At the very
 minimum the feature (or API, or configuration option) should be marked
 deprecated (and still be supported) in the next stable release branch,
 and for at least three months linear time. For example, a feature
 deprecated in November 2015 should still appear in the Mitaka release
 and stable/mitaka stable branch and cannot be removed before the
 beginning of the N development cycle in April 2016. A feature deprecated
 in March 2016 should still appear in the Mitaka release and
 stable/mitaka stable branch, and cannot be removed before June 2016.
 """

 This spec, however, only covers released and/or tagged code.

 tl;dr:

 How should we proceed regarding code/features/configs/APIs that have not
 even been tagged yet?

 Isn't waiting for the next OpenStack release in this case too long?
 Otherwise, we are going to have features/configs/APIs/etc. that are
 deprecated from their very first tag/release.

 How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
 months? max(next_tag, 3 months)?
>>>
>>> -1
>>>
>>> The reason the wording is that way is because lots of people deploy
>>> OpenStack services in a continuous deployment model, from the master
>>> source
>>> branches (sometimes minus X number of commits as these deployers run the
>>> code through their test platforms).
>>>
>>> Not everyone uses tagged releases, and OpenStack as a community has
>>> committed (pun intended) to serving these continuous deployment scenarios.
>>>
>>> Right, so I asked Gabriel to send this because it's an odd case, and I'd
>>> like to clear up the governance doc on this, since it doesn't seem to
>>> say much about code that was never released.
>>>
>>> The rule is a cycle boundary *and* at least 3 months. However, in this
>>> case, the code was never in a release at all, much less a stable
>>> release. So looking at the two types of deployers:
>>>
>>> 1) CD from trunk: 3 months is fine, we do that, done.
>>>
>>> 2) Deploying stable releases: if we only wait three months and not a
>>> cycle boundary, they'll never see it. If we do wait for a cycle
>>> boundary, we're pushing deprecated code to them for (seemingly to me) no
>>> benefit.
>>>
>>> So, it makes sense to me to not introduce the cycle boundary thing in
>>> this case. But there is value in keeping the rule simple, and if we want
>>> this one to pass a cycle boundary to optimize for that, I'm okay with
>>> that too. :)
>>>
>>> (Side note: there's actually a third type of deployer for Ironic; one
>>> that deploys intermediate releases. I think if we give them at least one
>>> release and three months, they're okay, so the general standard
>>> deprecation rule covers them.)
>>>
>>> // jim
>>
>> So, summarizing that:
>>
>> * untagged/master: 3 months
>>
>> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
>>
>> * stable release: max(next release, 3 months)
>>
>> Is it correct?
>
> No, my proposal is that, but s/max/AND/.
>
> This also needs buyoff from other folks in the community, and an update
> to the document in the governance repo which requires TC approval.
>
> For now we must assume a cycle boundary and three months, and/or hold off on
> the patch until this is decided.

The AND version of this seems to respect the spirit of the original
intent. The 3-month window was designed to push back a little on
last-minute deprecations for a release that we then deleted the second
master landed, which looked very different for stable-release vs.
CD-consuming folks.

The intermediate rel

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
Robert,

I don't have the exact numbers, but during real testing of real
deployments I saw the impact of polling resources; this is one of the reasons
why we had to add a quite big sleep() during polling in Rally, to reduce the
amount of GET requests and avoid DDoSing OpenStack.

In any case it doesn't seem like a hard task to collect the numbers.

Best regards,
Boris Pavlovic

On Thu, Nov 5, 2015 at 3:56 AM, Robert Collins 
wrote:

> On 5 November 2015 at 04:42, Sean Dague  wrote:
> > On 11/04/2015 10:13 AM, John Garbutt wrote:
>
> > I think longer term we probably need a dedicated event service in
> > OpenStack. A few of us actually had an informal conversation about this
> > during the Nova notifications session to figure out if there was a way
> > to optimize the Searchlight path. Nearly everyone wants websockets,
> > which is good. The problem is, that means you've got to anticipate
> > 10,000+ open websockets as soon as we expose this. Which means the stack
> > to deliver that sanely isn't just a bit of python code, it's also the
> > highly optimized server underneath.
>
> So any decent epoll implementation should let us hit that without a
> super optimised server - eventlet being in that category. I totally
> get that we're going to expect thundering herds, but websockets isn't
> new and the stacks we have - apache, eventlet - have been around long
> enough to adjust to the rather different scaling pattern.
>
> So - let's not panic, get a proof of concept up somewhere and then run
> an actual baseline test. If that's shockingly bad *then* let's panic.
>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
On Wed, Nov 4, 2015 at 3:12 PM, Kevin Benton  wrote:

> >If we add our own database for internal stuff, we go back to the same
> problem of allowing bad design.
>
> I'm not sure I understand what you are saying here. A JSON blob that only
> one driver knows how to interpret is worse than a vendor table.
>
I am only talking about the IPAM tables here. The reference implementation
doesn't need to play with JSON blob at all. Rather I would say, it
shouldn't. It can be left up to the vendors/users to manage that blob
responsibly. I can create my own database and point my IPAM module to that,
but then IPAM tables are practically useless for me. The only reason for
suggesting the blob is flexibility, which is the main reason for
pluggability of IPAM.


> They both are specific to one driver but at least with a vendor table you
> can have DB migrations, integrity, column queries, etc. Additionally, the
> vendor table with extra features exposed via an API extension makes it more
> clear to the API caller what is vendor specific.
>

I agree that that's a huge advantage of having a db. But sometimes, it may
not be absolutely necessary to have an extra DB.

e.g. For multiple gateways support, a separate database would probably add
more overhead than required. All I want is to be able to fetch those IPs.

The user can take a responsible decision whether to use the blob or the
database depending on the requirement, if they have the flexibility.

Can you elaborate what you mean by bad design?
>
When we are working on internal features, we have to follow different
timelines. Having an arbitrary blob can sometimes make us use that by
default, especially under pressing deadlines, instead of consulting with
broader audience and finding the right solution.
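
To make the trade-off above concrete, a minimal SQLAlchemy sketch (table and column names are hypothetical, not Neutron's schema) of the two options being debated, an opaque JSON column versus a structured vendor table:

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


# Option 1: opaque blob. Maximally flexible (multiple gateways, static vs
# extra IP ranges, ...), but invisible to DB migrations, integrity checks
# and column-level queries; only the driver knows how to read it.
class IpamSubnetExtras(Base):
    __tablename__ = "ipam_subnet_extras"
    subnet_id = Column(String(36), primary_key=True)
    blob = Column(Text)   # e.g. json.dumps({"gateways": ["10.0.0.1", "10.0.0.2"]})


# Option 2: vendor table. One row per extra gateway; migratable and
# queryable, but it needs an API extension to be exposed to callers.
class VendorSubnetGateway(Base):
    __tablename__ = "vendor_subnet_gateways"
    id = Column(Integer, primary_key=True)
    subnet_id = Column(String(36), ForeignKey("ipam_subnet_extras.subnet_id"))
    gateway_ip = Column(String(64))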



> On Nov 4, 2015 3:58 PM, "Shraddha Pandhe" 
> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 1:38 PM, Armando M.  wrote:
>>
>>>
>>>
>>> On 4 November 2015 at 13:21, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
 Hi Salvatore,

 Thanks for the feedback. I agree with you that arbitrary JSON blobs
 will make IPAM much more powerful. Some other projects already do things
 like this.

 e.g. In Ironic, node has driver_info, which is JSON. it also has an
 'extras' arbitrary JSON field. This allows us to put any information in
 there that we think is important for us.

>>>
>>> I personally feel that relying on json blobs is not only dangerously
>>> affecting portability, but it also causes us to bloat the business logic and
>>> forces us to be less efficient when querying/filtering data
>>>
>>
>>> Most importantly though, I feel it's like abdicating our responsibility
>>> to do a good design job.
>>>
>>
>>
>> How does it affect portability?
>>
>> I don't think it forces us to do anything. 'Allows'? Maybe. But that can
>> be solved. Before making any design decisions for internal
>> feature-requests, we should first check with the community if it's a wider
>> use-case. If it is a wider use-case, we should collaborate and fix it
>> upstream the right way.
>>
>> I feel that it's impossible for the community to know all the use-cases.
>> Even if they knew, it would be impossible to incorporate all of them. I
>> filed a bug few months ago about multiple gateway support for subnets.
>>
>> https://bugs.launchpad.net/neutron/+bug/1464361
>>
>> It was marked as 'Wont fix' because nobody else had this use-case. Adding
>> and maintaining a patch to support this is super risky as it breaks the
>> APIs. A JSON blob would have helped me here.
>>
>> I have another use-case. For multi-ip support for Ironic, we want to
>> divide the IP allocation ranges into two: Static IPs and extra IPs. The
>> static IPs are pre-configured IPs for Ironic inventory whereas extra IPs
>> are the multi-ips. Nobody else in the community has this use-case.
>>
>> If we add our own database for internal stuff, we go back to the same
>> problem of allowing bad design.
>>
>>
>>
>>> Ultimately, we should be able to identify how to model these extensions
>>> you're thinking of both conceptually and logically.
>>>
>>
>> I would agree with that. If theres an effort going on in this direction,
>> ill be happy to join. Without this, people like us with unique use-cases
>> are stuck with having patches.
>>
>>
>>
>>>
>>> I couldn't care less if other projects use it, but we ended up using in
>>> Neutron too, and since I lost this battle time and time again, all I am
>>> left with is this rant :)
>>>
>>>


 Hoping to get some positive feedback from API and DB lieutenants too.


 On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <
 salv.orla...@gmail.com> wrote:

> Arbitrary blobs are a powerful tool to circumvent limitations of an
> API, as well as other constraints which might be imposed for versioning or
> portability purposes.
> The parameters that should end up in such blob are typically specific
> for the target IPAM d

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
Sean,

This seems like a fundamental abuse of HTTP honestly. If you find
> yourself creating a ton of new headers, you are probably doing it wrong.


I totally agree on this. We shouldn't add a lot of HTTP headers. IMHO, why
not just return a string with the status in the body (in my case)?


> I think longer term we probably need a dedicated event service in
> OpenStack.


Unfortunately, this will work more slowly than the current solution with JOINs,
require more resources, and be very hard to use... (you'll need
to add one more service to OpenStack, and use one more client...)


Best regards,
Boris Pavlovic


On Thu, Nov 5, 2015 at 12:42 AM, Sean Dague  wrote:

> On 11/04/2015 10:13 AM, John Garbutt wrote:
> > On 4 November 2015 at 14:49, Jay Pipes  wrote:
> >> On 11/04/2015 09:32 AM, Sean Dague wrote:
> >>>
> >>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
> 
>  On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> >
> > Hi stackers,
> >
> > Usually such projects like Heat, Tempest, Rally, Scalar, and other
> tool
> > that works with OpenStack are working with resources (e.g. VM,
> Volumes,
> > Images, ..) in the next way:
> >
> >   >>> resource = api.resouce_do_some_stuff()
> >   >>> while api.resource_get(resource["uuid"]) != expected_status
> >   >>>sleep(a_bit)
> >
> > For each async operation they are polling and call many times
> > resource_get() which creates significant load on API and DB layers
> due
> > the nature of this request. (Usually getting full information about
> > resources produces SQL requests that contains multiple JOINs, e,g for
> > nova vm it's 6 joins).
> >
> > What if we add new API method that will just resturn resource status
> by
> > UUID? Or even just extend get request with the new argument that
> returns
> > only status?
> 
> 
>  +1
> 
>  All APIs should have an HTTP HEAD call on important resources for
>  retrieving quick status information for the resource.
> 
>  In fact, I proposed exactly this in my Compute "vNext" API proposal:
> 
>  http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> 
>  Swift's API supports HEAD for accounts:
> 
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> 
> 
>  containers:
> 
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> 
> 
>  and objects:
> 
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> 
>  So, yeah, I agree.
>  -jay
> >>>
> >>>
> >>> How would you expect this to work on "servers"? HEAD specifically
> >>> forbids returning a body, and, unlike swift, we don't return very much
> >>> information in our headers.
> >>
> >>
> >> I didn't propose doing it on a collection resource like "servers". Only
> on
> >> an entity resource like a single "server".
> >>
> >> HEAD /v2/{tenant}/servers/{uuid}
> >> HTTP/1.1 200 OK
> >> Content-Length: 1022
> >> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> >> Content-Type: application/json
> >> Date: Thu, 16 Jan 2014 21:13:19 GMT
> >> OpenStack-Compute-API-Server-VM-State: ACTIVE
> >> OpenStack-Compute-API-Server-Power-State: RUNNING
> >> OpenStack-Compute-API-Server-Task-State: NONE
> >
> > For polling, that sounds quite efficient and handy.
> >
> > For "servers" we could do this (I think there was a spec up that wanted
> this):
> >
> > HEAD /v2/{tenant}/servers
> > HTTP/1.1 200 OK
> > Content-Length: 1022
> > Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> > Content-Type: application/json
> > Date: Thu, 16 Jan 2014 21:13:19 GMT
> > OpenStack-Compute-API-Server-Count: 13
>
> This seems like a fundamental abuse of HTTP honestly. If you find
> yourself creating a ton of new headers, you are probably doing it wrong.
>
> I do think the near term work around is to actually use Searchlight.
> They're monitoring the notifications bus for nova, and refreshing
> resources when they see a notification which might have changed it. It
> still means that Searchlight is hitting our API more than ideal, but at
> least only one service is doing so, and if the rest hit that instead
> they'll get the resource without any db hits (it's all through an
> elastic search cluster).
>
> I think longer term we probably need a dedicated event service in
> OpenStack. A few of us actually had an informal conversation about this
> during the Nova notifications session to figure out if there was a way
> to optimize the Searchlight path. Nearly everyone wants websockets,
> which is good. The problem is, that means you've got to anticipate
> 10,000+ open websockets as soon as we expose this. Which means the stack
> to deliver that sanely isn't just a bit of python code, it's also the
> highly optimized server underneath.
>
> So, I feel like with Searchlight we've got a work around that's more
> efficient than we'r

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
John,

> Our resources are not. We've also had specific requests to prevent
> > header bloat because it impacts the HTTP caching systems. Also, it's
> > pretty clear that headers are really not where you want to put volatile
> > information, which this is.
> Hmm, you do make a good point about caching.



Caching is useful only in cases where you would like to return the same
data many times.
In our case we are interested in the latest state of a resource, and such
things can't be cached.


> I think we should step back here and figure out what the actual problem
> > is, and what ways we might go about solving it. This has jumped directly
> > to a point in time optimized fast poll loop. It will shave a few cycles
> > off right now on our current implementation, but will still be orders of
> > magnitude more costly that consuming the Nova notifications if the only
> > thing that is cared about is task state transitions. And it's an API
> > change we have to live with largely *forever* so short term optimization
> > is not what we want to go for.
> I do agree with that.


The thing here is that we have to have an async API, because we have
long-running operations.
Basically there are 3 approaches to learning that an operation is done:
1) pub/sub
2) polling the resource status
3) long-polling requests

All approaches have pros and cons; however, the "actual" problem will stay
the same and you can't fix that.


Best regards,
Boris Pavlovic
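
For reference, a minimal sketch of approach 2, which is roughly what Rally-like tools do today (the client object and status values are placeholders); the sleep interval is exactly the knob mentioned earlier in the thread:

import time


def wait_for_status(client, resource_id, expected="ACTIVE",
                    interval=5.0, timeout=600):
    """Poll a resource until it reaches the expected status or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        # Each iteration is a full GET of the resource (JOIN-heavy server
        # side) even though only the status field is needed, which is the
        # cost this thread is about.
        resource = client.get(resource_id)
        if resource["status"] == expected:
            return resource
        if resource["status"] == "ERROR":
            raise RuntimeError("resource %s went to ERROR" % resource_id)
        time.sleep(interval)   # the knob tuned to avoid hammering the API
    raise RuntimeError("timed out waiting for %s" % resource_id)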

On Thu, Nov 5, 2015 at 12:18 AM, John Garbutt  wrote:

> On 4 November 2015 at 15:00, Sean Dague  wrote:
> > On 11/04/2015 09:49 AM, Jay Pipes wrote:
> >> On 11/04/2015 09:32 AM, Sean Dague wrote:
> >>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
>  On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> > Hi stackers,
> >
> > Usually such projects like Heat, Tempest, Rally, Scalar, and other
> tool
> > that works with OpenStack are working with resources (e.g. VM,
> Volumes,
> > Images, ..) in the next way:
> >
> >   >>> resource = api.resouce_do_some_stuff()
> >   >>> while api.resource_get(resource["uuid"]) != expected_status
> >   >>>sleep(a_bit)
> >
> > For each async operation they are polling and call many times
> > resource_get() which creates significant load on API and DB layers
> due
> > the nature of this request. (Usually getting full information about
> > resources produces SQL requests that contains multiple JOINs, e,g for
> > nova vm it's 6 joins).
> >
> > What if we add new API method that will just resturn resource status
> by
> > UUID? Or even just extend get request with the new argument that
> > returns
> > only status?
> 
>  +1
> 
>  All APIs should have an HTTP HEAD call on important resources for
>  retrieving quick status information for the resource.
> 
>  In fact, I proposed exactly this in my Compute "vNext" API proposal:
> 
>  http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> 
>  Swift's API supports HEAD for accounts:
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> 
> 
> 
>  containers:
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> 
> 
> 
>  and objects:
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> 
> 
>  So, yeah, I agree.
>  -jay
> >>>
> >>> How would you expect this to work on "servers"? HEAD specifically
> >>> forbids returning a body, and, unlike swift, we don't return very much
> >>> information in our headers.
> >>
> >> I didn't propose doing it on a collection resource like "servers". Only
> >> on an entity resource like a single "server".
> >>
> >> HEAD /v2/{tenant}/servers/{uuid}
> >> HTTP/1.1 200 OK
> >> Content-Length: 1022
> >> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> >> Content-Type: application/json
> >> Date: Thu, 16 Jan 2014 21:13:19 GMT
> >> OpenStack-Compute-API-Server-VM-State: ACTIVE
> >> OpenStack-Compute-API-Server-Power-State: RUNNING
> >> OpenStack-Compute-API-Server-Task-State: NONE
> >
> > Right, but these headers aren't in the normal resource. They are
> > returned in the body only.
> >
> > The point of HEAD is give me the same thing as GET without the body,
> > because I only care about the headers. Swift resources are structured in
> > a way where this information is useful.
>
> I guess we would have to add this to GET requests, for consistency,
> which feels like duplication.
>
> > Our resources are not. We've also had specific requests to prevent
> > header bloat because it impacts the HTTP caching systems. Also, it's
> > pretty clear that headers are really not where you want to put volatile
> > information, which this is.
>
> Hmm, you do make a good point about caching.
>
> > I think we should step back here and figure out what the actual problem
> > is, and what ways we might g

Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-04 Thread Armando M.
On 3 November 2015 at 08:49, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi all,
>
> currently we have a single neutron-wide stable-maint gerrit group that
> maintains all stable branches for all stadium subprojects. I believe
> that in lots of cases it would be better to have subproject members to
> run their own stable maintenance programs, leaving
> neutron-stable-maint folks to help them in non-obvious cases, and to
> periodically validate that project wide stable policies are still honore
> d.
>
> I suggest we open gate to creating subproject stable-maint teams where
> current neutron-stable-maint members feel those subprojects are ready
> for that and can be trusted to apply stable branch policies in
> consistent way.
>
> Note that I don't suggest we grant those new permissions completely
> automatically. If neutron-stable-maint team does not feel safe to give
> out those permissions to some stable branches, their feeling should be
> respected.
>
> I believe it will be beneficial both for subprojects that would be
> able to iterate on backports in more efficient way; as well as for
> neutron-stable-maint members who are often busy with other stuff, and
> often times are not the best candidates to validate technical validity
> of backports in random stadium projects anyway. It would also be in
> line with general 'open by default' attitude we seem to embrace in
> Neutron.
>
> If we decide it's the way to go, there are alternatives on how we
> implement it. For example, we can grant those subproject teams all
> permissions to merge patches; or we can leave +W votes to
> neutron-stable-maint group.
>
> I vote for opening the gates, *and* for granting +W votes where
> projects showed reasonable quality of proposed backports before; and
> leaving +W to neutron-stable-maint in those rare cases where history
> showed backports could get more attention and safety considerations
> [with expectation that those subprojects will eventually own +W votes
> as well, once quality concerns are cleared].
>
> If we indeed decide to bootstrap subproject stable-maint teams, I
> volunteer to reach the candidate teams for them to decide on initial
> lists of stable-maint members, and walk them thru stable policies.
>
> Comments?
>

It was like this in the past, then it got changed, now we're proposing of
changing it back? Will we change it back again in 6 months time? Just
wondering :)

I suppose this has to do with the larger question of what belonging to the
stadium really means. I guess this is a concept that is still shaping up,
but if the concept is here to stay, I personally believe that being part of
the stadium means adhering to a common set of practices and principles
(like those largely implemented in OpenStack) where all projects feel and
behave equally. We have evidence where a few feel that 'stable' is not a
concept worth honoring, and for that reason I am wary of relaxing this.

I suppose it could be fine to have a probation period only to grant full
rights later on, but who is going to police that? That's a job in itself.
Once the permission is granted are we ever really gonna revoke it? And what
does this mean once the damage is done?

Perhaps an alternative could be to add a selected member of each subproject
to the neutron-stable-maint, with the proviso that they are only supposed
to +2 their backports (the same way a Lieutenant is supposed to +2 their
area, and *only their area* of expertise), leaving the +2/+A to more
seasoned folks who have been doing this for a lot longer.

Would that strike a better middle ground?

Kyle, Russell and I have talked during the summit about clarifying the
meaning of the stadium. Stable backports fall into this category, and I am
glad you brought this up.

Cheers,
Armando


>
> Ihar
> -BEGIN PGP SIGNATURE-
>
> iQEcBAEBAgAGBQJWOOWkAAoJEC5aWaUY1u57sVIIALrnqvuj3t7c25DBHvywxBZV
> tCMlRY4cRCmFuVy0VXokM5DxGQ3VRwbJ4uWzuXbeaJxuVWYT2Kn8JJ+yRjdg7Kc4
> 5KXy3Xv0MdJnQgMMMgyjJxlTK4MgBKEsCzIRX/HLButxcXh3tqWAh0oc8WW3FKtm
> wWFZ/2Gmf4K9OjuGc5F3dvbhVeT23IvN+3VkobEpWxNUHHoALy31kz7ro2WMiGs7
> GHzatA2INWVbKfYo2QBnszGTp4XXaS5KFAO8+4H+HvPLxOODclevfKchOIe6jthH
> F1z4JcJNMmQrQDg1WSqAjspAlne1sqdVLX0efbvagJXb3Ju63eSLrvUjyCsZG4Q=
> =HE+y
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a new library: requestsexceptions

2015-11-04 Thread Monty Taylor

On 11/04/2015 06:30 PM, James E. Blair wrote:

Hi,

I'm pleased to announce the availability of a new micro-library named
requestsexceptions.  Now the task of convincing the requests library
not to fill up your filesystem with warnings about SSL requests has
never been easier!

Over in infra-land, we use the requests library a lot, whether it's
gertty talking to Gerrit or shade talking to OpenStack, and we love
using it.  It's a pleasure.  Except for two little things.

Requests is in the middle of a number of unfortunate standoffs.  It is
attempting to push the bar on SSL security by letting us all know when
a request is substandard in some way -- whether that is because a
certificate is missing a subject alternate name field, or the version
of Python in use is missing the latest SSL features.

This is great, but in many cases a user of requests is unable to
address any of the underlying causes of these warnings.  For example,
try as we might, public cloud providers are still using non-SAN
certificates.  And upgrading python on a system (or even the
underlying ssl module) is often out of the question.

Requests has a solution to this -- a simple recipe to disable specific
warnings when users know they are not necessary.

This is when we run into another standoff.

Requests is helpfully packaged in many GNU/Linux distributions.
However, the standard version of requests bundles the urllib3 library.
Some packagers have *unbundled* the urllib3 library from requests and
cause it to use the packaged version of urllib3.  This would be a
simple matter for the packagers and requests authors to argue about
over beer at PyCon, except if you want to disable a specific warning
rather than all warnings you need to import the specific urllib3
exceptions that requests uses.  The import path for those exceptions
will be different depending on whether urllib3 is bundled or not.

This means that in order to find a specific exception in order to
handle a requests warning, code like this must be used:

   try:
   from requests.packages.urllib3.exceptions import InsecurePlatformWarning
   except ImportError:
   try:
   from urllib3.exceptions import InsecurePlatformWarning
   except ImportError:
   InsecurePlatformWarning = None

The requestsexceptions library handles that for you so that you can
simply type:

   from requestsexceptions import InsecurePlatformWarning

We have just released requestsexceptions to pypi at version 1.1.1, and
proposed it to global requirements.  You can find it here:

   https://pypi.python.org/pypi/requestsexceptions
   https://git.openstack.org/cgit/openstack-infra/requestsexceptions


All hail the removal of non-actionable warnings from my logs!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Designate] Mitaka Summit Notes

2015-11-04 Thread Hayes, Graham
Here are my rough notes I wrote up about the Mitaka design summit, and
what designate covered during the week.

Design Summit
=

This was a much more relaxed summit for Designate. We had done a huge
amount of
work in Vancouver, and we were nailing down details and doing cross
project work.

We got a few major features discussed, and laid out our priorities for
the next cycle.

We decided on the following:

1. Nova / Neutron Integration
2. Pool Scheduler
3. Pool Configuration migration to database
4. IXFR (Incremental Zone Transfer)
5. ALIAS Record type (Allows for CNAME like records at the root of a DNS
Zone)
6. DNSSEC (this may drag on for a cycle or two)

Nova & Neutron Integration
--

This is progressing pretty well, and Miguel Lavalle has patches up for
this. He,
Kiall Mac Innes and Carl Baldwin demoed this in a session on the
Thursday. If
you are interested in the idea, it is definitely worth a watch `here`_

Pool Scheduler
--

A vital piece of the pools re-architecture that needs to be finished out.
There is no great debate about what we need, and I have taken on the task of
finishing this out.

Pool Configuration migration to database


Our current configuration file format is quite complex, and moving it to an API
allows us to iterate on it much more quickly, while reducing the complexity of
the config file. I recently had to write an Ansible play to write out this
file, and it was not fun.

Kiall had a patch up, so we should be able to continue based on that.

IXFR


There was quite a lot of discussion on how this will be implemented,
both in the
work session, and the 1/2 day session on the Friday. Tim Simmons has
stepped up
to continue the work on this.

ALIAS
-

This is quite a sought-after feature, but it is quite complex to implement.
The DNS RFCs explicitly ban this behavior, so we have to work the solution
around them. Eric Larson has been doing quite a lot of work on this in
Liberty,
and is going to continue in Mitaka.

DNSSEC
--

This is a feature that we have been looking at for a while, but we
started to plan
out our roadmap for it recently.

We (I) am allergic to storing private encryption keys in Designate, so
we had a
good conversation with Barbican about implementing a signing endpoint
that we would
post a hash to. It is now on me to drive this work for Mitaka, so we can
consume it in N.

There are some raw notes in the `etherpad`_ and I expect we will soon be
seeing
specs building out of them.

.. _etherpad:
https://etherpad.openstack.org/p/mitaka-designate-summit-roadmap
.. _here: https://www.youtube.com/watch?v=AZbiARM9FPM

Thanks for reading!

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing a new library: requestsexceptions

2015-11-04 Thread James E. Blair
Hi,

I'm pleased to announce the availability of a new micro-library named
requestsexceptions.  Now the task of convincing the requests library
not to fill up your filesystem with warnings about SSL requests has
never been easier!

Over in infra-land, we use the requests library a lot, whether it's
gertty talking to Gerrit or shade talking to OpenStack, and we love
using it.  It's a pleasure.  Except for two little things.

Requests is in the middle of a number of unfortunate standoffs.  It is
attempting to push the bar on SSL security by letting us all know when
a request is substandard in some way -- whether that is because a
certificate is missing a subject alternate name field, or the version
of Python in use is missing the latest SSL features.

This is great, but in many cases a user of requests is unable to
address any of the underlying causes of these warnings.  For example,
try as we might, public cloud providers are still using non-SAN
certificates.  And upgrading python on a system (or even the
underlying ssl module) is often out of the question.

Requests has a solution to this -- a simple recipe to disable specific
warnings when users know they are not necessary.

This is when we run into another standoff.

Requests is helpfully packaged in many GNU/Linux distributions.
However, the standard version of requests bundles the urllib3 library.
Some packagers have *unbundled* the urllib3 library from requests and
cause it to use the packaged version of urllib3.  This would be a
simple matter for the packagers and requests authors to argue about
over beer at PyCon, except if you want to disable a specific warning
rather than all warnings you need to import the specific urllib3
exceptions that requests uses.  The import path for those exceptions
will be different depending on whether urllib3 is bundled or not.

This means that in order to find a specific exception in order to
handle a requests warning, code like this must be used:

  try:
  from requests.packages.urllib3.exceptions import InsecurePlatformWarning
  except ImportError:
  try:
  from urllib3.exceptions import InsecurePlatformWarning
  except ImportError:
  InsecurePlatformWarning = None

The requestsexceptions library handles that for you so that you can
simply type:

  from requestsexceptions import InsecurePlatformWarning
  
We have just released requestsexceptions to pypi at version 1.1.1, and
proposed it to global requirements.  You can find it here:

  https://pypi.python.org/pypi/requestsexceptions
  https://git.openstack.org/cgit/openstack-infra/requestsexceptions
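
As a usage illustration (a minimal sketch; which warning you silence depends
on your environment, and InsecurePlatformWarning may be None where the
underlying class is unavailable):

    import warnings

    import requests
    from requestsexceptions import InsecurePlatformWarning

    # Silence only the warning we know we cannot act on; everything else
    # requests emits is left alone.
    if InsecurePlatformWarning is not None:
        warnings.filterwarnings("ignore", category=InsecurePlatformWarning)

    print(requests.get("https://example.com").status_code)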

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Kevin Benton
>If we add our own database for internal stuff, we go back to the same
problem of allowing bad design.

I'm not sure I understand what you are saying here. A JSON blob that only
one driver knows how to interpret is worse than a vendor table. They both
are specific to one driver but at least with a vendor table you can have DB
migrations, integrity, column queries, etc. Additionally, the vendor table
with extra features exposed via an API extension makes it more clear to the
API caller what is vendor specific.

Can you elaborate on what you mean by bad design?
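
To illustrate the difference, here is a purely hypothetical pair of models
(nothing that exists in Neutron; the table and column names are made up)
showing what a vendor table buys you over a JSON blob:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class MyVendorPoolExtras(Base):
        """Hypothetical vendor table: typed, indexable, migratable columns."""
        __tablename__ = "myvendor_ipam_pool_extras"

        allocation_pool_id = sa.Column(sa.String(36), primary_key=True)
        purpose = sa.Column(sa.String(64), nullable=False, index=True)
        rack_id = sa.Column(sa.String(64), nullable=True)

    class MyVendorPoolBlob(Base):
        """The JSON-blob alternative: opaque to the DB, no integrity checks."""
        __tablename__ = "myvendor_ipam_pool_blob"

        allocation_pool_id = sa.Column(sa.String(36), primary_key=True)
        extras = sa.Column(sa.Text)  # e.g. json.dumps({"purpose": ..., "rack_id": ...})

With the first model a driver can filter pools by purpose directly in SQL; with
the second it has to pull rows back and decode JSON in Python.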
On Nov 4, 2015 3:58 PM, "Shraddha Pandhe" 
wrote:

>
>
> On Wed, Nov 4, 2015 at 1:38 PM, Armando M.  wrote:
>
>>
>>
>> On 4 November 2015 at 13:21, Shraddha Pandhe > > wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> I personally feel that relying on json blobs is not only dangerously
>> affecting portability, but it causes us to bloat the business logic, and
>> forcing us to be doing less efficient when querying/filtering data
>>
>
>> Most importantly though, I feel it's like abdicating our responsibility
>> to do a good design job.
>>
>
>
> How does it affect portability?
>
> I don't think it forces us to do anything. 'Allows'? Maybe. But that can
> be solved. Before making any design decisions for internal
> feature-requests, we should first check with the community if its a wider
> use-case. If it is a wider use-case, we should collaborate and fix it
> upstream the right way.
>
> I feel that, its impossible for the community to know all the use-cases.
> Even if they knew, it would be impossible to incorporate all of them. I
> filed a bug few months ago about multiple gateway support for subnets.
>
> https://bugs.launchpad.net/neutron/+bug/1464361
>
> It was marked as 'Wont fix' because nobody else had this use-case. Adding
> and maintaining a patch to support this is super risky as it breaks the
> APIs. A JSON blob would have helped me here.
>
> I have another use-case. For multi-ip support for Ironic, we want to
> divide the IP allocation ranges into two: Static IPs and extra IPs. The
> static IPs are pre-configured IPs for Ironic inventory whereas extra IPs
> are the multi-ips. Nobody else in the community has this use-case.
>
> If we add our own database for internal stuff, we go back to the same
> problem of allowing bad design.
>
>
>
>> Ultimately, we should be able to identify how to model these extensions
>> you're thinking of both conceptually and logically.
>>
>
> I would agree with that. If theres an effort going on in this direction,
> ill be happy to join. Without this, people like us with unique use-cases
> are stuck with having patches.
>
>
>
>>
>> I couldn't care less if other projects use it, but we ended up using in
>> Neutron too, and since I lost this battle time and time again, all I am
>> left with is this rant :)
>>
>>
>>>
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <
>>> salv.orla...@gmail.com> wrote:
>>>
 Arbitrary blobs are a powerful tools to circumvent limitations of an
 API, as well as other constraints which might be imposed for versioning or
 portability purposes.
 The parameters that should end up in such blob are typically specific
 for the target IPAM driver (to an extent they might even identify a
 specific driver to use), and therefore an API consumer who knows what
 backend is performing IPAM can surely leverage it.

 Therefore this would make a lot of sense, assuming API portability and
 not leaking backend details are not a concern.
 The Neutron team API & DB lieutenants will be able to provide more
 input on this regard.

 In this case other approaches such as a vendor specific extension are
 not a solution - assuming your granularity level is the allocation pool;
 indeed allocation pools are not first-class neutron resources, and it is
 not therefore possible to have APIs which associate vendor specific
 properties to allocation pools.

 Salvatore

 On 4 November 2015 at 21:46, Shraddha Pandhe <
 spandhe.openst...@gmail.com> wrote:

> Hi folks,
>
> I have a small question/suggestion about IPAM.
>
> With IPAM, we are allowing users to have their own IPAM drivers so
> that they can manage IP allocation. The problem is, the new ipam tables in
> the database have the same columns as the old tables. So, as a user, if I
> want to have my own logic for ip allocation, I can't actually get any help
> from the database. Whereas, if we had an arbitrar

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
On Wed, Nov 4, 2015 at 1:38 PM, Armando M.  wrote:

>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe 
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> I personally feel that relying on json blobs is not only dangerously
> affecting portability, but it causes us to bloat the business logic, and
> forcing us to be doing less efficient when querying/filtering data
>

> Most importantly though, I feel it's like abdicating our responsibility to
> do a good design job.
>


How does it affect portability?

I don't think it forces us to do anything. 'Allows'? Maybe. But that can be
solved. Before making any design decisions for internal feature requests,
we should first check with the community if it's a wider use case. If it is
a wider use case, we should collaborate and fix it upstream the right way.

I feel that it's impossible for the community to know all the use cases.
Even if they knew, it would be impossible to incorporate all of them. I
filed a bug a few months ago about multiple gateway support for subnets.

https://bugs.launchpad.net/neutron/+bug/1464361

It was marked as 'Won't Fix' because nobody else had this use case. Adding
and maintaining a patch to support this is super risky as it breaks the
APIs. A JSON blob would have helped me here.

I have another use-case. For multi-ip support for Ironic, we want to divide
the IP allocation ranges into two: Static IPs and extra IPs. The static IPs
are pre-configured IPs for Ironic inventory whereas extra IPs are the
multi-ips. Nobody else in the community has this use-case.

If we add our own database for internal stuff, we go back to the same
problem of allowing bad design.



> Ultimately, we should be able to identify how to model these extensions
> you're thinking of both conceptually and logically.
>

I would agree with that. If there's an effort going on in this direction,
I'll be happy to join. Without this, people like us with unique use cases
are stuck with having to carry patches.



>
> I couldn't care less if other projects use it, but we ended up using in
> Neutron too, and since I lost this battle time and time again, all I am
> left with is this rant :)
>
>
>>
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando > > wrote:
>>
>>> Arbitrary blobs are a powerful tools to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for versioning or
>>> portability purposes.
>>> The parameters that should end up in such blob are typically specific
>>> for the target IPAM driver (to an extent they might even identify a
>>> specific driver to use), and therefore an API consumer who knows what
>>> backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability and
>>> not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more input
>>> on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension are
>>> not a solution - assuming your granularity level is the allocation pool;
>>> indeed allocation pools are not first-class neutron resources, and it is
>>> not therefore possible to have APIs which associate vendor specific
>>> properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
 Hi folks,

 I have a small question/suggestion about IPAM.

 With IPAM, we are allowing users to have their own IPAM drivers so that
 they can manage IP allocation. The problem is, the new ipam tables in the
 database have the same columns as the old tables. So, as a user, if I want
 to have my own logic for ip allocation, I can't actually get any help from
 the database. Whereas, if we had an arbitrary json blob in the ipam tables,
 I could put any useful information/tags there, that can help me for
 allocation.

 Does this make sense?

 e.g. If I want to create multiple allocation pools in a subnet and use
 them for different purposes, I would need some sort of tag for each
 allocation pool for identification. Right now, there is no scope for doing
 something like that.

 Any thoughts? If there are any other way to solve the problem, please
 let me know





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://

Re: [openstack-dev] [puppet] acceptance: run WSGI for API services

2015-11-04 Thread Sergii Golovatiuk
Hi,

mod_wsgi has some limitations when it comes to ordering WSGI services. Reloading
and restarting processes is well documented at [1], but establishing a start
order is a problem. There are several options:
1. Use uwsgi instead of mod_wsgi
2. Use several apache instances (one instance for one service)
3. Ignore ordering as processes start quite fast

[1] https://code.google.com/p/modwsgi/wiki/ReloadingSourceCode

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 4, 2015 at 4:31 PM, Jason Guiditta  wrote:

> On 14/08/15 09:45 -0400, Emilien Macchi wrote:
>
>> So far we have WSGI support for puppet-keystone & pupper-ceilometer.
>> I'm currently working on other components to easily deploy OpenStack
>> running API services using apache/wsgi instead of eventlet.
>>
>> I would like to propose some change in our beaker tests:
>>
>> stable/kilo:
>> * puppet-{ceilometer,keystone}: test both cases so we validate the
>> upgrade with beaker
>> * puppet-*: no wsgi support now, but eventually could be backported from
>> master (liberty) once pushed.
>>
>> master (future stable/liberty):
>> * puppet-{ceilometer,keystone}: keep only WSGI scenario
>> * puppet-*: push WSGI support in manifests, test them in beaker,
>> eventually backport them to stable/kilo, and if on time (before
>> stable/libery), drop non-WSGI scenario.
>>
>> The goal here is to:
>> * test upgrade from non-WSGI to WSGI setup in stable/kilo for a maximum
>> of modules
>> * keep WSGI scenario only for Liberty
>>
>> Thoughts?
>> --
>> Emilien Macchi
>>
>> Sorry for the late reply, but I am wondering if anyone knows how we
> (via puppet, pacemaker, or whatever else - even the services
> themselves, within apache) would handle start order if all these
> services become wsgi apps running under apache?  In other words, for
> an HA deployment, as an example, we typically set ceilometer-central
> to start _after_ keystone.  If they are both in apache, how could this
> be done?  Is it truly not needed? If not, is this something new, or
> have those of us working on deployments with the pacemaker
> architecture been misinformed all this time?
>
> -j
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove][release] python-troveclient 1.4.0 release

2015-11-04 Thread Craig Vyvial
Hello everyone,

We have released the 1.4.0 version of the python-troveclient.

In Liberty, Trove added more datastores that support clustering, but the
client was missing attributes to allow you to set the network and
availability zone for each of the instances in the cluster. This
troveclient release adds the az and nic parameters.

Thanks,
Craig Vyvial


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Fox, Kevin M
To clarify that statement a little more,

Speaking only for myself as an op, I don't want to support yet another
snowflake in a sea of snowflakes, one that works differently than all the rest,
without a very good reason.

Java has its own set of issues associated with the JVM: care-and-feeding sorts
of things. If we are to invest time/money/people in learning how to properly
maintain it, it's easier to justify if it's not just a one-off for DLM alone.

So I wouldn't go so far as to say we're vehemently opposed to java, just that
DLM on its own is probably not a strong enough feature to justify requiring
pulling in java. It's been only a very recent thing that you could convince
folks that a DLM was needed at all. So either make java optional, or find some
other use cases that need java badly enough that you can make java a required
component. I suspect some day searchlight might be compelling enough for that,
but not today.

As for the default, the default should be a good reference. If most sites would
run with etcd or something else since java isn't needed, then don't default
zookeeper on.

Thanks,
Kevin 


From: Ed Leafe [e...@leafe.com]
Sent: Wednesday, November 04, 2015 12:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
discussion @ the summit

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>
> Here's a Devstack review for zookeeper in support of this initiative:
>
> https://review.openstack.org/241040
>
> Thanks,
> Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2015-11-04 14:13:33 -0800:
> Agreed, it'd be nice to have an 'audit' of the various projects configs 
> and try to categorize which ones should be reloadable (and the priority 
> to make it reloadable) and which ones are service discovery configs (and 
> probably shouldn't be in config in the first place) and which ones are 
> nice to haves to be configurable... (like rpc_response_timeout).
> 
> The side-effects of making a few things configurable will likely cause a 
> whole bunch of other issues anyway (like how an application/library... 
> gets notified that a config has changed and it may need to re-adjust 
> itself to those new values which may include for example unloading a 
> driver, stopping a thread, starting a new driver...), so I'm thinking we 
> should start small and grow as needed (based on above prioritization).
> 
> Log levels are high on my known list.

Right, anything that is going to take significant application rework to
support should wait. The session identified a few relatively simple
options that help with debugging when they are changed, including
logging, and we should start with those.

Doug

> 
> Fox, Kevin M wrote:
> > Ah, I hadn't considered the rabbit_hosts (discovery) case. yeah, that would 
> > be a useful thing to be able to tweak live. I haven't needed that feature 
> > yet, but could see how that flexibility could come in handy.
> >
> > Thanks,
> > Kevin
> > 
> > From: Joshua Harlow [harlo...@fastmail.com]
> > Sent: Wednesday, November 04, 2015 11:34 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
> > Reconfiguration of OpenStack Services
> >
> > Along this line, thinks like the following are likely more changeable
> > (and my guess is operators would want to change them when things start
> > going badly), for example from a nova.conf that I have laying around...
> >
> > [DEFAULT]
> >
> > rabbit_hosts=...
> > rpc_response_timeout=...
> > default_notification_level=...
> > default_log_levels=...
> >
> > [glance]
> >
> > api_servers=...
> >
> > (and more)
> >
> > Some of those I think should have higher priority as being
> > reconfigurable, but I think operators should be asked what they think
> > would be useful and prioritize those.
> >
> > Some of those really are service discovery 'types' (rabbit_hosts,
> > glance/api_servers, keystone/api_servers) but fixing this is likely a
> > longer term goal (see conversations in keystone).
> >
> > Joshua Harlow wrote:
> >> gord chung wrote:
> >>> we actually had a solution implemented in Ceilometer to handle this[1].
> >>>
> >>> that said, based on the results of our survey[2], we found that most
> >>> operators *never* update configuration files after the initial setup and
> >>> if they did it was very rarely (monthly updates). the question related
> >>> to Ceilometer and its pipeline configuration file so the results might
> >>> be specific to Ceilometer. I think you should definitely query operators
> >>> before undertaking any work. the last thing you want to do is implement
> >>> a feature no one really needs/wants.
> >>>
> >>> [1]
> >>> http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
> >>>
> >>> [2]
> >>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html
> >>>
> >> So my general though on the above is yes, definitely consult operators
> >> to see if they would use this, although if a feature doesn't exist and
> >> has never existed (say outside of ceilometer) then it's sort of hard to
> >> get an accurate survey result from a group of people that have never had
> >> the feature in the first place... Either way it should be done, just to
> >> get more knowledge...
> >>
> >> I know operators (at yahoo!) want to be able to dynamically change the
> >> logging level, and that's not a monthly task, but more of an 'as-needed'
> >> one that would be very helpful when things start going badly... So
> >> perhaps the set of reloadable configuration should start out small and
> >> not encompass all the things...
> >>
> >>> On 04/11/2015 10:00 AM, Marian Horban wrote:
>  Hi guys,
> 
>  Unfortunately I haven't been on Tokio summit but I know that there was
>  discussion about dynamic reloading of configuration.
>  Etherpad refs:
>  https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
> 
> 
>  https://etherpad.openstack.org/p/mitaka-oslo-security-logging
> 
>  In this thread I want to discuss agreements reached on the summit and
>  discuss
>  implementation details.
> 
>  Some notes taken from etherpad and my remarks:
> 
>  1. "Adding "mutable" parameter for each option."
>  "Do we have an option mutable=True on CfgOpt? Yes"
>  

Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow
Agreed, it'd be nice to have an 'audit' of the various projects' configs
and try to categorize which ones should be reloadable (and the priority
to make them reloadable), which ones are service discovery configs (and
probably shouldn't be in config in the first place), and which ones are
nice-to-haves to be configurable... (like rpc_response_timeout).


The side-effects of making a few things configurable will likely cause a 
whole bunch of other issues anyway (like how an application/library... 
gets notified that a config has changed and it may need to re-adjust 
itself to those new values which may include for example unloading a 
driver, stopping a thread, starting a new driver...), so I'm thinking we 
should start small and grow as needed (based on above prioritization).


Log levels are high on my known list.
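
As a strawman for the notification side, the dispatch itself can be sketched
with nothing more than the standard library (a toy example, not oslo code;
real services would still have to decide what each hook can safely re-adjust):

    import logging
    import signal

    _reload_hooks = []

    def register_reload_hook(hook):
        """Hooks run, in registration order, whenever SIGHUP arrives."""
        _reload_hooks.append(hook)

    def _handle_sighup(signum, frame):
        for hook in _reload_hooks:
            try:
                hook()
            except Exception:
                logging.exception("reload hook %r failed", hook)

    signal.signal(signal.SIGHUP, _handle_sighup)

    def _reload_api_settings():
        # Placeholder: a real service would re-read its config here and
        # decide whether it can apply the change (resize worker pools,
        # swap drivers, ...) or has to log that the change was ignored.
        pass

    register_reload_hook(_reload_api_settings)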

Fox, Kevin M wrote:

Ah, I hadn't considered the rabbit_hosts (discovery) case. yeah, that would be 
a useful thing to be able to tweak live. I haven't needed that feature yet, but 
could see how that flexibility could come in handy.

Thanks,
Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, November 04, 2015 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
Reconfiguration of OpenStack Services

Along this line, thinks like the following are likely more changeable
(and my guess is operators would want to change them when things start
going badly), for example from a nova.conf that I have laying around...

[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those I think should have higher priority as being
reconfigurable, but I think operators should be asked what they think
would be useful and prioritize those.

Some of those really are service discovery 'types' (rabbit_hosts,
glance/api_servers, keystone/api_servers) but fixing this is likely a
longer term goal (see conversations in keystone).

Joshua Harlow wrote:

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html

[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


So my general though on the above is yes, definitely consult operators
to see if they would use this, although if a feature doesn't exist and
has never existed (say outside of ceilometer) then it's sort of hard to
get an accurate survey result from a group of people that have never had
the feature in the first place... Either way it should be done, just to
get more knowledge...

I know operators (at yahoo!) want to be able to dynamically change the
logging level, and that's not a monthly task, but more of an 'as-needed'
one that would be very helpful when things start going badly... So
perhaps the set of reloadable configuration should start out small and
not encompass all the things...


On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I haven't been on Tokio summit but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,


https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss agreements reached on the summit and
discuss
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood 'mutable' parameter must indicate whether service
contains
code responsible for reloading of this option or not.
And this parameter should be one of the arguments of cfg.Opt
constructor.
Problems:
1. Library's options.
SSL options ca_file, cert_file, key_file taken from oslo.service library
could be reloaded in nova-api so these options should be mutable...
But for some projects that don't need SSL support reloading of SSL
options
doesn't make sense. For such projects this option should be non mutable.
Problem is that oslo.service - single and there are many different
projects
which use it in different way.
The same options could be mutable and non mutable in different contexts.
2. Support of config options on some platforms.
Parameter "mutab

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Vilobh Meshram
I will be working on adding the Consul driver to Tooz [1].

-Vilobh
[1] https://blueprints.launchpad.net/python-tooz/+spec/add-consul-driver
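
For context, tooz already hides the backend behind a URL, so a Consul driver
should slot in the same way the existing ZooKeeper one does. A rough sketch of
the caller side (the consul:// URL is the part that does not exist yet):

    from tooz import coordination

    # Today: ZooKeeper. With the proposed driver, something like
    # "consul://127.0.0.1:8500" could be dropped in without touching callers.
    coordinator = coordination.get_coordinator(
        "zookeeper://127.0.0.1:2181", b"node-1")
    coordinator.start()

    lock = coordinator.get_lock(b"resize-instance-42")
    if lock.acquire(blocking=True):
        try:
            pass  # critical section guarded by the DLM
        finally:
            lock.release()

    coordinator.stop()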

On Wed, Nov 4, 2015 at 2:05 PM, Mark Voelker  wrote:

> On Nov 4, 2015, at 4:41 PM, Gregory Haynes  wrote:
> >
> > Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
> >> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
> >>> Ed Leafe wrote:
>  On Nov 3, 2015, at 6:45 AM, Davanum Srinivas
> wrote:
> > Here's a Devstack review for zookeeper in support of this initiative:
> >
> > https://review.openstack.org/241040
> >
> > Thanks,
> > Dims
> 
>  I thought that the operators at that session made it very clear that
> they would *not* run any Java applications, and that if OpenStack required
> a Java app to run, they would no longer use it.
> 
>  I like the idea of using Zookeeper as the DLM, but I don't think it
> should be set up as a default, even for devstack, given the vehement
> opposition expressed.
> 
> >>>
> >>> What should be the default then?
> >>>
> >>> As for 'vehement opposition' I didn't see that as being there, I saw a
> >>> small set of people say 'I don't want to run java or I can't run java',
> >>> some comments about requiring using oracles JVM (which isn't correct,
> >>> OpenJDK works for folks that I have asked in the zookeeper community
> and
> >>> else where) and the rest of the folks were ok with it...
> >>>
> >>> If people want a alternate driver, propose it IMHO...
> >>>
> >>
> >> The few operators who stated this position are very much appreciated
> >> for standing up and making it clear. It has helped us not step into a
> >> minefield with a native ZK driver!
> >>
> >> Consul is the most popular second choice, and should work fine for the
> >> use cases we identified. It will not be sufficient if we ever have
> >> a use case where many agents must lock many resources, since Consul
> >> does not offer a way to grant lock access in a fair manner (ZK does,
> >> and we're not aware of any others that do actually). Using Consul or
> >> etcd for this case would result in situations where lock waiters may
> >> wait _forever_, and will likely wait longer than they should at times.
> >> Hopefully we can simply avoid the need for this in OpenStack all
> together.
> >>
> >> I do _not_ think we should wait for constrained operators to scream
> >> at us about ZK to write a Consul driver. It's important enough that we
> >> should start documenting all of the issues we expect to see with Consul
> >> (it's not widely packaged, for instance) and writing a driver with its
> >> own devstack plugin.
> >>
> >> If there are Consul experts who did not make it to those sessions,
> >> it would be greatly appreciated if you can spend some time on this.
> >>
> >> What I don't want to see happen is we get into a deadlock where there's
> >> a large portion of users who can't upgrade and no driver to support
> them.
> >> So lets stay ahead of the problem, and get a set of drivers that works
> >> for everybody!
> >>
> >
> > One additional note - out of the three possible options I see for tooz
> > drivers in production (zk, consul, etcd) we currently only have drivers
> > for ZK. This means that unless new drivers are created, when we depend
> > on tooz we will be requiring folks deploy zk.
> >
> > It would be *awesome* if some folks stepped up to create and support at
> > least one of the aternate backends.
> >
> > Although I am a fan of the ZK solution, I have an old WIP patch for
> > creating an etcd driver. I would like to revive and maintain it, but I
> > would also need one more maintainer per the new rules for in tree
> > drivers…
>
> For those following along at home, said WIP etcd driver patch is here:
>
> https://review.openstack.org/#/c/151463/
>
> And said rules are at:
>
> https://review.openstack.org/#/c/240645/
>
> And FWIW, I too am personally fine with ZK as a default for devstack.
>
> At Your Service,
>
> Mark T. Voelker
>
> >
> > Cheers,
> > Greg
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Fox, Kevin M
Ah, I hadn't considered the rabbit_hosts (discovery) case. yeah, that would be 
a useful thing to be able to tweak live. I haven't needed that feature yet, but 
could see how that flexibility could come in handy.

Thanks,
Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, November 04, 2015 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
Reconfiguration of OpenStack Services

Along this line, thinks like the following are likely more changeable
(and my guess is operators would want to change them when things start
going badly), for example from a nova.conf that I have laying around...

[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those I think should have higher priority as being
reconfigurable, but I think operators should be asked what they think
would be useful and prioritize those.

Some of those really are service discovery 'types' (rabbit_hosts,
glance/api_servers, keystone/api_servers) but fixing this is likely a
longer term goal (see conversations in keystone).

Joshua Harlow wrote:
> gord chung wrote:
>> we actually had a solution implemented in Ceilometer to handle this[1].
>>
>> that said, based on the results of our survey[2], we found that most
>> operators *never* update configuration files after the initial setup and
>> if they did it was very rarely (monthly updates). the question related
>> to Ceilometer and its pipeline configuration file so the results might
>> be specific to Ceilometer. I think you should definitely query operators
>> before undertaking any work. the last thing you want to do is implement
>> a feature no one really needs/wants.
>>
>> [1]
>> http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
>>
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html
>>
>
> So my general though on the above is yes, definitely consult operators
> to see if they would use this, although if a feature doesn't exist and
> has never existed (say outside of ceilometer) then it's sort of hard to
> get an accurate survey result from a group of people that have never had
> the feature in the first place... Either way it should be done, just to
> get more knowledge...
>
> I know operators (at yahoo!) want to be able to dynamically change the
> logging level, and that's not a monthly task, but more of an 'as-needed'
> one that would be very helpful when things start going badly... So
> perhaps the set of reloadable configuration should start out small and
> not encompass all the things...
>
>>
>> On 04/11/2015 10:00 AM, Marian Horban wrote:
>>> Hi guys,
>>>
>>> Unfortunately I haven't been on Tokio summit but I know that there was
>>> discussion about dynamic reloading of configuration.
>>> Etherpad refs:
>>> https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
>>>
>>>
>>> https://etherpad.openstack.org/p/mitaka-oslo-security-logging
>>>
>>> In this thread I want to discuss agreements reached on the summit and
>>> discuss
>>> implementation details.
>>>
>>> Some notes taken from etherpad and my remarks:
>>>
>>> 1. "Adding "mutable" parameter for each option."
>>> "Do we have an option mutable=True on CfgOpt? Yes"
>>> -
>>> As I understood 'mutable' parameter must indicate whether service
>>> contains
>>> code responsible for reloading of this option or not.
>>> And this parameter should be one of the arguments of cfg.Opt
>>> constructor.
>>> Problems:
>>> 1. Library's options.
>>> SSL options ca_file, cert_file, key_file taken from oslo.service library
>>> could be reloaded in nova-api so these options should be mutable...
>>> But for some projects that don't need SSL support reloading of SSL
>>> options
>>> doesn't make sense. For such projects this option should be non mutable.
>>> Problem is that oslo.service - single and there are many different
>>> projects
>>> which use it in different way.
>>> The same options could be mutable and non mutable in different contexts.
>>> 2. Support of config options on some platforms.
>>> Parameter "mutable" could be different for different platforms. Some
>>> options
>>> make sense only for specific platforms. If we mark such options as
>>> mutable
>>> it could be misleading on some platforms.
>>> 3. Dependency of options.
>>> There are many 'workers' options(osapi_compute_workers, ec2_workers,
>>> metadata_workers, workers). These options specify number of workers for
>>> OpenStack API services.
>>> If value of the 'workers' option is greater than '1' instance of
>>> ProcessLauncher is created otherwise instance of ServiceLauncher is
>>> created.
>>> When ProcessLauncher receives SIGHUP it reloads it own configuration,
>>> grac

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Mark Voelker
On Nov 4, 2015, at 4:41 PM, Gregory Haynes  wrote:
> 
> Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
>> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
>>> Ed Leafe wrote:
 On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims
 
 I thought that the operators at that session made it very clear that they 
 would *not* run any Java applications, and that if OpenStack required a 
 Java app to run, they would no longer use it.
 
 I like the idea of using Zookeeper as the DLM, but I don't think it should 
 be set up as a default, even for devstack, given the vehement opposition 
 expressed.
 
>>> 
>>> What should be the default then?
>>> 
>>> As for 'vehement opposition' I didn't see that as being there, I saw a 
>>> small set of people say 'I don't want to run java or I can't run java', 
>>> some comments about requiring using oracles JVM (which isn't correct, 
>>> OpenJDK works for folks that I have asked in the zookeeper community and 
>>> else where) and the rest of the folks were ok with it...
>>> 
>>> If people want a alternate driver, propose it IMHO...
>>> 
>> 
>> The few operators who stated this position are very much appreciated
>> for standing up and making it clear. It has helped us not step into a
>> minefield with a native ZK driver!
>> 
>> Consul is the most popular second choice, and should work fine for the
>> use cases we identified. It will not be sufficient if we ever have
>> a use case where many agents must lock many resources, since Consul
>> does not offer a way to grant lock access in a fair manner (ZK does,
>> and we're not aware of any others that do actually). Using Consul or
>> etcd for this case would result in situations where lock waiters may
>> wait _forever_, and will likely wait longer than they should at times.
>> Hopefully we can simply avoid the need for this in OpenStack all together.
>> 
>> I do _not_ think we should wait for constrained operators to scream
>> at us about ZK to write a Consul driver. It's important enough that we
>> should start documenting all of the issues we expect to see with Consul
>> (it's not widely packaged, for instance) and writing a driver with its
>> own devstack plugin.
>> 
>> If there are Consul experts who did not make it to those sessions,
>> it would be greatly appreciated if you can spend some time on this.
>> 
>> What I don't want to see happen is we get into a deadlock where there's
>> a large portion of users who can't upgrade and no driver to support them.
>> So lets stay ahead of the problem, and get a set of drivers that works
>> for everybody!
>> 
> 
> One additional note - out of the three possible options I see for tooz
> drivers in production (zk, consul, etcd) we currently only have drivers
> for ZK. This means that unless new drivers are created, when we depend
> on tooz we will be requiring folks deploy zk.
> 
> It would be *awesome* if some folks stepped up to create and support at
> least one of the aternate backends.
> 
> Although I am a fan of the ZK solution, I have an old WIP patch for
> creating an etcd driver. I would like to revive and maintain it, but I
> would also need one more maintainer per the new rules for in tree
> drivers…

For those following along at home, said WIP etcd driver patch is here:

https://review.openstack.org/#/c/151463/

And said rules are at:

https://review.openstack.org/#/c/240645/

And FWIW, I too am personally fine with ZK as a default for devstack.

At Your Service,

Mark T. Voelker

> 
> Cheers,
> Greg
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Fox, Kevin M
As an Op, I can say the only time I've really wanted to change the config file 
and felt pain restarting a service was when I needed to adjust the logging 
level. If that one thing could be done, or it could be done in a completely 
different way (mgmt unix socket?), I think that would go a very long way.
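
For illustration, the knob itself is tiny once a process exposes any management
hook at all; this toy sketch uses only the standard library, and how the
trigger reaches the process (signal, unix socket, API) is the real design
question:

    import logging

    def set_log_level(level_name):
        """Adjust the root logger at runtime, e.g. from a SIGHUP handler
        or a management-socket command."""
        logging.getLogger().setLevel(getattr(logging, level_name.upper()))

    set_log_level("DEBUG")    # turn up verbosity while debugging
    set_log_level("WARNING")  # and back down afterwards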

Thanks,
Kevin

From: gord chung [g...@live.ca]
Sent: Wednesday, November 04, 2015 9:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
Reconfiguration of OpenStack Services

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most operators 
*never* update configuration files after the initial setup and if they did it 
was very rarely (monthly updates). the question related to Ceilometer and its 
pipeline configuration file so the results might be specific to Ceilometer. I 
think you should definitely query operators before undertaking any work. the 
last thing you want to do is implement a feature no one really needs/wants.

[1] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html

On 04/11/2015 10:00 AM, Marian Horban wrote:
Hi guys,

Unfortunately I haven't been on Tokio summit but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss agreements reached on the summit and discuss
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood 'mutable' parameter must indicate whether service contains
code responsible for reloading of this option or not.
And this parameter should be one of the arguments of cfg.Opt constructor.
Problems:
1. Library's options.
SSL options ca_file, cert_file, key_file taken from oslo.service library
could be reloaded in nova-api so these options should be mutable...
But for some projects that don't need SSL support reloading of SSL options
doesn't make sense. For such projects this option should be non mutable.
Problem is that oslo.service - single and there are many different projects
which use it in different way.
The same options could be mutable and non mutable in different contexts.
2. Support of config options on some platforms.
Parameter "mutable" could be different for different platforms. Some options
make sense only for specific platforms. If we mark such options as mutable
it could be misleading on some platforms.
3. Dependencies between options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
the OpenStack API services.
If the value of the 'workers' option is greater than 1, an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is
created. When ProcessLauncher receives SIGHUP it reloads its own
configuration, gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals 1, an instance of
ServiceLauncher is created. ServiceLauncher starts everything in a single
process, and in that case we don't get such implicit reloading (see the
sketch below).

I think that mutability of options is a complicated feature, and adding a
'mutable' parameter to the cfg.Opt constructor could just add mess.
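For reference, a rough sketch of the launcher selection described in point 3,
using the oslo.service launch() helper (the plain 'workers' option name here
is illustrative; real services use e.g. osapi_compute_workers):

    # Rough sketch of how a service ends up with ProcessLauncher vs
    # ServiceLauncher; the option name is illustrative only.
    from oslo_config import cfg
    from oslo_service import service

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('workers', default=1)])


    def start(server):
        # launch() returns a ServiceLauncher when workers is None or 1, and a
        # ProcessLauncher (which re-reads its config and respawns children on
        # SIGHUP) otherwise.
        launcher = service.launch(CONF, server, workers=CONF.workers)
        launcher.wait()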

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks to
reload config options. oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order (a minimal sketch follows
the links below).
Discussion of such an implementation was started on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.
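To make that concrete, a minimal hand-rolled sketch of the proposed hook
registry (this is not an existing oslo API, and it assumes oslo.config's
reload_config_files() helper):

    # Hand-rolled sketch of the proposal: services register reload callbacks
    # with a priority, and a SIGHUP handler runs them in order after
    # re-reading the config files. Not an existing oslo API.
    import signal

    from oslo_config import cfg

    CONF = cfg.CONF
    _reload_hooks = []


    def register_reload_hook(hook, priority=0):
        _reload_hooks.append((priority, hook))


    def _handle_sighup(signum, frame):
        CONF.reload_config_files()  # re-read configuration files from disk
        for _priority, hook in sorted(_reload_hooks, key=lambda item: item[0]):
            hook(CONF)              # let each registered hook react

    signal.signal(signal.SIGHUP, _handle_sighup)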

3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
-
Some config options can be changed through the API (for example quotas);
that's why oslo.config doesn't know the actual configuration of the service
and can't log configuration changes.

Regards, Marian Horban



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo

Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Gregory Haynes
Excerpts from Jeremy Stanley's message of 2015-11-04 21:31:58 +:
> On 2015-11-04 15:34:26 -0500 (-0500), Sean Dague wrote:
> > This seems like incorrect logic. We should test devstack can do all the
> > things on a devstack change, not on every neutron / trove / nova change.
> > I'm fine if we want to have a slow version of this for devstack testing
> > which starts from a massively stripped down state, but for the 99% of
> > patches that aren't devstack changes, this seems like overkill.
> 
> We are, however, trying to get away from preinstalling additional
> distro packages on our job workers (in favor of providing a warm
> local cache) and leaving it up to the individual projects/jobs to
> define the packages they'll need to be able to run. I'll save the
> lengthy list of whys, it's been in progress for a while and we're
> finally close to making it a reality.

++

This could be done in DIB in one of two ways:
bind mount the wheelhouse in from the build host, build an additional
image we don't use which fills up the wheelhouse, then bind mount that
into the image we upload.

OR

make a chroot inside of our image build which creates the wheelhouse,
then either bind mount it out or copy it out into the image we upload.

Either way, it's pretty nasty and non-trivial. I think the path of least
resistance for now is probably making a wheel mirror.

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Sean Dague
On 11/04/2015 03:57 PM, Joshua Harlow wrote:
> Ed Leafe wrote:
>> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>> Here's a Devstack review for zookeeper in support of this initiative:
>>>
>>> https://review.openstack.org/241040
>>>
>>> Thanks,
>>> Dims
>>
>> I thought that the operators at that session made it very clear that
>> they would *not* run any Java applications, and that if OpenStack
>> required a Java app to run, they would no longer use it.
>>
>> I like the idea of using Zookeeper as the DLM, but I don't think it
>> should be set up as a default, even for devstack, given the vehement
>> opposition expressed.
>>
> 
> What should be the default then?
> 
> As for 'vehement opposition' I didn't see that as being there, I saw a
> small set of people say 'I don't want to run java or I can't run java',
> some comments about requiring using oracles JVM (which isn't correct,
> OpenJDK works for folks that I have asked in the zookeeper community and
> else where) and the rest of the folks were ok with it...
> 
> If people want a alternate driver, propose it IMHO...

Zookeeper has previously been used by a number of projects, so I think it
makes a sensible default to start with. We even had it in the gate on the
unit test jobs for a while. We can add a plug point in devstack later, once
we see some jobs running on the zookeeper base, to work out what semantics
would make sense for plugging other things in.

Kind of like the MQ path in devstack right now. One default, and a plug
point for people trying other stuff.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Gregory Haynes
Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
> > Ed Leafe wrote:
> > > On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> > >> Here's a Devstack review for zookeeper in support of this initiative:
> > >>
> > >> https://review.openstack.org/241040
> > >>
> > >> Thanks,
> > >> Dims
> > >
> > > I thought that the operators at that session made it very clear that they 
> > > would *not* run any Java applications, and that if OpenStack required a 
> > > Java app to run, they would no longer use it.
> > >
> > > I like the idea of using Zookeeper as the DLM, but I don't think it 
> > > should be set up as a default, even for devstack, given the vehement 
> > > opposition expressed.
> > >
> > 
> > What should be the default then?
> > 
> > As for 'vehement opposition' I didn't see that as being there, I saw a 
> > small set of people say 'I don't want to run java or I can't run java', 
> > some comments about requiring using oracles JVM (which isn't correct, 
> > OpenJDK works for folks that I have asked in the zookeeper community and 
> > else where) and the rest of the folks were ok with it...
> > 
> > If people want a alternate driver, propose it IMHO...
> > 
> 
> The few operators who stated this position are very much appreciated
> for standing up and making it clear. It has helped us not step into a
> minefield with a native ZK driver!
> 
> Consul is the most popular second choice, and should work fine for the
> use cases we identified. It will not be sufficient if we ever have
> a use case where many agents must lock many resources, since Consul
> does not offer a way to grant lock access in a fair manner (ZK does,
> and we're not aware of any others that do actually). Using Consul or
> etcd for this case would result in situations where lock waiters may
> wait _forever_, and will likely wait longer than they should at times.
> Hopefully we can simply avoid the need for this in OpenStack all together.
> 
> I do _not_ think we should wait for constrained operators to scream
> at us about ZK to write a Consul driver. It's important enough that we
> should start documenting all of the issues we expect to see with Consul
> (it's not widely packaged, for instance) and writing a driver with its
> own devstack plugin.
> 
> If there are Consul experts who did not make it to those sessions,
> it would be greatly appreciated if you can spend some time on this.
> 
> What I don't want to see happen is we get into a deadlock where there's
> a large portion of users who can't upgrade and no driver to support them.
> So lets stay ahead of the problem, and get a set of drivers that works
> for everybody!
> 

One additional note - out of the three possible options I see for tooz
drivers in production (zk, consul, etcd) we currently only have drivers
for ZK. This means that unless new drivers are created, when we depend
on tooz we will be requiring folks to deploy ZK.

It would be *awesome* if some folks stepped up to create and support at
least one of the alternate backends.

Although I am a fan of the ZK solution, I have an old WIP patch for
creating an etcd driver. I would like to revive and maintain it, but I
would also need one more maintainer per the new rules for in tree
drivers...

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Armando M.
On 4 November 2015 at 13:21, Shraddha Pandhe 
wrote:

> Hi Salvatore,
>
> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
> make IPAM much more powerful. Some other projects already do things like
> this.
>
> e.g. In Ironic, node has driver_info, which is JSON. it also has an
> 'extras' arbitrary JSON field. This allows us to put any information in
> there that we think is important for us.
>

I personally feel that relying on JSON blobs not only dangerously affects
portability, but it also causes us to bloat the business logic and forces
us to be less efficient when querying/filtering data.

Most importantly though, I feel it's like abdicating our responsibility to
do a good design job. Ultimately, we should be able to identify how to
model these extensions you're thinking of both conceptually and logically.

I couldn't care less if other projects use it, but we ended up using it in
Neutron too, and since I lost this battle time and time again, all I am
left with is this rant :)


>
>
> Hoping to get some positive feedback from API and DB lieutenants too.
>
>
> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando 
> wrote:
>
>> Arbitrary blobs are a powerful tools to circumvent limitations of an API,
>> as well as other constraints which might be imposed for versioning or
>> portability purposes.
>> The parameters that should end up in such blob are typically specific for
>> the target IPAM driver (to an extent they might even identify a specific
>> driver to use), and therefore an API consumer who knows what backend is
>> performing IPAM can surely leverage it.
>>
>> Therefore this would make a lot of sense, assuming API portability and
>> not leaking backend details are not a concern.
>> The Neutron team API & DB lieutenants will be able to provide more input
>> on this regard.
>>
>> In this case other approaches such as a vendor specific extension are not
>> a solution - assuming your granularity level is the allocation pool; indeed
>> allocation pools are not first-class neutron resources, and it is not
>> therefore possible to have APIs which associate vendor specific properties
>> to allocation pools.
>>
>> Salvatore
>>
>> On 4 November 2015 at 21:46, Shraddha Pandhe > > wrote:
>>
>>> Hi folks,
>>>
>>> I have a small question/suggestion about IPAM.
>>>
>>> With IPAM, we are allowing users to have their own IPAM drivers so that
>>> they can manage IP allocation. The problem is, the new ipam tables in the
>>> database have the same columns as the old tables. So, as a user, if I want
>>> to have my own logic for ip allocation, I can't actually get any help from
>>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>>> I could put any useful information/tags there, that can help me for
>>> allocation.
>>>
>>> Does this make sense?
>>>
>>> e.g. If I want to create multiple allocation pools in a subnet and use
>>> them for different purposes, I would need some sort of tag for each
>>> allocation pool for identification. Right now, there is no scope for doing
>>> something like that.
>>>
>>> Any thoughts? If there are any other way to solve the problem, please
>>> let me know
>>>
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Jeremy Stanley
On 2015-11-04 15:34:26 -0500 (-0500), Sean Dague wrote:
> This seems like incorrect logic. We should test devstack can do all the
> things on a devstack change, not on every neutron / trove / nova change.
> I'm fine if we want to have a slow version of this for devstack testing
> which starts from a massively stripped down state, but for the 99% of
> patches that aren't devstack changes, this seems like overkill.

We are, however, trying to get away from preinstalling additional
distro packages on our job workers (in favor of providing a warm
local cache) and leaving it up to the individual projects/jobs to
define the packages they'll need to be able to run. I'll save the
lengthy list of whys; it's been in progress for a while and we're
finally close to making it a reality.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tripleo] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-04 Thread Michal Rostecki

On 11/03/2015 10:27 PM, Zane Bitter wrote:

I think we all agree that using something _like_ Kubernetes would be
extremely interesting for controller services, where you have a bunch of
heterogeneous services with scheduling constraints (HA), that may need
to be scaled out at different rates, &c. &c.

IMHO it's not interesting at all for compute nodes though, where the
scheduling is not only fixed but well-defined in advance. (It's... one
compute node per compute node. Duh.)

e.g. I could easily imagine a future containerised TripleO where the
controller services were deployed with Magnum but the compute nodes were
configured directly with Heat software deployments.

In such a scenario the fact that you can't use Kubernetes for compute
nodes diminishes its value not at all. So while I'm guessing net=host is
still a blocker (for Neutron services on the controller - although
another message in this thread suggests that K8s now supports it
anyway), I don't think pid=host needs to be since AFAICT it appears to
be required only for libvirt.

Something to think about...



One of the goals of Kolla (and of the idea of containerizing OpenStack
services in general) is to simplify upgrades. Scaling and scheduling are
obviously important points of Kolla, but they are not the only ones.


The upgrade model where images of nova-compute, the Neutron agents etc.
are built once, pushed to a registry and then pulled onto compute nodes
looks much better to me than a traditional package upgrade. It may also
decrease the probability of breaking some common dependency during upgrades.


Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][ceilometer][aodh][gnocchi] Tokyo summit roundup

2015-11-04 Thread gord chung

hi folks,

i want to start off by thanking everyone for joining the telemetry 
related discussions at the Tokyo summit -- we got some great feedback 
and ideas. similar to the last summit, here's a rundown of items that we 
talked about[1] and will be tracking in the upcoming cycle. as before, 
this is a (chaotic) brain dump and does not necessarily reflect any 
prioritisation.


note: the following is split into different sections, each representing 
a service provided under the telemetry umbrella. these projects are 
discretely managed but with tight collaboration.



-- Aodh (alarming service) --

- client - since we split alarming functionality from Ceilometer into 
its own unique service[2], aodhclient support will be added so 
ceilometerclient functionality does not become overloaded with unrelated 
alarming code.
- user interface - to improve usability, support for CRUD operations of 
alarms will be added to horizon [3]
- horizontal scaling - the existing event alarm support added in 
Liberty[4] handles a single evaluator. multiple worker support will be 
added to enable better scaling
- simplifying combination alarm - combination alarms allowed flexibility 
of reusing threshold alarms but limited the use case to AND conditions 
and added evaluation ordering complexity. these drawbacks will be 
addressed by a new composite alarm [5]
- testing - tempest and grenade plugin testing support to be added in 
addition to existing unit/functional tests



-- Ceilometer (data collection service) --

- example reference architecture - to improve the consumption of 
Ceilometer, performance study will be done to build reference 
architecture. additional example configurations will be added to enable 
easier entry based on use case.
- housekeeping - alarming and rpc functionality were deprecated in 
Kilo/Liberty[6]. to ensure a tidy code base, it's time to clean house or 
as some devs like to put it: burn it down.[*]

- rolling upgrades - document the process of upgrading
- refined polling - the polling agent(s) now exclusively poll data and 
defer processing to notification agent(s). because of this, we can 
create a more tailored polling and pipeline configuration experience.
- improved polling distribution - currently, the cache is shared within 
a process. to better enable task distribution, we should enable sharing 
the cache across processes.
- polling metadata caching - we improved the caching mechanism in 
Liberty to minimise the load caused by Ceilometer polling. further 
improvements can be made to minimise the number of calls in general.[7]
- resource caching - to improve write speed, a cache will be implemented 
in the collector to minimise unnecessary writes[8]
- batch db writing - to minimise writes, batched writing of data will be 
added to collector[9]
- componentisation part 2 - Ceilometer handles meters[10] and 
events[11]. we need to make it pluggable to offer better flexibility.
- testing - tempest and grenade plugin testing support to be added in 
addition to existing unit/functional tests. additionally, multi-node 
testing to test upgrade path.



-- Gnocchi (time-series database and resource indexing service) --

- metric archive sharding - to improve performance of very large data 
sets (ie. second-by-second granularity), we can split the archive when 
updating data to avoid transferring entire archive set.
- dynamic resource creation - currently to create a new resource type, a 
new model and migration needs to be added. we need to make this creation 
dynamic to allow for more resource types.
- proliferate gnocchiclient adoption - gnocchiclient is now available. 
to ensure consistent usage, it should be adopted in Ceilometer and Aodh.
- testing - tempest and grenade plugin testing support to be added in 
addition to existing unit/functional tests



you can sign up for work items on the etherpad[12]. as always, you're 
more than welcome to propose additional ideas on irc:#openstack-ceilometer 
(to be #openstack-telemetry) and to openstack-dev using  
[telemetry]/[ceilometer]/[aodh]/[gnocchi] in subject.


as always we will continue to work with externally managed projects[13].

[1] 
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Ceilometer
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073897.html
[3] 
https://blueprints.launchpad.net/openstack-ux/+spec/horizon-alarm-management
[4] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/event-alarm-evaluator.html

[5] https://review.openstack.org/#/c/208786/
[6] 
https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Telemetry_.28Ceilometer.29

[7] https://review.openstack.org/#/c/209799/
[8] https://review.openstack.org/#/c/203109/
[9] https://review.openstack.org/#/c/234831/
[10] http://docs.openstack.org/admin-guide-cloud/telemetry-measurements.html
[11] http://docs.openstack.org/admin-guide-cloud/telemetry-events.html
[12] https://etherpad.openstack.org/p/mitaka-telemetry-t

[openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

2015-11-04 Thread Armando M.
Hi folks,

After some consideration, I am proposing a change for the Mitaka release
cycle in relation to the mid-cycle meetup event.

My proposal is to defer the gathering to later in the release cycle [1],
and assess whether we have it or not based on the course of events in the
cycle. If we feel that a last push closer to the end will help us hit some
critical targets, then I am all in for arranging it.

Based on our latest experiences, I have not seen a strong correlation
between progress made during the cycle and progress made during the meetup,
so we might as well save us the trouble of travelling close to Christmas.

I'd like to thank Kyle, Miguel Lavalle and Doug for looking into the
logistics. We may still need their services later in the new year, but as
of now all I can say is:

Happy (distributed) hacking!

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Mitaka Design Summit Recap

2015-11-04 Thread Sean McGinnis
Cinder Mitaka Design Summit Summary

Will the Real Block Storage Service Please Stand Up
===
Should Cinder be usable outside of a full OpenStack environment?
There are several solutions out there for providing a Software
Defined Storage service with plugins for various backends. Most
of the functionality used for these is already done by Cinder.
So the question is, should Cinder try to be that ubiquitous SDS
interface?

The concern is that Cinder should either try to address this
broader use case or be left behind. Especially since there is
already a lot of overlap in functionality, and end users already
asking about it.

Some concern about doing this is whether it will be a distraction
from our core purpose - to be a solid and useful service for
providing block storage in an OpenStack cloud.

On the other hand, some folks have played around with doing this
already and found there really are only a few key issues with
being able to use Cinder without something like Keystone. Based on
this, it was decided we will spend some time looking into doing
this, but at a lower priority than our core work.

Availability Zones in Cinder

Recently it was highlighted that there are issues between AZs
used in Cinder versus AZs used in Nova. When Cinder was originally
branched out of the Nova code base, we picked up the concept of
Availability Zones, but the idea was never fully implemented and
isn't exactly what some expect it to be in its current state.

From speaking with some of the operators in the room, there were two
main desires for AZ interaction with Nova: either the AZ specified
in Nova needs to match one to one with the AZ in Cinder, or there
is no connection between the two and the Nova AZ doesn't matter on
the Cinder side.

There is currently a workaround in Cinder. If the config file
value allow_availability_zone_fallback is set to True and a
request for a new volume comes in with a Nova AZ that Cinder
does not know about, the default Cinder AZ will be used instead.
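In pseudocode, the fallback behaves roughly like this (a sketch, not the
literal Cinder code path; the option names mirror the cinder.conf settings
mentioned above):

    # Sketch of the allow_availability_zone_fallback workaround; not the
    # literal Cinder implementation.
    def pick_availability_zone(requested_az, known_azs, conf):
        if requested_az in known_azs:
            return requested_az
        if conf.allow_availability_zone_fallback:
            # Nova sent an AZ Cinder doesn't know about; quietly fall back.
            return conf.default_availability_zone
        raise ValueError("Availability zone '%s' is invalid" % requested_az)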

A few options for improving AZ support were suggested. At least for
those present, the current "dirty fix" workaround is sufficient. If
further input makes it clear that this is not enough, we can look
into one of the proposed alternatives to address those needs.

API Microversions
=
Some projects, particularly Nova and Manila, have already started
work on supporting API microversions. We plan on leveraging their
work to add support in Cinder. Scott D'Angelo has done some work
porting that framework from Manila into a spec and proof of concept
in Cinder.

API microversions would allow us to make breaking API changes while
still providing backward compatibility to clients that expect the
existing behavior. It may also allow us to remove functionality
more easily.

We still want to be restrictive about modifying the API. Even
though this will make changes slightly easier to make, they still
carry an ongoing maintenance cost, and a slightly higher one at
that, which we will want to limit as much as possible.

A great explanation of the microversions concept was written up by
Sean Dague here:

https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/

Experimental APIs
=
Building on the work with microversions, we would use that to expose
experimental APIs and make it explicit that they are experimental
only and could be removed at any time, without the normal window
provided with deprecating other features.

Although there were certainly some very valid concerns raised about
doing this, and whether it would be useful or not, general consensus
was that it would be good to support it.

After further discussion, it was pointed out that there really isn't
anything in the works that needs this right now, so it may be delayed.
The issue there being that if we wait to do it, when we actually do
need to use it for something it won't be ready to go.

Cinder Nova Interaction
===
Great joint session with some of the Nova folks. Talked through some
of the issues we've had with the interaction between Nova and Cinder
and areas where we need to improve it.

Some of the decisions were:
- Working on support for multiattach. Will delay encryption support
  until non-encrypted issues get worked out.
- Rootwrap issues with the use of os-brick. Priv-sep sounds like it
  is the better answer. Will need to wait for that to mature before
  we can switch away from rootwrap though.
- API handling changes. A lot of cases where an API call is made and
  it is assumed to succeed. Will use event notifications to report
  results back to Nova. Requires changes on both sides.
- Specs will be submitted for coordinated handling for extending
  volumes.
- Volume related Nova bugs were highlighted. Cinder team will try to
  help triage and resolve some of those.
  https://bugs.launchpad.net/nova/+bugs?field.tag=volumes

Volume Manager Locks

Covered in cross-p

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Ed Leafe
On Nov 4, 2015, at 3:17 PM, Clint Byrum  wrote:

> What I don't want to see happen is we get into a deadlock where there's
> a large portion of users who can't upgrade and no driver to support them.
> So lets stay ahead of the problem, and get a set of drivers that works
> for everybody!

I think that this is a great idea, but we also need some people familiar with 
Consul to do this work. Otherwise, ZK (and hence Java) is a de facto dependency.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary JSON blobs will
make IPAM much more powerful. Some other projects already do things like
this.

e.g. In Ironic, a node has driver_info, which is JSON. It also has an
'extras' arbitrary JSON field. This allows us to put any information in
there that we think is important for us.


Hoping to get some positive feedback from API and DB lieutenants too.


On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando 
wrote:

> Arbitrary blobs are a powerful tools to circumvent limitations of an API,
> as well as other constraints which might be imposed for versioning or
> portability purposes.
> The parameters that should end up in such blob are typically specific for
> the target IPAM driver (to an extent they might even identify a specific
> driver to use), and therefore an API consumer who knows what backend is
> performing IPAM can surely leverage it.
>
> Therefore this would make a lot of sense, assuming API portability and not
> leaking backend details are not a concern.
> The Neutron team API & DB lieutenants will be able to provide more input
> on this regard.
>
> In this case other approaches such as a vendor specific extension are not
> a solution - assuming your granularity level is the allocation pool; indeed
> allocation pools are not first-class neutron resources, and it is not
> therefore possible to have APIs which associate vendor specific properties
> to allocation pools.
>
> Salvatore
>
> On 4 November 2015 at 21:46, Shraddha Pandhe 
> wrote:
>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM, we are allowing users to have their own IPAM drivers so that
>> they can manage IP allocation. The problem is, the new ipam tables in the
>> database have the same columns as the old tables. So, as a user, if I want
>> to have my own logic for ip allocation, I can't actually get any help from
>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>> I could put any useful information/tags there, that can help me for
>> allocation.
>>
>> Does this make sense?
>>
>> e.g. If I want to create multiple allocation pools in a subnet and use
>> them for different purposes, I would need some sort of tag for each
>> allocation pool for identification. Right now, there is no scope for doing
>> something like that.
>>
>> Any thoughts? If there are any other way to solve the problem, please let
>> me know
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
> Ed Leafe wrote:
> > On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> >> Here's a Devstack review for zookeeper in support of this initiative:
> >>
> >> https://review.openstack.org/241040
> >>
> >> Thanks,
> >> Dims
> >
> > I thought that the operators at that session made it very clear that they 
> > would *not* run any Java applications, and that if OpenStack required a 
> > Java app to run, they would no longer use it.
> >
> > I like the idea of using Zookeeper as the DLM, but I don't think it should 
> > be set up as a default, even for devstack, given the vehement opposition 
> > expressed.
> >
> 
> What should be the default then?
> 
> As for 'vehement opposition' I didn't see that as being there, I saw a 
> small set of people say 'I don't want to run java or I can't run java', 
> some comments about requiring using oracles JVM (which isn't correct, 
> OpenJDK works for folks that I have asked in the zookeeper community and 
> else where) and the rest of the folks were ok with it...
> 
> If people want a alternate driver, propose it IMHO...
> 

The few operators who stated this position are very much appreciated
for standing up and making it clear. It has helped us not step into a
minefield with a native ZK driver!

Consul is the most popular second choice, and should work fine for the
use cases we identified. It will not be sufficient if we ever have
a use case where many agents must lock many resources, since Consul
does not offer a way to grant lock access in a fair manner (ZK does,
and we're not aware of any others that do actually). Using Consul or
etcd for this case would result in situations where lock waiters may
wait _forever_, and will likely wait longer than they should at times.
Hopefully we can simply avoid the need for this in OpenStack all together.

I do _not_ think we should wait for constrained operators to scream
at us about ZK to write a Consul driver. It's important enough that we
should start documenting all of the issues we expect to see with Consul
(it's not widely packaged, for instance) and writing a driver with its
own devstack plugin.

If there are Consul experts who did not make it to those sessions,
it would be greatly appreciated if you can spend some time on this.

What I don't want to see happen is we get into a deadlock where there's
a large portion of users who can't upgrade and no driver to support them.
So let's stay ahead of the problem, and get a set of drivers that works
for everybody!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Monty Taylor

On 11/04/2015 04:09 PM, Davanum Srinivas wrote:

Graham,

Agree. Hence the Tooz as the abstraction layer. Folks are welcome to
write new drivers or fix existing drivers for Tooz where needed.


Yes. This is correct. We cannot grow a hard dependency on a Java thing, but
optional dependencies are OK - and it turns out the semantics needed from
DLMs and DKVSs are sufficiently abstractable for it to make sense.


That said - the only usable tooz backend at the moment is zookeeper - so 
someone who cares about the not-Java use case will have to step up and 
write a consul backend. The main thing is that we allow that to happen 
and don't do things that would prevent such a thing from being written.


Reasons for making ZK the default are:

- It exists in tooz today
- It's easily installable in all the distros
- It has devstack support already

None of those three are true of consul, although none are terribly hard 
to achieve.
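For context, consuming the abstraction looks roughly like this (the backend
URL, member id and lock name are placeholders; swapping the URL is all an
alternate driver should require):

    # Rough sketch of taking a distributed lock through tooz; only the
    # backend URL should need to change once alternate drivers exist.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'my-service-member-id')
    coordinator.start()

    lock = coordinator.get_lock(b'my-resource-lock')
    with lock:
        # Critical section: only one member across the deployment gets here.
        print('doing exclusive work')

    coordinator.stop()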



On Wed, Nov 4, 2015 at 3:04 PM, Hayes, Graham  wrote:

On 04/11/15 20:04, Ed Leafe wrote:

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:


Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims


I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe



I got the impression that there was *some* operators that wouldn't run
java.

I do not see an issue with having ZooKeeper as the default, as long as
there is an alternate solution that also works for the operators that do
not want to use it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Robert Collins
On 5 November 2015 at 09:02, Ed Leafe  wrote:
> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>
>> Here's a Devstack review for zookeeper in support of this initiative:
>>
>> https://review.openstack.org/241040
>>
>> Thanks,
>> Dims
>
> I thought that the operators at that session made it very clear that they 
> would *not* run any Java applications, and that if OpenStack required a Java 
> app to run, they would no longer use it.
>
> I like the idea of using Zookeeper as the DLM, but I don't think it should be 
> set up as a default, even for devstack, given the vehement opposition 
> expressed.

There was no option suggested that all the operators would run happily.

Thus it doesn't matter what the 'default' is - we know only some
operators will run it.

In the session we were told that zookeeper is already used in CI jobs
for ceilometer (was this wrong?) and that's why we figured it made a
sane default for devstack.

We can always change the default later.

What is important is that folk step up and write the consul and etcd
drivers for the non-Java-happy operators to consume.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Davanum Srinivas
Graham,

Agree. Hence the Tooz as the abstraction layer. Folks are welcome to
write new drivers or fix existing drivers for Tooz where needed.

-- Dims

On Wed, Nov 4, 2015 at 3:04 PM, Hayes, Graham  wrote:
> On 04/11/15 20:04, Ed Leafe wrote:
>> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>>
>>> Here's a Devstack review for zookeeper in support of this initiative:
>>>
>>> https://review.openstack.org/241040
>>>
>>> Thanks,
>>> Dims
>>
>> I thought that the operators at that session made it very clear that they 
>> would *not* run any Java applications, and that if OpenStack required a Java 
>> app to run, they would no longer use it.
>>
>> I like the idea of using Zookeeper as the DLM, but I don't think it should 
>> be set up as a default, even for devstack, given the vehement opposition 
>> expressed.
>>
>>
>> -- Ed Leafe
>>
>
> I got the impression that there was *some* operators that wouldn't run
> java.
>
> I do not see an issue with having ZooKeeper as the default, as long as
> there is an alternate solution that also works for the operators that do
> not want to use it.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Salvatore Orlando
Arbitrary blobs are a powerful tool to circumvent limitations of an API,
as well as other constraints which might be imposed for versioning or
portability purposes.
The parameters that should end up in such blob are typically specific for
the target IPAM driver (to an extent they might even identify a specific
driver to use), and therefore an API consumer who knows what backend is
performing IPAM can surely leverage it.

Therefore this would make a lot of sense, assuming API portability and not
leaking backend details are not a concern.
The Neutron team API & DB lieutenants will be able to provide more input on
this regard.

In this case other approaches such as a vendor specific extension are not a
solution - assuming your granularity level is the allocation pool; indeed
allocation pools are not first-class neutron resources, and it is not
therefore possible to have APIs which associate vendor specific properties
to allocation pools.

Salvatore

On 4 November 2015 at 21:46, Shraddha Pandhe 
wrote:

> Hi folks,
>
> I have a small question/suggestion about IPAM.
>
> With IPAM, we are allowing users to have their own IPAM drivers so that
> they can manage IP allocation. The problem is, the new ipam tables in the
> database have the same columns as the old tables. So, as a user, if I want
> to have my own logic for ip allocation, I can't actually get any help from
> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
> I could put any useful information/tags there, that can help me for
> allocation.
>
> Does this make sense?
>
> e.g. If I want to create multiple allocation pools in a subnet and use
> them for different purposes, I would need some sort of tag for each
> allocation pool for identification. Right now, there is no scope for doing
> something like that.
>
> Any thoughts? If there are any other way to solve the problem, please let
> me know
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Hayes, Graham
On 04/11/15 20:04, Ed Leafe wrote:
> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>
>> Here's a Devstack review for zookeeper in support of this initiative:
>>
>> https://review.openstack.org/241040
>>
>> Thanks,
>> Dims
> 
> I thought that the operators at that session made it very clear that they 
> would *not* run any Java applications, and that if OpenStack required a Java 
> app to run, they would no longer use it.
> 
> I like the idea of using Zookeeper as the DLM, but I don't think it should be 
> set up as a default, even for devstack, given the vehement opposition 
> expressed.
> 
> 
> -- Ed Leafe
> 

I got the impression that there were *some* operators who wouldn't run
Java.

I do not see an issue with having ZooKeeper as the default, as long as
there is an alternate solution that also works for the operators that do
not want to use it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread John Belamaric
If you have custom data you want to keep for your driver, you should create
your own database tables to track that information. For example, the reference
driver tracks its data in its own ipam* tables.
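As a rough illustration, a driver could carry something like the model below
in its own migrations (the table and column names are made up for
illustration, not existing schema):

    # Hypothetical driver-owned table for per-allocation-pool metadata; the
    # names are made up and do not exist in Neutron today.
    import sqlalchemy as sa
    from sqlalchemy.ext import declarative

    Base = declarative.declarative_base()


    class MyIpamPoolTag(Base):
        __tablename__ = 'my_ipam_pool_tags'

        id = sa.Column(sa.String(36), primary_key=True)
        # The allocation pool this row annotates.
        allocation_pool_id = sa.Column(sa.String(36), nullable=False, index=True)
        # Arbitrary driver-specific metadata, e.g. '{"purpose": "infra"}'.
        tags = sa.Column(sa.Text(), nullable=True)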

John

On Nov 4, 2015, at 3:46 PM, Shraddha Pandhe 
mailto:spandhe.openst...@gmail.com>> wrote:

Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers so that they 
can manage IP allocation. The problem is, the new ipam tables in the database 
have the same columns as the old tables. So, as a user, if I want to have my 
own logic for ip allocation, I can't actually get any help from the database. 
Whereas, if we had an arbitrary json blob in the ipam tables, I could put any 
useful information/tags there, that can help me for allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet and use them for 
different purposes, I would need some sort of tag for each allocation pool for 
identification. Right now, there is no scope for doing something like that.

Any thoughts? If there are any other way to solve the problem, please let me 
know



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Joshua Harlow

Ed Leafe wrote:

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:

Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims


I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.



What should be the default then?

As for 'vehement opposition', I didn't see that as being there. I saw a
small set of people say 'I don't want to run Java or I can't run Java',
some comments about it requiring Oracle's JVM (which isn't correct:
OpenJDK works for the folks that I have asked in the zookeeper community
and elsewhere), and the rest of the folks were OK with it...


If people want an alternate driver, propose it IMHO...



-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers so that
they can manage IP allocation. The problem is, the new ipam tables in the
database have the same columns as the old tables. So, as a user, if I want
to have my own logic for ip allocation, I can't actually get any help from
the database. Whereas, if we had an arbitrary json blob in the ipam tables,
I could put any useful information/tags there, that can help me for
allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet and use them
for different purposes, I would need some sort of tag for each allocation
pool for identification. Right now, there is no scope for doing something
like that.
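To make the use case concrete, the kind of thing a tag-aware driver could
then do is roughly the following (purely hypothetical field names; nothing
like this exists in the current schema):

    # Purely hypothetical sketch: pick an allocation pool by a JSON tag the
    # driver stored alongside it; these fields do not exist today.
    import json


    def pick_pool(pools, wanted_purpose):
        """Return the allocation pool whose tags match the requested purpose."""
        for pool in pools:
            tags = json.loads(pool.get('extra') or '{}')
            if tags.get('purpose') == wanted_purpose:
                return pool
        return None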

Any thoughts? If there is any other way to solve the problem, please let
me know.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 03:27 PM, Clark Boylan wrote:
> On Wed, Nov 4, 2015, at 09:14 AM, Sean Dague wrote:
>> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
>>> On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
 On 11/04/2015 06:47 AM, Sean Dague wrote:
>>> [...]
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.

 python wheel repo could help maybe?
>>>
>>> That's along the lines of how I expect we'd need to solve it.
>>> Basically add a new DIB element to openstack-infra/project-config in
>>> nodepool/elements (or extend the cache-devstack element already
>>> there) to figure out which version(s) it needs to prebuild and then
>>> populate a wheelhouse which can be leveraged by the jobs running on
>>> the resulting diskimage. The test scripts in the
>>> openstack/requirements repo may already have much of this logic
>>> implemented for the purpose of testing that we can build sane wheels
>>> of all our requirements.
>>>
>>> This of course misses situations where the requirements change and
>>> the diskimages haven't been rebuilt or in jobs testing proposed
>>> changes which explicitly alter these requirements, but could be
>>> augmented by similar mechanisms in devstack itself to avoid building
>>> them more than once.
>>
>> Ok, so given that pip automatically builds a local wheel cache now when
>> it installs this... is it as simple as
>> https://review.openstack.org/#/c/241692/ ?
> It is not that simple and this change will probably need to be reverted.
> We don't install the build deps for these packages during the dib run.
> We only add them to the appropriate apt/yum caches. This means that the
> image builds will start to fail because lxml won't find libxml2-dev and
> whatever other headers packages it needs in order to link against the
> appropriate libs.
> 
> The issue here is we do our best to force devstack to do the work at run
> time to make sure that devstack-gate or our images aren't masking some
> bug or become a required part of the devstack process. This means that
> none of these packages are installed and won't be available to the pip
> install.

This seems like incorrect logic. We should test devstack can do all the
things on a devstack change, not on every neutron / trove / nova change.
I'm fine if we want to have a slow version of this for devstack testing
which starts from a massively stripped down state, but for the 99% of
patches that aren't devstack changes, this seems like overkill.

> We have already had to revert a similar change in the past and at the
> time the basic agreement was we should go back to building wheel package
> mirrors that jobs could take advantage of. That work floundered due to a
> lack of reviews, but I still think that is the correct way to solve this
> problem. Basic idea for that is to have some periodic jobs build a
> distro/arch/release specific wheel cache then rsync that over to all our
> pypi mirrors for use by the jobs.
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Clark Boylan
On Wed, Nov 4, 2015, at 09:14 AM, Sean Dague wrote:
> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> > On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> >> On 11/04/2015 06:47 AM, Sean Dague wrote:
> > [...]
> >>> Is there a nodepool cache strategy where we could pre build these? A 25%
> >>> performance win comes out the other side if there is a strategy here.
> >>
> >> python wheel repo could help maybe?
> > 
> > That's along the lines of how I expect we'd need to solve it.
> > Basically add a new DIB element to openstack-infra/project-config in
> > nodepool/elements (or extend the cache-devstack element already
> > there) to figure out which version(s) it needs to prebuild and then
> > populate a wheelhouse which can be leveraged by the jobs running on
> > the resulting diskimage. The test scripts in the
> > openstack/requirements repo may already have much of this logic
> > implemented for the purpose of testing that we can build sane wheels
> > of all our requirements.
> > 
> > This of course misses situations where the requirements change and
> > the diskimages haven't been rebuilt or in jobs testing proposed
> > changes which explicitly alter these requirements, but could be
> > augmented by similar mechanisms in devstack itself to avoid building
> > them more than once.
> 
> Ok, so given that pip automatically builds a local wheel cache now when
> it installs this... is it as simple as
> https://review.openstack.org/#/c/241692/ ?
It is not that simple and this change will probably need to be reverted.
We don't install the build deps for these packages during the dib run.
We only add them to the appropriate apt/yum caches. This means that the
image builds will start to fail because lxml won't find libxml2-dev and
whatever other header packages it needs in order to link against the
appropriate libs.

The issue here is that we do our best to force devstack to do the work at
run time, to make sure that devstack-gate or our images aren't masking some
bug or becoming a required part of the devstack process. This means that
none of these packages are installed and won't be available to the pip
install.

We have already had to revert a similar change in the past and at the
time the basic agreement was we should go back to building wheel package
mirrors that jobs could take advantage of. That work floundered due to a
lack of reviews, but I still think that is the correct way to solve this
problem. Basic idea for that is to have some periodic jobs build a
distro/arch/release specific wheel cache then rsync that over to all our
pypi mirrors for use by the jobs.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-04 Thread Javeria Khan
Thanks Igor, Alex. I guess there isn't any support for running tasks directly
on the Fuel Master node for now.

I did try moving to deployment_tasks.yaml; however, it leads to other issues
such as "/etc/fuel/plugins// does not exist" failing on
deployments.

I'm trying to move back to using the former tasks.yaml, but the
fuel-plugin-builder keeps looking for deployment_tasks.yaml now. Is there
some build source list I can remove?


--
Javeria

On Wed, Nov 4, 2015 at 12:44 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> please note that such tasks are executed inside 'mcollective' docker
> container, not on the Fuel master host system.
>
> Regards,
> Alex
>
> On Tue, Nov 3, 2015 at 10:41 PM, Igor Kalnitsky 
> wrote:
>
>> Hi Javeria,
>>
>> Try to use 'master' in 'role' field. Example:
>>
>> - role: 'master'
>>   stage: pre_deployment
>>   type: shell
>>   parameters:
>>     cmd: echo all > /tmp/plugin.all
>>     timeout: 42
>>
>> Let me know if you need additional help.
>>
>> Thanks,
>> Igor
>>
>> P.S: Since Fuel 7.0 it's recommended to use deployment_tasks.yaml
>> instead of tasks.yaml. Please see Fuel Plugins wiki page for details.
>>
>> On Tue, Nov 3, 2015 at 10:26 PM, Javeria Khan 
>> wrote:
>> > Hey everyone,
>> >
>> > I've been working on a fuel plugin and for some reason just cant figure
>> out
>> > how to run a task on the fuel master node through the tasks.yaml. Is
>> there
>> > even a role for it?
>> >
>> > Something similar to what ansible does with localhost would work.
>> >
>> > Thanks,
>> > Javeria
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 02:55:49PM -0500, Sean Dague wrote:
> On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
> > On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
> >> On 04.11.2015 11:32, Jim Rollenhagen wrote:
> >>> On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
> >>> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
>  Hi,
> 
>  The change in https://review.openstack.org/237122 touches a feature from
>  ironic that has not been released in any tag yet.
> 
>  At first, we from the team who has written the patch thought that, as it
>  has not been part of any release, we could do backwards incompatible
>  changes on that part of the code. As it turned out from discussing with
>  the community, ironic commits to keeping the master branch backwards
>  compatible and a deprecation process is needed in that case.
> 
>  That stated, the question at hand is: How long should this deprecation
>  process last?
> 
>  This spec specifies the deprecation policy we should follow:
>  https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
> 
> 
>  As from its excerpt below, the minimum obsolescence period must be
>  max(next_release, 3 months).
> 
>  """
>  Based on that data, an obsolescence date will be set. At the very
>  minimum the feature (or API, or configuration option) should be marked
>  deprecated (and still be supported) in the next stable release branch,
>  and for at least three months linear time. For example, a feature
>  deprecated in November 2015 should still appear in the Mitaka release
>  and stable/mitaka stable branch and cannot be removed before the
>  beginning of the N development cycle in April 2016. A feature deprecated
>  in March 2016 should still appear in the Mitaka release and
>  stable/mitaka stable branch, and cannot be removed before June 2016.
>  """
> 
>  This spec, however, only covers released and/or tagged code.
> 
>  tl;dr:
> 
>  How should we proceed regarding code/features/configs/APIs that have not
>  even been tagged yet?
> 
>  Isn't waiting for the next OpenStack release in this case too long?
>  Otherwise, we are going to have features/configs/APIs/etc. that are
>  deprecated from their very first tag/release.
> 
>  How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
>  months? max(next_tag, 3 months)?
> >>>
> >>> -1
> >>>
> >>> The reason the wording is that way is because lots of people deploy
> >>> OpenStack services in a continuous deployment model, from the master
> >>> source
> >>> branches (sometimes minus X number of commits as these deployers run the
> >>> code through their test platforms).
> >>>
> >>> Not everyone uses tagged releases, and OpenStack as a community has
> >>> committed (pun intended) to serving these continuous deployment scenarios.
> >>>
> >>> Right, so I asked Gabriel to send this because it's an odd case, and I'd
> >>> like to clear up the governance doc on this, since it doesn't seem to
> >>> say much about code that was never released.
> >>>
> >>> The rule is a cycle boundary *and* at least 3 months. However, in this
> >>> case, the code was never in a release at all, much less a stable
> >>> release. So looking at the two types of deployers:
> >>>
> >>> 1) CD from trunk: 3 months is fine, we do that, done.
> >>>
> >>> 2) Deploying stable releases: if we only wait three months and not a
> >>> cycle boundary, they'll never see it. If we do wait for a cycle
> >>> boundary, we're pushing deprecated code to them for (seemingly to me) no
> >>> benefit.
> >>>
> >>> So, it makes sense to me to not introduce the cycle boundary thing in
> >>> this case. But there is value in keeping the rule simple, and if we want
> >>> this one to pass a cycle boundary to optimize for that, I'm okay with
> >>> that too. :)
> >>>
> >>> (Side note: there's actually a third type of deployer for Ironic; one
> >>> that deploys intermediate releases. I think if we give them at least one
> >>> release and three months, they're okay, so the general standard
> >>> deprecation rule covers them.)
> >>>
> >>> // jim
> >>
> >> So, summarizing that:
> >>
> >> * untagged/master: 3 months
> >>
> >> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
> >>
> >> * stable release: max(next release, 3 months)
> >>
> >> Is it correct?
> > 
> > No, my proposal is that, but s/max/AND/.
> > 
> > This also needs buyoff from other folks in the community, and an update
> > to the document in the governance repo which requires TC approval.
> > 
> > For now we must assume a cycle boundary and three months, and/or hold off on
> > the patch until this is decided.
> 
> The AND version of this seems to respect the spirit of the original
> intent. The 3 month window was designed to push back a little on

Re: [openstack-dev] [All][Glance] Feedback on the proposed refactor to the image import process required

2015-11-04 Thread Brian Rosmaita
Thanks to everyone who has commented on the spec and/or participated in
the discussions at the summit last week.

I've uploaded a new patch set that describes my interpretation of the
image import workflow and API calls that were discussed.

Please take a look and leave comments.

--
cheers,
brian


On 10/20/15, 1:06 PM, "Brian Rosmaita" 
wrote:

>Hello,
>
>I've updated the image import spec [4] to incorporate the discussion thus
>far.
>
>The fishbowl session [5] is scheduled for Thursday, October 29,
>2:40pm-3:20pm.
>
>If you read through the spec and the current discussion on the review,
>you'll be in a good position to help us get this worked out during the
>summit.
>
>--
>cheers,
>brian 
>
>On 10/9/15, 3:39 AM, "Flavio Percoco"  wrote:
>
>>Greetings,
>>
>>There was recently a discussion[0] on the mailing list, started by Doug
>>Hellman, to discuss some issues related to Glance's API, the conflicts
>>between v1 and v2 and how this is making some pandas sad.
>>
>>The above served as a starting point for a discussion around the
>>current API, how it can be improved, etc. This discussions happened on
>>IRC[1], on  a call (sorry, I forgot to record this call, this is entirely
>>my fault) and on an etherpad[2]. Later on, Brian Rosmaita summarized
>>all this in a document[3], which became a spec[4]. :D
>>
>>The spec is the central point of discussion now and it contains a more
>>structured, more organized and more concrete proposal that needs to be
>>discussed. Nevertheless, I believe there's still lot to do there and I
>>also believe - I'm sure others do as well - this spec could use
>>opinions from a broader audience. Therefore, I'd really appreciate
>>your opinion on this thread.
>>
>>This will also be discussed at the summit[5] in a fishbowl session and
>>I hope to see you all there as well.
>>
>>I'd like to thank everyone that has participated in this discussion so
>>far and I hope to see others chime in as well.
>>
>>Flavio
>>
>>[0] 
>>http://lists.openstack.org/pipermail/openstack-dev/2015-September/074360.
>>h
>>tml
>>[1] 
>>http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-g
>>l
>>ance.2015-09-22.log.html#t2015-09-22T14:31:00
>>[2] https://etherpad.openstack.org/p/glance-upload-mechanism-reloaded
>>[3] 
>>https://docs.google.com/document/d/1_mQZlUN_AtqhH6qh3ANz-m1zCOYkp1GyxndLt
>>Y
>>MFRb0
>>[4] https://review.openstack.org/#/c/232371/
>>[5] 
>>http://mitakadesignsummit.sched.org/event/398b1f44af7a4ae3dde9cb47d4d52d9
>>a
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Ed Leafe
On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> 
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Sean Dague
On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
> On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
>> On 04.11.2015 11:32, Jim Rollenhagen wrote:
>>> On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
>>> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
 Hi,

 The change in https://review.openstack.org/237122 touches a feature from
 ironic that has not been released in any tag yet.

 At first, we from the team who has written the patch thought that, as it
 has not been part of any release, we could do backwards incompatible
 changes on that part of the code. As it turned out from discussing with
 the community, ironic commits to keeping the master branch backwards
 compatible and a deprecation process is needed in that case.

 That stated, the question at hand is: How long should this deprecation
 process last?

 This spec specifies the deprecation policy we should follow:
 https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst


 As from its excerpt below, the minimum obsolescence period must be
 max(next_release, 3 months).

 """
 Based on that data, an obsolescence date will be set. At the very
 minimum the feature (or API, or configuration option) should be marked
 deprecated (and still be supported) in the next stable release branch,
 and for at least three months linear time. For example, a feature
 deprecated in November 2015 should still appear in the Mitaka release
 and stable/mitaka stable branch and cannot be removed before the
 beginning of the N development cycle in April 2016. A feature deprecated
 in March 2016 should still appear in the Mitaka release and
 stable/mitaka stable branch, and cannot be removed before June 2016.
 """

 This spec, however, only covers released and/or tagged code.

 tl;dr:

 How should we proceed regarding code/features/configs/APIs that have not
 even been tagged yet?

 Isn't waiting for the next OpenStack release in this case too long?
 Otherwise, we are going to have features/configs/APIs/etc. that are
 deprecated from their very first tag/release.

 How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
 months? max(next_tag, 3 months)?
>>>
>>> -1
>>>
>>> The reason the wording is that way is because lots of people deploy
>>> OpenStack services in a continuous deployment model, from the master
>>> source
>>> branches (sometimes minus X number of commits as these deployers run the
>>> code through their test platforms).
>>>
>>> Not everyone uses tagged releases, and OpenStack as a community has
>>> committed (pun intended) to serving these continuous deployment scenarios.
>>>
>>> Right, so I asked Gabriel to send this because it's an odd case, and I'd
>>> like to clear up the governance doc on this, since it doesn't seem to
>>> say much about code that was never released.
>>>
>>> The rule is a cycle boundary *and* at least 3 months. However, in this
>>> case, the code was never in a release at all, much less a stable
>>> release. So looking at the two types of deployers:
>>>
>>> 1) CD from trunk: 3 months is fine, we do that, done.
>>>
>>> 2) Deploying stable releases: if we only wait three months and not a
>>> cycle boundary, they'll never see it. If we do wait for a cycle
>>> boundary, we're pushing deprecated code to them for (seemingly to me) no
>>> benefit.
>>>
>>> So, it makes sense to me to not introduce the cycle boundary thing in
>>> this case. But there is value in keeping the rule simple, and if we want
>>> this one to pass a cycle boundary to optimize for that, I'm okay with
>>> that too. :)
>>>
>>> (Side note: there's actually a third type of deployer for Ironic; one
>>> that deploys intermediate releases. I think if we give them at least one
>>> release and three months, they're okay, so the general standard
>>> deprecation rule covers them.)
>>>
>>> // jim
>>
>> So, summarizing that:
>>
>> * untagged/master: 3 months
>>
>> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
>>
>> * stable release: max(next release, 3 months)
>>
>> Is it correct?
> 
> No, my proposal is that, but s/max/AND/.
> 
> This also needs buyoff from other folks in the community, and an update
> to the document in the governance repo which requires TC approval.
> 
> For now we must assume a cycle boundary and three months, and/or hold off on
> the patch until this is decided.

The AND version of this seems to respect the spirit of the original
intent. The 3 month window was designed to push back a little on
last-minute deprecations made for a release and then deleted the second
master landed, which looked very different for stable release vs. CD
consuming folks.

The intermediate release or no-release model just wasn't considered
initially.

-Sean

-- 
Sean Dague
http://dague.net

Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
> On 04.11.2015 11:32, Jim Rollenhagen wrote:
> >On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
> >On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
> >>Hi,
> >>
> >>The change in https://review.openstack.org/237122 touches a feature from
> >>ironic that has not been released in any tag yet.
> >>
> >>At first, we from the team who has written the patch thought that, as it
> >>has not been part of any release, we could do backwards incompatible
> >>changes on that part of the code. As it turned out from discussing with
> >>the community, ironic commits to keeping the master branch backwards
> >>compatible and a deprecation process is needed in that case.
> >>
> >>That stated, the question at hand is: How long should this deprecation
> >>process last?
> >>
> >>This spec specifies the deprecation policy we should follow:
> >>https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
> >>
> >>
> >>As from its excerpt below, the minimum obsolescence period must be
> >>max(next_release, 3 months).
> >>
> >>"""
> >>Based on that data, an obsolescence date will be set. At the very
> >>minimum the feature (or API, or configuration option) should be marked
> >>deprecated (and still be supported) in the next stable release branch,
> >>and for at least three months linear time. For example, a feature
> >>deprecated in November 2015 should still appear in the Mitaka release
> >>and stable/mitaka stable branch and cannot be removed before the
> >>beginning of the N development cycle in April 2016. A feature deprecated
> >>in March 2016 should still appear in the Mitaka release and
> >>stable/mitaka stable branch, and cannot be removed before June 2016.
> >>"""
> >>
> >>This spec, however, only covers released and/or tagged code.
> >>
> >>tl;dr:
> >>
> >>How should we proceed regarding code/features/configs/APIs that have not
> >>even been tagged yet?
> >>
> >>Isn't waiting for the next OpenStack release in this case too long?
> >>Otherwise, we are going to have features/configs/APIs/etc. that are
> >>deprecated from their very first tag/release.
> >>
> >>How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
> >>months? max(next_tag, 3 months)?
> >
> >-1
> >
> >The reason the wording is that way is because lots of people deploy
> >OpenStack services in a continuous deployment model, from the master
> >source
> >branches (sometimes minus X number of commits as these deployers run the
> >code through their test platforms).
> >
> >Not everyone uses tagged releases, and OpenStack as a community has
> >committed (pun intended) to serving these continuous deployment scenarios.
> >
> >Right, so I asked Gabriel to send this because it's an odd case, and I'd
> >like to clear up the governance doc on this, since it doesn't seem to
> >say much about code that was never released.
> >
> >The rule is a cycle boundary *and* at least 3 months. However, in this
> >case, the code was never in a release at all, much less a stable
> >release. So looking at the two types of deployers:
> >
> >1) CD from trunk: 3 months is fine, we do that, done.
> >
> >2) Deploying stable releases: if we only wait three months and not a
> >cycle boundary, they'll never see it. If we do wait for a cycle
> >boundary, we're pushing deprecated code to them for (seemingly to me) no
> >benefit.
> >
> >So, it makes sense to me to not introduce the cycle boundary thing in
> >this case. But there is value in keeping the rule simple, and if we want
> >this one to pass a cycle boundary to optimize for that, I'm okay with
> >that too. :)
> >
> >(Side note: there's actually a third type of deployer for Ironic; one
> >that deploys intermediate releases. I think if we give them at least one
> >release and three months, they're okay, so the general standard
> >deprecation rule covers them.)
> >
> >// jim
> 
> So, summarizing that:
> 
> * untagged/master: 3 months
> 
> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
> 
> * stable release: max(next release, 3 months)
> 
> Is it correct?

No, my proposal is that, but s/max/AND/.

This also needs buyoff from other folks in the community, and an update
to the document in the governance repo which requires TC approval.

For now we must assume a cycle boundary and three months, and/or hold off on
the patch until this is decided.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder mid-cycle planning survey

2015-11-04 Thread Duncan Thomas
Hi Folks

The Cinder team is trying to plan our mid-cycle meetup again.

Can anybody interested in attending please fill out this quick survey to
help with planning, please?

https://www.surveymonkey.com/r/Q5FZX68

Closing date is 11th November.

Thanks
-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow
Along this line, things like the following are likely more changeable 
(and my guess is operators would want to change them when things start 
going badly), for example from a nova.conf that I have laying around...


[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those I think should have higher priority as being 
reconfigurable, but I think operators should be asked what they think 
would be useful and prioritize those.


Some of those really are service discovery 'types' (rabbit_hosts, 
glance/api_servers, keystone/api_servers) but fixing this is likely a 
longer term goal (see conversations in keystone).


Joshua Harlow wrote:

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html

[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html



So my general thought on the above is yes, definitely consult operators
to see if they would use this, although if a feature doesn't exist and
has never existed (say outside of ceilometer) then it's sort of hard to
get an accurate survey result from a group of people that have never had
the feature in the first place... Either way it should be done, just to
get more knowledge...

I know operators (at yahoo!) want to be able to dynamically change the
logging level, and that's not a monthly task, but more of an 'as-needed'
one that would be very helpful when things start going badly... So
perhaps the set of reloadable configuration should start out small and
not encompass all the things...



On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I haven't been at the Tokyo summit but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,


https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss agreements reached on the summit and
discuss
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood 'mutable' parameter must indicate whether service
contains
code responsible for reloading of this option or not.
And this parameter should be one of the arguments of cfg.Opt
constructor.
Problems:
1. Library's options.
SSL options ca_file, cert_file, key_file taken from oslo.service library
could be reloaded in nova-api so these options should be mutable...
But for some projects that don't need SSL support reloading of SSL
options
doesn't make sense. For such projects this option should be non mutable.
Problem is that oslo.service - single and there are many different
projects
which use it in different way.
The same options could be mutable and non mutable in different contexts.
2. Support of config options on some platforms.
Parameter "mutable" could be different for different platforms. Some
options
make sense only for specific platforms. If we mark such options as
mutable
it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options(osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify number of workers for
OpenStack API services.
If value of the 'workers' option is greater than '1' instance of
ProcessLauncher is created otherwise instance of ServiceLauncher is
created.
When ProcessLauncher receives SIGHUP it reloads it own configuration,
gracefully terminates children and respawns new children.
This mechanism allows to reload many config options implicitly.
But if value of the 'workers' option equals '1' instance of
ServiceLauncher
is created.
ServiceLauncher starts everything in single process and in this case we
don't have such implicit reloading.

I think that mutability of options is a complicated feature and I
think that
adding of 'mutable' parameter into cfg.Opt constructor could just add
mess.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view every service should register list of hooks to
reload
config options. oslo.service should catch SIGHUP and call list of
registered
hooks one by one with specified order.
Discussion of s

Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


So my general thought on the above is yes, definitely consult operators 
to see if they would use this, although if a feature doesn't exist and 
has never existed (say outside of ceilometer) then it's sort of hard to 
get an accurate survey result from a group of people that have never had 
the feature in the first place... Either way it should be done, just to 
get more knowledge...


I know operators (at yahoo!) want to be able to dynamically change the 
logging level, and that's not a monthly task, but more of an 'as-needed' 
one that would be very helpful when things start going badly... So 
perhaps the set of reloadable configuration should start out small and 
not encompass all the things...
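
To make the small-set idea concrete, here's a minimal stdlib-only sketch
of the shape this could take (the file path and helper names are invented
for illustration; none of this is oslo code):

    # Illustrative sketch only: plain stdlib, invented names and paths.
    import logging
    import signal

    LOG = logging.getLogger(__name__)
    _reload_hooks = []

    def register_reload_hook(hook):
        # Services register callables that get run, in order, on SIGHUP.
        _reload_hooks.append(hook)

    def _handle_sighup(signum, frame):
        for hook in _reload_hooks:
            try:
                hook()
            except Exception:
                LOG.exception('reload hook %r failed', hook)

    def reload_log_level(path='/etc/myservice/log_level'):
        # Operator writes e.g. "DEBUG" into the file and sends SIGHUP.
        with open(path) as f:
            level = f.read().strip().upper()
        logging.getLogger().setLevel(getattr(logging, level, logging.INFO))
        LOG.warning('log level set to %s', level)

    register_reload_hook(reload_log_level)
    signal.signal(signal.SIGHUP, _handle_sighup)

Starting with just the log level keeps the reloadable set tiny, and more
hooks can hang off the same SIGHUP path later as operators ask for them.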




On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I haven't been at the Tokyo summit but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,

https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss agreements reached on the summit and
discuss
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood 'mutable' parameter must indicate whether service
contains
code responsible for reloading of this option or not.
And this parameter should be one of the arguments of cfg.Opt constructor.
Problems:
1. Library's options.
SSL options ca_file, cert_file, key_file taken from oslo.service library
could be reloaded in nova-api so these options should be mutable...
But for some projects that don't need SSL support reloading of SSL
options
doesn't make sense. For such projects this option should be non mutable.
Problem is that oslo.service - single and there are many different
projects
which use it in different way.
The same options could be mutable and non mutable in different contexts.
2. Support of config options on some platforms.
Parameter "mutable" could be different for different platforms. Some
options
make sense only for specific platforms. If we mark such options as
mutable
it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options(osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify number of workers for
OpenStack API services.
If value of the 'workers' option is greater than '1' instance of
ProcessLauncher is created otherwise instance of ServiceLauncher is
created.
When ProcessLauncher receives SIGHUP it reloads it own configuration,
gracefully terminates children and respawns new children.
This mechanism allows to reload many config options implicitly.
But if value of the 'workers' option equals '1' instance of
ServiceLauncher
is created.
ServiceLauncher starts everything in single process and in this case we
don't have such implicit reloading.

I think that mutability of options is a complicated feature and I
think that
adding of 'mutable' parameter into cfg.Opt constructor could just add
mess.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view every service should register list of hooks to
reload
config options. oslo.service should catch SIGHUP and call list of
registered
hooks one by one with specified order.
Discussion of such implementation was started in ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.

3. "oslo.config is responsible to log changes which were ignored on
SIGHUP"
-
Some config options could be changed using API(for example quotas)
that's why
oslo.config doesn't know actual configuration of service and can't log
changes of configuration.

Regards, Marian Horban


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.or

Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Gabriel Bezerra

On 04.11.2015 11:32, Jim Rollenhagen wrote:

On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
>Hi,
>
>The change in https://review.openstack.org/237122 touches a feature from
>ironic that has not been released in any tag yet.
>
>At first, we from the team who has written the patch thought that, as it
>has not been part of any release, we could do backwards incompatible
>changes on that part of the code. As it turned out from discussing with
>the community, ironic commits to keeping the master branch backwards
>compatible and a deprecation process is needed in that case.
>
>That stated, the question at hand is: How long should this deprecation
>process last?
>
>This spec specifies the deprecation policy we should follow:
>https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
>
>
>As from its excerpt below, the minimum obsolescence period must be
>max(next_release, 3 months).
>
>"""
>Based on that data, an obsolescence date will be set. At the very
>minimum the feature (or API, or configuration option) should be marked
>deprecated (and still be supported) in the next stable release branch,
>and for at least three months linear time. For example, a feature
>deprecated in November 2015 should still appear in the Mitaka release
>and stable/mitaka stable branch and cannot be removed before the
>beginning of the N development cycle in April 2016. A feature deprecated
>in March 2016 should still appear in the Mitaka release and
>stable/mitaka stable branch, and cannot be removed before June 2016.
>"""
>
>This spec, however, only covers released and/or tagged code.
>
>tl;dr:
>
>How should we proceed regarding code/features/configs/APIs that have not
>even been tagged yet?
>
>Isn't waiting for the next OpenStack release in this case too long?
>Otherwise, we are going to have features/configs/APIs/etc. that are
>deprecated from their very first tag/release.
>
>How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
>months? max(next_tag, 3 months)?

-1

The reason the wording is that way is because lots of people deploy
OpenStack services in a continuous deployment model, from the master
source branches (sometimes minus X number of commits as these deployers
run the code through their test platforms).

Not everyone uses tagged releases, and OpenStack as a community has
committed (pun intended) to serving these continuous deployment
scenarios.

Right, so I asked Gabriel to send this because it's an odd case, and I'd
like to clear up the governance doc on this, since it doesn't seem to
say much about code that was never released.

The rule is a cycle boundary *and* at least 3 months. However, in this
case, the code was never in a release at all, much less a stable
release. So looking at the two types of deployers:

1) CD from trunk: 3 months is fine, we do that, done.

2) Deploying stable releases: if we only wait three months and not a
cycle boundary, they'll never see it. If we do wait for a cycle
boundary, we're pushing deprecated code to them for (seemingly to me) no
benefit.

So, it makes sense to me to not introduce the cycle boundary thing in
this case. But there is value in keeping the rule simple, and if we want
this one to pass a cycle boundary to optimize for that, I'm okay with
that too. :)

(Side note: there's actually a third type of deployer for Ironic; one
that deploys intermediate releases. I think if we give them at least one
release and three months, they're okay, so the general standard
deprecation rule covers them.)

// jim


So, summarizing that:

* untagged/master: 3 months

* tagged/intermediate release: max(next tag/intermediate release, 3 
months)


* stable release: max(next release, 3 months)

Is it correct?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Robert Collins
On 5 November 2015 at 07:37, Sean Dague  wrote:

> It only really will skew when upper-constraints.txt gets updated on a
> branch.

Bah, yes.

> I honestly think it's ok to not be perfect here. In the base case we'll
> speed up a good chunk, and we'll be slower (though not as slow as today)
> for a day after we bump upper-constraints for something expensive (like
> numpy). It seems like a reasonable trade off for not much complexity.

Oh, I clearly wasn't clear. I think your patch is a good thing. I'm
highlighting the corner case and proposing a down-the-track way to
address it.

And the reason I'm doing that is that Clark has said that we have lots
and lots of trouble updating images, so I'm expecting the corner case
to be fairly common :/.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Robert Collins
On 5 November 2015 at 04:42, Sean Dague  wrote:
> On 11/04/2015 10:13 AM, John Garbutt wrote:

> I think longer term we probably need a dedicated event service in
> OpenStack. A few of us actually had an informal conversation about this
> during the Nova notifications session to figure out if there was a way
> to optimize the Searchlight path. Nearly everyone wants websockets,
> which is good. The problem is, that means you've got to anticipate
> 10,000+ open websockets as soon as we expose this. Which means the stack
> to deliver that sanely isn't just a bit of python code, it's also the
> highly optimized server underneath.

So any decent epoll implementation should let us hit that without a
super optimised server - eventlet being in that category. I totally
get that we're going to expect thundering herds, but websockets isn't
new and the stacks we have - apache, eventlet - have been around long
enough to adjust to the rather different scaling pattern.

So - let's not panic, get a proof of concept up somewhere and then run
an actual baseline test. If that's shockingly bad *then* let's panic.
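
For a baseline, the proof of concept really can be tiny -- e.g. an
eventlet websocket echo server along these lines (purely illustrative,
obviously not the proposed event service):

    # Purely illustrative PoC for load-testing the websocket stack.
    import eventlet
    from eventlet import websocket, wsgi

    @websocket.WebSocketWSGI
    def handle(ws):
        # Echo messages back until the client disconnects.
        while True:
            msg = ws.wait()
            if msg is None:
                break
            ws.send(msg)

    if __name__ == '__main__':
        # One green thread per connection; point a herd of clients at this
        # and measure where it falls over.
        wsgi.server(eventlet.listen(('0.0.0.0', 8080)), handle)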

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Lee Calcote
Especially given the pervasiveness of discussion topics. +1

Lee

> On Nov 4, 2015, at 12:44 PM, Brad Topol  wrote:
> 
> +1 That's an extremely good suggestion!!!
> 
> --Brad
> 
> 
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
> 
> 
> From: Sean McGinnis 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/04/2015 10:36 AM
> Subject: Re: [openstack-dev] Troubleshooting cross-project comms
> 
> 
> 
> 
> On Wed, Nov 04, 2015 at 07:30:51AM -0500, Sean Dague wrote:
> > On 10/28/2015 07:15 PM, Anne Gentle wrote:
> > 
> > Has anyone considered using #openstack-dev, instead of a new meeting
> > room? #openstack-dev is mostly a ghost town at this point, and deciding
> > that instead it would be the dedicated cross project space, including
> > meetings support, might be interesting.
> > 
> > -Sean
> 
> +1 - That makes a lot of sense to me.
> 
> > 
> > -- 
> > Sean Dague
> > http://dague.net 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Brad Topol

+1  That's an extremely good suggestion!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Sean McGinnis 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   11/04/2015 10:36 AM
Subject:Re: [openstack-dev] Troubleshooting cross-project comms



On Wed, Nov 04, 2015 at 07:30:51AM -0500, Sean Dague wrote:
> On 10/28/2015 07:15 PM, Anne Gentle wrote:
>
> Has anyone considered using #openstack-dev, instead of a new meeting
> room? #openstack-dev is mostly a ghost town at this point, and deciding
> that instead it would be the dedicated cross project space, including
> meetings support, might be interesting.
>
>-Sean

+1 - That makes a lot of sense to me.

>
> --
> Sean Dague
> http://dague.net
>
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Match type checking from oslo.config.

2015-11-04 Thread Hunter Haugen
> Wow!  I didn't know that a property could have a parent class defined.
> This is nice.  Does it also work for a parameter?

I haven't tried, but property is just a subclass of parameter so
truthy could probably be made a parameter then become a parent of
either a property or a parameter.

>
> The NetScalerTruthy is more or less what would be needed for thruthy stuff.
>
> On my side I came up with this solution (for different stuff, but the
> same principle could be used here as well):
>
> https://review.openstack.org/#/c/238954/10/lib/puppet_x/keystone/type/read_only.rb
>
> And I call it like that:
>
>   newproperty(:id) do
> include PuppetX::Keystone::Type::ReadOnly
>   end
>
> I was thinking of extending this scheme to have needed types (Boolean,
> ...):
>
>   newproperty(:truth) do
> include PuppetX::Openstack::Type::Boolean
>   end
>
> Your solution in NetScalerTruthy is nice, integrated with puppet, but
> require a function call.

The function call is there to a) pass documentation inline (since I assume
every attribute has different documentation, so I didn't want to hardcode
it in the truthy class), and b) pass the default truthy/falsy values
that should be exposed to the provider (i.e., allow you to cast all
truthy values to `"enable"` and `"disable"` instead of only supporting
`true` and `false`).

The truthy class could obviously be implemented such that if no block
is passed to the attribute then the method is automatically called
with default values, then you wouldn't even need the `include` mixin.
>
> My "solution" require no function call unless you have to pass
> parameters. If you have to pass parameter, the interface I used is a
> preset function.  Here is an example:
>
> https://review.openstack.org/#/c/239434/8/lib/puppet_x/keystone/type/required.rb
>
> and you use it like this:
>
>   newparam(:type) do
> isnamevar
> def required_custom_message
>   'Not specifying type parameter in Keystone_endpoint is a bug. ' \
> 'See bug https://bugs.launchpad.net/puppet-keystone/+bug/1506996 '
> \
> "and https://review.openstack.org/#/c/238954/ for more
> information.\n"
> end
> include PuppetX::Keystone::Type::Required
>   end
>
> So, provided you can have a parameter with a parent, both solutions
> could be used.  Which one will it be:
>  - one solution (NetScalerTruthy) is based on inheritance, mine on
> composition.
>  - you have a function call to make with NetScalerTruthy no matter what;
>  - you have to define function to pass parameter with my solution (but
>that shouldn't be required very often)
>
> I tend to prefer my resulting syntax, but that's really me ... I may be
> biased.
>
> What do you think ?
>
>>
>> On Mon, Nov 2, 2015 at 12:06 PM Cody Herriges 
>> wrote:
>>
>> Sofer Athlan-Guyot wrote:
>> > Hi,
>> >
>> > The idea would be to have some of the types defined oslo config
>> >
>>
>> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/types.
>> py
>> > ported to puppet type. Those that looks like good candidates
>> are:
>> > - Boolean;
>> > - IPAddress;
>> > and in a lesser extend:
>> > - Integer;
>> > - Float;
>> >
>> > For instance in puppet type requiring a Boolean, we may test
>> > "/[tT]rue|[fF]alse/", but the real thing is :
>> >
>> > TRUE_VALUES = ['true', '1', 'on', 'yes']
>> > FALSE_VALUES = ['false', '0', 'off', 'no']
>> >
>>
>> Good idea. I'd only add that we should convert 'true' and 'false'
>> to
>> real booleans for Puppet's purposes since the Puppet language is
>> now typed.
>>
>> --
>> Cody
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Sofer Athlan-Guyot
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 


-Hunter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-04 Thread Jonathan D. Proulx
On Wed, Nov 04, 2015 at 06:17:17PM +, Murray, Paul (HP Cloud) wrote:
:> From: Jay Pipes [mailto:jaypi...@gmail.com]
:> A fair point. However, I think that a generic update VM API, which would
:> allow changes to the resources consumed by the VM along with capabiities
:> like CPU model or local disk performance (SSD) is a better way to handle this
:> than a resize-specific API.
:
:
:Sorry I am so late to this - but this stuck out for me. 
:
:Resize is an operation that a cloud user would do to his VM. Usually the
:cloud user does not know what host the VM is running on so a resize does 
:not appear to be a move at all.
:
:Migrate is an operation that a cloud operator does to a VM that is not normally
:available to a cloud user. A cloud operator does not change the VM because 
:the operator just provides what the user asked for. He only chooses where he is 
:going to put it.
:
:It seems clear to me that resize and migrate are very definitely different 
things,
:even if they are implemented using the same code path internally for 
convenience.
:At the very least I believe they need to be kept separate at the API so we can 
apply
:different policy to control access to them.

As an operator I'm with Paul on this.

By all means use the same code path because behind the scenes it *is*
the same thing.  

BUT, at the API level we do need the distinction particularly for access
control policy. The UX 'findability' is important too, but if that
were the only issue a bit of syntactic sugar in the UI could take care
of it.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 01:26 PM, Robert Collins wrote:
> On 5 November 2015 at 06:21, Morgan Fainberg  
> wrote:
>>
> ...
>>
>> If it is that easy, what a fantastic win in speeding things up!
> 
> It'll help, but skew as new things are released - so e.g. after a
> release of numpy until the next image builds and is enabled
> successfully.
> 
> We could have a network mirror with prebuilt wheels with some care,
> but thats a bit more work. The upside is we could be refreshing it
> hourly or so without a multi-GB upload.

It only really will skew when upper-constraints.txt gets updated on a
branch.

I honestly think it's ok to not be perfect here. In the base case we'll
speed up a good chunk, and we'll be slower (though not as slow as today)
for a day after we bump upper-constraints for something expensive (like
numpy). It seems like a reasonable trade off for not much complexity.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Match type checking from oslo.config.

2015-11-04 Thread Sofer Athlan-Guyot
Hunter Haugen  writes:

> I have some code that is similar to this in the F5 and Netscaler
> modules. I make a generic "truthy" property that accepts various
> truthy/falsy values
> (https://github.com/puppetlabs/puppetlabs-netscaler/blob/master/lib/puppet/property/netscaler_
> truthy.rb) then just define that as the parent of the property
> (https://github.com/puppetlabs/puppetlabs-netscaler/blob/master/lib/puppet/type/netscaler_
> csvserver.rb#L73-L75)

Wow!  I didn't know that a property could have a parent class defined.
This is nice.  Does it also work for a parameter?

The NetScalerTruthy is more or less what would be needed for thruthy stuff.

On my side I came up with this solution (for different stuff, but the
same principle could be used here as well):

https://review.openstack.org/#/c/238954/10/lib/puppet_x/keystone/type/read_only.rb

And I call it like that:

  newproperty(:id) do
include PuppetX::Keystone::Type::ReadOnly
  end

I was thinking of extending this scheme to have needed types (Boolean,
...):

  newproperty(:truth) do
include PuppetX::Openstack::Type::Boolean
  end

Your solution in NetScalerTruthy is nice, integrated with puppet, but
require a function call.

My "solution" require no function call unless you have to pass
parameters. If you have to pass parameter, the interface I used is a
preset function.  Here is an example:

https://review.openstack.org/#/c/239434/8/lib/puppet_x/keystone/type/required.rb

and you use it like this:

  newparam(:type) do
isnamevar
def required_custom_message
  'Not specifying type parameter in Keystone_endpoint is a bug. ' \
'See bug https://bugs.launchpad.net/puppet-keystone/+bug/1506996 ' \
"and https://review.openstack.org/#/c/238954/ for more information.\n"
end
include PuppetX::Keystone::Type::Required
  end

So, provided you can have a parameter with a parent, both solutions
could be used.  Which one will it be:
 - one solution (NetScalerTruthy) is based on inheritance, mine on composition.
 - you have a function call to make with NetScalerTruthy no matter what;
 - you have to define function to pass parameter with my solution (but
   that shouldn't be required very often)

I tend to prefer my resulting syntax, but that's really me ... I may be
biased.

What do you think ?

>
> On Mon, Nov 2, 2015 at 12:06 PM Cody Herriges 
> wrote:
>
> Sofer Athlan-Guyot wrote:
> > Hi,
> >
> > The idea would be to have some of the types defined oslo config
> >
> 
> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/types.
> py
> > ported to puppet type. Those that looks like good candidates
> are:
> > - Boolean;
> > - IPAddress;
> > and in a lesser extend:
> > - Integer;
> > - Float;
> >
> > For instance in puppet type requiring a Boolean, we may test
> > "/[tT]rue|[fF]alse/", but the real thing is :
> >
> > TRUE_VALUES = ['true', '1', 'on', 'yes']
> > FALSE_VALUES = ['false', '0', 'off', 'no']
> >
> 
> Good idea. I'd only add that we should convert 'true' and 'false'
> to
> real booleans for Puppet's purposes since the Puppet language is
> now typed.
> 
> --
> Cody
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Sofer Athlan-Guyot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Robert Collins
On 5 November 2015 at 06:21, Morgan Fainberg  wrote:
>
...
>
> If it is that easy, what a fantastic win in speeding things up!

It'll help, but skew as new things are released - so e.g. after a
release of numpy until the next image builds and is enabled
successfully.

We could have a network mirror with prebuilt wheels with some care,
but thats a bit more work. The upside is we could be refreshing it
hourly or so without a multi-GB upload.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-04 Thread Murray, Paul (HP Cloud)


> From: Jay Pipes [mailto:jaypi...@gmail.com]
> On 10/27/2015 01:16 PM, Chris Friesen wrote:
> > On 10/26/2015 06:02 PM, Jay Pipes wrote:
> >
> >> I believe strongly that we should deprecate the existing migrate,
> >> resize, an live-migrate APIs in favor of a single consolidated,
> >> consistent "move"
> >> REST API
> >> that would have the following characteristics:
> >>
> >> * No manual or wait-input states in any FSM graph
> >
> > Sounds good.
> >
> >> * Removal of the term "resize" from the API entirely (the target
> >> resource sizing is an attribute of the move operation, not a
> >> different type of API operation in and of itself)
> >
> > I disagree on this one.
> >
> > As an end-user, if my goal is to take an existing instance and give it
> > bigger disks, my first instinct isn't going to be to look at the "move"
> > operation.  I'm going to look for "scale", or "resize", or something
> > like that.
> >
> > And if an admin wants to migrate an instance away from its current
> > host, why would they want to change its disk size in the process?
> 
> A fair point. However, I think that a generic update VM API, which would
> allow changes to the resources consumed by the VM along with capabilities
> like CPU model or local disk performance (SSD) is a better way to handle this
> than a resize-specific API.


Sorry I am so late to this - but this stuck out for me. 

Resize is an operation that a cloud user would do to his VM. Usually the
cloud user does not know what host the VM is running on so a resize does 
not appear to be a move at all.

Migrate is an operation that a cloud operator does to a VM that is not normally
available to a cloud user. A cloud operator does not change the VM because 
the operator just provides what the user asked for. He only chooses where he is 
going to put it.

It seems clear to me that resize and migrate are very definitely different 
things,
even if they are implemented using the same code path internally for 
convenience.
At the very least I believe they need to be kept separate at the API so we can 
apply
different policy to control access to them.



> 
> So, in other words, I'd support this:
> 
> PATCH /servers/
> 
> with some corresponding request payload that would indicate the required
> changes.
> 
> > I do think it makes sense to combine the external APIs for live and
> > cold migration.  Those two are fundamentally similar, logically
> > separated only by whether the instance stays running or not.
> >
> > And I'm perfectly fine with having the internal implementation of all
> > three share a code path, I just don't think it makes sense for the
> > *external* API.
> 
> I think you meant to say you don't think it makes sense to have three
> separate external APIs for what is fundamentally the same operation (move
> a VM), right?
> 
> Best,
> -jay
> 
> >> * Transition to a task-based API for poll-state requests. This means
> >> that in order for a caller to determine the state of a VM the caller
> >> would call something like GET /servers//tasks/ in order
> >> to see the history of state changes or subtask operations for a
> >> particular request to move a VM
> >
> > Sounds good.
> >
> > Chris
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] mitaka release schedule

2015-11-04 Thread Doug Hellmann
PTLs and release liaisons,

The mitaka release schedule is in the wiki at
https://wiki.openstack.org/wiki/Mitaka_Release_Schedule

Please note that there are only 5 weeks between Feature Freeze and
the final release, instead of the usual 6. This means we have more
time before the freeze for feature development, but it also means
that we need to be more strict about limiting Feature Freeze
Exceptions (FFEs) this cycle than we were for Liberty because we
will have less time to finish them and fix release-blocking bugs.

The Feature Freeze date for the Mitaka3 milestone is March 3, with
FFEs to be completed by March 11 so we can produce initial Release
Candidates (RCs) by March 18.

Remember that non-client libraries should freeze a week earlier
around February 26 and client libraries should freeze with the
services on March 3. If you have service work that will require
client library work, the service work will need to land early enough
to allow the new features in the client to land by the final deadline.

If any of the freeze conditions or dates are not clear, please ask
questions via a follow-up on this thread so everyone can benefit
from the answers.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread gord chung

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most 
operators *never* update configuration files after the initial setup and 
if they did it was very rarely (monthly updates). the question related 
to Ceilometer and its pipeline configuration file so the results might 
be specific to Ceilometer. I think you should definitely query operators 
before undertaking any work. the last thing you want to do is implement 
a feature no one really needs/wants.


[1] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I haven't been at the Tokyo summit, but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services, 


https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and
their implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood it, the 'mutable' parameter must indicate whether the service
contains code responsible for reloading this option or not.
And this parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
SSL options ca_file, cert_file, key_file taken from the oslo.service library
could be reloaded in nova-api, so these options should be mutable...
But for some projects that don't need SSL support, reloading SSL options
doesn't make sense. For such projects these options should be non-mutable.
The problem is that oslo.service is a single library and there are many
different projects which use it in different ways.
The same options could be mutable and non-mutable in different contexts.
2. Support of config options on some platforms.
Parameter "mutable" could be different for different platforms. Some options
make sense only for specific platforms. If we mark such options as mutable
it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
OpenStack API services.
If the value of the 'workers' option is greater than '1', an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals '1', an instance of
ServiceLauncher is created.
ServiceLauncher starts everything in a single process, and in this case we
don't have such implicit reloading.

I think that mutability of options is a complicated feature, and that
adding a 'mutable' parameter to the cfg.Opt constructor could just add confusion.


2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view every service should register a list of hooks to reload
config options. oslo.service should catch SIGHUP and call the list of registered
hooks one by one in a specified order.
Discussion of such implementation was started in ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.

3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
-
Some config options can be changed using the API (for example quotas); that's why
oslo.config doesn't know the actual configuration of the service and can't log
configuration changes.

Regards, Marian Horban


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-04 Thread Eichberger, German
It seems this will give us some more velocity, which is good!
+1

German

From: Gary Kotton <gkot...@vmware.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, November 4, 2015 at 5:24 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][stable] should we open gate for per 
sub-project stable-maint teams?



From: "mest...@mestery.com" <mest...@mestery.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, November 3, 2015 at 7:09 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][stable] should we open gate for per 
sub-project stable-maint teams?

On Tue, Nov 3, 2015 at 10:49 AM, Ihar Hrachyshka <ihrac...@redhat.com> wrote:

Hi all,

currently we have a single neutron-wide stable-maint gerrit group that
maintains all stable branches for all stadium subprojects. I believe
that in lots of cases it would be better to have subproject members to
run their own stable maintenance programs, leaving
neutron-stable-maint folks to help them in non-obvious cases, and to
periodically validate that project-wide stable policies are still honored.

I suggest we open gate to creating subproject stable-maint teams where
current neutron-stable-maint members feel those subprojects are ready
for that and can be trusted to apply stable branch policies in a
consistent way.

Note that I don't suggest we grant those new permissions completely
automatically. If neutron-stable-maint team does not feel safe to give
out those permissions to some stable branches, their feeling should be
respected.

I believe it will be beneficial both for subprojects that would be
able to iterate on backports in a more efficient way, as well as for
neutron-stable-maint members who are often busy with other stuff, and
often times are not the best candidates to validate technical validity
of backports in random stadium projects anyway. It would also be in
line with general 'open by default' attitude we seem to embrace in
Neutron.

If we decide it's the way to go, there are alternatives on how we
implement it. For example, we can grant those subproject teams all
permissions to merge patches; or we can leave +W votes to
neutron-stable-maint group.

I vote for opening the gates, *and* for granting +W votes where
projects showed reasonable quality of proposed backports before; and
leaving +W to neutron-stable-maint in those rare cases where history
showed backports could get more attention and safety considerations
[with expectation that those subprojects will eventually own +W votes
as well, once quality concerns are cleared].

If we indeed decide to bootstrap subproject stable-maint teams, I
volunteer to reach the candidate teams for them to decide on initial
lists of stable-maint members, and walk them thru stable policies.

Comments?


As someone who spends a considerable amount of time reviewing stable backports 
on a regular basis across all the sub-projects, I'm in favor of this approach. 
I'd like to be included when selecting teams which are appropriate to have 
their own stable teams as well. Please include me when doing that.

+1


Thanks,
Kyle

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread gord chung

apologies if the below was mentioned at some point in this thread.

On 04/11/2015 10:42 AM, Sean Dague wrote:

This seems like a fundamental abuse of HTTP honestly. If you find
yourself creating a ton of new headers, you are probably doing it wrong.
if we want to explore the HTTP path, did we consider using ETags[1] to 
check whether resources have changed? it's something used by Gnocchi's 
API to handle resource changes.
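
For reference, the conditional-GET pattern that ETags enable looks roughly 
like the sketch below (generic HTTP via python-requests; the URL, token and 
header values are placeholders, not any particular project's API):

    import requests

    url = "http://cloud.example.com/v2.1/servers/<uuid>"
    headers = {"X-Auth-Token": "<token>"}

    # first poll: remember the entity tag the server returned
    resp = requests.get(url, headers=headers)
    etag = resp.headers.get("ETag")

    # later polls: only transfer a body if the resource changed
    resp = requests.get(url, headers=dict(headers, **{"If-None-Match": etag}))
    if resp.status_code == 304:
        pass  # resource unchanged, nothing to re-read
    else:
        etag = resp.headers.get("ETag")  # changed; refresh the cached copy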


I do think the near term work around is to actually use Searchlight.
They're monitoring the notifications bus for nova, and refreshing
resources when they see a notification which might have changed it. It
still means that Searchlight is hitting our API more than ideal, but at
least only one service is doing so, and if the rest hit that instead
they'll get the resource without any db hits (it's all through an
elastic search cluster).

I think longer term we probably need a dedicated event service in
OpenStack. A few of us actually had an informal conversation about this
during the Nova notifications session to figure out if there was a way
to optimize the Searchlight path. Nearly everyone wants websockets,
which is good. The problem is, that means you've got to anticipate
10,000+ open websockets as soon as we expose this. Which means the stack
to deliver that sanely isn't just a bit of python code, it's also the
highly optimized server underneath.
as part of the StackTach integration efforts, Ceilometer (as of Juno) 
listens to all notifications in the OpenStack ecosystem and builds a 
normalised event model[2] from it. the normalised event data is stored 
in a backend (elasticsearch, sql, mongodb, hbase) and from this you can 
query based on required attributes. in addition to storing events, in 
Liberty, Aodh (alarming service) added support to take events and create 
alarms based on change of state[3] with expanded functionality to be 
added. this was added to handle the NFV use case but may also be 
relevant here as it seems like we want to have an action based on status 
changes.


i should mention that we discussed splitting out the event logic in 
Ceilometer to create a generic listener[4] service which could convert 
notification data to meters, events, and anything else. this isn't a 
high priority item but might be an integration point for those looking 
to leverage notifications in OpenStack.


[1] https://en.wikipedia.org/wiki/HTTP_ETag
[2] http://docs.openstack.org/admin-guide-cloud/telemetry-events.html
[3] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/event-alarm-evaluator.html

[4] https://etherpad.openstack.org/p/mitaka-telemetry-split

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Morgan Fainberg
On Nov 4, 2015 09:14, "Sean Dague"  wrote:
>
> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> > On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> >> On 11/04/2015 06:47 AM, Sean Dague wrote:
> > [...]
> >> Is there a nodepool cache strategy where we could pre build these? A 25%
> >> performance win comes out the other side if there is a strategy here.
> >>
> >> python wheel repo could help maybe?
> >
> > That's along the lines of how I expect we'd need to solve it.
> > Basically add a new DIB element to openstack-infra/project-config in
> > nodepool/elements (or extend the cache-devstack element already
> > there) to figure out which version(s) it needs to prebuild and then
> > populate a wheelhouse which can be leveraged by the jobs running on
> > the resulting diskimage. The test scripts in the
> > openstack/requirements repo may already have much of this logic
> > implemented for the purpose of testing that we can build sane wheels
> > of all our requirements.
> >
> > This of course misses situations where the requirements change and
> > the diskimages haven't been rebuilt or in jobs testing proposed
> > changes which explicitly alter these requirements, but could be
> > augmented by similar mechanisms in devstack itself to avoid building
> > them more than once.
>
> Ok, so given that pip automatically builds a local wheel cache now when
> it installs this... is it as simple as
> https://review.openstack.org/#/c/241692/ ?
>

If it is that easy, what a fantastic win in speeding things up!

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
>> On 11/04/2015 06:47 AM, Sean Dague wrote:
> [...]
>>> Is there a nodepool cache strategy where we could pre build these? A 25%
>>> performance win comes out the other side if there is a strategy here.
>>
>> python wheel repo could help maybe?
> 
> That's along the lines of how I expect we'd need to solve it.
> Basically add a new DIB element to openstack-infra/project-config in
> nodepool/elements (or extend the cache-devstack element already
> there) to figure out which version(s) it needs to prebuild and then
> populate a wheelhouse which can be leveraged by the jobs running on
> the resulting diskimage. The test scripts in the
> openstack/requirements repo may already have much of this logic
> implemented for the purpose of testing that we can build sane wheels
> of all our requirements.
> 
> This of course misses situations where the requirements change and
> the diskimages haven't been rebuilt or in jobs testing proposed
> changes which explicitly alter these requirements, but could be
> augmented by similar mechanisms in devstack itself to avoid building
> them more than once.

Ok, so given that pip automatically builds a local wheel cache now when
it installs this... is it as simple as
https://review.openstack.org/#/c/241692/ ?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Jeremy Stanley
On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> On 11/04/2015 06:47 AM, Sean Dague wrote:
[...]
> > Is there a nodepool cache strategy where we could pre build these? A 25%
> > performance win comes out the other side if there is a strategy here.
> 
> python wheel repo could help maybe?

That's along the lines of how I expect we'd need to solve it.
Basically add a new DIB element to openstack-infra/project-config in
nodepool/elements (or extend the cache-devstack element already
there) to figure out which version(s) it needs to prebuild and then
populate a wheelhouse which can be leveraged by the jobs running on
the resulting diskimage. The test scripts in the
openstack/requirements repo may already have much of this logic
implemented for the purpose of testing that we can build sane wheels
of all our requirements.

This of course misses situations where the requirements change and
the diskimages haven't been rebuilt or in jobs testing proposed
changes which explicitly alter these requirements, but could be
augmented by similar mechanisms in devstack itself to avoid building
them more than once.
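
(For illustration only, and assuming paths and hook points that are not the
actual element: at image-build time something along the lines of
"pip wheel -r upper-constraints.txt -w /opt/wheels" could populate a
wheelhouse, and jobs would then install with
"pip install --no-index --find-links /opt/wheels <package>" so that heavy
packages like lxml and numpy never get compiled during the run.)
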
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][dsvm] openstack packages pre-installed dsvm node?

2015-11-04 Thread Jeremy Stanley
On 2015-11-04 18:14:43 +0800 (+0800), Gareth wrote:
> When choosing nodes to run a gate job, is there a liberty-trusty node?
> The liberty core services are installed and well set up. It is helpful
> for development on non-core projects and saves much time.

Integration tests really do need to install the services from source
during the job, not in advance. Otherwise you would be unable to
have your project's change declare a cross-repository dependency
(depends-on) to a pending change in another one of those projects.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Ironic] Deploy Ironic with fuel ?

2015-11-04 Thread Stanisław Dałek

Pavlo,

I tried to deploy Ironic from Fuel 8.0 release (iso build 
fuel-community-8.0-83-2015-11-02_05-42-11)


4b1f91ad496c571e4cbc5931134db6479e582c8b

After creating a fuel network-group as you suggested to Loic and 
configuring an Ironic deployment (4 nodes including controller, compute, 
cinder and Ironic), I am getting an error during the deployment of 
firewall.pp on the controller node:


$$network_metadata["vips"]["baremetal"] is :undef, not a hash or array 
at /etc/puppet/modules/osnailyfacter/modular/firewall/firewall.pp:52 on 
node node-36.domain.tld


It seems that the VIP for the baremetal network hasn't been created.
Please note, I didn't deploy the Fuel-Plugin-Ironic Loic refers to (I 
tried it in 7.0, but it doesn't work anyway), but used the Ironic 
functionality available in 8.0.


Is there any way I can make it work?

Best regards
Stanislaw



Subject:   Re: [openstack-dev] [Fuel][Plugins][Ironic] Deploy Ironic with fuel ?


From:   Pavlo Shchelokovskyy (pshc...@mirantis.com)
Date:   Oct 20, 2015 9:36:37 am
List:   org.openstack.lists.openstack-dev

Hi Loic,

the story of this plugin is a bit complicated. We've done it as a PoC of 
integrating Ironic into Mirantis OpenStack/Fuel during the 7.0 release. 
Currently we are working on integrating Ironic into the core of Fuel, 
targeting its 8.0 release. Given that, the plugin is not official in any 
sense, is not certified according to Fuel plugins guidelines, is not 
supported at all and has had only limited testing on a small in-house lab.


To successfully deploy Ironic with this plugin "as-is" you'd most 
probably need access to Mirantis package repositories as it relies on 
some patches to fuel-agent that we use for bootstrapping, and some of 
those are not merged yet, so we use repos created by our CI from Gerrit 
changes. Probably though you can hack on the code and disable such 
dependencies/building and uploading the custom bootstrap image, activate 
clear upstream Ironic drivers and then use upstream images with e.g. 
ironic-python-agent for bootstrapping baremetal nodes.


As to your network setup question - the baremetal network is somewhat 
similar to the public network in Fuel, which needs two ip ranges 
defined, one for service nodes, and the other for actual VMs to assign 
as floating ips. Thus networking setup for the plugin should be done as 
follows (naming it "baremetal" is mandatory):


fuel network-group --name baremetal --cidr 192.168.3.0/24 -c --nodegroup 
1 --meta='{ip_range: ["192.168.3.2", "192.168.3.50"], notation: 
"ip_ranges"}'


where the ip range (I've put some example values) is for those service 
OpenStack nodes that host Ironic services and need to have access to 
this provider network where BM nodes do live (this range is then 
auto-filled to network.baremetal section of Networking settings tab in 
Fuel UI). The range for the actual BM nodes is defined then on the 
"Settings->Ironic" tab in Fuel UI once Ironic checkbox there is activated.


I admit we do need to make some effort and document the plugin a bit 
better (actually at all :) ) to not confuse people wishing to try it out.


Best regards,

On Mon, Oct 19, 2015 at 6:45 AM,  wrote:

Hello,

I’m currently searching for information about Ironic Fuel plugin : 
https://github.com/openstack/fuel-plugin-ironic I don’t find any 
documentation on it.


I’ve tried to install and deploy an Openstack environment with Fuel 7.0 
and Ironic plugin but it failed. After adding ironic role to a node Fuel 
UI crashed, due to a missing network “baremetal” . When creating a 
network group


fuel network-group --create --node-group 1 --name \

"baremetal" --cidr 192.168.3.0/24

UI works again, but I got some errors in the deployment, during network 
configuration. So I think I have to configure a network template, did 
someone already do this for this plugin ?


Regards,

Loic

_ 




This message and its attachments may contain confidential or privileged
information that may be protected by law; they should not be 
distributed, used or copied without authorisation. If you have received 
this email in error, please notify the sender and delete
this message and its attachments. As emails may be altered, Orange is 
not liable for messages that have been

modified, changed or falsified. Thank you.

_

Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 07:47 AM, Sean Dague wrote:
> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15minutes of
> its hour-long build time is compiling lxml and numpy 3 times each.
> 
> Due to our exact calculations by upper-constraints.txt we ensure exactly
> the right version of each of those in old & new & subnode (old).
> 
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.

Also, if anyone needs more data -
http://paste.openstack.org/show/477989/ is the current cost during a
devstack tempest run (not grenade, which makes this 3x worse).

This was calculated with this parser -
https://review.openstack.org/#/c/241676/

Building cryptography twice looks like a fun one, and cffi 3 times.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-04 Thread James Slagle
On Wed, Nov 4, 2015 at 12:25 AM, Gregory Haynes  wrote:
> Hello everyone,
>
> I would like to propose adding Ian Wienand as a core reviewer on the
> diskimage-builder project. Ian has been making a significant number of
> contributions for some time to the project, and has been a great help in
> reviews lately. Thus, I think we could benefit greatly by adding him as
> a core reviewer.
>
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).

+1

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 10:50 AM, Brant Knudson wrote:
> 
> 
> On Wed, Nov 4, 2015 at 6:47 AM, Sean Dague  > wrote:
> 
> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15minutes of
> it's hour-long build time is compiling lxml and numpy 3 times each.
> 
> 
> I've always wondered why lxml is used rather than python's built-in XML
> support. Is there some function that xml.etree is missing that
> lxml.etree provides? The only thing I know about is that lxml has better
> support for some XPATH features.

It's all xpath semantics as far as I know.

I was told that SAML needs a thing the built-in XML library doesn't
support. I doubt that any other projects really need it. However, there
would be a ton of unwinding in nova to get rid of it given how extensively
it's used in the libvirt driver.

I don't know about other projects. It also only benefits us if we can
remove it from g-r entirely.
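
For anyone curious about the gap, here is a small illustrative sketch (not
taken from any project's code): lxml's .xpath() accepts full XPath 1.0,
including functions, while the stdlib ElementTree only understands a limited
path subset.

    from lxml import etree
    import xml.etree.ElementTree as ET

    xml = "<devices><disk type='file'/><disk type='block'/></devices>"

    doc = etree.fromstring(xml)
    # full XPath 1.0: functions and rich predicates work
    print(doc.xpath("count(//disk)"))                   # 2.0
    print(doc.xpath("//disk[contains(@type, 'fil')]"))  # [<Element disk>]

    # ElementTree only supports a small subset; the contains() expression
    # above is rejected, but simple attribute predicates are fine:
    print(ET.fromstring(xml).findall(".//disk[@type='file']"))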

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-04 Thread Emilien Macchi


On 11/03/2015 03:56 PM, Matt Fischer wrote:
> Sorry I replied to this right away but used the wrong email address and
> it bounced!
> 
>> I've appreciated all of richs v3 contributions to keystone. +1 from me.

Two positive votes from our core-reviewer team.
No negative votes at all.

I guess that's a 'yes'. Welcome Rich, you're the first
puppet-keystone-core member!

Note: anyone who is already a core reviewer on the Puppet modules is also core on
puppet-keystone, by the way.

Congrats Rich!

> On Tue, Nov 3, 2015 at 4:38 AM, Sofer Athlan-Guyot wrote:
> 
> He's a very good reviewer with a deep knowledge of keystone and puppet.
> Thank you Richard for your help.
> 
> +1
> 
> Emilien Macchi writes:
> 
> > At the Summit we discussed about scaling-up our team.
> > We decided to investigate the creation of sub-groups specific to our
> > modules that would have +2 power.
> >
> > I would like to start with puppet-keystone:
> > https://review.openstack.org/240666
> >
> > And propose Richard Megginson part of this group.
> >
> > Rich is leading puppet-keystone work since our Juno cycle. Without his
> > leadership and skills, I'm not sure we would have Keystone v3 support
> > in our modules.
> > He's a good Puppet reviewer and takes care of backward compatibility.
> > He also has strong knowledge at how Keystone works. He's always
> > willing to lead our roadmap regarding identity deployment in
> > OpenStack.
> >
> > Having him on-board is for us an awesome opportunity to be ahead of
> > other deployments tools and supports many features in Keystone that
> > real deployments actually need.
> >
> > I would like to propose him part of the new puppet-keystone-core
> > group.
> >
> > Thank you Rich for your work, which is very appreciated.
> 
> --
> Sofer Athlan-Guyot
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-04 Thread Doug Hellmann
Excerpts from Louis Taylor's message of 2015-11-04 08:05:44 +:
> On Wed, Nov 04, 2015 at 03:03:29PM +1300, Fei Long Wang wrote:
> > Hi Doug,
> > 
> > Thanks for posting this. I'm working on this for Zaqar now and there is a
> > question. As for the stable/liberty patch, where does the "60fdcaba00e30d02"
> > in [1] come from? Thanks.
> > 
> > [1] 
> > https://review.openstack.org/#/c/241322/1/releasenotes/notes/60fdcaba00e30d02-start-using-reno.yaml
> 
> This is from running the reno command to create a uniquely named release note
> file. See http://docs.openstack.org/developer/reno/usage.html

Right, we need the files to have a unique name, so reno generates part
of the file name to be unique, combining it with the partial filename you give
it on the command line.
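
(For example, running something like "reno new start-using-reno" creates a
YAML file under releasenotes/notes/ whose name combines that slug with a
generated unique hex string; that is where a fragment like
"60fdcaba00e30d02" comes from.)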

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Salvatore Orlando
Inline,
Salvatore

On 4 November 2015 at 15:11, Cory Benfield  wrote:

>
> > On 4 Nov 2015, at 13:13, Salvatore Orlando 
> wrote:
> >
> > Regarding Jay's proposal, this would be tantamount to defining an API
> action for retrieving instances, something currently being discussed here
> [1].
> > The only comment I have is that I am not entirely sure whether using
> > the POST verb for operations which do not alter the server
> > representation of any object at all is in accordance with RFC 7231.
>
> It’s totally fine, so long as you define things appropriately. Jay’s
> suggestion does exactly that, and is entirely in line with RFC 7231.
>
> The analogy here is to things like complex search forms. Many search
> engines allow you to construct very complex search queries (consider
> something like Amazon or eBay, where you can filter on all kinds of
> interesting criteria). These forms are often submitted to POST endpoints
> rather than GET.
>
> This is totally fine. In fact, the first example from RFC 7231 Section
> 4.3.3 (POST) applies here: “POST is used for the following functions (among
> others): Providing a block of data […] to a data-handling process”. In this
> case, the data-handling function is the search function on the server.
>

I looked back at the RFC and indeed it does not state anywhere that a POST
operation is required to somehow change the state of any object, so the
approach is entirely fine from this aspect as well.
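
Purely as an illustration of the pattern being discussed (a hypothetical
request, not an agreed-upon Nova endpoint or payload format), such a search
could look like:

    POST /v2.1/{tenant_id}/servers/search HTTP/1.1
    Content-Type: application/json

    {"filters": {"status": "ACTIVE", "flavor": "m1.small"}, "limit": 100}

The response would be an ordinary server list; the practical difference from
a GET with query parameters is that the criteria travel in the request body,
which avoids URL length limits at the cost of cacheability.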


>
> The *only* downside of Jay’s approach is that the response cannot really
> be cached. It’s not clear to me whether anyone actually deploys a cache in
> this kind of role though, so it may not hurt too much.
>

I believe there would not be a great advantage in caching this kind of
response, as cache hits would be very low anyway.


> Cory
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Brant Knudson
On Wed, Nov 4, 2015 at 6:47 AM, Sean Dague  wrote:

> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15minutes of
> its hour-long build time is compiling lxml and numpy 3 times each.
>
>
I've always wondered why lxml is used rather than python's built-in XML
support. Is there some function that xml.etree is missing that lxml.etree
provides? The only thing I know about is that lxml has better support for
some XPATH features.

:: Brant


> Due to our exact calculations by upper-constraints.txt we ensure exactly
> the right version of each of those in old & new & subnode (old).
>
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.
>
> -Sean
>
>



> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Sean Dague
On 11/04/2015 10:13 AM, John Garbutt wrote:
> On 4 November 2015 at 14:49, Jay Pipes  wrote:
>> On 11/04/2015 09:32 AM, Sean Dague wrote:
>>>
>>> On 11/04/2015 09:00 AM, Jay Pipes wrote:

 On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
>
> Hi stackers,
>
> Usually projects like Heat, Tempest, Rally, Scalar, and other tools
> that work with OpenStack handle resources (e.g. VMs, Volumes,
> Images, ..) in the following way:
>
>   >>> resource = api.resource_do_some_stuff()
>   >>> while api.resource_get(resource["uuid"]) != expected_status:
>   >>>     sleep(a_bit)
>
> For each async operation they poll and call resource_get() many times,
> which creates significant load on the API and DB layers due to
> the nature of this request. (Usually getting full information about
> resources produces SQL requests that contain multiple JOINs; e.g. for
> a nova VM it's 6 joins.)
>
> What if we add a new API method that will just return the resource status by
> UUID? Or even just extend the get request with a new argument that returns
> only the status?


 +1

 All APIs should have an HTTP HEAD call on important resources for
 retrieving quick status information for the resource.

 In fact, I proposed exactly this in my Compute "vNext" API proposal:

 http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

 Swift's API supports HEAD for accounts:


 http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta


 containers:


 http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta


 and objects:


 http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta

 So, yeah, I agree.
 -jay
>>>
>>>
>>> How would you expect this to work on "servers"? HEAD specifically
>>> forbids returning a body, and, unlike swift, we don't return very much
>>> information in our headers.
>>
>>
>> I didn't propose doing it on a collection resource like "servers". Only on
>> an entity resource like a single "server".
>>
>> HEAD /v2/{tenant}/servers/{uuid}
>> HTTP/1.1 200 OK
>> Content-Length: 1022
>> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
>> Content-Type: application/json
>> Date: Thu, 16 Jan 2014 21:13:19 GMT
>> OpenStack-Compute-API-Server-VM-State: ACTIVE
>> OpenStack-Compute-API-Server-Power-State: RUNNING
>> OpenStack-Compute-API-Server-Task-State: NONE
> 
> For polling, that sounds quite efficient and handy.
> 
> For "servers" we could do this (I think there was a spec up that wanted this):
> 
> HEAD /v2/{tenant}/servers
> HTTP/1.1 200 OK
> Content-Length: 1022
> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> Content-Type: application/json
> Date: Thu, 16 Jan 2014 21:13:19 GMT
> OpenStack-Compute-API-Server-Count: 13

This seems like a fundamental abuse of HTTP honestly. If you find
yourself creating a ton of new headers, you are probably doing it wrong.

I do think the near term work around is to actually use Searchlight.
They're monitoring the notifications bus for nova, and refreshing
resources when they see a notification which might have changed it. It
still means that Searchlight is hitting our API more than ideal, but at
least only one service is doing so, and if the rest hit that instead
they'll get the resource without any db hits (it's all through an
elastic search cluster).

I think longer term we probably need a dedicated event service in
OpenStack. A few of us actually had an informal conversation about this
during the Nova notifications session to figure out if there was a way
to optimize the Searchlight path. Nearly everyone wants websockets,
which is good. The problem is, that means you've got to anticipate
10,000+ open websockets as soon as we expose this. Which means the stack
to deliver that sanely isn't just a bit of python code, it's also the
highly optimized server underneath.

So, I feel like with Searchlight we've got a work around that's more
efficient than we're going to make with an API that we really don't want
to support down the road. Because I definitely don't want to make
general purpose search a thing inside every service, as in order to make
it efficient we're going to have to reimplement most of searchlight in
the services.

Instead of spending the energy on this path, it would be much better to
push forward on the end user events path, which is really the long term
model we want.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Doug Hellmann
Excerpts from Marian Horban's message of 2015-11-04 17:00:55 +0200:
> Hi guys,
> 
> Unfortunately I haven't been at the Tokyo summit but I know that there was
> discussion about dynamic reloading of configuration.
> Etherpad refs:
> https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services
> ,
> https://etherpad.openstack.org/p/mitaka-oslo-security-logging
> 
> In this thread I want to discuss agreements reached on the summit and
> discuss
> implementation details.
> 
> Some notes taken from etherpad and my remarks:
> 
> 1. "Adding "mutable" parameter for each option."
> "Do we have an option mutable=True on CfgOpt? Yes"
> -
> As I understood 'mutable' parameter must indicate whether service contains
> code responsible for reloading of this option or not.
> And this parameter should be one of the arguments of cfg.Opt constructor.
> Problems:
> 1. Library's options.
> SSL options ca_file, cert_file, key_file taken from oslo.service library
> could be reloaded in nova-api so these options should be mutable...
> But for some projects that don't need SSL support reloading of SSL options
> doesn't make sense. For such projects this option should be non mutable.
> Problem is that oslo.service - single and there are many different projects
> which use it in different way.
> The same options could be mutable and non mutable in different contexts.

No, that would not be allowed. An option would either always be mutable,
or never. Library options, such as logging levels, would be marked
mutable and the library would need to provide a callback of some sort to
be invoked when the configuration is reloaded. If we can't do this for
SSL-related options, those are not mutable.

> 2. Support of config options on some platforms.
> Parameter "mutable" could be different for different platforms. Some
> options
> make sense only for specific platforms. If we mark such options as mutable
> it could be misleading on some platforms.

Again, if the option cannot be made mutable everywhere it is not
mutable.

> 3. Dependency of options.
> There are many 'workers' options(osapi_compute_workers, ec2_workers,
> metadata_workers, workers). These options specify number of workers for
> OpenStack API services.
> If value of the 'workers' option is greater than '1' instance of
> ProcessLauncher is created otherwise instance of ServiceLauncher is created.
> When ProcessLauncher receives SIGHUP it reloads its own configuration,
> gracefully terminates children and respawns new children.
> This mechanism allows to reload many config options implicitly.
> But if value of the 'workers' option equals '1' instance of ServiceLauncher
> is created.
> ServiceLauncher starts everything in single process and in this case we
> don't have such implicit reloading.
> 
> I think that mutability of options is a complicated feature and I think that
> adding of 'mutable' parameter into cfg.Opt constructor could just add mess.

The idea is to start with a very small number of options. Most of the
ones identified in the summit session are owned by the application,
if I remember correctly. After the configuration changes, the same
function that calls the reload for oslo.config would call the necessary
reload functions in the libraries and application modules that have
mutable options.

> 
> 2. "oslo.service catches SIGHUP and calls oslo.config"
> -
> From my point of view every service should register list of hooks to reload
> config options. oslo.service should catch SIGHUP and call list of
> registered
> hooks one by one with specified order.
> Discussion of such implementation was started in ML:
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html

The reload hooks may need to be called in a specific order, so for
now we're leaving it up to the application to do that, rather than
having a generic registry.
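
As a minimal sketch of that application-side approach (illustrative only; the
hook list and signal wiring here are assumptions, and in a real service
oslo.service would own the SIGHUP handling):

    import signal

    from oslo_config import cfg

    CONF = cfg.CONF

    # ordered by the application itself, not by a generic registry
    _reload_hooks = []  # e.g. [reset_log_levels, resize_worker_pool, ...]

    def _handle_sighup(signum, frame):
        # re-read the configuration files, then let each registered
        # application/library hook react to the (possibly) changed
        # mutable options, in the order the application chose
        CONF.reload_config_files()
        for hook in _reload_hooks:
            hook()

    signal.signal(signal.SIGHUP, _handle_sighup)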

> .
> Raw reviews:
> https://review.openstack.org/#/c/228892/,
> https://review.openstack.org/#/c/223668/.
> 
> 3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
> -
> Some config options could be changed using API(for example quotas)
> that's
> why
> oslo.config doesn't know actual configuration of service and can't log
> changes of configuration.

This proposal only applies to options defined by oslo.config, which
should not be duplicated in the database.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-04 Thread Michał Dubiel
Hi John,

Just to let you know, I have just filled in the blueprint here:
https://blueprints.launchpad.net/nova/+spec/libvirt-vhostuser-vrouter-vif

Regards,
Michal

On 4 November 2015 at 12:20, John Garbutt  wrote:

> In terms of adding this into master, we can go for a spec-less
> blueprint in Nova.
>
> Reach out to me on IRC if I can help you through the process.
>
> Thanks,
> johnthetubaguy
>
> PS
> We are working on making this easier in the future, by using OS VIF Lib.
>
> On 4 November 2015 at 08:56, Michał Dubiel  wrote:
> > Ok, I see. Thanks for all the answers.
> >
> > Regards,
> > Michal
> >
> > On 3 November 2015 at 22:50, Matt Riedemann 
> > wrote:
> >>
> >>
> >>
> >> On 11/3/2015 11:57 AM, Michał Dubiel wrote:
> >>>
> >>> Hi all,
> >>>
> >>> We have a simple patch allowing to use OpenContrail's vrouter with
> >>> vhostuser vif types (currently only OVS has support for that). We would
> >>> like to contribute it.
> >>>
> >>> However, We would like this change to land in the next maintenance
> >>> release of Kilo. Is it possible? What should be the process for this?
> >>> Should we prepare a blueprint and review request for the 'master'
> branch
> >>> first? It is small self contained change so I believe it does not need
> a
> >>> nova-spec.
> >>>
> >>> Regards,
> >>> Michal
> >>>
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> The short answer is 'no' to backporting features to stable branches.
> >>
> >> As the other reply said, feature changes are targeted to master.
> >>
> >> The full stable branch policy is here:
> >>
> >> https://wiki.openstack.org/wiki/StableBranch
> >>
> >> --
> >>
> >> Thanks,
> >>
> >> Matt Riedemann
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Sean McGinnis
On Wed, Nov 04, 2015 at 07:30:51AM -0500, Sean Dague wrote:
> On 10/28/2015 07:15 PM, Anne Gentle wrote:
> 
> Has anyone considered using #openstack-dev, instead of a new meeting
> room? #openstack-dev is mostly a ghost town at this point, and deciding
> that instead it would be the dedicated cross project space, including
> meetings support, might be interesting.
> 
>   -Sean

+1 - That makes a lot of sense to me.

> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >