Re: [openstack-dev] [barbican] Help for Barbican and UWSGI Community Goal

2017-06-23 Thread Dave McCowan (dmccowan)


On 6/23/17, 2:24 PM, "Matthew Treinish"  wrote:

>On Fri, Jun 23, 2017 at 04:11:50PM +, Dave McCowan (dmccowan) wrote:
>> The Barbican team is currently lacking a UWSGI expert.
>> We need help identifying what work items we have to meet the UWSGI
>>community goal.[1]
>> Could someone with expertise in this area review our code and docs [2]
>>and help me put together a to-do list?
>
>So honestly barbican is probably already like 90% of the way there. It
>was already running everything as a proper wsgi script under uwsgi. The
>only thing
>missing was the apache config to use mod_proxy_uwsgi to have all the api
>servers
>running on port 80.
>
>It was also doing everything manually instead of relying on the common
>functionality in PBR and devstack to handle creating wsgi entrypoints and
>deploying wsgi apps.
>
>I pushed up:
>
>https://review.openstack.org/#/q/topic:deploy-in-wsgi
>
>To take care of the gaps and make everything use the common mechanisms. It
>probably will need a little bit of work before it's ready to go. (I didn't
>bother testing anything before I pushed it)
>
>-Matt Treinish

Thanks Matt!




Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-23 Thread Michał Jastrzębski
Great idea!

I would also throw another issue new people often have (I had it too).
Namely, what to contribute. Lots of people want to do something but
don't quite know where to start.
So, a few ideas to start:
* List of triaged bugs
* List of work items of large blueprints

Thoughts,
Michal

On 23 June 2017 at 13:17, Mike Perez  wrote:
> Hello all,
>
> Every month we have people asking on IRC or the dev mailing list having
> interest in working on OpenStack, and sometimes they're given different
> answers from people, or worse, no answer at all.
>
> Suggestion: let's combine our efforts to create some common
> documentation so that all teams in OpenStack can benefit.
>
> First it’s important to note that we’re not just talking about code projects
> here. OpenStack contributions come in many forms such as running meet ups,
> identifying use cases (product working group), documentation, testing, etc.
> We want to make sure those potential contributors feel welcomed too!
>
> What is common documentation? Things like setting up Git, the many accounts
> you need to setup to contribute (gerrit, launchpad, OpenStack foundation
> account). Not all teams will use some common documentation, but the point is
> one or more projects will use them. Having the common documentation worked
> on by various projects will better help prevent duplicated efforts,
> inconsistent documentation, and hopefully just more accurate information.
>
> A team might use special tools to do their work. These can be
> integrated into this idea as well.
>
> Once we have common documentation we can have something like:
> 1. Choose your own adventure: I want to contribute by code
> 2. What service type are you interested in? (Database, Block storage,
> compute)
> 3. Here’s step-by-step common documentation for setting up Git, IRC,
> Mailing Lists, Accounts, etc.
> 4. A service type project might choose to also include additional
> documentation in that flow for special tools, etc.
>
> Important things to note in this flow:
> * How do you want to contribute?
> * Here are **clear** names that identify the team. Not code names like
> Cloud Kitty, Cinder, etc.
> * The documentation should really aim to not be daunting:
> * Someone should be able to glance at it and feel like they can finish
> things in five minutes. Not be yet another tab left in their browser that
> they’ll eventually forget about
> * No wall of text!
> * Use screen shots
> * Avoid covering every issue you could hit along the way.
>
> ## Examples of More Simple Documentation
> I worked on some documentation for the Upstream University preparation that
> has received excellent feedback and comes close to these suggestions:
> * IRC [1]
> * Git [2]
> * Account Setup [3]
>
> ## 500-Foot Bird's-Eye View
> There will be a Contributor landing page on the openstack.org website.
> Existing contributors will find reference links to quickly jump to things.
> New contributors will find a banner at the top of the page to direct them to
> the choose your own adventure to contributing to OpenStack, with ordered
> documentation flow that reuses existing documentation when necessary.
> Picture also a progress bar somewhere to show how close you are to being
> ready to contribute to whatever team. Of course there are a lot of other
> fancy things we can come up with, but I think getting something up as an
> initial pass would be better than what we have today.
>
> Here's an example of what the sections/chapters could look like:
>
> - Code
> * Volumes (Cinder)
>  * IRC
>  * Git
>  * Account Setup
>  * Generating Configs
> * Compute (Nova)
>  * IRC
>  * Git
>  * Account Setup
> * Something about hypervisors (matrix?)
> -  Use Cases
> * Products (Product working group)
> * IRC
> * Git
> * Use Case format
>
> There are some rough mock up ideas [4]. Probably Sphinx will be fine for
> this. Potentially we could use this content for conference lunch and learns,
> upstream university, and the on-boarding events at the Forum. What do you
> all think?
>
> [1] - http://docs.openstack.org/upstream-training/irc.html
> [2] - http://docs.openstack.org/upstream-training/git.html
> [3] - http://docs.openstack.org/upstream-training/accounts.html
> [4] -
> https://www.dropbox.com/s/o46xh1cp0sv0045/OpenStack%20contributor%20portal.pdf?dl=0
>
> —
>
> Mike Perez
>

Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-23 Thread Ken Giusti
On Wed, Jun 21, 2017 at 10:13 AM, Ilya Shakhat  wrote:

> Hi Ken,
>
> Please check scenarios and reports that exist in Performance Docs. In
> particular you may be interested in:
>  * O.M.Simulator - https://github.com/openstack/oslo.messaging/blob/master/tools/simulator.py
>  * MQ performance scenario - https://docs.openstack.org/developer/performance-docs/test_plans/mq/plan.html#message-queue-performance
>  * One of RabbitMQ reports - https://docs.openstack.org/developer/performance-docs/test_results/mq/rabbitmq/cmsm/index.html
>  * MQ HA scenario - https://docs.openstack.org/developer/performance-docs/test_plans/mq_ha/plan.html
>  * One of RabbitMQ HA reports - https://docs.openstack.org/developer/performance-docs/test_results/mq_ha/rabbitmq-ha-queues/cs1ss2-ks2-ha/omsimulator-ha-call-cs1ss2-ks2-ha/index.html
>
>
Thank you Ilya - these tests you reference are indeed valuable.

But, IIUC, those tests benchmark queue throughput, using a single (highly
threaded) client->single server traffic flow. If that is the case, I
think the tests we're trying to define might be a bit more specific to the
FEMDC [0] use cases:  multiple servers consuming from different topics
while many clients distributed across the message bus are connecting,
generating traffic, failing over, etc.

The goal of these tests would be to quantify the behavior of the message
bus as a whole under different messaging loads, failure conditions, etc.

[0] https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
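
To make the starting point concrete, here is a rough sketch of the kind of
oslo.messaging client/server pair these scenarios would scale out across many
topics, clients, and hosts. This is only an illustration, not the harness
itself, and the transport URL is a placeholder:

import sys

from oslo_config import cfg
import oslo_messaging


class TestEndpoint(object):
    def echo(self, ctx, msg):
        # trivial RPC method the clients would call in a tight loop
        return msg


def main():
    # placeholder URL - point it at whichever message bus is under test
    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    target = oslo_messaging.Target(topic='perf-test', server='server-1')
    if sys.argv[1] == 'server':
        server = oslo_messaging.get_rpc_server(
            transport, target, [TestEndpoint()], executor='blocking')
        server.start()
        server.wait()
    else:
        client = oslo_messaging.RPCClient(transport, target)
        print(client.call({}, 'echo', msg='ping'))


if __name__ == '__main__':
    main()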




>
> Thanks,
> Ilya
>
> 2017-06-21 15:23 GMT+02:00 Ken Giusti :
>
>> Hi All,
>>
>> Andy and I have taken a stab at defining some test scenarios for analyzing the
>> different message bus technologies:
>>
>> https://etherpad.openstack.org/p/1BGhFHDIoi
>>
>> We've started with tests for just the oslo.messaging layer to analyze
>> throughput and latency as the number of message bus clients - and the bus
>> itself - scale out.
>>
>> The next step will be to define messaging oriented test scenarios for an
>> openstack deployment.  We've started by enumerating a few of the tools,
>> topologies, and fault conditions that need to be covered.
>>
>> Let's use this epad as a starting point for analyzing messaging - please
>> feel free to contribute, question, and criticize :)
>>
>> thanks,
>>
>>
>>
>> --
>> Ken Giusti  (kgiu...@gmail.com)
>>


-- 
Ken Giusti  (kgiu...@gmail.com)


[openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-23 Thread Mike Perez
Hello all,

Every month we have people asking on IRC or the dev mailing list having
interest in working on OpenStack, and sometimes they're given different
answers from people, or worse, no answer at all.

Suggestion: let's combine our efforts to create some common
documentation so that all teams in OpenStack can benefit.

First it’s important to note that we’re not just talking about code
projects here. OpenStack contributions come in many forms such as running
meet ups, identifying use cases (product working group), documentation,
testing, etc. We want to make sure those potential contributors feel
welcomed too!

What is common documentation? Things like setting up Git, the many accounts
you need to setup to contribute (gerrit, launchpad, OpenStack foundation
account). Not all teams will use some common documentation, but the point
is one or more projects will use them. Having the common documentation
worked on by various projects will better help prevent duplicated efforts,
inconsistent documentation, and hopefully just more accurate information.
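
(To give a sense of the scale we should aim for: the common Git/Gerrit setup
most code projects document today boils down to a handful of commands. The
project below is only an example:)

# one-time setup, after creating Gerrit/Launchpad/Foundation accounts
pip install git-review
git config --global user.name "Your Name"
git config --global user.email you@example.com

# per-project workflow
git clone https://git.openstack.org/openstack/nova
cd nova
git review -s    # verifies Gerrit credentials and adds the gerrit remote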

A team might use special tools to do their work. These can be
integrated into this idea as well.

Once we have common documentation we can have something like:
1. Choose your own adventure: I want to contribute by code
2. What service type are you interested in? (Database, Block storage,
compute)
3. Here’s step-by-step common documentation for setting up Git, IRC,
Mailing Lists, Accounts, etc.
4. A service type project might choose to also include additional
documentation in that flow for special tools, etc.

Important things to note in this flow:
* How do you want to contribute?
* Here are **clear** names that identify the team. Not code names like
Cloud Kitty, Cinder, etc.
* The documentation should really aim to not be daunting:
* Someone should be able to glance at it and feel like they can finish
things in five minutes. Not be yet another tab left in their browser that
they’ll eventually forget about
* No wall of text!
* Use screen shots
* Avoid covering every issue you could hit along the way.

## Examples of More Simple Documentation
I worked on some documentation for the Upstream University preparation that
has received excellent feedback and comes close to these suggestions:
* IRC [1]
* Git [2]
* Account Setup [3]

## 500-Foot Bird's-Eye View
There will be a Contributor landing page on the openstack.org website.
Existing contributors will find reference links to quickly jump to things.
New contributors will find a banner at the top of the page to direct them
to the choose your own adventure to contributing to OpenStack, with ordered
documentation flow that reuses existing documentation when necessary.
Picture also a progress bar somewhere to show how close you are to being
ready to contribute to whatever team. Of course there are a lot of other
fancy things we can come up with, but I think getting something up as an
initial pass would be better than what we have today.

Here's an example of what the sections/chapters could look like:

- Code
* Volumes (Cinder)
 * IRC
 * Git
 * Account Setup
 * Generating Configs
* Compute (Nova)
 * IRC
 * Git
 * Account Setup
* Something about hypervisors (matrix?)
-  Use Cases
* Products (Product working group)
* IRC
* Git
* Use Case format

There are some rough mock up ideas [4]. Probably Sphinx will be fine for
this. Potentially we could use this content for conference lunch and
learns, upstream university, and the on-boarding events at the Forum. What
do you all think?

[1] - http://docs.openstack.org/upstream-training/irc.html
[2] - http://docs.openstack.org/upstream-training/git.html
[3] - http://docs.openstack.org/upstream-training/accounts.html
[4] -
https://www.dropbox.com/s/o46xh1cp0sv0045/OpenStack%20contributor%20portal.pdf?dl=0

—

Mike Perez


Re: [openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-23 Thread gordon chung


On 23/06/17 03:08 PM, mate...@mailbox.org wrote:
> The quantity of metrics is very low. I'm not sure that batch_size works.
> Regarding the batch_timeout: what will happen when the timeout is reached?
> Will ceilometer throw an error to the log file and discard the whole batch?
> I've got this timeout set to 300, but every minute I receive errors if the
> api doesn't work correctly.

you set batch_timeout as 300? it's an either/or scenario. basically the 
notification agent (or collector) either waits to receive batch_size 
messages before processing/publishing or it waits batch_timeout seconds 
before processing/publishing. nothing is thrown away.
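
for reference, the knobs i mean look something like this in ceilometer.conf
(illustrative values, assuming the mitaka-era collector options):

[collector]
# publish once 50 samples are buffered or after 5 seconds, whichever
# comes first; samples are never dropped either way
batch_size = 50
batch_timeout = 5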

i'm not sure why you receive some metrics but get timeouts for others. 
maybe others have an idea.

cheers,

-- 
gord


Re: [openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-23 Thread mate200


On Thu, 2017-06-22 at 22:57 +, gordon chung wrote:
> 
> On 22/06/17 04:23 PM, mate...@mailbox.org wrote:
> > Hello everyone !
> > 
> > I'm sorry that I'm disturbing you, but I was sent here from 
> > openstack-operators ML.
> > On my Mitaka test stack I installed Gnocchi as the database for measurements,
> > but I have problems with the api part. Firstly, I ran it directly by executing
> > gnocchi-api -p 8041. I noted the warning message and later reran the api using
> > the uwsgi daemon. The problem I'm faced with is connection errors that appear
> > in ceilometer-collector.log approximately every 5-10 minutes:
> > 
> > 2017-06-22 12:54:09.751 1846835 ERROR ceilometer.dispatcher.gnocchi
> > ConnectFailure: Unable to establish connection to
> > http://10.10.10.69:8041/v1/resource/generic/c900fd60-0b65-57b5-a481-eaee8e116312/metric/network.incoming.bytes.rate/measures
> 
> 
> is this failing on all your requests or just some? do you have data in 
> your gnocchi?

Hello Gordon!

Yes, I have data in gnocchi. Only some requests are failing.


> > 
> > I run uwsgi with the following config:
> > 
> > [uwsgi]
> > #http-socket = 127.0.0.1:8000
> > http-socket = 10.10.10.69:8041
> 
> this should work but i imagine it's not behind a proxy so you could use 
> http instead of http-socket.

Yes, I run it directly without an http proxy server. With the http option it
doesn't start:

*** Starting uWSGI 2.0.12-debian (64bit) on [Fri Jun 23 19:03:56 2017] ***
compiled with version: 5.3.1 20160412 on 13 April 2016 08:36:06
os: Linux-4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017
nodename: ZABBIX1
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /root
detected binary path: /usr/bin/uwsgi-core
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your processes number limit is 15650
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: enabled
Python version: 2.7.12 (default, Nov 19 2016, 06:48:10)  [GCC 5.4.0 20160609]
Python main interpreter initialized at 0x1280140
python threads support enabled
The -s/--socket option is missing and stdin is not a socket.
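
(A guess at a workaround, in case this plugin-based uwsgi build needs an
explicit worker socket behind the http router - untested, and the internal
address is arbitrary:)

[uwsgi]
; external HTTP router
http = 10.10.10.69:8041
; internal socket the workers bind to; the router forwards requests here
socket = 127.0.0.1:8040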


> > 
> > # Set the correct path depending on your installation
> > wsgi-file = /usr/local/bin/gnocchi-api
> > logto = /var/log/gnocchi/gnocchi-uwsgi.log
> > 
> > master = true
> > die-on-term = true
> > threads = 1
> > # Adjust based on the number of CPU
> > processes = 5
> > enabled-threads = true
> > thunder-lock = true
> > plugins = python
> > buffer-size = 65535
> > lazy-apps = true
> > 
> > 
> > I don't understand why this happens.
> > Maybe I should point wsgi-file as 
> > /usr/local/lib/python2.7/dist-packages/gnocchi/rest/app.wsgi ?
> 
> /usr/local/bin/gnocchi-api is correct... assuming it's in that path and 
> not /usr/bin/gnocchi-api
> 
> > From the uwsgi manual I read that direct parsing of http is slow. So maybe I
> > need to use apache with the uwsgi mod?
> > 
> 
> not sure about this part. do you have a lot of metrics being pushed to 
> gnocchi? you can minimised connection requirements by setting 
> batch_size/batch_timeout for collector (i think mitaka should support 
> this?). i believe in the gate we have 2 processes assigned to uwsgi so 5 
> should be sufficient.
> 
> cheers,
> -- 
> gord

The quantity of metrics is very low. I'm not sure that batch_size works.
Regarding the batch_timeout: what will happen when the timeout is reached?
Will ceilometer throw an error to the log file and discard the whole batch?
I've got this timeout set to 300, but every minute I receive errors if the
api doesn't work correctly.


-- 
Best regards,
Mate200




Re: [openstack-dev] [barbican] Help for Barbican and UWSGI Community Goal

2017-06-23 Thread Matthew Treinish
On Fri, Jun 23, 2017 at 04:11:50PM +, Dave McCowan (dmccowan) wrote:
> The Barbican team is currently lacking a UWSGI expert.
> We need help identifying what work items we have to meet the UWSGI community 
> goal.[1]
> Could someone with expertise in this area review our code and docs [2] and 
> help me put together a to-do list?

So honestly barbican is probably already like 90% of the way there. It was
already running everything as a proper wsgi script under uwsgi. The only thing
missing was the apache config to use mod_proxy_uwsgi to have all the api
servers running on port 80.

It was also doing everything manually instead of relying on the common
functionality in PBR and devstack to handle creating wsgi entrypoints and
deploying wsgi apps.
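
(For anyone unfamiliar with that mechanism: PBR generates the wsgi script from
an entry point in setup.cfg, roughly like the sketch below - the module path
is illustrative, not necessarily barbican's actual one:)

[entry_points]
wsgi_scripts =
    barbican-wsgi-api = barbican.api.app:get_api_wsgi_script

The generated barbican-wsgi-api script exposes the 'application' object that
uwsgi or mod_wsgi then loads.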

I pushed up:

https://review.openstack.org/#/q/topic:deploy-in-wsgi

To take care of the gaps and make everything use the common mechanisms. It
probably will need a little bit of work before it's ready to go. (I didn't
bother testing anything before I pushed it)

-Matt Treinish


 
> [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
> [2] https://git.openstack.org/cgit/openstack/barbican/tree/




Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-23 Thread Amrith Kumar
On Thu, Jun 22, 2017 at 4:38 PM, Zane Bitter  wrote:

> (Top posting. Deal with it ;)
>
>
Yes, please keep the conversation going; top posting is fine, the k8s
issue isn't 'off topic'.


> You're both right!
>
> Making OpenStack monolithic is not the answer. In fact, rearranging Git
> repos has nothing to do with the answer.
>
> But back in the day we had a process (incubation) for adding stuff to
> OpenStack that it made sense to depend on being there. It was a highly
> imperfect process. We got rid of that process with the big tent reform, but
> didn't really replace it with anything at all. Tags never evolved into a
> replacement as I hoped they would.
>
> So now we have a bunch of things that are integral to building a
> "Kubernetes-like experience for application developers" - secret storage,
> DNS, load balancing, asynchronous messaging - that exist but are not in
> most clouds. (Not to mention others like fine-grained authorisation control
> that are completely MIA.)
>
> Instead of trying to drive adoption of all of that stuff, we are either
> just giving up or reinventing bits of it, badly, in multiple places. The
> biggest enemy of "do one thing and do it well" is when a thing that you
> need to do was chosen by a project in another silo as their "one thing",
> but you don't want to just depend on that project because it's not widely
> adopted.
>
> I'm not saying this is an easy problem. It's something that the
> proprietary public cloud providers don't face: if you have only one cloud
> then you can just design everything to be as tightly integrated as it needs
> to be. When you have multiple clouds and the components are optional you
> have to do a bit more work. But if those components are rarely used at all
> then you lose the feedback loop that helps create a single polished
> implementation and everything else has to choose between not integrating,
> or implementing just the bits it needs itself so that whatever smaller
> feedback loop does manage to form, the benefits are contained entirely
> within the silo. OpenStack is arguably the only cloud project that has to
> deal with this. (Azure is also going into the same market, but they already
> have the feedback loop set up because they run their own public cloud built
> from the components.) Figuring out how to empower the community to solve
> this problem is our #1 governance concern IMHO.
>
> In my view, one of the keys is to stop thinking of OpenStack as an
> abstraction layer over a bunch of vendor technologies. If you think of Nova
> as an abstraction layer over libvirt/Xen/HyperV, and Keystone as an
> abstraction layer over LDAP/ActiveDirectory, and Cinder/Neutron as an
> abstraction layer over a bunch of storage/network vendors, then two things
> will happen. The first is unrelenting "pressure from vendors to add
> yet-another-specialized-feature to the codebase" that you won't be able
> to push back against because you can't point to a competing vision. And the
> second is that you will never build a integrated, application-centric
> cloud, because the integration bit needs to happen at the layer above the
> backends we are abstracting.
>
> We need to think of those things as the compute, authn, block storage and
> networking components of an integrated, application-centric cloud. And to
> remember that *by no means* are those the only components it will need -
> "The mission of Kubernetes is much smaller than OpenStack"; there's a lot
> we need to do.
>
> So no, the strength of k8s isn't in having a monolithic git repo (and I
> don't think that's what Kevin was suggesting). That's actually a
> slow-motion train-wreck waiting to happen. Its strength is being able to do
> all of this stuff and still be easy enough to install, so that there's no
> question of trying to build bits of it without relying on shared primitives.
>
> cheers,
> Zane.
>
> On 22/06/17 13:05, Jay Pipes wrote:
>
>> On 06/22/2017 11:59 AM, Fox, Kevin M wrote:
>>
>>> My $0.02.
>>>
>>> That view of dependencies is why Kubernetes development is outpacing
>>> OpenStacks and some users are leaving IMO. Not trying to be mean here but
>>> trying to shine some light on this issue.
>>>
>>> Kubernetes at its core has essentially something kind of equivalent to
>>> keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses),
>>> heat with convergence (deployments/daemonsets/etc), barbican (secrets),
>>> designate (kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops
>>> don't have to work hard to get all of it, users can assume it's all there,
>>> and devs don't have many silos to cross to implement features that touch
>>> multiple pieces.
>>>
>>
>> I think it's kind of hysterical that you're advocating a monolithic
>> approach when the thing you're advocating (k8s) is all about enabling
>> non-monolithic microservices architectures.
>>
>> Look, the fact of the matter is that OpenStack's mission is larger than
>> that of Kubernetes.

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-23 Thread Amrith Kumar
Kevin, just one comment inline below.

On Thu, Jun 22, 2017 at 3:33 PM, Fox, Kevin M  wrote:

> No, I'm not necessarily advocating a monolithic approach.
>
> I'm saying that they have decided to start with functionality and accept
> what's needed to get the task done. There aren't really such strong walls
> between the various functions, rbac/secrets/kubelet/etc. They don't
> spawn off a whole new project just to add functionality. they do so only
> when needed. They also don't balk at one feature depending on another.
>
> rbac's important, so they implemented it. ssl cert management was
> important. so they added that. adding a feature that restricts secret
> downloads only to the physical nodes that need them could then reuse the rbac
> system and ssl cert management.
>
> Their sigs are more oriented to features/functionality (or categories
> thereof), not so much specific components. We need to do X. X may involve
> changes to components A and B.
>
> OpenStack now tends to start with A and B and we try and work backwards
> towards implementing X, which is hard due to the strong walls and unclear
> ownership of the feature. And the general solution has been to try and make
> C but not commit to C being in the core so users cant depend on it which
> hasn't proven to be a very successful pattern.
>
> You're right, they are breaking up their code base as needed, like nova did.
> I'm coming around to that being a pretty good approach to some things.
> starting things is simpler, and if it ends up not needing its own whole
> project, then it doesn't get one. if it needs one, then it gets one. It's
> not, by default, "start a whole new project with db user, db schema, api,
> scheduler, etc." And the project might not end up with daemons split up in
> exactly the way you would expect if you prematurely broke off a
> project not knowing exactly how it might integrate with everything else.
>
> Maybe the porcelain api that's been discussed for a while is part of the
> solution. initial stuff can prototyped/start there and break off as needed
> to separate projects and moved around without the user needing to know
> where it ends up.
>
> You're right that OpenStack's scope is much greater, and I think that the
> commons are even more important in that case. If it doesn't have a solid
> base, every project has to re-implement its own base. That takes a huge
> amount of manpower all around. It's not sustainable.
>
> I guess we've gotten pretty far away from discussing Trove at this point.
>

Please keep the conversation going.


>
> Thanks,
> Kevin
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Thursday, June 22, 2017 10:05 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect
> Trove
>
> On 06/22/2017 11:59 AM, Fox, Kevin M wrote:
> > My $0.02.
> >
> > That view of dependencies is why Kubernetes development is outpacing
> OpenStacks and some users are leaving IMO. Not trying to be mean here but
> trying to shine some light on this issue.
> >
> > Kubernetes at its core has essentially something kind of equivalent to
> keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses),
> heat with convergence (deployments/daemonsets/etc), barbican (secrets),
> designate (kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops
> don't have to work hard to get all of it, users can assume it's all there,
> and devs don't have many silos to cross to implement features that touch
> multiple pieces.
>
> I think it's kind of hysterical that you're advocating a monolithic
> approach when the thing you're advocating (k8s) is all about enabling
> non-monolithic microservices architectures.
>
> Look, the fact of the matter is that OpenStack's mission is larger than
> that of Kubernetes. And to say that "Ops don't have to work hard" to get
> and maintain a Kubernetes deployment (which, frankly, tends to be dozens
> of Kubernetes deployments, one for each tenant/project/namespace) is
> completely glossing over the fact that by abstracting away the
> infrastructure (k8s' "cloud provider" concept), Kubernetes developers
> simply get to ignore some of the hardest and trickiest parts of operations.
>
> So, let's try to compare apples to apples, shall we?
>
> It sounds like the end goal that you're advocating -- more than anything
> else -- is an easy-to-install package of OpenStack services that
> provides a Kubernetes-like experience for application developers.
>
> I 100% agree with that goal. 100%.
>
> But pulling Neutron, Cinder, Keystone, Designate, Barbican, and Octavia
> back into Nova is not the way to do that. You're trying to solve a
> packaging and installation problem with a code structure solution.
>
> In fact, if you look at the Kubernetes development community, you see
> the *opposite* direction being taken: they have broken out and are
> actively breaking out large pieces of the 

Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-23 Thread Kendall Nelson
I think more than one champion is helpful in some cases, but it's dangerous
when you get more than two: each person assumes the others will
handle things unless they are very communicative with one another :)

Somewhere else in this thread the idea of a committee and a chair was
mentioned which seems like a good way of handling it. One primary point of
contact that delegates to the others when they need help.

-Kendall (diablo_rojo)

On Thu, Jun 22, 2017 at 1:49 PM Lance Bragstad  wrote:

>
>
> On 06/22/2017 12:57 PM, Mike Perez wrote:
>
> Hey all,
>
> In the community wide goals, we started as a group discussing goals at the
> OpenStack Forum. Then we brought those ideas to the mailing list to
> continue the discussion and include those that were not able to be at the
> forum. The discussions help the TC decide on what goals we will do for the
> Queens release. The goals that have the most support so far are:
>
> 1) Split Tempest plugins into separate repos/projects [1]
> 2) Move policy and policy docs into code [2]
>
> In the recent TC meeting [3] it was recognized that goals in Pike haven't
> been going as smoothly and not being completed. There will be a follow up
> thread to cover gathering feedback in an etherpad later, but for now the TC
> has discussed potential actions to improve completing goals in Queens.
>
> An idea that came from the meeting was creating a role of "Champions", who
> are the drum beaters to get a goal done by helping projects with tracking
> status and sometimes doing code patches. These would be interested
> volunteers who have a good understanding of their selected goal and its
> implementation to be a trusted person.
>
> What do people think before we bikeshed on the name? Would having a
> champion volunteer to each goal to help? Are there ideas that weren't
> mentioned in the TC meeting [3]?
>
> I like this idea. Some projects might have existing context about a
> particular goal built up before it's even proposed, others might not. I
> think this will help share knowledge across the projects that understand
> the goal with projects who might not be as familiar with it (even though
> the community goal proposal process attempts to fix that).
>
> Is the role of a goal "champion" limited to a single person? Can it be
> distributed across multiple people pending actions are well communicated?
>
>
> [1] -
> https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
> [2] -
> https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg106392.html
> [3] -
> http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-06-20-20.01.log.html#l-10
>
> —
> Mike Perez
>
>


Re: [openstack-dev] [nova] bug triage experimentation

2017-06-23 Thread Clay Gerrard
Sean,

This sounds amazing and Swift could definitely use some [automated]
assistance here.  It would help if you could throw out a WIP somewhere.

First thought that comes to mind tho  storyboard.o.o :\

-Clay

On Fri, Jun 23, 2017 at 9:52 AM, Sean Dague  wrote:

> The Nova bug backlog is just over 800 open bugs, which while
> historically not terrible, remains too large to be collectively usable
> to figure out where things stand. We've had a few recent issues where we
> just happened to discover upgrade bugs filed 4 months ago that needed
> fixes and backports.
>
> Historically we've tried to just solve the bug backlog with volunteers.
> We've had many a brave person dive into here, and burn out after 4 - 6
> months. And we're currently without a bug lead. Having done a big giant
> purge in the past
> (http://lists.openstack.org/pipermail/openstack-dev/2014-
> September/046517.html)
> I know how daunting this all can be.
>
> I don't think that people can currently solve the bug triage problem at
> the current workload that it creates. We've got to reduce the smart
> human part of that workload.
>
> But, I think that we can also learn some lessons from what active github
> projects do.
>
> #1 Bot away bad states
>
> There are known bad states of bugs - In Progress with no open patch,
> Assigned but not In Progress. We can just bot these away with scripts.
> Even better would be to react immediately on bugs like those, that helps
> to train folks how to use our workflow. I've got some starter scripts
> for this up at - https://github.com/sdague/nova-bug-tools
>
> #2 Use tag based workflow
>
> One lesson from github projects is that the github tracker has no workflow.
> Issues are opened or closed. Workflow has to be invented by every team
> based on a set of tags. Sometimes that's annoying, but often times it's
> super handy, because it allows the tracker to change workflows and not
> try to change the meaning of things like "Confirmed vs. Triaged" in your
> mind.
>
> We can probably tag for information we know we need a lot more easily. I'm
> considering something like
>
> * needs.system-version
> * needs.openstack-version
> * needs.logs
> * needs.subteam-feedback
> * has.system-version
> * has.openstack-version
> * has.reproduce
>
> For some of these, a bot can process the bug text and tell if that info was
> provided, and comment on how to provide the updated info. Some of this
> would be human, but with official tags, it would probably help.
>
> #3 machine assisted functional tagging
>
> I'm playing around with some things that might be useful in mapping new
> bugs into existing functional buckets like: libvirt, volumes, etc. We'll
> see how useful it ends up being.
>
> #4 reporting on smaller slices
>
> Build some tooling to report on the status and change over time of bugs
> under various tags. This will help visualize how we are doing
> (hopefully) and where the biggest piles of issues are.
>
> The intent is the normal unit of interaction would be one of these
> smaller piles. Be they the 76 libvirt bugs, 61 volumes bugs, or 36
> vmware bugs. It would also highlight the rates of change in these piles,
> and what's getting attention and what is not.
>
>
> This is going to be kind of an ongoing experiment, but as we currently
> have no one spear heading bug triage, it seemed like a good time to try
> this out.
>
> Comments and other suggestions are welcomed. The tooling will have the
> nova flow in mind, but I'm trying to make it so it takes a project name
> as params on all the scripts, so anyone can use it. It's a little hack
> and slash right now to discover what the right patterns are.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-23 Thread gordon chung


On 22/06/17 01:57 PM, Mike Perez wrote:
> An idea that came from the meeting was creating a role of "Champions",
> who are the drum beaters to get a goal done by helping projects with
> tracking status and sometimes doing code patches. These would be
> interested volunteers who have a good understanding of their selected
> goal and its implementation to be a trusted person.
>
> What do people think before we bikeshed on the name? Would having a
> champion volunteer to each goal to help? Are there ideas that weren't
> mentioned in the TC meeting [3]?

do we know why they're not being completed? indifference? lack of resources?

i like the champion idea although i think its scope should be expanded. 
i didn't mention this in meeting and the following has no legit research 
behind it so feel free to disregard but i imagine some of the 
indifference towards the goals is because:

- it's often trivial (but important) work
many projects are already flooded with a lot of non-trivial, 
self-interest goals AND a lot of trivial (and unimportant) copy/paste 
patches, so it's hard to feel passionate and find motivation to 
do it. the champion stuff may help here.

- there is a disconnect between the TC and the projects.
it seems there is a requirement for the projects to engage the TC but 
not necessarily the other way around. for many projects, i'm fairly 
certain nothing would change whether they actively engaged the TC or 
just left the relationship as is and had minimal/no interaction. i apologise 
if that's blunt but just based on my own prior experience.

i don't know if the TC wants to become PMs but having the goals i feel 
sort of requires the TC to be PMs and to actually interact with the PTLs 
regularly, not just about the goal itself but the project and its role 
in openstack. maybe it's as designed, but if there's no relationship 
there, i don't think 'TC wants you to do this' will get something done. 
it's in the same vein as how it's easier to get a patch approved if 
you're engaged in a project for some time, as opposed to a patch out of 
the blue (disclaimer: i did not study sociology).

just my random thoughts.

cheers,
-- 
gord


[openstack-dev] [nova] bug triage experimentation

2017-06-23 Thread Sean Dague
The Nova bug backlog is just over 800 open bugs, which while
historically not terrible, remains too large to be collectively usable
to figure out where things stand. We've had a few recent issues where we
just happened to discover upgrade bugs filed 4 months ago that needed
fixes and backports.

Historically we've tried to just solve the bug backlog with volunteers.
We've had many a brave person dive into here, and burn out after 4 - 6
months. And we're currently without a bug lead. Having done a big giant
purge in the past
(http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html)
I know how daunting this all can be.

I don't think that people can currently solve the bug triage problem at
the current workload that it creates. We've got to reduce the smart
human part of that workload.

But, I think that we can also learn some lessons from what active github
projects do.

#1 Bot away bad states

There are known bad states of bugs - In Progress with no open patch,
Assigned but not In Progress. We can just bot these away with scripts.
Even better would be to react immediately on bugs like those, that helps
to train folks how to use our workflow. I've got some starter scripts
for this up at - https://github.com/sdague/nova-bug-tools
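
The core of such a script is small. A rough sketch with launchpadlib plus a
Gerrit query - the "open patch" heuristic here is my illustration, not
necessarily what the repo above does:

import json

import requests
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('nova-bug-tools', 'production')
nova = lp.projects['nova']

for task in nova.searchTasks(status=['In Progress']):
    bug = task.bug
    # heuristic: an In Progress bug should have at least one open review
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': 'message:%s status:open' % bug.id})
    changes = json.loads(resp.text[4:])  # Gerrit prepends ")]}'" to JSON
    if not changes:
        task.status = 'Confirmed'  # back to triaged-but-unowned
        task.lp_save()
        bug.newMessage(content='No open patch found for this In Progress '
                               'bug; moving back to Confirmed.')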

#2 Use tag based workflow

One lesson from github projects is that the github tracker has no workflow.
Issues are opened or closed. Workflow has to be invented by every team
based on a set of tags. Sometimes that's annoying, but often times it's
super handy, because it allows the tracker to change workflows and not
try to change the meaning of things like "Confirmed vs. Triaged" in your
mind.

We can probably tag for information we know we need a lot more easily. I'm
considering something like

* needs.system-version
* needs.openstack-version
* needs.logs
* needs.subteam-feedback
* has.system-version
* has.openstack-version
* has.reproduce

For some of these, a bot can process the bug text and tell if that info was
provided, and comment on how to provide the updated info. Some of this
would be human, but with official tags, it would probably help.
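
Checking for that info can start out as dumb pattern matching; the regex below
is just a guess at what would catch most reports:

import re

VERSION_RE = re.compile(r'\b(mitaka|newton|ocata|pike|master)\b', re.I)

def retag(bug):
    # flip needs.openstack-version to has.openstack-version when the
    # description appears to name a release
    tags = set(bug.tags)
    if VERSION_RE.search(bug.description or ''):
        tags.discard('needs.openstack-version')
        tags.add('has.openstack-version')
    else:
        tags.add('needs.openstack-version')
    bug.tags = list(tags)
    bug.lp_save()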

#3 machine assisted functional tagging

I'm playing around with some things that might be useful in mapping new
bugs into existing functional buckets like: libvirt, volumes, etc. We'll
see how useful it ends up being.

#4 reporting on smaller slices

Build some tooling to report on the status and change over time of bugs
under various tags. This will help visualize how we are doing
(hopefully) and where the biggest piles of issues are.

The intent is the normal unit of interaction would be one of these
smaller piles. Be they the 76 libvirt bugs, 61 volumes bugs, or 36
vmware bugs. It would also highlight the rates of change in these piles,
and what's getting attention and what is not.


This is going to be kind of an ongoing experiment, but as we currently
have no one spear heading bug triage, it seemed like a good time to try
this out.

Comments and other suggestions are welcomed. The tooling will have the
nova flow in mind, but I'm trying to make it so it takes a project name
as params on all the scripts, so anyone can use it. It's a little hack
and slash right now to discover what the right patterns are.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] realtime kvm cpu affinities

2017-06-23 Thread Chris Friesen

On 06/23/2017 09:35 AM, Henning Schild wrote:

On Fri, 23 Jun 2017 11:11:10 +0200,
Sahid Orentino Ferdjaoui wrote:



In Linux RT context, and as you mentioned, the non-RT vCPU can acquire
some guest kernel lock, then be pre-empted by emulator thread while
holding this lock. This situation blocks RT vCPUs from doing its
work. So that is why we have implemented [2]. For DPDK I don't think
we have such problems because it's running in userland.

So for DPDK context I think we could have a mask like we have for RT
and basically considering vCPU0 to handle best effort works (emulator
threads, SSH...). I think it's the current pattern used by DPDK users.


DPDK is just a library and one can imagine an application that has
cross-core communication/synchronisation needs where the emulator
slowing down vcpu0 will also slow down vcpu1. Your DPDK application would
have to know which of its cores did not get a full pcpu.

I am not sure what the DPDK-example is doing in this discussion, would
that not just be cpu_policy=dedicated? I guess normal behaviour of
dedicated is that emulators and io happily share pCPUs with vCPUs and
you are looking for a way to restrict emulators/io to a subset of pCPUs
because you can live with some of them being not 100%.


Yes.  A typical DPDK-using VM might look something like this:

vCPU0: non-realtime, housekeeping and I/O, handles all virtual interrupts and 
"normal" linux stuff, emulator runs on same pCPU

vCPU1: realtime, runs in tight loop in userspace processing packets
vCPU2: realtime, runs in tight loop in userspace processing packets
vCPU3: realtime, runs in tight loop in userspace processing packets

In this context, vCPUs 1-3 don't really ever enter the kernel, and we've 
offloaded as much kernel work as possible from them onto vCPU0.  This works 
pretty well with the current system.



For RT we have to isolate the emulator threads to an additional pCPU
per guests or as your are suggesting to a set of pCPUs for all the
guests running.

I think we should introduce a new option:

   - hw:cpu_emulator_threads_mask=^1

If set in nova.conf, that mask will be applied to the set of all host
CPUs (vcpu_pin_set) to basically pack the emulator threads of all VMs
running here (useful for RT context).


That would allow modelling exactly what we need.
In nova.conf we are talking absolute known values, no need for a mask
and a set is much easier to read. Also using the same name does not
sound like a good idea.
And the name vcpu_pin_set clearly suggests what kind of load runs here;
if using a mask it should be called pin_set.


I agree with Henning.

In nova.conf we should just use a set, something like "rt_emulator_vcpu_pin_set" 
which would be used for running the emulator/io threads of *only* realtime 
instances.


We may also want to have "rt_emulator_overcommit_ratio" to control how many 
threads/instances we allow per pCPU.



If set in flavor extra specs, it will be applied to the vCPUs dedicated for
the guest (useful for DPDK context).


And if both are present the flavor wins and nova.conf is ignored?


In the flavor I'd like to see it be a full bitmask, not an exclusion mask with 
an implicit full set.  Thus the end-user could specify 
"hw:cpu_emulator_threads_mask=0" and get the emulator threads to run alongside 
vCPU0.


Henning, there is no conflict, the nova.conf setting and the flavor setting are 
used for two different things.
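
To make the two uses concrete, here is roughly how the proposed knobs would
look. The option and property names are the proposals from this thread, not
anything nova supports today, and the values are arbitrary:

# nova.conf on a compute node hosting realtime guests
[DEFAULT]
rt_emulator_vcpu_pin_set = 0-1      # pCPUs for emulator/io threads of RT guests
rt_emulator_overcommit_ratio = 2.0  # cap on RT instances per pCPU in that set

# flavor for a DPDK-style guest: emulator threads packed alongside vCPU0
openstack flavor set dpdk.large \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_emulator_threads_mask=0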


Chris



Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-23 Thread Doug Hellmann

> On Jun 23, 2017, at 12:10 PM, j.kl...@cloudbau.de wrote:
> 
> Hi Doug,
> 
> first of all sorry for the late response. I am the current PTL of the 
> openstack-chef project and read the spec a while ago, but did not really 
> connect it to our project. To be honest, i am not really sure what we would 
> move and how to do it. Currently we have some wiki pages, but most of them 
> are pretty old and completely outdated since we stopped caring for them after 
> we dropped to only a few contributors and moved our focus to maintaining the 
> project code itself.
> 
> Currently we own and maintain 12 service and ops cookbook repositories:
> 
> cookbook-openstack-block-storage
> cookbook-openstack-common
> cookbook-openstack-compute
> cookbook-openstack-dashboard
> cookbook-openstack-identity
> cookbook-openstack-image
> cookbook-openstack-integration-test
> cookbook-openstack-network
> cookbook-openstack-ops-database
> cookbook-openstack-ops-messaging
> cookbook-openstack-orchestration
> cookbook-openstack-telemetry
> 
> one (not very often used) specs repository:
> 
> openstack-chef-specs
> 
>  and one repo to integrate them all:
> 
> openstack-chef-repo
> 
> All of these repos have some READMEs that contain some of the documentation 
> needed to use these. The documentation on how to use the openstack-chef 
> project as a combination of all these cookbooks is located under 
> https://github.com/openstack/openstack-chef-repo/tree/master/doc (which 
> might already be close to the right space?). Looking through the 
> openstack-manuals repo, I did not find any documentation specific to our 
> projects, so I think we do not need to export/import any of it. In my opinion 
> the easiest way for us to follow the proposed change would be to just move the 
> stuff we have under the ‘openstack-chef-repo’ mentioned above and add some 
> more files to follow the ‘minimal layout’. If you agree on this, I can try to 
> push a patch for it next week and add you as a reviewer?

Yes, it sounds like you only need to move things you already have around into 
the new structure. You don’t need to create empty directories or pages for 
content that doesn’t apply, so if you don’t have any configuration guide 
instructions for example then you don’t need to create the configuration/ 
directory.
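
For reference, the minimal layout from the spec looks roughly like this (only
the pieces that apply to you; the install/ and contributor/ subtrees are just
examples):

doc/
  source/
    conf.py          (sphinx config using openstackdocstheme)
    index.rst
    install/
      index.rst
    contributor/
      index.rst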

If you add me as a reviewer, I’ll help make sure you have it organized as 
expected. And if you use the doc-migration tag the other folks working on the 
migration will review the patch, too.

Doug

> 
> Cheers,
> Jan
> 
> 
> On 23. June 2017 at 16:16:58, Doug Hellmann (d...@doughellmann.com) wrote:
> 
>> 
>>> On Jun 23, 2017, at 8:57 AM, Renat Akhmerov wrote:
>>> 
>>> I can say for Mistral. We only planned to add Mistral docs to the central 
>>> repo but didn’t do it yet. So, as far as I understand, we don’t need to 
>>> move anything. We’ll review the spec and adjust the folder structure 
>>> according to the proposal.
>> 
>> Please do review the steps in the spec. Not all of them are about moving 
>> content. There are steps for setting up the new build job so that the 
>> content will be published to the new URLs as well.
>> 
>> Doug
>> 
>>> 
>>> Thanks
>>> 
>>> Renat Akhmerov
>>> @Nokia
>>> 
>>> On 23 June 2017, 3:32 +0700, Doug Hellmann wrote:
 Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
> Hi everyone,
> 
> Doug and I have written up a spec following on from the conversation [0] 
> that we had regarding the documentation publishing future.
> 
> Please take the time out of your day to review the spec as this affects 
> *everyone*.
> 
> See: https://review.openstack.org/#/c/472275/ 
> 
> 
> I will be PTO from the 9th – 19th of June. If you have any pressing 
> concerns, please email me and I will get back to you as soon as I can, 
> or, email Doug Hellmann and hopefully he will be able to assist you.
> 
> Thanks,
> 
> Alex
> 
> [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html 
> 
 
 Andreas pointed out that the new documentation job will behave a
 little differently from the old setup, and thought I should mention
 it so that people aren't surprised.
 
 The new job is configured to update the docs for all repos every
 time a patch is merged, not just when we tag releases. The server
 projects have been working that way, but this is different for some
 of the libraries, especially the clients.
 
 I will be going back and adding a step to build the docs when we
 tag releases after the move actually starts, so that we can link
 to docs for specific versions of projects.

[openstack-dev] [nova] Need volunteer(s) to help migrate project docs

2017-06-23 Thread Matt Riedemann
The spec [1] with the plan to migrate project-specific docs from 
docs.openstack.org to each project has merged.


There are a number of steps outlined in there which need people from the 
project teams, e.g. nova, to do for their project. Some of it we're 
already doing, like building a config reference, API reference, using 
the openstackdocstheme, etc. But there are other things like moving the 
install guide for compute into the nova repo.


Is anyone interested in owning this work? There are enough tasks that it 
could probably be a couple of people coordinating. It also needs to be 
done by the end of the Pike release, so time is a factor.


[1] https://review.openstack.org/#/c/472275/

--

Thanks,

Matt



[openstack-dev] [barbican] Help for Barbican and UWSGI Community Goal

2017-06-23 Thread Dave McCowan (dmccowan)
The Barbican team is currently lacking a UWSGI expert.
We need help identifying what work items we have to meet the UWSGI community 
goal.[1]
Could someone with expertise in this area review our code and docs [2] and help 
me put together a to-do list?

Thanks!
Dave (dave-mccowan)

[1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
[2] https://git.openstack.org/cgit/openstack/barbican/tree/



Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-23 Thread j.kl...@cloudbau.de
Hi Doug,

first of all sorry for the late response. I am the current PTL of the 
openstack-chef project and read the spec a while ago, but did not really 
connect it to our project. To be honest, I am not really sure what we would 
move and how to do it. Currently we have some wiki pages, but most of them are 
pretty old and completely outdated since we stopped caring for them after we 
dropped to only a few contributors and moved our focus to maintaining the 
project code itself.

Currently we own and maintain 12 service and ops cookbook repositories:

cookbook-openstack-block-storage
cookbook-openstack-common
cookbook-openstack-compute
cookbook-openstack-dashboard
cookbook-openstack-identity
cookbook-openstack-image
cookbook-openstack-integration-test
cookbook-openstack-network
cookbook-openstack-ops-database
cookbook-openstack-ops-messaging
cookbook-openstack-orchestration
cookbook-openstack-telemetry

one (not very often used) specs repository:

openstack-chef-specs

 and one repo to integrate them all:

openstack-chef-repo

All of these repos have some READMEs that contain some of the documentation 
needed to use these. The documentation on how to use the openstack-chef project 
as a combination of all these cookbooks is located under 
https://github.com/openstack/openstack-chef-repo/tree/master/doc (which might 
already be close to the right space?). Looking through the openstack-manuals 
repo, I did not find any documentation specific to our projects, so I think we 
do not need to export/import any of it. In my opinion the easiest way for us to 
follow the proposed change would be to just move the stuff we have under the 
‘openstack-chef-repo’ mentioned above and add some more files to follow the 
‘minimal layout’. If you agree on this, I can try to push a patch for it next 
week and add you as a reviewer?

Cheers,
Jan


On 23. June 2017 at 16:16:58, Doug Hellmann (d...@doughellmann.com) wrote:


On Jun 23, 2017, at 8:57 AM, Renat Akhmerov  wrote:

I can say for Mistral. We only planned to add Mistral docs to the central repo 
but didn’t do it yet. So, as far as I understand, we don’t need to move 
anything. We’ll review the spec and adjust the folder structure according to 
the proposal.

Please do review the steps in the spec. Not all of them are about moving 
content. There are steps for setting up the new build job so that the content 
will be published to the new URLs as well.

Doug


Thanks

Renat Akhmerov
@Nokia

23 июня 2017 г., 3:32 +0700, Doug Hellmann , писал:
Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
Hi everyone,

Doug and I have written up a spec following on from the conversation [0] that 
we had regarding the documentation publishing future.

Please take the time out of your day to review the spec as this affects 
*everyone*.

See: https://review.openstack.org/#/c/472275/

I will be PTO from the 9th – 19th of June. If you have any pressing concerns, 
please email me and I will get back to you as soon as I can, or, email Doug 
Hellmann and hopefully he will be able to assist you.

Thanks,

Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html

Andreas pointed out that the new documentation job will behave a
little differently from the old setup, and thought I should mention
it so that people aren't surprised.

The new job is configured to update the docs for all repos every
time a patch is merged, not just when we tag releases. The server
projects have been working that way, but this is different for some
of the libraries, especially the clients.

I will be going back and adding a step to build the docs when we
tag releases after the move actually starts, so that we can link
to docs for specific versions of projects. That change will be
transparent to everyone else, so I have it on the list for after
the migration is under way.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] realtime kvm cpu affinities

2017-06-23 Thread Henning Schild
Am Fri, 23 Jun 2017 11:11:10 +0200
schrieb Sahid Orentino Ferdjaoui :

> On Wed, Jun 21, 2017 at 12:47:27PM +0200, Henning Schild wrote:
> > Am Tue, 20 Jun 2017 10:04:30 -0400
> > schrieb Luiz Capitulino :
> >   
> > > On Tue, 20 Jun 2017 09:48:23 +0200
> > > Henning Schild  wrote:
> > >   
> > > > Hi,
> > > > 
> > > > We are using OpenStack for managing realtime guests. We modified
> > > > it and contributed to discussions on how to model the realtime
> > > > feature. More recent versions of OpenStack have support for
> > > > realtime, and there are a few proposals on how to improve that
> > > > further.
> > > > 
> > > > But there is still no full answer on how to distribute threads
> > > > across host-cores. The vcpus are easy but for the emulation and
> > > > io-threads there are multiple options. I would like to collect
> > > > the constraints from a qemu/kvm perspective first, and than
> > > > possibly influence the OpenStack development
> > > > 
> > > > I will put the summary/questions first, the text below provides
> > > > more context to where the questions come from.
> > > > - How do you distribute your threads when reaching the really
> > > > low cyclictest results in the guests? In [3] Rik talked about
> > > > > problems like lock holder preemption, starvation etc. but not
> > > > where/how to schedule emulators and io
> > > 
> > > We put emulator threads and io-threads in housekeeping cores in
> > > the host. I think housekeeping cores is what you're calling
> > > best-effort cores, those are non-isolated cores that will run host
> > > load.  
> > 
> > As expected, any best-effort/housekeeping core will do but overlap
> > with the vcpu-cores is a bad idea.
> >   
> > > > - Is it ok to put a vcpu and emulator thread on the same core as
> > > > long as the guest knows about it? Any funny behaving guest, not
> > > > just Linux.
> > > 
> > > We can't do this for KVM-RT because we run all vcpu threads with
> > > FIFO priority.  
> > 
> > Same point as above, meaning the "hw:cpu_realtime_mask" approach is
> > wrong for realtime.
> >   
> > > However, we have another project with DPDK whose goal is to
> > > achieve zero-loss networking. The configuration required by this
> > > project is very similar to the one required by KVM-RT. One
> > > difference though is that we don't use RT and hence don't use
> > > FIFO priority.
> > > 
> > > In this project we've been running with the emulator thread and a
> > > vcpu sharing the same core. As long as the guest housekeeping CPUs
> > > are idle, we don't get any packet drops (most of the time, what
> > > causes packet drops in this test-case would cause spikes in
> > > cyclictest). However, we're seeing some packet drops for certain
> > > guest workloads which we are still debugging.  
> > 
> > Ok but that seems to be a different scenario where hw:cpu_policy
> > dedicated should be sufficient. However if the placement of the io
> > and emulators has to be on a subset of the dedicated cpus something
> > like hw:cpu_realtime_mask would be required.
> >   
> > > > - Is it ok to make the emulators potentially slow by running
> > > > them on busy best-effort cores, or will they quickly be on the
> > > > critical path if you do more than just cyclictest? - our
> > > > experience says we don't need them reactive even with
> > > > rt-networking involved
> > > 
> > > I believe it is ok.  
> > 
> > Ok.
> >
> > > > Our goal is to reach a high packing density of realtime VMs. Our
> > > > pragmatic first choice was to run all non-vcpu-threads on a
> > > > shared set of pcpus where we also run best-effort VMs and host
> > > > load. Now the OpenStack guys are not too happy with that
> > > > because that is load outside the assigned resources, which
> > > > leads to quota and accounting problems.
> > > > 
> > > > So the current OpenStack model is to run those threads next to
> > > > one or more vcpu-threads. [1] You will need to remember that
> > > > the vcpus in question should not be your rt-cpus in the guest.
> > > > I.e. if vcpu0 shares its pcpu with the hypervisor noise your
> > > > preemptrt-guest would use isolcpus=1.
> > > > 
> > > > Is that kind of sharing a pcpu really a good idea? I could
> > > > imagine things like smp housekeeping (cache invalidation etc.)
> > > > to eventually cause vcpu1 having to wait for the emulator stuck
> > > > in IO.
> > > 
> > > Agreed. IIRC, in the beginning of KVM-RT we saw a problem where
> > > running vcpu0 on an non-isolated core and without FIFO priority
> > > caused spikes in vcpu1. I guess we debugged this down to vcpu1
> > > waiting a few dozen microseconds for vcpu0 for some reason.
> > > Running vcpu0 on a isolated core with FIFO priority fixed this
> > > (again, this was years ago, I won't remember all the details).
> > >   
> > > > Or maybe a busy polling vcpu0 starving its own emulator causing
> > > > high latency or even deadlocks.  
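
For readers mapping this thread onto Nova configuration, the extra specs
under discussion look roughly like the following hedged sketch (values are
illustrative; with this layout vcpu0 stays non-realtime, shares its pcpu
with the emulator/io threads, and the guest isolates the remaining vcpus,
e.g. isolcpus=1):

    # Hypothetical flavor extra specs; values are illustrative only.
    extra_specs = {
        'hw:cpu_policy': 'dedicated',    # pin every vcpu to its own pcpu
        'hw:cpu_realtime': 'yes',        # realtime vcpus run with FIFO priority
        'hw:cpu_realtime_mask': '^0',    # all vcpus realtime except vcpu0
    }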

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-23 Thread Mike Bayer



On 06/22/2017 11:59 AM, Fox, Kevin M wrote:

My $0.02.

That view of dependencies is why Kubernetes development is outpacing 
OpenStack's, and why some users are leaving, IMO. Not trying to be mean here, 
just trying to shine some light on this issue.

Kubernetes at its core has essentially something kind of equivalent to keystone 
(k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses), heat with 
convergence (deployments/daemonsets/etc), barbican (secrets), designate 
(kube-dns), and octavia (kube-proxy, svc, ingress) in one unit. Ops don't have 
to work hard to get all of it, users can assume it's all there, and devs don't 
have many silos to cross to implement features that touch multiple pieces.

Having this core functionality combined has allowed them to land features that 
are really important to users but that have proven difficult for OpenStack to 
do because of the silos. OpenStack's general pattern has been: stand up a new 
service for each new feature; then no one wants to depend on it, so it's 
ignored and each silo reimplements a lesser version of it.

The OpenStack commons then continues to suffer.

We need to stop this destructive cycle.

OpenStack needs to figure out how to increase its commons. Both internally and 
externally. etcd as a common service was a step in the right direction.


+1 to this, and it's a similar theme to my dismay a few weeks ago when I 
realized projects are looking to ditch oslo rather than improve it; 
since then I got to chase down a totally avoidable problem in Zaqar 
that's been confusing dozens of people because zaqar implemented their 
database layer as direct-to-SQLAlchemy rather than using oslo.db 
(https://bugs.launchpad.net/tripleo/+bug/1691951) and missed out on some 
basic stability features that oslo.db turns on.
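
To make that concrete, a minimal sketch (connection string and option names
illustrative) of what oslo.db sets up over a raw create_engine():

    # Hedged sketch: oslo.db's enginefacade vs. hand-rolled SQLAlchemy.
    from oslo_config import cfg
    from oslo_db.sqlalchemy import enginefacade

    # oslo.db applies pool/recycle defaults and pessimistic disconnect
    # handling that a raw create_engine() leaves off.
    context_manager = enginefacade.transaction_context()
    context_manager.configure(connection=cfg.CONF.database.connection)

    # The direct-to-SQLAlchemy equivalent silently skips all of that:
    # engine = sqlalchemy.create_engine(cfg.CONF.database.connection)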


There is a balance to be struck between monolithic and expansive for 
sure, but I think the monolith-phobia may be affecting the quality of 
the product.  It is possible to have clean modularity and separation of 
concerns in a codebase while still having tighter dependencies, it just 
takes more discipline to monitor the boundaries.





I think k8s needs to be another common service all the others can rely on. 
That could greatly simplify the rest of the OpenStack projects, as a lot of 
common functionality would no longer have to be implemented in each project.

We also need a way to break down the silo walls and allow more cross project 
collaboration for features. I fear the new push for letting projects run 
standalone will make this worse, not better, further fracturing OpenStack.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, June 22, 2017 12:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Fox, Kevin M wrote:

[...]
If you build a Tessmaster clone just to do mariadb, then you share nothing 
with the other communities and have to reinvent the wheel, yet again. 
Operators' load increases because the tool doesn't function like other tools.

If you rely on a container orchestration engine that's already cross-cloud, 
that can be easily deployed by the user or cloud operator, and fill in the 
gaps with what Trove wants to support (easy management of DBs), you get to 
reuse a lot of the commons, and the user's slight increase in investment in 
dealing with the bit of extra plumbing allows other things to also be easily 
added to their cluster. It's very rare that a user would need to deploy/manage 
only a database. The net load on the operator decreases, not increases.


I think the user-side tool could totally deploy on Kubernetes clusters
-- if that was the only possible target that would make it a Kubernetes
tool more than an open infrastructure tool, but that's definitely a
possibility. I'm not sure work is needed there though; there are already
tools (or charts) doing that?

For a server-side approach where you want to provide a DB-provisioning
API, I fear that making the functionality depend on K8s would mean
TroveV2/Hoard would not only depend on Heat and Nova, but also on
something that would deploy a Kubernetes cluster (Magnum?), which would
likely hurt its adoption (and reusability in simpler setups). Since
databases would just work perfectly well in VMs, it feels like a
gratuitous dependency addition?

We generally need to be very careful about creating dependencies between
OpenStack projects. On one side there are base services (like Keystone)
that we said it was alright to depend on, but depending on anything else
is likely to reduce adoption. Magnum adoption suffers from its
dependency on Heat. If Heat starts depending on Zaqar, we make the
problem worse. I understand it's a hard trade-off: you want to reuse
functionality rather than reinvent it in every project... we just need
to recognize the cost of doing that.

--
Thierry Carrez (ttx)


Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-23 Thread Doug Hellmann

> On Jun 23, 2017, at 8:57 AM, Renat Akhmerov  wrote:
> 
> I can say for Mistral. We only planned to add Mistral docs to the central 
> repo but didn’t do it yet. So, as far as I understand, we don’t need to move 
> anything. We’ll review the spec and adjust the folder structure according to 
> the proposal.

Please do review the steps in the spec. Not all of them are about moving 
content. There are steps for setting up the new build job so that the content 
will be published to the new URLs as well.

Doug

> 
> Thanks
> 
> Renat Akhmerov
> @Nokia
> 
> 23 июня 2017 г., 3:32 +0700, Doug Hellmann , писал:
>> Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
>>> Hi everyone,
>>> 
>>> Doug and I have written up a spec following on from the conversation [0] 
>>> that we had regarding the documentation publishing future.
>>> 
>>> Please take the time out of your day to review the spec as this affects 
>>> *everyone*.
>>> 
>>> See: https://review.openstack.org/#/c/472275/
>>> 
>>> I will be PTO from the 9th – 19th of June. If you have any pressing 
>>> concerns, please email me and I will get back to you as soon as I can, or, 
>>> email Doug Hellmann and hopefully he will be able to assist you.
>>> 
>>> Thanks,
>>> 
>>> Alex
>>> 
>>> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html
>> 
>> Andreas pointed out that the new documentation job will behave a
>> little differently from the old setup, and thought I should mention
>> it so that people aren't surprised.
>> 
>> The new job is configured to update the docs for all repos every
>> time a patch is merged, not just when we tag releases. The server
>> projects have been working that way, but this is different for some
>> of the libraries, especially the clients.
>> 
>> I will be going back and adding a step to build the docs when we
>> tag releases after the move actually starts, so that we can link
>> to docs for specific versions of projects. That change will be
>> transparent to everyone else, so I have it on the list for after
>> the migration is under way.
>> 
>> Doug
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-23 Thread Zane Bitter

On 23/06/17 05:31, Thierry Carrez wrote:

Zane Bitter wrote:

But back in the day we had a process (incubation) for adding stuff to
OpenStack that it made sense to depend on being there. It was a highly
imperfect process. We got rid of that process with the big tent reform,
but didn't really replace it with anything at all. Tags never evolved
into a replacement as I hoped they would.

So now we have a bunch of things that are integral to building a
"Kubernetes-like experience for application developers" - secret
storage, DNS, load balancing, asynchronous messaging - that exist but
are not in most clouds.


Yet another tangent in that thread, but you seem to regret a past that
never happened.


It kind of did. The TC used to require that new projects graduating into 
OpenStack didn't reimplement anything that an existing project in the 
integrated release already did. e.g. Sahara and Trove were required to 
use Heat for orchestration rather than rolling their own orchestration. 
The very strong implication was that once something was officially 
included in OpenStack you didn't develop the same thing again. It's true 
that nothing was ever enforced against existing projects (the only 
review was at incubation/graduation), but then again I can't think of a 
situation where it would have come up at that time.



The "integrated release" was never about stuff that you
can "depend on being there". It was about things that were tested to
work well together, and released together. Projects were incubating
until they were deemed mature-enough (and embedded-enough in our
community) that it was fine for other projects to take the hit to be
tested with them, and take the risk of being released together. I don't
blame you for thinking otherwise: since the integrated release was the
only answer we gave, everyone assumed it answered their specific
question[1]. And that was why we needed to get rid of it.


I agree and I supported getting rid of it. But not all of the roles it 
fulfilled (intended or otherwise) were replaced with anything. One of 
the things that fell by the wayside was the sense some of us had that we 
were building an integrated product with flexible deployment options, 
rather than a series of disconnected islands.



If it was really about stuff you can "depend on being there" then most
OpenStack clouds would have had Swift, Ceilometer, Trove and Sahara.

Stuff you can "depend on being there" is a relatively-new concept:
https://governance.openstack.org/tc/reference/base-services.html

Yes, we can (and should) add more of those when they are relevant to
most OpenStack deployments, otherwise projects will never start
depending on Barbican and continue to NIH secrets management locally.
But since any addition comes with a high operational cost, we need to
consider them very carefully.


+1


We should also consider use cases and group projects together (a concept
we are starting to call "constellations"). Yes, it would be great if, when
you have an IaaS/Compute use case, you could assume Designate is part of the
mix.


+1


[1] https://ttx.re/facets-of-the-integrated-release.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][rabbitmq] RPC demo code no response?

2017-06-23 Thread Jay Pipes
You're using the eventlet executor and time.sleep(1) but not 
monkey-patching your server for eventlet.


Try adding:

import eventlet
eventlet.monkey_patch()

Above your other imports in the server file.
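
Putting it together, a minimal runnable sketch (topic and server names are
made up) based on the docs example that was followed:

    # Monkey patch FIRST, before anything else is imported, so that the
    # time.sleep() below cooperatively yields to the eventlet executor
    # instead of blocking every greenthread in the process.
    import eventlet
    eventlet.monkey_patch()

    import time

    from oslo_config import cfg
    import oslo_messaging as messaging


    class TestEndpoint(object):
        def test(self, ctx, arg):
            # Invoked for client.call(ctxt, 'test', arg=...); echoes back.
            return arg


    # Assumes the rabbit/transport settings come from the loaded config.
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='testtopic', server='server1')
    server = messaging.get_rpc_server(transport, target, [TestEndpoint()],
                                      executor='eventlet')
    server.start()
    while True:
        time.sleep(1)  # now cooperative; the executor keeps processing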

Best,
-jay

On 06/23/2017 02:58 AM, zhi wrote:

Hi, all.

 Recently, I have been doing some research on RPC using oslo.messaging. I 
wrote some demo code for an RPC server and an RPC client by following [1], 
and the demo code is located at [2].


 I think there is something wrong with this code. The server doesn't print 
a response message when I run both the server and the client. My 
oslo.messaging version is 4.6.1.


 Could someone give me some advice about that?

 Thanks a lot. ;-)



[1].https://docs.openstack.org/developer/oslo.messaging/server.html
https://docs.openstack.org/developer/oslo.messaging/client.html

[2].http://paste.openstack.org/show/613472/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-23 Thread Harry Rybacki
On Fri, Jun 23, 2017 at 6:14 AM, Sean Dague  wrote:
> On 06/23/2017 05:44 AM, Thierry Carrez wrote:
>> Lance Bragstad wrote:
>>> Is the role of a goal "champion" limited to a single person? Can it be
>>> distributed across multiple people, provided actions are well communicated?
>>
>> I'm a bit torn on that. On one hand work can definitely (and probably
>> should) be covered by multiple people. But I like the idea that it's
>> someone's responsibility to ensure that progress is made (even if that
>> person ends up delegating all the work). The trick is, it's easy for
>> everyone to assume that the work is covered since someone has signed up
>> for it.
>>
>> It's like the PTL situation -- work is done by the group and it's great
>> to have a clear go-to person to keep track of things, until the PTL has
>> to do everything because they end up as the default assignee for everything.
>
> I agree, there should be a single name here. They can delegate and
> collect up a group, but at the end of the day one person should be
> responsible for it.
>
Aye, this sounds much like a committee chair. Ideally, even if in name
only, it provides a single point of communication for folks regardless
of whether they are directly working on the goal or not.

Two points: it may require some additional monitoring from the
respective PTL, and we should have a plan in place for shifting
responsibilities if a given champion can no longer take on the role
(illness, injury, work changes, etc.).

- Harry

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-23 Thread Renat Akhmerov
I can say for Mistral. We only planned to add Mistral docs to the central repo 
but didn’t do it yet. So, as far as I understand, we don’t need to move 
anything. We’ll review the spec and adjust the folder structure according to 
the proposal.

Thanks

Renat Akhmerov
@Nokia

23 июня 2017 г., 3:32 +0700, Doug Hellmann , писал:
> Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
> > Hi everyone,
> >
> > Doug and I have written up a spec following on from the conversation [0] 
> > that we had regarding the documentation publishing future.
> >
> > Please take the time out of your day to review the spec as this affects 
> > *everyone*.
> >
> > See: https://review.openstack.org/#/c/472275/
> >
> > I will be PTO from the 9th – 19th of June. If you have any pressing 
> > concerns, please email me and I will get back to you as soon as I can, or, 
> > email Doug Hellmann and hopefully he will be able to assist you.
> >
> > Thanks,
> >
> > Alex
> >
> > [0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html
>
> Andreas pointed out that the new documentation job will behave a
> little differently from the old setup, and thought I should mention
> it so that people aren't surprised.
>
> The new job is configured to update the docs for all repos every
> time a patch is merged, not just when we tag releases. The server
> projects have been working that way, but this is different for some
> of the libraries, especially the clients.
>
> I will be going back and adding a step to build the docs when we
> tag releases after the move actually starts, so that we can link
> to docs for specific versions of projects. That change will be
> transparent to everyone else, so I have it on the list for after
> the migration is under way.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [dib] RFC: moving/transitioning the ironic-agent element to the ironic-python-agent tree

2017-06-23 Thread Dmitry Tantsur

As no objections were recorded, I'm proceeding with the plan.

Governance change for ironic-python-agent-builder: 
https://review.openstack.org/476900.


On 05/22/2017 04:59 PM, Dmitry Tantsur wrote:

On 05/22/2017 03:10 PM, Sam Betts (sambetts) wrote:
I would like to suggest that we create a new repo for housing the tools 
required to build ironic-python-agent images: 
ironic-python-agent-builder (tooling). This would include the DIB element, the 
existing coreos and tinyipa methods, and hopefully in the future the buildroot 
method for creating IPA images.


+1, I like this one as well.



The reason I propose separating the tooling from IPA itself is that the 
tooling is mostly detached from which version of IPA is being built into the 
image. Often, when we make a change to the tooling, that change should be 
included in images built for all versions of IPA, which currently means 
backporting these changes to all maintained versions of IPA.


Hopefully having this as a separate repo will also simplify packaging for 
distros as they won’t need to include IPA itself with the tooling to build it.


I’m happy with the name ironic-python-agent for the element; I think that is 
more intuitive anyway.


An RFE or multiple might be useful for tracking this work.


Ok, will create after today's meeting (I submitted this thread as a topic 
there).


https://bugs.launchpad.net/ironic/+bug/1700071





Sam

On 22/05/2017, 13:40, "Dmitry Tantsur"  wrote:

  Hi all!

  Some time ago we discussed moving the ironic-agent element that is used to
  build IPA to the IPA tree itself. It got stuck, and I'd like to restart the
  discussion.

  The reason for this move is to make the DIB element in question one of the
  *official* ways to build IPA. This includes gating on both IPA and the
  element changes, which we currently don't do.

  The primary concern IIRC was an element name clash. We can solve it by just
  renaming the element. The new one will be called "ironic-python-agent".

  From the packaging perspective, we'll create a new subpackage
  openstack-ironic-python-agent-elements (the RDO name; it may differ for
  other distributions) that will only ship
  /usr/share/ironic-python-agent-elements with the ironic-python-agent
  element within it. To pick up the new element, consumers will have to add
  /usr/share/ironic-python-agent-elements to the ELEMENTS_PATH, and change
  the element name from ironic-agent to ironic-python-agent.

  Please let me know what you think about the approach. If there are no
  objections, I'll work on this move in the coming weeks.

  P.S. Do we need an Ironic RFE for that?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-23 Thread Renat Akhmerov
We'll do that.

Thanks

Renat Akhmerov
@Nokia

23 июня 2017 г., 18:54 +0700, Thierry Carrez , писал:
> Dougal Matthews wrote:
> > > We have been trying to break the requirement on mistral (from
> > > tripleo-common) but it is proving to be harder than expected. We are
> > > really doing some nasty things, but I wont go into details here :-)
> > > If anyone is interested, feel free to reach out.
> >
> > After sending that we did make some solid progress. We are two patches
> > away from breaking the link AFAICT.
> >
> > 1. https://review.openstack.org/#/c/454719/
> > 2. https://review.openstack.org/#/c/476866/
> >
> > Both have some errors that need to be resolved but it is looking much
> > closer now.
>
> That's definitely the best way to handle the situation, so if you are
> confident that we can complete the transition to depending on
> mistral-lib before the non-client lib freeze (~ July 20), we should try
> that.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-23 Thread Kevin Benton
Yes, let's move discussion to bug report.

On Fri, Jun 23, 2017 at 5:01 AM, Margin Hu  wrote:

> Hi kevin,
>
> [ovs]
> bridge_mappings = physnet1:br-ex,physnet2:provision,physnet3:provider
> ovsdb_connection = tcp:10.53.16.12:6640
> local_ip = 10.53.32.12
> you can check the attachment, and more logs can be found at
> https://bugs.launchpad.net/neutron/+bug/1697243
>
>
> On 6/23 16:43, Kevin Benton wrote:
>
> Can you provide your ml2_conf.ini values you are using?
>
> On Thu, Jun 22, 2017 at 7:06 AM, Margin Hu  wrote:
>
>> thanks.
>>
>> I ran into an issue. I configured three OVS bridges (br-ex, provision,
>> provider) in ml2_conf.ini, but after I rebooted the node I found that only
>> two bridges' flow tables were normal; the other bridge's flow table was
>> empty.
>>
>> Sometimes the empty bridge is "provision" and sometimes "provider". What
>> are the possible causes of this issue?
>> [root@cloud]# ovs-ofctl show provision
>> OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
>> n_tables:254, n_buffers:256
>> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
>> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
>> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>>  1(bond0): addr:24:8a:07:55:41:e8
>>  config: 0
>>  state:  0
>>  speed: 0 Mbps now, 0 Mbps max
>>  2(phy-provision): addr:2e:7c:ba:fe:91:72
>>  config: 0
>>  state:  0
>>  speed: 0 Mbps now, 0 Mbps max
>>  LOCAL(provision): addr:24:8a:07:55:41:e8
>>  config: 0
>>  state:  0
>>  speed: 0 Mbps now, 0 Mbps max
>> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>> [root@cloud]# ovs-ofctl dump-flows  provision
>> NXST_FLOW reply (xid=0x4):
>>
>> [root@cloud]# ip r
>> default via 192.168.60.247 dev br-ex
>> 10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
>> 10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
>> 10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
>> 10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
>> 10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
>> 10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
>> 169.254.0.0/16 dev vlan16  scope link  metric 1012
>> 169.254.0.0/16 dev vlan22  scope link  metric 1014
>> 169.254.0.0/16 dev vlan32  scope link  metric 1015
>> 169.254.0.0/16 dev br-ex  scope link  metric 1032
>> 169.254.0.0/16 dev provision  scope link  metric 1033
>> 169.254.0.0/16 dev provider  scope link  metric 1034
>> 192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111
>>
>> What's the root cause?
>>
>>  rpm -qa | grep openvswitch
>> openvswitch-2.6.1-4.1.git20161206.el7.x86_64
>> python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
>> openstack-neutron-openvswitch-10.0.1-1.el7.noarch
>>
>>
>>
>> On 6/22 9:53, Kevin Benton wrote:
>>
>> Rules to allow aren't setup until the port is wired and it calls the
>> functions like this:
>> https://github.com/openstack/neutron/blob/master/neutron/plu
>> gins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606
>>
>> On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu  wrote:
>>
>>> Hi Guys,
>>>
>>> I have a question about the setup_physical_bridges function in
>>> neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
>>>
>>>  # block all untranslated traffic between bridges
>>> self.int_br.drop_port(in_port=int_ofport)
>>> br.drop_port(in_port=phys_ofport)
>>>
>>> [refer](https://github.com/openstack/neutron/blob/master/neu
>>> tron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)
>>>
>>> When is traffic between bridges permitted? When is the flow table of the
>>> ovs bridge modified?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-23 Thread Alexandra Settle
Hi everyone,

This morning (afternoon) the specification for the documentation migration was 
merged. Thanks to all that took time to review :)

You can now view here in all its glory: 
https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html

If you have any questions, feel free to shoot them to me or Doug :)

Let’s begin!

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-23 Thread Chris Dent

On Thu, 22 Jun 2017, Shewale, Bhagyashri wrote:


* who or what needs to consume these JSON samples?

Users of the placement API could then rely on the request/response samples 
for the different supported placement versions, backed by tests running on 
the OpenStack CI infrastructure.
Right now, most of the placement APIs are well documented and others are in 
progress but there are no tests to verify these APIs.


Either we are misunderstanding each other, or you're not aware of
what the gabbi tests are doing. They verify the placement API and
provide extensive coverage of the entire placement HTTP framework,
including the accuracy of response codes in edge cases not on the
"happy path". Coverage is well over 90% for the group of files in
nova/api/openstack/placement (except for the wsgi deployment script
itself) when the
nova/tests/functional/api/openstack/placement/test_placement_api.py
functional test runs all the gabbi files in
nova/tests/functional/api/openstack/placement/gabbits/.

So I'd say the API is verified. What is missing, and could be
useful, is using those tests to get accurate and up-to-date
representations of the JSON request and response bodies. If that's
something we'd like to pursue, then, as I said in my other message, the
'verbose' functionality that gabbi-based tests can provide
should be able to help.


We would like to write new functional tests that consume these JSON samples 
to verify each placement API for all supported versions.


Those gabbi files also test functionality at microversion
boundaries.
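
For reference, the wiring that turns a directory of gabbi YAML files into
those tests is small; a hedged sketch (directory name and host illustrative;
the real placement tests pass an intercept= WSGI app factory instead of a
live host):

    # Hypothetical gabbi loader sketch.
    import os

    from gabbi import driver

    TESTS_DIR = 'gabbits'  # directory of YAML test files next to this module


    def load_tests(loader, tests, pattern):
        # Standard unittest load_tests hook; gabbi builds a TestSuite
        # from every YAML file found under test_dir.
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        return driver.build_tests(test_dir, loader, host='127.0.0.1')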

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-23 Thread Thierry Carrez
Dougal Matthews wrote:
>> We have been trying to break the requirement on mistral (from
>> tripleo-common) but it is proving to be harder than expected. We are
>> really doing some nasty things, but I won't go into details here :-)
>> If anyone is interested, feel free to reach out.
> 
> After sending that we did make some solid progress. We are two patches
> away from breaking the link AFAICT.
> 
> 1. https://review.openstack.org/#/c/454719/
> 2. https://review.openstack.org/#/c/476866/
> 
> Both have some errors that need to be resolved but it is looking much
> closer now.

That's definitely the best way to handle the situation, so if you are
confident that we can complete the transition to depending on
mistral-lib before the non-client lib freeze (~ July 20), we should try
that.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-23 Thread Dougal Matthews
On 22 June 2017 at 14:01, Dougal Matthews  wrote:

>
>
> On 22 June 2017 at 11:01, Thierry Carrez  wrote:
>
>> Thierry Carrez wrote:
>> > Renat Akhmerov wrote:
>> >> We have a weekly meeting next Monday, will it be too late?
>> >
>> > Before Thursday EOD (when the Pike-2 deadline hits) should be OK.
>>
>> If there was a decision, I missed it (and in the mean time Mistral
>> published 5.0.0.0b2 for the Pike-2 milestone).
>>
>> Given the situation, I'm fine with giving an exception to Mistral to
>> switch now to cycle-with-intermediary and release 5.0.0 if you think
>> master is currently releasable...
>>
>> Let me know what you think.
>>
>
>
> I think that probably is the best option.
>
> We have been trying to break the requirement on mistral (from
> tripleo-common) but it is proving to be harder than expected. We are really
> doing some nasty things, but I won't go into details here :-) If anyone is
> interested, feel free to reach out.
>

After sending that we did make some solid progress. We are two patches away
from breaking the link AFAICT.

1. https://review.openstack.org/#/c/454719/
2. https://review.openstack.org/#/c/476866/

Both have some errors that need to be resolved but it is looking much
closer now.


>
>
>
>>
>> --
>> Thierry Carrez (ttx)
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-23 Thread Sean Dague
On 06/23/2017 05:44 AM, Thierry Carrez wrote:
> Lance Bragstad wrote:
>> Is the role of a goal "champion" limited to a single person? Can it be
> distributed across multiple people, provided actions are well communicated?
> 
> I'm a bit torn on that. On one hand work can definitely (and probably
> should) be covered by multiple people. But I like the idea that it's
> someone's responsibility to ensure that progress is made (even if that
> person ends up delegating all the work). The trick is, it's easy for
> everyone to assume that the work is covered since someone has signed up
> for it.
> 
> It's like the PTL situation -- work is done by the group and it's great
> to have a clear go-to person to keep track of things, until the PTL has
> to do everything because they end up as the default assignee for everything.

I agree, there should be a single name here. They can delegate and
collect up a group, but at the end of the day one person should be
responsible for it.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Required Ceph rbd image features

2017-06-23 Thread James Page
On Wed, 21 Jun 2017 at 20:08 Jason Dillaman  wrote:

> On Wed, Jun 21, 2017 at 12:32 PM, Jon Bernard  wrote:
> > I suspect you'd want to enable layering at minimum.
>
> I'd agree that layering is probably the most you'd want to enable for
> krbd-use cases as of today. The v4.9 kernel added support for
> exclusive-lock, but that probably doesn't provide much additional
> benefit at this point. The striping v2 feature is still not supported
> by krbd for non-basic stripe count/unit settings.
>

The newer cinder ceph replication features use journaling and exclusive-lock
(see [0]), so any krbd-based driver would not be able to support this
feature right now.

[0]
http://ceph.com/planet/openstack-cinder-configure-replication-api-with-ceph/
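
As a concrete illustration of the krbd-safe baseline, a hedged sketch using
the python rbd bindings (pool and image names are made up):

    # Hypothetical: create an image with only the 'layering' feature,
    # leaving exclusive-lock/journaling (needed for replication) off.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')  # pool name is illustrative
        try:
            rbd.RBD().create(ioctx, 'vol-demo', 1 * 1024 ** 3,  # 1 GiB
                             old_format=False,
                             features=rbd.RBD_FEATURE_LAYERING)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()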
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-23 Thread Thierry Carrez
Lance Bragstad wrote:
> Is the role of a goal "champion" limited to a single person? Can it be
> distributed across multiple people, provided actions are well communicated?

I'm a bit torn on that. On one hand work can definitely (and probably
should) be covered by multiple people. But I like the idea that it's
someone's responsibility to ensure that progress is made (even if that
person ends up delegating all the work). The trick is, it's easy for
everyone to assume that the work is covered since someone has signed up
for it.

It's like the PTL situation -- work is done by the group and it's great
to have a clear go-to person to keep track of things, until the PTL has
to do everything because they end up as the default assignee for everything.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-23 Thread Thierry Carrez
Zane Bitter wrote:
> But back in the day we had a process (incubation) for adding stuff to
> OpenStack that it made sense to depend on being there. It was a highly
> imperfect process. We got rid of that process with the big tent reform,
> but didn't really replace it with anything at all. Tags never evolved
> into a replacement as I hoped they would.
> 
> So now we have a bunch of things that are integral to building a
> "Kubernetes-like experience for application developers" - secret
> storage, DNS, load balancing, asynchronous messaging - that exist but
> are not in most clouds.

Yet another tangent in that thread, but you seem to regret a past that
never happened. The "integrated release" was never about stuff that you
can "depend on being there". It was about things that were tested to
work well together, and released together. Projects were incubating
until they were deemed mature-enough (and embedded-enough in our
community) that it was fine for other projects to take the hit to be
tested with them, and take the risk of being released together. I don't
blame you for thinking otherwise: since the integrated release was the
only answer we gave, everyone assumed it answered their specific
question[1]. And that was why we needed to get rid of it.

If it was really about stuff you can "depend on being there" then most
OpenStack clouds would have had Swift, Ceilometer, Trove and Sahara.

Stuff you can "depend on being there" is a relatively-new concept:
https://governance.openstack.org/tc/reference/base-services.html

Yes, we can (and should) add more of those when they are relevant to
most OpenStack deployments, otherwise projects will never start
depending on Barbican and continue to NIH secrets management locally.
But since any addition comes with a high operational cost, we need to
consider them very carefully.

We should also consider use cases and group projects together (a concept
we are starting to call "constellations"). Yes, it would be great if, when
you have an IaaS/Compute use case, you could assume Designate is part of the
mix.

[1] https://ttx.re/facets-of-the-integrated-release.html

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-23 Thread Alexandra Settle
Hey Blair,

Thanks! I appreciate your offer of assistance. We are in full swing at the 
moment; the spec is very close to being merged. You can view it here: 
https://review.openstack.org/#/c/472275/

I am still looking for someone who can help us out with the pandoc conversion. 
I am happy to go through it with said individual.
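
For whoever volunteers, the conversion pass could be scripted along these
lines -- a rough sketch using the pypandoc wrapper, with made-up file names
and input format:

    # Hypothetical: convert a DocBook source to RST with pandoc.
    import pypandoc

    rst = pypandoc.convert_file('ops-guide.xml', 'rst', format='docbook')
    with open('ops-guide.rst', 'w') as f:
        f.write(rst)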

Cheers,

Alex

On 6/23/17, 3:47 AM, "Blair Bethwaite"  wrote:

Hi Alex,

On 2 June 2017 at 23:13, Alexandra Settle  wrote:
> Oh, I like your thinking – I’m a pandoc fan, so I’d be interested in
> moving this along using any tools to make it easier.

I can't realistically offer much time on this but I would be happy to
help (ad-hoc) review/catalog/clean-up issues with export.

> I think my only proviso (now I’m thinking about it more) is that we still
> have a link on docs.o.o, but it goes to the wiki page for the Ops Guide.

Agreed, need to maintain discoverability.

-- 
Cheers,
~Blairo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-23 Thread Sahid Orentino Ferdjaoui
On Wed, Jun 21, 2017 at 12:47:27PM +0200, Henning Schild wrote:
> Am Tue, 20 Jun 2017 10:04:30 -0400
> schrieb Luiz Capitulino :
> 
> > On Tue, 20 Jun 2017 09:48:23 +0200
> > Henning Schild  wrote:
> > 
> > > Hi,
> > > 
> > > We are using OpenStack for managing realtime guests. We modified
> > > it and contributed to discussions on how to model the realtime
> > > feature. More recent versions of OpenStack have support for
> > > realtime, and there are a few proposals on how to improve that
> > > further.
> > > 
> > > But there is still no full answer on how to distribute threads
> > > across host-cores. The vcpus are easy but for the emulation and
> > > io-threads there are multiple options. I would like to collect the
> > > constraints from a qemu/kvm perspective first, and than possibly
> > > influence the OpenStack development
> > > 
> > > I will put the summary/questions first, the text below provides more
> > > context to where the questions come from.
> > > - How do you distribute your threads when reaching the really low
> > >   cyclictest results in the guests? In [3] Rik talked about problems
> > >   like lock holder preemption, starvation etc. but not where/how to
> > >   schedule emulators and io  
> > 
> > We put emulator threads and io-threads in housekeeping cores in
> > the host. I think housekeeping cores is what you're calling
> > best-effort cores, those are non-isolated cores that will run host
> > load.
> 
> As expected, any best-effort/housekeeping core will do but overlap with
> the vcpu-cores is a bad idea.
> 
> > > - Is it ok to put a vcpu and emulator thread on the same core as
> > > long as the guest knows about it? Any funny behaving guest, not
> > > just Linux.  
> > 
> > We can't do this for KVM-RT because we run all vcpu threads with
> > FIFO priority.
> 
> Same point as above, meaning the "hw:cpu_realtime_mask" approach is
> wrong for realtime.
> 
> > However, we have another project with DPDK whose goal is to achieve
> > zero-loss networking. The configuration required by this project is
> > very similar to the one required by KVM-RT. One difference though is
> > that we don't use RT and hence don't use FIFO priority.
> > 
> > In this project we've been running with the emulator thread and a
> > vcpu sharing the same core. As long as the guest housekeeping CPUs
> > are idle, we don't get any packet drops (most of the time, what
> > causes packet drops in this test-case would cause spikes in
> > cyclictest). However, we're seeing some packet drops for certain
> > guest workloads which we are still debugging.
> 
> Ok but that seems to be a different scenario where hw:cpu_policy
> dedicated should be sufficient. However if the placement of the io and
> emulators has to be on a subset of the dedicated cpus something like
> hw:cpu_realtime_mask would be required.
> 
> > > - Is it ok to make the emulators potentially slow by running them on
> > >   busy best-effort cores, or will they quickly be on the critical
> > > path if you do more than just cyclictest? - our experience says we
> > > don't need them reactive even with rt-networking involved  
> > 
> > I believe it is ok.
> 
> Ok.
>  
> > > Our goal is to reach a high packing density of realtime VMs. Our
> > > pragmatic first choice was to run all non-vcpu-threads on a shared
> > > set of pcpus where we also run best-effort VMs and host load.
> > > Now the OpenStack guys are not too happy with that because that is
> > > load outside the assigned resources, which leads to quota and
> > > accounting problems.
> > > 
> > > So the current OpenStack model is to run those threads next to one
> > > or more vcpu-threads. [1] You will need to remember that the vcpus
> > > in question should not be your rt-cpus in the guest. I.e. if vcpu0
> > > shares its pcpu with the hypervisor noise your preemptrt-guest
> > > would use isolcpus=1.
> > > 
> > > Is that kind of sharing a pcpu really a good idea? I could imagine
> > > things like smp housekeeping (cache invalidation etc.) to eventually
> > > cause vcpu1 having to wait for the emulator stuck in IO.  
> > 
> > Agreed. IIRC, in the beginning of KVM-RT we saw a problem where
> > running vcpu0 on an non-isolated core and without FIFO priority
> > caused spikes in vcpu1. I guess we debugged this down to vcpu1
> > waiting a few dozen microseconds for vcpu0 for some reason. Running
> > vcpu0 on a isolated core with FIFO priority fixed this (again, this
> > was years ago, I won't remember all the details).
> > 
> > > Or maybe a busy polling vcpu0 starving its own emulator causing high
> > > latency or even deadlocks.  
> > 
> > This will probably happen if you run vcpu0 with FIFO priority.
> 
> Two more points that indicate that hw:cpu_realtime_mask (putting
> emulators/io next to any vcpu) does not work for general rt.
> 
> > > Even if it happens to work for Linux guests it seems like a strong
> > > assumption that an rt-guest that has noise 

[openstack-dev] [tc] Status update, Jun 23

2017-06-23 Thread Thierry Carrez
Hi!

Here is our regular update on the status of a number of TC-proposed
governance changes, in an attempt to rely less on a weekly meeting to
convey that information.

You can find a regularly-updated status list of all open topics at:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Introduce Top 5 help wanted list [1]
* Add "Doc owners" to top-5 wanted list [2]
* Guidelines for managing releases of binary artifacts [3]
* Barbican gets the assert:supports-upgrade tag [4]
* New git repositories: kuryr-tempest-plugins, zuul-sphinx,
openstack-ansible-os_freezer, puppet-ganesha

[1] https://review.openstack.org/#/c/466684/
[2] https://review.openstack.org/#/c/469115/
[3] https://review.openstack.org/#/c/469265/
[4] https://review.openstack.org/#/c/472547/

The "Top 5 help wanted list" is published at:
https://governance.openstack.org/tc/reference/top-5-help-wanted.html

The idea is to be explicit about where OpenStack, as a whole, urgently
needs help, and where contributing would make a significant difference.
Let's all encourage investment in those areas and celebrate whoever
works on them.


== Open discussions ==

The latest draft for the TC vision is still up for review. Please see it at:

* Begin integrating vision feedback and editing for style [5]

[5] https://review.openstack.org/#/c/473620/

The database support resolution is also up for review, although it could
use a new patchset to address formatting issues. Please review it at:

* Declare plainly the current state of PostgreSQL in OpenStack [6]

[6] https://review.openstack.org/427880

Discussion also continues on John Garbutt's resolution on ensuring that
decisions are globally inclusive. Please see:

* Decisions should be globally inclusive [7]

[7] https://review.openstack.org/#/c/460946/

John also updated his clarification of what "upstream support" means.
It's up for review at:

* Describe what upstream support means [8]

[8] https://review.openstack.org/440601

We had a meeting this week around goals, and the outcome was summarized
by thingee in the following ML thread:

http://lists.openstack.org/pipermail/openstack-dev/2017-June/118808.html

TL;DR: we identified a need for project management to drive the
completion of goals, so goals should probably have "champions"
pushing them. We think that Queens should only have two goals, with the
following options leading the pack:

* Split tempest plugins (already approved) [9]
* Policy and docs in code [10]

[9]
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[10] https://review.openstack.org/#/c/469954/

The other goals were a bit less popular. Discovery alignment [11][12]
was deemed a bit difficult, Migrate off paste [13] lacking a practical
success story, and Python 3.5 continuation [14] slightly premature. Please
comment on those reviews and threads if you feel strongly one way or
another:

[11] https://review.openstack.org/#/c/468436/
[12] https://review.openstack.org/#/c/468437/
[13] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117747.html
[14] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html


== Voting in progress ==

Two patches reached majority votes earlier this week and will be
approved Monday unless there are last-minute objections:

* Remove "meritocracy" from the opens [15]
* Introduce assert:supports-api-interoperability tag [16]

[15] https://review.openstack.org/473510
[16] https://review.openstack.org/418010

Flavio proposed an addition to the top-5 wanted list, which seems
largely uncontroversial:

* Add "Glance Contributors " to top-5 wanted list [17]

[17] https://review.openstack.org/#/c/474604/

Finally, following a discussion on the ML, I proposed to remove Fuel from
the official list of projects, which also seems uncontroversial so far:

* Move Fuel from official to unofficial (hosted) [18]

[18] https://review.openstack.org/#/c/475721/


== TC member actions for the coming week(s) ==

flaper87 to update "Drop Technical Committee meetings" with a new
revision

sdague or dirkm to post a new patchset on the database support
resolution, to address formatting issues

dims to sync with Stackube on progress and report back

ttx to sync with Gluon on progress and report back

johnthetubaguy to resurrect the SWG group and set up regular reporting


== Need for a TC meeting next Tuesday ==

We don't seem to be blocked on anything currently, so I don't think a
meeting is necessary to make progress.


Cheers!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [l2gw] How to handle correctly unknown-dst traffic

2017-06-23 Thread Ricardo Noriega De Soto
Hi Saverio,

Comments and questions inline:

First of all, which backend are you using: the l2gateway agent, or
something like OpenDaylight? I'm currently testing an L2GW scenario with
ODL.


On Mon, May 29, 2017 at 4:54 PM, Saverio Proto 
wrote:

> Hello,
>
> I have a question about the l2gw. I did a deployment, I described the
> steps here:
> https://review.openstack.org/#/c/453209/
>
> The unicast traffic works fine, but I don't understand the idea
> behind the handling of the broadcast traffic.
>
> Looking at openvswitch:
>
> I obtain the uuid with `vtep-ctl list-ls`
>
> vtep-ctl list-remote-macs 
>
> In this output I get an entry for each VM that has an interface in the
> L2 network I am bridging:
>
> 
> # vtep-ctl list-remote-macs 
> ucast-mac-remote
>   fa:16:3e:c2:7b:da -> vxlan_over_ipv4/10.1.1.167
>
> mcast-mac-remote
> -
>

The ucast-mac-remote table is filled with information that doesn't match
your comments. In my environment, I have created only one neutron network,
one l2gw instance and one l2gw connection. However, the mac reflected in
that table corresponds to the dhcp port of the Neutron network (I've
checked the mac in the dhcp namespace and it's the same).
I've created several VMs on different compute nodes and there is still
only one line there. Could you check the MAC address again?


>
> The ucast-mac-remote entry is created by OpenStack when I start a VM.
> (Also, it is never removed when I delete the instance; is this a bug?)
> Note that 10.1.1.167 is the IP address of the hypervisor where the VM is
> running.
>
> But mcast-mac-remote is empty. This means that ARP learning, for
> example, works in only one direction. The VM in OpenStack does not
> receive any broadcast traffic, unless I manually do:
>
> vtep-ctl add-mcast-remote ee87db33-1b3a-42e9-bc09-02747f8a0ad5
> unknown-dst  10.1.1.167
>
> This creates an entry in the table mcast-mac-remote and everything works
> correctly.
>

In my setup I get this automatically:

mcast-mac-remote
  unknown-dst -> vxlan_over_ipv4/192.0.2.6

If you're using the agent, it might be a bug.
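
For reference, in my PoC the gateway and the connection are created with
the standard networking-l2gw CLI, roughly like this (device, interface,
network and segmentation-id values are illustrative):

neutron l2-gateway-create --device name=hw-vtep,interface_names=eth1 gw1
neutron l2-gateway-connection-create gw1 private-net --default-segmentation-id 100

The unknown-dst entry showed up on its own once the connection was in
place.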


>
>
> Now I read here http://networkop.co.uk/blog/2016/05/21/neutron-l2gw/
> about sending add-mcast-remote to the network nodes and then doing some
> magic I don't really understand. But I am confused, because in my setup
> the tenant does not have an L3 router, so there is no qrouter
> namespace for this network; I was planning to keep the network node out
> of the picture.
>
> Is anyone running this in production and can shed some light ?
>

No production sorry, just PoC mode :-)

>
> thanks
>
> Saverio
>
>
>
>
>
>
>
>
>
>
>
> --
> SWITCH
> Saverio Proto, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 1573
> saverio.pr...@switch.ch, http://www.switch.ch
>
> http://www.switch.ch/stories
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-23 Thread Kevin Benton
Can you provide the ml2_conf.ini values you are using?
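
For reference, the part I'm mostly after is the bridge mapping section,
which usually looks something like this (physical network names
illustrative):

[ovs]
bridge_mappings = external:br-ex,provision:provision,provider:provider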

On Thu, Jun 22, 2017 at 7:06 AM, Margin Hu  wrote:

> thanks.
>
> I hit an issue: I configured three OVS bridges (br-ex, provision,
> provider) in ml2_conf.ini, but after I reboot the node I find that only
> two bridges have a normal flow table; the remaining bridge's flow table
> is empty.
>
> The affected bridge is sometimes "provision" and sometimes "provider".
> What are the possible causes of this issue?
> [root@cloud]# ovs-ofctl show provision
> OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
> n_tables:254, n_buffers:256
> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>  1(bond0): addr:24:8a:07:55:41:e8
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
>  2(phy-provision): addr:2e:7c:ba:fe:91:72
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
>  LOCAL(provision): addr:24:8a:07:55:41:e8
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
> [root@cloud]# ovs-ofctl dump-flows  provision
> NXST_FLOW reply (xid=0x4):
>
> [root@cloud]# ip r
> default via 192.168.60.247 dev br-ex
> 10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
> 10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
> 10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
> 10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
> 10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
> 10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
> 169.254.0.0/16 dev vlan16  scope link  metric 1012
> 169.254.0.0/16 dev vlan22  scope link  metric 1014
> 169.254.0.0/16 dev vlan32  scope link  metric 1015
> 169.254.0.0/16 dev br-ex  scope link  metric 1032
> 169.254.0.0/16 dev provision  scope link  metric 1033
> 169.254.0.0/16 dev provider  scope link  metric 1034
> 192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111
>
> What's the root cause?
>
>  rpm -qa | grep openvswitch
> openvswitch-2.6.1-4.1.git20161206.el7.x86_64
> python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
> openstack-neutron-openvswitch-10.0.1-1.el7.noarch
>
>
>
> On 6/22 9:53, Kevin Benton wrote:
>
> Rules to allow traffic aren't set up until the port is wired and the
> agent calls functions like this:
> https://github.com/openstack/neutron/blob/master/neutron/
> plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606
>
> On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu  wrote:
>
>> Hi Guys,
>>
>> I have a question about the setup_physical_bridges function of
>> neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
>>
>>  # block all untranslated traffic between bridges
>> self.int_br.drop_port(in_port=int_ofport)
>> br.drop_port(in_port=phys_ofport)
>>
>> [refer](https://github.com/openstack/neutron/blob/master/neu
>> tron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)
>>
>> When is traffic between the bridges permitted? When is the flow table
>> of an OVS bridge modified?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][gnocchi] RBAC for attributes in resource

2017-06-23 Thread Julien Danjou
On Fri, Jun 23 2017, Deepthi V V wrote:

> Current gnocchi code supports RBAC at operation level 
> [gnocchi/gnocchi/rest/policy.json].
> Is it possible to add RBAC for attributes in a resource?
> For example: restrict resource search/show so that specific attributes
> are displayed only when the query is performed by the resource creator
> or an admin.

oslo.policy does not have such a capability, so this is done by auth
helpers:
  https://github.com/gnocchixyz/gnocchi/blob/master/gnocchi/rest/auth_helper.py

They are picked by the `api.auth_mode' setting.

Feel free to send patches or write a new one if you prefer.
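
For illustration, the kind of per-attribute filtering such a helper could
apply looks roughly like this (a sketch only -- the names are
hypothetical, not the actual gnocchi API):

# Sketch: hide admin-only attributes from plain users.
# ADMIN_ONLY_ATTRS and filter_resource() are hypothetical names.
ADMIN_ONLY_ATTRS = {"created_by_user_id", "created_by_project_id"}

def filter_resource(resource, is_admin, is_creator):
    """Return the resource dict without admin-only attributes,
    unless the caller is an admin or the resource creator."""
    if is_admin or is_creator:
        return resource
    return {key: value for key, value in resource.items()
            if key not in ADMIN_ONLY_ATTRS}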

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] git.openstack.org working again

2017-06-23 Thread Andreas Jaeger
Just a heads up: the recent failure of git.openstack.org - seen by many
jobs as a failure to download the upper-constraints.txt file - has
been fixed.
download from git.openstack.org.

thanks to Clark, Monty, and Ian from the Infra team!
Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][ALL] What tempest tests will go under tempest plugin for a Project?

2017-06-23 Thread Chandan kumar
Hello,

In the OpenStack Queens release, we have a community goal to split in-tree
tempest plugins into separate repos [1].

I have a couple of questions regarding the movement of tempest tests into
tempest plugins.

[1.] Some of the core OpenStack projects, like Nova, Glance and
Swift, do not currently have a tempest plugin;
 their Tempest tests reside under the tempest project repo.
 Are we going to create tempest plugins for them?
 If yes, which tempest tests (API/scenario) would move
under the tempest plugins?

[2.] Other core projects, like Neutron and Cinder, have their own
in-tree tempest plugins.
 Those are also moving to separate repos, and currently some of their
tests also reside under the tempest repo.
 How can we avoid duplication of the tempest tests?

[3.] For other projects, while moving tests to a separate repo, how are
we going to collaborate to avoid
 duplication and move common tests to Tempest?
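
For context, an external tempest plugin registers itself through a
setuptools entry point in its setup.cfg, along these lines (project and
class names illustrative):

[entry_points]
tempest.test_plugins =
    my_service_tests = my_service_tempest_plugin.plugin:MyServiceTempestPlugin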

Links:
[1.] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] upper-constraints.txt is missing

2017-06-23 Thread Rikimaru Honjo

On 2017/06/23 16:17, Andreas Jaeger wrote:

On 2017-06-23 08:05, Rikimaru Honjo wrote:

Hi,

I run "tox -epy27" in nova repository just now.
As a result, following error message was printed.


HTTPError: 404 Client Error: Not found for url:
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt



Actually, the following URI is missing.
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt


The same error occurred in other repositories (e.g. cinder, glance...).
Where did upper-constraints.txt go?


git.openstack.org is currently broken, we're investigating.

Btw. best to report those on #openstack-infra directly, see
https://docs.openstack.org/infra/manual/ for further instructions and
ways to check the status of our infrastructure,

Thank you for the suggestion!
I'll report on #openstack-infra directly next time.

FYI:
I managed to run tox on my machine by setting the following environment variable:

$ export 
UPPER_CONSTRAINTS_FILE=https://raw.githubusercontent.com/openstack/requirements/master/upper-constraints.txt

 

Andreas



--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] upper-constraints.txt is missing

2017-06-23 Thread Andreas Jaeger
On 2017-06-23 08:05, Rikimaru Honjo wrote:
> Hi,
> 
> I run "tox -epy27" in nova repository just now.
> As a result, following error message was printed.
> 
>> HTTPError: 404 Client Error: Not found for url:
>> https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
>>
> 
> Actually, the following URI is missing.
> https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
> 
> 
> The same error occurred in other repositories (e.g. cinder, glance...).
> Where did upper-constraints.txt go?

git.openstack.org is currently broken, we're investigating.

Btw. best to report those on #openstack-infra directly, see
https://docs.openstack.org/infra/manual/ for further instructions and
ways to check the status of our infrastructure,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][rabbitmq] RPC demo code no response?

2017-06-23 Thread zhi
Hi, all.

Recently, I have been doing some research on RPC using oslo.messaging. I
wrote some demo code for an RPC server and an RPC client by following the
docs here [1], and the demo code is located here [2].

I think there is something wrong with this code: the server doesn't print
a response message when I run both the server and the client. My
oslo.messaging version is oslo.messaging==4.6.1.

Could someone give me some advice about that?

Thanks a lot. ;-)



[1].https://docs.openstack.org/developer/oslo.messaging/server.html
 https://docs.openstack.org/developer/oslo.messaging/client.html

[2].http://paste.openstack.org/show/613472/
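
For comparison, a minimal working pair based on those docs looks roughly
like this (a sketch assuming a local RabbitMQ at
rabbit://guest:guest@localhost:5672/ and oslo.messaging ~4.x; topic and
method names are illustrative):

# server.py -- run first
import time
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')

class DemoEndpoint(object):
    def echo(self, ctxt, msg):
        # The return value becomes the RPC reply seen by the caller.
        return msg

target = oslo_messaging.Target(topic='demo', server='server1')
server = oslo_messaging.get_rpc_server(transport, target, [DemoEndpoint()],
                                       executor='blocking')
server.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    server.stop()
    server.wait()

# client.py -- run in a separate process
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
client = oslo_messaging.RPCClient(transport,
                                  oslo_messaging.Target(topic='demo'))
print(client.call({}, 'echo', msg='hello'))  # blocks until the reply arrives

If the client hangs on call(), check that the topic matches on both sides
and that both processes point at the same transport URL.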
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][ceilometer][gnocchi] RBAC for attributes in resource

2017-06-23 Thread Deepthi V V
Hi,

Current gnocchi code supports RBAC at operation level 
[gnocchi/gnocchi/rest/policy.json].
Is it possible to add RBAC for attributes in a resource?
For example: restrict resource search/show so that specific attributes are
displayed only when the query is performed by the resource creator or an
admin.

Thanks,
Deepthi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] any known issue on http://git.openstack.org/cgit/openstack/requirements link?

2017-06-23 Thread Andreas Jaeger
On 2017-06-23 07:52, Ghanshyam Mann wrote:
> Hi All,
> 
> Seems like many of the repository links on http://git.openstack.org
> stopped working, while there is no issue on github.com.
> 
> Is this a known issue? I cannot run tox because of it, and neither can the gate.

thanks, some admins are currently investigating!

Btw. best to report those on #openstack-infra directly, see
https://docs.openstack.org/infra/manual/

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] upper-constraints.txt is missing

2017-06-23 Thread Rikimaru Honjo

Hi,

I run "tox -epy27" in nova repository just now.
As a result, following error message was printed.


HTTPError: 404 Client Error: Not found for url: 
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt


Actually, the following URI is missing.
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt

The same error occurred in other repositories (e.g. cinder, glance...).
Where did upper-constraints.txt go?

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev