[openstack-dev] any known issue on http://git.openstack.org/cgit/openstack/requirements link?

2017-06-22 Thread Ghanshyam Mann
Hi All,

Seems like many of the repository links on http://git.openstack.org
have stopped working, while there is no issue on github.com.

Is there any known issue? I cannot run tox due to that, and so the gate is affected as well.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] An idea about how to deploy multi Nova cells v2 using devstack

2017-06-22 Thread joehuang
Hello,

Currently Nova has one patch in review to deploy multi-cells for Cells V2, 
https://review.openstack.org/#/c/436094/

And Tricircle also provides documentation on how to deploy Nova cells v2
+ Tricircle:
https://docs.openstack.org/developer/tricircle/installation-guide.html#work-with-nova-cell-v2-experiment
There may be some issues along the way; you can refer to the troubleshooting
section.

Best Regards
Chaoyi Huang (joehuang)

From: David Mckein [davidmck...@gmail.com]
Sent: 23 June 2017 10:24
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] An idea about how to deploy multi Nova cells v2 using 
devstack

Hello,

I would like to experiment with multiple cells under Nova cells v2. I found
some references about Nova cells v2, but they are not about multiple cells and
how to deploy them.
I really want to know if there is a document or idea about how to deploy
multiple cells with Nova cells v2.
Please let me know what I can refer to and what is going on.


Yours Truly
David Mckein
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-22 Thread Blair Bethwaite
Hi Alex,

On 2 June 2017 at 23:13, Alexandra Settle  wrote:
> O I like your thinking – I’m a pandoc fan, so, I’d be interested in
> moving this along using any tools to make it easier.

I can't realistically offer much time on this but I would be happy to
help (ad-hoc) review/catalog/clean-up issues with export.

> I think my only proviso (now I’m thinking about it more) is that we still
> have a link on docs.o.o, but it goes to the wiki page for the Ops Guide.

Agreed, need to maintain discoverability.

-- 
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] An idea about how to deploy multi Nova cells v2 using devstack

2017-06-22 Thread David Mckein
Hello,

I would like to experiment with multiple cells under Nova cells v2. I found
some references about Nova cells v2, but they are not about multiple cells and
how to deploy them.
I really want to know if there is a document or idea about how to deploy
multiple cells with Nova cells v2.
Please let me know what I can refer to and what is going on.


Yours Truly
David Mckein
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is it possible to configure ssl options for glance-api?

2017-06-22 Thread Avery Rozar
It does not appear that the glance-api.conf file has any SSL or WSGI
options. I'd like to disable SSLv2 and TLS 1.0/1.1, but the options do not
appear to be available. Is this something that can be done?
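
(One common way to get this kind of control, independent of whatever
glance-api itself exposes, is to terminate TLS in a proxy in front of the API
and restrict protocols there. A minimal Apache mod_ssl sketch, assuming
mod_proxy/mod_proxy_http are enabled, glance-api is bound to 127.0.0.1:9292,
and the listen address and certificate paths are placeholders:

Listen 203.0.113.10:9292
<VirtualHost 203.0.113.10:9292>
    SSLEngine on
    # Placeholder paths -- substitute your real certificate and key
    SSLCertificateFile    /etc/ssl/certs/glance.crt
    SSLCertificateKeyFile /etc/ssl/private/glance.key
    # Reject SSLv3 and TLS 1.0/1.1; SSLv2 is no longer built into modern
    # OpenSSL, so there is nothing left to disable for it
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    ProxyPass        / http://127.0.0.1:9292/
    ProxyPassReverse / http://127.0.0.1:9292/
</VirtualHost>

This is only one approach, not glance-specific guidance.)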

Thanks
Avery
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-22 Thread Lingxian Kong
I like the idea, which means some projects lacking resources could benefit
more from the whole community :)


Cheers,
Lingxian Kong (Larry)

On Fri, Jun 23, 2017 at 5:57 AM, Mike Perez  wrote:

> Hey all,
>
> In the community wide goals, we started as a group discussing goals at the
> OpenStack Forum. Then we brought those ideas to the mailing list to
> continue the discussion and include those that were not able to be at the
> forum. The discussions help the TC decide on what goals we will do for the
> Queens release. The goals that have the most support so far are:
>
> 1) Split Tempest plugins into separate repos/projects [1]
> 2) Move policy and policy docs into code [2]
>
> In the recent TC meeting [3] it was recognized that goals in Pike haven't
> been going as smoothly and are not being completed. There will be a follow-up
> thread to cover gathering feedback in an etherpad later, but for now the TC
> has discussed potential actions to improve completing goals in Queens.
>
> An idea that came from the meeting was creating a role of "Champions", who
> are the drum beaters to get a goal done by helping projects with tracking
> status and sometimes doing code patches. These would be interested
> volunteers who have a good understanding of their selected goal and its
> implementation to be a trusted person.
>
> What do people think before we bikeshed on the name? Would having a
> champion volunteer for each goal help? Are there ideas that weren't
> mentioned in the TC meeting [3]?
>
> [1] - https://governance.openstack.org/tc/goals/queens/
> split-tempest-plugins.html
> [2] - https://www.mail-archive.com/openstack-dev@lists.
> openstack.org/msg106392.html
> [3] - http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-
> 06-20-20.01.log.html#l-10
>
> —
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-22 Thread Sean McGinnis
> >
> > An idea that came from the meeting was creating a role of "Champions",
> > who are the drum beaters to get a goal done by helping projects with
> > tracking status and sometimes doing code patches. These would be
> > interested volunteers who have a good understanding of their selected
> > goal and its implementation to be a trusted person.
> >
> I like this idea. Some projects might have existing context about a
> particular goal built up before it's even proposed, others might not. I
> think this will help share knowledge across the projects that understand
> the goal with projects who might not be as familiar with it (even though
> the community goal proposal process attempts to fix that).
> 
> Is the role of a goal "champion" limited to a single person? Can it be
> distributed across multiple people, provided actions are well communicated?

I like this idea, and I would think it would be "one or more champions" in
some cases. I thought the uWSGI goal was a great example of the need for
this. There were some that understood this very well and what needed to
happen in each project. Some projects understood this too and were able to
make the change easily, while others had (have) to do a lot of catching up
to understand what needs to be done.

Rather than making a bunch of people take this time to get up to speed, I
think it makes a lot of sense to be able to have a few people just take
care of it. Or at least be a good, and identified, resource for others to
go to to understand what needs to be done.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-22 Thread gordon chung


On 22/06/17 04:23 PM, mate...@mailbox.org wrote:
> Hello everyone !
>
> I'm sorry that I'm disturbing you, but I was sent here from the
> openstack-operators ML.
> On my Mitaka test stack I installed Gnocchi as the database for measurements,
> but I have problems with the
> API part. Firstly, I ran it directly by executing gnocchi-api -p 8041. I noted
> the warning message and later reran the API
> using the uwsgi daemon. The problem I'm faced with is connection errors
> that appear in ceilometer-collector.log
> approximately every 5-10 minutes:
>
> 2017-06-22 12:54:09.751 1846835 ERROR ceilometer.dispatcher.gnocchi 
> ConnectFailure: Unable to establish connection to ht
> tp://10.10.10.69:8041/v1/resource/generic/c900fd60-0b65-57b5-a481-
> eaee8e116312/metric/network.incoming.bytes.rate/measures


is this failing on all your requests or just some? do you have data in 
your gnocchi?

>
> I run uwsgi with the following config:
>
> [uwsgi]
> #http-socket = 127.0.0.1:8000
> http-socket = 10.10.10.69:8041

this should work but i imagine it's not behind a proxy so you could use 
http instead of http-socket.
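
(for reference, that change is just one line in the [uwsgi] section, e.g.:

[uwsgi]
# 'http' spawns uwsgi's own HTTP router in front of the workers, which is
# the usual choice when clients connect to uwsgi directly; 'http-socket'
# makes the workers parse HTTP themselves and is normally used behind a
# proxy that forwards requests to them
http = 10.10.10.69:8041

keeping the rest of the quoted config as is.)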

>
> # Set the correct path depending on your installation
> wsgi-file = /usr/local/bin/gnocchi-api
> logto = /var/log/gnocchi/gnocchi-uwsgi.log
>
> master = true
> die-on-term = true
> threads = 1
> # Adjust based on the number of CPU
> processes = 5
> enabled-threads = true
> thunder-lock = true
> plugins = python
> buffer-size = 65535
> lazy-apps = true
>
>
> I don't understand why this happens.
> Maybe I should point wsgi-file to
> /usr/local/lib/python2.7/dist-packages/gnocchi/rest/app.wsgi ?

/usr/local/bin/gnocchi-api is correct... assuming it's in that path and 
not /usr/bin/gnocchi-api

> From the uwsgi manual I read that direct parsing of HTTP is slow. So maybe I
> need to use Apache with the uwsgi mod?
>

not sure about this part. do you have a lot of metrics being pushed to 
gnocchi? you can minimise connection requirements by setting 
batch_size/batch_timeout for collector (i think mitaka should support 
this?). i believe in the gate we have 2 processes assigned to uwsgi so 5 
should be sufficient.
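
(a sketch of what that would look like in ceilometer.conf, assuming the
batching options are present in your mitaka packages -- option names from
memory, so double-check them against your version's config reference:

[collector]
# number of samples to accumulate before dispatching in one request
batch_size = 50
# seconds to wait before flushing an incomplete batch
batch_timeout = 5
)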

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-22 Thread Armando M.
On 22 June 2017 at 17:24, Édouard Thuleau  wrote:

> Hi Armando,
>
> I did not open any bug report. But if a core plugin implements only
> the NeutronPluginBaseV2 interface [1] and not the NeutronDbPluginV2
> interface [2], most of the service plugins on that list will be
> initialized without any errors (only the timestamp plugin fails to
> initialize because it tries to do DB stuff in its constructor [3]).
> And all API extensions of those service plugins are listed as supported,
> but none of them works. Resources are not extended (tag, revision,
> auto-allocate) or some API extensions return 404
> (network-ip-availability or flavors).
>
> What I proposed is to improve all the service plugins on that list
> to support pluggable backend drivers (thanks to the Neutron
> service driver mechanism [4]) and to use by default a driver based on
> the Neutron DB (like it's implemented currently). That would permit a core
> plugin which does not implement the Neutron DB model to provide its own
> driver. But until all service plugins are fixed, I proposed a
> workaround to disable them.
>

I would recommend against the workaround of disabling them because of the
stated rationale.

Can you open a bug report, potentially when you're ready to file a fix (or
enable someone else to take ownership of the fix)? This way we can have a
more effective conversation either on the bug report or code review.

Thanks,
Armando


>
> [1] https://github.com/openstack/neutron/blob/master/neutron/
> neutron_plugin_base_v2.py#L30
> [2] https://github.com/openstack/neutron/blob/master/neutron/
> db/db_base_plugin_v2.py#L124
> [3] https://github.com/openstack/neutron/blob/master/neutron/
> services/timestamp/timestamp_plugin.py#L32
> [4] https://github.com/openstack/neutron/blob/master/neutron/
> services/service_base.py#L27
>
> Édouard.
>
> On Thu, Jun 22, 2017 at 12:29 AM, Armando M.  wrote:
> >
> >
> > On 21 June 2017 at 17:40, Édouard Thuleau 
> wrote:
> >>
> >> Hi,
> >>
> >> @Chaoyi,
> >> I don't want to change the core plugin interface. But I'm not sure we
> >> are talking about the same interface. I had a very quick look into the
> >> Tricircle code and I think it uses the NeutronDbPluginV2 interface [1]
> >> which implements the Neutron DB model. Our Contrail Neutron plugin
> >> implements the NeutronPluginBaseV2 interface [2]. Anyway,
> >> NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
> >> Thanks for the pointer to the stadium paragraph.
> >
> >
> > Is there any bug report that captures the actual error you're facing?
> Out of
> > the list of plugins that have been added to that list over time, most
> work
> > just exercising the core plugin API, and we can look into the ones that
> > don't to figure out whether we overlooked some design abstractions during
> > code review.
> >
> >>
> >>
> >> @Kevin,
> >> Service plugins loaded by default are defined in a constant list [4]
> >> and I don't see how I can remove a default service plugin from being
> >> loaded [5].
> >>
> >> [1]
> >> https://github.com/openstack/tricircle/blob/master/
> tricircle/network/central_plugin.py#L128
> >> [2]
> >> https://github.com/Juniper/contrail-neutron-plugin/blob/
> master/neutron_plugin_contrail/plugins/opencontrail/
> contrail_plugin_base.py#L113
> >> [3]
> >> https://github.com/openstack/neutron/blob/master/neutron/
> db/db_base_plugin_v2.py#L125
> >> [4]
> >> https://github.com/openstack/neutron/blob/master/neutron/
> plugins/common/constants.py#L43
> >> [5]
> >> https://github.com/openstack/neutron/blob/master/neutron/
> manager.py#L190
> >>
> >> Édouard.
> >>
> >> On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton 
> wrote:
> >> > Why not just delete the service plugins you don't support from the
> >> > default
> >> > plugins dict?
> >> >
> >> > On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau
> >> > 
> >> > wrote:
> >> >>
> >> >> Ok, we would like to help on that. How can we start?
> >> >>
> >> >> I think the issue I raise in that thread must be the first point to
> >> >> address, and my second proposition seems to be the correct one. What
> >> >> do you think?
> >> >> But it will need some time, and I'm not sure we'll be able to fix all
> >> >> service plugins loaded by default before the next Pike release.
> >> >>
> >> >> I'd like to propose a workaround until all default service plugins
> >> >> are compatible with non-DB core plugins. We can continue to load that
> >> >> default service plugin list but authorize a core plugin to disable
> >> >> it completely with a private attribute on the core plugin class, like
> >> >> it's done for bulk/pagination/sorting operations.
> >> >>
> >> >> Of course, we need to add the ability to report any regression on
> >> >> that. I think unit tests will help and we can also work on a
> >> >> functional test based on a fake non-DB core plugin.
> >> >>
> >> >> Regards,
> >> 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Zane Bitter

(Top posting. Deal with it ;)

You're both right!

Making OpenStack monolithic is not the answer. In fact, rearranging Git 
repos has nothing to do with the answer.


But back in the day we had a process (incubation) for adding stuff to 
OpenStack that it made sense to depend on being there. It was a highly 
imperfect process. We got rid of that process with the big tent reform, 
but didn't really replace it with anything at all. Tags never evolved 
into a replacement as I hoped they would.


So now we have a bunch of things that are integral to building a 
"Kubernetes-like experience for application developers" - secret 
storage, DNS, load balancing, asynchronous messaging - that exist but 
are not in most clouds. (Not to mention others like fine-grained 
authorisation control that are completely MIA.)


Instead of trying to drive adoption of all of that stuff, we are either 
just giving up or reinventing bits of it, badly, in multiple places. The 
biggest enemy of "do one thing and do it well" is when a thing that you 
need to do was chosen by a project in another silo as their "one thing", 
but you don't want to just depend on that project because it's not 
widely adopted.


I'm not saying this is an easy problem. It's something that the 
proprietary public cloud providers don't face: if you have only one 
cloud then you can just design everything to be as tightly integrated as 
it needs to be. When you have multiple clouds and the components are 
optional you have to do a bit more work. But if those components are 
rarely used at all then you lose the feedback loop that helps create a 
single polished implementation and everything else has to choose between 
not integrating, or implementing just the bits it needs itself so that 
whatever smaller feedback loop does manage to form, the benefits are 
contained entirely within the silo. OpenStack is arguably the only cloud 
project that has to deal with this. (Azure is also going into the same 
market, but they already have the feedback loop set up because they run 
their own public cloud built from the components.) Figuring out how to 
empower the community to solve this problem is our #1 governance concern 
IMHO.


In my view, one of the keys is to stop thinking of OpenStack as an 
abstraction layer over a bunch of vendor technologies. If you think of 
Nova as an abstraction layer over libvirt/Xen/HyperV, and Keystone as an 
abstraction layer over LDAP/ActiveDirectory, and Cinder/Neutron as an 
abstraction layer over a bunch of storage/network vendors, then two 
things will happen. The first is unrelenting "pressure from vendors to 
add yet-another-specialized-feature to the codebase" that you won't be 
able to push back against because you can't point to a competing vision. 
And the second is that you will never build an integrated, 
application-centric cloud, because the integration bit needs to happen 
at the layer above the backends we are abstracting.


We need to think of those things as the compute, authn, block storage 
and networking components of an integrated, application-centric cloud. 
And to remember that *by no means* are those the only components it will 
need - "The mission of Kubernetes is much smaller than OpenStack"; 
there's a lot we need to do.


So no, the strength of k8s isn't in having a monolithic git repo (and I 
don't think that's what Kevin was suggesting). That's actually a 
slow-motion train-wreck waiting to happen. Its strength is being able to 
do all of this stuff and still be easy enough to install, so that 
there's no question of trying to build bits of it without relying on 
shared primitives.


cheers,
Zane.

On 22/06/17 13:05, Jay Pipes wrote:

On 06/22/2017 11:59 AM, Fox, Kevin M wrote:

My $0.02.

That view of dependencies is why Kubernetes development is outpacing 
OpenStacks and some users are leaving IMO. Not trying to be mean here 
but trying to shine some light on this issue.


Kubernetes at its core has essentially something kind of equivalent to 
keystone (k8s rbac), nova (container mgmt), cinder 
(pv/pvc/storageclasses), heat with convergence 
(deployments/daemonsets/etc), barbican (secrets), designate 
(kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops dont 
have to work hard to get all of it, users can assume its all there, 
and devs don't have many silo's to cross to implement features that 
touch multiple pieces.


I think it's kind of hysterical that you're advocating a monolithic 
approach when the thing you're advocating (k8s) is all about enabling 
non-monolithic microservices architectures.


Look, the fact of the matter is that OpenStack's mission is larger than 
that of Kubernetes. And to say that "Ops don't have to work hard" to get 
and maintain a Kubernetes deployment (which, frankly, tends to be dozens 
of Kubernetes deployments, one for each tenant/project/namespace) is 
completely glossing over the fact that by abstracting away the 
infrastructure (k8s' "cloud provider" 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Fox, Kevin M
No, I'm not necessarily advocating a monolithic approach.

I'm saying that they have decided to start with functionality and accept what's
needed to get the task done. There aren't really such strong walls between the
various pieces of functionality, rbac/secrets/kubelet/etc. They don't spawn off a
whole new project just to add functionality; they do so only when needed. They
also don't balk at one feature depending on another.

rbac's important, so they implemented it. ssl cert management was important, so
they added that. Adding a feature that restricts secret downloads to only the
physical nodes that need them could then reuse the rbac system and ssl cert
management.

Their SIGs are more oriented to features/functionality (or categories thereof),
not so much to specific components. We need to do X. X may involve changes to
components A and B.

OpenStack now tends to start with A and B, and we try to work backwards towards
implementing X, which is hard due to the strong walls and unclear ownership of
the feature. And the general solution has been to try and make C but not commit
to C being in the core, so users can't depend on it, which hasn't proven to be a
very successful pattern.

You're right, they are breaking up their code base as needed, like nova did. I'm
coming around to that being a pretty good approach to some things. Starting
things is simpler, and if it ends up not needing its own whole project, then it
doesn't get one. If it needs one, then it gets one. It's not, by default, "start a
whole new project with a db user, db schema, api, scheduler, etc." And the project
might not end up with daemons split up in exactly the way you would expect if
you prematurely optimized by breaking off a project without knowing exactly how
it might integrate with everything else.

Maybe the porcelain API that's been discussed for a while is part of the
solution. Initial stuff can be prototyped/started there, then broken off as
needed into separate projects and moved around without the user needing to know
where it ends up.

You're right that OpenStack's scope is much greater, and I think that the commons
are even more important in that case. If it doesn't have a solid base, every
project has to re-implement its own base. That takes a huge amount of manpower
all around. It's not sustainable.

I guess we've gotten pretty far away from discussing Trove at this point.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, June 22, 2017 10:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

On 06/22/2017 11:59 AM, Fox, Kevin M wrote:
> My $0.02.
>
> That view of dependencies is why Kubernetes development is outpacing 
> OpenStacks and some users are leaving IMO. Not trying to be mean here but 
> trying to shine some light on this issue.
>
> Kubernetes at its core has essentially something kind of equivalent to 
> keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses), 
> heat with convergence (deployments/daemonsets/etc), barbican (secrets), 
> designate (kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops 
> dont have to work hard to get all of it, users can assume its all there, and 
> devs don't have many silo's to cross to implement features that touch 
> multiple pieces.

I think it's kind of hysterical that you're advocating a monolithic
approach when the thing you're advocating (k8s) is all about enabling
non-monolithic microservices architectures.

Look, the fact of the matter is that OpenStack's mission is larger than
that of Kubernetes. And to say that "Ops don't have to work hard" to get
and maintain a Kubernetes deployment (which, frankly, tends to be dozens
of Kubernetes deployments, one for each tenant/project/namespace) is
completely glossing over the fact that by abstracting away the
infrastructure (k8s' "cloud provider" concept), Kubernetes developers
simply get to ignore some of the hardest and trickiest parts of operations.

So, let's try to compare apples to apples, shall we?

It sounds like the end goal that you're advocating -- more than anything
else -- is an easy-to-install package of OpenStack services that
provides a Kubernetes-like experience for application developers.

I 100% agree with that goal. 100%.

But pulling Neutron, Cinder, Keystone, Designate, Barbican, and Octavia
back into Nova is not the way to do that. You're trying to solve a
packaging and installation problem with a code structure solution.

In fact, if you look at the Kubernetes development community, you see
the *opposite* direction being taken: they have broken out and are
actively breaking out large pieces of the Kubernetes repository/codebase
into separate repositories and addons/plugins. And this is being done to
*accelerate* development of Kubernetes in very much the same way that
splitting services out of Nova was done to accelerate the development of
those various pieces of infrastructure 

Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-22 Thread Doug Hellmann
Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
> Hi everyone,
> 
> Doug and I have written up a spec following on from the conversation [0] that 
> we had regarding the documentation publishing future.
> 
> Please take the time out of your day to review the spec as this affects 
> *everyone*.
> 
> See: https://review.openstack.org/#/c/472275/
> 
> I will be PTO from the 9th – 19th of June. If you have any pressing concerns, 
> please email me and I will get back to you as soon as I can, or, email Doug 
> Hellmann and hopefully he will be able to assist you.
> 
> Thanks,
> 
> Alex
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html

Andreas pointed out that the new documentation job will behave a
little differently from the old setup, and thought I should mention
it so that people aren't surprised.

The new job is configured to update the docs for all repos every
time a patch is merged, not just when we tag releases. The server
projects have been working that way, but this is different for some
of the libraries, especially the clients.

I will be going back and adding a step to build the docs when we
tag releases after the move actually starts, so that we can link
to docs for specific versions of projects. That change will be
transparent to everyone else, so I have it on the list for after
the migration is under way.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-22 Thread mate200
Hello everyone !

I'm sorry that I'm disturbing you, but I was sent here from the
openstack-operators ML.
On my Mitaka test stack I installed Gnocchi as the database for measurements,
but I have problems with the API part. Firstly, I ran it directly by executing
gnocchi-api -p 8041. I noted the warning message and later reran the API using
the uwsgi daemon. The problem I'm faced with is connection errors that appear
in ceilometer-collector.log approximately every 5-10 minutes:

2017-06-22 12:54:09.751 1846835 ERROR ceilometer.dispatcher.gnocchi 
ConnectFailure: Unable to establish connection to ht
tp://10.10.10.69:8041/v1/resource/generic/c900fd60-0b65-57b5-a481-
eaee8e116312/metric/network.incoming.bytes.rate/measures

I run uwsgi with the following config:

[uwsgi]
#http-socket = 127.0.0.1:8000
http-socket = 10.10.10.69:8041

# Set the correct path depending on your installation
wsgi-file = /usr/local/bin/gnocchi-api
logto = /var/log/gnocchi/gnocchi-uwsgi.log

master = true
die-on-term = true
threads = 1
# Adjust based on the number of CPU
processes = 5
enabled-threads = true
thunder-lock = true
plugins = python
buffer-size = 65535
lazy-apps = true


I don't understand why this happens.
Maybe I should point wsgi-file to
/usr/local/lib/python2.7/dist-packages/gnocchi/rest/app.wsgi ?
From the uwsgi manual I read that direct parsing of HTTP is slow. So maybe I need
to use Apache with the uwsgi mod?



Thanks in advance.



-- 
Best regards,
Mate200

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Rochelle Grober
Thanks for the clarification.  Yeah.  No interop interaction  to see here.  
These are not the api's you are looking for;-)

I think Chris Dent's response  about extending the gabbi based tests is great.  
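
For anyone who hasn't seen them, those placement tests are gabbi YAML files; a
minimal sketch of the style (the fixture name and auth header here are
illustrative, not the exact ones used in the nova tree):

fixtures:
    - APIFixture            # illustrative fixture name

defaults:
    request_headers:
        x-auth-token: admin  # illustrative; the real tests wire up their own auth

tests:
  - name: list resource providers
    GET: /resource_providers
    status: 200
    response_json_paths:
        $.resource_providers: []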

I'm a firm believer in never discouraging anyone from writing more tests, 
especially when current coverage isn't complete.  But that doesn't mean I want 
them in the gate.  But also, since developers gain a lot of understanding 
through reading code, the test code itself could help devs better understand 
the extent and limitations of the placement api's.  Or not.

I just can't keep from encouraging devs to write tests.

--Rocky
 
> From: Sean Dague [mailto:s...@dague.net]
> On 06/22/2017 01:22 PM, Matt Riedemann wrote:
> 
> > Rocky, we have tests, we just don't have API samples for documentation
> > purposes like in the compute API reference docs.
> >
> > This doesn't have anything to do with interop guidelines, and it
> > wouldn't, since the Placement APIs are all admin-only and interop is
> > strictly about non-admin APIs.
> 
> I think the other important thing to remember is that the consumers of the
> placement api are currently presumed to be other OpenStack projects.
> Definitely not end users. So the documentation priority of providing lots of
> examples so that people don't need to look at source code is not as high.
> 
> I'm firmly in the camp that sample request / response is a good thing, but
> from a priorities perspective it's way more important to get that on end user
> APIs than quasi internal ones.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Hongbin Lu


> -Original Message-
> From: Zane Bitter [mailto:zbit...@redhat.com]
> Sent: June-20-17 4:57 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect
> Trove
> 
> On 20/06/17 11:45, Jay Pipes wrote:
> > Good discussion, Zane. Comments inline.
> 
> ++
> 
> > On 06/20/2017 11:01 AM, Zane Bitter wrote:
> >> On 20/06/17 10:08, Jay Pipes wrote:
> >>> On 06/20/2017 09:42 AM, Doug Hellmann wrote:
>  Does "service VM" need to be a first-class thing?  Akanda creates
>  them, using a service user. The VMs are tied to a "router" which
> is
>  the billable resource that the user understands and interacts with
>  through the API.
> >>>
> >>> Frankly, I believe all of these types of services should be built
> as
> >>> applications that run on OpenStack (or other) infrastructure. In
> >>> other words, they should not be part of the infrastructure itself.
> >>>
> >>> There's really no need for a user of a DBaaS to have access to the
> >>> host or hosts the DB is running on. If the user really wanted that,
> >>> they would just spin up a VM/baremetal server and install the thing
> >>> themselves.
> >>
> >> Hey Jay,
> >> I'd be interested in exploring this idea with you, because I think
> >> everyone agrees that this would be a good goal, but at least in my
> >> mind it's not obvious what the technical solution should be.
> >> (Actually, I've read your email a bunch of times now, and I go back
> >> and forth on which one you're actually advocating for.) The two
> >> options, as I see it, are as follows:
> >>
> >> 1) The database VMs are created in the user's tena^W project. They
> >> connect directly to the tenant's networks, are governed by the
> user's
> >> quota, and are billed to the project as Nova VMs (on top of whatever
> >> additional billing might come along with the management services). A
> >> [future] feature in Nova (https://review.openstack.org/#/c/438134/)
> >> allows the Trove service to lock down access so that the user cannot
> >> actually interact with the server using Nova, but must go through
> the
> >> Trove API. On a cloud that doesn't include Trove, a user could run
> >> Trove as an application themselves and all it would have to do
> >> differently is not pass the service token to lock down the VM.
> >>
> >> alternatively:
> >>
> >> 2) The database VMs are created in a project belonging to the
> >> operator of the service. They're connected to the user's network
> >> through , and isolated from other users' databases running in
> >> the same project through  magic?>.
> >> Trove has its own quota management and billing. The user cannot
> >> interact with the server using Nova since it is owned by a different
> >> project. On a cloud that doesn't include Trove, a user could run
> >> Trove as an application themselves, by giving it credentials for
> >> their own project and disabling all of the cross-tenant networking
> stuff.
> >
> > None of the above :)
> >
> > Don't think about VMs at all. Or networking plumbing. Or volume
> > storage or any of that.
> 
> OK, but somebody has to ;)
> 
> > Think only in terms of what a user of a DBaaS really wants. At the
> end
> > of the day, all they want is an address in the cloud where they can
> > point their application to write and read data from.
> >
> > Do they want that data connection to be fast and reliable? Of course,
> > but how that happens is irrelevant to them
> >
> > Do they want that data to be safe and backed up? Of course, but how
> > that happens is irrelevant to them.
> 
> Fair enough. The world has changed a lot since RDS (which was the model
> for Trove) was designed, it's certainly worth reviewing the base
> assumptions before embarking on a new design.
> 
> > The problem with many of these high-level *aaS projects is that they
> > consider their user to be a typical tenant of general cloud
> > infrastructure -- focused on launching VMs and creating volumes and
> > networks etc. And the discussions around the implementation of these
> > projects always comes back to minutia about how to set up secure
> > communication channels between a control plane message bus and the
> > service VMs.
> 
> Incidentally, the reason that discussions always come back to that is
> because OpenStack isn't very good at it, which is a huge problem not
> only for the *aaS projects but for user applications in general running
> on OpenStack.
> 
> If we had fine-grained authorisation and ubiquitous multi-tenant
> asynchronous messaging in OpenStack then I firmly believe that we, and
> application developers, would be in much better shape.
> 
> > If you create these projects as applications that run on cloud
> > infrastructure (OpenStack, k8s or otherwise),
> 
> I'm convinced there's an interesting idea here, but the terminology
> you're using doesn't really capture it. When you say 'as applications
> that run on cloud infrastructure', it sounds like you mean they should
> run in a 

Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Jeremy Stanley
On 2017-06-22 06:11:39 -0400 (-0400), Sean Dague wrote:
[...]
> It doesn't look like either (pinned repos or topics) are currently
> available over the API (topics in get format are experimental, but no
> edit as of yet). The pinned repositories aren't such a big deal, we're
> talking a handful here. The topics / tags maintenance would be more, but
> those aren't changing so fast that I think they'd be too unwieldy to
> keep close.

Unfortunately I came to the same (API) conclusions while browsing
their dev docs yesterday. It looks like they're relatively on the
ball with implementing new objects and methods in their v4
(GraphQL-based) API on request, but someone would need to write up
the use cases and monitor for implementation... and then probably
mirror those into a popular SDK before we could reasonably automate.

> As I have strong feelings that this all would help, I would be happy to
> volunteer to manually update that information. Just need enough access
> to do that.

Much appreciated (I know how busy you are already!) and this
compromise was also the conclusion I was ending up at: volunteer(s)
from the community to help curate and maintain our GH presence. I
know I said the same in IRC but repeating for the sake of the ML:
it's become increasingly apparent that GH is first and foremost a
social media platform, so it makes sense to start treating it
similarly to our community's presence on other such similar
platforms rather than relying solely on the Infrastructure team
there.

Just because I'm PTL doesn't mean I'm comfortable speaking on behalf
of the rest of the team where this is concerned, so I'll first get
some feedback before I agree for sure (but it certainly seems
reasonable to me at least). Thanks again!
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-22 Thread Lance Bragstad


On 06/22/2017 12:57 PM, Mike Perez wrote:
> Hey all,
>
> In the community wide goals, we started as a group discussing goals at
> the OpenStack Forum. Then we brought those ideas to the mailing list
> to continue the discussion and include those that were not able to be
> at the forum. The discussions help the TC decide on what goals we will
> do for the Queens release. The goals that have the most support so far
> are:
>
> 1) Split Tempest plugins into separate repos/projects [1]
> 2) Move policy and policy docs into code [2]
>
> In the recent TC meeting [3] it was recognized that goals in Pike
> haven't been going as smoothly and are not being completed. There will be
> a follow up thread to cover gathering feedback in an etherpad later,
> but for now the TC has discussed potential actions to improve
> completing goals in Queens.
>
> An idea that came from the meeting was creating a role of "Champions",
> who are the drum beaters to get a goal done by helping projects with
> tracking status and sometimes doing code patches. These would be
> interested volunteers who have a good understanding of their selected
> goal and its implementation to be a trusted person.
>
> What do people think before we bikeshed on the name? Would having a
> champion volunteer for each goal help? Are there ideas that weren't
> mentioned in the TC meeting [3]?
I like this idea. Some projects might have existing context about a
particular goal built up before it's even proposed, others might not. I
think this will help share knowledge across the projects that understand
the goal with projects who might not be as familiar with it (even though
the community goal proposal process attempts to fix that).

Is the role of a goal "champion" limited to a single person? Can it be
distributed across multiple people, provided actions are well communicated?
>
> [1]
> - https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
> [2]
> - 
> https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg106392.html
> [3]
> - 
> http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-06-20-20.01.log.html#l-10
>
> —
> Mike Perez
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Jeremy Stanley
On 2017-06-22 05:54:41 -0400 (-0400), Sean Dague wrote:
> On 06/22/2017 04:33 AM, Thierry Carrez wrote:
[...]
> > Jeremy is right that the GitHub mirroring goes beyond an infrastructure
> > service: it's a marketing exercise, an online presence more than a
> > technical need. As such it needs to be curated, and us doing that for
> > "projects that are not official but merely hosted" is an anti-feature.
> > No real value for the hosted project, and extra confusion as a result.
> 
> Good point, I hadn't thought of that. I'd be totally fine only mirroring
> official projects.

This gets back to the "selective/conditional mirroring" challenge
though: Gerrit's replication plugin at best will let us manually
list repositories to replicate by name or name regex, but
maintaining this would be nontrivial (I suppose we could build a 1k+
line whitelist from deliverables registered in governance, though
we wouldn't get replication for new repos instantly as that would
have to wait for a Gerrit service restart and I'd be worried about
Gerrit sanely handling such a long config file). It _would_ be
possible to write some alternative daemon to monitor Gerrit activity
and pull+push new commits to the mirror, but that's not something I
personally have the bandwidth to write nor would I expect other
Infra team members to prioritize something like that.
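
(For context, the per-remote whitelist this implies in the replication
plugin's replication.config would look roughly like the sketch below; I'm
going from memory of the plugin docs here, so treat the option names as an
assumption to verify:

[remote "github"]
  url = git@github.com:${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*
  # one 'projects' entry per official repo, or a ^regex -- this is where
  # the 1k+ line list mentioned above would live
  projects = openstack/nova
  projects = openstack/neutron
  projects = ^openstack/oslo\..*
)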
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Clint Byrum
tl;dr - I think Trove's successor has a future, but there are two
conflicting ideas presented and Trove should pick one or the other.

Excerpts from Amrith Kumar's message of 2017-06-18 07:35:49 -0400:
> 
> We have learned a lot from v1, and the hope is that we can address that in
> v2. Some of the more significant things that I have learned are:
> 
> - We should adopt a versioned front-end API from the very beginning; making
> the REST API versioned is not a ‘v2 feature’
> 

+1

> - A guest agent running on a tenant instance, with connectivity to a shared
> management message bus is a security loophole; encrypting traffic,
> per-tenant-passwords, and any other scheme is merely lipstick on a security
> hole
>

This is a broad statement, and I'm not sure I understand the actual risk
you're presenting here as "a security loophole".

How else would you administer a database server than through some kind
of agent? Whether that agent is a python daemon of our making, sshd, or
whatever kubernetes component lets you change things, they're all
administrative pieces that sit next to the resource.

> - Reliance on Nova for compute resources is fine, but dependence on Nova VM
> specific capabilities (like instance rebuild) is not; it makes things like
> containers or bare-metal second class citizens
> 

I whole heartedly agree that rebuild is a poor choice for database
servers. In fact, I believe it is a completely non-scalable feature that
should not even exist in Nova.

This is kind of a "we shouldn't be this". What should we be running
database clusters on?

> - A fair portion of what Trove does is resource orchestration; don’t
> reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
> along when Trove got started but that’s not the case today and we have an
> opportunity to fix that now
> 

Yeah. You can do that. I'm not really sure what it gets you at this
level. There was an effort a few years ago to use Heat for Trove and
some other pieces, but they fell short at the point where they had to
ask Heat for a few features like, oddly enough, rebuild confirmation
after test. Also, it increases friction to your project if your project
requires Heat in a cloud. That's a whole new service that one would have
to choose to expose or not to users and manage just for Trove. That's
a massive dependency, and it should come with something significant. I
don't see what it actually gets you when you already have to keep track
of your resources for cluster membership purposes anyway.

> - A similarly significant portion of what Trove does is to implement a
> state-machine that will perform specific workflows involved in implementing
> database specific operations. This makes the Trove taskmanager a stateful
> entity. Some of the operations could take a fair amount of time. This is a
> serious architectural flaw
> 

A state driven workflow is unavoidable if you're going to do cluster
manipulation. So you can defer this off to Mistral or some other
workflow engine, but I don't think it's an architectural flaw _that
Trove does it_. Clusters have states. They have to be tracked. Do that
well and your users will be happy.

> - Tenants should not ever be able to directly interact with the underlying
> storage and compute used by database instances; that should be the default
> configuration, not an untested deployment alternative
> 

Agreed. There's no point in having an "inside the cloud" service if
you're just going to hand them the keys to the VMs and volumes anyway.

The point of something like Trove is to be able to retain control at the
operator level, and only give users the interface you promised,
optimized without the limitations of the cloud.

> - The CI should test all databases that are considered to be ‘supported’
> without excessive use of resources in the gate; better code modularization
> will help determine the tests which can safely be skipped in testing changes
> 

Take the same approach as the other driver-hosting things. If it's
in-tree, it has to have a gate test.

> - Clusters should be first class citizens not an afterthought, single
> instance databases may be the ‘special case’, not the other way around
> 

+1

> - The project must provide guest images (or at least complete tooling for
> deployers to build these); while the project can’t distribute operating
> systems and database software, the current deployment model merely impedes
> adoption
> 

IIRC the project provides dib elements and a basic command line to build
images for your cloud, yes? Has that not worked out?

> - Clusters spanning OpenStack deployments are a real thing that must be
> supported
> 

This is the most problematic thing you asserted. There are two basic
desires I see that drive a Trove adoption:

1) I need database clusters and I don't know how to do them right.
2) I need _high performance/availability/capacity_ databases and my
cloud's standard VM flavors/hosts/networks/disks/etc. stand in the way
of that.

For 

[openstack-dev] [all][api] POST /api-wg/news

2017-06-22 Thread Edward Leafe
Greetings OpenStack community,

Today's meeting was even shorter and sweeter than last week's, as only edleafe 
was present, due to cdent being on PTO and elmiko being on the road. Prior to 
the meeting, though, we had decided that the guideline change for raising the 
minimum microversion was ready for freeze, and so it was indeed frozen. Other 
than that, there really is nothing else to report.

# Newly Published Guidelines

None this week

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version discovery
  Start at https://review.openstack.org/#/c/459405/

* Fix html_last_updated_fmt for Python3
  https://review.openstack.org/475219

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your 
concerns in an email to the OpenStack developer mailing list[1] with the tag 
"[api]" in the subject. In your email, you should include any relevant reviews, 
links, and comments to help guide the discussion of the specific challenge you 
are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-22 Thread Mike Perez
Hey all,

In the community wide goals, we started as a group discussing goals at the
OpenStack Forum. Then we brought those ideas to the mailing list to
continue the discussion and include those that were not able to be at the
forum. The discussions help the TC decide on what goals we will do for the
Queens release. The goals that have the most support so far are:

1) Split Tempest plugins into separate repos/projects [1]
2) Move policy and policy docs into code [2]
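
For anyone who hasn't looked at goal 2 yet, "policy in code" concretely means
registering each rule with oslo.policy as a DocumentedRuleDefault rather than
shipping defaults in a policy.json file. A minimal sketch (the names and rules
here are made up for illustration):

from oslo_policy import policy

rules = [
    policy.DocumentedRuleDefault(
        name='example:get_all',
        check_str='role:admin',
        description='List all example resources.',
        operations=[{'path': '/v2/examples', 'method': 'GET'}],
    ),
]

def list_rules():
    # consumed by the project's policy registration entry point
    return rules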

In the recent TC meeting [3] it was recognized that goals in Pike haven't
been going as smoothly and are not being completed. There will be a follow-up
thread to cover gathering feedback in an etherpad later, but for now the TC
has discussed potential actions to improve completing goals in Queens.

An idea that came from the meeting was creating a role of "Champions", who
are the drum beaters to get a goal done by helping projects with tracking
status and sometimes doing code patches. These would be interested
volunteers who have a good understanding of their selected goal and its
implementation to be a trusted person.

What do people think before we bikeshed on the name? Would having a
champion volunteer for each goal help? Are there ideas that weren't
mentioned in the TC meeting [3]?

[1] -
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2] -
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg106392.html
[3] -
http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-06-20-20.01.log.html#l-10

—
Mike Perez
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Sean Dague
On 06/22/2017 01:22 PM, Matt Riedemann wrote:

> Rocky, we have tests, we just don't have API samples for documentation
> purposes like in the compute API reference docs.
> 
> This doesn't have anything to do with interop guidelines, and it
> wouldn't, since the Placement APIs are all admin-only and interop is
> strictly about non-admin APIs.

I think the other important thing to remember is that the consumers of
the placement api are currently presumed to be other OpenStack projects.
Definitely not end users. So the documentation priority of providing
lots of examples so that people don't need to look at source code is not
as high.

I'm firmly in the camp that sample request / response is a good thing,
but from a priorities perspective it's way more important to get that on
end user APIs than quasi internal ones.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Matt Riedemann

On 6/21/2017 4:28 PM, Rochelle Grober wrote:




From: Matt
On 6/21/2017 7:04 AM, Shewale, Bhagyashri wrote:

I  would like to write functional tests to check the exact req/resp
for each placement API for all supported versions similar

to what is already done for other APIs under
nova/tests/functional/api_sample_tests/api_samples/*.

These request/response json samples can be used by the
api.openstack.org and in the manuals.

There are already functional tests written for placement APIs under
nova/tests/functional/api/openstack/placement,

but these tests don't check the entire HTTP response for each API
for all supported versions.

I think adding such functional tests for checking response for each
placement API would be beneficial to the project.

If there is an interest to create such functional tests, I can file a
new blueprint for this activity.



This has come up before and we don't want to use the same functional API
samples infrastructure for generating API samples for the placement API.
The functional API samples tests are confusing and a steep learning curve for
new contributors (and even long time old tooth contributors still get
confused by them).


I second that you talk with Chris Dent (mentioned below), but I also want to 
encourage you to write tests.  Write API tests that demonstrate *exactly* what 
is allowed and not allowed and verify that whether the api call is constructed 
correctly or not, that the responses are appropriate and correct.  By writing 
these new/extra/improved tests, the Interop guidelines can use these tests to 
improve interop expectations across clouds.  Plus, operators will be able to 
more quickly identify what the problem is when the tests demonstrate the 
problem-response patterns.  And, like you said, knowing what to expect makes 
documenting expected behaviors, for both correct and incorrect uses, much more 
straightforward.  Details are very important when tracking down issues based on 
the responses logged.

I want to encourage you to work with Chris to help expand our tests and their 
specificity and their extent.

Thanks!

--Rocky (with my interop, QA and ops hats on)




Talk with Chris Dent about ideas here for API samples with placement.
He's talked about building something into the gabbi library for this, but I 
don't
know if that's being worked on or not.

Chris is also on vacation for a couple of weeks, just FYI.

--

Thanks,

Matt

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Rocky, we have tests, we just don't have API samples for documentation 
purposes like in the compute API reference docs.


This doesn't have anything to do with interop guidelines, and it 
wouldn't, since the Placement APIs are all admin-only and interop is 
strictly about non-admin APIs.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Jay Pipes

On 06/22/2017 11:59 AM, Fox, Kevin M wrote:

My $0.02.

That view of dependencies is why Kubernetes development is outpacing OpenStacks 
and some users are leaving IMO. Not trying to be mean here but trying to shine 
some light on this issue.

Kubernetes at its core has essentially something kind of equivalent to keystone 
(k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses), heat with 
convergence (deployments/daemonsets/etc), barbican (secrets), designate 
(kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops dont have to 
work hard to get all of it, users can assume its all there, and devs don't have 
many silo's to cross to implement features that touch multiple pieces.


I think it's kind of hysterical that you're advocating a monolithic 
approach when the thing you're advocating (k8s) is all about enabling 
non-monolithic microservices architectures.


Look, the fact of the matter is that OpenStack's mission is larger than 
that of Kubernetes. And to say that "Ops don't have to work hard" to get 
and maintain a Kubernetes deployment (which, frankly, tends to be dozens 
of Kubernetes deployments, one for each tenant/project/namespace) is 
completely glossing over the fact that by abstracting away the 
infrastructure (k8s' "cloud provider" concept), Kubernetes developers 
simply get to ignore some of the hardest and trickiest parts of operations.


So, let's try to compare apples to apples, shall we?

It sounds like the end goal that you're advocating -- more than anything 
else -- is an easy-to-install package of OpenStack services that 
provides a Kubernetes-like experience for application developers.


I 100% agree with that goal. 100%.

But pulling Neutron, Cinder, Keystone, Designate, Barbican, and Octavia 
back into Nova is not the way to do that. You're trying to solve a 
packaging and installation problem with a code structure solution.


In fact, if you look at the Kubernetes development community, you see 
the *opposite* direction being taken: they have broken out and are 
actively breaking out large pieces of the Kubernetes repository/codebase 
into separate repositories and addons/plugins. And this is being done to 
*accelerate* development of Kubernetes in very much the same way that 
splitting services out of Nova was done to accelerate the development of 
those various pieces of infrastructure code.



This core functionality being combined has allowed them to land features that 
are really important to users but has proven difficult for OpenStack to do 
because of the silo's. OpenStack's general pattern has been, stand up a new 
service for new feature, then no one wants to depend on it so its ignored and 
each silo reimplements a lesser version of it themselves.


I disagree. I believe Kubernetes is able to land features 
that are "really important to users" primarily for the following 
reasons:


1) The Kubernetes technical leadership strongly resists pressure from 
vendors to add yet-another-specialized-feature to the codebase. This 
ability to say "No" pays off in spades with regards to stability and focus.


2) The mission of Kubernetes is much smaller than OpenStack's. If the 
OpenStack community were able to say "OpenStack is a container 
orchestration system", and not "OpenStack is a ubiquitous open source 
cloud operating system", we'd probably be able to deliver features in a 
more focused fashion.



The OpenStack commons then continues to suffer.

We need to stop this destructive cycle.

OpenStack needs to figure out how to increase its commons. Both internally and 
externally. etcd as a common service was a step in the right direction.

I think k8s needs to be another common service all the others can rely on. That 
could greatly simplify the rest of the OpenStack projects as a lot of its 
functionality no longer has to be implemented in each project.


I don't disagree with the goal of being able to rely on Kubernetes for 
many things. But relying on Kubernetes doesn't solve the "I want some 
easy-to-install infrastructure" problem. Nor does it solve the types of 
advanced networking scenarios that the NFV community requires.



We also need a way to break down the silo walls and allow more cross project 
collaboration for features. I fear the new push for letting projects run 
standalone will make this worse, not better, further fracturing OpenStack.


Perhaps you are referring to me with the above? As I said on Twitter, 
"Make your #OpenStack project usable by and useful for things outside of 
the OpenStack ecosystem. Fewer deps. Do one thing well. Solid APIs."


I don't think that the above leads to "further fracturing OpenStack". I 
think it leads to solid, reusable components.


Best,
-jay


Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, June 22, 2017 12:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Andrey Kurilin
2017-06-22 18:59 GMT+03:00 Fox, Kevin M :

> My $0.02.
>
> That view of dependencies is why Kubernetes development is outpacing
> OpenStacks and some users are leaving IMO. Not trying to be mean here but
> trying to shine some light on this issue.
>
> Kubernetes at its core has essentially something kind of equivalent to
> keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses),
> heat with convergence (deployments/daemonsets/etc), barbican (secrets),
> designate (kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops
> dont have to work hard to get all of it, users can assume its all there,
> and devs don't have many silo's to cross to implement features that touch
> multiple pieces.
>
> This core functionality being combined has allowed them to land features
> that are really important to users but has proven difficult for OpenStack
> to do because of the silo's. OpenStack's general pattern has been, stand up
> a new service for new feature, then no one wants to depend on it so its
> ignored and each silo reimplements a lesser version of it themselves.
>
>
Totally agree

The OpenStack commons then continues to suffer.
>
> We need to stop this destructive cycle.
>
> OpenStack needs to figure out how to increase its commons. Both internally
> and externally. etcd as a common service was a step in the right direction.
>
> I think k8s needs to be another common service all the others can rely on.
> That could greatly simplify the rest of the OpenStack projects as a lot of
> its functionality no longer has to be implemented in each project.
>
> We also need a way to break down the silo walls and allow more cross
> project collaboration for features. I fear the new push for letting
> projects run standalone will make this worse, not better, further
> fracturing OpenStack.
>
> Thanks,
> Kevin
> 
> From: Thierry Carrez [thie...@openstack.org]
> Sent: Thursday, June 22, 2017 12:58 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect
> Trove
>
> Fox, Kevin M wrote:
> > [...]
> > If you build a Tessmaster clone just to do mariadb, then you share
> nothing with the other communities and have to reinvent the wheel, yet
> again. Operators load increases because the tool doesn't function like
> other tools.
> >
> > If you rely on a container orchestration engine that's already cross
> cloud that can be easily deployed by user or cloud operator, and fill in
> the gaps with what Trove wants to support, easy management of db's, you get
> to reuse a lot of the commons and the users slight increase in investment
> in dealing with the bit of extra plumbing in there allows other things to
> also be easily added to their cluster. Its very rare that a user would need
> to deploy/manage only a database. The net load on the operator decreases,
> not increases.
>
> I think the user-side tool could totally deploy on Kubernetes clusters
> -- if that was the only possible target that would make it a Kubernetes
> tool more than an open infrastructure tool, but that's definitely a
> possibility. I'm not sure work is needed there though, there are already
> tools (or charts) doing that ?
>
> For a server-side approach where you want to provide a DB-provisioning
> API, I fear that making the functionality depend on K8s would make
> TroveV2/Hoard would not only depend on Heat and Nova, but also depend on
> something that would deploy a Kubernetes cluster (Magnum?), which would
> likely hurt its adoption (and reusability in simpler setups). Since
> databases would just work perfectly well in VMs, it feels like a
> gratuitous dependency addition ?
>
> We generally need to be very careful about creating dependencies between
> OpenStack projects. On one side there are base services (like Keystone)
> that we said it was alright to depend on, but depending on anything else
> is likely to reduce adoption. Magnum adoption suffers from its
> dependency on Heat. If Heat starts depending on Zaqar, we make the
> problem worse. I understand it's a hard trade-off: you want to reuse
> functionality rather than reinvent it in every project... we just need
> to recognize the cost of doing that.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Fox, Kevin M
My $0.02.

That view of dependencies is why Kubernetes development is outpacing OpenStack's 
and some users are leaving IMO. Not trying to be mean here but trying to shine 
some light on this issue.

Kubernetes at its core has essentially something kind of equivalent to keystone 
(k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses), heat with 
convergence (deployments/daemonsets/etc), barbican (secrets), designate 
(kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops don't have to 
work hard to get all of it, users can assume it's all there, and devs don't have 
many silos to cross to implement features that touch multiple pieces.

This core functionality being combined has allowed them to land features that 
are really important to users but has proven difficult for OpenStack to do 
because of the silos. OpenStack's general pattern has been: stand up a new 
service for a new feature, then no one wants to depend on it, so it's ignored and 
each silo reimplements a lesser version of it themselves.

The OpenStack commons then continues to suffer.

We need to stop this destructive cycle.

OpenStack needs to figure out how to increase its commons. Both internally and 
externally. etcd as a common service was a step in the right direction.

I think k8s needs to be another common service all the others can rely on. That 
could greatly simplify the rest of the OpenStack projects as a lot of its 
functionality no longer has to be implemented in each project.

We also need a way to break down the silo walls and allow more cross project 
collaboration for features. I fear the new push for letting projects run 
standalone will make this worse, not better, further fracturing OpenStack.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, June 22, 2017 12:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Fox, Kevin M wrote:
> [...]
> If you build a Tessmaster clone just to do mariadb, then you share nothing 
> with the other communities and have to reinvent the wheel, yet again. 
> Operators load increases because the tool doesn't function like other tools.
>
> If you rely on a container orchestration engine that's already cross cloud 
> that can be easily deployed by user or cloud operator, and fill in the gaps 
> with what Trove wants to support, easy management of db's, you get to reuse a 
> lot of the commons and the users slight increase in investment in dealing 
> with the bit of extra plumbing in there allows other things to also be easily 
> added to their cluster. Its very rare that a user would need to deploy/manage 
> only a database. The net load on the operator decreases, not increases.

I think the user-side tool could totally deploy on Kubernetes clusters
-- if that was the only possible target that would make it a Kubernetes
tool more than an open infrastructure tool, but that's definitely a
possibility. I'm not sure work is needed there though, there are already
tools (or charts) doing that ?

For a server-side approach where you want to provide a DB-provisioning
API, I fear that making the functionality depend on K8s would mean that
TroveV2/Hoard would not only depend on Heat and Nova, but also depend on
something that would deploy a Kubernetes cluster (Magnum?), which would
likely hurt its adoption (and reusability in simpler setups). Since
databases would just work perfectly well in VMs, it feels like a
gratuitous dependency addition ?

We generally need to be very careful about creating dependencies between
OpenStack projects. On one side there are base services (like Keystone)
that we said it was alright to depend on, but depending on anything else
is likely to reduce adoption. Magnum adoption suffers from its
dependency on Heat. If Heat starts depending on Zaqar, we make the
problem worse. I understand it's a hard trade-off: you want to reuse
functionality rather than reinvent it in every project... we just need
to recognize the cost of doing that.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Thierry Carrez
Samuel Cassiba wrote:
>> On Jun 22, 2017, at 03:01, Sean Dague  wrote:
>> The micro repositories for config management and packaging create this
>> overwhelming wall of projects from the outside. I realize that git repos
>> are cheap from a dev perspective, but they are expensive from a concept
>> perspective.
> 
> I ask, then, what about those communities that do advocate one repo per
> subproject? Chef is one such case in which monolithic all-in-one repos used to
> be the mainstay, and are now frowned upon and actively discouraged in the Chef
> community due to real pain felt in finding the right patterns. In the past, we
> (Chef OpenStack) experimented with the One Repo To Rule Them All idea, but 
> that
> didn’t get much traction and wasn’t further fostered. Right now, what works 
> for
> us is a hybrid model of community best practices doing one repo per
> cookbook/subproject and a single “meta” repo that ties the whole thing
> together. Maybe that one repo per subproject pattern is an anti-pattern to
> OpenStack’s use case and we’re now getting to the point of criticality. Past
> iterations of the Chef method include using git submodules and the GitHub
> workflow, as well as One Repo To Rule Them All. They’re in the past, gone and
> left to the ages. Those didn’t work because they tried to be too opinionated,
> or too clever, without looking at the user experience.
> 
> While I agree that the repo creep is real, there has to be a balance. The Chef
> method to OpenStack has been around for a long time in both Chef and OpenStack
> terms, and has generally followed the same pattern of one repo per subproject.
> We still have users[1], most of whom have adopted this pattern and have been 
> in
> production, some for years, myself included. What, I ask, happens to their
> future if Chef were to shake things up and pivot to a One Repo To Rule Them 
> All
> model? Not everyone can pivot, and some would be effectively left to rot with
> what would now be considered tech debt by those closer to upstream. “If it
> ain’t broke, don’t fix it” is still a strong force to contend with, whether we
> like it or not. Providing smooth, clear paths to a production-grade open cloud
> should be the aim, not what the definition of is, is, even if that is what
> comes naturally to groups of highly skilled, highly technical people.

I don't think Sean was advocating for "one repo per project". I just
think he was pointing to the hidden cost of multiplying repositories. I
certainly wouldn't support a push to regroup repositories where it
doesn't make sense, just to make a GitHub organization page
incrementally more navigable.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] Bootstrapping requirements-stable-core

2017-06-22 Thread Thierry Carrez
Tony Breeds wrote:
> Hi All,
>   Recently it's been clear that we need a requirements-stable team.
> Until npw that's been handled by the release managers and the
> stable-maint-core team.
> 
> With the merge of [1] The have the groundwork for that team.  I'd like
> to nominate:
> 
>  * dmllr -- Dirk Mueller
>  * prometheanfire -- Matthew Thode
>  * SeanM -- Sean McGinnis

+1 from me !

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-22 Thread Chris Friesen

On 06/22/2017 01:47 AM, Henning Schild wrote:

Am Wed, 21 Jun 2017 11:40:14 -0600
schrieb Chris Friesen :


On 06/21/2017 10:46 AM, Henning Schild wrote:



As we know from our setup, and as Luiz confirmed - it is _not_
"critical to separate emulator threads for different KVM instances".
They have to be separated from the vcpu-cores but not from each
other. At least not on the "cpuset" basis, maybe "blkio" and
cgroups like that.


I'm reluctant to say conclusively that we don't need to separate
emulator threads since I don't think we've considered all the cases.
For example, what happens if one or more of the instances are being
live-migrated?  The migration thread for those instances will be very
busy scanning for dirty pages, which could delay the emulator threads
for other instances and also cause significant cross-NUMA traffic
unless we ensure at least one core per NUMA-node.


Realtime instances can not be live-migrated. We are talking about
threads that can not even be moved between two cores on one numa-node
without missing a deadline. But your point is good because it could
mean that such an emulator_set - if defined - should not be used for all
VMs.


I'd suggest that realtime instances cannot be live-migrated *while meeting 
realtime commitments*.  There may be reasons to live-migrate realtime instances 
that aren't currently providing service.



Also, I don't think we've determined how much CPU time is needed for
the emulator threads.  If we have ~60 CPUs available for instances
split across two NUMA nodes, can we safely run the emulator threads
of 30 instances all together on a single CPU?  If not, how much
"emulator overcommit" is allowable?


That depends on how much IO your VMs are issuing and can not be
answered in general. All VMs can cause high load with IO/emulation,
rt-VMs are probably less likely to do so.


I think the result of this is that in addition to "rt_emulator_pin_set" you'd 
probably want a config option for "rt_emulator_overcommit_ratio" or something 
similar.
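
To make that concrete, here is a rough sketch of how such an admission check 
could look (both option names are the hypothetical ones discussed above, not 
existing nova.conf options):

    # Hypothetical emulator-thread overcommit check -- illustrative only.
    def can_place_rt_instance(num_rt_instances_on_host,
                              rt_emulator_pin_set,
                              rt_emulator_overcommit_ratio):
        """Allow at most ratio * len(pin_set) realtime instances to share
        the dedicated emulator-thread cores on this host."""
        capacity = len(rt_emulator_pin_set) * rt_emulator_overcommit_ratio
        return (num_rt_instances_on_host + 1) <= capacity

    # e.g. 2 emulator cores and a ratio of 16.0 -> up to 32 RT instances per host
    assert can_place_rt_instance(31, {2, 3}, 16.0)
    assert not can_place_rt_instance(32, {2, 3}, 16.0)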


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] QA meeting cancelled today

2017-06-22 Thread Andrea Frittoli
Hello everyone,

unfortunately I have to cancel the QA meeting today as I won't be able to
attend.
Sorry about the short notice!

Kind regards

andreaf
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Stepping down from core

2017-06-22 Thread Brian Rosmaita
Fei Long:

On behalf of the entire Glance team, thank you for your extensive past
service to Glance.  We hope that you'll be able to find time to work
on Glance again in the future.

cheers,
brian


On Wed, Jun 21, 2017 at 9:18 AM, Erno Kuvaja  wrote:
> On Tue, Jun 20, 2017 at 11:07 AM, Flavio Percoco  wrote:
>> On 20/06/17 09:31 +1200, feilong wrote:
>>>
>>> Hi there,
>>>
>>> I've been a Glance core since 2013 and been involved in the Glance
>>> community even longer, so I care deeply about Glance. My situation right now
>>> is such that I cannot devote sufficient time to Glance, and while as you've
>>> seen elsewhere on the mailing list, Glance needs reviewers, I'm afraid that
>>> keeping my name on the core list is giving people a false impression of how
>>> dire the current Glance personnel situation is. So after discussed with
>>> Glance PTL, I'd like to offer my resignation as a member of the Glance core
>>> reviewer team. Thank you for your understanding.
>>
>>
>> Thanks for being honest and open about the situation. I agree with you that
>> this
>> is the right move.
>>
>> I'd like to thank you for all these years of service and I think it goes
>> without
>> saying that you're welcome back in the team anytime you want.
>>
>> Flavio
>>
>> --
>> @flaper87
>> Flavio Percoco
>>
>
>
> Just want to reinforce what Flavio said. Big thanks for all your time
> and expertise! You're always welcome back if your time so permits.
>
> - Erno
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-22 Thread Édouard Thuleau
Hi Armando,

I did not open any bug report. But if a core plugin implements only
the NeutronPluginBaseV2 interface [1] and not the NeutronDbPluginV2
interface [2], most of the service plugins in that list will be
initialized without any errors (only the timestamp plugin fails to
initialize because it tries to do DB work in its constructor [3]).
And all API extensions of those service plugins are listed as supported,
but none of them works. Resources are not extended (tag, revision,
auto-allocate) and some API extensions return 404
(network-ip-availability or flavors).

What I proposed is to improve all the service plugins in that list
to support pluggable backend drivers (thanks to the Neutron
service driver mechanism [4]), using by default a driver based on
the Neutron DB (as it is implemented today). That would permit a core
plugin which does not implement the Neutron DB model to provide its own
driver. But until all service plugins are fixed, I proposed a
workaround to disable them (see the sketch below).
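
For example, a purely illustrative sketch of that opt-out flag (neither the
attribute nor the helper below exists in Neutron today; all names are made up):

    # Illustrative default service plugin list (the real one lives in
    # neutron/plugins/common/constants.py).
    DEFAULT_SERVICE_PLUGINS = ['tag', 'timestamp', 'revisions', 'flavors',
                               'network_ip_availability', 'auto_allocate']

    class NonDbCorePlugin(object):
        """A core plugin that does not use the Neutron DB models."""
        # Opt-out flag, similar in spirit to the existing private
        # bulk/pagination/sorting support attributes.
        _skip_default_service_plugins = True

    def default_service_plugins_for(core_plugin):
        # The plugin manager would call something like this before loading
        # the default service plugins.
        if getattr(core_plugin, '_skip_default_service_plugins', False):
            return []
        return list(DEFAULT_SERVICE_PLUGINS)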

[1] 
https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L30
[2] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L124
[3] 
https://github.com/openstack/neutron/blob/master/neutron/services/timestamp/timestamp_plugin.py#L32
[4] 
https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27

Édouard.

On Thu, Jun 22, 2017 at 12:29 AM, Armando M.  wrote:
>
>
> On 21 June 2017 at 17:40, Édouard Thuleau  wrote:
>>
>> Hi,
>>
>> @Chaoyi,
>> I don't want to change the core plugin interface. But I'm not sure we
>> are talking about the same interface. I had a very quick look into the
>> tricycle code and I think it uses the NeutronDbPluginV2 interface [1]
>> which implements the Neutron DB model. Our Contrail Neutron plugin
>> implements the NeutronPluginBaseV2 interface [2]. Anyway,
>> NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
>> Thanks for the pointer to the stadium paragraph.
>
>
> Is there any bug report that captures the actual error you're facing? Out of
> the list of plugins that have been added to that list over time, most work
> just exercising the core plugin API, and we can look into the ones that
> don't to figure out whether we overlooked some design abstractions during
> code review.
>
>>
>>
>> @Kevin,
>> Service plugins loaded by default are defined in a contant list [4]
>> and I don't see how I can remove a default service plugin to be loaded
>> [5].
>>
>> [1]
>> https://github.com/openstack/tricircle/blob/master/tricircle/network/central_plugin.py#L128
>> [2]
>> https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py#L113
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L125
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> [5]
>> https://github.com/openstack/neutron/blob/master/neutron/manager.py#L190
>>
>> Édouard.
>>
>> On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton  wrote:
>> > Why not just delete the service plugins you don't support from the
>> > default
>> > plugins dict?
>> >
>> > On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau
>> > 
>> > wrote:
>> >>
>> >> Ok, we would like to help on that. How we can start?
>> >>
>> >> I think the issue I raise in that thread must be the first point to
>> >> address and my second proposition seems to be the correct one. What do
>> >> you think?
>> >> But it will needs some time and not sure we'll be able to fix all
>> >> service plugins loaded by default before the next Pike release.
>> >>
>> >> I like to propose a workaround until all default service plugins will
>> >> be compatible with non-DB core plugins. We can continue to load that
>> >> default service plugins list but authorizing a core plugin to disable
>> >> it completely with a private attribut on the core plugin class like
>> >> it's done for bulk/pagination/sorting operations.
>> >>
>> >> Of course, we need to add the ability to report any regression on
>> >> that. I think unit tests will help and we can also work on a
>> >> functional test based on a fake non-DB core plugin.
>> >>
>> >> Regards,
>> >> Édouard.
>> >>
>> >> On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton 
>> >> wrote:
>> >> > The issue is mainly developer resources. Everyone currently working
>> >> > upstream
>> >> > doesn't have the bandwidth to keep adding/reviewing the layers of
>> >> > interfaces
>> >> > to make the DB optional that go untested. (None of the projects that
>> >> > would
>> >> > use them run a CI system that reports results on Neutron patches.)
>> >> >
>> >> > I think we can certainly accept patches to do the things you are
>> >> > proposing,
>> >> > but there is no guarantee that it won't regress to being DB-dependent
>> >> > 

Re: [openstack-dev] [Watcher] handle multiple data sources in watcher

2017-06-22 Thread Vincent FRANCOISE
Bin Zhou,

IMHO, we should have a new configuration option that introduces weights for each
metrics backend we intend to use. Then, we use the datasource with the
highest weight that actually provides the metric we are requesting. This way,
only one datasource will ever serve a given metric although many datasources
may be involved at once. By doing so, we can provide sensible defaults as to
which datasource has the best expected performance (i.e. prefer Monasca
over Gnocchi wherever possible or vice-versa) and also provide something
that will work straight out of the box.
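
For illustration, a minimal sketch of that selection logic (the weights and
backend objects are assumptions, not Watcher's actual configuration or API):

    # Hypothetical weighted datasource selection -- names are illustrative only.
    DATASOURCE_WEIGHTS = {'monasca': 100, 'gnocchi': 50, 'ceilometer': 10}

    def pick_datasource(metric_name, datasources):
        """Return the highest-weighted datasource that provides metric_name.

        `datasources` maps a backend name to an object exposing list_metrics().
        """
        candidates = [
            (DATASOURCE_WEIGHTS.get(name, 0), name, ds)
            for name, ds in datasources.items()
            if metric_name in ds.list_metrics()
        ]
        if not candidates:
            raise LookupError("no datasource provides %s" % metric_name)
        # Highest weight wins, so only one backend ever serves a given metric.
        return max(candidates)[2]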

What do you think?

Vincent Françoise

From: Bin Zhou 
Sent: 22 June 2017 15:39
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Watcher] handle multiple data sources in watcher

Hi All,

I am working on the blueprint
https://blueprints.launchpad.net/watcher/+spec/watcher-multi-datasource.
It is a good idea to construct an abstract layer and hide data source
details from the consumer of the data. But I have one doubt about the
possibility of using multiple metrics data sources at the same
deployment. Metrics instrumentation is not free, it comes with
overhead of CPU, memory, storage. Production deployment should always
avoid to use multiple monitoring tools for the same metrics.

I propose to have a default monitoring engine in  the watcher
configuration. In case that multiple monitoring engines are
configured, and a user has to collect different metrics from different
monitoring engine, the caller has to specify the engine explicitly to
avoid using the default engine.

Please let me know if you have any concerns or better solutions. Thanks!

Bin Zhou (lakerzhou)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Samuel Cassiba


> On Jun 22, 2017, at 03:01, Sean Dague  wrote:
> 
> On 06/21/2017 09:52 PM, Chris Hoge wrote:
>> 
>>> On Jun 21, 2017, at 2:35 PM, Jeremy Stanley  wrote:
>>> 
>>> On 2017-06-21 13:52:11 -0500 (-0500), Lauren Sell wrote:
>>> [...]
 To make this actionable...Github is just a mirror of our
 repositories, but for better or worse it's the way most people in
 the world explore software. If you look at OpenStack on Github
 now, it’s impossible to tell which projects are official. Maybe we
 could help by better curating the Github projects (pinning some of
 the top projects, using the new new topics feature to put tags
 like openstack-official or openstack-unofficial, coming up with
 more standard descriptions or naming, etc.).
>>> 
>>> I hadn't noticed the pinned repositories option until you mentioned
>>> it: appears they just extended that feature to orgs back in October
>>> (and introduced the topics feature in January). I could see
>>> potentially integrating pinning and topic management into the
>>> current GH API script we run when creating new mirrors
>>> there--assuming these are accessible via their API anyway--and yes
>>> normalizing the descriptions to something less freeform is something
>>> else we'd discussed to be able to drive users back to the official
>>> locations for repositories (or perhaps to the project navigator).
>>> 
>>> I've already made recent attempts to clarify our use of GH in the
>>> org descriptions and linked the openstack org back to the project
>>> navigator too, since those were easy enough to do right off the bat.
>>> 
 Same goes for our repos…if there’s a way we could differentiate
 between official and unofficial projects on this page it would be
 really useful: https://git.openstack.org/cgit/openstack/
>>> 
>>> I have an idea as to how to go about that by generating custom
>>> indices rather than relying on the default one cgit provides; I'll
>>> mull it over.
>>> 
 2) Create a simple structure within the official set of projects
 to provide focus and a place to get started. The challenge (again
 to our success, and lots of great work by the community) is that
 even the official project set is too big for most people to
 follow.
>>> 
>>> This is one of my biggest concerns as well where high-cost (in the
>>> sense of increasingly valuable Infra team member time) solutions are
>>> being tossed around to solve the "what's official?" dilemma, while
>>> not taking into account that the overwhelming majority of active Git
>>> repositories we're hosting _are_ already deliverables for official
>>> teams. I strongly doubt that just labelling the minority as
>>> unofficial will any any way lessen the overall confusion about the
>>> *more than one thousand* official Git repositories we're
>>> maintaining.
>> 
>> Another instance where the horse is out of the barn, but this
>> is one of the reasons why I don’t like it when config-management
>> style efforts are organized as one-to-one mapping of repositories
>> to corresponding project. It created massive sprawl
>> within the ecosystem, limited opportunities for code sharing,
>> and made refactoring a nightmare. I lost count of the number
>> of times we submitted n inconsistent patches to change
>> similar behavior across n+1 projects. Trying to build a library
>> helped but was never as powerful as being able to target a
>> single repository.
> 
> ++
> 
> The micro repositories for config management and packaging create this
> overwhelming wall of projects from the outside. I realize that git repos
> are cheap from a dev perspective, but they are expensive from a concept
> perspective.

I ask, then, what about those communities that do advocate one repo per
subproject? Chef is one such case in which monolithic all-in-one repos used to
be the mainstay, and are now frowned upon and actively discouraged in the Chef
community due to real pain felt in finding the right patterns. In the past, we
(Chef OpenStack) experimented with the One Repo To Rule Them All idea, but that
didn’t get much traction and wasn’t further fostered. Right now, what works for
us is a hybrid model of community best practices doing one repo per
cookbook/subproject and a single “meta” repo that ties the whole thing
together. Maybe that one repo per subproject pattern is an anti-pattern to
OpenStack’s use case and we’re now getting to the point of criticality. Past
iterations of the Chef method include using git submodules and the GitHub
workflow, as well as One Repo To Rule Them All. They’re in the past, gone and
left to the ages. Those didn’t work because they tried to be too opinionated,
or too clever, without looking at the user experience.

While I agree that the repo creep is real, there has to be a balance. The Chef
method to OpenStack has been around for a long time in both Chef and OpenStack
terms, and has generally followed the same pattern of one repo per 

Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-22 Thread Margin Hu

Thanks.

I met an issue: I configured three OVS bridges (br-ex, provision, provider)
in ml2_conf.ini, but after I rebooted the node I found that the flow tables
of only two bridges were normal; the other bridge's flow table was empty.


The affected bridge is sometimes "provision" and sometimes "provider". What
are the possible causes of this issue?


[root@cloud]# ovs-ofctl show provision
OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst

 1(bond0): addr:24:8a:07:55:41:e8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 2(phy-provision): addr:2e:7c:ba:fe:91:72
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(provision): addr:24:8a:07:55:41:e8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@cloud]# ovs-ofctl dump-flows  provision
NXST_FLOW reply (xid=0x4):

[root@cloud]# ip r
default via 192.168.60.247 dev br-ex
10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
169.254.0.0/16 dev vlan16  scope link  metric 1012
169.254.0.0/16 dev vlan22  scope link  metric 1014
169.254.0.0/16 dev vlan32  scope link  metric 1015
169.254.0.0/16 dev br-ex  scope link  metric 1032
169.254.0.0/16 dev provision  scope link  metric 1033
169.254.0.0/16 dev provider  scope link  metric 1034
192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111

What's the root cause?

 rpm -qa | grep openvswitch
openvswitch-2.6.1-4.1.git20161206.el7.x86_64
python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
openstack-neutron-openvswitch-10.0.1-1.el7.noarch


On 6/22 9:53, Kevin Benton wrote:
Rules to allow aren't setup until the port is wired and it calls the 
functions like this:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606

On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu > wrote:


Hi Guys,

I have a question in setup_physical_bridges funtion  of
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

 # block all untranslated traffic between bridges
self.int_br.drop_port(in_port=int_ofport)
br.drop_port(in_port=phys_ofport)


[refer](https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159

)

when permit traffic between bridges ?  when modify flow table of
ovs bridge?









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][requirements] Bootstrapping requirements-stable-core

2017-06-22 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2017-06-22 11:48:09 +1000:
> Hi All,
> Recently it's been clear that we need a requirements-stable team.
> Until now that's been handled by the release managers and the
> stable-maint-core team.
> 
> With the merge of [1] we have the groundwork for that team.  I'd like
> to nominate:
> 
>  * dmllr -- Dirk Mueller
>  * prometheanfire -- Matthew Thode
>  * SeanM -- Sean McGinnis
> 
> As that initial team.  Each of them has been doing regular reviews on
> stable branches and have shown an understanding of how the stable policy
> applies to the requirements repo.
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/#/c/470419/

+1 for all of them

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][uc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-22 Thread Melvin Hillsman
On Wed, Jun 21, 2017 at 10:44 AM, Shamail Tahir 
wrote:

> Hi,
>
> In the past, governance has helped (on the UC WG side) to reduce
> overlaps/duplication in WGs chartered for similar objectives. I would like
> to understand how we will handle this (if at all) with the new SIG proposa?
> Also, do we have to replace WGs as a concept or could SIG augment them? One
> suggestion I have would be to keep projects on the TC side and WGs on the
> UC side and then allow for spin-up/spin-down of SIGs as needed for
> accomplishing specific goals/tasks (picture of a  diagram I created at the
> Forum[1]).
>
>
We currently have WGs that overlap on specific objectives - like
scalability - and having a Scalability SIG could be more efficient than
having the objective live in multiple WGs, each of which still has to gather
resources to address it. Spin-up/spin-down would still happen, though
a SIG would generally live longer, like UC teams, depending on the special
interest.



> The WGs could focus on defining key objectives for users of a shared group
> (market vertical like Enterprise or Scientific WG, horizontal function like
> PWG) and then SIGs could be created based on this list to accomplish the
> objective and spin-down. Similarly a project team could determine a need to
> gather additional data/requirements or need help with a certain task could
> also spin-up a SIG to accomplish it (e.g. updating an outdated docs set,
> discussion on a specific spec that needs to be more thoroughly crafted,
> etc.)
>

SIGs like Documentation, Public Cloud or Scalability make sense, rather than
SIGs created to address a more specific objective. A WG could spin off
from a SIG, but it does not seem reasonable for it to be the other way around,
since the idea behind a working group is to work on something specific -
AUC Recognition, say - and then those members fold back into 0 or 1 SIG. I could
be off, but I get the feeling that WGs can come together, find commonality
amongst each other within a SIG, and get more accomplished, and possibly more
quickly, by aggregating resources around what they already share an
interest in.


>
> Finally, how will this change impact the ATC/AUC status of the SIG members
> for voting rights in the TC/UC elections?
>
>
Possibly AUC status could go to all members of the SIGs, with ATC or extra-ATC
provided as well, since it does not seem fair - though fairness may be
irrelevant or relative - for UC elections to be wholly subject to all SIG
members but not TC elections. I think (Shamail, correct me if I am wrong) that
under the AUC guidelines all SIG members would qualify?


> [1] https://drive.google.com/file/d/0B_yCSDGnhIbzS3V1b1lpZGp
> IaHBmc29SaUdiYzJtX21BWkl3/
>
> Thanks,
> Shamail
>
>
> On Wed, Jun 21, 2017 at 11:26 AM, Thierry Carrez 
> wrote:
>
>> Matt Riedemann wrote:
>> > How does the re-branding or re-categorization of these groups solve the
>> > actual feedback problem? If the problem is getting different people from
>> > different groups together, how does this solve that? For example, how do
>> > we get upstream developers aware of operator issues or product managers
>> > communicating their needs and feature priorities to the upstream
>> > developers?
>>
>> My hope is that specific developers interested in a given use case or a
>> given problem space would join the corresponding SIG and discuss with
>> operators in the same SIG. As an example, imagine an upstream developer
>> from CERN, able to join the Scientific SIG to discuss with operators and
>> users with Scientific/Academic needs of the feature gap, and group with
>> other like-minded developers to get that feature gap collectively
>> addressed.
>>
>> > No one can join all work groups or SIGs and be aware of all
>> > things at the same time, and actually have time to do anything else.
>> > Is the number of various work groups/SIGs a problem?
>>
>> I would not expect everyone to join every SIG. I would actually expect
>> most people to join 0 or 1 SIG.
>>
>> > Maybe what I'd need is an example of an existing problem case and how
>> > the new SIG model would fix that - concrete examples would be really
>> > appreciated when communicating suggested governance changes.
>> >
>> > For example, is there some feature/requirement/issue that one group has
>> > wanted implemented/fixed for a long time but another group isn't aware
>> > of it? How would SIGs fix that in a way that work groups haven't?
>>
>> Two examples:
>>
>> - the "API WG" was started by people on the UC side, listed as a UC
>> workgroup, and wasn't making much progress as it was missing devs. Now
>> it's been reborn as a TC workgroup, led by a couple of devs, and is
>> lacking app user input. Artificial barriers discourage people to join.
>> Let's just call all of them SIGs.
>>
>> - the "Public Cloud WG" tries to cover an extremely important use case
>> for all of OpenStack (we all need successful OpenStack public clouds).
>> However, so far I've hardly seen a developer joining, 

[openstack-dev] [Watcher] handle multiple data sources in watcher

2017-06-22 Thread Bin Zhou
Hi All,

I am working on the blueprint
https://blueprints.launchpad.net/watcher/+spec/watcher-multi-datasource.
It is a good idea to construct an abstraction layer and hide data source
details from the consumer of the data. But I have one doubt about the
possibility of using multiple metrics data sources in the same
deployment. Metrics instrumentation is not free; it comes with
CPU, memory and storage overhead. Production deployments should always
avoid using multiple monitoring tools for the same metrics.

I propose to have a default monitoring engine in the Watcher
configuration. In case multiple monitoring engines are
configured and a user has to collect different metrics from different
monitoring engines, the caller has to specify the engine explicitly to
avoid using the default engine.
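
As a rough illustration only (the option and method names below are made up,
not Watcher's actual configuration or API):

    # Hypothetical default-engine lookup with an explicit per-call override.
    DEFAULT_ENGINE = 'gnocchi'   # e.g. a default_engine option in watcher.conf

    def get_metric(metric_name, engines, engine_name=None):
        """Fetch a metric from the explicitly named engine, or the default."""
        engine = engines[engine_name or DEFAULT_ENGINE]
        return engine.get(metric_name)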

Please let me know if you have any concerns or better solutions. Thanks!

Bin Zhou (lakerzhou)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-22 Thread Dougal Matthews
On 22 June 2017 at 11:01, Thierry Carrez  wrote:

> Thierry Carrez wrote:
> > Renat Akhmerov wrote:
> >> We have a weekly meeting next Monday, will it be too late?
> >
> > Before Thursday EOD (when the Pike-2 deadline hits) should be OK.
>
> If there was a decision, I missed it (and in the mean time Mistral
> published 5.0.0.0b2 for the Pike-2 milestone).
>
> Given the situation, I'm fine with giving an exception to Mistral to
> switch now to cycle-with-intermediary and release 5.0.0 if you think
> master is currently releasable...
>
> Let me know what you think.
>


I think that probably is the best option.

We have been trying to break the requirement on mistral (from
tripleo-common) but it is proving to be harder than expected. We are really
doing some nasty things, but I won't go into details here :-) If anyone is
interested, feel free to reach out.



>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-22 Thread gordon chung


On 21/06/17 08:41 PM, Zhipeng Huang wrote:
> 2. Enhance dev/non-dev comms. I doubt more meetings will be the solution.
>
> a. I would suggest projects when doing their planning at Forum or
> PTG, always leave a spot for requirement from WGs. And WG chairs
> should participate this dev meetings if their WG has done related work.
> b. Moreover the foundation could start promotion of project/WG
> collaboration best practices, or even specify in the release
> document that certain feature are based upon feedback from a certain
> WGs.
>
> c. WG should have cycle-based releases of works so that they got a
> sense of timing, no lost in a permanent discussion mode for issues.

agree that we need to stop being so silo'd. i would suggest even more 
regular feedback rather than a once a cycle data dump at the forum.

i don't attend WG meetings/discussions but i'm curious, do the Working 
Groups have resource to contact the desired PTLs bi-weekly/(some regular 
interval) to give feedback and discuss requirements? given the lack of 
resources in community dev teams, i would expect better results if WG 
members could initiate communication... or go 51% at least :)

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Shewale, Bhagyashri
>> * who or what needs to consume these JSON samples?
The users of the placement API can rely on the request/response samples for 
different supported placement versions, backed by tests running on the OpenStack 
CI infrastructure. 
Right now, most of the placement APIs are well documented and others are in 
progress, but there are no tests that verify these documented request/response bodies.
We would like to write new functional tests that consume these JSON samples to 
verify each placement API for all supported versions.

>>do they need to be tests against current code, or are they  primarily 
>>reference info?
Yes, the request/response JSON files for each supported version should be tested 
against the current master code.

>>what are the value propositions associated with fully validating the 
>>structure and content of the response bodies?
It's a validation and verification mechanism running on the OpenStack CI which can 
be fully trusted by the users of the placement API, and some of the other 
value propositions are mentioned by Rochelle Grober in her reply.
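
As a rough illustration of what one such check could look like (the sample path,
endpoint, microversion and token handling are assumptions, not existing Nova
test code; in-tree tests would use the WSGI test client instead of requests):

    # Hypothetical sample-vs-response check for one placement microversion.
    import json

    import requests

    def assert_matches_sample(base_url, token, sample_path):
        with open(sample_path) as f:
            expected = json.load(f)
        resp = requests.get(
            base_url + '/resource_providers',
            headers={'X-Auth-Token': token,
                     'OpenStack-API-Version': 'placement 1.4'})
        assert resp.status_code == 200
        assert resp.json() == expected   # compare the full response body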

Thanks for your response.

Regards,
Bhagyashri Shewale

-Original Message-
From: Chris Dent [mailto:cdent...@anticdent.org] 
Sent: Thursday, June 22, 2017 3:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][placement] Add API Sample tests for 
Placement APIs

On Wed, 21 Jun 2017, Shewale, Bhagyashri wrote:

> I  would like to write functional tests to check the exact req/resp 
> for each placement API for all supported versions similar to what is already 
> done for other APIs under 
> nova/tests/functional/api_sample_tests/api_samples/*.
> These request/response json samples can be used by the api.openstack.org and 
> in the manuals.
>
> There are already functional tests written for placement APIs under 
> nova/tests/functional/api/openstack/placement,
> but these tests doesn't check the entire HTTP response for each API for all 
> supported versions.
>
> I think adding such functional tests for checking response for each placement 
> API would be beneficial to the project.
> If there is an interest to create such functional tests, I can file a new 
> blueprint for this activity.

At Matt points out elsewhere, we made a choice to not use the api_samples 
format when developing placement. There were a few different reasons for this:

* we wanted to limit the amount of nova code used in placement, to
   ease the eventual extraction of placement to its own code
   repository

* some of us wanted to use gabbi [1] as the testing framework as it is
   nicely declarative [2] and keeps the request and response in the same
   place

* we were building the api framework from scratch and doing what
   amounts to test driven development [2] using functional tests and
   gabbi works well for that

* testing the full response isn't actually a great way to test an
   API in a granular way; the info is useful to have but it isn't a
   useful test (from a development standpoint)

But, as you've noted, this means there isn't a single place to go to see a 
collection of a full request and response bodies. That information can be 
extracted from the gabbi tests, but it's a) not immediately obvious, b) 
requires interpretation.

Quite some time ago I started a gabbi-based full request and response suite of 
tests [3] but it was never finished and now is very out of date.

If the end goal is to have a set of documents that pair all the possible 
requests (with bodies) with all possible responses (with bodies), gabbi could 
easily create this in its "verbose" mode [4] when run as functional tests or 
with the gabbi-run [5] command that can run against a running service.

So I would suggest that we more completely explain the goal or goals that 
you're trying to satisfy and then see how we can use the existing tooling to 
fill them. Some questions related to that:

* who or what needs to consume these JSON samples?
* do they need to be tests against current code, or are they
   primarily reference info?
* what are the value propositions associated with fully validating
   the structure and content of the response bodies?

We can relatively easily figure out some way to drive gabbi to produce the 
desired information, but first we want to make sure that the information 
produced is going to be the right info (that is, will satisfy the needs of 
whoever wants it).

I am, as Matt mentioned, on holiday at the moment so my response to any future 
messages may be delayed, but I'll catch up as I'm able.

[1] https://gabbi.readthedocs.io/en/latest/
[2] 
https://github.com/openstack/nova/tree/master/nova/tests/functional/api/openstack/placement/gabbits
[3] https://review.openstack.org/#/c/370204/
[4] verbose mode can print out request and response headers, bodies,
 or both. If the bodies are JSON, it will be pretty printed.
[5] https://gabbi.readthedocs.io/en/latest/runner.html

-- 
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/

Re: [openstack-dev] [vitrage] First Vitrage Pike release by the end of this week

2017-06-22 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Yujun,

Yes, I already tagged the current Vitrage releases for Pike:
vitrage 1.6.0
python-vitrageclient 1.2.0
vitrage-dashboard 1.2.0

I also tagged the stable/ocata releases:
vitrage 1.5.2
python-vitrageclient 1.1.2
(vitrage-dashboard is the same as Ocata – 1.1.1)

You can get the releases by e.g.:
pip install vitrage==1.6.0

Best Regards,
Ifat.

From: "Yujun Zhang (ZTE)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 22 June 2017 at 5:22
To: "OpenStack Development Mailing List (not for usage questions)" 
, "yinli...@zte.com.cn" 
Subject: Re: [openstack-dev] [vitrage] First Vitrage Pike release by the end of 
this week

Is it done yet?

How to fetch the released version for downstream developing?

On Tue, Jun 6, 2017 at 2:17 PM Afek, Ifat (Nokia - IL/Kfar Sava) 
> wrote:
Hi,

Pike-2 milestone is at the end of this week, and although we are not working by 
the milestones model (we are working by a cycle-with-intermediary model) we 
need to have the first Vitrage Pike release by the end of this week.

I would like to release vitrage, python-vitrageclient and vitrage-dashboard 
tomorrow. Any objections? Please let me know if you think something has to be 
changed/added before the release.

Also, we need to add release notes for the newly added features. This list 
includes (let me know if I missed something):

vitrage
• Vitrage ID
• Support ‘not’ operator in the evaluator templates
• Performance improvements
• Support entity equivalences
• SNMP notifier

python-vitrageclient
• Multi tenancy support
• Resources API

vitrage-dashboard
• Multi tenancy support – Vitrage in admin menu
• Added ‘search’ option in the entity graph

Please add a release notes file for each of your features (I’ll send an 
explanation in a separate mail), or send me a few lines of the feature’s 
description and I’ll add it.

Thanks,
Ifat.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-22 Thread Eduardo Gonzalez
+1

2017-06-22 11:12 GMT+01:00 Christian Berendt :

> +1
>
> > On 14. Jun 2017, at 17:46, Michał Jastrzębski  wrote:
> >
> > Hello,
> >
> > With great pleasure I'm kicking off another core voting to
> > kolla-ansible and kolla teams:) this one is about spsurya. Voting will
> > be open for 2 weeks (till 28th Jun).
> >
> > Consider this mail my +1 vote, you know the drill:)
> >
> > Regards,
> > Michal
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-22 Thread Christian Berendt
+1 

> On 14. Jun 2017, at 17:46, Michał Jastrzębski  wrote:
> 
> Hello,
> 
> With great pleasure I'm kicking off another core voting to
> kolla-ansible and kolla teams:) this one is about spsurya. Voting will
> be open for 2 weeks (till 28th Jun).
> 
> Consider this mail my +1 vote, you know the drill:)
> 
> Regards,
> Michal
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Sean Dague
On 06/21/2017 05:35 PM, Jeremy Stanley wrote:
> On 2017-06-21 13:52:11 -0500 (-0500), Lauren Sell wrote:
> [...]
>> To make this actionable...Github is just a mirror of our
>> repositories, but for better or worse it's the way most people in
>> the world explore software. If you look at OpenStack on Github
>> now, it’s impossible to tell which projects are official. Maybe we
>> could help by better curating the Github projects (pinning some of
>> the top projects, using the new new topics feature to put tags
>> like openstack-official or openstack-unofficial, coming up with
>> more standard descriptions or naming, etc.).
> 
> I hadn't noticed the pinned repositories option until you mentioned
> it: appears they just extended that feature to orgs back in October
> (and introduced the topics feature in January). I could see
> potentially integrating pinning and topic management into the
> current GH API script we run when creating new mirrors
> there--assuming these are accessible via their API anyway--and yes
> normalizing the descriptions to something less freeform is something
> else we'd discussed to be able to drive users back to the official
> locations for repositories (or perhaps to the project navigator).
> 
> I've already made recent attempts to clarify our use of GH in the
> org descriptions and linked the openstack org back to the project
> navigator too, since those were easy enough to do right off the bat.

It doesn't look like either of these (pinned repos or topics) is currently
available over the API (topics are experimental in GET form, with no
edit support as of yet). The pinned repositories aren't such a big deal,
we're talking about a handful here. The topics / tags maintenance would be
more work, but those aren't changing so fast that I think they'd be too
unwieldy to keep close.
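
For the part that is scriptable today -- normalizing the repository
descriptions mentioned above -- a rough sketch against the GitHub REST
API could look like the following. This is not the actual infra script;
the org name, token and description template are assumptions for
illustration:

    import requests

    ORG = 'openstack'
    TOKEN = 'xxxx'  # a token with admin rights on the org (assumption)
    gh = requests.Session()
    gh.headers.update({'Authorization': 'token %s' % TOKEN})

    url = 'https://api.github.com/orgs/%s/repos?per_page=100' % ORG
    while url:
        resp = gh.get(url)
        for repo in resp.json():
            # Point readers back at the canonical hosting location
            desc = ('Mirror of https://git.openstack.org/cgit/%s'
                    % repo['full_name'])
            gh.patch(repo['url'], json={'name': repo['name'],
                                        'description': desc})
        url = resp.links.get('next', {}).get('url')  # follow pagination

Pinning and topics would still be manual in the web UI until they show
up in the API.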

As I have strong feelings that this all would help, I would be happy to
volunteer to manually update that information. Just need enough access
to do that.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Flavio Percoco

On 21/06/17 16:27 -0400, Sean Dague wrote:

On 06/21/2017 02:52 PM, Lauren Sell wrote:

Two things we should address:

1) Make it more clear which projects are “officially” part of
OpenStack. It’s possible to find that information, but it’s not obvious.
I am one of the people who laments the demise of stackforge…it was very
clear that stackforge projects were not official, but part of the
OpenStack ecosystem. I wish it could be resurrected, but I know that’s
impractical.

To make this actionable...Github is just a mirror of our repositories,
but for better or worse it's the way most people in the world
explore software. If you look at OpenStack on Github now, it’s
impossible to tell which projects are official. Maybe we could help by
better curating the Github projects (pinning some of the top projects,
using the new new topics feature to put tags like openstack-official or
openstack-unofficial, coming up with more standard descriptions or
naming, etc.). Same goes for our repos…if there’s a way we could
differentiate between official and unofficial projects on this page it
would be really useful: https://git.openstack.org/cgit/openstack/


I think even if it was only solvable on github, and not cgit, it would
help a lot. The idea of using github project tags and pinning suggested
by Lauren seems great to me.

If we replicated the pinning on github.com/openstack to "popular
projects" here - https://www.openstack.org/software/, and then even just
start with the tags as defined in governance -
https://governance.openstack.org/tc/reference/tags/index.html it would
go a long way.


We can also standardize the README files in the projects and use the badges that
were created already. These badges are automatically generated for every
project. I think there's a way we could make this work in cgit too, so we
won't need something that is GitHub specific. These badges can be used for
documentation as well.

Here's Glance's example: 
https://github.com/openstack/glance#team-and-repository-tags

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-22 Thread Thierry Carrez
Thierry Carrez wrote:
> Renat Akhmerov wrote:
>> We have a weekly meeting next Monday, will it be too late?
> 
> Before Thursday EOD (when the Pike-2 deadline hits) should be OK.

If there was a decision, I missed it (and in the mean time Mistral
published 5.0.0.0b2 for the Pike-2 milestone).

Given the situation, I'm fine with giving an exception to Mistral to
switch now to cycle-with-intermediary and release 5.0.0 if you think
master is currently releasable...

Let me know what you think.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Sean Dague
On 06/21/2017 09:52 PM, Chris Hoge wrote:
> 
>> On Jun 21, 2017, at 2:35 PM, Jeremy Stanley  wrote:
>>
>> On 2017-06-21 13:52:11 -0500 (-0500), Lauren Sell wrote:
>> [...]
>>> To make this actionable...Github is just a mirror of our
>>> repositories, but for better or worse it's the way most people in
>>> the world explore software. If you look at OpenStack on Github
>>> now, it’s impossible to tell which projects are official. Maybe we
>>> could help by better curating the Github projects (pinning some of
>>> the top projects, using the new new topics feature to put tags
>>> like openstack-official or openstack-unofficial, coming up with
>>> more standard descriptions or naming, etc.).
>>
>> I hadn't noticed the pinned repositories option until you mentioned
>> it: appears they just extended that feature to orgs back in October
>> (and introduced the topics feature in January). I could see
>> potentially integrating pinning and topic management into the
>> current GH API script we run when creating new mirrors
>> there--assuming these are accessible via their API anyway--and yes
>> normalizing the descriptions to something less freeform is something
>> else we'd discussed to be able to drive users back to the official
>> locations for repositories (or perhaps to the project navigator).
>>
>> I've already made recent attempts to clarify our use of GH in the
>> org descriptions and linked the openstack org back to the project
>> navigator too, since those were easy enough to do right off the bat.
>>
>>> Same goes for our repos…if there’s a way we could differentiate
>>> between official and unofficial projects on this page it would be
>>> really useful: https://git.openstack.org/cgit/openstack/
>>
>> I have an idea as to how to go about that by generating custom
>> indices rather than relying on the default one cgit provides; I'll
>> mull it over.
>>
>>> 2) Create a simple structure within the official set of projects
>>> to provide focus and a place to get started. The challenge (again
>>> to our success, and lots of great work by the community) is that
>>> even the official project set is too big for most people to
>>> follow.
>>
>> This is one of my biggest concerns as well where high-cost (in the
>> sense of increasingly valuable Infra team member time) solutions are
>> being tossed around to solve the "what's official?" dilemma, while
>> not taking into account that the overwhelming majority of active Git
>> repositories we're hosting _are_ already deliverables for official
>> teams. I strongly doubt that just labelling the minority as
>> unofficial will in any way lessen the overall confusion about the
>> *more than one thousand* official Git repositories we're
>> maintaining.
> 
> Another instance where the horse is out of the barn, but this
> is one of the reasons why I don’t like it when config-management
> style efforts are organized as a one-to-one mapping of repositories
> to the corresponding projects. It created massive sprawl
> within the ecosystem, limited opportunities for code sharing,
> and made refactoring a nightmare. I lost count of the number
> of times we submitted n inconsistent patches to change
> similar behavior across n+1 projects. Trying to build a library
> helped but was never as powerful as being able to target a
> single repository.

++

The micro repositories for config management and packaging create this
overwhelming wall of projects from the outside. I realize that git repos
are cheap from a dev perspective, but they are expensive from a concept
perspective.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Sean Dague
On 06/22/2017 04:33 AM, Thierry Carrez wrote:
> Sean Dague wrote:
>> [...]
>> I think even if it was only solvable on github, and not cgit, it would
>> help a lot. The idea of using github project tags and pinning suggested
>> by Lauren seems great to me.
>>
>> If we replicated the pinning on github.com/openstack to "popular
>> projects" here - https://www.openstack.org/software/, and then even just
>> start with the tags as defined in governance -
>> https://governance.openstack.org/tc/reference/tags/index.html it would
>> go a long way.
>>
>> I think where the conversation is breaking down is realizing that
>> different people process the information we put out there in different
>> ways, and different things lock in and make sense to them. Lots of
>> people are trained to perceive github structure as meaningful, because
>> it is 98% of the time. As such I'd still also like to see us using that
>> structure well, and mirroring only things we tag as official to
>> github.com/openstack, and the rest to /openstack-ecosystem or something.
>>
>> Even if that's flat inside our gerrit and cgit environment.
> 
> I would even question the need for us to mirror the rest. Those are
> hosted projects, if they want presence on GitHub they would certainly
> welcome the idea of setting up an organization for their project. I
> wouldn't be shocked to find a fuel-ccp or a stacktach GitHub org. And if
> they don't care about their GitHub presence, then just don't do
> anything. I'm not sure why we would make that choice for us.
> 
> Jeremy is right that the GitHub mirroring goes beyond an infrastructure
> service: it's a marketing exercise, an online presence more than a
> technical need. As such it needs to be curated, and us doing that for
> "projects that are not official but merely hosted" is an anti-feature.
> No real value for the hosted project, and extra confusion as a result.

Good point, I hadn't thought of that. I'd be totally fine only mirroring
official projects.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Chris Dent

On Wed, 21 Jun 2017, Shewale, Bhagyashri wrote:


I  would like to write functional tests to check the exact req/resp for each 
placement API for all supported versions similar
to what is already done for other APIs under 
nova/tests/functional/api_sample_tests/api_samples/*.
These request/response json samples can be used by the api.openstack.org and in 
the manuals.

There are already functional tests written for placement APIs under 
nova/tests/functional/api/openstack/placement,
but these tests don't check the entire HTTP response for each API for all 
supported versions.

I think adding such functional tests for checking response for each placement 
API would be beneficial to the project.
If there is an interest to create such functional tests, I can file a new 
blueprint for this activity.


As Matt points out elsewhere, we made a choice not to use the
api_samples format when developing placement. There were a few
different reasons for this:

* we wanted to limit the amount of nova code used in placement, to
  ease the eventual extraction of placement to its own code
  repository

* some of us wanted to use gabbi [1] as the testing framework as it is
  nicely declarative [2] and keeps the request and response in the same
  place

* we were building the api framework from scratch and doing what
  amounts to test driven development [2] using functional tests and
  gabbi works well for that

* testing the full response isn't actually a great way to test an
  API in a granular way; the info is useful to have but it isn't a
  useful test (from a development standpoint)

But, as you've noted, this means there isn't a single place to go to
see a collection of full request and response bodies. That
information can be extracted from the gabbi tests, but it's a) not
immediately obvious, b) requires interpretation.

Quite some time ago I started a gabbi-based full request and
response suite of tests [3] but it was never finished and now is
very out of date.

If the end goal is to have a set of documents that pair all the
possible requests (with bodies) with all possible responses (with
bodies), gabbi could easily create this in its "verbose" mode [4]
when run as functional tests or with the gabbi-run [5] command that
can run against a running service.
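
To give a concrete (if simplified) idea of the shape of this tooling,
a gabbi suite is usually wired into normal test discovery with a small
loader module along these lines -- the directory name, host and port
below are assumptions, not the actual placement test code:

    import os

    from gabbi import driver

    TESTS_DIR = 'gabbits'  # each YAML file pairs requests with responses

    def load_tests(loader, tests, pattern):
        # Standard unittest load_tests hook: build one test per YAML
        # entry, run against an already-running placement endpoint.
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        return driver.build_tests(test_dir, loader,
                                  host='127.0.0.1', port=8778,
                                  test_loader_name=__name__)

The same YAML files can also be pointed at a live deployment with
gabbi-run [5].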

So I would suggest that we more completely explain the goal or goals
that you're trying to satisfy and then see how we can use the
existing tooling to fill them. Some questions related to that:

* who or what needs to consume these JSON samples?
* do they need to be tests against current code, or are they
  primarily reference info?
* what are the value propositions associated with fully validating
  the structure and content of the response bodies?

We can relatively easily figure out some way to drive gabbi to
produce the desired information, but first we want to make sure that
the information produced is going to be the right info (that is,
will satisfy the needs of whoever wants it).

I am, as Matt mentioned, on holiday at the moment so my response to
any future messages may be delayed, but I'll catch up as I'm able.

[1] https://gabbi.readthedocs.io/en/latest/
[2] 
https://github.com/openstack/nova/tree/master/nova/tests/functional/api/openstack/placement/gabbits
[3] https://review.openstack.org/#/c/370204/
[4] verbose mode can print out request and response headers, bodies,
or both. If the bodies are JSON, it will be pretty printed.
[5] https://gabbi.readthedocs.io/en/latest/runner.html

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Thierry Carrez
Jeremy Stanley wrote:
> [...]
> This is one of my biggest concerns as well where high-cost (in the
> sense of increasingly valuable Infra team member time) solutions are
> being tossed around to solve the "what's official?" dilemma, while
> not taking into account that the overwhelming majority of active Git
> repositories we're hosting _are_ already deliverables for official
> teams. I strongly doubt that just labelling the minority as
> unofficial will in any way lessen the overall confusion about the
> *more than one thousand* official Git repositories we're
> maintaining.

I'm not sure the issue is in the numbers. Yes, the "openstack" GitHub
org is not really navigable, and removing unofficial projects from it
won't change that a lot. Pinning some repositories will help a bit, but
our target should not be to make the github.com/openstack org useful.

I think what we are trying to solve is someone googling for "github
openstack machine learning" and getting the following links:

https://github.com/openstack/meteos
https://github.com/openstack/cognitive

and then assuming those both are official OpenStack projects, looking up
Cognitive and realizing it's been dead for 2 years, and wondering why
OpenStack is full of completely dead projects.

If the "openstack" GitHub org was limited to official projects, the TC
indirectly controls what appears there. If we stopped automatically
mirroring unofficial projects, those would have a harder time marketing
themselves as official projects -- they would do their own marketing on
GitHub if they wanted to.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-22 Thread Thierry Carrez
Sean Dague wrote:
> [...]
> I think even if it was only solvable on github, and not cgit, it would
> help a lot. The idea of using github project tags and pinning suggested
> by Lauren seems great to me.
> 
> If we replicated the pinning on github.com/openstack to "popular
> projects" here - https://www.openstack.org/software/, and then even just
> start with the tags as defined in governance -
> https://governance.openstack.org/tc/reference/tags/index.html it would
> go a long way.
> 
> I think where the conversation is breaking down is realizing that
> different people process the information we put out there in different
> ways, and different things lock in and make sense to them. Lots of
> people are trained to perceive github structure as meaningful, because
> it is 98% of the time. As such I'd still also like to see us using that
> structure well, and mirroring only things we tag as official to
> github.com/openstack, and the rest to /openstack-ecosystem or something.
> 
> Even if that's flat inside our gerrit and cgit environment.

I would even question the need for us to mirror the rest. Those are
hosted projects, if they want presence on GitHub they would certainly
welcome the idea of setting up an organization for their project. I
wouldn't be shocked to find a fuel-ccp or a stacktach GitHub org. And if
they don't care about their GitHub presence, then just don't do
anything. I'm not sure why we would make that choice for us.

Jeremy is right that the GitHub mirroring goes beyond an infrastructure
service: it's a marketing exercise, an online presence more than a
technical need. As such it needs to be curated, and us doing that for
"projects that are not official but merely hosted" is an anti-feature.
No real value for the hosted project, and extra confusion as a result.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Thierry Carrez
Fox, Kevin M wrote:
> [...]
> If you build a Tessmaster clone just to do mariadb, then you share nothing 
> with the other communities and have to reinvent the wheel, yet again. 
> Operators load increases because the tool doesn't function like other tools.
> 
> If you rely on a container orchestration engine that's already cross cloud 
> that can be easily deployed by user or cloud operator, and fill in the gaps 
> with what Trove wants to support, easy management of db's, you get to reuse a 
> lot of the commons and the users slight increase in investment in dealing 
> with the bit of extra plumbing in there allows other things to also be easily 
> added to their cluster. Its very rare that a user would need to deploy/manage 
> only a database. The net load on the operator decreases, not increases.

I think the user-side tool could totally deploy on Kubernetes clusters
-- if that were the only possible target it would make it a Kubernetes
tool more than an open infrastructure tool, but that's definitely a
possibility. I'm not sure work is needed there though, as there are already
tools (or charts) doing that?

For a server-side approach where you want to provide a DB-provisioning
API, I fear that making the functionality depend on K8s would mean
TroveV2/Hoard would not only depend on Heat and Nova, but also on
something that would deploy a Kubernetes cluster (Magnum?), which would
likely hurt its adoption (and reusability in simpler setups). Since
databases would just work perfectly well in VMs, it feels like a
gratuitous dependency addition?

We generally need to be very careful about creating dependencies between
OpenStack projects. On one side there are base services (like Keystone)
that we said it was alright to depend on, but depending on anything else
is likely to reduce adoption. Magnum adoption suffers from its
dependency on Heat. If Heat starts depending on Zaqar, we make the
problem worse. I understand it's a hard trade-off: you want to reuse
functionality rather than reinvent it in every project... we just need
to recognize the cost of doing that.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-22 Thread Henning Schild
Am Wed, 21 Jun 2017 11:40:14 -0600
schrieb Chris Friesen :

> On 06/21/2017 10:46 AM, Henning Schild wrote:
> > Am Wed, 21 Jun 2017 10:04:52 -0600
> > schrieb Chris Friesen :  
> 
> > i guess you are talking about that section from [1]:
> >  
>  We could use a host level tunable to just reserve a set of host
>  pCPUs for running emulator threads globally, instead of trying to
>  account for it per instance. This would work in the simple case,
>  but when NUMA is used, it is highly desirable to have more fine
>  grained config to control emulator thread placement. When
>  real-time or dedicated CPUs are used, it will be critical to
>  separate emulator threads for different KVM instances.  
> 
> Yes, that's the relevant section.
> 
> > I know it has been considered, but i would like to bring the topic
> > up again. Because doing it that way allows for many more rt-VMs on
> > a host and i am not sure i fully understood why the idea was
> > discarded in the end.
> >
> > I do not really see the influence of NUMA here. Say the
> > emulator_pin_set is used only for realtime VMs, we know that the
> > emulators and IOs can be "slow" so crossing numa-nodes should not
> > be an issue. Or you could say the set needs to contain at least one
> > core per numa-node and schedule emulators next to their vcpus.
> >
> > As we know from our setup, and as Luiz confirmed - it is _not_
> > "critical to separate emulator threads for different KVM instances".
> > They have to be separated from the vcpu-cores but not from each
> > other. At least not on the "cpuset" basis, maybe "blkio" and
> > cgroups like that.  
> 
> I'm reluctant to say conclusively that we don't need to separate
> emulator threads since I don't think we've considered all the cases.
> For example, what happens if one or more of the instances are being
> live-migrated?  The migration thread for those instances will be very
> busy scanning for dirty pages, which could delay the emulator threads
> for other instances and also cause significant cross-NUMA traffic
> unless we ensure at least one core per NUMA-node.

Realtime instances can not be live-migrated. We are talking about
threads that can not even be moved between two cores on one numa-node
without missing a deadline. But your point is good because it could
mean that such an emulator_set - if defined - should not be used for all
VMs.
 
> Also, I don't think we've determined how much CPU time is needed for
> the emulator threads.  If we have ~60 CPUs available for instances
> split across two NUMA nodes, can we safely run the emulator threads
> of 30 instances all together on a single CPU?  If not, how much
> "emulator overcommit" is allowable?

That depends on how much IO your VMs are issuing and cannot be
answered in general. All VMs can cause high load with IO/emulation;
rt-VMs are probably less likely to do so.
Say your 64-cpu compute-node would be used for both rt and regular VMs.
To mix them you would have two instances of nova running on that machine.
One gets node0 (32 cpus) for regular VMs. The emulator-pin-set would not
be defined here (so it would equal the vcpu_pin_set, full overlap).
The other nova would get node1 and disable hyperthreads for all rt
cores (17 cpus left). It would need at least one core for housekeeping
and io/emulation threads. So you are down to a max. of 15 VMs putting their
IO on that one core and its hyperthread, i.e. 7.5 VMs per cpu thread.

In the same setup with [2] we would get a max of 7 single-cpu VMs,
instead of 15! And 15 vs 31 if you dedicate the whole box to rt.

Henning 
 
> Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-22 Thread Zhenyu Zheng
Thanks all for the reply. I guess it will be better to configure those
preferences using flavor/image properties according to the different hardware then.
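
(As an illustration of that flavor-side configuration -- not something
from this thread -- the relevant extra specs can be set roughly like
this with python-novaclient; the auth values and flavor name are made
up:)

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    nova = client.Client('2', session=session.Session(auth=auth))

    flavor = nova.flavors.find(name='m1.small.dedicated')
    # 'isolate' avoids placing guest vCPUs on thread siblings;
    # 'prefer' and 'require' are the other hw:cpu_thread_policy options.
    flavor.set_keys({'hw:cpu_policy': 'dedicated',
                     'hw:cpu_thread_policy': 'isolate'})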

On Wed, Jun 21, 2017 at 1:21 AM, Mooney, Sean K 
wrote:

>
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: Tuesday, June 20, 2017 5:59 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [openstack-dev[[nova] Simple question
> > about sorting CPU topologies
> >
> > On 06/20/2017 12:53 PM, Chris Friesen wrote:
> > > On 06/20/2017 06:29 AM, Jay Pipes wrote:
> > >> On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:
> > >>> Sorry, The mail sent accidentally by mis-typing ...
> > >>>
> > >>> My question is, what is the benefit of the above preference?
> > >>
> > >> Hi Kevin!
> > >>
> > >> I believe the benefit is so that the compute node prefers CPU
> > >> topologies that do not have hardware threads over CPU topologies
> > that
> > >> do include hardware threads.
> [Mooney, Sean K] if you have not expressed that you want the require or
> isolate policy, then you really can't infer which is better, as for some
> workloads preferring hyperthread siblings will improve performance
> (2 threads sharing data via the L2 cache) and for others it will reduce
> it (2 threads that do not share data)
> > >>
> > >> I'm not sure exactly of the reason for this preference, but perhaps
> > >> it is due to assumptions that on some hardware, threads will compete
> > >> for the same cache resources as other siblings on a core whereas
> > >> cores may have their own caches (again, on some specific hardware).
> > >
> > > Isn't the definition of hardware threads basically the fact that the
> > > sibling threads share the resources of a single core?
> > >
> > > Are there architectures that OpenStack runs on where hardware threads
> > > don't compete for cache/TLB/execution units?  (And if there are, then
> > > why are they called threads and not cores?)
> [Mooney, Sean K] well, on x86 when you turn on hyperthreading your L1
> data and instruction cache is partitioned in 2, with each half allocated
> to a thread sibling. The L2 cache, which is also per core, is shared
> between the 2 thread siblings, so on Intel's x86 implementation the
> threads do not compete for L1 cache but do share L2. That could easily
> change though in new generations.
>
> Pre-Zen architectures, I believe, had AMD sharing the floating point
> units between each pair of SMT threads but with separate integer
> execution units that were not shared. That meant for integer-heavy
> workloads their SMT implementation approached 2x performance, limited by
> the shared load and store units, and reduced to 0 scaling if both threads
> tried to access the floating point execution unit concurrently.
>
> So it's not quite as clean-cut as saying the threads do or don't share
> resources. Each vendor addresses this differently; even within x86 you
> are not required to have the partitioning described above for the cache,
> as Intel did, or for the execution units. On other architectures I'm sure
> they have come up with equally inventive ways to make this an interesting
> shade of grey when describing the difference between a hardware thread
> and a full core.
>
> >
> > I've learned over the years not to make any assumptions about hardware.
> >
> > Thus my "not sure exactly" bet-hedging ;)
> [Mooney, Sean K] yep hardware is weird and will always find ways to break
> your assumptions :)
> >
> > Best,
> > -jay
> >
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] diskimage builder works for trusty but not for xenial

2017-06-22 Thread Ignazio Cassano
It works fine
Thanks

2017-06-22 0:24 GMT+02:00 Ian Wienand :

> On 06/21/2017 04:44 PM, Ignazio Cassano wrote:
>
>> * Connection #0 to host cloud-images.ubuntu.com left intact
>> Downloaded and cached
>> http://cloud-images.ubuntu.com/xenial/current/xenial-server-
>> cloudimg-amd64-root.tar.gz,
>> having forced upstream caches to revalidate
>> xenial-server-cloudimg-amd64-root.tar.gz: FAILED
>> sha256sum: WARNING: 1 computed checksum did NOT match
>>
>
> Are there any problems on http://cloud-images.ubuntu.com ?
>>
>
> There was [1] which is apparently fixed.
>
> As Paul mentioned, the -minimal builds take a different approach and
> build the image from debootstrap, rather than modifying the upstream
> image.  They are generally well tested just as a side-effect of infra
> relying on them daily.  You can use DIB_DISTRIBUTION_MIRROR to set
> that to a local mirror and eliminate another source of instability
> (however, that leaves the mirror in the final image ... a known issue.
> Contributions welcome :)
>
> -i
>
> [1] https://bugs.launchpad.net/cloud-images/+bug/1699396
>
>
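
For reference, a -minimal build against a local mirror looks roughly
like this (the mirror URL, release and output name are placeholders,
not values from this thread):

    export DIB_RELEASE=xenial
    export DIB_DISTRIBUTION_MIRROR=http://mirror.example.com/ubuntu
    disk-image-create -o ubuntu-xenial-minimal ubuntu-minimal vm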
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev