Re: [openstack-dev] [neutron][api][graphql] A starting point

2018-06-21 Thread Flint WALRUS
Hi everyone,

Thanks for the updates and support; that's appreciated.

@Gilles, have you already implemented all the service types?

What is left to do? Do you already want to merge the feature branch into
master?

@Tristan I'd like to work on the feature branch, but I'll wait for Gilles'
answers as I don't want to make a mess with pieces of code everywhere.

Thanks!
On Fri, Jun 22, 2018 at 06:44, Gilles Dubreuil wrote:

>
> On 22/06/18 09:21, Tristan Cacqueray wrote:
>
> Hi Flint,
>
> On June 21, 2018 5:32 pm, Flint WALRUS wrote:
>
> Hi everyone, sorry for the late answer, but I'm currently trapped in a
> cluster issue with cinder-volume that doesn't leave me much time.
>
> That being said, I'll have some time to work on this feature during the
> summer (July/August) and so do some coding once I've caught up with
> your work.
>
> That's great to hear! The next step is to understand how to deal with
> oslo policy and control objects access/modification.
>
> Did you create a specific tree, or did you create a new graphql folder
> within the neutron/neutron/api/ path for the schemas etc.?
>
>
> There is a feature/graphql branch where an initial patch[1] adds a new
> neutron/api/graphql directory as well as new test_graphql.py
> functional tests.
> The api-paste is also updated to expose the '/graphql' http endpoint.
>
> Not sure if we want to keep on updating that change, or propose further
> code as a new change on top of this skeleton?
>
>
> Makes sense to merge it; I think we have the base we need to get going.
> I'll make it green so we can get it merged.
>
>
> Regards,
> -Tristan
>
>
> On Sat, Jun 16, 2018 at 08:42, Tristan Cacqueray wrote:
>
> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote:
> > Hello,
> >
> > This initial patch [1] allows retrieving networks and subnets.
> >
> > This is very easy, thanks to the graphene-sqlalchemy helper.
> >
> > The edges/nodes layer might be confusing at first, but it makes
> > the schema Relay-compliant in order to offer re-fetching and pagination
> > features out of the box.
> >
> > The next priority is to set up the unit tests in order to implement
> > mutations.
> >
> > Could someone help provide a base in order to respect Neutron test
> > requirements?
> >
> >
> > [1] [abandoned]
>
> Actually, the correct review (proposed on the feature/graphql branch)
> is:
>
> [1] https://review.openstack.org/575898
>
> >
> > Thanks,
> > Gilles
> >
> >
>
> --
> Gilles Dubreuil
> Senior Software Engineer - Red Hat - Openstack DFG Integration
> Email: gil...@redhat.com
> GitHub/IRC: gildub
> Mobile: +61 400 894 219
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] A starting point

2018-06-21 Thread Gilles Dubreuil


On 22/06/18 09:21, Tristan Cacqueray wrote:

Hi Flint,

On June 21, 2018 5:32 pm, Flint WALRUS wrote:

Hi everyone, sorry for the late answer, but I'm currently trapped in a
cluster issue with cinder-volume that doesn't leave me much time.

That being said, I'll have some time to work on this feature during the
summer (July/August) and so do some coding once I've caught up with
your work.


That's great to hear! The next step is to understand how to deal with
oslo policy and control objects access/modification.


Did you create a specific tree, or did you create a new graphql folder
within the neutron/neutron/api/ path for the schemas etc.?


There is a feature/graphql branch where an initial patch[1] adds a new
neutron/api/graphql directory as well as new test_graphql.py
functional tests.
The api-paste is also updated to expose the '/graphql' http endpoint.

Not sure if we want to keep on updating that change, or propose further
code as a new change on top of this skeleton?



Makes sense to merge it; I think we have the base we need to get going.
I'll make it green so we can get it merged.



Regards,
-Tristan



On Sat, Jun 16, 2018 at 08:42, Tristan Cacqueray wrote:


On June 15, 2018 10:42 pm, Gilles Dubreuil wrote:
> Hello,
>
> This initial patch [1] allows retrieving networks and subnets.
>
> This is very easy, thanks to the graphene-sqlalchemy helper.
>
> The edges/nodes layer might be confusing at first, but it makes
> the schema Relay-compliant in order to offer re-fetching and pagination
> features out of the box.
>
> The next priority is to set up the unit tests in order to implement
> mutations.
>
> Could someone help provide a base in order to respect Neutron test
> requirements?
>
>
> [1] [abandoned]

Actually, the correct review (proposed on the feature/graphql branch)
is:

[1] https://review.openstack.org/575898

>
> Thanks,
> Gilles
>
>


--
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gil...@redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Anti-Affinity Broke

2018-06-21 Thread Joe Topjian
Hello,

I originally posted this to the general openstack list to get a sanity
check on what I was seeing. Jeremy F reached out and confirmed that, so I'm
going to re-post the details here to begin a discussion.

From what I can see, anti-affinity is not working at all in Sahara. I was
able to get it working locally by making the following changes:

1. ng.count is either invalid, always returns 0, or isn't being set
somewhere else.

https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L276

Instead, I've used

ng_count = self.node_groups_extra[ng.id]['node_count']

2. An uninitialized Python key:

https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L283

3. Incorrect bounds in range():

https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L255-L256

I believe this should be:

for i in range(0, self.cluster.anti_affinity_ratio):
    resources.update(self._serialize_aa_server_group(i+1))

https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L278

I believe this should be:

for i in range(0, ng_count):

With the above in place, anti-affinity began working. The above changes
were all quick fixes to get it up and running, so there might be better
ways of going about this.
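
For clarity, the combined change I'm running locally amounts to roughly the
following (paraphrased from the snippets above, not the exact diff; fix #2 is
simply making sure that key is initialized before it is used):

# Paraphrased sketch of the combined local changes to
# sahara/service/heat/templates.py -- not the exact upstream diff.

# (1) ng.count came back as 0 here, so use the node count recorded for the
#     node group instead:
ng_count = self.node_groups_extra[ng.id]['node_count']

# (3) widen the range() bounds so every anti-affinity server group, and every
#     instance in the node group, is actually covered:
for i in range(0, self.cluster.anti_affinity_ratio):
    resources.update(self._serialize_aa_server_group(i + 1))

for i in range(0, ng_count):
    ...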

I can also create something on StoryBoard for this.

Thanks,
Joe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] A starting point

2018-06-21 Thread Tristan Cacqueray

Hi Flint,

On June 21, 2018 5:32 pm, Flint WALRUS wrote:

Hi everyone, sorry for the late answer, but I'm currently trapped in a
cluster issue with cinder-volume that doesn't leave me much time.

That being said, I'll have some time to work on this feature during the
summer (July/August) and so do some coding once I've caught up with
your work.


That's great to hear! The next step is to understand how to deal with
oslo policy and control objects access/modification.


Did you create a specific tree, or did you create a new graphql folder
within the neutron/neutron/api/ path for the schemas etc.?


There is a feature/graphql branch where an initial patch[1] adds a new
neutron/api/graphql directory as well as new test_graphql.py
functional tests.
The api-paste is also updated to expose the '/graphql' http endpoint.
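
For example, once that is in place, a Relay-style query can be sent to the
endpoint along these lines (illustrative only: the URL, auth handling and
exact field names depend on the deployment and on the schema in the patch):

import requests

# Illustrative sketch: controller URL, token handling and field names are
# assumptions, not the exact schema from the patch.
query = """
{
  networks {
    edges {
      node {
        id
        name
      }
    }
  }
}
"""

resp = requests.post(
    "http://controller:9696/graphql",
    json={"query": query},
    headers={"X-Auth-Token": "<keystone token>"},
)
print(resp.json())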

Not sure if we want to keep on updating that change, or propose further
code as a new change on top of this skeleton?

Regards,
-Tristan



On Sat, Jun 16, 2018 at 08:42, Tristan Cacqueray wrote:


On June 15, 2018 10:42 pm, Gilles Dubreuil wrote:
> Hello,
>
> This initial patch [1] allows retrieving networks and subnets.
>
> This is very easy, thanks to the graphene-sqlalchemy helper.
>
> The edges/nodes layer might be confusing at first, but it makes
> the schema Relay-compliant in order to offer re-fetching and pagination
> features out of the box.
>
> The next priority is to set up the unit tests in order to implement
> mutations.
>
> Could someone help provide a base in order to respect Neutron test
> requirements?
>
>
> [1] [abandoned]

Actually, the correct review (proposed on the feature/graphql branch)
is:

[1] https://review.openstack.org/575898

>
> Thanks,
> Gilles
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-9, June 25-29

2018-06-21 Thread Sean McGinnis
A nice and short one this week...

Development Focus
-

Teams should be focused on implementing planned work for the cycle. It is also
a good time to review those plans and reprioritize anything if needed based on
what progress has been made and what looks realistic to complete in the
next few weeks.

General Information
---

We have a few deadlines coming up as we get closer to the end of the cycle:

* Non-client libraries (generally, any library that is not
  python-${PROJECT}client) must have a final release by July 19. Only
  critical bugfixes will be allowed past this point. Please make sure any
  important feature work has its required library changes in by this time.

* Client libraries must have a final release by July 26.


Upcoming Deadlines & Dates
--

Final non-client library release deadline: July 19
Final client library release deadline: July 26
Rocky-3 Milestone: July 26

--
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library

2018-06-21 Thread Sean McGinnis
> 
> Apparently heat-translator has a healthy ecosystem of contributors and
> users, but not of maintainers, and it remaining a deliverable of the Heat
> project is doing nothing to alleviate the latter problem. I'd like to find
> it a home that _would_ help.
> 

I'd be interested to hear thoughts on whether this is somewhere we (the TC)
should step in and make a few people cores on this project. Or are the existing
contributors numerous enough, but not involved enough to be trusted as cores?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-06-21 Thread Zane Bitter

On 20/06/18 17:59, Adam Harwell wrote:

Looks like I missed this so I'm late to the party, but:

Ade is technically correct, Octavia doesn't explicitly depend on 
Barbican, as we do support castellan generically.


*HOWEVER*: we don't just store and retrieve our own secrets -- we rely 
on loading up user-created secrets. This means that for Octavia to work, 
even if we use castellan, we still need some way for users to interact 
with the secret store via an API, and what that means in OpenStack is 
still Barbican. So I would still say that Barbican is a dependency for 
us logistically, if not technically.


Right, yeah, I'd call that a dependency on Barbican.

There are reportedly, however, other use cases where the keys are 
generated internally that don't depend on Barbican but can benefit from 
Castellan.
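
For anyone who hasn't used it, going through Castellan rather than a 
particular key store looks roughly like this (a simplified sketch, not any 
project's actual code; the backend is picked via oslo.config):

from castellan.common.objects import passphrase
from castellan import key_manager


def store_and_fetch(context, secret_bytes):
    # The backend (Barbican, Vault, ...) comes from oslo.config, so this
    # code stays the same whichever key store the deployment provides.
    manager = key_manager.API()

    # Store a secret and get back an opaque reference...
    secret_ref = manager.store(context, passphrase.Passphrase(secret_bytes))

    # ...which can later be used to retrieve it again.
    return manager.get(context, secret_ref)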


It might be a worthwhile exercise to make a list of all of the proposed 
features that have been blocked on this and whether they require user 
interaction with the key store.


For example, internally at GoDaddy we were investigating deploying Vault 
with a custom user-facing API/UI for allowing users to store secrets 
that could be consumed by Octavia with castellan (don't get me started 
on how dumb that is, but it's what we were investigating).
The correct way to do this in an openstack environment is the openstack 
secret store API, which is Barbican.


This is the correct answer, and thank you for being awesome :)

So, while I am personally dealing 
with an example of very painfully avoiding Barbican (which may have been 
a non-issue if Barbican were a base service), I have a tough time 
reconciling not including Barbican itself as a requirement.


On the bright side, getting everyone to deploy either Barbican or Vault 
makes it easier even for the folks who chose Vault to deploy Barbican later.


I don't think we've given up on making Barbican a base service, just 
recognised that it's a longer-term effort whereas this is something we 
can do to start down the path right now.


cheers,
Zane.


    --Adam (rm_work)

On Wed, Jun 20, 2018, 11:37 Jeremy Stanley wrote:


On 2018-06-06 01:29:49 + (+), Jeremy Stanley wrote:
[...]
 > Seeing no further objections, I give you
 > https://review.openstack.org/572656 for the next step.

That change merged just a few minutes ago, and

https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services
now includes:

     A Castellan-compatible key store

     OpenStack components may keep secrets in a key store, using
     Oslo’s Castellan library as an indirection layer. While
     OpenStack provides a Castellan-compatible key store service,
     Barbican, other key store backends are also available for
     Castellan. Note that in the context of the base services set
     Castellan is intended only to provide an interface for services
     to interact with a key store, and it should not be treated as a
     means to proxy API calls from users to that key store. In order
     to reduce unnecessary exposure risks, any user interaction with
     secret material should be left to a dedicated API instead
     (preferably as provided by Barbican).

Thanks to everyone who helped brainstorming/polishing, and here's
looking forward to a ubiquity of default security features and
functionality in future OpenStack releases!
-- 
Jeremy Stanley





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][fwaas] How to reproduce in Debian (was: Investigation into debian/l3/wsgi/fwaas error)

2018-06-21 Thread Thomas Goirand
On 06/21/2018 04:48 PM, Nate Johnston wrote:
> I will continue to debug the issue tomorrow.  I see no linkage at this
> point with any of the previously listed constraints on this scenario.
> So I am going to copy Brian Haley for his L3 expertise, as well as the 3
> FWaaS cores to see if this directs their thoughts in any particular
> direction.  I hope to continue the investigation tomorrow.
> 
> Thanks,
> 
> Nate Johnston
> njohnston

Hi there,

As per IRC discussion, let me explain to everyone the difference between
what I've done in Debian, and what's in the other distributions. First,
I would like to highlight the fact that this isn't at all Debian-specific.

1/ Why the split: neutron-server -> neutron-api + neutron-rpc-server

On other distros, we use Python 2, therefore neutron-server can be in
use, and that works with or without SSL.

In Debian, since we've switched to Python 3, an Eventlet-based API
daemon cannot work with SSL, due to a bug in Eventlet itself. That bug
has been known since 2015, with no fix coming. What happens is that when
a client connects to the API server, the non-blocking sockets introduced
by Eventlet's monkey patching break the SSL handshake.

As a consequence, the only way to run Neutron with Python 3 and SSL is
to avoid neutron-server and use either uwsgi or mod_uwsgi. In Debian,
most daemons are using uwsgi when possible. In such a mode, the WSGI
application is /usr/bin/neutron-api. But this WSGI application, as it's
not a daemon (but an API only, served by a web server), cannot have a
thread listening to the RPC bus. So instead, there's neutron-rpc-server
to do that job.
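
For reference, the uwsgi side of this looks roughly like the following
(illustrative only; the socket path and worker counts here are not the exact
Debian packaging defaults):

[uwsgi]
plugins = python3
# The WSGI application shipped by the package:
wsgi-file = /usr/bin/neutron-api
processes = 4
threads = 1
# Served by the front-end web server over a local socket:
uwsgi-socket = /run/neutron/neutron-api.sock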

2/ Bugs already fixed but not merged in neutron for this mode

An Nguyen Phong (annp on IRC) has fixed stuff in neutron for that mode
of operation described above, but it's not yet merged:

https://review.openstack.org/#/c/555608/

Without this patch, the l3 agent doesn't know about ml2 extensions, it's
impossible to pass startup --config-file= parameters correctly, and the
openvswitch agent never applies security-group firewall rules. Please
consider reviewing this patch and merging it.

3/ How to reproduce the Debian environment

You can always simply install stuff by hand with packages, but that's
boringly long to do. The easiest way is to pop a fresh Stretch and have
puppet run in it to install everything for you. Here are the steps:

a) Boot-up a stretch machine with access to the net.
b) git clone https://github.com/openstack/puppet-openstack-integration
c) cd puppet-openstack-integration
d) git review -d 577281

This will re-enable FWaaS for the l3 agent. Hopefully, we'll get to the
point where this patch can be applied and FWaaS re-enabled.

e) edit all-in-one.sh line 69:

--- a/all-in-one.sh
+++ b/all-in-one.sh
@@ -66,7 +66,7 @@ export GEM_HOME=`pwd`/.bundled_gems
 gem install bundler --no-rdoc --no-ri --verbose

 set -e
-./run_tests.sh
+SCENARIO=scenario001 ./run_tests.sh
 RESULT=$?
 set +e
 if [ $RESULT -ne 0 ]; then

f) git commit -a -m "test scenario001"

g) ./all-in-one.sh

Note that you may as well test scenarios 2 & 4, which also use OVS,
or scenario 3, which uses linuxbridge. After approx one hour, you'll
get a full Debian all-in-one installation using packages. If you're not
used to it, all the code is in /usr/lib/python3/dist-packages. You may
edit code there for your tests.

If you need to re-run a single test, you can do this:

cp /tmp/openstack/tempest/etc/tempest.conf /etc/tempest
cd /var/lib/tempest
tempest_debian_shell_wrapper \
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops

Just have a look at /usr/bin/tempest_debian_shell_wrapper; it's a tiny
shell script to run tests easily.

Also, feel free to attempt switching to firewall_v2 in configuration
files in /etc/neutron, and then restart the daemons. By default, it's
still v1, but if it works with v2, we'll happily apply patches in
puppet-openstack for that (which will apply for all distros).

If you need me, just type "zigo" on IRC (I'm in most channels, including
#openstack-neutron and #openstack-fwaas), and I'll reply if it's office
hours in Geneva/France, or late in my evening.

I hope the above helps,
Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library

2018-06-21 Thread Zane Bitter

On 20/06/18 18:59, Doug Hellmann wrote:

According to
https://governance.openstack.org/tc/reference/projects/heat.html the
Heat PTL *is* the PTL for heat-translator. Any internal team structure
that implies otherwise is just that, an internal team structure.


Yes, correct.


I'm really unclear on what the problem is here.


From my perspective (wearing my Heat hat), the problem is that the 
official team structure no longer represents reality. The folks who were 
working on both heat and heat-translator are long gone. Bob is 
responsive to direct email, but heat-translator is effectively in 
maintenance mode at best.


A few weeks back I made the mistake of reviewing a patch (Gerrit 
confirms that it was literally the first patch I have ever reviewed in 
heat-translator) to update the docs PTI since (a) I know a bit about 
that, and (b) I technically have +2 rights. Immediately people started 
pinging me every day for reviews and adding stuff to my review queue, 
some of which was labelled 'trivial' right there in the patch headline, 
until I asked them to knock it off. That's how much demand there is for 
maintainers.


Apparently heat-translator has a healthy ecosystem of contributors and 
users, but not of maintainers, and it remaining a deliverable of the 
Heat project is doing nothing to alleviate the latter problem. I'd like 
to find it a home that _would_ help.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-templates] Creating a role with no domain

2018-06-21 Thread Zane Bitter

On 21/06/18 07:39, Rabi Mishra wrote:
Looks like that's a bug where we create a domain-specific role for the 
'default' domain[1] when domain is not specified.


[1] 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/keystone/role.py#L54


You can _probably_ pass

  domain: null

in your template. Worth a try, anyway.
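
That is, something along these lines (untested; it just adds the property to 
the template quoted below):

heat_template_version: 2015-04-30
description: Creates a role
resources:
  role_resource:
    type: OS::Keystone::Role
    properties:
      name: SimpleRole
      domain: null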

- ZB

You're welcome to raise a bug and propose a fix where we should just 
remove the default.


On Thu, Jun 21, 2018 at 4:14 PM, Tikkanen, Viktor (Nokia - FI/Espoo) wrote:


Hi!
There was a new 'domain' property added to OS::Keystone::Role
(https://storyboard.openstack.org/#!/story/1684558,
https://review.openstack.org/#/c/459033/).
With “openstack role create” CLI command it is still possible to
create roles with no associated domains; but it seems that the same
cannot be done with heat templates.
An example: if I create two roles, CliRole (with “openstack role
create CliRole” command)  and SimpleRole with the following heat
template:
heat_template_version: 2015-04-30
description: Creates a role
resources:
  role_resource:
    type: OS::Keystone::Role
    properties:
      name: SimpleRole
the result in the keystone database will be:
MariaDB [keystone]> select * from role;
+----------------------------------+------------------+-------+-----------+
| id                               | name             | extra | domain_id |
+----------------------------------+------------------+-------+-----------+
| 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole       | {}    | default   |
| 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole          | {}    | <>        |
| 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {}    | <>        |
| 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user  | {}    | <>        |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_         | {}    | <>        |
| e174c27e79b84ea392d28224eb0af7c9 | admin            | {}    | <>        |
+----------------------------------+------------------+-------+-----------+
Should it be possible to create a role without associated domain
with a heat template?
-V.






--
Regards,
Rabi Mishra







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] A starting point

2018-06-21 Thread Flint WALRUS
Hi everyone, sorry for the late answer, but I'm currently trapped in a
cluster issue with cinder-volume that doesn't leave me much time.

That being said, I'll have some time to work on this feature during the
summer (July/August) and so do some coding once I've caught up with
your work.

Did you create a specific tree, or did you create a new graphql folder
within the neutron/neutron/api/ path for the schemas etc.?
On Sat, Jun 16, 2018 at 08:42, Tristan Cacqueray wrote:

> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote:
> > Hello,
> >
> > This initial patch [1] allows retrieving networks and subnets.
> >
> > This is very easy, thanks to the graphene-sqlalchemy helper.
> >
> > The edges/nodes layer might be confusing at first, but it makes
> > the schema Relay-compliant in order to offer re-fetching and pagination
> > features out of the box.
> >
> > The next priority is to set up the unit tests in order to implement
> > mutations.
> >
> > Could someone help provide a base in order to respect Neutron test
> > requirements?
> >
> >
> > [1] [abandoned]
>
> Actually, the correct review (proposed on the feature/graphql branch)
> is:
>
> [1] https://review.openstack.org/575898
>
> >
> > Thanks,
> > Gilles
> >
> >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-06-21 Thread Michael McCune
Greetings OpenStack community,

Today's meeting was on the shorter side but covered several topics. We
discussed the migration to StoryBoard, and noted that we need to send
word to Gilles and the GraphQL experimenters that the board is in
place and ready for their usage. The GraphQL work was also highlighted
as there has been a review[7] posted along with an update[8] to the
mailing list. This work is in its early stages, but significant
progress has already been made. Kudos to Gilles and the neutron team!

We talked briefly about the upcoming PTG and which days might be
available for the SIG, but it is too early for such speculation and
the group has tabled the idea for now.

The API-SIG StoryBoard is now live[9], although it still has the old
"api-wg" name. That will be updated when the infra team does the
project rename for us. We encourage all new activity to take place
here and we are in the process of cleaning up the older links; stay
tuned for more information.

There is one new guideline change that is just starting its life in
the review process[10]. This is an addition to the guideline on errors,
and although this review is still in the early stages of development,
any comments are greatly appreciated.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Expand error code document to expect clarity
  https://review.openstack.org/#/c/577118/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://review.openstack.org/575120
[8] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131557.html
[9] https://storyboard.openstack.org/#!/project/1039
[10] https://review.openstack.org/#/c/577118/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Chris Friesen

On 06/21/2018 07:04 AM, Artom Lifshitz wrote:

As I understand it, Artom is proposing to have a larger race window,
essentially
from when the scheduler selects a node until the resource audit runs on that
node.


Exactly. When writing the spec I thought we could just call the resource tracker
to claim the resources when the migration was done. However, when I started
looking at the code in reaction to Sahid's feedback, I noticed that there's no
way to do it without the MoveClaim context (right?)


In the previous patch, the MoveClaim is the thing that calculates the dest NUMA 
topology given the flavor/image, then calls hardware.numa_fit_instance_to_host() 
to figure out what specific host resources to consume.  That claim is then 
associated with the migration object and the instance.migration_context, and 
then we call _update_usage_from_migration() to actually consume the resources on 
the destination.  This all happens within check_can_live_migrate_destination().


As an improvement over what you've got, I think you could just kick off an early 
call of update_available_resource() once the migration is done.  It'd be 
potentially computationally expensive, but it'd reduce the race window.



Keep in mind, we're not making any race windows worse - I'm proposing keeping
the status quo and fixing it later with NUMA in placement (or the resource
tracker if we can swing it).


Well, right now live migration is totally broken so nobody's doing it.  You're 
going to make it kind of work but with racy resource tracking, which could lead 
to people doing it then getting in trouble.  We'll want to make sure there's a 
suitable release note for this.



The resource tracker stuff is just so... opaque. For instance, the original
patch [1] uses a mutated_migration_context around the pre_live_migration call to
the libvirt driver. Would I still need to do that? Why or why not?


The mutated context applies the "new" numa_topology and PCI stuff.

The reason for the mutated context for pre_live_migration() is so that the 
plug_vifs(instance) call will make use of the new macvtap device information. 
See Moshe's comment from Dec 8 2016 at https://review.openstack.org/#/c/244489/46.


I think the mutated context around the call to self.driver.live_migration() is 
so that the new XML represents the newly-claimed pinned CPUs on the destination.



At this point we need to commit to something and roll with it, so I'm sticking
to the "easy way". If it gets shut down in code review, at least we'll have
certainty on how to approach this next cycle.


Yep, I'm cool with incremental improvement.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Chris Friesen

On 06/21/2018 07:50 AM, Mooney, Sean K wrote:

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]



Side question... does either approach touch PCI device management
during live migration?

I ask because the only workloads I've ever seen that pin guest vCPU
threads to specific host processors -- or make use of huge pages
consumed from a specific host NUMA node -- have also made use of SR-IOV
and/or PCI passthrough. [1]

If workloads that use PCI passthrough or SR-IOV VFs cannot be live
migrated (due to existing complications in the lower-level virt layers)
I don't see much of a point spending lots of developer resources trying
to "fix" this situation when in the real world, only a mythical
workload that uses CPU pinning or huge pages but *doesn't* use PCI
passthrough or SR-IOV VFs would be helped by it.



[Mooney, Sean K]  I would generally agree, but with the extension of including 
DPDK-based vswitches like ovs-dpdk or VPP. CPU-pinned or hugepage-backed guests 
generally also have some kind of high-performance networking solution, or use a 
hardware accelerator like a GPU, to justify the performance assertion that 
pinning of cores or RAM is required. A DPDK networking stack would however not 
require the PCI remapping to be addressed, though I believe that is planned to 
be added in Stein.


Jay, you make a good point but I'll second what Sean says...for the last few 
years my organization has been using a DPDK-accelerated vswitch which performs 
well enough for many high-performance purposes.


In the general case, I think live migration while using physical devices would 
require coordinating the migration with the guest software.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][fwaas] Investigation into debian/l3/wsgi/fwaas error

2018-06-21 Thread Nate Johnston
[bringing a side email conversation onto the main mailing list]

I have been looking into the issue with neutron_fwaas having an error
when running under the neutron-l3-agent on Debian when using wsgi.
Here's what I have tracked it down to at this point.  I am going to lay
it all out there, including points that you already know, because I am
going to bring in another party or two at this point.

To make sure we are on solid ground, let me restate what are the
parameters of the problem:

1. The error does not occur when neutron_fwaas is disabled.
2. The error only occurs if wsgi is in use.  If standard eventlet is
used, the error is not observed.
3. The error only occurs on debian; centos and ubuntu do not manifest
the problem.

As the neutron-l3-agent loads, it is trying to initialize the fwaas_v2
driver.  The driver initializes without incident, and then proceeds to
attempt to fetch firewall groups.  Note that you do not need to exercise
tempest to see this behavior; it is visible in the logs without anything
else going on.  Running pdb, I was able to trace the attempt to send the
message deep into the RPC transmission code; I saw very little there to
be suspicious of.

2018-06-20 21:06:34.761 915007 DEBUG 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 [-] 
Fetch firewall groups from plugin get_firewall_groups_for_project 
/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py:44
...
2018-06-20 21:07:05.551 915007 ERROR neutron.common.rpc [-] Timeout in RPC 
method get_firewall_groups_for_project. Waiting for 10 seconds before next 
attempt. If the server is not down, consider increasing the 
rpc_response_timeout option as Neutron server(s) may be overloaded and unable 
to respond quickly enough.:
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply 
to message ID 8616e98dd8d943eea1dcf99c04bd2be6

You can see, the RPC message goes into the ether and does not return.  This
results in the stacktraces in neutron-l3-agent.log.  This example is from a
later transaction.


2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 [-] 
FWaaS router add RPC info call failed for 8c13b5d7-7b93-4b91-ae4c-c387abe96734: 
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to 
message ID 44851518f5ee4d40a2cdbcabc27e3c92
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
Traceback (most recent call last):
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2   File 
"/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
324, in get
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
return self._queues[msg_id].get(block=True, timeout=timeout)
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2   File 
"/usr/lib/python3/dist-packages/eventlet/queue.py", line 313, in get
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
return waiter.wait()
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2   File 
"/usr/lib/python3/dist-packages/eventlet/queue.py", line 141, in wait
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
return get_hub().switch()
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2   File 
"/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
return self.greenlet.switch()
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
queue.Empty
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 During 
handling of the above exception, another exception occurred:
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
Traceback (most recent call last):
2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2   File 
"/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py",
 line 292, in 

Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Sahid Orentino Ferdjaoui
On Thu, Jun 21, 2018 at 09:36:58AM -0400, Jay Pipes wrote:
> On 06/18/2018 10:16 AM, Artom Lifshitz wrote:
> > Hey all,
> > 
> > For Rocky I'm trying to get live migration to work properly for
> > instances that have a NUMA topology [1].
> > 
> > A question that came up on one of patches [2] is how to handle
> > resources claims on the destination, or indeed whether to handle that
> > at all.
> > 
> > The previous attempt's approach [3] (call it A) was to use the
> > resource tracker. This is race-free and the "correct" way to do it,
> > but the code is pretty opaque and not easily reviewable, as evidenced
> > by [3] sitting in review purgatory for literally years.
> > 
> > A simpler approach (call it B) is to ignore resource claims entirely
> > for now and wait for NUMA in placement to land in order to handle it
> > that way. This is obviously race-prone and not the "correct" way of
> > doing it, but the code would be relatively easy to review.
> > 
> > For the longest time, live migration did not keep track of resources
> > (until it started updating placement allocations). The message to
> > operators was essentially "we're giving you this massive hammer, don't
> > break your fingers." Continuing to ignore resource claims for now is
> > just maintaining the status quo. In addition, there is value in
> > improving NUMA live migration *now*, even if the improvement is
> > incomplete because it's missing resource claims. "Best is the enemy of
> > good" and all that. Finally, making use of the resource tracker is
> > just work that we know will get thrown out once we start using
> > placement for NUMA resources.
> > 
> > For all those reasons, I would favor approach B, but I wanted to ask
> > the community for their thoughts.
> 
> Side question... does either approach touch PCI device management during
> live migration?
> 
> I ask because the only workloads I've ever seen that pin guest vCPU threads
> to specific host processors -- or make use of huge pages consumed from a
> specific host NUMA node -- have also made use of SR-IOV and/or PCI
> passthrough. [1]

Not really. There are a lot of virtual switches that we support, like
OVS-DPDK and Contrail Virtual Router, that support vhostuser interfaces,
which is one use case. (We do support live migration of vhostuser
interfaces.)

> If workloads that use PCI passthrough or SR-IOV VFs cannot be live migrated
> (due to existing complications in the lower-level virt layers) I don't see
> much of a point spending lots of developer resources trying to "fix" this
> situation when in the real world, only a mythical workload that uses CPU
> pinning or huge pages but *doesn't* use PCI passthrough or SR-IOV VFs would
> be helped by it.
> 
> Best,
> -jay
> 
> [1] I know I'm only one person, but every workload I've seen that requires
> pinned CPUs and/or huge pages is a VNF that has been essentially an ASIC
> that a telco OEM/vendor has converted into software and requires the same
> guarantees that the ASIC and custom hardware gave the original
> hardware-based workload. These VNFs, every single one of them, used either
> PCI passthrough or SR-IOV VFs to handle latency-sensitive network I/O.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Artom Lifshitz
> Side question... does either approach touch PCI device management during
> live migration?

Nope. I'd need to do some research to see what, if anything, is needed
at the lower levels (kernel, libvirt) to enable this.

> I ask because the only workloads I've ever seen that pin guest vCPU threads
> to specific host processors -- or make use of huge pages consumed from a
> specific host NUMA node -- have also made use of SR-IOV and/or PCI
> passthrough. [1]
>
> If workloads that use PCI passthrough or SR-IOV VFs cannot be live migrated
> (due to existing complications in the lower-level virt layers) I don't see
> much of a point spending lots of developer resources trying to "fix" this
> situation when in the real world, only a mythical workload that uses CPU
> pinning or huge pages but *doesn't* use PCI passthrough or SR-IOV VFs would
> be helped by it.

It's definitely a pain point for at least some of our customers - I
don't know their use cases exactly, but live migration with CPU
pinning but no other "high performance" features has come up a few
times in our downstream bug tracker. In any case, incremental progress
is better than no progress at all, so if we can improve how NUMA live
migration works, we'll be in a better position to make it work with
PCI devices down the road.

> [Mooney, Sean K]  I would generally agree, but with the extension of including 
> DPDK-based vswitches like ovs-dpdk or VPP. CPU-pinned or hugepage-backed guests 
> generally also have some kind of high-performance networking solution, or use a 
> hardware accelerator like a GPU, to justify the performance assertion that 
> pinning of cores or RAM is required. A DPDK networking stack would however not 
> require the PCI remapping to be addressed, though I believe that is planned to 
> be added in Stein.

I think Stephen Finucane's NUMA-aware vswitches work depends on mine
to work with live migration - ie, it'll work just fine on its own, but
to live migrate an instance with a NUMA vswitch (I know I'm abusing
language here, apologies) this spec will need to be implemented first.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Vitrage] naming issues

2018-06-21 Thread Rafal Zielinski
Hello,

As suggested by eyalb on IRC, I am posting my problem here.

Basically I have 10 nova hosts named in Nagios as follows:

nova0
nova1
.
.
.
nova10

I've made a config file for Vitrage to map the Nagios hosts to the real hosts 
in OpenStack, named like:

nova0.domain.com
nova1.domain.com
.
.
.
nova10.domain.com

And the issue:
When provoking a Nagios alert on host nova10, Vitrage displays the error on 
nova1; when provoking a Nagios alert on host nova1, Vitrage does not show any 
alert.

Can somebody have a look at this issue?

Thank you,
Rafal Zielinski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Thursday, June 21, 2018 2:37 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] NUMA-aware live migration: easy but
> incomplete vs complete but hard
> 
> On 06/18/2018 10:16 AM, Artom Lifshitz wrote:
> > Hey all,
> >
> > For Rocky I'm trying to get live migration to work properly for
> > instances that have a NUMA topology [1].
> >
> > A question that came up on one of patches [2] is how to handle
> > resources claims on the destination, or indeed whether to handle that
> > at all.
> >
> > The previous attempt's approach [3] (call it A) was to use the
> > resource tracker. This is race-free and the "correct" way to do it,
> > but the code is pretty opaque and not easily reviewable, as evidenced
> > by [3] sitting in review purgatory for literally years.
> >
> > A simpler approach (call it B) is to ignore resource claims entirely
> > for now and wait for NUMA in placement to land in order to handle it
> > that way. This is obviously race-prone and not the "correct" way of
> > doing it, but the code would be relatively easy to review.
> >
> > For the longest time, live migration did not keep track of resources
> > (until it started updating placement allocations). The message to
> > operators was essentially "we're giving you this massive hammer,
> don't
> > break your fingers." Continuing to ignore resource claims for now is
> > just maintaining the status quo. In addition, there is value in
> > improving NUMA live migration *now*, even if the improvement is
> > incomplete because it's missing resource claims. "Best is the enemy
> of
> > good" and all that. Finally, making use of the resource tracker is
> > just work that we know will get thrown out once we start using
> > placement for NUMA resources.
> >
> > For all those reasons, I would favor approach B, but I wanted to ask
> > the community for their thoughts.
> 
> Side question... does either approach touch PCI device management
> during live migration?
> 
> I ask because the only workloads I've ever seen that pin guest vCPU
> threads to specific host processors -- or make use of huge pages
> consumed from a specific host NUMA node -- have also made use of SR-IOV
> and/or PCI passthrough. [1]
> 
> If workloads that use PCI passthrough or SR-IOV VFs cannot be live
> migrated (due to existing complications in the lower-level virt layers)
> I don't see much of a point spending lots of developer resources trying
> to "fix" this situation when in the real world, only a mythical
> workload that uses CPU pinning or huge pages but *doesn't* use PCI
> passthrough or SR-IOV VFs would be helped by it.
> 
> Best,
> -jay
> 
> [1] I know I'm only one person, but every workload I've seen that
> requires pinned CPUs and/or huge pages is a VNF that has been
> essentially an ASIC that a telco OEM/vendor has converted into software
> and requires the same guarantees that the ASIC and custom hardware gave
> the original hardware-based workload. These VNFs, every single one of
> them, used either PCI passthrough or SR-IOV VFs to handle latency-
> sensitive network I/O.
[Mooney, Sean K]  I would generally agree, but with the extension of including 
DPDK-based vswitches like ovs-dpdk or VPP. CPU-pinned or hugepage-backed guests 
generally also have some kind of high-performance networking solution, or use a 
hardware accelerator like a GPU, to justify the performance assertion that 
pinning of cores or RAM is required. A DPDK networking stack would however not 
require the PCI remapping to be addressed, though I believe that is planned to 
be added in Stein.
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] minimum libvirt version for nova-compute

2018-06-21 Thread Lee Yarwood
On 20-06-18 13:54:29, Lee Yarwood wrote:
> On 20-06-18 07:32:08, Matt Riedemann wrote:
> > On 6/20/2018 6:54 AM, Lee Yarwood wrote:
> > > We can bump the minimum here but then we have to play a game of working
> > > out the oldest version the above fix was backported to across the
> > > various distros. I'd rather see this address by the Libvirt maintainers
> > > in Debian if I'm honest.
> > 
> > Just a thought, but in nova we could at least do:
> > 
> > 1. Add a 'known issues' release note about the issue and link to the libvirt
> > patch.
> 
> ACK
>  
> > and/or
> > 
> > 2. Handle libvirtError in that case, check for the "Incorrect number of
> > padding bytes" string in the error, and log something with a breadcrumb to
> > the libvirt fix - that would be for people that miss the release note, or
> > hit the issue past rocky and wouldn't have found the release note because
> > they're on Stein+ now.
> 
> Yeah that's fair, I'll submit something for both of the above today.


libvirt: Log breadcrumb for known encryption bug
https://review.openstack.org/577164

Cheers,

-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Jay Pipes

On 06/18/2018 10:16 AM, Artom Lifshitz wrote:

Hey all,

For Rocky I'm trying to get live migration to work properly for
instances that have a NUMA topology [1].

A question that came up on one of patches [2] is how to handle
resources claims on the destination, or indeed whether to handle that
at all.

The previous attempt's approach [3] (call it A) was to use the
resource tracker. This is race-free and the "correct" way to do it,
but the code is pretty opaque and not easily reviewable, as evidenced
by [3] sitting in review purgatory for literally years.

A simpler approach (call it B) is to ignore resource claims entirely
for now and wait for NUMA in placement to land in order to handle it
that way. This is obviously race-prone and not the "correct" way of
doing it, but the code would be relatively easy to review.

For the longest time, live migration did not keep track of resources
(until it started updating placement allocations). The message to
operators was essentially "we're giving you this massive hammer, don't
break your fingers." Continuing to ignore resource claims for now is
just maintaining the status quo. In addition, there is value in
improving NUMA live migration *now*, even if the improvement is
incomplete because it's missing resource claims. "Best is the enemy of
good" and all that. Finally, making use of the resource tracker is
just work that we know will get thrown out once we start using
placement for NUMA resources.

For all those reasons, I would favor approach B, but I wanted to ask
the community for their thoughts.


Side question... does either approach touch PCI device management during 
live migration?


I ask because the only workloads I've ever seen that pin guest vCPU 
threads to specific host processors -- or make use of huge pages 
consumed from a specific host NUMA node -- have also made use of SR-IOV 
and/or PCI passthrough. [1]


If workloads that use PCI passthrough or SR-IOV VFs cannot be live 
migrated (due to existing complications in the lower-level virt layers) 
I don't see much of a point spending lots of developer resources trying 
to "fix" this situation when in the real world, only a mythical workload 
that uses CPU pinning or huge pages but *doesn't* use PCI passthrough or 
SR-IOV VFs would be helped by it.


Best,
-jay

[1] I know I'm only one person, but every workload I've seen that 
requires pinned CPUs and/or huge pages is a VNF that has been 
essentially an ASIC that a telco OEM/vendor has converted into software 
and requires the same guarantees that the ASIC and custom hardware gave 
the original hardware-based workload. These VNFs, every single one of 
them, used either PCI passthrough or SR-IOV VFs to handle 
latency-sensitive network I/O.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Artom Lifshitz
>
> As I understand it, Artom is proposing to have a larger race window,
> essentially
> from when the scheduler selects a node until the resource audit runs on
> that node.
>

Exactly. When writing the spec I thought we could just call the resource
tracker to claim the resources when the migration was done. However, when I
started looking at the code in reaction to Sahid's feedback, I noticed that
there's no way to do it without the MoveClaim context (right?)

Keep in mind, we're not making any race windows worse - I'm proposing
keeping the status quo and fixing it later with NUMA in placement (or the
resource tracker if we can swing it).

The resource tracker stuff is just so... opaque. For instance, the original
patch [1] uses a mutated_migration_context around the pre_live_migration
call to the libvirt driver. Would I still need to do that? Why or why not?

At this point we need to commit to something and roll with it, so I'm
sticking to the "easy way". If it gets shut down in code review, at least
we'll have certainty on how to approach this next cycle.

[1] https://review.openstack.org/#/c/244489/
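
To illustrate the race being discussed (purely a toy model, nothing
nova-specific): with approach B nothing reserves the destination's pinned
CPUs between the scheduling decision and the next resource audit, so two
concurrent operations can pick the same pCPUs:

    # Toy model of the approach-B race: the destination's pinned-CPU usage is
    # only refreshed by a periodic audit, so two placements decided between
    # audits can both grab the same pCPUs.
    free_pcpus = {0, 1, 2, 3}          # destination host, as of the last audit

    def pick_pcpus_without_claim(needed):
        # Approach B: read the (possibly stale) view, decide, reserve nothing.
        return set(sorted(free_pcpus)[:needed])

    mig_a = pick_pcpus_without_claim(2)
    mig_b = pick_pcpus_without_claim(2)   # runs before the next audit
    print(mig_a, mig_b)                   # both get {0, 1}: the collision

    # Approach A (a resource-tracker claim) would subtract the CPUs at
    # decision time, e.g. free_pcpus -= mig_a, closing most of this window.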

>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-06-21 Thread Jeremy Stanley
On 2018-06-20 16:59:30 -0500 (-0500), Adam Harwell wrote:
> Looks like I missed this so I'm late to the party, but:
> 
> Ade is technically correct, Octavia doesn't explicitly depend on Barbican,
> as we do support castellan generically.
> 
> *HOWEVER*: we don't just store and retrieve our own secrets -- we rely on
> loading up user created secrets. This means that for Octavia to work, even
> if we use castellan, we still need some way for users to interact with the
> secret store via an API, and what that means in openstack is still
> Barbican. So I would still say that Barbican is a dependency for us
> logistically, if not technically.
> 
> For example, internally at GoDaddy we were investigating deploying Vault
> with a custom user-facing API/UI for allowing users to store secrets that
> could be consumed by Octavia with castellan (don't get me started on how
> dumb that is, but it's what we were investigating).
> The correct way to do this in an openstack environment is the openstack
> secret store API, which is Barbican. So, while I am personally dealing with
> an example of very painfully avoiding Barbican (which may have been a
> non-issue if Barbican were a base service), I have a tough time reconciling
> not including Barbican itself as a requirement.
[...]

The past pushback we received from operators and deployers was that
they didn't want to be required to care for and feed yet one more
API service. As a compromise, it was suggested that we at least
provide a guaranteed means for services to handle their own secrets
in a centralized and standardized manner. In practice, the wording
we arrived at is intended to drive projects to strongly recommend
deploying Barbican in cases where operators want to take advantage
of any features of other services which require user interaction
with the key store.
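
For reference, a minimal sketch of what the "handle their own secrets" path
looks like when a service goes through castellan. The backend (Barbican,
Vault, ...) is selected purely by oslo.config; the context object and helper
name below are illustrative assumptions, not prescriptions:

    from castellan.common.objects import opaque_data
    from castellan import key_manager


    def stash_and_fetch(ctxt, payload):
        # Backend comes from the [key_manager] section of the service's config.
        km = key_manager.API()
        # Store raw bytes as an opaque secret and get back a reference.
        secret_ref = km.store(ctxt, opaque_data.OpaqueData(payload))
        try:
            # Retrieve the managed object again via its reference.
            return km.get(ctxt, secret_ref)
        finally:
            km.delete(ctxt, secret_ref)

Which is exactly the gap Adam points out above: none of this helps when the
secret has to be created by the user first; that still needs a user-facing
API, i.e. Barbican.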

Making a user-facing API service a "required project" from that
perspective is a bigger discussion, in my opinion. I'm in favor of
trying, but to me this piece is the first step in such a direction.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-templates] Creating a role with no domain

2018-06-21 Thread Rabi Mishra
Looks like that's a bug where we create a domain-specific role for the
'default' domain [1] when the domain is not specified.

[1]
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/keystone/role.py#L54

You're welcome to raise a bug and propose a fix; the fix should just be
removing the default.
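
Roughly, the relevant bit (paraphrased rather than copied verbatim from the
heat source) looks like the following; dropping the default is the proposed
fix:

    from heat.engine import constraints
    from heat.engine import properties

    DOMAIN = 'domain'

    properties_schema = {
        DOMAIN: properties.Schema(
            properties.Schema.STRING,
            'Name or id of the keystone domain the role belongs to.',
            # Removing this default would let a template that omits 'domain'
            # create a domain-less role, matching the CLI behaviour.
            default='default',
            constraints=[constraints.CustomConstraint('keystone.domain')],
        ),
    }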

On Thu, Jun 21, 2018 at 4:14 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
viktor.tikka...@nokia.com> wrote:

> Hi!
>
> There was a new 'domain' property added to OS::Keystone::Role
> (https://storyboard.openstack.org/#!/story/1684558,
> https://review.openstack.org/#/c/459033/).
>
> With “openstack role create” CLI command it is still possible to create
> roles with no associated domains; but it seems that the same cannot be done
> with heat templates.
>
> An example: if I create two roles, CliRole (with “openstack role create
> CliRole” command)  and SimpleRole with the following heat template:
>
> heat_template_version: 2015-04-30
> description: Creates a role
> resources:
>   role_resource:
> type: OS::Keystone::Role
> properties:
>   name: SimpleRole
>
> the result in the keystone database will be:
>
> MariaDB [keystone]> select * from role;
> +----------------------------------+------------------+-------+-----------+
> | id                               | name             | extra | domain_id |
> +----------------------------------+------------------+-------+-----------+
> | 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole       | {}    | default   |
> | 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole          | {}    | <>        |
> | 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {}    | <>        |
> | 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user  | {}    | <>        |
> | 9fe2ff9ee4384b1894a90878d3e92bab | _member_         | {}    | <>        |
> | e174c27e79b84ea392d28224eb0af7c9 | admin            | {}    | <>        |
> +----------------------------------+------------------+-------+-----------+
>
> Should it be possible to create a role without an associated domain with a
> heat template?
>
> -V.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][heat-templates] Creating a role with no domain

2018-06-21 Thread Tikkanen, Viktor (Nokia - FI/Espoo)
Hi!

There was a new 'domain' property added to OS::Keystone::Role 
(https://storyboard.openstack.org/#!/story/1684558, 
https://review.openstack.org/#/c/459033/).

With "openstack role create" CLI command it is still possible to create roles 
with no associated domains; but it seems that the same cannot be done with heat 
templates.

An example: if I create two roles, CliRole (with "openstack role create 
CliRole" command)  and SimpleRole with the following heat template:

heat_template_version: 2015-04-30
description: Creates a role
resources:
  role_resource:
type: OS::Keystone::Role
properties:
  name: SimpleRole

the result in the keystone database will be:

MariaDB [keystone]> select * from role;
+----------------------------------+------------------+-------+-----------+
| id                               | name             | extra | domain_id |
+----------------------------------+------------------+-------+-----------+
| 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole       | {}    | default   |
| 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole          | {}    | <>        |
| 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {}    | <>        |
| 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user  | {}    | <>        |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_         | {}    | <>        |
| e174c27e79b84ea392d28224eb0af7c9 | admin            | {}    | <>        |
+----------------------------------+------------------+-------+-----------+

Should it be possible to create a role without an associated domain with a heat 
template?

-V.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] 'overcloud deploy' doesn't restart haproxy (Pike)

2018-06-21 Thread Alan Bishop
On Thu, Jun 21, 2018 at 1:41 AM, Juan Antonio Osorio 
wrote:

> It is unfortunately a known issue and is present in queens and master as
> well. I think Michele (bandini on IRC) was working on it.
>

See [1], and note that [2] merged to stable/queens just a couple days ago.

[1] https://bugs.launchpad.net/tripleo/+bug/1775196
[2] https://review.openstack.org/574264

Alan


>
> On Thu, 21 Jun 2018, 06:45 Lars Kellogg-Stedman,  wrote:
>
>> I've noticed that when updating the overcloud with 'overcloud deploy',
>> the deploy process does not restart the haproxy containers when there
>> are changes to the haproxy configuration.
>>
>> Is this expected behavior?
>>
>> --
>> Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
>> http://blog.oddbit.com/|
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team

2018-06-21 Thread Dougal Matthews
On 19 June 2018 at 10:27, Renat Akhmerov  wrote:

> Hi,
>
> I’d like to promote Vitalii Solodilov to the core team of Mistral. In my
> opinion, Vitalii is a very talented engineer who has been demonstrating it
> by providing very high quality code and reviews over the last 6-7 months.
> He’s one of the people who doesn’t hesitate to take responsibility for
> solving challenging technical tasks. It’s been a great pleasure to work
> with Vitalii and I hope he will keep up the great work.
>
> Core members, please vote.
>


Thanks all for the votes and thank you Renat for nominating. I have added
Vitalii to the core reviewers. Welcome aboard, you can now +2! :-)




>
> Vitalii’s statistics: http://stackalytics.com/?module=mistral-group=marks_id=mcdoker18
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team

2018-06-21 Thread Adriano Petrich
+1

On 19 June 2018 at 10:47, Dougal Matthews  wrote:

>
>
> On 19 June 2018 at 10:27, Renat Akhmerov  wrote:
>
>> Hi,
>>
>> I’d like to promote Vitalii Solodilov to the core team of Mistral. In my
>> opinion, Vitalii is a very talented engineer who has been demonstrating it
>> by providing very high quality code and reviews over the last 6-7 months.
>> He’s one of the people who doesn’t hesitate to take responsibility for
>> solving challenging technical tasks. It’s been a great pleasure to work
>> with Vitalii and I hope he will keep up the great work.
>>
>> Core members, please vote.
>>
>
> +1 from me. Vitalii has been one of the most active reviewers and code
> contributors through Queens and Rocky.
>
>
>> Vitalii’s statistics: http://stackalytics.com/?module=mistral-group;metric=marks_id=mcdoker18
>>
>> Thanks
>>
>> Renat Akhmerov
>> @Nokia
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Sahid Orentino Ferdjaoui
On Mon, Jun 18, 2018 at 10:16:05AM -0400, Artom Lifshitz wrote:
> Hey all,
> 
> For Rocky I'm trying to get live migration to work properly for
> instances that have a NUMA topology [1].
> 
> A question that came up on one of patches [2] is how to handle
> resources claims on the destination, or indeed whether to handle that
> at all.
> 
> The previous attempt's approach [3] (call it A) was to use the
> resource tracker. This is race-free and the "correct" way to do it,
> but the code is pretty opaque and not easily reviewable, as evidenced
> by [3] sitting in review purgatory for literally years.
> 
> A simpler approach (call it B) is to ignore resource claims entirely
> for now and wait for NUMA in placement to land in order to handle it
> that way. This is obviously race-prone and not the "correct" way of
> doing it, but the code would be relatively easy to review.

Hello Artom, the problem I have with approach B is that it's based on
something which has not been designed for this, and it will end up with the
same bugs that you are trying to solve (1417667, 1289064).

Live migration is a sensitive operation that operators need to be able
to trust; if we take the case of a host evacuation, the result would
be terrible, no?

If you want to continue with B, I think you will have to find at least a
mechanism to update the host NUMA topology resources of the
destination during on-going migrations. But again, that should be
done early to avoid too big a window where another instance can be
scheduled and assigned the same CPU topology. Also, does this
really make sense when we know that at some point placement will take
care of such things for NUMA resources?

The A approach already handles what you need:

- Test whether destination host can accept the guest CPU policy
- Build new instance NUMA topology based on destination host
- Hold and update NUMA topology resources of destination host
- Store the destination host NUMA topology so it can be used by source
...

My preference is A because it reuses something which is used for every
guest that is scheduled today (not only for pci or numa things), we
have trust in it, it's also used for some move operations, it limits
the race window to one we already have, and finally we limit the
code introduced.


Thanks,
s.

> For the longest time, live migration did not keep track of resources
> (until it started updating placement allocations). The message to
> operators was essentially "we're giving you this massive hammer, don't
> break your fingers." Continuing to ignore resource claims for now is
> just maintaining the status quo. In addition, there is value in
> improving NUMA live migration *now*, even if the improvement is
> incomplete because it's missing resource claims. "Best is the enemy of
> good" and all that. Finally, making use of the resource tracker is
> just work that we know will get thrown out once we start using
> placement for NUMA resources.
> 
> For all those reasons, I would favor approach B, but I wanted to ask
> the community for their thoughts.
> 
> Thanks!
> 
> [1] 
> https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged)
> [2] https://review.openstack.org/#/c/567242/
> [3] https://review.openstack.org/#/c/244489/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack-Zun Service Appears down

2018-06-21 Thread Hongbin Lu
Hi Muhammad,

Here is the code (run on the controller node) that decides whether a service is
up: https://github.com/openstack/zun/blob/master/zun/api/servicegroup.py .
There are several possibilities that can cause a service to be 'down':
1. The service was forced down via the API (e.g. someone explicitly issued a
command like "appcontainer service forcedown").
2. The zun compute process is not doing the heartbeat for a certain period
of time (CONF.service_down_time).
3. The zun compute process is doing the heartbeat properly but the clocks of
the controller node and the compute node are out of sync.

In the past, #3 has been the common pitfall that people ran into. If it is not
#3, you might want to check whether the zun compute process is doing the
heartbeat properly. Each zun compute process runs a periodic task to update its
state in the DB:
https://github.com/openstack/zun/blob/master/zun/servicegroup/zun_service_periodic.py
. The call to 'report_state_up' records that this service is up in the DB at
the current time. You might want to check whether this periodic task is running
properly, or whether the current state is updated in the DB.
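
To make the check concrete, here is an illustrative reconstruction (not a
verbatim copy of zun/api/servicegroup.py) of the "is this service up" logic;
note how the controller's clock is compared against a timestamp written from
the compute node, which is where pitfall #3 comes from:

    from oslo_utils import timeutils

    SERVICE_DOWN_TIME = 60  # stand-in for CONF.service_down_time


    def service_is_up(service):
        # Explicitly forced down via the API (possibility #1).
        if service.forced_down:
            return False
        # Last heartbeat written by the compute node's periodic task.
        last_heartbeat = service.updated_at or service.created_at
        # Compared against the controller's clock, hence the clock-sync pitfall.
        elapsed = timeutils.delta_seconds(last_heartbeat, timeutils.utcnow())
        return abs(elapsed) <= SERVICE_DOWN_TIME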

Above is my best guess. Please feel free to follow up with me or other
team members if you need further assistance with this issue.

Best regards,
Hongbin

On Wed, Jun 20, 2018 at 9:14 AM Usman Awais  wrote:

> Dear Zuners,
>
> I have installed RDO pike. I stopped openstack-nova-compute service on one
> of the hosts, and installed a zun-compute service. Although all the
> services seems to be running ok on both controller and compute but when I
> do
>
>  openstack appcontainer service list
>
> It gives me following
>
>
> +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+
> | Id | Host         | Binary      | State | Disabled | Disabled Reason | Updated At          | Availability Zone |
> +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+
> |  1 | node1.os.lab | zun-compute | down  | False    | None            | 2018-06-20 13:14:31 | nova              |
> +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+
>
> I have checked all ports in both directions they are open, including etcd
> ports and others. All services are running, only docker service has the
> warning message saying "failed to retrieve docker-runc version: exec:
> \"docker-runc\": executable file not found in $PATH". There is also a
> message at zun-compute
> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:161:
> SAWarning: The IN-predicate on "container.uuid" was invoked with an empty
> sequence. This results in a contradiction, which nonetheless can be
> expensive to evaluate.  Consider alternative strategies for improved
> performance."
>
> Please guide...
>
> Regards,
> Muhammad Usman Awais
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG

2018-06-21 Thread Tobias Rydberg

Hi folks,

Time for a new meeting for the Public Cloud WG. Agenda draft can be 
found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to 
add items to that list.


See you all at IRC 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev