[openstack-dev] [oslo][osc][cliff][tacker] New release of cmd2 break cliff and tacker client

2018-06-19 Thread super user
Hi everyone,

The new release of cmd2, 0.9.0, seems to break cliff and python-tackerclient.

The cmd2 library changed the way it handles parsing input commands. It now
uses a different library, which means the values passed to the commands are
no longer PyParsing objects and are instead Statement objects. These
objects do not have a “parsed” property, so the receiving code needs to
work with them differently.
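
For illustration, a minimal compatibility shim along these lines (a sketch,
not the actual cliff patch) can support both cmd2 generations:

# Sketch only, not the actual cliff patch. Old cmd2 handed commands a
# pyparsing result exposing a .parsed attribute; cmd2 0.9 hands them a
# Statement object (a str subclass with metadata such as .raw).
def raw_line(parsed_or_statement):
    if hasattr(parsed_or_statement, 'parsed'):
        # cmd2 < 0.9.0: pyparsing-based object
        return parsed_or_statement.parsed.raw
    # cmd2 >= 0.9.0: Statement object
    return parsed_or_statement.raw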

The patch https://review.openstack.org/571524 tries to fix this in the
places within cliff where it was failing in interactive mode.

Please consider reviewing this patch and cutting a new release of cliff so
that python-tackerclient can pass the py35 tests.

Thank you,
Nguyen Hai
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][panko][pike] elasticsearch integration

2018-06-19 Thread cristian.calin
Hello,


I'm trying to run ceilometer with panko publishers in the pike release, and
when I run ceilometer-agent-notification I get a trace complaining about
NoSuchOptError, but without the actual parameter that is missing (see trace
below).

I have configured panko.conf with the following:

[database]
connection = es://user:password@elasticsearch.service.consul.:9200
[storage]
es_ssl_enable = False
es_index_name = events


As far as I can tell from the debug log, the storage.es_ssl_enable and
storage.es_index_name parameters are not loaded; they don't show up in the
"cotyledon.oslo_config_glue" output, so I assume the trace relates to these
parameters. Has anybody else seen this error before?
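
For illustration, this is the generic oslo.config behavior that produces
NoSuchOptError when an option appears in a config file but was never
registered in code (a sketch, not panko code):

# Generic oslo.config sketch, not panko code.
from oslo_config import cfg

conf = cfg.ConfigOpts()
conf.register_group(cfg.OptGroup('storage'))
conf(args=[], default_config_files=['panko.conf'])

# The value is present in panko.conf, but if no code path called
# register_opts() for it, the lookup raises cfg.NoSuchOptError.
print(conf.storage.es_index_name)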

PS: sorry for CC'ing the dev list but I hope to reach the right audience
 TRACE 
{"asctime": "2018-06-20 05:49:09.405", "process": "59", "levelname": "DEBUG", "name": "panko.storage", "instance": {}, "message": "looking for 'es' driver in panko.storage"} {"funcName": "get_connection", "source": {"path": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", "lineno": "84"}}
{"asctime": "2018-06-20 05:49:10.436", "process": "61", "levelname": "DEBUG", "name": "panko.storage", "instance": {}, "message": "looking for 'es' driver in panko.storage"} {"funcName": "get_connection", "source": {"path": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", "lineno": "84"}}
{"asctime": "2018-06-20 05:49:11.409", "process": "63", "levelname": "DEBUG", "name": "panko.storage", "instance": {}, "message": "looking for 'es' driver in panko.storage"} {"funcName": "get_connection", "source": {"path": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", "lineno": "84"}}
{"asctime": "2018-06-20 05:49:18.467", "process": "57", "levelname": "DEBUG", "name": "panko.storage", "instance": {}, "message": "looking for 'es' driver in panko.storage"} {"funcName": "get_connection", "source": {"path": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", "lineno": "84"}}
{"asctime": "2018-06-20 05:49:18.468", "process": "57", "levelname": "ERROR", "name": "ceilometer.pipeline", "instance": {}, "message": "Unable to load publisher panko://"}: RetryError: RetryError[]
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >Traceback (most recent call last):
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/pipeline.py", line 419, in __init__
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    self.publishers.append(publisher_manager.get(p))
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/pipeline.py", line 713, in get
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    'ceilometer.%s.publisher' % self._purpose)
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/publisher/__init__.py", line 36, in get_publisher
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    return loaded_driver.driver(parse_result)
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/panko/publisher/database.py", line 35, in __init__
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    self.conn = storage.get_connection_from_config(conf)
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", line 73, in get_connection_from_config
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    return _inner()
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 171, in wrapped_f
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    return self.call(f, *args, **kw)
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 248, in call
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    start_time=start_time)
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 217, in iter
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    six.raise_from(RetryError(fut), fut.exception())
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >  File "/opt/ceilometer/lib/python2.7/site-packages/six.py", line 718, in raise_from
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >    raise value
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >RetryError: RetryError[]
2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >


[openstack-dev] [cyborg]Weekly Team Meeting 2018.06.20

2018-06-19 Thread Zhipeng Huang
Hi Team,

Weekly meeting as usual, starting at UTC 1400 in #openstack-cyborg.

Initial agenda:

1. Rocky task assignment confirmation
2. os-acc discussion

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Puppet debugging help?

2018-06-19 Thread Gilles Dubreuil



On 19/06/18 01:59, Alex Schultz wrote:

On Mon, Jun 18, 2018 at 9:13 AM, Lars Kellogg-Stedman  wrote:

Hey folks,

I'm trying to patch puppet-keystone to support multi-valued
configuration options (like trusted_dashboard).  I have a patch that
works, mostly, but I've run into a frustrating problem (frustrating
because it would seem to be orthogonal to my patches, which affect the
keystone_config provider and type).

During the initial deploy, running tripleo::profile::base::keystone
fails with:

   "Error: Could not set 'present' on ensure: undefined method `new'
   for nil:NilClass at
   /etc/puppet/modules/tripleo/manifests/profile/base/keystone.pp:274",


It's likely erroring in the keystone_domain provider.

https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L115-L122
or
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L155-L161

Providers are notoriously bad at their error messaging.   Usually this
error happens when we get a null back from the underlying command and
we're still trying to do something.  This could point to a
misconfiguration of keystone if it's not getting anything back.


Per Alex's comment, the keystone_domain provider is definitely involved.

The provider fails with "Could not set 'present' on ensure", and the
propagated error seems to be that the provider could not be set up because
some dependency came back empty.


$ irb
irb(main):001:0> nil.new
NoMethodError: undefined method `new' for nil:NilClass

The second pass worked because the missing dependency had been set up in
the meantime, so the provider creation was satisfied.


To investigate the underlying cause, you could use
notice("Value: ${variable}") in the manifest, or
Puppet.notice("Value: #{variable}") inside the Ruby provider.




The line in question is:

   70: if $step == 3 and $manage_domain {
   71:   if hiera('heat_engine_enabled', false) {
   72: # create these seperate and don't use ::heat::keystone::domain since
   73: # that class writes out the configs
   74: keystone_domain { $heat_admin_domain:
 ensure  => 'present',
 enabled => true
   }

The thing is, despite the error...it creates the keystone domain
*anyway*, and a subsequent run of the module will complete without any
errors.

I'm not entirely sure what the error is telling me, since *none* of
the puppet types or providers have a "new" method as far as I can see.
Any pointers you can offer would be appreciated.

Thanks!

--
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gil...@redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-19 Thread zijian1...@163.com
On Tue, Jun 19, 2018 at 11:15 AM, 李健  wrote:
>> So, my question is, why does the network service not use the
>> python2-neutronclient to get the client like other core projects, but
>> instead uses another separate project(openstacksdk)?

> There were multiple reasons to not use the neutron client lib for OSC and
> the SDK was good enough at the time to use it in spite of not being at
> a 1.0 release.  We have intended to migrate everything to use
> OpenStackSDK and eliminate OSC's use of the python-*client libraries
> completely.

Thanks for replying. Just to confirm: you mentioned "We have intended to
migrate everything to use OpenStackSDK". The current users of the
python-*client libraries are:
1. OSC
2. all services that need to interact with other services (e.g. nova
libraries: self.volume_api = volume_api or cinder.API())
Do you mean that both of the above will be migrated to use the OpenStack SDK?

> We are waiting on an SDK 1.0 release, it has stretched on
> for years longer than originally anticipated but the changes we have
> had to accommodate in the network commands in the past convinced me to
> wait until it was declared stable, even though it has been nearly
> stable for a while now.

>> My personal opinion, openstacksdk is a project that can be used
>> independently, it is mainly to provide a unified sdk for developers, so
>> there should be no interdependence between python-xxxclient and
>> openstacksdk, right?

> Correct, OpenStackSDK has no dependency on any of the python-*client
> libraries..  Its primary dependency is on keystoneauth for the core
> authentication logic, that was long ago pulled out of the keystone
> client package.

> dt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-25

2018-06-19 Thread Ed Leafe
On Jun 19, 2018, at 12:48 PM, Chris Dent  wrote:
> 
> Many of the things that get written will start off wrong but the
> only way they have a chance of becoming right is if they are written
> in the first place.

This.

Too many people are so afraid of doing anything that might turn out to be
the "wrong" thing that nothing gets done.


-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Zuul v3 integration status

2018-06-19 Thread linghucongsong
Hi Boden!

This is Song. I have discussed this with our PTL, Zhiyuan.

We both think this is not a simple piece of work to finish. We will plan it
as a blueprint, but we may not be able to finish it in the R release; we
promise to finish it in the next OpenStack release.





At 2018-06-19 10:13:47, "linghucongsong"  wrote:



Hi Boden! Thanks for reporting this bug.

We will talk about it in our meeting this Wednesday at 9:00 Beijing time.

If you have time, I would like you to join us in the openstack-meeting
channel.









At 2018-06-15 21:56:29, "Boden Russell"  wrote:
>Is there anyone who can speak to the status of tricircle's adoption of
>Zuul v3?
>
>As per [1] it doesn't seem like the project is setup properly for Zuul
>v3. Thus, it's difficult/impossible to land patches like [2] that
>require neutron/master + a depends on patch.
>
>Assuming tricircle is still being maintained, IMO we need to find a way
>to get it up to speed with zuul v3; otherwise some of our neutron
>efforts will be held up, or tricircle will fall behind with respect to
>neutron-lib adoption.
>
>Thanks
>
>
>[1] https://bugs.launchpad.net/tricircle/+bug/1776922
>[2] https://review.openstack.org/#/c/565879/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][neutron-release] feature/graphql branch rebase

2018-06-19 Thread Gilles Dubreuil
Could someone from the Neutron release group please rebase the
feature/graphql branch onto master HEAD?


Regards,
Gilles



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Notification update week 25

2018-06-19 Thread Yikun Jiang
I'd like to help with it. :)

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com


Matt Riedemann wrote on Wed, Jun 20, 2018 at 1:07 AM:

> On 6/18/2018 10:10 AM, Balázs Gibizer wrote:
> > * Introduce instance.lock and instance.unlock notifications
> >
> https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
>
> This hasn't been updated in quite a while. I wonder if someone else wants
> to pick that up now?
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI is down stop workflowing

2018-06-19 Thread Alex Schultz
On Tue, Jun 19, 2018 at 1:45 PM, Wesley Hayutin  wrote:
> Check and gate jobs look clear.
> More details on a bit.
>


So for a recap of the last 24 hours or so...

Mistral auth problems - https://bugs.launchpad.net/tripleo/+bug/1777541
 - caused by https://review.openstack.org/#/c/574878/
 - fixed by https://review.openstack.org/#/c/576336/

Undercloud install failure - https://bugs.launchpad.net/tripleo/+bug/1777616
- caused by https://review.openstack.org/#/c/570307/
- fixed by https://review.openstack.org/#/c/576428/

Keystone duplicate role - https://bugs.launchpad.net/tripleo/+bug/1777451
- caused by https://review.openstack.org/#/c/572243/
- fixed by https://review.openstack.org/#/c/576356 and
https://review.openstack.org/#/c/576393/

The puppet issues should be prevented in the future by adding tripleo
undercloud jobs back in to the appropriate modules, see
https://review.openstack.org/#/q/topic:tripleo-ci+(status:open)
I recommended the undercloud jobs because that gives us some basic
coverage and the instack-undercloud job still uses puppet without
containers.  We'll likely want to replace these jobs with standalone
versions at some point as that configuration gets more mature.

We've restored any patches that were abandoned in the gate and it
should be ok to recheck.

Thanks,
-Alex

> Thanks
>
> Sent from my mobile
>
> On Tue, Jun 19, 2018, 07:33 Felix Enrique Llorente Pastora
>  wrote:
>>
>> Hi,
>>
>>We have the following bugs with fixes that need to land to unblock
>> check/gate jobs:
>>
>>https://bugs.launchpad.net/tripleo/+bug/1777451
>>https://bugs.launchpad.net/tripleo/+bug/1777616
>>
>>You can check them out at #tripleo ooolpbot.
>>
>>Please stop workflowing temporarily until they get merged.
>>
>> BR.
>>
>> --
>> Quique Llorente
>>
>> Openstack TripleO CI
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-19 Thread Chris Friesen

On 06/19/2018 01:59 PM, Artom Lifshitz wrote:

Adding
claims support later on wouldn't change any on-the-wire messaging, it would
just make things work more robustly.


I'm not even sure about that. Assuming [1] has at least the right
idea, it looks like it's an either-or kind of thing: either we use
resource tracker claims and get the new instance NUMA topology that
way, or do what was in the spec and have the dest send it to the
source.


One way or another you need to calculate the new topology in 
ComputeManager.check_can_live_migrate_destination() and communicate that 
information back to the source so that it can be used in 
ComputeManager._do_live_migration().  The previous patches communicated the new 
topology as part of the instance.



That being said, I still think I'm in favor of choosing the
"easy" way out. For instance, [2] should fail because we can't access
the api db from the compute node.


I think you could use objects.ImageMeta.from_instance(instance) instead of 
request_spec.image.  The limits might be an issue.



So unless there's a simpler way,
using RT claims would involve changing the RPC to add parameters to
check_can_live_migration_destination, which, while not necessarily
bad, seems like useless complexity for a thing we know will get ripped
out.


I agree that it makes sense to get the "simple" option working first.  If we 
later choose to make it work "properly" I don't think it would require undoing 
too much.


Something to maybe factor in to what you're doing--I think there is currently a 
bug when migrating an instance with no numa_topology to a host with a different 
set of host CPUs usable for floating instances--I think it will assume it can 
still float over the same host CPUs as before.  Once we have the ability to 
re-write the instance XML prior to the live-migration it would be good to fix 
this.  I think this would require passing the set of available CPUs on the 
destination back to the host for use when recalculating the XML for the guest. 
(See the "if not guest_cpu_numa_config" case in 
LibvirtDriver._get_guest_numa_config() where "allowed_cpus" is specified, and 
LibvirtDriver._get_guest_config() where guest.cpuset is written.)


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-19 Thread Monty Taylor

On 06/19/2018 03:51 PM, Dean Troyer wrote:

On Tue, Jun 19, 2018 at 11:15 AM, 李健  wrote:

So, my question is, why does the network service not use the
python2-neutronclient to get the client like other core projects, but
instead uses another separate project(openstacksdk)?


There were multiple reasons to not use the neutron client lib for OSC and
the SDK was good enough at the time to use it in spite of not being at
a 1.0 release.  We have intended to migrate everything to use
OpenStackSDK and eliminate OSC's use of the python-*client libraries
completely.  We are waiting on an SDK 1.0 release, it has stretched on
for years longer than originally anticipated but the changes we have
had to accommodate in the network commands in the past convinced me to
wait until it was declared stable, even though it has been nearly
stable for a while now.


Soon. Really soon. I promise.


My personal opinion, openstacksdk is a project that can be used
independently, it is mainly to provide a unified sdk for developers, so
there should be no interdependence between python-xxxclient and
openstacksdk, right?


Correct, OpenStackSDK has no dependency on any of the python-*client
libraries.  Its primary dependency is on keystoneauth for the core
authentication logic, that was long ago pulled out of the keystone
client package.

dt




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-19 Thread Fei Long Wang
Hi there,

For people who may still be interested in this issue: I have proposed a
patch, see https://review.openstack.org/576029 and I have verified it with
Sonobuoy for both multi-master (3 master nodes) and single-master
clusters; all worked. Any comments will be appreciated. Thanks.


On 21/05/18 01:22, Sergey Filatov wrote:
> Hi!
> I’d like to initiate a discussion about this bug: [1].
> To resolve this issue we need to generate a secret cert and pass it to
> master nodes. We also need to store it somewhere to support scaling.
> This issue is specific for kubernetes drivers. Currently in magnum we
> have a general cert manager which is the same for all the drivers.
>
> What do you think about moving cert_manager logic into a
> driver-specific area?
> Having this common cert_manager logic forces us to generate client
> cert with “admin” and “system:masters” subject & organisation names [2], 
> which is really something that we need only for kubernetes drivers.
>
> [1] https://bugs.launchpad.net/magnum/+bug/1766546
> [2] 
> https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64
>
>
> ..Sergey Filatov
>
>
>
>> On 20 Apr 2018, at 20:57, Sergey Filatov wrote:
>>
>> Hello,
>>
>> I looked into the k8s drivers for magnum and I see that each api-server
>> on a master node generates its own service-account-key-file. This causes
>> issues with service accounts authenticating against the api-server (in
>> case the api-server endpoint moves).
>> As far as I understand, we should either have all api-server keys synced
>> across api-servers or pre-generate a single api-server key.
>>
>> What is the way for magnum to get over this issue?
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-19 Thread Dean Troyer
On Tue, Jun 19, 2018 at 11:15 AM, 李健  wrote:
> So, my question is, why does the network service not use the
> python2-neutronclient to get the client like other core projects, but
> instead uses another separate project(openstacksdk)?

There were multiple reasons to not use the neutron client lib for OSC and
the SDK was good enough at the time to use it in spite of not being at
a 1.0 release.  We have intended to migrate everything to use
OpenStackSDK and eliminate OSC's use of the python-*client libraries
completely.  We are waiting on an SDK 1.0 release, it has stretched on
for years longer than originally anticipated but the changes we have
had to accommodate in the network commands in the past convinced me to
wait until it was declared stable, even though it has been nearly
stable for a while now.

> My personal opinion, openstacksdk is a project that can be used
> independently, it is mainly to provide a unified sdk for developers, so
> there should be no interdependence between python-xxxclient and
> openstacksdk, right?

Correct, OpenStackSDK has no dependency on any of the python-*client
libraries.  Its primary dependency is on keystoneauth for the core
authentication logic, that was long ago pulled out of the keystone
client package.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][oot drivers] Putting a contract out on ComputeDriver.get_traits()

2018-06-19 Thread Eric Fried
All (but especially out-of-tree compute driver maintainers)-

ComputeDriver.get_traits() was introduced mere months ago [1] for
initial implementation by Ironic [2] mainly because the whole
update_provider_tree framework [3] wasn't fully baked yet.  Now that
update_provider_tree is a thing, I'm starting work to cut Ironic over to
using it [4].  Since, as of this writing, Ironic still has the only
in-tree implementation of get_traits [5], I'm planning to whack the
ComputeDriver interface [6] and its one callout in the resource tracker
[7] at the same time.

If you maintain an out-of-tree driver and this is going to break you
unbearably, scream now.  However, be warned that I will probably just
ask you to cut over to update_provider_tree.
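
For reference, a minimal sketch of what the cutover can look like for an
out-of-tree driver (the class name and the trait chosen here are
illustrative only, not anyone's actual driver):

# Sketch of reporting traits via update_provider_tree() instead of
# get_traits(); the trait and class names are illustrative only.
import os_traits
from nova.virt import driver

class MyOutOfTreeDriver(driver.ComputeDriver):

    def update_provider_tree(self, provider_tree, nodename, allocations=None):
        # ... update the node's inventory on provider_tree as usual ...
        provider_tree.update_traits(nodename, [os_traits.HW_CPU_X86_AVX2])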

Thanks,
efried

[1] https://review.openstack.org/#/c/532290/
[2]
https://review.openstack.org/#/q/topic:bp/ironic-driver-traits+(status:open+OR+status:merged)
[3]
http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/update-provider-tree.html
[4] https://review.openstack.org/#/c/576588/
[5]
https://github.com/openstack/nova/blob/0876b091db6f6f0d6795d5907d3d8314706729a7/nova/virt/ironic/driver.py#L737
[6]
https://github.com/openstack/nova/blob/ecaadf6d6d3c94706fdd1fb24676e3bd2370f9f7/nova/virt/driver.py#L886-L895
[7]
https://github.com/openstack/nova/blob/ecaadf6d6d3c94706fdd1fb24676e3bd2370f9f7/nova/compute/resource_tracker.py#L915-L926

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-19 Thread Artom Lifshitz
> Adding
> claims support later on wouldn't change any on-the-wire messaging, it would
> just make things work more robustly.

I'm not even sure about that. Assuming [1] has at least the right
idea, it looks like it's an either-or kind of thing: either we use
resource tracker claims and get the new instance NUMA topology that
way, or do what was in the spec and have the dest send it to the
source.

That being said, I still think I'm in favor of choosing the
"easy" way out. For instance, [2] should fail because we can't access
the api db from the compute node. So unless there's a simpler way,
using RT claims would involve changing the RPC to add parameters to
check_can_live_migration_destination, which, while not necessarily
bad, seems like useless complexity for a thing we know will get ripped
out.

[1] https://review.openstack.org/#/c/576222/
[2] https://review.openstack.org/#/c/576222/3/nova/compute/manager.py@5897

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI is down stop workflowing

2018-06-19 Thread Wesley Hayutin
Check and gate jobs look clear.
More details in a bit.

Thanks

Sent from my mobile

On Tue, Jun 19, 2018, 07:33 Felix Enrique Llorente Pastora <
ellor...@redhat.com> wrote:

> Hi,
>
>We have the following bugs with fixes that need to land to unblock
> check/gate jobs:
>
>https://bugs.launchpad.net/tripleo/+bug/1777451
>https://bugs.launchpad.net/tripleo/+bug/1777616
>
>You can check them out at #tripleo ooolpbot.
>
>Please stop workflowing temporarily until they get merged.
>
> BR.
>
> --
> Quique Llorente
>
> Openstack TripleO CI
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [placement] setting oslo config opts from environment

2018-06-19 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2018-06-19 20:06:15 +0100:
> 
> Every now and again I keep working on my experiments to containerize
> placement in a useful way [1]. At the moment I have it down to using
> a very small oslo_config-style conf file. I'd like to take it the
> rest of the way and have no file at all so that my container can be
> an immutable black box whose presence on the network and use of a
> database is all external to itself and I can add and remove them at
> will with very little effort and no mounts or file copies.
> 
> This is the way placement has been designed from the start. Internal
> to itself all it really knows is what database it wants to talk to,
> and how to talk to keystone for auth. That's what's in the conf
> file. We recently added support for policy, but it is policy-in-code
> and the defaults are okay, so no policy file required. Placement
> cannot create fully qualified URLs within itself. This is good and
> correct: it doesn't need to.
> 
> With that preamble out of the way, what I'd like to be able to do is
> make it so the placement service can start up and get its necessary
> configuration information from environment variables (which docker
> or k8s or whatever other orchestration you're using would set).
> There are plenty of ways to hack this into the existing code, but I
> would prefer to do it in a way that is useful and reusable by other
> people who want to do the same thing.
> 
> So I'd like people's feedback and ideas on what they think of the
> following ways, and any other ideas they have. Or if oslo_config
> already does this and I just missed it, please set me straight.
> 
> 1) I initially thought that the simplest way to do this would be to
> set a default when describing the options to do something like
> `default=os.environ.get('something', the_original_default)` but this
> has a bit of a flaw. It means that the conf.file wins over the
> environment and this is backwards from the expected [2] priority.
> 
> 2) When the service starts up, after it reads its own config, but
> before it actually does anything, it inspects the environment for
> a suite of variables which it uses to clobber the settings that came
> from files with the values in the environment.
> 
> 3) 2, but it happens in oslo_config instead of the service's own
> code, perhaps with a `from_env` kwarg when defining the opts. Maybe
> just for StrOpt, and maybe with some kind of automated env-naming
> scheme.
> 
> 4) Something else? What do you think?
> 
> Note that the main goal here is to avoid files, so solutions that
> are "read the environment variables to then write a custom config
> file" are not in this domain (although surely useful in other
> domains).
> 
> We had some IRC discussion about this [3] if you want a bit more
> context. Thanks for your interest and attention.
> 
> [1] https://anticdent.org/placement-container-playground-6.html
> [2] https://bugs.launchpad.net/oslo-incubator/+bug/1196368
> [3] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-06-19.log.html#t2018-06-19T18:30:12
> 

I think the TripleO folks were going to look at kubernetes configmaps
for passing configuration settings into containers. I don't know how far
that research went.

I certainly have no objection to doing the work in oslo.config. As
I described on IRC today, I think we would want to implement it
using the new driver feature we're working on this cycle, even if
the driver is enabled automatically so users don't have to turn it
on. We already special case command line options and the point of
the driver interface is to give us a way to extend the lookup logic
without having to add more special cases.

This might be worth a short spec, just so we can make sure we're
covering all of the details. For example:

We will need to consider what to do with configuration settings
more complicated than primitive data types like strings and numbers.
Lists can probably be expressed with a separator character. Perhaps
more complex types like dicts are just not supported. I would like
to remove them anyway, although that doesn't seem realistic now.

We also need to work out how variable names are constructed from option
and group names.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] [placement] setting oslo config opts from environment

2018-06-19 Thread Chris Dent


Every now and again I keep working on my experiments to containerize
placement in a useful way [1]. At the moment I have it down to using
a very small oslo_config-style conf file. I'd like to take it the
rest of the way and have no file at all so that my container can be
an immutable black box whose presence on the network and use of a
database is all external to itself and I can add and remove them at
will with very little effort and no mounts or file copies.

This is the way placement has been designed from the start. Internal
to itself all it really knows is what database it wants to talk to,
and how to talk to keystone for auth. That's what's in the conf
file. We recently added support for policy, but it is policy-in-code
and the defaults are okay, so no policy file required. Placement
cannot create fully qualified URLs within itself. This is good and
correct: it doesn't need to.

With that preamble out of the way, what I'd like to be able to do is
make it so the placement service can start up and get its necessary
configuration information from environment variables (which docker
or k8s or whatever other orchestration you're using would set).
There are plenty of ways to hack this into the existing code, but I
would prefer to do it in a way that is useful and reusable by other
people who want to do the same thing.

So I'd like people's feedback and ideas on what they think of the
following ways, and any other ideas they have. Or if oslo_config
already does this and I just missed it, please set me straight.

1) I initially thought that the simplest way to do this would be to
set a default when describing the options to do something like
`default=os.environ.get('something', the_original_default)` but this
has a bit of a flaw. It means that the conf.file wins over the
environment and this is backwards from the expected [2] priority.

2) When the service starts up, after it reads its own config, but
before it actually does anything, it inspects the environment for
a suite of variables which it uses to clobber the settings that came
from files with the values in the environment.

3) 2, but it happens in oslo_config instead of the service's own
code, perhaps with a `from_env` kwarg when defining the opts. Maybe
just for StrOpt, and maybe with some kind of automated env-naming
scheme.

4) Something else? What do you think?
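
To make option 2 concrete, here is a rough sketch (the OS_PLACEMENT_*
variable names are just an assumed scheme, not anything oslo_config
provides today):

# Rough sketch of option 2: clobber file-derived settings from the
# environment after parsing. The variable names are an assumed scheme.
import os

ENV_OVERRIDES = {
    'OS_PLACEMENT_DATABASE_CONNECTION': ('placement_database', 'connection'),
    'OS_PLACEMENT_AUTH_URL': ('keystone_authtoken', 'auth_url'),
}

def apply_env_overrides(conf):
    for env_name, (group, opt) in ENV_OVERRIDES.items():
        value = os.environ.get(env_name)
        if value is not None:
            # Environment beats config files, matching the expected
            # priority from [2].
            conf.set_override(opt, value, group=group)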

Note that the main goal here is to avoid files, so solutions that
are "read the environment variables to then write a custom config
file" are not in this domain (although surely useful in other
domains).

We had some IRC discussion about this [3] if you want a bit more
context. Thanks for your interest and attention.

[1] https://anticdent.org/placement-container-playground-6.html
[2] https://bugs.launchpad.net/oslo-incubator/+bug/1196368
[3] 
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-06-19.log.html#t2018-06-19T18:30:12


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] default and implied roles changes

2018-06-19 Thread Lance Bragstad
Hi all,

Keystone recently took a big step in implementing the default roles work
that's been a hot topic over the past year [0][1][2][3][4], and a big
piece in making RBAC more robust across OpenStack. We merged a patch [5]
that ensures the roles described in the specification [6] exist. This
was formerly a cross-project specification [7], but was rescoped to target
keystone directly in hopes of making it a future community goal [8].

If you've noticed issues with various CI infrastructure, it could be due
to the fact a couple new roles are being populated by keystone's
bootstrap command. For example, if your testing infrastructure creates a
role named 'Member' or 'member', you could see HTTP 409s since keystone
is now creating that role by default. You can safely remove code that
ensures that role exists, since keystone will now handle that for you.
These types of changes have been working their way into infrastructure
and deployment projects [9] this week.

If you're seeing something that isn't an HTTP 409 and suspect it is
related to these changes, come find us in #openstack-keystone. We'll be
around to answer questions about the changes in keystone and can assist
in straightening things out.


[0] https://etherpad.openstack.org/p/policy-queens-ptg Queens PTG Policy
Session
[1] https://etherpad.openstack.org/p/queens-PTG-keystone-policy-roadmap
Queens PTG Roadmap Outline
[2] https://etherpad.openstack.org/p/rbac-and-policy-rocky-ptg Rocky PTG
Policy Session
[3] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg Rocky PTG
Identity Integration Track
[4] https://etherpad.openstack.org/p/YVR-rocky-default-roles Rocky Forum
Default Roles Forum Session
[5] https://review.openstack.org/#/c/572243/
[6]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
[7] https://review.openstack.org/#/c/523973/
[8] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130208.html
[9]
https://review.openstack.org/#/q/(status:open+OR+status:merged)+branch:master+topic:fix-member



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-25

2018-06-19 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-25.html

Over the time that I've been observing the TC, there's been quite a
lot of indecision about how and when to exercise power. The rules
and regulations of OpenStack governance have it that the TC has
pretty broad powers in terms of allowing and disallowing projects to
be "official" and in terms of causing or preventing the merging of
_any_ code in _any_ of those official projects.

Unfortunately, the negative aspect of these powers makes them the
sort of powers that no one really wants to use. Instead the TC has a
history of, when it wants to pro-actively change things, using
techniques of gently nudging or trying to make obvious activities
that would be useful.  [OpenStack-wide
goals](https://governance.openstack.org/tc/goals/index.html) and the
[help most-needed
list](https://governance.openstack.org/tc/reference/help-most-needed.html)
are examples of this sort of thing.

Now that OpenStack is no longer sailing high on the hype seas,
resources are more scarce and some tactics and strategies are no
longer as useful as they once were. Some have expressed a desire for
the TC to provide a more active leadership role. One that allows the
community to adapt more quickly to changing times.

There's a delicate balance here that a few different conversations
in the past week have highlighted. [Last
Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T15:07:31),
a discussion about the (vast) volume of code getting review and
merged in the nova project led to some discussion on how to either
enforce or support a goal of decomposing nova into smaller,
less-coupled pieces. It was hard to find middle ground between
outright blocking code that didn't fit with that goal and believing
nothing could be done. Mixed in with that were valid concerns that
the TC [shouldn't be parenting people who are
adults](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T16:03:23)
and [is unable to be
effective](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T16:17:31).
(_Note: the context of those two linked statements is very
important, lest you be inclined to consider them out of context._)

And then
[today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T09:03:19),
some discussion about keeping the help wanted list up to date led to
thinking about ways to encourage reorganizing "[work around
objectives rather than code
boundaries](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T09:17:27)",
despite that being a very large cultural shift that may be very
difficult to make.

So what is the TC (or any vaguely powered governance group) to do?
We have some recent examples of the right thing: these are written
works—some completed, some in-progress—that lay out a vision of how
things could or should be, which community members can react and refer
to. As concrete documents they provide what amounts to an evolving
constitution of who we are or what we intend to be, one that people may
point to as a third-party authority they choose to accept,
reject or modify without the complexity of "so and so said…".

* [Written principles for peer
  
review](https://governance.openstack.org/tc/reference/principles.html#we-value-constructive-peer-review)
 and [clear 
documentation](https://docs.openstack.org/project-team-guide/review-the-openstack-way.html)
 of the same.
* Starting a [Technical Vision for
  2018](https://etherpad.openstack.org/p/tech-vision-2018).
* There should be more here. There will be more here.

Many of the things that get written will start off wrong but the
only way they have a chance of becoming right is if they are written
in the first place. Providing ideas allows people to say "that's
right" or "that's wrong" or "that's right, except...". Writing
provides a focal point for including many different people in the
generation and refinement of ideas and an archive of long-lived
meaning and shared belief. Beliefs are what we use to choose
between what matters and what does not.

As the community evolves, and in some ways shrinks while demands
remain high, we have to make it easier for people to find and
understand, with greater alacrity, what we, as a community, choose
to care about. We've done a pretty good job in the past talking
about things like the [four
opens](https://governance.openstack.org/tc/reference/opens.html),
but now we need to be more explicit about what we are making and how
we make it.
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova]Notification update week 25

2018-06-19 Thread Matt Riedemann

On 6/18/2018 10:10 AM, Balázs Gibizer wrote:

* Introduce instance.lock and instance.unlock notifications
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances


This hasn't been updated in quite a while. I wonder if someone else wants 
to pick that up now?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db] Configure maximum number of db connections

2018-06-19 Thread Zane Bitter

On 18/06/18 13:39, Jay Pipes wrote:

+openstack-dev since I believe this is an issue with the Heat source code.

On 06/18/2018 11:19 AM, Spyros Trigazis wrote:

Hello list,

I'm hitting quite easily this [1] exception with heat. The db server 
is configured to have 1000
max_connnections and 1000 max_user_connections and in the database 
section of heat

conf I have these values set:
max_pool_size = 22
max_overflow = 0
Full config attached.

I ended up with this configuration based on this formula:

num_heat_hosts=4
heat_api_workers=2
heat_api_cfn_workers=2
num_engine_workers=4
max_pool_size=22
max_overflow=0

num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers +
num_engine_workers + heat_api_cfn_workers) = 704

What I have noticed is that the number of connections I expected from the
above formula is not respected. Based on this formula each node (every node
runs heat-api, heat-api-cfn and heat-engine) should use up to 176
connections, but they even reach 400 connections.

Has anyone noticed a similar behavior?


Looking through the Heat code, I see that there are many methods in the 
/heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but 
never actually call session.close() [1] which means that the session 
will not be released back to the connection pool, which might be the 
reason why connections keep piling up.
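
(For illustration, the usual pattern for getting connections back to the
pool looks something like this; generic SQLAlchemy, not Heat code:)

# Generic SQLAlchemy sketch, not Heat code: close the session at the end
# of each unit of work so its connection returns to the pool.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('mysql+pymysql://heat:secret@db/heat',
                       pool_size=22, max_overflow=0)
Session = sessionmaker(bind=engine)

def update_resource_status(resource_id, status):
    session = Session()
    try:
        # ... load and update the resource row here ...
        session.commit()
    finally:
        session.close()  # returns the connection to the pool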


Thanks for looking at this Jay! Maybe I can try to explain our strategy 
(such as it is) here and you can tell us what we should be doing instead :)


Essentially we have one session per 'task', that is used for the 
duration of the task. Back in the day a 'task' was the processing of an 
entire stack from start to finish, but with our new distributed 
architecture it's much more granular - either it's just the initial 
setup of a change to a stack, or it's the processing of a single 
resource. (This was a major design change, and it's quite possible that 
the assumptions we made at the beginning - and tbh I don't think we 
really knew what we were doing then either - are no longer valid.)


So, for example, Heat sees an RPC request come in to update a resource, 
it starts a greenthread to handle it, that creates a database session 
that is stored in the request context. At the beginning of the request 
we load the data needed and update the status of the resource in the DB 
to IN_PROGRESS. Then we do whatever we need to do to update the resource 
(mostly this doesn't involve writing to the DB, but there are 
exceptions). Then we update the status to COMPLETE/FAILED, do some 
housekeeping stuff in the DB and send out RPC messages for any other 
work that needs to be done. IIUC that all uses the same session, 
although I don't know if it gets opened and closed multiple times in the 
process, and presumably the same object cache.


Crucially, we *don't* have a way to retry if we're unable to connect to 
the database in any of those operations. If we can't connect at the 
beginning that'd be manageable, because we could (but currently don't) 
just send out a copy of the incoming RPC message to try again later. But 
once we've changed something about the resource, we *must* record that 
in the DB or Bad Stuff(TM) will happen.


The way we handled that, as Spyros pointed out, was to adjust the size 
of the overflow pool to match the size of the greenthread pool. This 
ensures that every 'task' is able to connect to the DB, because  we 
won't take the message out of the RPC queue until there is a 
greenthread, and by extension a DB connection, available. This is 
infinitely preferable to finding out there are no connections available 
after you've already accepted the message (and oslo_messaging has an 
annoying 'feature' of acknowledging the message before it has even 
passed it to the application). It means stuff that we aren't able to 
handle yet queues up in the message queue, where it belongs, instead of 
in memory.


History: https://bugs.launchpad.net/heat/+bug/1491185

Unfortunately now you have to tune the size of the threadpool to trade 
off not utilising too little of your CPU against not opening too many DB 
connections. Nobody knows what the 'correct' tradeoff is, and even if we 
did Heat can't really tune it automatically by default because at 
startup it only knows the number of worker processes on the local node; 
it can't tell how many other nodes are [going to be] running and opening 
connections to the same database. Plus the number of allowed DB 
connections becomes the bottleneck to how much you can scale out the 
service horizontally.


What is the canonical way of handling this kind of situation? Retry any 
DB operation where we can't get a connection, and close the session 
after every transaction?


Not sure if there's any setting in Heat that will fix this problem. 
Disabling connection pooling will likely not help since connections are 
not properly being closed and returned to the connection pool to begin 
with.


Best,
-jay

[1] Heat app

Re: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-19 Thread Artem Goncharov
Hi,

No, not quite. The idea is to unify the CLI for all projects inside
python-openstackclient and obsolete all the individual python-XXXclients.
This can be achieved by using openstacksdk. The network module was just the
first in line, where progress got stuck for a bit.

Regards,


On Tue, Jun 19, 2018 at 6:15 PM, 李健  wrote:

> Hello everyone
> ---
> CentOS Linux release 7.3.1611
> OpenStack Version: Newton
> # rpm -qa | egrep "(openstacksdk|openstackclient)"
> python-openstackclient-3.2.1-1.el7.noarch
> python2-openstacksdk-0.9.5-1.el7.noarch
> 
> The openstack CLI is implemented by python-openstackclient.
> In the python-openstackclient package, the function make_client(instance)
> is used to obtain the client for each service (openstackclient/xxx/client.py).
> I noticed that almost all core services import their own
> python2-xxxclient to get the client, for example:
> image/client.py --> import glanceclient.v2.client.Client
> compute/client.py --> import novaclient.client
> volume/client.py --> import cinderclient.v2.client.Client
>
> But only the network service imports openstacksdk to get the client, as
> follows:
> network/client.py --> import openstack.connection.Connection
>
> So, my question is, why does the network service not use the
> python2-neutronclient to get the client like other core projects, but
> instead uses another separate project(openstacksdk)?
> My personal opinion, openstacksdk is a project that can be used
> independently, it is mainly to provide a unified sdk for developers, so
> there should be no interdependence between python-xxxclient and
> openstacksdk, right?
>
> For any help, thks
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client

2018-06-19 Thread 李健
Hello everyone
---
CentOS Linux release 7.3.1611
OpenStack Version: Newton
# rpm -qa | egrep "(openstacksdk|openstackclient)"
python-openstackclient-3.2.1-1.el7.noarch
python2-openstacksdk-0.9.5-1.el7.noarch

The openstack CLI is implemented by python-openstackclient.
In the python-openstackclient package, the function make_client(instance) is
used to obtain the client for each service (openstackclient/xxx/client.py). I
noticed that almost all core services import their own python2-xxxclient to
get the client, for example:
image/client.py --> import glanceclient.v2.client.Client
compute/client.py --> import novaclient.client
volume/client.py --> import cinderclient.v2.client.Client


But only the network service imports openstacksdk to get the client, as
follows:
network/client.py --> import openstack.connection.Connection
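
(For illustration, a rough sketch of what an SDK-based make_client looks
like; this is not the actual packaged source, and the ClientManager
attribute names are assumptions:)

# Rough sketch, not the actual OSC source; attribute names are assumptions.
from openstack import connection

def make_client(instance):
    # 'instance' is OSC's ClientManager; reuse its keystoneauth session
    # instead of instantiating a python-*client object.
    return connection.Connection(session=instance.session)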


So, my question is: why does the network service not use
python2-neutronclient to get the client like the other core projects, but
instead use another separate project (openstacksdk)?
My personal opinion: openstacksdk is a project that can be used independently;
it mainly provides a unified SDK for developers, so there should be no
interdependence between python-xxxclient and openstacksdk, right?


Thanks in advance for any help.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Alex Schultz
On Tue, Jun 19, 2018 at 9:17 AM, Jiří Stránský  wrote:
> On 19.6.2018 16:29, Lars Kellogg-Stedman wrote:
>>
>> On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:
>>>
>>> Is this the same issue Carlos is trying to fix via
>>> https://review.openstack.org/#/c/494517/ ?
>>
>>
>> That solves part of the problem, but it's not a complete solution.
>> In particular, it doesn't solve the problem that bit me: if you're
>> changing puppet providers (e.g., replacing
>> provider/keystone_config/ini_setting.rb with
>> provider/keystone_config/openstackconfig.rb), you still have the old
>> provider sitting around causing problems because unpacking a tarball
>> only *adds* files.
>>
>>> Yeah I think we've never seen this because normally the
>>> /etc/puppet/modules tarball overwrites the symlink, effectively giving
>>> you a new tree (the first time round at least).
>>
>>
>> But it doesn't, and that's the unexpected problem: if you replace the
>> /etc/puppet/modules/keystone symlink with a directory, then
>> /usr/share/openstack-puppet/modules/keystone is still there, and while
>> the manifests won't be used, the contents of the lib/ directory will
>> still be active.
>>
>>> Probably we could add something to the script to enable a forced
>>> cleanup each update:
>>>
>>>
>>> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9
>>
>>
>> We could:
>>
>> (a) unpack the replacement puppet modules into a temporary location,
>>then
>>
>> (b) for each module; rm -rf the target directory and then copy it into
>>place
>>
>> But! This would require deploy_artifacts.sh to know that it was
>> unpacking puppet modules rather than a generic tarball.
>>
>>> This would have to be optional, so we could add something like a
>>> DeployArtifactsCleanupDirs parameter perhaps?
>>
>>
>> If we went with the above, sure.
>>
>>> One more thought which just occurred to me - we could add support for
>>> a git checkout/pull to the script?
>>
>>
>> Reiterating our conversation in #tripleo, I think rather than adding a
>> bunch of specific functionality to the DeployArtifacts feature, it
>> would make more sense to add the ability to include some sort of
>> user-defined pre/post tasks, either as shell scripts or as ansible
>> playbooks or something.
>>
>> On the other hand, I like your suggestion of just ditching
>> DeployArtifacts for a new composable service that defines
>> host_prep_tasks (or re-implementing DeployArtifacts as a composable
>> service), so I'm going to look at that as a possible alternative to
>> what I'm currently doing.
>>
>
> For the puppet modules specifically, we might also add another
> directory+mount into the docker-puppet container, which would be blank by
> default (unlike the existing, already populated /etc/puppet and
> /usr/share/openstack-puppet/modules). And we'd put that directory at the
> very start of modulepath. Then I *think* puppet would use a particular
> module from that dir *only*, not merge the contents with the rest of
> modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter
> (didn't test it though). That should get us around the "tgz only adds files"
> problem without any rm -rf.
>

So the described problem only affects puppet facts and providers, as
they all get loaded from the entire module path. Normal puppet classes
are less conflict-prone because puppet takes the first one it finds
and stops.

> The above is somewhat of an orthogonal suggestion to the composable service
> approach; they would work well alongside, I think. (And +1 on
> "DeployArtifacts as composable service" as something worth investigating /
> implementing.)
>

-1 to more services. We take a Heat time penalty for each new
composable service we add, and in this case I don't think this should
be a service itself. It would be better suited as a host prep task
than as a defined service. Providing a way for users to define
external host prep tasks might make more sense.

Thanks,
-Alex

> Jirka
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Jiří Stránský

On 19.6.2018 16:29, Lars Kellogg-Stedman wrote:

On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:

Is this the same issue Carlos is trying to fix via
https://review.openstack.org/#/c/494517/ ?


That solves part of the problem, but it's not a complete solution.
In particular, it doesn't solve the problem that bit me: if you're
changing puppet providers (e.g., replacing
provider/keystone_config/ini_setting.rb with
provider/keystone_config/openstackconfig.rb), you still have the old
provider sitting around causing problems because unpacking a tarball
only *adds* files.


Yeah I think we've never seen this because normally the
/etc/puppet/modules tarball overwrites the symlink, effectively giving
you a new tree (the first time round at least).


But it doesn't, and that's the unexpected problem: if you replace the
/etc/puppet/modules/keystone symlink with a directory, then
/usr/share/openstack-puppet/modules/keystone is still there, and while
the manifests won't be used, the contents of the lib/ directory will
still be active.


Probably we could add something to the script to enable a forced
cleanup each update:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9


We could:

(a) unpack the replacement puppet modules into a temporary location,
   then

(b) for each module; rm -rf the target directory and then copy it into
   place

But! This would require deploy_artifacts.sh to know that it was
unpacking puppet modules rather than a generic tarball.


This would have to be optional, so we could add something like a
DeployArtifactsCleanupDirs parameter perhaps?


If we went with the above, sure.


One more thought which just occurred to me - we could add support for
a git checkout/pull to the script?


Reiterating our conversation in #tripleo, I think rather than adding a
bunch of specific functionality to the DeployArtifacts feature, it
would make more sense to add the ability to include some sort of
user-defined pre/post tasks, either as shell scripts or as ansible
playbooks or something.

On the other hand, I like your suggestion of just ditching
DeployArtifacts for a new composable service that defines
host_prep_tasks (or re-implementing DeployArtifacts as a composable
service), so I'm going to look at that as a possible alternative to
what I'm currently doing.



For the puppet modules specifically, we might also add another 
directory+mount into the docker-puppet container, which would be blank 
by default (unlike the existing, already populated /etc/puppet and 
/usr/share/openstack-puppet/modules). And we'd put that directory at the 
very start of modulepath. Then I *think* puppet would use a particular 
module from that dir *only*, not merge the contents with the rest of 
modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter 
(didn't test it though). That should get us around the "tgz only adds 
files" problem without any rm -rf.


The above is somewhat of an orthogonal suggestion to the composable 
service approach; they would work well alongside, I think. (And +1 on 
"DeployArtifacts as composable service" as something worth investigating 
/ implementing.)


Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-19 Thread Alex Schultz
On Wed, Jun 13, 2018 at 9:50 AM, Emilien Macchi  wrote:
> Alan Bishop has been highly involved in the Storage backends integration in
> TripleO and Puppet modules, always here to update with new features, fix
> (nasty and untestable third-party backends) bugs and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also well knowledgeable of how TripleO works and how containers are
> integrated, I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
>

Since there are no objections, I have added Alan to the cores list.

Thanks,
-Alex

> Please vote -1/+1,
> Thanks!
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Lars Kellogg-Stedman
On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:
> Is this the same issue Carlos is trying to fix via
> https://review.openstack.org/#/c/494517/ ?

That solves part of the problem, but it's not a complete solution.
In particular, it doesn't solve the problem that bit me: if you're
changing puppet providers (e.g., replacing
provider/keystone_config/ini_setting.rb with
provider/keystone_config/openstackconfig.rb), you still have the old
provider sitting around causing problems because unpacking a tarball
only *adds* files.

> Yeah I think we've never seen this because normally the
> /etc/puppet/modules tarball overwrites the symlink, effectively giving
> you a new tree (the first time round at least).

But it doesn't, and that's the unexpected problem: if you replace the
/etc/puppet/modules/keystone symlink with a directory, then
/usr/share/openstack-puppet/modules/keystone is still there, and while
the manifests won't be used, the contents of the lib/ directory will
still be active.

> Probably we could add something to the script to enable a forced
> cleanup each update:
> 
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9

We could:

(a) unpack the replacement puppet modules into a temporary location,
  then

(b) for each module; rm -rf the target directory and then copy it into
  place

But! This would require deploy_artifacts.sh to know that it was
unpacking puppet modules rather than a generic tarball.
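
For concreteness, a minimal sketch of (a) and (b) in Python, assuming the
tarball contains one top-level directory per puppet module (a hypothetical
helper, not what deploy-artifacts.sh does today):

import os
import shutil
import tarfile
import tempfile

def replace_puppet_modules(tarball, moduledir='/etc/puppet/modules'):
    staging = tempfile.mkdtemp()
    try:
        # (a) unpack the replacement modules into a temporary location
        with tarfile.open(tarball) as tar:
            tar.extractall(staging)
        # (b) remove each target dir (or symlink) first so stale files
        # like old providers can't linger, then copy the new tree in
        for module in os.listdir(staging):
            target = os.path.join(moduledir, module)
            if os.path.islink(target):
                os.unlink(target)
            elif os.path.isdir(target):
                shutil.rmtree(target)
            shutil.copytree(os.path.join(staging, module), target)
    finally:
        shutil.rmtree(staging)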

> This would have to be optional, so we could add something like a
> DeployArtifactsCleanupDirs parameter perhaps?

If we went with the above, sure.

> One more thought which just occurred to me - we could add support for
> a git checkout/pull to the script?

Reiterating our conversation in #tripleo, I think rather than adding a
bunch of specific functionality to the DeployArtifacts feature, it
would make more sense to add the ability to include some sort of
user-defined pre/post tasks, either as shell scripts or as ansible
playbooks or something.

On the other hand, I like your suggestion of just ditching
DeployArtifacts for a new composable service that defines
host_prep_tasks (or re-implementing DeployArtifacts as a composable
service), so I'm going to look at that as a possible alternative to
what I'm currently doing.

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI is down stop workflowing

2018-06-19 Thread Felix Enrique Llorente Pastora
Hi,

   We have the following bugs with fixes that need to land to unblock
check/gate jobs:

   https://bugs.launchpad.net/tripleo/+bug/1777451
   https://bugs.launchpad.net/tripleo/+bug/1777616

   You can check them out via ooolpbot in #tripleo.

   Please stop workflowing temporarily until they get merged.

BR.

-- 
Quique Llorente

Openstack TripleO CI
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] bug deputy report

2018-06-19 Thread Slawomir Kaplonski
Hi,

Last week I was on bug deputy duty and I basically forgot about it, so I went 
through last week's bugs yesterday. Below is a summary of those bugs:

Neutron-vpnaas bug:
* libreswan ipsec driver doesn't work with libreswan versions 3.23+ - 
https://bugs.launchpad.net/neutron/+bug/1776840 

CI related bugs:
* Critical bug for stable/queens 
https://bugs.launchpad.net/neutron/+bug/1777190 - should be fixed already,
* TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack fails - 
https://bugs.launchpad.net/neutron/+bug/1776459 - I am checking logs for that; 
there are some related patches proposed, but for now I don’t know exactly 
why this happens,
* neutron-rally job failing for stable/pike and stable/ocata - 
https://bugs.launchpad.net/neutron/+bug/1777506 - I am debugging why it happens 
like that,

DVR related bugs:
* DVR: Self recover from the loss of 'fg' ports in FIP Namespace - 
https://bugs.launchpad.net/neutron/+bug/1776984  - Swami is already working on 
it,
* DVR: FloatingIP create throws an error if the L3 agent is not running in the 
given host - https://bugs.launchpad.net/neutron/+bug/1776566 - Swami is already 
working on this one too,

DB related issues:
* Database connection was found disconnected; reconnecting: DBConnectionError - 
https://bugs.launchpad.net/neutron/+bug/1776896 - bug marked as Incomplete but 
IMO it should be closed as it doesn’t look like a Neutron issue,

QoS issues:
* Inaccurate L3 QoS bandwidth - https://bugs.launchpad.net/neutron/+bug/1777598 
- reported today, fix already proposed

Docs bugs:
* [Doc] [FWaaS] Configuration of FWaaS v1 is confused - 
https://bugs.launchpad.net/neutron/+bug/1777547 - already in progress,

Already fixed issues reported this week:
* neutron-netns-cleanup explodes when trying to delete an OVS internal port - 
https://bugs.launchpad.net/neutron/+bug/1776469
* neutron-netns-cleanup does not configure privsep correctly - 
https://bugs.launchpad.net/neutron/+bug/1776468,
* DVR scheduling checks wrong port binding profile for host in live-migration - 
https://bugs.launchpad.net/neutron/+bug/1776255

New RFE bugs:
* support vlan transparent in neutron network - 
https://bugs.launchpad.net/neutron/+bug/1777585 


— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-19 Thread Steven Hardy
On Wed, Jun 13, 2018 at 4:50 PM, Emilien Macchi  wrote:
> Alan Bishop has been highly involved in the Storage backends integration in
> TripleO and Puppet modules, always here to update with new features, fix
> (nasty and untestable third-party backends) bugs and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also well knowledgeable of how TripleO works and how containers are
> integrated, I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
>
> Please vote -1/+1,

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team

2018-06-19 Thread Dougal Matthews
On 19 June 2018 at 10:27, Renat Akhmerov  wrote:

> Hi,
>
> I’d like to promote Vitalii Solodilov to the core team of Mistral. In my
> opinion, Vitalii is a very talented engineer who has been demonstrating it
> by providing very high quality code and reviews in the last 6-7 months.
> He’s one of the people who doesn’t hesitate to take responsibility for
> solving challenging technical tasks. It’s been a great pleasure to work
> with Vitalii and I hope he will keep up the great work.
>
> Core members, please vote.
>

+1 from me. Vitalii has been one of the most active reviewers and code
contributors through Queens and Rocky.


> Vitalii’s statistics:
> http://stackalytics.com/?module=mistral-group&metric=marks&user_id=mcdoker18
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team

2018-06-19 Thread András Kövi
+1 well deserved!

Renat Akhmerov wrote (on 19 June 2018 at 11:28):

> Hi,
>
> I’d like to promote Vitalii Solodilov to the core team of Mistral. In my
> opinion, Vitalii is a very talented engineer who has been demonstrating it
> by providing very high quality code and reviews in the last 6-7 months.
> He’s one of the people who doesn’t hesitate to take responsibility for
> solving challenging technical tasks. It’s been a great pleasure to work
> with Vitalii and I hope he will keep up the great work.
>
> Core members, please vote.
>
> Vitalii’s statistics:
> http://stackalytics.com/?module=mistral-group&metric=marks&user_id=mcdoker18
>
> Thanks
>
> Renat Akhmerov
> @Nokia
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team

2018-06-19 Thread Renat Akhmerov
Hi,

I’d like to promote Vitalii Solodilov to the core team of Mistral. In my 
opinion, Vitalii is a very talented engineer who has been demonstrating it by 
providing very high quality code and reviews in the last 6-7 months. He’s one 
of the people who doesn’t hesitate to take responsibility for solving 
challenging technical tasks. It’s been a great pleasure to work with Vitalii 
and I hope he will keep up the great work.

Core members, please vote.

Vitalii’s statistics: 
http://stackalytics.com/?module=mistral-group&metric=marks&user_id=mcdoker18

Thanks

Renat Akhmerov
@Nokia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db][magnum] Configure maximum number of db connections

2018-06-19 Thread Spyros Trigazis
Hello lists,

With the heat team's help I figured it out. Thanks Jay for looking into it.

The issue is coming from [1], where max_overflow is raised to
executor_thread_pool_size whenever it is set to a lower value, to address
another issue. In my case, I had plenty of RAM and CPU so I could
push for more threads, but I was "short" on db connections. The formula to
calculate the maximum number of connections looks like this:
num_heat_hosts = 4
heat_api_workers = 2
heat_api_cfn_workers = 2
num_engine_workers = 4
executor_thread_pool_size = 22
max_pool_size = 4
max_overflow = executor_thread_pool_size

num_heat_hosts * (max_pool_size + max_overflow)
  * (heat_api_workers + num_engine_workers + heat_api_cfn_workers) = 832
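
The same formula as a small runnable helper, in case it helps others size
their deployments (just a sanity check, not an official heat tool):

def heat_max_db_connections(hosts, api_workers, cfn_workers,
                            engine_workers, pool_size, overflow):
    # Upper bound on connections heat can open against the db server.
    workers = api_workers + cfn_workers + engine_workers
    return hosts * (pool_size + overflow) * workers

# Our deployment: max_overflow follows executor_thread_pool_size (22).
assert heat_max_db_connections(4, 2, 2, 4, 4, 22) == 832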

And a note for medium to large magnum deployments: see the options we
have changed in heat conf and adjust them according to your needs.
The db configuration described here, together with changes we discovered
in a previous scale test, can help to run a stable magnum and heat service.

For large stacks, or for projects with many stacks, you need to change
the following values, or better, tune them according to your needs.

[Default]
executor_thread_pool_size = 22
max_resources_per_stack = -1
max_stacks_per_tenant = 1
action_retry_limit = 10
client_retry_limit = 10
engine_life_check_timeout = 600
max_template_size = 5242880
rpc_poll_timeout = 600
rpc_response_timeout = 600
num_engine_workers = 4

[database]
max_pool_size = 4
max_overflow = 22

[heat_api]
workers = 2

[heat_api_cfn]
workers = 2

Cheers,
Spyros

ps We will update the magnum docs as well

[1]
http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/service.py#n375


On Mon, 18 Jun 2018 at 19:39, Jay Pipes  wrote:

> +openstack-dev since I believe this is an issue with the Heat source code.
>
> On 06/18/2018 11:19 AM, Spyros Trigazis wrote:
> > Hello list,
> >
> > I'm hitting this [1] exception with heat quite easily. The db server is
> > configured to have 1000
> > max_connections and 1000 max_user_connections and in the database
> > section of heat
> > conf I have these values set:
> > max_pool_size = 22
> > max_overflow = 0
> > Full config attached.
> >
> > I ended up with this configuration based on this formula:
> > num_heat_hosts=4
> > heat_api_workers=2
> > heat_api_cfn_workers=2
> > num_engine_workers=4
> > max_pool_size=22
> > max_overflow=0
> > num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers +
> > num_engine_workers + heat_api_cfn_workers)
> > 704
> >
> > What I have noticed is that the number of connections I expected with
> > the above formula is not respected.
> > Based on this formula each node (every node runs the heat-api,
> > heat-api-cfn and heat-engine) should
> > use up to 176 connections but they even reach 400 connections.
> >
> > Has anyone noticed a similar behavior?
>
> Looking through the Heat code, I see that there are many methods in the
> /heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but
> never actually call session.close() [1] which means that the session
> will not be released back to the connection pool, which might be the
> reason why connections keep piling up.
>
> Not sure if there's any setting in Heat that will fix this problem.
> Disabling connection pooling will likely not help since connections are
> not properly being closed and returned to the connection pool to begin
> with.
>
> Best,
> -jay
>
> [1] Heat apparently doesn't use the oslo.db enginefacade transaction
> context managers either, which would help with this problem since the
> transaction context manager would take responsibility for calling
> session.flush()/close() appropriately.
>
>
> https://github.com/openstack/oslo.db/blob/43af1cf08372006aa46d836ec45482dd4b5b5349/oslo_db/sqlalchemy/enginefacade.py#L626
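
For illustration, a minimal sketch of that enginefacade pattern (a toy
model, not actual heat code; it assumes an oslo.context RequestContext is
passed in):

from oslo_db.sqlalchemy import enginefacade, models
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Event(Base, models.ModelBase):
    __tablename__ = 'event'
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

@enginefacade.reader
def event_get(context, event_id):
    # The decorator supplies context.session for the duration of the
    # call and returns the connection to the pool afterwards -- no
    # manual session.close() needed.
    return context.session.query(Event).filter_by(id=event_id).first()

@enginefacade.writer
def event_create(context, values):
    event = Event(**values)
    context.session.add(event)
    return event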
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cloudkitty] configuration, deployment or packaging issue?

2018-06-19 Thread Tobias Urdin
Hello,

Thanks Alex, I should probably improve my search-fu.

I assume that commit is in the RPM packages then, so we need to ship a
metrics.yaml (which is kind of opinionated, unless CloudKitty supplies a
default one) and set the fetcher in the config file.


Perhaps somebody can confirm the above.

Best regards


On 06/19/2018 12:35 AM, Alex Schultz wrote:
> On Mon, Jun 18, 2018 at 4:08 PM, Tobias Urdin  
> wrote:
>> Hello CloudKitty team,
>>
>>
>> I'm having an issue with this review not going through and being stuck after
>> staring at it for a while now [1].
>>
>> Is there any configuration[2] issue that are causing the error[3]? Or is the
>> package broken?
>>
> Likely due to https://review.openstack.org/#/c/538256/ which appears
> to change the metrics.yaml format. It doesn't look backwards
> compatible so the puppet module probably needs updating.
>
>> Thanks for helping out!
>>
>> Best regards
>>
>>
>> [1] https://review.openstack.org/#/c/569641/
>>
>> [2]
>> http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/etc/cloudkitty/
>>
>> [3]
>> http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/cloudkitty/processor.txt.gz
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova API meeting schedule

2018-06-19 Thread Ghanshyam Mann
Thanks for the response.

We will go with option 2, which is to cancel the meeting and continue on the
nova channel. We will hold the API office hour every Wednesday (27th June
onwards) at 06:00 UTC on the #openstack-nova channel.

I pushed a patch to free the current API meeting slot [1] and will update
the API meeting wiki page with the new timing and channel info.

[1] https://review.openstack.org/#/c/576398/ 

-gmann 


  On Mon, 11 Jun 2018 21:40:09 +0900 Chris Dent wrote:
 > On Mon, 11 Jun 2018, Ghanshyam wrote: 
 >  
 > > 2. If no member from USA/Europe TZ then, myself and Alex will 
 > > conduct the API meeting as office hour on Nova channel during our 
 > > day time (something between UTC+1 to  UTC + 9). There is not much 
 > > activity on Nova channel during our TZ so it will be ok to use 
 > > Nova channel.  In this case, we will release the current occupied 
 > > meeting channel. 
 >  
 > I think this is the better option since it works well for the people 
 > who are already actively interested. If that situation changes, you 
 > can always do something different. And if you do some kind of 
 > summary of anything important at the meeting (whenever the time) 
 > then people who can't attend can be in the loop. 
 >  
 > I was trying to attend the API meeting for a while (back when it was 
 > happening) but had to cut it out as it's impossible to pay attention 
 > to everything and something had to give. 
 >  
 > --  
 > Chris Dent   ٩◔̯◔۶   https://anticdent.org/ 
 > freenode: cdent tw: @anticdent
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev