Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread IWAMOTO Toshihiro
At Wed, 1 Feb 2017 16:24:54 -0800,
Armando M. wrote:
> 
> Hi,
> 
> [TL;DR]: OpenStack services have steadily increased their memory
> footprints. We need a concerted way to address the oom-kills experienced in
> the openstack gate, as we may have reached a ceiling.
> 
> Now the longer version:
> 
> 
> We have been experiencing some instability in the gate lately due to a
> number of reasons. When everything adds up, this means it's rather
> difficult to merge anything and knowing we're in feature freeze, that adds
> to stress. One culprit was identified to be [1].
> 
> We initially tried to increase the swappiness, but that didn't seem to
> help. Then we have looked at the resident memory in use. When going back
> over the past three releases we have noticed that the aggregated memory
> footprint of some openstack projects has grown steadily. We have the
> following:

I'm not sure if it is due to memory shortage, but VMs running CI jobs are
experiencing sluggishness, which may be the cause of the ovs-related
timeouts[1]. Tempest jobs run dstat to collect system info every
second. When the timeouts[1] happen, dstat output is also often missing
for several seconds, which means a VM is having trouble scheduling
both the ovs-related processes and the dstat process.
Those ovs timeouts affect every project and happen much more often than
the oom-kills.

Some details are on the lp bug page[2].

The correlation between such sluggishness and VM paging activity is not
clear. I wonder if the VM hosts are under high load or if increasing VM
memory would help. Those VMs have no free RAM for the file cache, so file
pages are read again and again, leading to extra I/O load on the VM hosts
and adversely affecting other VMs on the same host.
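
In case it is useful, a rough sketch of one way to check a dstat log for
such gaps (assuming dstat was run with an epoch timestamp in the first CSV
column; the exact gate log format may differ):

    import csv

    def find_gaps(path, threshold=2.0):
        """Report points where consecutive dstat samples are more than
        `threshold` seconds apart (samples are expected every second)."""
        last = None
        with open(path) as f:
            for row in csv.reader(f):
                try:
                    now = float(row[0])
                except (ValueError, IndexError):
                    continue  # skip dstat's header lines
                if last is not None and now - last > threshold:
                    print('gap of %.1fs ending at %.0f' % (now - last, now))
                last = now

    find_gaps('dstat-csv.log')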


[1] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22no%20response%20to%20inactivity%20probe%5C%22
[2] https://bugs.launchpad.net/neutron/+bug/1627106/comments/14

--
IWAMOTO Toshihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Joshua Harlow

Has anyone tried:

https://github.com/mgedmin/dozer/blob/master/dozer/leak.py#L72

This piece of middleware creates some nice graphs (using PIL) that may 
help identify which areas are using what memory (and/or leaking).
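
In case anyone wants to try it, wiring that in looks roughly like this (a
minimal sketch, assuming the Dozer WSGI middleware class that package
exports):

    from dozer import Dozer  # pip install Dozer

    def application(environ, start_response):
        # Stand-in for any of the service WSGI apps.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello\n']

    # Wrap the app; the middleware then samples object counts over time
    # and serves the per-type graphs itself over HTTP.
    application = Dozer(application)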


https://pypi.python.org/pypi/linesman might also be somewhat useful to 
have running.


How any process takes more than 100MB here blows my mind (horizon is 
doing nicely, ha); what are people caching in process to have RSS that 
large (1.95 GB, woah).


Armando M. wrote:

Hi,

[TL;DR]: OpenStack services have steadily increased their memory
footprints. We need a concerted way to address the oom-kills experienced
in the openstack gate, as we may have reached a ceiling.

Now the longer version:


We have been experiencing some instability in the gate lately due to a
number of reasons. When everything adds up, this means it's rather
difficult to merge anything and knowing we're in feature freeze, that
adds to stress. One culprit was identified to be [1].

We initially tried to increase the swappiness, but that didn't seem to
help. Then we have looked at the resident memory in use. When going back
over the past three releases we have noticed that the aggregated memory
footprint of some openstack projects has grown steadily. We have the
following:

  * Mitaka
  o neutron: 1.40GB
  o nova: 1.70GB
  o swift: 640MB
  o cinder: 730MB
  o keystone: 760MB
  o horizon: 17MB
  o glance: 538MB
  * Newton
  o neutron: 1.59GB (+13%)
  o nova: 1.67GB (-1%)
  o swift: 779MB (+21%)
  o cinder: 878MB (+20%)
  o keystone: 919MB (+20%)
  o horizon: 21MB (+23%)
  o glance: 721MB (+34%)
  * Ocata
  o neutron: 1.75GB (+10%)
  o nova: 1.95GB (+16%)
  o swift: 703MB (-9%)
  o cinder: 920MB (+4%)
  o keystone: 903MB (-1%)
  o horizon: 25MB (+20%)
  o glance: 740MB (+2%)

Numbers are approximate and I only took a couple of samples, but in a
nutshell, the majority of the services have seen double-digit growth
over the past two cycles in terms of the amount of RSS memory they use.

Since [1] has been observed only since ocata [2], I imagine it's pretty
reasonable to assume that the memory increase may well be a determining
factor in the oom-kills we see in the gate.

Profiling and surgically reducing the memory used by each component in
each service is a lengthy process, but I'd rather see some gate relief
right away. Reducing the number of API workers helps bring the RSS
memory back down to mitaka levels:

  * neutron: 1.54GB
  * nova: 1.24GB
  * swift: 694MB
  * cinder: 778MB
  * keystone: 891MB
  * horizon: 24MB
  * glance: 490MB

However, it may have other side effects, like longer execution times or
an increase in timeouts.
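
For the record, in a devstack-based job this is roughly a one-line change
(a sketch using devstack's API_WORKERS knob; the actual stop-gap [4] may
differ in detail):

    [[local|localrc]]
    # Cap the per-service API worker count instead of letting it default
    # to the host CPU count, trading some throughput for lower RSS.
    API_WORKERS=2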

Where do we go from here? I am not particularly fond of the stop-gap [4],
but it is the one fix that most widely addresses the memory increase we
have experienced across the board.

Thanks,
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1656386

[2]
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22oom-killer%5C%22%20AND%20tags:syslog
[3]
http://logs.openstack.org/21/427921/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/82084c2/
[4] https://review.openstack.org/#/c/427921

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Weekly wrap-up

2017-02-02 Thread Richard Jones
Hi folks,

The mad rush to RC1 concluded this week with RC1 now tagged and the
stable/ocata branch created. We've got a couple of bugfixes that
should have made the RC1 release but just missed out, so we're going to
try to get those in for RC2; otherwise RC1 is a good release we can
all be happy with!

This week's meeting covered a few topics beyond the impending RC1 release:

- There was some more talk of the PTG, and sadly a couple of folks
(myself included) won't be making it
- I volunteered to be the new Docs Liaison
- I welcomed our new (and returning) PTL for Pike, Rob Cresswell!


Have a great weekend all,

 Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-community] OpenStack Summit Boston-CFP closes February 6th

2017-02-02 Thread Sam P
Hi Erin,

 Thank you for the reminder and other info.
 In your previous mail,
 > Panels are allowed a total of four speakers plus one moderator, whereas
 > presentations and lightning talks are limited to two speakers.
 To the best of my knowledge, presentations are limited to three speakers.
 Not sure about the lightning talks.


--- Regards,
Sampath



On Thu, Feb 2, 2017 at 4:14 AM, Erin Disney  wrote:
> Hi Everyone-
>
> Don’t forget: the Call for Presentations for the upcoming OpenStack Summit
> Boston closes next week!
>
> The submission deadline is February 6, 2017 at 11:59PM PDT (February 7, 2017
> at 6:59 UTC).
>
> Reminder: Proposed sessions must indicate a format: Panel, presentation or
> lightning talk. Each format has a maximum number of speakers associated.
> Panels are allowed a total of four speakers plus one moderator, whereas
> presentations and lightning talks are limited to two speakers.
>
> As a reminder, speakers are limited to a maximum of three submissions total.
>
> Contact speakersupp...@openstack.org with any submission questions.
>
> BOSTON REGISTRATION
> Attendee registration is now open. Purchase your discounted early bird
> passes now. Prices will increase in mid March.
>
> SPONSORSHIP SALES
> Summit sponsorship sales are also open. You can now sign the electronic
> contract here. If you plan to sponsor both 2017 OpenStack Summits (Boston in
> May & Sydney in November), then check out page 4 of the Boston Summit
> Sponsorship Prospectus for a special 15% discount on Sydney Summit
> sponsorship packages. Please note this only applies to companies who sponsor
> both the Boston Summit and Sydney Summit. Full details of the sponsorship
> signing process are outlined here.
>
> If you have any general Summit questions, contact us at
> sum...@openstack.org.
>
>
> Erin Disney
> OpenStack Marketing
> e...@openstack.org
>
>
> ___
> Community mailing list
> commun...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] ocata rc1 is (almost) ready to go

2017-02-02 Thread Matt Riedemann
I've got the patch up in the releases repo and the listed dependencies 
are all merged:


https://review.openstack.org/#/c/428531/

I've got a WIP on it until tomorrow when we can talk through some of the 
last items in the etherpad:


https://etherpad.openstack.org/p/nova-ocata-rc1-todos

Thanks to everyone involved the last few days to get us to this point. 
It's been rough but we're near the end.


--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE for novajoin in TripleO's undercloud

2017-02-02 Thread Emilien Macchi
On Thu, Feb 2, 2017 at 10:22 PM, Michael Still  wrote:
> What version of nova is tripleo using here? This won't work quite right if
> you're using Mitaka until https://review.openstack.org/#/c/427547/ lands and
> is released.

TripleO Ocata will use Nova Ocata, using the release from upstream
(currently nova-ocata-3 and soon nova-ocata-rc1).
This feature won't be backported to newton/mitaka, so no risk.

> Also, I didn't know novajoin existed and am pleased to have discovered it.
>
> Michael
>
>
>
> On Fri, Feb 3, 2017 at 11:27 AM, Juan Antonio Osorio 
> wrote:
>>
>> Hello,
>>
>> I would like to request an FFE to properly support the novajoin vendordata
>> plugin in TripleO. Most of the work has landed, however, we still need to
>> add it to TripleO's CI in order to have it officially supported.
>>
>> This is crucial for TLS-everywhere configuration's usability, since it
>> makes it easier to populate the required fields in the CA (which in our
>> case is FreeIPA). I'm currently working on a patch to add it to the
>> fakeha-caserver OVB job which, after this is done, I hope to move from the
>> experimental queue to the periodic one.
>>
>> BR
>>
>> --
>> Juan Antonio Osorio R.
>> e-mail: jaosor...@gmail.com
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Rackspace Australia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] Look Forward to Meeting You on Monday Feb 6 and Tuesday Feb 7 in Sunnyvale, CA

2017-02-02 Thread HU, BIN
Hello team,



As we discussed in [1] and [2], and announced in [3], we are all set for Monday 
Feb 6th and Tuesday Feb 7th in the EBC center at Juniper's office in Sunnyvale.



Many thanks to Sukhdev and Juniper team for hosting this meeting.



Please refer to [4] for more detailed logistics information, including driving 
directions. When you come in, you should directly go to the EBC center (not the 
front desk in the lobby). The admin in the EBC center will escort you to the 
appropriate room.



A tentative agenda can be found at [5]. If you have any questions, feel free to 
ask.



I am looking forward to meeting you.



Thank you

Bin



[1] 
http://eavesdrop.openstack.org/meetings/gluon/2017/gluon.2017-01-11-18.00.html

[2] 
http://eavesdrop.openstack.org/meetings/gluon/2017/gluon.2017-01-18-18.01.html

[3] http://lists.openstack.org/pipermail/openstack-dev/2017-January/110447.html

[4] https://wiki.openstack.org/wiki/Meetings/Gluon/Logistics-2017020607

[5] https://wiki.openstack.org/wiki/Meetings/Gluon/Agenda-2017020607



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE for novajoin in TripleO's undercloud

2017-02-02 Thread Michael Still
What version of nova is tripleo using here? This won't work quite right if
you're using Mitaka until https://review.openstack.org/#/c/427547/ lands
and is released.

Also, I didn't know novajoin existed and am pleased to have discovered it.

Michael



On Fri, Feb 3, 2017 at 11:27 AM, Juan Antonio Osorio 
wrote:

> Hello,
>
> I would like to request an FFE to properly support the novajoin vendordata
> plugin in TripleO. Most of the work has landed, however, we still need to
> add it to TripleO's CI in order to have it officially supported.
>
> This is crucial for TLS-everywhere configuration's usability, since it
> makes it easier to populate the required fields in the CA (which in our
> case is FreeIPA). I'm currently working on a patch to add it to the
> fakeha-caserver OVB job which, after this is done, I hope to move from the
> experimental queue to the periodic one.
>
> BR
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Robert Collins
On 3 Feb. 2017 16:14, "Robert Collins"  wrote:

This may help.
http://jam-bazaar.blogspot.co.nz/2009/11/memory-debugging-with-meliae.html

-rob


Oh, and if I recall correctly, RunSnakeRun supports both heapy and meliae.

-rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Robert Collins
This may help.
http://jam-bazaar.blogspot.co.nz/2009/11/memory-debugging-with-meliae.html
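
The short version of what that post walks through, if memory serves (the
paths here are just examples): dump from inside the target process, then
load the dump offline.

    # inside the process you want to inspect
    from meliae import scanner
    scanner.dump_all_objects('/tmp/nova-api.json')

    # later, in a separate interpreter
    from meliae import loader
    om = loader.load('/tmp/nova-api.json')
    print(om.summarize())  # top object types by count and total bytes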

-rob

On 3 Feb. 2017 10:39, "Armando M."  wrote:

>
>
> On 2 February 2017 at 13:36, Ihar Hrachyshka  wrote:
>
>> On Thu, Feb 2, 2017 at 7:44 AM, Matthew Treinish 
>> wrote:
>> > Yeah, I'm curious about this too, there seems to be a big jump in
>> Newton for
>> > most of the projects. It might not be a single common cause between
>> them, but
>> > I'd be curious to know what's going on there.
>>
>> Both Matt from Nova as well as me and Armando suspect
>> oslo.versionedobjects. The pattern of the memory consumption rise somewhat
>> correlates with the level of adoption of the library, at least in
>> Neutron. That being said, we don't have any numbers, so at this point
>> it's just pointing fingers into Oslo direction. :) Armando is going to
>> collect actual memory profile.
>>
>
> I'll do my best, but I can't guarantee I can come up with something in
> time for RC.
>
>
>>
>> Ihar
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] congress 5.0.0.0rc1 (ocata)

2017-02-02 Thread Eric K
RC1 released. Great job everyone for getting our fixes in!

Let's continue testing and reporting bugs to make sure Ocata is a
rock-solid release!

On 2/2/17, 5:01 PM, "no-re...@openstack.org" 
wrote:

>
>Hello everyone,
>
>A new release candidate for congress for the end of the Ocata
>cycle is available!  You can find the source code tarball at:
>
>https://tarballs.openstack.org/congress/
>
>Unless release-critical issues are found that warrant a release
>candidate respin, this candidate will be formally released as the
>final Ocata release. You are therefore strongly
>encouraged to test and validate this tarball!
>
>Alternatively, you can directly test the stable/ocata release
>branch at:
>
>http://git.openstack.org/cgit/openstack/congress/log/?h=stable/ocata
>
>Release notes for congress can be found at:
>
>http://docs.openstack.org/releasenotes/congress/
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Final mascot

2017-02-02 Thread Matt Riedemann

I was sent a copy of the final mascot for Nova which is attached here.

The Foundation wants to have the mascots finalized before the PTG. This 
is just an opportunity for people to raise issues with it if they have any.


The original email:

--

Hi Matt,

I have a new revision from our illustration team for your team’s project 
mascot. We’re pushing hard to get all 60 of the mascots finalized by the 
PTG, so I’d love any feedback from your team as swiftly as possible. As 
a reminder, we can’t change the illustration style (since it’s 
consistent throughout the full mascot set) and so we’re just looking for 
problems with the creatures. Could you please let me know if your team 
has any final concerns?


--

So I guess reply here if you have any feedback or concerns with this.

--

Thanks,

Matt Riedemann
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Large Contributing OpenStack Operators working group?

2017-02-02 Thread Curtis
On Thu, Feb 2, 2017 at 1:14 PM, Jay Pipes  wrote:
> Hi,
>
> I was told about this group today. I have a few questions. Hopefully someone
> from this team can illuminate me with some answers.
>
> 1) What is the purpose of this group? The wiki states that the team "aims to
> define the use cases and identify and prioritise the requirements which are
> needed to deploy, manage, and run services on top of OpenStack. This work
> includes identifying functional gaps, creating blueprints, submitting and
> reviewing patches to the relevant OpenStack projects, contributing to
> working those items, tracking their completion."
>
> What is the difference between the LCOO and the following existing working
> groups?
>
>  * Large Deployment Team
>  * Massively Distributed Team
>  * Product Working Group
>  * Telco/NFV Working Group

I just wanted to add one thing here, and that is that the Telco/NFV
Working Group you mention above is really the Operators Telecom/NFV
Working Group, which is not the same group that had existed before,
and is meant to have an OpenStack Operators perspective. We have been
having some good meetings recently, and some of what we are hoping to
do is bring together various communities, eg. OpenStack Operators,
telecoms, OPNFV, and others, and act as a sort of bridge, as well as
actually generate some useful artifacts, though likely not any code
other than what we might get into something like osops.

The LCOO seems different to me b/c they will have actual human
resources to put into developing code. Or at least that is my
impression of what they are doing. I do think they should try to
adhere to more of the OpenStack community guidelines, eg. IRC and
such, but I was also recently on an 8-hour conference call with a
telecom; they love their conference calls. ;)

Thanks,
Curtis.

>
> 2) According to the wiki page, only companies that are "Multi-Cloud
> Operator[s] and/or Network Service Provider[s]" are welcome in this team.
> Why is the team called "Large Contributing OpenStack Operators" if it's only
> for Telcos? Further, if this is truly only for Telcos, why isn't the
> Telco/NFV working group appropriate?
>
> 3) Under the "Guiding principles" section of the above wiki, the top
> principle is "Align with the OpenStack Foundation". If this is the case, why
> did the group move its content to the closed Atlassian Confuence platform?
> Why does the group have a set of separate Slack channels instead of using
> the OpenStack mailing lists and IRC channels? Why is the OPNFV Jira used for
> tracking work items for the LCOO agenda?
>
> See https://wiki.openstack.org/wiki/Gluon/Tasks-Ocata for examples.
>
> 4) I see a lot of agenda items around projects like Gluon, Craton, Watcher,
> and Blazar. I don't see any concrete ideas about talking with the developers
> of the key infrastructure services that OpenStack is built around. How does
> the LCOO plan on reaching out to the developers of the long-standing
> OpenStack projects like Nova, Neutron, Cinder, and Keystone to drive their
> shared agenda?
>
> Thanks for reading and (hopefully) answering.
>
> -jay
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Blog: serverascode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-02-02 Thread Wesley Hayutin
On Thu, Feb 2, 2017 at 9:35 AM, Attila Darazs  wrote:

> On 02/01/2017 08:37 PM, John Trowbridge wrote:
>
>>
>>
>> On 01/30/2017 10:56 AM, Emilien Macchi wrote:
>>
>>> Sagi, you're now core on TripleO CI repo. Thanks for your hard work on
>>> tripleo-quickstart transition, and also helping by keeping CI in good
>>> shape, your work is amazing!
>>>
>>> Congrats!
>>>
>>> Note: I couldn't add you to tripleo-ci group, but only to tripleo-core
>>> (Gerrit permissions), which mean you can +2 everything but we trust
>>> you to use it only on tripleo-ci. I'll figure out the Gerrit
>>> permissions later.
>>>
>>>
>> I also told Sagi that he should also feel free to +2 any
>> tripleo-quickstart/extras patches which are aimed at transitioning
>> tripleo-ci to use quickstart. I didn't really think about this as an
>> extra permission, as any tripleo core has +2 on
>> tripleo-quickstart/extras. However, I seem to have surprised the other
>> quickstart cores with this. None were opposed to the idea, but just
>> wanted to make sure that it was clearly communicated that this is allowed.
>>
>> If there is some objection to this, we can consider it further. FWIW,
>> Sagi has been consistently providing high quality critical reviews for
>> tripleo-quickstart/extras for some time now, and was pivotal in the
>> setup of the quickstart based OVB job.
>>
>
> Thanks for the clarification.
>
> And +1 on Sagi as a quickstart/extras core. I really appreciate his
> critical eyes on the changes.
>
> Attila


Thanks Emilien, John!
Congrats Sagi!


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Doug Hellmann
Excerpts from Octave J. Orgeron's message of 2017-02-02 15:08:14 -0700:
> Comments below..
> 
> On 2/2/2017 1:08 PM, Doug Hellmann wrote:
> > Excerpts from Octave J. Orgeron's message of 2017-02-02 12:16:15 -0700:
> >> Hi Doug,
> >>
> >> Comments below..
> >>
> >> Thanks,
> >> Octave
> >>
> >> On 2/2/2017 11:27 AM, Doug Hellmann wrote:
> >>> Excerpts from Octave J. Orgeron's message of 2017-02-02 09:40:23 -0700:

[snip]

> > So all existing scripts that create or modify tables will need to
> > be updated? That's going to be a lot of work. It will also be a lot
> > of work to ensure that new alter scripts are implemented using the
> > required logic, and that testing happens in the gates for all
> > projects supporting this feature to ensure there are no regressions
> > or behavioral changes in the applications as a result of the changes
> > in table definitions.
> >
> > I'll let the folks more familiar with databases in general and MySQL
> > in particular respond to some of the technical details, but I think
> > I should give you fair warning that you're taking on a very big
> > project, especially for someone new to the community.
> 
> Yes, this is a major undertaking and a major driver for Oracle to set up a 
> 3rd party CI so that we can automate regression testing against MySQL 
> Cluster. On the flip side, it helps solve some of the challenges with 

I'm not sure we would want to gate projects on CI run outside of
our infrastructure community. We've had some bad experiences with
that in the past. What are the options for running MySQL Cluster
on nodes upstream?

> larger deployments where an active/passive solution for MySQL DB is not 
> sufficient. So the pay-off is pretty big from an availability and 
> scale-out perspective.
> 
> But I do realize that I'll have to maintain this long-term and hopefully 
> get others to help out as more services are added to OpenStack.

Given the scope of the work being proposed, and the level of expertise
needed to do it, we're going to need to have more than one person
available to debug issues before we go too far ahead with it.

It might help to have an example or two of the sorts of migration
script changes you've described elsewhere. Maybe you can prepare a
sample patch for a project?

Are there tools that can look at a table definition and tell if it
meets the criteria for the cluster backend (the row size, types
used, whatever else)? Or is it something one has to do by hand?
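
For what it's worth, a rough sketch of what such a check could look like,
using SQLAlchemy reflection and very rough per-type byte estimates (the
limit and the estimates here are assumptions drawn from this thread, not
authoritative NDB numbers):

    import sqlalchemy as sa

    NDB_ROW_LIMIT = 14000  # approximate NDB row-length limit

    def _estimated_width(column):
        # Text/LOB types are stored out of row, so only count a small
        # per-column overhead for them.
        if isinstance(column.type, (sa.Text, sa.LargeBinary)):
            return 8
        if isinstance(column.type, sa.String):
            return (column.type.length or 255) * 3  # worst-case utf8
        return 8

    def check_tables(url):
        engine = sa.create_engine(url)
        meta = sa.MetaData()
        meta.reflect(bind=engine)
        for table in meta.sorted_tables:
            width = sum(_estimated_width(c) for c in table.columns)
            if width > NDB_ROW_LIMIT:
                print('%s: estimated row width %d exceeds %d'
                      % (table.name, width, NDB_ROW_LIMIT))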

> > I may not have been entirely clear. You need to add a function to
> > oslo.db to allow a user of oslo.db to read the configuration value
> > without knowing what that option name is. There are two reasons for
> > this policy:
> >
> > 1. Configuration options are supposed to be completely transparent
> > to the application developer using the library, otherwise they
> > would be parameters to the classes or functions in the library
> > instead of deployer-facing configuration options.
> >
> > oslo.config allows us to rename configuration options transparently
> > to deployers (they get a warning about the new name or location
> > for the option in the config file, but the library knows both
> > locations).
> >
> > The rename feature does not work when accessing options
> > programmatically, because we do not consider configuration options
> > to be part of the API of a library.  That means that cfg.CONF.foo.bar
> > can move to cfg.CONF.blah.bletch, and your code using it by the
> > old name will break.
> 
> This is correct. Neutron does exactly what you are describing where you 
> have to look under a neutron namespace instead of the cfg.CONF namespace 
> to find the actual configured setting from the .conf file.

The neutron migration scripts and configuration options are "owned" by
the same code, so that's fine. That's not the case with oslo.db.

> > 2. Accessing configuration options depends on having them registered,
> > and a user of the library that owns a configuration option may not
> > know which functions in the library to call to register the options.
> > As a result, they may try to use an option before it is actually
> > defined. Using an access function to read the value of an option
> > allows the library to ensure the option is registered before trying
> > to return the value.
> >
> > For those reasons, in cases where a configuration option needs to
> > be exposed outside of the library we require a function defined
> > inside the library where we can have unit tests that will break if
> > the configuration option is renamed or otherwise changed, and so
> > we can handle those changes without breaking applications consuming
> > the library.
> >
> > In this case, the migration scripts are outside of oslo.db, so they
> > will need a public function added to oslo.db to access the configuration
> > value. The function should first ensure that the new option is
> > registered, and then return the configured value.
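
To illustrate the kind of accessor being described, a hypothetical sketch
(the option name comes from this thread and the function name is made up;
this is not the actual oslo.db API):

    from oslo_config import cfg

    _OPTS = [
        # The option proposed in this thread, registered here only so
        # the sketch is self-contained.
        cfg.StrOpt('mysql_storage_engine', default='InnoDB',
                   help='MySQL storage engine used when creating tables.'),
    ]

    def get_mysql_storage_engine(conf=cfg.CONF):
        """Register the option if needed, then return its value."""
        # Registration is idempotent, so callers never hit an
        # unregistered-option error, and a future rename only has to be
        # handled inside this one function.
        conf.register_opts(_OPTS, group='database')
        return conf.database.mysql_storage_engine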

[openstack-dev] [all] Upcoming tox 2.6.0 release

2017-02-02 Thread Tony Breeds
Hi All,
I don't expect a problem, but I also don't know how/if we control which tox
version is installed in our images.  Based on the thread here [1] it seems there
will be a tox 2.6.0 release real soon now.

Yours Tony.

[1] 
https://mail.python.org/mm3/archives/list/tox-...@python.org/message/MIYBH6UUKRBWMCWSA3EOVXYT5OQO6DDN/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] congress 5.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for congress for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/congress/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/congress/log/?h=stable/ocata

Release notes for congress can be found at:

http://docs.openstack.org/releasenotes/congress/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] FFE for novajoin in TripleO's undercloud

2017-02-02 Thread Juan Antonio Osorio
Hello,

I would like to request an FFE to properly support the novajoin vendordata
plugin in TripleO. Most of the work has landed, however, we still need to
add it to TripleO's CI in order to have it officially supported.

This is crucial for TLS-everywhere configuration's usability, since it
makes it easier to populate the required field's in the CA (which in our
case is FreeIPA). I'm currently working on a patch to add it to the
fakeha-caserver OVB job; which, after this is done, I hope to move from the
experimental queue, to the periodic one.

BR

-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [stadium] subprojects on independent release cycle

2017-02-02 Thread Armando M.
Hi neutrinos,

I have put a number of patches in the merge queue for a few sub-projects.
We currently have a number of these that are on an independent release
schedule. In particular:

   - networking-bagpipe
   - networking-bgpvpn
   - networking-midonet
   - networking-odl
   - networking-sfc

Please make sure that between now and March 10th [1], you work to prepare
at least one ocata release that works with neutron's [2] and cut a stable
branch before than. That would incredibly help consumers who are interested
in assembling these bits together and start testing ocata as soon as it's
out.

Your collaboration is much appreciated.

Many thanks,
Armando

[1] https://releases.openstack.org/ocata/schedule.html
[2] https://review.openstack.org/#/c/428474/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-docs][security] Sec guide change

2017-02-02 Thread Lana Brindley
Nice work!

L

On 03/02/17 05:06, Alexandra Settle wrote:
> Hi everyone,
> 
>  
> 
> As of today, all bugs for the Security Guide will be managed by 
> ossp-security-documentation and no longer will be tracked using the OpenStack 
> manuals Launchpad.
> 
>  
> 
> All tracking for Security Guide related bugs can be found here: 
> https://bugs.launchpad.net/ossp-security-documentation
> 
>  
> 
> Big thanks to the security team and Ian Cordasco for creating and updating 
> the bug list in Launchpad!
> 
>  
> 
> Thank you,
> 
>  
> 
> Alex
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] mistral-dashboard 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for mistral-dashboard for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:


http://git.openstack.org/cgit/openstack/mistral-dashboard/log/?h=stable/ocata

Release notes for mistral-dashboard can be found at:

http://docs.openstack.org/releasenotes/mistral-dashboard/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] mistral 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for mistral for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/ocata

Release notes for mistral can be found at:

http://docs.openstack.org/releasenotes/mistral/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Ed Leafe
On Feb 2, 2017, at 10:16 AM, Matthew Treinish  wrote:

> 

If that was intentional, it is the funniest thing I’ve read today. :)

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] networking-powervm 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for networking-powervm for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/networking-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:


http://git.openstack.org/cgit/openstack/networking-powervm/log/?h=stable/ocata

Release notes for networking-powervm can be found at:

http://docs.openstack.org/releasenotes/networking-powervm/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova_powervm 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for nova_powervm for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/nova-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/ocata

Release notes for nova_powervm can be found at:

http://docs.openstack.org/releasenotes/nova_powervm/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Kevin Benton
I'm referring to Apache sitting in between the services now as a TLS
terminator and connection proxy. That was not the configuration before but
it is now the default devstack behavior.

See this example from Newton:
http://logs.openstack.org/73/428073/2/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/028ea38/logs/apache_config/
Then this from master:
http://logs.openstack.org/32/421832/4/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5af5c7c/logs/apache_config/


>The ways in which OpenStack and oslo.service use eventlet are known to
>have scaling bottlenecks. The Keystone team saw substantial throughput
>gains going over to apache hosting.


Right, but there is a difference between scaling issues and a single worker
not being able to handle the peak of 5 or so concurrent requests that the gate
jobs experience. The eventlet wsgi server should have no issues with our
gate load.
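
For context, this is roughly what a single eventlet-based worker looks like
(a minimal standalone sketch, not the actual oslo.service code): each
accepted connection is handled in its own green thread, so one worker can
serve many concurrent requests.

    import eventlet
    eventlet.monkey_patch()
    from eventlet import wsgi

    def app(environ, start_response):
        # Stand-in for a service's WSGI application.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    # wsgi.server spawns a green thread per connection.
    wsgi.server(eventlet.listen(('127.0.0.1', 8080)), app)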

On Thu, Feb 2, 2017 at 3:01 PM, Sean Dague  wrote:

> On 02/02/2017 04:07 PM, Kevin Benton wrote:
> > This error seems to be new in the ocata cycle. It's either related to a
> > dependency change or the fact that we put Apache in between the services
> > now. Handling more concurrent requests than workers wasn't an issue
> > before.
> >
> > It seems that you are suggesting that eventlet can't handle concurrent
> > connections, which is the entire purpose of the library, no?
>
> The only services that are running on Apache in standard gate jobs are
> keystone and the placement api. Everything else is still the
> oslo.service stack (which is basically run eventlet as a preforking
> static worker count webserver).
>
> The ways in which OpenStack and oslo.service use eventlet are known to
> have scaling bottlenecks. The Keystone team saw substantial throughput
> gains going over to apache hosting.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-02 Thread Lucas Alvares Gomes
Hi,

> I'm with the group here.
>
> Let's find something that looks nice and is not offensive to anyone.
>

+1

...

Also, Heidi, I forgot to ask in the previous emails and I couldn't
find it in the guidelines [0]: what license will these images have? Will
people be allowed to share and create derivative works from them (for
any purpose)?

Note that our current mascot is licensed under CC BY-SA license [1]
and I think it's important to keep the freedom of it.

[0] https://www.openstack.org/project-mascots/
[1] https://creativecommons.org/licenses/by-sa/4.0/

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] ceilometer-powervm 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for ceilometer-powervm for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/ceilometer-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:


http://git.openstack.org/cgit/openstack/ceilometer-powervm/log/?h=stable/ocata

Release notes for ceilometer-powervm can be found at:

http://docs.openstack.org/releasenotes/ceilometer-powervm/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron
That refers to the total length of the row. InnoDB has a limit of 65k 
and NDB is limited to 14k.


A simple example would be the volumes table in Cinder where the row 
length goes beyond 14k. So in the IF logic block, I change column types 
that are vastly oversized such as status and attach_status, which by 
default are 255 chars. So to determine a more appropriate size, I look 
through the Cinder code to find where the possible options/states are 
for those columns. Then I cut it down to a more reasonable size. I'm 
very careful when I cut the size of a string column to ensure that all 
of the possible values can be contained.


In cases where a column is extremely large for capturing the outputs of 
a command, I will change the type to Text or TinyText depending on the 
length required. A good example of this is in the agents table of 
Neutron where there is a column for configurations that has a string 
length of 4096 characters, which I change to Text. Text blobs are stored 
differently and do not count against the row length.
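
As an illustration only (made-up table and sizes; the engine value is
hard-coded so the sketch stands alone, whereas the real patches read the
proposed cfg.CONF.database.mysql_storage_engine setting), the kind of
conditional sizing described above looks roughly like this in a SQLAlchemy
table definition:

    import sqlalchemy as sa

    # Illustrative; would come from configuration in practice.
    MYSQL_STORAGE_ENGINE = 'NDBCLUSTER'

    def _wide_string(length):
        # Under NDB, very wide varchars become Text so they do not count
        # against the ~14k row-length limit; otherwise keep the original.
        if MYSQL_STORAGE_ENGINE == 'NDBCLUSTER':
            return sa.Text()
        return sa.String(length)

    metadata = sa.MetaData()
    agents = sa.Table(
        'agents', metadata,
        sa.Column('id', sa.String(36), primary_key=True),
        sa.Column('configurations', _wide_string(4096), nullable=False),
        mysql_engine=MYSQL_STORAGE_ENGINE,
    )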


I've also observed differences between Kilo, Mitaka, and tip where even 
for InnoDB some of these tables are getting wider than can be supported. 
So in the case of Cinder, some of the columns have been shifted to 
separate tables to fit within 65k. I've seen the same thing in Neutron. 
So I fully expect that some of the services that have table bloat will 
have to cut the lengths or break the tables up over time anyways. As 
that happens, it reduces the amount of work for me, which is a good thing.


The most complicated database schemas to patch up are cinder, glance, 
neutron, and nova due to the size and complexity of their tables. Those 
also have a lot of churn between releases where the schema changes more 
often. Other services like keystone, heat, and ironic are considerably 
easier to work with and have well laid out tables that don't change much.


Thanks,
Octave

On 2/2/2017 1:25 PM, Mike Bayer wrote:



On 02/02/2017 02:52 PM, Mike Bayer wrote:


But more critically I noticed you referred to altering the names of
columns to suit NDB.  How will this be accomplished?   Changing a column
name in an openstack application is no longer trivial, because online
upgrades must be supported for applications like Nova and Neutron.  A
column name can't just change to a new name, both columns have to exist
and logic must be added to keep these columns synchronized.



correction, the phrase was "Row character length limits 65k -> 14k" - 
does this refer to the total size of a row?  I guess rows that store 
JSON or tables like keystone tokens are what you had in mind here, can 
you give specifics ?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Octave J. Orgeron | Sr. Principal Architect and Software Engineer
Oracle Linux OpenStack
Mobile: +1-720-616-1550 
500 Eldorado Blvd. | Broomfield, CO 80021
Certified Oracle Enterprise Architect: Systems Infrastructure 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron

Comments below..

On 2/2/2017 1:08 PM, Doug Hellmann wrote:

Excerpts from Octave J. Orgeron's message of 2017-02-02 12:16:15 -0700:

Hi Doug,

Comments below..

Thanks,
Octave

On 2/2/2017 11:27 AM, Doug Hellmann wrote:

Excerpts from Octave J. Orgeron's message of 2017-02-02 09:40:23 -0700:

Hi Doug,

One could try to detect the default engine. However, in MySQL Cluster,
you can support multiple storage engines. Only NDB is fully clustered
and replicated, so if you accidentally set a table to be InnoDB it won't
be replicated . So it makes more sense for the operator to be explicit
on which engine they want to use.

I think this change is probably a bigger scale item than I understood
it to be when you originally contacted me off-list for advice about
how to get started. I hope I haven't steered you too far wrong, but
at least the conversation is started.

As someone (Mike?) pointed out on the review, the option by itself
doesn't do much of anything, now. Before we add it, I think we'll
want to see some more detail about how it's going used. It may be
easier to have that broader conversation here on email than on the
patch currently up for review.

Understood, it's a complicated topic since it involves gritty details in
SQL Alchemy and Alembic that are masked from end-users and operators
alike. Figuring out how to make this work did take some time on my part.


It sounds like part of the plan is to use the configuration setting
to control how the migration scripts create tables. How will that
work? Does each migration need custom logic, or can we build helpers
into oslo.db somehow? Or will the option be passed to the database
to change its behavior transparently?

These are good questions. For each service, when the db sync or db
manage operation is done it will call into SQL Alchemy or Alembic
depending on the methods used by the given service. For example, most
use SQL Alchemy, but there are services like Ironic and Neutron that use
Alembic. It is within these scripts under the /db/* hierarchy
that the logic exists today to configure the database schema for any
given service. Both approaches will look at the schema version in the
database to determine where to start the create, upgrade, heal, etc.
operations. What my patches do is that in the scripts where a table
needs to be modified, there will be custom IF/THEN logic to check the
cfg.CONF.database.mysql_storage_engine setting to make the required
modifications. There are also use cases where the api.py or model(s).py
under the /db/ hierarchy needs to look at this setting as well
for API and CLI operations where mysql_engine is auto-inserted into DB
operations. In those use cases, I replace the hard coded "InnoDB" with
the mysql_storage_engine variable.
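
As an illustration of the kind of conditional described above (a made-up
table and a hypothetical sqlalchemy-migrate script, not one of the actual
patches; the 'NDBCLUSTER' value is illustrative):

    from oslo_config import cfg
    from sqlalchemy import Column, Integer, MetaData, String, Table

    CONF = cfg.CONF
    # The proposed option, registered here only so the sketch is
    # self-contained.
    CONF.register_opts(
        [cfg.StrOpt('mysql_storage_engine', default='InnoDB')],
        group='database')

    def upgrade(migrate_engine):
        meta = MetaData()
        engine = CONF.database.mysql_storage_engine
        if engine == 'NDBCLUSTER':
            # Trim oversized varchars so the row fits NDB's ~14k limit.
            status = Column('status', String(36))
        else:
            status = Column('status', String(255))
        example = Table('example_volumes', meta,
                        Column('id', Integer, primary_key=True),
                        status,
                        mysql_engine=engine)
        example.create(migrate_engine)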

So all existing scripts that create or modify tables will need to
be updated? That's going to be a lot of work. It will also be a lot
of work to ensure that new alter scripts are implemented using the
required logic, and that testing happens in the gates for all
projects supporting this feature to ensure there are no regressions
or behavioral changes in the applications as a result of the changes
in table definitions.

I'll let the folks more familiar with databases in general and MySQL
in particular respond to some of the technical details, but I think
I should give you fair warning that you're taking on a very big
project, especially for someone new to the community.


Yes, this is a major undertaking and a major driver for Oracle to set up a 
3rd party CI so that we can automate regression testing against MySQL 
Cluster. On the flip side, it helps solve some of the challenges with 
larger deployments where an active/passive solution for MySQL DB is not 
sufficient. So the pay-off is pretty big from an availability and 
scale-out perspective.


But I do realize that I'll have to maintain this long-term and hopefully 
get others to help out as more services are added to OpenStack.





It would be interesting if we could develop some helpers to automate
this, but it would probably have to be at the SQL Alchemy or Alembic
levels. Unfortunately, throughout all of the OpenStack services today we
are hard coding things like mysql_engine, using InnoDB specific features
(savepoints, nested operations, etc.), and not following the strict SQL
orders for modifying table elements (foreign keys, constraints, and
indexes). That actually makes it difficult to support other MySQL
dialects or other databases out of the box. SQL Alchemy can be used to
fix some of these things if the SQL statements are all generic and we
follow strict SQL rules. But to change that would be a monumental
effort. That is why I took this approach of just adding custom logic.
There is a precedent for this already for Postgres and DB2 support in
some of the OpenStack services using custom logic to deal with similar
differences.

As to why we should place the configuration setting into oslo.db? Here
are a couple of logical reasons:

Oh, I'm not 

Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Sean Dague
On 02/02/2017 04:07 PM, Kevin Benton wrote:
> This error seems to be new in the ocata cycle. It's either related to a
> dependency change or the fact that we put Apache in between the services
> now. Handling more concurrent requests than workers wasn't an issue
> before.  
>
> It seems that you are suggesting that eventlet can't handle concurrent
> connections, which is the entire purpose of the library, no?

The only services that are running on Apache in standard gate jobs are
keystone and the placement api. Everything else is still the
oslo.service stack (which is basically run eventlet as a preforking
static worker count webserver).

The ways in which OpenStack and oslo.service use eventlet are known to
have scaling bottlenecks. The Keystone team saw substantial throughput
gains going over to apache hosting.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] freezer-api 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for freezer-api for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/freezer-api/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/freezer-api/log/?h=stable/ocata

Release notes for freezer-api can be found at:

http://docs.openstack.org/releasenotes/freezer-api/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] freezer-web-ui 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for freezer-web-ui for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/freezer-web-ui/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/freezer-web-ui/log/?h=stable/ocata

Release notes for freezer-web-ui can be found at:

http://docs.openstack.org/releasenotes/freezer-web-ui/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] freezer-dr 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for freezer-dr for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/freezer-dr/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/freezer-dr/log/?h=stable/ocata

Release notes for freezer-dr can be found at:

http://docs.openstack.org/releasenotes/freezer-dr/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] freezer 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for freezer for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/freezer/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/freezer/log/?h=stable/ocata

Release notes for freezer can be found at:

http://docs.openstack.org/releasenotes/freezer/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron

Hi Doug,

Comments below..

Thanks,
Octave

On 2/2/2017 12:52 PM, Mike Bayer wrote:



On 02/02/2017 02:16 PM, Octave J. Orgeron wrote:

Hi Doug,

Comments below..

Thanks,
Octave

On 2/2/2017 11:27 AM, Doug Hellmann wrote:

It sounds like part of the plan is to use the configuration setting
to control how the migration scripts create tables. How will that
work? Does each migration need custom logic, or can we build helpers
into oslo.db somehow? Or will the option be passed to the database
to change its behavior transparently?


These are good questions. For each service, when the db sync or db
manage operation is run, it calls into SQLAlchemy or Alembic depending
on the method used by the given service. For example, most use
SQLAlchemy, but there are services like Ironic and Neutron that use
Alembic. It is within these scripts under the /db/* hierarchy
that the logic exists today to configure the database schema for any
given service. Both approaches look at the schema version in the
database to determine where to start the create, upgrade, heal, etc.
operations. What my patches do is add custom IF/THEN logic in the
scripts where a table needs to be modified, checking the
cfg.CONF.database.mysql_storage_engine setting and making the required
modifications. There are also use cases where the api.py or model(s).py
under the /db/ hierarchy needs to look at this setting as well,
for API and CLI operations where mysql_engine is auto-inserted into DB
operations. In those use cases, I replace the hardcoded "InnoDB" with
the mysql_storage_engine variable.


can you please clarify "replace the hard coded "InnoDB""? Are you 
proposing to send reviews for patches against all occurrences of 
"InnoDB" in files like 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py ?
The "InnoDB" keyword is hardcoded in hundreds of migration files 
across all openstack projects that use MySQL. Are all of these going 
to be patched with some kind of conditional?


Yes, that is the plan: to patch each of the scripts that have these and 
any other issues that need to be addressed.
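
As a minimal sketch of what such a patched sqlalchemy-migrate script 
could look like, assuming the proposed 
cfg.CONF.database.mysql_storage_engine option is registered (the table 
definition here is hypothetical, not taken from any actual migration):

    from oslo_config import cfg
    from sqlalchemy import Column, Integer, MetaData, String, Table

    CONF = cfg.CONF


    def upgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine

        # Previously hardcoded as mysql_engine='InnoDB'; now driven by config.
        engine = CONF.database.mysql_storage_engine

        example = Table(
            'example', meta,
            Column('id', Integer, primary_key=True, nullable=False),
            Column('name', String(255)),
            mysql_engine=engine,
            mysql_charset='utf8',
        )
        example.create()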







It would be interesting if we could develop some helpers to automate
this, but it would probably have to be at the SQL Alchemy or Alembic
levels.


not really, you can build a hook that intercepts operations like 
CreateTable, or that intercepts SQL as it is emitted over a 
connection, in order to modify these values on the fly.  But that is a 
specific kind of approach with its own set of surprises. 
Alternatively you can make an alternate SQLAlchemy dialect that no 
longer recognizes "mysql_*" as the prefix for these arguments. There 
are ways to do this part.
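
As a rough illustration of the first option only (a sketch, not 
anyone's actual implementation), SQLAlchemy's compiler extension can 
rewrite the rendered DDL on the fly; the NDBCLUSTER engine name and the 
blanket string replacement are assumptions here:

    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.schema import CreateTable


    @compiles(CreateTable, 'mysql')
    def _create_table_for_ndb(create, compiler, **kw):
        # Render CREATE TABLE normally, then swap the engine clause.
        ddl = compiler.visit_create_table(create, **kw)
        return ddl.replace('ENGINE=InnoDB', 'ENGINE=NDBCLUSTER')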


But more critically I noticed you referred to altering the names of 
columns to suit NDB.  How will this be accomplished?   Changing a 
column name in an openstack application is no longer trivial, because 
online upgrades must be supported for applications like Nova and 
Neutron.  A column name can't just change to a new name, both columns 
have to exist and logic must be added to keep these columns synchronized.


Putting the hooks into the SQLAlchemy dialect would only solve things like 
the mysql_engine= argument, savepoints, and nested operations. It won't 
solve the row length issues, nor be able to determine which ones to target, 
since we don't have a method of specifying the potential lengths of 
contents. We also have to consider that Alembic doesn't have the same 
capabilities as SQLAlchemy, so if we invest in making enhancements 
there, Neutron and Ironic still wouldn't be able to benefit. 
I think being consistent is important here as well.


The patches don't change the names of columns; they only change the size 
or type. There is only a single occurrence that I've seen where a column 
name causes problems, because it uses a reserved SQL keyword. I 
have a patch for that issue, which I believe is in Heat if I remember 
correctly.




Unfortunately, throughout all of the OpenStack services today we

are hard coding things like mysql_engine, using InnoDB specific features
(savepoints, nested operations, etc.), and not following the strict SQL
orders for modifying table elements (foreign keys, constraints, and
indexes).


Savepoints aren't InnoDB specific; they are a standard SQL feature, and 
their use is not widespread right now.  I'm not sure what you 
mean by "the strict SQL orders"; we use ALTER TABLE as is standard in 
MySQL for this, and it's behind an abstraction layer that supports 
other databases such as PostgreSQL.


Savepoints are not implemented yet in MySQL Cluster, but they are on the 
roadmap. As for the SQL ordering, what I'm talking about is the way some 
services drop or modify foreign keys, constraints, or indexes in 
the wrong operation order. These have to be unfurled in the correct 
order and put back in the right order.  InnoDB does not enforce this, 
but 
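
A minimal Alembic-style sketch of the reordering being described, with 
hypothetical table, column, and constraint names: drop the dependent 
foreign key first, make the change, then recreate the constraint.

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        # Drop the dependent foreign key before touching the column, then
        # put it back, so a stricter engine accepts the change.
        op.drop_constraint('fk_child_parent_id', 'child', type_='foreignkey')
        op.alter_column('child', 'parent_id',
                        type_=sa.Integer(),
                        existing_nullable=False)
        op.create_foreign_key('fk_child_parent_id', 'child', 'parent',
                              ['parent_id'], ['id'])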

[openstack-dev] [keystone] keystone 11.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for keystone for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/keystone/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/ocata

Release notes for keystone can be found at:

http://docs.openstack.org/releasenotes/keystone/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Kevin Benton
Note the HTTPS in the traceback in the bug report. Also the mention of
adjusting the Apache mpm settings to fix it. That seems to point to an
issue with Apache in the middle rather than eventlet and API_WORKERS.

On Feb 2, 2017 14:36, "Ihar Hrachyshka"  wrote:

> The BadStatusLine error is well known:
> https://bugs.launchpad.net/nova/+bug/1630664
>
> Now, it doesn't mean that the root cause of the error message is the
> same, and it may as well be that lowering the number of workers
> triggered it. All I am saying is we saw that error in the past.
>
> Ihar
>
> On Thu, Feb 2, 2017 at 1:07 PM, Kevin Benton  wrote:
> > This error seems to be new in the ocata cycle. It's either related to a
> > dependency change or the fact that we put Apache in between the services
> > now. Handling more concurrent requests than workers wasn't an issue
> before.
> >
> > It seems that you are suggesting that eventlet can't handle concurrent
> > connections, which is the entire purpose of the library, no?
> >
> > On Feb 2, 2017 13:53, "Sean Dague"  wrote:
> >>
> >> On 02/02/2017 03:32 PM, Armando M. wrote:
> >> >
> >> >
> >> > On 2 February 2017 at 12:19, Sean Dague  >> > > wrote:
> >> >
> >> > On 02/02/2017 02:28 PM, Armando M. wrote:
> >> > >
> >> > >
> >> > > On 2 February 2017 at 10:08, Sean Dague  >> > 
> >> > > >> wrote:
> >> > >
> >> > > On 02/02/2017 12:49 PM, Armando M. wrote:
> >> > > >
> >> > > >
> >> > > > On 2 February 2017 at 08:40, Sean Dague  >> >   >> > >
> >> > > > 
> >> >  >> > > >
> >> > > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> >> > > > 
> >> > > > > 
> >> > > > >
> >> > > > > We definitely aren't saying running a single worker
> is
> >> > how
> >> > > we recommend people
> >> > > > > run OpenStack by doing this. But it just adds on to
> >> > the
> >> > > differences between the
> >> > > > > gate and what we expect things actually look like.
> >> > > >
> >> > > > I'm all for actually getting to the bottom of this,
> but
> >> > > honestly real
> >> > > > memory profiling is needed here. The growth across
> >> > projects
> >> > > probably
> >> > > > means that some common libraries are some part of
> this.
> >> > The
> >> > > ever growing
> >> > > > requirements list is demonstrative of that. Code reuse
> >> > is
> >> > > good, but if
> >> > > > we are importing much of a library to get access to a
> >> > couple of
> >> > > > functions, we're going to take a bunch of memory
> weight
> >> > on that
> >> > > > (especially if that library has friendly auto imports
> in
> >> > top level
> >> > > > __init__.py so we can't get only the parts we want).
> >> > > >
> >> > > > Changing the worker count is just shuffling around
> deck
> >> > chairs.
> >> > > >
> >> > > > I'm not familiar enough with memory profiling tools in
> >> > python
> >> > > to know
> >> > > > the right approach we should take there to get this
> down
> >> > to
> >> > > individual
> >> > > > libraries / objects that are containing all our
> memory.
> >> > Anyone
> >> > > more
> >> > > > skilled here able to help lead the way?
> >> > > >
> >> > > >
> >> > > > From what I hear, the overall consensus on this matter is
> to
> >> > determine
> >> > > > what actually caused the memory consumption bump and how
> to
> >> > > address it,
> >> > > > but that's more of a medium to long term action. In fact,
> to
> >> > me
> >> > > this is
> >> > > > one of the top priority matters we should talk about at
> the
> >> > > imminent PTG.
> >> > > >
> >> > > > For the time being, and to provide relief to the gate,
> >> > should we
> >> > > want to
> >> > > > lock the API_WORKERS to 1? I'll post something for review
> >> > and see how
> >> > > > many people shoot it down :)
> >> > >
> >> > > I don't think we want to do that. It's going to force down
> the
> >> > eventlet
> >> > > API workers to being a single process, and it's not super
> >> > clear that
> >> > > eventlet handles backups on the inbound socket well. I
> >> > honestly would
> >> > > expect 

Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Mikhail Medvedev
On Thu, Feb 2, 2017 at 12:28 PM, Jeremy Stanley  wrote:
> On 2017-02-02 04:27:51 + (+), Dolph Mathews wrote:
>> What made most services jump +20% between mitaka and newton? Maybe there is
>> a common cause that we can tackle.
> [...]
>
> Almost hesitant to suggest this one but since we primarily use
> Ubuntu 14.04 LTS for stable/mitaka jobs and 16.04 LTS for later
> branches, could bloat in a newer release of the Python 2.7
> interpreter there (or something even lower-level still like glibc)
> be a contributing factor?

In our third-party CI (IBM KVM on Power) we run both stable/mitaka and
master on Ubuntu Xenial. I went ahead and plotted dstat graphs, see
http://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/dstat20170202/
. It does look like there is some difference in overall memory use -
mitaka uses a bit less. This is anecdotal, but still is an extra data
point. Also note that we have 12G of ram, and we do not see oom kills.

> I agree it's more likely bloat in some
> commonly-used module (possibly even one developed outside our
> community), but potential system-level overhead probably should also
> get some investigation.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Mikhail Medvedev
IBM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Drivers meeting cancelled today

2017-02-02 Thread Armando M.
Hi,

With the release coming up, it's best to spend the time to polish what we
have.

Sorry for the short notice.

Thanks,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Clay Gerrard
On Thu, Feb 2, 2017 at 12:50 PM, Sean Dague  wrote:

>
> This is one of the reasons to get the wsgi stack off of eventlet and
> into a real webserver, as they handle HTTP request backups much much
> better.
>
>
To some extent I think this is generally true for *many* common workloads,
but the specifics depend *a lot* on the application under the webserver
that's servicing those requests.

I'm not entirely sure what you have in mind, and I may be mistaken to assume
this is a reference to Apache/mod_wsgi. If that's the case, then depending on
how you configure it, aren't you still going to end up with an instance of
the wsgi application per worker process and have the same front-of-line
queueing issue unless you increase workers? Maybe if the application is
thread-safe you can use OS thread workers, and preemptive interruption for
the GIL is more attractive for the application than eventlet's cooperative
interruption. Either way, it's not obvious that has a big impact on the
memory footprint issue (assuming the issue is memory growth in the
application and not specifically in eventlet.wsgi.server). But you may have
more relevant experience than I do - happy to be enlightened!
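
For reference, a minimal eventlet.wsgi sketch of the single-process
model under discussion: one OS process serves many connections by
switching green threads at I/O points (the port and the simulated delay
are arbitrary):

    import eventlet
    eventlet.monkey_patch()

    from eventlet import wsgi


    def app(environ, start_response):
        # Anything that blocks on I/O yields to other green threads instead
        # of tying up the whole process.
        eventlet.sleep(0.1)
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello\n']


    # A single worker process; concurrency comes from cooperative green
    # threads rather than extra processes or OS threads.
    wsgi.server(eventlet.listen(('127.0.0.1', 8080)), app)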

Thanks,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] POST /api-wg/news

2017-02-02 Thread Ken'ichi Ohmichi
2017-02-02 9:38 GMT-08:00 Chris Dent :
>
> Greetings OpenStack community,
>
> In today's meeting [0] after briefly covering old business we spent nearly
> 50 minutes going round in circles discussing the complex interactions of
> expectations of API stability, the need to fix bugs and the costs and
> benefits of microversions. We didn't make a lot of progress on the general
> issues, but we did #agree that a glance issue [4] should be treated as a
> code bug (not a documentation bug) that should be fixed. In some ways this
> position is not aligned with the ideal presented by stability guidelines but
> it is aligned with an original goal of the API-WG: consistency. It's unclear
> how to resolve this conflict, either in this specific instance or in the
> guidelines that the API-WG creates. As stated in response to one of the
> related reviews [5]: "If bugs like this don't get fixed properly in the
> code, OpenStack risks going down the path of Internet Explorer and people
> wind up writing client code to the bugs and that way lies madness."

I am not sure the code change can avoid the madness.
If we change the success status code (200 -> 204) without any version
bump, OpenStack clouds will return different status codes for the same
API operations.
That will break OpenStack interoperability, and client applications will
need to be changed to accept 204 as a success as well.
That could push application code toward the same madness.
I also think this is basically a code bug, but it is hard to fix because
of the big impact on existing users.
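
To make the impact concrete, here is a minimal sketch (with a
placeholder URL and token, not the actual glance call) of what a
tolerant client would have to do to keep working against clouds on
either side of the fix:

    import requests

    SUCCESS = (200, 204)  # accept both the old and the new success code


    def delete_resource(url, token):
        resp = requests.delete(url, headers={'X-Auth-Token': token})
        if resp.status_code not in SUCCESS:
            raise RuntimeError('unexpected status %d' % resp.status_code)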

Thanks
Ken Ohmichi

---

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Armando M.
On 2 February 2017 at 13:36, Ihar Hrachyshka  wrote:

> On Thu, Feb 2, 2017 at 7:44 AM, Matthew Treinish 
> wrote:
> > Yeah, I'm curious about this too, there seems to be a big jump in Newton
> for
> > most of the project. It might not a be a single common cause between
> them, but
> > I'd be curious to know what's going on there.
>
> Both Matt from Nova as well as me and Armando suspect
> oslo.versionedobjects. Pattern of memory consumption raise somewhat
> correlates with the level of adoption for the library, at least in
> Neutron. That being said, we don't have any numbers, so at this point
> it's just pointing fingers into Oslo direction. :) Armando is going to
> collect actual memory profile.
>

I'll do my best, but I can't guarantee I can come up with something in time
for RC.


>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Armando M.
On 2 February 2017 at 13:34, Ihar Hrachyshka  wrote:

> The BadStatusLine error is well known:
> https://bugs.launchpad.net/nova/+bug/1630664


That's the one! I knew I had seen it in the past!


>
>
> Now, it doesn't mean that the root cause of the error message is the
> same, and it may as well be that lowering the number of workers
> triggered it. All I am saying is we saw that error in the past.
>
> Ihar
>
> On Thu, Feb 2, 2017 at 1:07 PM, Kevin Benton  wrote:
> > This error seems to be new in the ocata cycle. It's either related to a
> > dependency change or the fact that we put Apache in between the services
> > now. Handling more concurrent requests than workers wasn't an issue
> before.
> >
> > It seems that you are suggesting that eventlet can't handle concurrent
> > connections, which is the entire purpose of the library, no?
> >
> > On Feb 2, 2017 13:53, "Sean Dague"  wrote:
> >>
> >> On 02/02/2017 03:32 PM, Armando M. wrote:
> >> >
> >> >
> >> > On 2 February 2017 at 12:19, Sean Dague  >> > > wrote:
> >> >
> >> > On 02/02/2017 02:28 PM, Armando M. wrote:
> >> > >
> >> > >
> >> > > On 2 February 2017 at 10:08, Sean Dague  >> > 
> >> > > >> wrote:
> >> > >
> >> > > On 02/02/2017 12:49 PM, Armando M. wrote:
> >> > > >
> >> > > >
> >> > > > On 2 February 2017 at 08:40, Sean Dague  >> >   >> > >
> >> > > > 
> >> >  >> > > >
> >> > > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> >> > > > 
> >> > > > > 
> >> > > > >
> >> > > > > We definitely aren't saying running a single worker
> is
> >> > how
> >> > > we recommend people
> >> > > > > run OpenStack by doing this. But it just adds on to
> >> > the
> >> > > differences between the
> >> > > > > gate and what we expect things actually look like.
> >> > > >
> >> > > > I'm all for actually getting to the bottom of this,
> but
> >> > > honestly real
> >> > > > memory profiling is needed here. The growth across
> >> > projects
> >> > > probably
> >> > > > means that some common libraries are some part of
> this.
> >> > The
> >> > > ever growing
> >> > > > requirements list is demonstrative of that. Code reuse
> >> > is
> >> > > good, but if
> >> > > > we are importing much of a library to get access to a
> >> > couple of
> >> > > > functions, we're going to take a bunch of memory
> weight
> >> > on that
> >> > > > (especially if that library has friendly auto imports
> in
> >> > top level
> >> > > > __init__.py so we can't get only the parts we want).
> >> > > >
> >> > > > Changing the worker count is just shuffling around
> deck
> >> > chairs.
> >> > > >
> >> > > > I'm not familiar enough with memory profiling tools in
> >> > python
> >> > > to know
> >> > > > the right approach we should take there to get this
> down
> >> > to
> >> > > individual
> >> > > > libraries / objects that are containing all our
> memory.
> >> > Anyone
> >> > > more
> >> > > > skilled here able to help lead the way?
> >> > > >
> >> > > >
> >> > > > From what I hear, the overall consensus on this matter is
> to
> >> > determine
> >> > > > what actually caused the memory consumption bump and how
> to
> >> > > address it,
> >> > > > but that's more of a medium to long term action. In fact,
> to
> >> > me
> >> > > this is
> >> > > > one of the top priority matters we should talk about at
> the
> >> > > imminent PTG.
> >> > > >
> >> > > > For the time being, and to provide relief to the gate,
> >> > should we
> >> > > want to
> >> > > > lock the API_WORKERS to 1? I'll post something for review
> >> > and see how
> >> > > > many people shoot it down :)
> >> > >
> >> > > I don't think we want to do that. It's going to force down
> the
> >> > eventlet
> >> > > API workers to being a single process, and it's not super
> >> > clear that
> >> > > eventlet handles backups on the inbound socket well. I
> >> > honestly would
> >> > > expect that creates different hard to debug issues,
> especially
> >> > with high
> >> > > chatter rates between services.
> >> > >
> >> > 

Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Ihar Hrachyshka
On Thu, Feb 2, 2017 at 7:44 AM, Matthew Treinish  wrote:
> Yeah, I'm curious about this too, there seems to be a big jump in Newton for
> most of the project. It might not a be a single common cause between them, but
> I'd be curious to know what's going on there.

Matt from Nova, as well as Armando and I, suspect
oslo.versionedobjects. The pattern of the memory consumption rise
somewhat correlates with the level of adoption of the library, at least
in Neutron. That being said, we don't have any numbers, so at this point
it's just pointing fingers in Oslo's direction. :) Armando is going to
collect an actual memory profile.
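
One possible starting point for that profile is the stdlib tracemalloc
module (Python 3.4+; a pytracemalloc backport exists for 2.7), which can
rank allocation sites by size after exercising a service; a rough sketch:

    import tracemalloc

    tracemalloc.start(25)  # keep up to 25 frames per allocation site

    # ... drive some API requests against the service under test ...

    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics('lineno')[:10]:
        # Prints file:line plus total size and count for the top allocators.
        print(stat)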

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Armando M.
On 2 February 2017 at 12:50, Sean Dague  wrote:

> On 02/02/2017 03:32 PM, Armando M. wrote:
> >
> >
> > On 2 February 2017 at 12:19, Sean Dague  > > wrote:
> >
> > On 02/02/2017 02:28 PM, Armando M. wrote:
> > >
> > >
> > > On 2 February 2017 at 10:08, Sean Dague 
> > > >> wrote:
> > >
> > > On 02/02/2017 12:49 PM, Armando M. wrote:
> > > >
> > > >
> > > > On 2 February 2017 at 08:40, Sean Dague    > >
> > > > 
> >  > > >
> > > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> > > > 
> > > > > 
> > > > >
> > > > > We definitely aren't saying running a single worker is
> how
> > > we recommend people
> > > > > run OpenStack by doing this. But it just adds on to the
> > > differences between the
> > > > > gate and what we expect things actually look like.
> > > >
> > > > I'm all for actually getting to the bottom of this, but
> > > honestly real
> > > > memory profiling is needed here. The growth across
> projects
> > > probably
> > > > means that some common libraries are some part of this.
> The
> > > ever growing
> > > > requirements list is demonstrative of that. Code reuse is
> > > good, but if
> > > > we are importing much of a library to get access to a
> > couple of
> > > > functions, we're going to take a bunch of memory weight
> > on that
> > > > (especially if that library has friendly auto imports in
> > top level
> > > > __init__.py so we can't get only the parts we want).
> > > >
> > > > Changing the worker count is just shuffling around deck
> > chairs.
> > > >
> > > > I'm not familiar enough with memory profiling tools in
> > python
> > > to know
> > > > the right approach we should take there to get this down
> to
> > > individual
> > > > libraries / objects that are containing all our memory.
> > Anyone
> > > more
> > > > skilled here able to help lead the way?
> > > >
> > > >
> > > > From what I hear, the overall consensus on this matter is to
> > determine
> > > > what actually caused the memory consumption bump and how to
> > > address it,
> > > > but that's more of a medium to long term action. In fact, to
> me
> > > this is
> > > > one of the top priority matters we should talk about at the
> > > imminent PTG.
> > > >
> > > > For the time being, and to provide relief to the gate,
> should we
> > > want to
> > > > lock the API_WORKERS to 1? I'll post something for review
> > and see how
> > > > many people shoot it down :)
> > >
> > > I don't think we want to do that. It's going to force down the
> > eventlet
> > > API workers to being a single process, and it's not super
> > clear that
> > > eventlet handles backups on the inbound socket well. I
> > honestly would
> > > expect that creates different hard to debug issues, especially
> > with high
> > > chatter rates between services.
> > >
> > >
> > > I must admit I share your fear, but out of the tests that I have
> > > executed so far in [1,2,3], the house didn't burn in a fire. I am
> > > looking for other ways to have a substantial memory saving with a
> > > relatively quick and dirty fix, but coming up empty handed thus
> far.
> > >
> > > [1] https://review.openstack.org/#/c/428303/
> > 
> > > [2] https://review.openstack.org/#/c/427919/
> > 
> > > [3] https://review.openstack.org/#/c/427921/
> > 
> >
> > This failure in the first patch -
> > http://logs.openstack.org/03/428303/1/check/gate-tempest-
> dsvm-neutron-full-ubuntu-xenial/71f42ea/logs/screen-n-
> api.txt.gz?level=TRACE#_2017-02-02_19_14_11_751
> >  dsvm-neutron-full-ubuntu-xenial/71f42ea/logs/screen-n-
> api.txt.gz?level=TRACE#_2017-02-02_19_14_11_751>
> > looks exactly like I would expect by API Worker starvation.
> >
> >
> > Not sure I agree on this one, this has been observed multiple times in
> > the gate already [1] (though I am not sure there's a bug 

Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Ihar Hrachyshka
The BadStatusLine error is well known:
https://bugs.launchpad.net/nova/+bug/1630664

Now, it doesn't mean that the root cause of the error message is the
same, and it may as well be that lowering the number of workers
triggered it. All I am saying is we saw that error in the past.

Ihar

On Thu, Feb 2, 2017 at 1:07 PM, Kevin Benton  wrote:
> This error seems to be new in the ocata cycle. It's either related to a
> dependency change or the fact that we put Apache in between the services
> now. Handling more concurrent requests than workers wasn't an issue before.
>
> It seems that you are suggesting that eventlet can't handle concurrent
> connections, which is the entire purpose of the library, no?
>
> On Feb 2, 2017 13:53, "Sean Dague"  wrote:
>>
>> On 02/02/2017 03:32 PM, Armando M. wrote:
>> >
>> >
>> > On 2 February 2017 at 12:19, Sean Dague > > > wrote:
>> >
>> > On 02/02/2017 02:28 PM, Armando M. wrote:
>> > >
>> > >
>> > > On 2 February 2017 at 10:08, Sean Dague > > 
>> > > >> wrote:
>> > >
>> > > On 02/02/2017 12:49 PM, Armando M. wrote:
>> > > >
>> > > >
>> > > > On 2 February 2017 at 08:40, Sean Dague > >  > > >
>> > > > 
>> > > > > >
>> > > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
>> > > > 
>> > > > > 
>> > > > >
>> > > > > We definitely aren't saying running a single worker is
>> > how
>> > > we recommend people
>> > > > > run OpenStack by doing this. But it just adds on to
>> > the
>> > > differences between the
>> > > > > gate and what we expect things actually look like.
>> > > >
>> > > > I'm all for actually getting to the bottom of this, but
>> > > honestly real
>> > > > memory profiling is needed here. The growth across
>> > projects
>> > > probably
>> > > > means that some common libraries are some part of this.
>> > The
>> > > ever growing
>> > > > requirements list is demonstrative of that. Code reuse
>> > is
>> > > good, but if
>> > > > we are importing much of a library to get access to a
>> > couple of
>> > > > functions, we're going to take a bunch of memory weight
>> > on that
>> > > > (especially if that library has friendly auto imports in
>> > top level
>> > > > __init__.py so we can't get only the parts we want).
>> > > >
>> > > > Changing the worker count is just shuffling around deck
>> > chairs.
>> > > >
>> > > > I'm not familiar enough with memory profiling tools in
>> > python
>> > > to know
>> > > > the right approach we should take there to get this down
>> > to
>> > > individual
>> > > > libraries / objects that are containing all our memory.
>> > Anyone
>> > > more
>> > > > skilled here able to help lead the way?
>> > > >
>> > > >
>> > > > From what I hear, the overall consensus on this matter is to
>> > determine
>> > > > what actually caused the memory consumption bump and how to
>> > > address it,
>> > > > but that's more of a medium to long term action. In fact, to
>> > me
>> > > this is
>> > > > one of the top priority matters we should talk about at the
>> > > imminent PTG.
>> > > >
>> > > > For the time being, and to provide relief to the gate,
>> > should we
>> > > want to
>> > > > lock the API_WORKERS to 1? I'll post something for review
>> > and see how
>> > > > many people shoot it down :)
>> > >
>> > > I don't think we want to do that. It's going to force down the
>> > eventlet
>> > > API workers to being a single process, and it's not super
>> > clear that
>> > > eventlet handles backups on the inbound socket well. I
>> > honestly would
>> > > expect that creates different hard to debug issues, especially
>> > with high
>> > > chatter rates between services.
>> > >
>> > >
>> > > I must admit I share your fear, but out of the tests that I have
>> > > executed so far in [1,2,3], the house didn't burn in a fire. I am
>> > > looking for other ways to have a substantial memory saving with a
>> > > relatively quick and dirty fix, but coming up empty handed thus
>> > far.
>> > >
>> > > [1] https://review.openstack.org/#/c/428303/
>> >   

[openstack-dev] [horizon] horizon 11.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for horizon for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/horizon/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/ocata

Release notes for horizon can be found at:

http://docs.openstack.org/releasenotes/horizon/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Large Contributing OpenStack Operators working group?

2017-02-02 Thread Hayes, Graham
On 02/02/2017 20:17, Jay Pipes wrote:
> Hi,
>
> I was told about this group today. I have a few questions. Hopefully
> someone from this team can illuminate me with some answers.
>
> 1) What is the purpose of this group? The wiki states that the team
> "aims to define the use cases and identify and prioritise the
> requirements which are needed to deploy, manage, and run services on top
> of OpenStack. This work includes identifying functional gaps, creating
> blueprints, submitting and reviewing patches to the relevant OpenStack
> projects, contributing to working those items, tracking their completion."
>
> What is the difference between the LCOO and the following existing
> working groups?
>
>   * Large Deployment Team
>   * Massively Distributed Team
>   * Product Working Group
>   * Telco/NFV Working Group
>
> 2) According to the wiki page, only companies that are "Multi-Cloud
> Operator[s] and/or Network Service Provider[s]" are welcome in this
> team. Why is the team called "Large Contributing OpenStack Operators" if
> it's only for Telcos? Further, if this is truly only for Telcos, why
> isn't the Telco/NFV working group appropriate?
>
> 3) Under the "Guiding principles" section of the above wiki, the top
> principle is "Align with the OpenStack Foundation". If this is the case,
> why did the group move its content to the closed Atlassian Confuence
> platform? Why does the group have a set of separate Slack channels
> instead of using the OpenStack mailing lists and IRC channels? Why is
> the OPNFV Jira used for tracking work items for the LCOO agenda?
>
> See https://wiki.openstack.org/wiki/Gluon/Tasks-Ocata for examples.
>
> 4) I see a lot of agenda items around projects like Gluon, Craton,
> Watcher, and Blazar. I don't see any concrete ideas about talking with
> the developers of the key infrastructure services that OpenStack is
> built around. How does the LCOO plan on reaching out to the developers
> of the long-standing OpenStack projects like Nova, Neutron, Cinder, and
> Keystone to drive their shared agenda?
>
> Thanks for reading and (hopefully) answering.
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

The entire wiki page [0] for this group is worrying.

For example:

 > If a member should fail to continue to meet these minimum criteria 
after joining, their membership may be revoked through an action of the 
board.

and no, that is not the OpenStack board.

 From the etherpad [1] -

 > Reminder that our etherpad documents should document what was 
discussed and our decisions but should not contain organizational 
sensitive information or similar in process items.


It does seem not to be fully "open" per our four opens. It says it is
under the user committee, but I do not see it listed on their page [2].


0 - https://wiki.openstack.org/wiki/LCOO
1 - 
https://etherpad.openstack.org/p/Large_Contributing_OpenStack_Operators_Master
2 - https://governance.openstack.org/uc/#working-groups

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo]FFE to update Eqlx and DellSc cinder templates

2017-02-02 Thread Emilien Macchi
On Thu, Feb 2, 2017 at 2:05 PM,   wrote:
> Dell - Internal Use - Confidential
>
>
>
>
>
> I would like to request a freeze exception to update Dell EqualLogic and
> Dell Storage Center cinder backend templates to use composable roles and
> services in Triple-o. This work is done and pending merge for the past few
> weeks. Without these we won’t be able to do upgrades.
>
>
>
> Dell Eqlx: https://review.openstack.org/#/c/422238/
>
> Dell Storage Center: https://review.openstack.org/#/c/425866/
>

Low risk of breaking something: the change is not disruptive for CI and
the code is almost ready.
FFE granted!

>
>
>
> Thank you
>
> Rajini
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] FFE to add ScaleIO to Triple-o

2017-02-02 Thread Emilien Macchi
On Thu, Feb 2, 2017 at 2:00 PM,   wrote:
> Dell - Internal Use - Confidential
>
>
>
> I would like to request a feature freeze exception to add ScaleIO cinder
> backend support to TripleO. This work is done and has been pending review
> for the past three weeks. The puppet-cinder work is already merged.
>
>
>
> Pending review https://review.openstack.org/#/c/422238/

This change is not disruptive and passing CI. Granted!

>
>
> Thanks
>
> Rajini
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Kevin Benton
This error seems to be new in the ocata cycle. It's either related to a
dependency change or the fact that we put Apache in between the services
now. Handling more concurrent requests than workers wasn't an issue before.


It seems that you are suggesting that eventlet can't handle concurrent
connections, which is the entire purpose of the library, no?

On Feb 2, 2017 13:53, "Sean Dague"  wrote:

> On 02/02/2017 03:32 PM, Armando M. wrote:
> >
> >
> > On 2 February 2017 at 12:19, Sean Dague  > > wrote:
> >
> > On 02/02/2017 02:28 PM, Armando M. wrote:
> > >
> > >
> > > On 2 February 2017 at 10:08, Sean Dague 
> > > >> wrote:
> > >
> > > On 02/02/2017 12:49 PM, Armando M. wrote:
> > > >
> > > >
> > > > On 2 February 2017 at 08:40, Sean Dague    > >
> > > > 
> >  > > >
> > > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> > > > 
> > > > > 
> > > > >
> > > > > We definitely aren't saying running a single worker is
> how
> > > we recommend people
> > > > > run OpenStack by doing this. But it just adds on to the
> > > differences between the
> > > > > gate and what we expect things actually look like.
> > > >
> > > > I'm all for actually getting to the bottom of this, but
> > > honestly real
> > > > memory profiling is needed here. The growth across
> projects
> > > probably
> > > > means that some common libraries are some part of this.
> The
> > > ever growing
> > > > requirements list is demonstrative of that. Code reuse is
> > > good, but if
> > > > we are importing much of a library to get access to a
> > couple of
> > > > functions, we're going to take a bunch of memory weight
> > on that
> > > > (especially if that library has friendly auto imports in
> > top level
> > > > __init__.py so we can't get only the parts we want).
> > > >
> > > > Changing the worker count is just shuffling around deck
> > chairs.
> > > >
> > > > I'm not familiar enough with memory profiling tools in
> > python
> > > to know
> > > > the right approach we should take there to get this down
> to
> > > individual
> > > > libraries / objects that are containing all our memory.
> > Anyone
> > > more
> > > > skilled here able to help lead the way?
> > > >
> > > >
> > > > From what I hear, the overall consensus on this matter is to
> > determine
> > > > what actually caused the memory consumption bump and how to
> > > address it,
> > > > but that's more of a medium to long term action. In fact, to
> me
> > > this is
> > > > one of the top priority matters we should talk about at the
> > > imminent PTG.
> > > >
> > > > For the time being, and to provide relief to the gate,
> should we
> > > want to
> > > > lock the API_WORKERS to 1? I'll post something for review
> > and see how
> > > > many people shoot it down :)
> > >
> > > I don't think we want to do that. It's going to force down the
> > eventlet
> > > API workers to being a single process, and it's not super
> > clear that
> > > eventlet handles backups on the inbound socket well. I
> > honestly would
> > > expect that creates different hard to debug issues, especially
> > with high
> > > chatter rates between services.
> > >
> > >
> > > I must admit I share your fear, but out of the tests that I have
> > > executed so far in [1,2,3], the house didn't burn in a fire. I am
> > > looking for other ways to have a substantial memory saving with a
> > > relatively quick and dirty fix, but coming up empty handed thus
> far.
> > >
> > > [1] https://review.openstack.org/#/c/428303/
> > 
> > > [2] https://review.openstack.org/#/c/427919/
> > 
> > > [3] https://review.openstack.org/#/c/427921/
> > 
> >
> > This failure in the first patch -
> > http://logs.openstack.org/03/428303/1/check/gate-tempest-
> dsvm-neutron-full-ubuntu-xenial/71f42ea/logs/screen-n-
> api.txt.gz?level=TRACE#_2017-02-02_19_14_11_751
> > 

Re: [openstack-dev] [keystone] removing Guang Yee (gyee) from keystone-core

2017-02-02 Thread Brad Topol
+1!!! Thanks Guang for all your hard work and your outstanding contributions to Keystone. You were always a pleasure to work with. I wish you all the best on your new adventure!

--Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet: bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680

- Original message -
From: Henry Nash
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:
Subject: Re: [openstack-dev] [keystone] removing Guang Yee (gyee) from keystone-core
Date: Thu, Feb 2, 2017 10:36 AM

Thanks, Guang, for your valuable contributions.

Henry

> On 2 Feb 2017, at 05:13, Steve Martinelli wrote:
>
> Due to inactivity and a change in his day job, Guang was informed that he would be removed from keystone-core, a change he understands and supports.
>
> I'd like to publicly thank Guang for his years of service as a core member. He juggled upstream and downstream responsibilities at HP while bringing real world use cases to the table.
>
> Thanks for everything Guang, o\
>
> Steve
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Sean Dague
On 02/02/2017 03:32 PM, Armando M. wrote:
> 
> 
> On 2 February 2017 at 12:19, Sean Dague  > wrote:
> 
> On 02/02/2017 02:28 PM, Armando M. wrote:
> >
> >
> > On 2 February 2017 at 10:08, Sean Dague  
> > >> wrote:
> >
> > On 02/02/2017 12:49 PM, Armando M. wrote:
> > >
> > >
> > > On 2 February 2017 at 08:40, Sean Dague    >
> > > 
>  > >
> > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> > > 
> > > > 
> > > >
> > > > We definitely aren't saying running a single worker is how
> > we recommend people
> > > > run OpenStack by doing this. But it just adds on to the
> > differences between the
> > > > gate and what we expect things actually look like.
> > >
> > > I'm all for actually getting to the bottom of this, but
> > honestly real
> > > memory profiling is needed here. The growth across projects
> > probably
> > > means that some common libraries are some part of this. The
> > ever growing
> > > requirements list is demonstrative of that. Code reuse is
> > good, but if
> > > we are importing much of a library to get access to a
> couple of
> > > functions, we're going to take a bunch of memory weight
> on that
> > > (especially if that library has friendly auto imports in
> top level
> > > __init__.py so we can't get only the parts we want).
> > >
> > > Changing the worker count is just shuffling around deck
> chairs.
> > >
> > > I'm not familiar enough with memory profiling tools in
> python
> > to know
> > > the right approach we should take there to get this down to
> > individual
> > > libraries / objects that are containing all our memory.
> Anyone
> > more
> > > skilled here able to help lead the way?
> > >
> > >
> > > From what I hear, the overall consensus on this matter is to
> determine
> > > what actually caused the memory consumption bump and how to
> > address it,
> > > but that's more of a medium to long term action. In fact, to me
> > this is
> > > one of the top priority matters we should talk about at the
> > imminent PTG.
> > >
> > > For the time being, and to provide relief to the gate, should we
> > want to
> > > lock the API_WORKERS to 1? I'll post something for review
> and see how
> > > many people shoot it down :)
> >
> > I don't think we want to do that. It's going to force down the
> eventlet
> > API workers to being a single process, and it's not super
> clear that
> > eventlet handles backups on the inbound socket well. I
> honestly would
> > expect that creates different hard to debug issues, especially
> with high
> > chatter rates between services.
> >
> >
> > I must admit I share your fear, but out of the tests that I have
> > executed so far in [1,2,3], the house didn't burn in a fire. I am
> > looking for other ways to have a substantial memory saving with a
> > relatively quick and dirty fix, but coming up empty handed thus far.
> >
> > [1] https://review.openstack.org/#/c/428303/
> 
> > [2] https://review.openstack.org/#/c/427919/
> 
> > [3] https://review.openstack.org/#/c/427921/
> 
> 
> This failure in the first patch -
> 
> http://logs.openstack.org/03/428303/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/71f42ea/logs/screen-n-api.txt.gz?level=TRACE#_2017-02-02_19_14_11_751
> 
> 
> looks exactly like I would expect by API Worker starvation.
> 
> 
> Not sure I agree on this one, this has been observed multiple times in
> the gate already [1] (though I am not sure there's a bug for it), and I
> don't believe it has anything to do with the number of API workers,
> unless not even two workers are enough.

There is no guarantee that 2 workers are enough. I'm not surprised if we
see some of that failure today. This was all guesswork on trimming worker
counts to 

[openstack-dev] [all][infra] Removal of tox-db- jobs and launching of MySQL/PostgreSQL

2017-02-02 Thread Julien Danjou
Hi,

It seems infra is moving forward on this project, which was announced in
November:

  http://lists.openstack.org/pipermail/openstack-dev/2016-November/107784.html

Andreas Jaeger kindly sent a bunch of patches with a bash script to set up
MySQL/PostgreSQL.

However, I just want to point out that a more robust¹ solution written
in Python was started a while back – and has been widely used in
Telemetry and in Oslo for a while. It is named pifpaf:

  https://github.com/jd/pifpaf

Hope that helps!

Cheers,

¹  YMMV

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Armando M.
On 2 February 2017 at 12:19, Sean Dague  wrote:

> On 02/02/2017 02:28 PM, Armando M. wrote:
> >
> >
> > On 2 February 2017 at 10:08, Sean Dague  > > wrote:
> >
> > On 02/02/2017 12:49 PM, Armando M. wrote:
> > >
> > >
> > > On 2 February 2017 at 08:40, Sean Dague 
> > > >> wrote:
> > >
> > > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> > > 
> > > > 
> > > >
> > > > We definitely aren't saying running a single worker is how
> > we recommend people
> > > > run OpenStack by doing this. But it just adds on to the
> > differences between the
> > > > gate and what we expect things actually look like.
> > >
> > > I'm all for actually getting to the bottom of this, but
> > honestly real
> > > memory profiling is needed here. The growth across projects
> > probably
> > > means that some common libraries are some part of this. The
> > ever growing
> > > requirements list is demonstrative of that. Code reuse is
> > good, but if
> > > we are importing much of a library to get access to a couple of
> > > functions, we're going to take a bunch of memory weight on that
> > > (especially if that library has friendly auto imports in top
> level
> > > __init__.py so we can't get only the parts we want).
> > >
> > > Changing the worker count is just shuffling around deck chairs.
> > >
> > > I'm not familiar enough with memory profiling tools in python
> > to know
> > > the right approach we should take there to get this down to
> > individual
> > > libraries / objects that are containing all our memory. Anyone
> > more
> > > skilled here able to help lead the way?
> > >
> > >
> > > From what I hear, the overall consensus on this matter is to
> determine
> > > what actually caused the memory consumption bump and how to
> > address it,
> > > but that's more of a medium to long term action. In fact, to me
> > this is
> > > one of the top priority matters we should talk about at the
> > imminent PTG.
> > >
> > > For the time being, and to provide relief to the gate, should we
> > want to
> > > lock the API_WORKERS to 1? I'll post something for review and see
> how
> > > many people shoot it down :)
> >
> > I don't think we want to do that. It's going to force down the
> eventlet
> > API workers to being a single process, and it's not super clear that
> > eventlet handles backups on the inbound socket well. I honestly would
> > expect that creates different hard to debug issues, especially with
> high
> > chatter rates between services.
> >
> >
> > I must admit I share your fear, but out of the tests that I have
> > executed so far in [1,2,3], the house didn't burn in a fire. I am
> > looking for other ways to have a substantial memory saving with a
> > relatively quick and dirty fix, but coming up empty handed thus far.
> >
> > [1] https://review.openstack.org/#/c/428303/
> > [2] https://review.openstack.org/#/c/427919/
> > [3] https://review.openstack.org/#/c/427921/
>
> This failure in the first patch -
> http://logs.openstack.org/03/428303/1/check/gate-tempest-
> dsvm-neutron-full-ubuntu-xenial/71f42ea/logs/screen-n-
> api.txt.gz?level=TRACE#_2017-02-02_19_14_11_751
> looks exactly like I would expect by API Worker starvation.
>

Not sure I agree on this one, this has been observed multiple times in the
gate already [1] (though I am not sure there's a bug for it), and I don't
believe it has anything to do with the number of API workers, unless not
even two workers are enough.

[1]
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22('Connection%20aborted.'%2C%20BadStatusLine(%5C%22''%5C%22%2C)%5C%22



> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Mike Bayer



On 02/02/2017 02:52 PM, Mike Bayer wrote:


But more critically I noticed you referred to altering the names of
columns to suit NDB.  How will this be accomplished?   Changing a column
name in an openstack application is no longer trivial, because online
upgrades must be supported for applications like Nova and Neutron.  A
column name can't just change to a new name, both columns have to exist
and logic must be added to keep these columns synchronized.



Correction: the phrase was "Row character length limits 65k -> 14k" - 
does this refer to the total size of a row? I guess rows that store 
JSON or tables like keystone tokens are what you had in mind here; can 
you give specifics?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Sean Dague
On 02/02/2017 02:28 PM, Armando M. wrote:
> 
> 
> On 2 February 2017 at 10:08, Sean Dague  > wrote:
> 
> On 02/02/2017 12:49 PM, Armando M. wrote:
> >
> >
> > On 2 February 2017 at 08:40, Sean Dague  
> > >> wrote:
> >
> > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> > 
> > > 
> > >
> > > We definitely aren't saying running a single worker is how
> we recommend people
> > > run OpenStack by doing this. But it just adds on to the
> differences between the
> > > gate and what we expect things actually look like.
> >
> > I'm all for actually getting to the bottom of this, but
> honestly real
> > memory profiling is needed here. The growth across projects
> probably
> > means that some common libraries are some part of this. The
> ever growing
> > requirements list is demonstrative of that. Code reuse is
> good, but if
> > we are importing much of a library to get access to a couple of
> > functions, we're going to take a bunch of memory weight on that
> > (especially if that library has friendly auto imports in top level
> > __init__.py so we can't get only the parts we want).
> >
> > Changing the worker count is just shuffling around deck chairs.
> >
> > I'm not familiar enough with memory profiling tools in python
> to know
> > the right approach we should take there to get this down to
> individual
> > libraries / objects that are containing all our memory. Anyone
> more
> > skilled here able to help lead the way?
> >
> >
> > From what I hear, the overall consensus on this matter is to determine
> > what actually caused the memory consumption bump and how to
> address it,
> > but that's more of a medium to long term action. In fact, to me
> this is
> > one of the top priority matters we should talk about at the
> imminent PTG.
> >
> > For the time being, and to provide relief to the gate, should we
> want to
> > lock the API_WORKERS to 1? I'll post something for review and see how
> > many people shoot it down :)
> 
> I don't think we want to do that. It's going to force down the eventlet
> API workers to being a single process, and it's not super clear that
> eventlet handles backups on the inbound socket well. I honestly would
> expect that creates different hard to debug issues, especially with high
> chatter rates between services.
> 
> 
> I must admit I share your fear, but out of the tests that I have
> executed so far in [1,2,3], the house didn't burn in a fire. I am
> looking for other ways to have a substantial memory saving with a
> relatively quick and dirty fix, but coming up empty handed thus far.
> 
> [1] https://review.openstack.org/#/c/428303/
> [2] https://review.openstack.org/#/c/427919/
> [3] https://review.openstack.org/#/c/427921/

This failure in the first patch -
http://logs.openstack.org/03/428303/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/71f42ea/logs/screen-n-api.txt.gz?level=TRACE#_2017-02-02_19_14_11_751
looks exactly like I would expect by API Worker starvation.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Large Contributing OpenStack Operators working group?

2017-02-02 Thread Jay Pipes

Hi,

I was told about this group today. I have a few questions. Hopefully 
someone from this team can illuminate me with some answers.


1) What is the purpose of this group? The wiki states that the team 
"aims to define the use cases and identify and prioritise the 
requirements which are needed to deploy, manage, and run services on top 
of OpenStack. This work includes identifying functional gaps, creating 
blueprints, submitting and reviewing patches to the relevant OpenStack 
projects, contributing to working those items, tracking their completion."


What is the difference between the LCOO and the following existing 
working groups?


 * Large Deployment Team
 * Massively Distributed Team
 * Product Working Group
 * Telco/NFV Working Group

2) According to the wiki page, only companies that are "Multi-Cloud 
Operator[s] and/or Network Service Provider[s]" are welcome in this 
team. Why is the team called "Large Contributing OpenStack Operators" if 
it's only for Telcos? Further, if this is truly only for Telcos, why 
isn't the Telco/NFV working group appropriate?


3) Under the "Guiding principles" section of the above wiki, the top 
principle is "Align with the OpenStack Foundation". If this is the case, 
why did the group move its content to the closed Atlassian Confluence 
platform? Why does the group have a set of separate Slack channels 
instead of using the OpenStack mailing lists and IRC channels? Why is 
the OPNFV Jira used for tracking work items for the LCOO agenda?


See https://wiki.openstack.org/wiki/Gluon/Tasks-Ocata for examples.

4) I see a lot of agenda items around projects like Gluon, Craton, 
Watcher, and Blazar. I don't see any concrete ideas about talking with 
the developers of the key infrastructure services that OpenStack is 
built around. How does the LCOO plan on reaching out to the developers 
of the long-standing OpenStack projects like Nova, Neutron, Cinder, and 
Keystone to drive their shared agenda?


Thanks for reading and (hopefully) answering.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Doug Hellmann
Excerpts from Octave J. Orgeron's message of 2017-02-02 12:16:15 -0700:
> Hi Doug,
> 
> Comments below..
> 
> Thanks,
> Octave
> 
> On 2/2/2017 11:27 AM, Doug Hellmann wrote:
> > Excerpts from Octave J. Orgeron's message of 2017-02-02 09:40:23 -0700:
> >> Hi Doug,
> >>
> >> One could try to detect the default engine. However, in MySQL Cluster,
> >> you can support multiple storage engines. Only NDB is fully clustered
> >> and replicated, so if you accidentally set a table to be InnoDB it won't
> >> be replicated . So it makes more sense for the operator to be explicit
> >> on which engine they want to use.
> > I think this change is probably a bigger scale item than I understood
> > it to be when you originally contacted me off-list for advice about
> > how to get started. I hope I haven't steered you too far wrong, but
> > at least the conversation is started.
> >
> > As someone (Mike?) pointed out on the review, the option by itself
> > doesn't do much of anything, now. Before we add it, I think we'll
> > want to see some more detail about how it's going used. It may be
> > easier to have that broader conversation here on email than on the
> > patch currently up for review.
> 
> Understood, it's a complicated topic since it involves gritty details in 
> SQL Alchemy and Alembic that are masked from end-users and operators 
> alike. Figuring out how to make this work did take some time on my part.
> 
> >
> > It sounds like part of the plan is to use the configuration setting
> > to control how the migration scripts create tables. How will that
> > work? Does each migration need custom logic, or can we build helpers
> > into oslo.db somehow? Or will the option be passed to the database
> > to change its behavior transparently?
> 
> These are good questions. For each service, when the db sync or db 
> manage operation is done it will call into SQL Alchemy or Alembic 
> depending on the methods used by the given service. For example, most 
> use SQL Alchemy, but there are services like Ironic and Neutron that use 
> Alembic. It is within these scripts under the /db/* hierarchy 
> that the logic exist today to configure the database schema for any 
> given service. Both approaches will look at the schema version in the 
> database to determine where to start the create, upgrade, heal, etc. 
> operations. What my patches do is that in the scripts where a table 
> needs to be modified, there will be custom IF/THEN logic to check the 
> cfg.CONF.database.mysql_storage_engine setting to make the required 
> modifications. There are also use cases where the api.py or model(s).py 
> under the /db/ hierarchy needs to look at this setting as well 
> for API and CLI operations where mysql_engine is auto-inserted into DB 
> operations. In those use cases, I replace the hard coded "InnoDB" with 
> the mysql_storage_engine variable.

So all existing scripts that create or modify tables will need to
be updated? That's going to be a lot of work. It will also be a lot
of work to ensure that new alter scripts are implemented using the
required logic, and that testing happens in the gates for all
projects supporting this feature to ensure there are no regressions
or behavioral changes in the applications as a result of the changes
in table definitions.

I'll let the folks more familiar with databases in general and MySQL
in particular respond to some of the technical details, but I think
I should give you fair warning that you're taking on a very big
project, especially for someone new to the community.

> It would be interesting if we could develop some helpers to automate 
> this, but it would probably have to be at the SQL Alchemy or Alembic 
> levels. Unfortunately, throughout all of the OpenStack services today we 
> are hard coding things like mysql_engine, using InnoDB specific features 
> (savepoints, nested operations, etc.), and not following the strict SQL 
> orders for modifying table elements (foreign keys, constraints, and 
> indexes). That actually makes it difficult to support other MySQL 
> dialects or other databases out of the box. SQL Alchemy can be used to 
> fix some of these things if the SQL statements are all generic and we 
> follow strict SQL rules. But to change that would be a monumental 
> effort. That is why I took this approach of just adding custom logic. 
> There is a precedent for this already for Postgres and DB2 support in 
> some of the OpenStack services using custom logic to deal with similar 
> differences.
> 
> As to why we should place the configuration setting into oslo.db? Here 
> are a couple of logical reasons:

Oh, I'm not questioning putting the option in oslo.db. I think that's
clearly the right place to put it.

> 
>   * The configuration block for database settings for each service comes
> from the oslo.db namespace today under cfg.CONF.database.*. Placing
> it here makes the location consistent across all of the services.
>   * Within the SQL Alchemy and Alembic 

Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Mike Bayer



On 02/02/2017 02:16 PM, Octave J. Orgeron wrote:

Hi Doug,

Comments below..

Thanks,
Octave

On 2/2/2017 11:27 AM, Doug Hellmann wrote:

It sounds like part of the plan is to use the configuration setting
to control how the migration scripts create tables. How will that
work? Does each migration need custom logic, or can we build helpers
into oslo.db somehow? Or will the option be passed to the database
to change its behavior transparently?


These are good questions. For each service, when the db sync or db
manage operation is done it will call into SQL Alchemy or Alembic
depending on the methods used by the given service. For example, most
use SQL Alchemy, but there are services like Ironic and Neutron that use
Alembic. It is within these scripts under the /db/* hierarchy
that the logic exist today to configure the database schema for any
given service. Both approaches will look at the schema version in the
database to determine where to start the create, upgrade, heal, etc.
operations. What my patches do is that in the scripts where a table
needs to be modified, there will be custom IF/THEN logic to check the
cfg.CONF.database.mysql_storage_engine setting to make the required
modifications. There are also use cases where the api.py or model(s).py
under the /db/ hierarchy needs to look at this setting as well
for API and CLI operations where mysql_engine is auto-inserted into DB
operations. In those use cases, I replace the hard coded "InnoDB" with
the mysql_storage_engine variable.


can you please clarify "replace the hard coded "InnoDB"" ?  Are you 
proposing to send reviews for patches against all occurrences of 
"InnoDB" in files like 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py ? 
The "InnoDB" keyword is hardcoded in hundreds of migration files 
across all openstack projects that use MySQL.   Are all of these going 
to be patched with some kind of conditional?





It would be interesting if we could develop some helpers to automate
this, but it would probably have to be at the SQL Alchemy or Alembic
levels.


not really, you can build a hook that intercepts operations like 
CreateTable, or that intercepts SQL as it is emitted over a connection, 
in order to modify these values on the fly.  But that is a specific kind 
of approach with its own set of surprises.   Alternatively you can make 
an alternate SQLAlchemy dialect that no longer recognizes "mysql_*" as 
the prefix for these arguments.   There are ways to do this part.
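
Just to illustrate that second option: a minimal sketch of the "intercept 
SQL as it is emitted" approach could hang a SQLAlchemy event hook off the 
engine, something like the following (the connection URL and the blanket 
InnoDB -> NDBCLUSTER substitution are assumptions for illustration, not a 
recommendation):

    from sqlalchemy import create_engine, event

    engine = create_engine('mysql+pymysql://user:secret@localhost/nova')

    @event.listens_for(engine, 'before_cursor_execute', retval=True)
    def _force_ndb(conn, cursor, statement, parameters, context,
                   executemany):
        # Rewrite CREATE TABLE statements on the fly so the hard coded
        # ENGINE=InnoDB suffix becomes ENGINE=NDBCLUSTER before it
        # reaches the server.
        if statement.lstrip().upper().startswith('CREATE TABLE'):
            statement = statement.replace('ENGINE=InnoDB',
                                          'ENGINE=NDBCLUSTER')
        return statement, parameters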


But more critically I noticed you referred to altering the names of 
columns to suit NDB.  How will this be accomplished?   Changing a column 
name in an openstack application is no longer trivial, because online 
upgrades must be supported for applications like Nova and Neutron.  A 
column name can't just change to a new name, both columns have to exist 
and logic must be added to keep these columns synchronized.
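
If that route is taken, the usual shape is an expand/contract migration; 
a hedged Alembic sketch (table and column names invented here) would be:

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # Expand phase: add the new column alongside the old one.  The
        # old 'hostname' column stays in place so running services keep
        # working during the online upgrade.
        op.add_column('instances',
                      sa.Column('host_name', sa.String(255),
                                nullable=True))
        # Backfilling data and dropping the old column are deferred to
        # later releases, once no running service reads the old name.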


Unfortunately, throughout all of the OpenStack services today we

are hard coding things like mysql_engine, using InnoDB specific features
(savepoints, nested operations, etc.), and not following the strict SQL
orders for modifying table elements (foreign keys, constraints, and
indexes).


Savepoints aren't InnoDB specific; they are a standard SQL feature, and 
their use is not widespread right now.   I'm not sure what you mean 
by "the strict SQL orders"; we use ALTER TABLE as is standard in MySQL 
for this, and it's behind an abstraction layer that supports other 
databases such as Postgresql.





  * Many of the SQL Alchemy and Alembic scripts only import the minimal
set of python modules. If we imported others, we would also have to
initialize those name spaces which means a lot more code :(


I'm not sure what this means, can you clarify ?


   * Reduces the amount of overhead required to make these changes.

What sort of "overhead", do you mean code complexity, performance ?








Keep in mind that we do not encourage code outside of libraries to
rely on configuration settings defined within libraries, because
that limits our ability to change the names and locations of the
configuration variables.  If migration scripts need to access the
configuration setting we will need to add some sort of public API
to oslo.db to query the value. The function can simply return the
configured value.


Configuration parameters within any given service will make use of a
large namespace that pulls in things from oslo and the .conf files for a
given service. So even when an API, CLI, or DB related call is made,
these namespaces are key for things to work. In the case of the SQL
Alchemy and Alembic scripts, they also make use of this namespace with
oslo, oslo.db, etc. to figure out how to connect to the database and
other database settings. I don't think we need a public API for these
kinds of calls as the community already makes use of the libraries to
build the namespace. My oslo.db setting 

Re: [openstack-dev] [storlets] ocata branch

2017-02-02 Thread Eran Rom
Apologies for the date confusion in the below mail.
This should happen on the week of the 13th, and clearly we need to base our 
branch on Swift’s Ocata branch.

> On Feb 1, 2017, at 10:37 AM, Eran Rom  wrote:
> 
> Hi all,
> I will create the stable/ocata branch tomorrow my end of day.
> Would be great if we can land the python functional tests.
> 
> Thanks!
> Eran
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron

Hi Doug,

Comments below..

Thanks,
Octave

On 2/2/2017 11:27 AM, Doug Hellmann wrote:

Excerpts from Octave J. Orgeron's message of 2017-02-02 09:40:23 -0700:

Hi Doug,

One could try to detect the default engine. However, in MySQL Cluster,
you can support multiple storage engines. Only NDB is fully clustered
and replicated, so if you accidentally set a table to be InnoDB it won't
be replicated . So it makes more sense for the operator to be explicit
on which engine they want to use.

I think this change is probably a bigger scale item than I understood
it to be when you originally contacted me off-list for advice about
how to get started. I hope I haven't steered you too far wrong, but
at least the conversation is started.

As someone (Mike?) pointed out on the review, the option by itself
doesn't do much of anything, now. Before we add it, I think we'll
want to see some more detail about how it's going to be used. It may be
easier to have that broader conversation here on email than on the
patch currently up for review.


Understood, it's a complicated topic since it involves gritty details in 
SQL Alchemy and Alembic that are masked from end-users and operators 
alike. Figuring out how to make this work did take some time on my part.




It sounds like part of the plan is to use the configuration setting
to control how the migration scripts create tables. How will that
work? Does each migration need custom logic, or can we build helpers
into oslo.db somehow? Or will the option be passed to the database
to change its behavior transparently?


These are good questions. For each service, when the db sync or db 
manage operation is done it will call into SQL Alchemy or Alembic 
depending on the methods used by the given service. For example, most 
use SQL Alchemy, but there are services like Ironic and Neutron that use 
Alembic. It is within these scripts under the /db/* hierarchy 
that the logic exist today to configure the database schema for any 
given service. Both approaches will look at the schema version in the 
database to determine where to start the create, upgrade, heal, etc. 
operations. What my patches do is that in the scripts where a table 
needs to be modified, there will be custom IF/THEN logic to check the 
cfg.CONF.database.mysql_storage_engine setting to make the required 
modifications. There are also use cases where the api.py or model(s).py 
under the /db/ hierarchy needs to look at this setting as well 
for API and CLI operations where mysql_engine is auto-inserted into DB 
operations. In those use cases, I replace the hard coded "InnoDB" with 
the mysql_storage_engine variable.
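
To give a flavor of what that branch looks like, here is a purely 
illustrative sqlalchemy-migrate style sketch (the table, column widths, 
and engine names below are invented for this example and are not taken 
from the actual patches; it assumes the proposed oslo.db option has 
already been registered):

    from oslo_config import cfg
    from sqlalchemy import Column, Integer, MetaData, String, Table

    CONF = cfg.CONF

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)

        # NDB limits a row to roughly 14k bytes, so wide VARCHAR columns
        # that are fine under InnoDB are trimmed when NDB is selected.
        if CONF.database.mysql_storage_engine == 'NDBCLUSTER':
            engine_name, detail_len = 'NDBCLUSTER', 4096
        else:
            engine_name, detail_len = 'InnoDB', 16383

        Table('example_resources', meta,
              Column('id', Integer, primary_key=True),
              Column('name', String(255)),
              Column('detail', String(detail_len)),
              mysql_engine=engine_name,
              mysql_charset='utf8').create(checkfirst=True)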


It would be interesting if we could develop some helpers to automate 
this, but it would probably have to be at the SQL Alchemy or Alembic 
levels. Unfortunately, throughout all of the OpenStack services today we 
are hard coding things like mysql_engine, using InnoDB specific features 
(savepoints, nested operations, etc.), and not following the strict SQL 
orders for modifying table elements (foreign keys, constraints, and 
indexes). That actually makes it difficult to support other MySQL 
dialects or other databases out of the box. SQL Alchemy can be used to 
fix some of these things if the SQL statements are all generic and we 
follow strict SQL rules. But to change that would be a monumental 
effort. That is why I took this approach of just adding custom logic. 
There is a precedent for this already for Postgres and DB2 support in 
some of the OpenStack services using custom logic to deal with similar 
differences.


As to why we should place the configuration setting into oslo.db? Here 
are a couple of logical reasons:


 * The configuration block for database settings for each service comes
   from the oslo.db namespace today under cfg.CONF.database.*. Placing
   it here makes the location consistent across all of the services.
 * Within the SQL Alchemy and Alembic scripts, this is one of the few
   common namespaces that are available without bringing in a larger
   number of modules across the services today.
 * Many of the SQL Alchemy and Alembic scripts only import the minimal
   set of python modules. If we imported others, we would also have to
   initialize those name spaces which means a lot more code :(
 * Reduces the amount of overhead required to make these changes.




Keep in mind that we do not encourage code outside of libraries to
rely on configuration settings defined within libraries, because
that limits our ability to change the names and locations of the
configuration variables.  If migration scripts need to access the
configuration setting we will need to add some sort of public API
to oslo.db to query the value. The function can simply return the
configured value.


Configuration parameters within any given service will make use of a 
large namespace that pulls in things from oslo and the .conf files for a 
given service. So even when an API, CLI, or DB related call is made, 
these 

Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Armando M.
On 2 February 2017 at 10:08, Sean Dague  wrote:

> On 02/02/2017 12:49 PM, Armando M. wrote:
> >
> >
> > On 2 February 2017 at 08:40, Sean Dague  > > wrote:
> >
> > On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> > 
> > > 
> > >
> > > We definitely aren't saying running a single worker is how we
> recommend people
> > > run OpenStack by doing this. But it just adds on to the
> differences between the
> > > gate and what we expect things actually look like.
> >
> > I'm all for actually getting to the bottom of this, but honestly real
> > memory profiling is needed here. The growth across projects probably
> > means that some common libraries are some part of this. The ever
> growing
> > requirements list is demonstrative of that. Code reuse is good, but
> if
> > we are importing much of a library to get access to a couple of
> > functions, we're going to take a bunch of memory weight on that
> > (especially if that library has friendly auto imports in top level
> > __init__.py so we can't get only the parts we want).
> >
> > Changing the worker count is just shuffling around deck chairs.
> >
> > I'm not familiar enough with memory profiling tools in python to know
> > the right approach we should take there to get this down to
> individual
> > libraries / objects that are containing all our memory. Anyone more
> > skilled here able to help lead the way?
> >
> >
> > From what I hear, the overall consensus on this matter is to determine
> > what actually caused the memory consumption bump and how to address it,
> > but that's more of a medium to long term action. In fact, to me this is
> > one of the top priority matters we should talk about at the imminent PTG.
> >
> > For the time being, and to provide relief to the gate, should we want to
> > lock the API_WORKERS to 1? I'll post something for review and see how
> > many people shoot it down :)
>
> I don't think we want to do that. It's going to force down the eventlet
> API workers to being a single process, and it's not super clear that
> eventlet handles backups on the inbound socket well. I honestly would
> expect that creates different hard to debug issues, especially with high
> chatter rates between services.
>

I must admit I share your fear, but out of the tests that I have executed
so far in [1,2,3], the house didn't burn in a fire. I am looking for other
ways to have a substantial memory saving with a relatively quick and dirty
fix, but coming up empty handed thus far.

[1] https://review.openstack.org/#/c/428303/
[2] https://review.openstack.org/#/c/427919/
[3] https://review.openstack.org/#/c/427921/


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-2, 6-10 Feb

2017-02-02 Thread Doug Hellmann
Focus
-

We are in the release candidate phase of the cycle. All project
teams should be fixing last minute release-critical bugs and testing
the prepared release candidates for their deliverables.

Release Tasks
-

The translation team will be increasing their work over the next
few weeks. Please land translation patches as quickly as possible
so we can include them in any future release candidates.

If you have not already done so, submit the instructions for creating
stable/ocata branches for your deliverables.

As part of creating the stable/ocata branch, the proposal bot will
send changes to update the settings in the new branch to ensure
reviews are submitted to the right branch in gerrit, to update
reno's configuration to include the release notes for the ocata
series, and to update the constraints settings in the branch. Please
review and approve these changes as quickly as possible.

General Notes
-

Immediately after the cycle-with-milestone projects have tagged
their first release candidate, we will branch devstack, grenade,
and the requirements repositories and then open development for the
pike cycle.

After this release cycle I will be reviewing the list of cycle-based
projects that do not prepare releases and providing that information
to the TC for a discussion about whether those projects should be
considered inactive, and therefore should be removed from the
official list. All deliverables saw at least one release for Newton,
and I hope we have the same results for Ocata.

The deadline for documenting community wide goal completion artifacts
is the end of the cycle. Please update the Ocata goals page with
any information needed to understand how the goal affected your
project, and whether there is any work left to be done.

Important Dates
---

Ocata Final Release candidate deadline: 16 Feb

Ocata release schedule: http://releases.openstack.org/ocata/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Domains support

2017-02-02 Thread Gema Gomez
Hi,

we did this last week at Linaro. I have documented the process in a
blog post that is a walkthrough of a post by Steve Martinelli[1] from
the keystone team:

http://thetestingcorner.com/2017/01/30/ldap-authentication-for-openstack/

At the bottom of it there is a gerrit review with a patch to our ansible
playbooks that adds support for LDAP authentication. We kept the default
domain for service accounts and anything else that needs to be managed
outside LDAP, and then we have the LDAP domain for the actual end users.
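
If it helps, a quick way to sanity-check the LDAP-backed domain once 
keystone is configured is a small keystoneauth snippet like the one 
below (the auth URL, domain, project, and user names are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='jdoe',
                       password='secret',
                       user_domain_name='EXAMPLE',  # the LDAP-backed domain
                       project_name='demo',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    # If the LDAP domain is wired up correctly this prints a valid token.
    print(sess.get_token())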

Happy to review any patches or help with whichever one you are producing.

Hope that helps,
Gema

[1]
https://developer.ibm.com/opentech/2015/08/14/configuring-keystone-with-ibms-bluepages-ldap/

On 02/02/17 16:07, Dave Walker wrote:
> Try /etc/kolla/config/keystone/domains/keystone.$DOMAIN.conf
> 
> Thanks
> 
> On 2 February 2017 at 00:20, Christian Tardif
> > wrote:
> 
> Will sure give it a try ! And from a kolla perspective, it means
> that this file should go in
> /etc/kolla/config/domains/keystone.$DOMAIN.conf in order to be
> pushed to the relevant containers ?
> 
> 
> *Christian Tardif
> *christian.tar...@servinfo.ca 
> 
> Please think of the environment before printing this message.
> 
> 
> 
> 
> -- Original Message --
> From: "Dave Walker >
> To: "OpenStack Development Mailing List (not for usage questions)"
>  >
> Sent: 2017-02-01 11:39:15
> Subject: Re: [openstack-dev] [kolla] Domains support
> 
>> Hi Christian,
>>
>> I added the domain support, but I didn't document it as well as I
>> should have. Apologies!
>>
>> This is the config I am using to talk to a windows AD server. 
>> Hope this helps.
>>
>> create a domain specific file:
>> etc/keystone/domains/keystone.$DOMAIN.conf:
>>
>> [ldap]
>> use_pool = true
>> pool_size = 10
>> pool_retry_max = 3
>> pool_retry_delay = 0.1
>> pool_connection_timeout = -1
>> pool_connection_lifetime = 600
>> use_auth_pool = false
>> auth_pool_size = 100
>> auth_pool_connection_lifetime = 60
>> url = ldap://server1:389,ldap://server2:389
>> user = CN=Linux SSSD Kerberos Service
>> Account,CN=Users,DC=example,DC=com
>> password = password
>> suffix   = dc=example,dc=com
>> user_tree_dn =
>> OU=Personnel,OU=Users,OU=example,DC=example,DC=com
>> user_objectclass = person
>> user_filter  = (memberOf=CN=mail,OU=GPO
>> Security,OU=Groups,OU=COMPANY,DC=example,DC=com)
>> user_id_attribute= sAMAccountName
>> user_name_attribute  = sAMAccountName
>> user_description_attribute = displayName
>> user_mail_attribute  = mail
>> user_pass_attribute  =
>> user_enabled_attribute   = userAccountControl
>> user_enabled_mask= 2
>> user_enabled_default = 512
>> user_attribute_ignore= password,tenant_id,tenants
>> group_tree_dn= OU=GPO
>> Security,OU=Groups,OU=COMPANY,DC=example,DC=com
>> group_name_attribute = name
>> group_id_attribute   = cn
>> group_objectclass= group
>> group_member_attribute   = member
>>
>> [identity]
>> driver = keystone.identity.backends.ldap.Identity
>>
>> [assignment]
>> driver = keystone.assignment.backends.sql.Assignment
>>
>> --
>> Kind Regards,
>> Dave Walker
>>
>> On 1 February 2017 at 05:03, Christian Tardif
>> > > wrote:
>>
>> Hi,
>>
>> I'm looking for domains support in Kolla. I've searched, but
>> didn't find anything relevant. Could someone point me how to
>> achieve this?
>>
>> What I'm really looking for, in fact, is a decent way or
>> setting auth through LDAP backend while keeping service users
>> (neutron, for example) in the SQL backend. I know that this
>> can be achieved with domains support (leaving the default domain
>> on SQL, and another domain for LDAP users). Or maybe there's
>> another way of doing this?
>>
>> Thanks,
>> 
>> 
>>
>> *Christian Tardif
>> *christian.tar...@servinfo.ca
>> 
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> 

[openstack-dev] [openstack-docs][security] Sec guide change

2017-02-02 Thread Alexandra Settle
Hi everyone,

As of today, all bugs for the Security Guide will be managed by 
ossp-security-documentation and no longer will be tracked using the OpenStack 
manuals Launchpad.

All tracking for Security Guide related bugs can be found here: 
https://bugs.launchpad.net/ossp-security-documentation

Big thanks to the security team and Ian Cordasco for creating and updating the 
bug list in Launchpad!

Thank you,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo]FFE to update Eqlx and DellSc cinder templates

2017-02-02 Thread Rajini.Ram



I would like to request a feature freeze exception to update the Dell EqualLogic and Dell 
Storage Center cinder backend templates to use composable roles and services in 
TripleO. This work is done and has been pending merge for the past few weeks. Without 
these changes we won't be able to do upgrades.



Dell Eqlx: https://review.openstack.org/#/c/422238/

Dell Storage Center: https://review.openstack.org/#/c/425866/





Thank you

Rajini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] FFE to add ScaleIO to Triple-o

2017-02-02 Thread Rajini.Ram



I would like to request a feature freeze exception to add ScaleIO cinder backend 
support to TripleO. This work is done and has been pending review for the past three 
weeks. The puppet-cinder work is already merged.



Pending review https://review.openstack.org/#/c/422238/



Thanks

Rajini


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] FFE for tripleo collectd integration

2017-02-02 Thread Emilien Macchi
On Thu, Feb 2, 2017 at 1:07 PM, Lars Kellogg-Stedman  wrote:
> I would like to request a feature freeze exception for the collectd
> composable service patch:
>
>   
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-opstools-performance-monitoring
>
> The gerrit review implementing this is:
>
>   https://review.openstack.org/#/c/411048/
>
> The work on the composable service has been largely complete for several
> weeks, but there were some complications in getting tripleo ci to
> generate appropriate images.  I believe that a recently submitted patch
> will resolve that issue and unblock our ci:
>
>   https://review.openstack.org/#/c/427802/

Could you update your THT patch to add a Depends-On for the tripleo-ci patch, so
we can see whether the package gets installed and whether there is any blocker we
might have missed.
If there is no blocker, then this one is OK, but I first want to see CI passing.

> Once the above patch has merged and new overcloud images are generated I
> believe the ci for the collectd integration will pass.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Jeremy Stanley
On 2017-02-02 04:27:51 + (+), Dolph Mathews wrote:
> What made most services jump +20% between mitaka and newton? Maybe there is
> a common cause that we can tackle.
[...]

Almost hesitant to suggest this one but since we primarily use
Ubuntu 14.04 LTS for stable/mitaka jobs and 16.04 LTS for later
branches, could bloat in a newer release of the Python 2.7
interpreter there (or something even lower-level still like glibc)
be a contributing factor? I agree it's more likely bloat in some
commonly-used module (possibly even one developed outside our
community), but potential system-level overhead probably should also
get some investigation.
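
A crude way to check that would be to run the same snippet under each 
distro's python2.7 and compare the baseline RSS it reports; the module 
list below is only an example and assumes those libraries are installed:

    import resource
    import sys

    # Import a few of the libraries every service loads anyway; swap in
    # whatever set you want to compare.
    import sqlalchemy  # noqa
    import oslo_config  # noqa

    rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print('%s: max RSS %d kB' % (sys.version.split()[0], rss_kb))
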
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Doug Hellmann
Excerpts from Octave J. Orgeron's message of 2017-02-02 09:40:23 -0700:
> Hi Doug,
> 
> One could try to detect the default engine. However, in MySQL Cluster, 
> you can support multiple storage engines. Only NDB is fully clustered 
> and replicated, so if you accidentally set a table to be InnoDB it won't 
> be replicated . So it makes more sense for the operator to be explicit 
> on which engine they want to use.

I think this change is probably a bigger scale item than I understood
it to be when you originally contacted me off-list for advice about
how to get started. I hope I haven't steered you too far wrong, but
at least the conversation is started.

As someone (Mike?) pointed out on the review, the option by itself
doesn't do much of anything, now. Before we add it, I think we'll
want to see some more detail about how it's going to be used. It may be
easier to have that broader conversation here on email than on the
patch currently up for review.

It sounds like part of the plan is to use the configuration setting
to control how the migration scripts create tables. How will that
work? Does each migration need custom logic, or can we build helpers
into oslo.db somehow? Or will the option be passed to the database
to change its behavior transparently?

Keep in mind that we do not encourage code outside of libraries to
rely on configuration settings defined within libraries, because
that limits our ability to change the names and locations of the
configuration variables.  If migration scripts need to access the
configuration setting we will need to add some sort of public API
to oslo.db to query the value. The function can simply return the
configured value.
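
Something as small as the following sketch would do; the option and 
helper names here are placeholders rather than an agreed oslo.db API:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('mysql_storage_engine',
                    default='InnoDB',
                    help='Storage engine used when creating MySQL '
                         'tables.')],
        group='database')

    def get_mysql_storage_engine():
        # Public accessor so migration scripts never read CONF directly.
        return CONF.database.mysql_storage_engine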

What other behaviors are likely to be changed by the new option?
Will application runtime behavior need to know about the storage
engine?

Doug

> 
> Thanks,
> Octave
> 
> On 2/2/2017 6:46 AM, Doug Hellmann wrote:
> > Excerpts from Octave J. Orgeron's message of 2017-02-01 20:33:38 -0700:
> >> Hi Folks,
> >>
> >> I'm working on adding support for MySQL Cluster to the core OpenStack
> >> services. This will enable the community to benefit from an
> >> active/active, auto-sharding, and scale-out MySQL database. My approach
> >> is to have a single configuration setting in each core OpenStack service
> >> in the oslo.db configuration section called mysql_storage_engine that
> >> will enable the logic in the SQL Alchemy or Alembic upgrade scripts to
> >> handle the differences between InnoDB and NDB storage engines
> >> respectively. When enabled, this logic will make the required table
> >> schema changes around:
> >>
> >>* Row character length limits 65k -> 14k
> >>* Proper SQL ordering of foreign key, constraints, and index operations
> >>* Interception of savepoint and nested operations
> >>
> >> By default this functionality will not be enabled and will have no
> >> impact on the default InnoDB functionality. These changes have been
> >> tested on Kilo and Mitaka in previous releases of our OpenStack
> >> distributions with Tempest. I'm working on updating these patches for
> >> upstream consumption. We are also working on a 3rd party CI for
> >> regression testing against MySQL Cluster for the community.
> >>
> >> The first change set is for oslo.db and can be reviewed at:
> >>
> >> https://review.openstack.org/427970
> >>
> >> Thanks,
> >> Octave
> >>
> > Is it possible to detect the storage engine at runtime, instead of
> > having the operator configure it?
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-02 Thread Mathieu Mitchell

Oh...

I'm with the group here.

Let's find something that looks nice and is not offensive to anyone.

Mathieu

On 2017-02-02 11:52 AM, Jay Faulkner wrote:

https://en.wikipedia.org/wiki/Sign_of_the_horns came up in IRC, as the sign the 
bear is making. Obviously to me, I read it as the heavy metal gesture. 
Apparently it is offensive in some cultures, so I change my vote to -1, since I 
don’t want to offend folks in other parts of the world :).

-Jay


On Feb 1, 2017, at 12:38 PM, Jay Faulkner  wrote:

Of the options presented, I think the new 3.0 version most brings to mind a 
rocking bear.  It’s still tough to be OK with a new logo, given that pixie 
boots is beloved by our team partially because Lucas took the time to make us 
one — but it seems like not accepting a new logo created by the foundation 
would lead to Ironic getting less marketing and resources, so I’m not keen to 
go down that path. With that in mind, I’m +1 to version 3.0 of the bear.

-Jay


On Feb 1, 2017, at 12:05 PM, Heidi Joy Tretheway  wrote:

Hi Ironic team,

I’m sorry our second proposal again missed the mark. It wasn’t the illustrator’s 
intention to mimic the Bear Surprise painting that gained popularity in Russia as a meme. 
Our illustrator created a face-forward bear with paws shaped as if it had index and ring 
“fingers” down, like a hand gesture popular at a heavy metal concert. It was not 
meant to mimic the painting of a side-facing bear with paws and all “fingers” up to 
surprise. That said, once it’s seen, it’s tough to expect the community to un-see it, so 
we’ll take another approach.

The issue with your old mascot is twofold: it doesn’t fit the illustration 
style for the entire suite of 60+ mascots, and it contains a human-made element 
(drumsticks). As part of our overall guidelines, human-made objects and symbols 
were not allowed, and we applied these standards to all projects.

Your team told us you want a heavy metal bear, so we used the Kiss band-style 
makeup and the hand gesture to suggest metal without using an instrument or 
symbol. We tried to mimic your original logo’s expression. After releasing v1, 
we listened to your team’s comment that the first version was too angry 
looking, so you’ll see a range of expressions from fierce to neutral to happy.



I’d love to find a compromise with your team that will be in keeping with the 
style of the project logo suite. I’ll watch your ML for additional concerns 
about this proposed v3:


Our illustration team’s next step is to parse the community feedback from the 
Ironic team (note that there is a substantial amount of conflicting feedback 
from 21 members of your team) and determine if we have majority support for a 
single direction.

While new project logos are optional, virtually every project asked to be 
represented in our family of logos. Only logos in this new style will be used 
on the project navigator and in other promotional ways.

Feel free to join me for a quick chat tomorrow at 9:30 a.m. Pacific:
Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/5038169769
Or iPhone one-tap (US Toll): +16465588656,5038169769# or 
+14086380968,5038169769#
Or Telephone: Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)
Meeting ID: 503 816 9769
International numbers available: 
https://zoom.us/zoomconference?m=E5Gcj6WHnrCsWmjQRQr7KFsXkP9nAIaP





Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Thank you.

2017-02-02 Thread Iury Gregory
Thank you very much for your contributions Cody! =)

2017-01-24 17:59 GMT-03:00 Matt Fischer :

> Cody,
>
> Thank you for your contributions over the years.
>
> On Fri, Jan 20, 2017 at 12:29 PM, Cody Herriges  wrote:
>
>> I attempted to send this out last week but think I messed it up by
>> sending from my work email address which isn't the one I am signed up to
>> the lists with.  Seeing Alex's note in IRC this morning reminded me that I
>> had probably screwed it up...
>>
>> I just wanted to let everyone know how much I truly appreciate the effort
>> you've all put into these modules over the years.  For me its been a long
>> standing example of the maturity and utility of Puppet.
>>
>> Also, thank you for accepting me back into the community as a core
>> reviewer after a long absence.  Ironically, my push to be more involved in
>> the OpenStack community started a movement for me inside Puppet that has
>> resulted in a role change from being an operator and developer to being a
>> manager in our Business Development team.  This has been happening
>> gradually, which is the reason for my reduced presence for several of the
>> past few months and became official last week.  Since it is now official it
>> marks the completion of the hand off of management of our internal cluster
>> to other individuals inside Puppet so I asked Alex to remove me from core.
>>
>> I'll likely still pop in and out of activity but it'll largely be for
>> personal reasons.  I hope to get a more hobby-like enjoyment out of the low
>> level practitioner bits of OpenStack from here on out.
>>
>> --
>> Cody Herriges
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

~


Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG

Part of the puppet-manager-core team in OpenStack
E-mail: iurygreg...@gmail.com
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Sean Dague
On 02/02/2017 12:49 PM, Armando M. wrote:
> 
> 
> On 2 February 2017 at 08:40, Sean Dague  > wrote:
> 
> On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> 
> > 
> >
> > We definitely aren't saying running a single worker is how we recommend 
> people
> > run OpenStack by doing this. But it just adds on to the differences 
> between the
> > gate and what we expect things actually look like.
> 
> I'm all for actually getting to the bottom of this, but honestly real
> memory profiling is needed here. The growth across projects probably
> means that some common libraries are some part of this. The ever growing
> requirements list is demonstrative of that. Code reuse is good, but if
> we are importing much of a library to get access to a couple of
> functions, we're going to take a bunch of memory weight on that
> (especially if that library has friendly auto imports in top level
> __init__.py so we can't get only the parts we want).
> 
> Changing the worker count is just shuffling around deck chairs.
> 
> I'm not familiar enough with memory profiling tools in python to know
> the right approach we should take there to get this down to individual
> libraries / objects that are containing all our memory. Anyone more
> skilled here able to help lead the way?
> 
> 
> From what I hear, the overall consensus on this matter is to determine
> what actually caused the memory consumption bump and how to address it,
> but that's more of a medium to long term action. In fact, to me this is
> one of the top priority matters we should talk about at the imminent PTG.
> 
> For the time being, and to provide relief to the gate, should we want to
> lock the API_WORKERS to 1? I'll post something for review and see how
> many people shoot it down :)

I don't think we want to do that. It's going to force down the eventlet
API workers to being a single process, and it's not super clear that
eventlet handles backups on the inbound socket well. I honestly would
expect that creates different hard to debug issues, especially with high
chatter rates between services.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] FFE for tripleo collectd integration

2017-02-02 Thread Lars Kellogg-Stedman
I would like to request a feature freeze exception for the collectd
composable service patch:

  
https://blueprints.launchpad.net/tripleo/+spec/tripleo-opstools-performance-monitoring

The gerrit review implementing this is:

  https://review.openstack.org/#/c/411048/

The work on the composable service has been largely complete for several
weeks, but there were some complications in getting tripleo ci to
generate appropriate images.  I believe that a recently submitted patch
will resolve that issue and unblock our ci:

  https://review.openstack.org/#/c/427802/

Once the above patch has merged and new overcloud images are generated I
believe the ci for the collectd integration will pass.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Armando M.
On 2 February 2017 at 08:40, Sean Dague  wrote:

> On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> 
> > 
> >
> > We definitely aren't saying running a single worker is how we recommend
> people
> > run OpenStack by doing this. But it just adds on to the differences
> between the
> > gate and what we expect things actually look like.
>
> I'm all for actually getting to the bottom of this, but honestly real
> memory profiling is needed here. The growth across projects probably
> means that some common libraries are some part of this. The ever growing
> requirements list is demonstrative of that. Code reuse is good, but if
> we are importing much of a library to get access to a couple of
> functions, we're going to take a bunch of memory weight on that
> (especially if that library has friendly auto imports in top level
> __init__.py so we can't get only the parts we want).
>
> Changing the worker count is just shuffling around deck chairs.
>
> I'm not familiar enough with memory profiling tools in python to know
> the right approach we should take there to get this down to individual
> libraries / objects that are containing all our memory. Anyone more
> skilled here able to help lead the way?
>

From what I hear, the overall consensus on this matter is to determine what
actually caused the memory consumption bump and how to address it, but
that's more of a medium to long term action. In fact, to me this is one of
the top priority matters we should talk about at the imminent PTG.
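
(For what it's worth, a first rough cut at attribution could be as simple 
as the snippet below, run inside a long-lived API worker; objgraph is just 
one of several tools and none of this is part of the proposed gate fix.)

    import objgraph

    # Print the twenty most common live object types, then the types
    # that grew since the previous call; run periodically inside an API
    # worker to see which types keep growing.
    objgraph.show_most_common_types(limit=20)
    objgraph.show_growth(limit=20)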

For the time being, and to provide relief to the gate, should we want to
lock the API_WORKERS to 1? I'll post something for review and see how many
people shoot it down :)

Thanks for your feedback!
Cheers,
Armando


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] heat 8.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for heat for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/heat/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/heat/log/?h=stable/ocata

Release notes for heat can be found at:

http://docs.openstack.org/releasenotes/heat/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-02-02 Thread Chris Dent


Greetings OpenStack community,

In today's meeting [0] after briefly covering old business we spent nearly 50 minutes 
going round in circles discussing the complex interactions of expectations of API 
stability, the need to fix bugs and the costs and benefits of microversions. We didn't 
make a lot of progress on the general issues, but we did #agree that a glance issue [4] 
should be treated as a code bug (not a documentation bug) that should be fixed. In some 
ways this position is not aligned with the ideal presented by stability guidelines but it 
is aligned with an original goal of the API-WG: consistency. It's unclear how to resolve 
this conflict, either in this specific instance or in the guidelines that the API-WG 
creates. As stated in response to one of the related reviews [5]: "If bugs like this 
don't get fixed properly in the code, OpenStack risks going down the path of Internet 
Explorer and people wind up writing client code to the bugs and that way lies 
madness."

One reminder: Projects should make sure that their liaison information is up to 
date at http://specs.openstack.org/openstack/api-wg/liaisons.html . If not, 
please provide a patch to doc/source/liaisons.json to update it.

Three guidelines were merged. None were frozen. Four are being worked on. See 
below.

# Newly Published Guidelines

* Add guideline for invalid query parameters
  https://review.openstack.org/#/c/417441/
* Add guidelines on usage of state vs. status
  https://review.openstack.org/#/c/411528/
* Clarify the status values in versions
  https://review.openstack.org/#/c/411849/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

Nothing newly frozen.

# Guidelines Currently Under Review [3]

* Add guidelines for boolean names
  https://review.openstack.org/#/c/411529/

* Define pagination guidelines
  https://review.openstack.org/#/c/390973/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

* [WIP] Refactor and re-validate api change guidelines
  https://review.openstack.org/#/c/421846/

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] 
http://eavesdrop.openstack.org/meetings/api_wg/2017/api_wg.2017-02-02-16.00.html
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://bugs.launchpad.net/glance/+bug/1656183
[5] https://review.openstack.org/#/c/425487/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron

Hi Mike,

I've sent out another email that gives some more insight into how this 
will work for the other OpenStack services. The hook in the oslo.db 
namespace gives a global configuration point for enabling the patches 
elsewhere.


Thanks,
Octave

On 2/2/2017 9:24 AM, Mike Bayer wrote:



On 02/02/2017 10:25 AM, Monty Taylor wrote:

On 02/01/2017 09:33 PM, Octave J. Orgeron wrote:

Hi Folks,

I'm working on adding support for MySQL Cluster to the core OpenStack
services. This will enable the community to benefit from an
active/active, auto-sharding, and scale-out MySQL database. My approach
is to have a single configuration setting in each core OpenStack 
service

in the oslo.db configuration section called mysql_storage_engine that
will enable the logic in the SQL Alchemy or Alembic upgrade scripts to
handle the differences between InnoDB and NDB storage engines
respectively. When enabled, this logic will make the required table
schema changes around:

  * Row character length limits 65k -> 14k
  * Proper SQL ordering of foreign key, constraints, and index operations
  * Interception of savepoint and nested operations

By default this functionality will not be enabled and will have no
impact on the default InnoDB functionality. These changes have been
tested on Kilo and Mitaka in previous releases of our OpenStack
distributions with Tempest. I'm working on updating these patches for
upstream consumption. We are also working on a 3rd party CI for
regression testing against MySQL Cluster for the community.

The first change set is for oslo.db and can be reviewed at:

https://review.openstack.org/427970


Yay!

(You may not be aware, but there are several of us who used to be on the
MySQL Cluster team who are now on OpenStack. I've been wanting good NDB
support for a while. So thank you!)


as I noted on the review it would be nice to have some specifics of 
how this is to be accomplished as the code review posted doesn't show 
anything of how this would work.











--

Octave J. Orgeron | Sr. Principal Architect and Software Engineer
Oracle Linux OpenStack
Mobile: +1-720-616-1550
500 Eldorado Blvd. | Broomfield, CO 80021
Certified Oracle Enterprise Architect: Systems Infrastructure

Oracle is committed to developing practices and products that help
protect the environment


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][postgresql][gate][telemetry] PostgreSQL gate failure (again)

2017-02-02 Thread Mike Bayer



On 02/02/2017 11:42 AM, Sean Dague wrote:


That's all fine and good, we just need to rewrite about 100,000 unit
tests to do that. I'm totally cool with someone taking that task on, but
making a decision about postgresql shouldn't be filibustered on
rewriting all the unit tests in OpenStack because of the ways we use sqlite.


two points:

first is, you don't need to rewrite any tests, just reorganize the 
fixtures.   This is all done top-level and I've always been trying to 
get people to standardize on oslo-db built-in fixtures more anyway, this 
would ultimately make that all easier.   We would need to use an 
efficient process for tearing down of data within a schema so that tests 
can run quickly without schema rebuilds, this is all under the realm of 
"roll back the transaction" testing which oslo.db supports though nobody 
is using this right now.   It would be a big change in how things run 
and there'd be individual issues to fix but it's not a rewrite of actual 
tests.   I am not in any hurry to do any of this.
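
(For anyone who hasn't seen the pattern, the bare-bones shape of it in plain
SQLAlchemy - deliberately not the oslo.db fixture API itself - is something
like:

    import unittest

    import sqlalchemy as sa
    from sqlalchemy import orm


    class RollbackFixtureTest(unittest.TestCase):
        # placeholder URL; point it at whatever MySQL you run locally
        engine = sa.create_engine('mysql+pymysql://user:pass@localhost/test')

        def setUp(self):
            # one connection and one enclosing transaction per test; the
            # schema is built once up front and never rebuilt between tests
            self.conn = self.engine.connect()
            self.trans = self.conn.begin()
            self.session = orm.Session(bind=self.conn)

        def tearDown(self):
            # throw away everything the test did without touching the schema
            self.session.close()
            self.trans.rollback()
            self.conn.close()

oslo.db layers provisioning and savepoint handling on top of that, but
rolling back instead of rebuilding is the whole trick.)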


second is, I'm not a "filibuster" vote at all :).   I'm like the least 
important person in the decision chain here and I didn't even -1 the 
proposal.Deprecating Postgresql alone and doing nothing else is 
definitely very easy and would make development simpler, whereas getting 
rid of SQLite would be a much bigger job.  I'm just pointing out that we 
shouldn't pretend we "target only one database" until we get rid of 
SQLite in our test suites.



OK, third bonus point.   If we do drop postgresql support, to the degree 
that we really remove it totally from test fixtures, oslo.db 
architectures, all of that, the codebase would probably become 
mysql-specific in subtle and not-so-subtle ways pretty quickly, and 
within a few cycles we should consider that we probably will never be 
able to target multiple databases again without a much larger 
"unwinding" effort.   So while not worrying about Postgresql is handy, I 
would miss the fact that targeting two real DBs keeps us honest in terms 
of being able to target multiple databases at all, because this is a 
door that once you close we're not going to be able to open again. I 
doubt that in oslo.db itself we would realistically ever drop the 
architectures that support multiple databases, though, and as oslo.db is 
a pretty simple library it should likely continue to target postgresql 
as nothing more than a "keeping things honest" sanity check.










-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron

Hi Monty,

Thank you for the feedback. I'm excited about getting these patches 
upstream as everyone will be able to benefit from them.


Thanks,
Octave

On 2/2/2017 8:25 AM, Monty Taylor wrote:

On 02/01/2017 09:33 PM, Octave J. Orgeron wrote:

Hi Folks,

I'm working on adding support for MySQL Cluster to the core OpenStack
services. This will enable the community to benefit from an
active/active, auto-sharding, and scale-out MySQL database. My approach
is to have a single configuration setting in each core OpenStack service
in the oslo.db configuration section called mysql_storage_engine that
will enable the logic in the SQL Alchemy or Alembic upgrade scripts to
handle the differences between InnoDB and NDB storage engines
respectively. When enabled, this logic will make the required table
schema changes around:

   * Row character length limits 65k -> 14k
   * Proper SQL ordering of foreign key, constraints, and index operations
   * Interception of savepoint and nested operations

By default this functionality will not be enabled and will have no
impact on the default InnoDB functionality. These changes have been
tested on Kilo and Mitaka in previous releases of our OpenStack
distributions with Tempest. I'm working on updating these patches for
upstream consumption. We are also working on a 3rd party CI for
regression testing against MySQL Cluster for the community.

The first change set is for oslo.db and can be reviewed at:

https://review.openstack.org/427970

Yay!

(You may not be aware, but there are several of us who used to be on the
MySQL Cluster team who are now on OpenStack. I've been wanting good NDB
support for a while. So thank you!)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Oracle 
Octave J. Orgeron | Sr. Principal Architect and Software Engineer
Oracle Linux OpenStack
Mobile: +1-720-616-1550 
500 Eldorado Blvd. | Broomfield, CO 80021
Certified Oracle Enterprise Architect: Systems Infrastructure 

Green Oracle  Oracle is committed to 
developing practices and products that help protect the environment


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Andrey Kurilin
On Thu, Feb 2, 2017 at 6:40 PM, Sean Dague  wrote:

> On 02/02/2017 11:16 AM, Matthew Treinish wrote:
> 
> > 
> >
> > We definitely aren't saying running a single worker is how we recommend
> people
> > run OpenStack by doing this. But it just adds on to the differences
> > between the gate and what we expect things actually look like.
>
> I'm all for actually getting to the bottom of this, but honestly real
> memory profiling is needed here. The growth across projects probably
> means that some common libraries are some part of this. The ever growing
> requirements list is demonstrative of that. Code reuse is good, but if
> we are importing much of a library to get access to a couple of
> functions, we're going to take a bunch of memory weight on that
> (especially if that library has friendly auto imports in top level
> __init__.py so we can't get only the parts we want).
>

Sounds like a new version of the "oslo-incubator" idea.


>
> Changing the worker count is just shuffling around deck chairs.
>
> I'm not familiar enough with memory profiling tools in python to know
> the right approach we should take there to get this down to individual
> libraries / objects that are containing all our memory. Anyone more
> skilled here able to help lead the way?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Octave J. Orgeron

Hi Doug,

One could try to detect the default engine. However, MySQL Cluster can 
support multiple storage engines. Only NDB is fully clustered and 
replicated, so if you accidentally set a table to InnoDB it won't be 
replicated. So it makes more sense for the operator to be explicit 
about which engine they want to use.
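
For what it's worth, you can ask the server which engines it supports, but
that doesn't tell you which one a deployment intends to use, which is exactly
the ambiguity. Roughly (connection URL is a placeholder):

    # A MySQL Cluster SQL node reports several engines as available, so
    # "supported" is not the same thing as "intended".
    import sqlalchemy as sa

    engine = sa.create_engine('mysql+pymysql://user:pass@localhost/nova')
    with engine.connect() as conn:
        rows = conn.execute(sa.text(
            "SELECT ENGINE, SUPPORT FROM information_schema.ENGINES"))
        available = {row[0] for row in rows if row[1] in ('YES', 'DEFAULT')}

    # On a cluster SQL node this typically contains both 'ndbcluster' and
    # 'InnoDB', so the operator still has to pick one explicitly.
    print('ndbcluster' in available)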


Thanks,
Octave

On 2/2/2017 6:46 AM, Doug Hellmann wrote:

Excerpts from Octave J. Orgeron's message of 2017-02-01 20:33:38 -0700:

Hi Folks,

I'm working on adding support for MySQL Cluster to the core OpenStack
services. This will enable the community to benefit from an
active/active, auto-sharding, and scale-out MySQL database. My approach
is to have a single configuration setting in each core OpenStack service
in the oslo.db configuration section called mysql_storage_engine that
will enable the logic in the SQL Alchemy or Alembic upgrade scripts to
handle the differences between InnoDB and NDB storage engines
respectively. When enabled, this logic will make the required table
schema changes around:

   * Row character length limits 65k -> 14k
   * Proper SQL ordering of foreign key, constraints, and index operations
   * Interception of savepoint and nested operations

By default this functionality will not be enabled and will have no
impact on the default InnoDB functionality. These changes have been
tested on Kilo and Mitaka in previous releases of our OpenStack
distributions with Tempest. I'm working on updating these patches for
upstream consumption. We are also working on a 3rd party CI for
regression testing against MySQL Cluster for the community.

The first change set is for oslo.db and can be reviewed at:

https://review.openstack.org/427970

Thanks,
Octave


Is it possible to detect the storage engine at runtime, instead of
having the operator configure it?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Oracle 
Octave J. Orgeron | Sr. Principal Architect and Software Engineer
Oracle Linux OpenStack
Mobile: +1-720-616-1550 
500 Eldorado Blvd. | Broomfield, CO 80021
Certified Oracle Enterprise Architect: Systems Infrastructure 

Green Oracle  Oracle is committed to 
developing practices and products that help protect the environment


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-02 Thread Jay Faulkner
https://en.wikipedia.org/wiki/Sign_of_the_horns came up in IRC, as the sign the 
bear is making. To me it obviously reads as the heavy metal gesture. 
Apparently it is offensive in some cultures, so I change my vote to -1, since I 
don’t want to offend folks in other parts of the world :).

-Jay

> On Feb 1, 2017, at 12:38 PM, Jay Faulkner  wrote:
> 
> Of the options presented, I think the new 3.0 version most brings to mind a 
> rocking bear.  It’s still tough to be OK with a new logo, given that pixie 
> boots is beloved by our team partially because Lucas took the time to make us 
> one — but it seems like not accepting a new logo created by the foundation 
> would lead to Ironic getting less marketing and resources, so I’m not keen to 
> go down that path. With that in mind, I’m +1 to version 3.0 of the bear.
> 
> -Jay
> 
>> On Feb 1, 2017, at 12:05 PM, Heidi Joy Tretheway  
>> wrote:
>> 
>> Hi Ironic team,
>> 
>> I’m sorry our second proposal again missed the mark. It wasn’t the 
>> illustrator’s intention to mimic the Bear Surprise painting that gained 
>> popularity in Russia as a meme. Our illustrator created a face-forward bear 
>> with paws shaped as if it had index and ring “fingers" down, like a hand 
>> gesture popular at a heavy metal concert. It was not meant to mimic the 
>> painting of a side-facing bear with paws and all “fingers" up to surprise. 
>> That said, once it’s seen, it’s tough to expect the community to un-see it, 
>> so we’ll take another approach.
>> 
>> The issue with your old mascot is twofold: it doesn’t fit the illustration 
>> style for the entire suite of 60+ mascots, and it contains a human-made 
>> element (drumsticks). As part of our overall guidelines, human-made objects 
>> and symbols were not allowed, and we applied these standards to all projects.
>> 
>> Your team told us you want a heavy metal bear, so we used the Kiss 
>> band-style makeup and the hand gesture to suggest metal without using an 
>> instrument or symbol. We tried to mimic your original logo’s expression. 
>> After releasing v1, we listened to your team’s comment that the first 
>> version was too angry looking, so you’ll see a range of expressions from 
>> fierce to neutral to happy. 
>> 
>> 
>> 
>> I’d love to find a compromise with your team that will be in keeping with 
>> the style of the project logo suite. I’ll watch your ML for additional 
>> concerns about this proposed v3:   
>> 
>> 
>> Our illustration team’s next step is to parse the community feedback from 
>> the Ironic team (note that there is a substantial amount of conflicting 
>> feedback from 21 members of your team) and determine if we have majority 
>> support for a single direction. 
>> 
>> While new project logos are optional, virtually every project asked to be 
>> represented in our family of logos. Only logos in this new style will be 
>> used on the project navigator and in other promotional ways. 
>> 
>> Feel free to join me for a quick chat tomorrow at 9:30 a.m. Pacific:
>> Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/5038169769
>> Or iPhone one-tap (US Toll): +16465588656,5038169769# or 
>> +14086380968,5038169769#
>> Or Telephone: Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll) 
>> Meeting ID: 503 816 9769
>> International numbers available: 
>> https://zoom.us/zoomconference?m=E5Gcj6WHnrCsWmjQRQr7KFsXkP9nAIaP
>> 
>> 
>> 
>> 
>>  
>> Heidi Joy Tretheway
>> Senior Marketing Manager, OpenStack Foundation
>> 503 816 9769 | Skype: heidi.tretheway
>> 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][postgresql][gate][telemetry] PostgreSQL gate failure (again)

2017-02-02 Thread Sean Dague
On 02/02/2017 11:48 AM, Julien Danjou wrote:
> On Wed, Feb 01 2017, Julien Danjou wrote:
> 
>> My questions are simple and can be organized in a tree:
>>
>>          Does Nova want to support PostgreSQL?
>>                    /              \
>>                  Yes               No
>>                  /                   \
>>     Why is there no gate?      Ok, we'll have to remove the
>>                                PostgreSQL job in Ceilometer
> 
> Funnily enough, I don't think anyone has been able to clearly reply to
> this tree. My understanding is that Nova does not want to support
> PostgreSQL, therefore we'll remove the PostgreSQL job from Ceilometer.
> 
> The Ceilometer legacy storage engine being deprecated in Ocata, that has
> little importance for us anyway.

I feel like the Answer was "No", with a pushback of "well Nova can't
make that decision in a vacuum", so we then escalated to "Ok, here is
the non vacuum solution".

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][postgresql][gate][telemetry] PostgreSQL gate failure (again)

2017-02-02 Thread Julien Danjou
On Wed, Feb 01 2017, Julien Danjou wrote:

> My questions are simple and can be organized in a tree:
>
>          Does Nova want to support PostgreSQL?
>                    /              \
>                  Yes               No
>                  /                   \
>     Why is there no gate?      Ok, we'll have to remove the
>                                PostgreSQL job in Ceilometer

Funnily enough, I don't think anyone has been able to clearly reply to
this tree. My understanding is that Nova does not want to support
PostgreSQL, therefore we'll remove the PostgreSQL job from Ceilometer.

The Ceilometer legacy storage engine being deprecated in Ocata, that has
little importance for us anyway.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] designate 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for designate for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/designate/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/designate/log/?h=stable/ocata

Release notes for designate can be found at:

http://docs.openstack.org/releasenotes/designate/

If you find an issue that could be considered release-critical, please
file it at:

http://bugs.launchpad.net/designate

and tag it *ocata-rc-potential* to bring it to the designate
release crew's attention.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] designate-dashboard 4.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for designate-dashboard for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/designate-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:


http://git.openstack.org/cgit/openstack/designate-dashboard/log/?h=stable/ocata

Release notes for designate-dashboard can be found at:

http://docs.openstack.org/releasenotes/designate-dashboard/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][postgresql][gate][telemetry] PostgreSQL gate failure (again)

2017-02-02 Thread Sean Dague
On 02/02/2017 10:33 AM, Mike Bayer wrote:
> 
> 
> On 02/01/2017 10:22 AM, Monty Taylor wrote:
>>
>> I personally continue to be of the opinion that without an explicit
>> vocal and well-staffed champion, supporting postgres is more trouble
>> than it is worth. The vast majority of OpenStack deployments are on
>> MySQL - and what's more, the code is written with MySQL in mind.
>> Postgres and MySQL have different trade offs, different things each are
>> good at and different places in which each has weakness. By attempting
>> to support Postgres AND MySQL, we prevent ourselves from focusing
>> adequate attention on making sure that our support for one of them is
>> top-notch and in keeping with best practices for that database.
>>
>> So let me state my opinion slightly differently. I think we should
>> support one and only one RDBMS backend for OpenStack, and we should open
>> ourselves up to use advanced techniques for that backend. I don't
>> actually care whether that DB is MySQL or Postgres - but the corpus of
>> existing deployments on MySQL and the existing gate jobs I think make
>> the choice one way or the other simple.
> 
> 
> well, let me blow your mind and agree, but noting that this means, *we
> drop SQLite also*.   IMO every openstack developer should have
> MySQL/MariaDB running on their machine and that is part of what runs if
> you expect to run database-related unit tests.   Targeting just one
> database is very handy but if you really want to use the features
> without roadblocks, you need to go all the way.

That's all fine and good, we just need to rewrite about 100,000 unit
tests to do that. I'm totally cool with someone taking that task on, but
making a decision about postgresql shouldn't be filibustered on
rewriting all the unit tests in OpenStack because of the ways we use sqlite.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Sean Dague
On 02/02/2017 11:16 AM, Matthew Treinish wrote:

> 
> 
> We definitely aren't saying running a single worker is how we recommend people
> run OpenStack by doing this. But it just adds on to the differences between
> the gate and what we expect things actually look like.

I'm all for actually getting to the bottom of this, but honestly real
memory profiling is needed here. The growth across projects probably
means that some common libraries are some part of this. The ever growing
requirements list is demonstrative of that. Code reuse is good, but if
we are importing much of a library to get access to a couple of
functions, we're going to take a bunch of memory weight on that
(especially if that library has friendly auto imports in top level
__init__.py so we can't get only the parts we want).

Changing the worker count is just shuffling around deck chairs.

I'm not familiar enough with memory profiling tools in python to know
the right approach we should take there to get this down to individual
libraries / objects that are containing all our memory. Anyone more
skilled here able to help lead the way?
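
One candidate that keeps coming up is the stdlib tracemalloc module (Python
3.4+). A minimal snapshot diff along these lines, very much a starting point
rather than the right approach, would at least break growth down by
allocation site:

    import tracemalloc

    tracemalloc.start(25)       # keep 25 frames of traceback per allocation
    baseline = tracemalloc.take_snapshot()

    # ... let the service handle traffic for a while ...

    current = tracemalloc.take_snapshot()
    for stat in current.compare_to(baseline, 'lineno')[:20]:
        print(stat)

The open question is where to hook that into a running wsgi service without
distorting the numbers.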

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [release] misleading release notes

2017-02-02 Thread Dean Troyer
On Thu, Feb 2, 2017 at 7:42 AM, Doug Hellmann  wrote:
> As long as the edit happens on the branch where the note should appear,
> it should be fine.

Excellent!  Just tried it and it does, thanks.

I think https://review.openstack.org/427842 is doing what you intend,
I can see the entire set of notes by walking the stable release pages
now.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-02 Thread Mike Bayer



On 02/02/2017 10:25 AM, Monty Taylor wrote:

On 02/01/2017 09:33 PM, Octave J. Orgeron wrote:

Hi Folks,

I'm working on adding support for MySQL Cluster to the core OpenStack
services. This will enable the community to benefit from an
active/active, auto-sharding, and scale-out MySQL database. My approach
is to have a single configuration setting in each core OpenStack service
in the oslo.db configuration section called mysql_storage_engine that
will enable the logic in the SQL Alchemy or Alembic upgrade scripts to
handle the differences between InnoDB and NDB storage engines
respectively. When enabled, this logic will make the required table
schema changes around:

  * Row character length limits 65k -> 14k
  * Proper SQL ordering of foreign key, constraints, and index operations
  * Interception of savepoint and nested operations

By default this functionality will not be enabled and will have no
impact on the default InnoDB functionality. These changes have been
tested on Kilo and Mitaka in previous releases of our OpenStack
distributions with Tempest. I'm working on updating these patches for
upstream consumption. We are also working on a 3rd party CI for
regression testing against MySQL Cluster for the community.

The first change set is for oslo.db and can be reviewed at:

https://review.openstack.org/427970


Yay!

(You may not be aware, but there are several of us who used to be on the
MySQL Cluster team who are now on OpenStack. I've been wanting good NDB
support for a while. So thank you!)


As I noted on the review, it would be nice to have some specifics of how 
this is to be accomplished, as the code review posted doesn't show 
anything of how this would work.








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Matthew Treinish
On Thu, Feb 02, 2017 at 11:10:22AM -0500, Matthew Treinish wrote:
> On Wed, Feb 01, 2017 at 04:24:54PM -0800, Armando M. wrote:
> > Hi,
> > 
> > [TL;DR]: OpenStack services have steadily increased their memory
> > footprints. We need a concerted way to address the oom-kills experienced in
> > the openstack gate, as we may have reached a ceiling.
> > 
> > Now the longer version:
> > 
> > 
> > We have been experiencing some instability in the gate lately due to a
> > number of reasons. When everything adds up, this means it's rather
> > difficult to merge anything and knowing we're in feature freeze, that adds
> > to stress. One culprit was identified to be [1].
> > 
> > We initially tried to increase the swappiness, but that didn't seem to
> > help. Then we have looked at the resident memory in use. When going back
> > over the past three releases we have noticed that the aggregated memory
> > footprint of some openstack projects has grown steadily. We have the
> > following:
> > 
> >- Mitaka
> >   - neutron: 1.40GB
> >   - nova: 1.70GB
> >   - swift: 640MB
> >   - cinder: 730MB
> >   - keystone: 760MB
> >   - horizon: 17MB
> >   - glance: 538MB
> >- Newton
> >- neutron: 1.59GB (+13%)
> >   - nova: 1.67GB (-1%)
> >   - swift: 779MB (+21%)
> >   - cinder: 878MB (+20%)
> >   - keystone: 919MB (+20%)
> >   - horizon: 21MB (+23%)
> >   - glance: 721MB (+34%)
> >- Ocata
> >   - neutron: 1.75GB (+10%)
> >   - nova: 1.95GB (+16%)
> >   - swift: 703MB (-9%)
> >   - cinder: 920MB (4%)
> >   - keystone: 903MB (-1%)
> >   - horizon: 25MB (+20%)
> >   - glance: 740MB (+2%)
> > 
> > Numbers are approximated and I only took a couple of samples, but in a
> > nutshell, the majority of the services have seen double digit growth over
> > the past two cycles in terms of the amount or RSS memory they use.
> > 
> > Since [1] is observed only since ocata [2], I imagine it's pretty
> > reasonable to assume that the memory increase may well be a determining
> > factor in the oom-kills we see in the gate.
> > 
> > Profiling and surgically reducing the memory used by each component in each
> > service is a lengthy process, but I'd rather see some gate relief right
> > away. Reducing the number of API workers helps bring the RSS memory down
> > back to mitaka levels:
> > 
> >- neutron: 1.54GB
> >- nova: 1.24GB
> >- swift: 694MB
> >- cinder: 778MB
> >- keystone: 891MB
> >- horizon: 24MB
> >- glance: 490MB
> > 
> > However, it may have other side effects, like longer execution times, or
> > increase of timeouts.
> > 
> > Where do we go from here? I am not particularly fond of stop-gap [4], but
> > it is the one fix that most widely addresses the memory increase we have
> > experienced across the board.
> 
> So I have a couple of concerns with doing this. We're only running with 2
> workers per api service now and dropping it down to 1 means we have no more
> memory head room in the future. So this feels like we're just delaying the
> inevitable maybe for a cycle or 2. When we first started hitting OOM issues a
> couple years ago we dropped from nprocs to nprocs/2. [5] Back then we were also
> running more services per job, it was back in the day of the integrated release
> so all those projects were running. (like ceilometer, heat, etc.) So in a little
> over 2 years the memory consumption for the 7 services has increased to the
> point where we're making up for a bunch of extra services that don't run in the
> job anymore and we had to drop the worker count in half since. So if we were to
> do this we don't have anymore room for when things keep growing. I think now is
> the time we should start seriously taking a stance on our memory footprint
> growth and see if we can get it under control.
> 
> My second concern is the same as you here, the long term effects of this change
> aren't exactly clear. With the limited sample size of the test patch[4] we can't
> really say if it'll negatively affect run time or job success rates. I don't
> think it should be too bad, tempest is only making 4 api requests at a time, and
> most of the services should be able to handle that kinda load with a single
> worker. (I'd hope)
> 
> This also does bring up the question of the gate config being representative
> of how we recommend running OpenStack. Like the reasons we try to use default
> config values as much as possible in devstack. We definitely aren't saying
> running a single worker



We definitely aren't saying running a single worker is how we recommend people
run OpenStack by doing this. But it just adds on to the differences between the
gate and what we expect things actually look like.

> 
> But, I'm not sure any of that is a blocker for moving forward with dropping
> down to a single worker.
> 
> As an aside, I 

Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Matthew Treinish
On Wed, Feb 01, 2017 at 04:24:54PM -0800, Armando M. wrote:
> Hi,
> 
> [TL;DR]: OpenStack services have steadily increased their memory
> footprints. We need a concerted way to address the oom-kills experienced in
> the openstack gate, as we may have reached a ceiling.
> 
> Now the longer version:
> 
> 
> We have been experiencing some instability in the gate lately due to a
> number of reasons. When everything adds up, this means it's rather
> difficult to merge anything and knowing we're in feature freeze, that adds
> to stress. One culprit was identified to be [1].
> 
> We initially tried to increase the swappiness, but that didn't seem to
> help. Then we have looked at the resident memory in use. When going back
> over the past three releases we have noticed that the aggregated memory
> footprint of some openstack projects has grown steadily. We have the
> following:
> 
>- Mitaka
>   - neutron: 1.40GB
>   - nova: 1.70GB
>   - swift: 640MB
>   - cinder: 730MB
>   - keystone: 760MB
>   - horizon: 17MB
>   - glance: 538MB
>- Newton
>- neutron: 1.59GB (+13%)
>   - nova: 1.67GB (-1%)
>   - swift: 779MB (+21%)
>   - cinder: 878MB (+20%)
>   - keystone: 919MB (+20%)
>   - horizon: 21MB (+23%)
>   - glance: 721MB (+34%)
>- Ocata
>   - neutron: 1.75GB (+10%)
>   - nova: 1.95GB (+16%)
>   - swift: 703MB (-9%)
>   - cinder: 920MB (4%)
>   - keystone: 903MB (-1%)
>   - horizon: 25MB (+20%)
>   - glance: 740MB (+2%)
> 
> Numbers are approximated and I only took a couple of samples, but in a
> nutshell, the majority of the services have seen double digit growth over
> the past two cycles in terms of the amount or RSS memory they use.
> 
> Since [1] is observed only since ocata [2], I imagine it's pretty
> reasonable to assume that the memory increase may well be a determining
> factor in the oom-kills we see in the gate.
> 
> Profiling and surgically reducing the memory used by each component in each
> service is a lengthy process, but I'd rather see some gate relief right
> away. Reducing the number of API workers helps bring the RSS memory down
> back to mitaka levels:
> 
>- neutron: 1.54GB
>- nova: 1.24GB
>- swift: 694MB
>- cinder: 778MB
>- keystone: 891MB
>- horizon: 24MB
>- glance: 490MB
> 
> However, it may have other side effects, like longer execution times, or
> increase of timeouts.
> 
> Where do we go from here? I am not particularly fond of stop-gap [4], but
> it is the one fix that most widely addresses the memory increase we have
> experienced across the board.

So I have a couple of concerns with doing this. We're only running with 2
workers per api service now and dropping it down to 1 means we have no more
memory head room in the future. So this feels like we're just delaying the
inevitable maybe for a cycle or 2. When we first started hitting OOM issues a
couple years ago we dropped from nprocs to nprocs/2. [5] Back then we were also
running more services per job, it was back in the day of the integrated release
so all those projects were running. (like ceilometer, heat, etc.) So in a little
over 2 years the memory consumption for the 7 services has increased to the
point where we're making up for a bunch of extra services that don't run in the
job anymore and we had to drop the worker count in half since. So if we were to
do this we don't have anymore room for when things keep growing. I think now is
the time we should start seriously taking a stance on our memory footprint
growth and see if we can get it under control.

My second concern is the same as you here, the long term effects of this change
aren't exactly clear. With the limited sample size of the test patch[4] we can't
really say if it'll negatively affect run time or job success rates. I don't
think it should be too bad, tempest is only making 4 api requests at a time,
and most of the services should be able to handle that kinda load with a
single worker. (I'd hope)

This also does bring up the question of the gate config being representative
of how we recommend running OpenStack. Like the reasons we try to use default
config values as much as possible in devstack. We definitely aren't saying
running a single worker

But, I'm not sure any of that is a blocker for moving forward with dropping down
to a single worker.
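
For anyone wanting to reproduce this locally, the knob in question is
devstack's API_WORKERS; the stop-gap presumably boils down to something like
this in local.conf (check the actual patch [4] for what it really changes):

    [[local|localrc]]
    API_WORKERS=1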

As an aside, I also just pushed up: https://review.openstack.org/#/c/428220/ to
see if that provides any useful info. I'm doubtful that it will be helpful,
because it's the combination of services running causing the issue. But it
doesn't really hurt to collect that.

-Matt Treinish

> [1] https://bugs.launchpad.net/neutron/+bug/1656386
> [2]
> http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22oom-killer%5C%22%20AND%20tags:syslog
> [3]
> http://logs.openstack.org/21/427921/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/82084c2/
> 

Re: [openstack-dev] [kolla] Domains support

2017-02-02 Thread Dave Walker
Try /etc/kolla/config/keystone/domains/keystone.$DOMAIN.conf
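
If memory serves (worth double-checking against the keystone docs), the
keystone.conf that ends up in the container also needs domain-specific
configuration switched on for that per-domain file to be read, i.e. roughly:

    [identity]
    domain_specific_drivers_enabled = True
    domain_config_dir = /etc/keystone/domains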

Thanks

On 2 February 2017 at 00:20, Christian Tardif wrote:

> Will sure give it a try ! And from a kolla perspective, it means that this
> file should go in /etc/kolla/config/domains/keystone.$DOMAIN.conf in
> order to be pushed to the relevant containers ?
> --
>
>
> *Christian Tardif*christian.tar...@servinfo.ca
>
> SVP, pensez à l’environnement avant d’imprimer ce message.
>
>
>
> -- Message d'origine --
> De: "Dave Walker" 
> À: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Envoyé : 2017-02-01 11:39:15
> Objet : Re: [openstack-dev] [kolla] Domains support
>
> Hi Christian,
>
> I added the domain support, but I didn't document it as well as I should
> have. Apologies!
>
> This is the config I am using to talk to a windows AD server.  Hope this
> helps.
>
> create a domain specific file:
> etc/keystone/domains/keystone.$DOMAIN.conf:
>
> [ldap]
> use_pool = true
> pool_size = 10
> pool_retry_max = 3
> pool_retry_delay = 0.1
> pool_connection_timeout = -1
> pool_connection_lifetime = 600
> use_auth_pool = false
> auth_pool_size = 100
> auth_pool_connection_lifetime = 60
> url = ldap://server1:389,ldap://server2:389
> user = CN=Linux SSSD Kerberos Service Account,CN=Users,DC=example,DC=com
> password = password
> suffix   = dc=example,dc=com
> user_tree_dn = OU=Personnel,OU=Users,OU=example,DC=example,DC=com
> user_objectclass = person
> user_filter  = (memberOf=CN=mail,OU=GPO Security,OU=Groups,OU=COMPANY,DC=example,DC=com)
> user_id_attribute= sAMAccountName
> user_name_attribute  = sAMAccountName
> user_description_attribute = displayName
> user_mail_attribute  = mail
> user_pass_attribute  =
> user_enabled_attribute   = userAccountControl
> user_enabled_mask= 2
> user_enabled_default = 512
> user_attribute_ignore= password,tenant_id,tenants
> group_tree_dn= OU=GPO Security,OU=Groups,OU=COMPANY,DC=example,DC=com
> group_name_attribute = name
> group_id_attribute   = cn
> group_objectclass= group
> group_member_attribute   = member
>
> [identity]
> driver = keystone.identity.backends.ldap.Identity
>
> [assignment]
> driver = keystone.assignment.backends.sql.Assignment
>
> --
> Kind Regards,
> Dave Walker
>
> On 1 February 2017 at 05:03, Christian Tardif <
> christian.tar...@servinfo.ca> wrote:
>
>> Hi,
>>
>> I'm looking for domains support in Kolla. I've searched, but didn't find
>> anything relevant. Could someone point me how to achieve this?
>>
>> What I'm really looking for, in fact, is a decent way or setting auth
>> through LDAP backend while keeping service users (neutron, for example) in
>> the SQL backend. I know that this can be achieved with domains support
>> (leaving default domain on SQL, and another domain for LDAP users. Or maybe
>> there's another of doing this?
>>
>> Thanks,
>> --
>>
>>
>> *Christian Tardif*christian.tar...@servinfo.ca
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][postgresql][gate][telemetry] PostgreSQL gate failure (again)

2017-02-02 Thread Jay Pipes

On 02/02/2017 10:59 AM, Monty Taylor wrote:

On 02/02/2017 09:33 AM, Mike Bayer wrote:



On 02/01/2017 10:22 AM, Monty Taylor wrote:


I personally continue to be of the opinion that without an explicit
vocal and well-staffed champion, supporting postgres is more trouble
than it is worth. The vast majority of OpenStack deployments are on
MySQL - and what's more, the code is written with MySQL in mind.
Postgres and MySQL have different trade offs, different things each are
good at and different places in which each has weakness. By attempting
to support Postgres AND MySQL, we prevent ourselves from focusing
adequate attention on making sure that our support for one of them is
top-notch and in keeping with best practices for that database.

So let me state my opinion slightly differently. I think we should
support one and only one RDBMS backend for OpenStack, and we should open
ourselves up to use advanced techniques for that backend. I don't
actually care whether that DB is MySQL or Postgres - but the corpus of
existing deployments on MySQL and the existing gate jobs I think make
the choice one way or the other simple.



well, let me blow your mind and agree, but noting that this means, *we
drop SQLite also*.   IMO every openstack developer should have
MySQL/MariaDB running on their machine and that is part of what runs if
you expect to run database-related unit tests.   Targeting just one
database is very handy but if you really want to use the features
without roadblocks, you need to go all the way.


I could not possibly agree more strongly. Support for sqlite - which
literally nobody should EVER use in production - causes much unnecessary
complexity.


I presume you don't mean for Swift.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][postgresql][gate][telemetry] PostgreSQL gate failure (again)

2017-02-02 Thread Monty Taylor
On 02/02/2017 09:33 AM, Mike Bayer wrote:
> 
> 
> On 02/01/2017 10:22 AM, Monty Taylor wrote:
>>
>> I personally continue to be of the opinion that without an explicit
>> vocal and well-staffed champion, supporting postgres is more trouble
>> than it is worth. The vast majority of OpenStack deployments are on
>> MySQL - and what's more, the code is written with MySQL in mind.
>> Postgres and MySQL have different trade offs, different things each are
>> good at and different places in which each has weakness. By attempting
>> to support Postgres AND MySQL, we prevent ourselves from focusing
>> adequate attention on making sure that our support for one of them is
>> top-notch and in keeping with best practices for that database.
>>
>> So let me state my opinion slightly differently. I think we should
>> support one and only one RDBMS backend for OpenStack, and we should open
>> ourselves up to use advanced techniques for that backend. I don't
>> actually care whether that DB is MySQL or Postgres - but the corpus of
>> existing deployments on MySQL and the existing gate jobs I think make
>> the choice one way or the other simple.
> 
> 
> well, let me blow your mind and agree, but noting that this means, *we
> drop SQLite also*.   IMO every openstack developer should have
> MySQL/MariaDB running on their machine and that is part of what runs if
> you expect to run database-related unit tests.   Targeting just one
> database is very handy but if you really want to use the features
> without roadblocks, you need to go all the way.

I could not possibly agree more strongly. Support for sqlite - which
literally nobody should EVER use in production - causes much unnecessary
complexity.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] trove-dashboard 8.0.0.0rc1 (ocata)

2017-02-02 Thread no-reply

Hello everyone,

A new release candidate for trove-dashboard for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/trove-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/trove-dashboard/log/?h=stable/ocata

Release notes for trove-dashboard can be found at:

http://docs.openstack.org/releasenotes/trove-dashboard/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] FFE to add Congress to Triple-o

2017-02-02 Thread Emilien Macchi
ok FFE granted.

On Thu, Feb 2, 2017 at 9:59 AM, Alan Pevec  wrote:
>
> Waiting for Alan to give -1/+1 on the RDO side, but it's a +1 for me
> on the TripleO side.
>
>
> +1 from me, I should finish missing PuLP dependency in RDO today
>
>
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

