Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Matt Riedemann

On 6/16/2017 9:46 AM, Eric Harney wrote:

On 06/16/2017 10:21 AM, Sean McGinnis wrote:


I don't think merging tests that are showing failures, then blacklisting
them, is the right approach. And as Eric points out, this isn't
necessarily just a failure with Ceph. There is a legitimate logical
issue with what this particular test is doing.

But in general, to get back to some of the earlier points, I don't think
we should be merging tests with known breakages until those breakages
can be first addressed.



As another example, this was the last round of this, in May:

https://review.openstack.org/#/c/332670/

which is a new tempest test for a Cinder API that is not supported by
all drivers.  The Ceph job failed on the tempest patch, correctly, the
test was merged, then the Ceph jobs broke:

https://bugs.launchpad.net/glance/+bug/1687538
https://review.openstack.org/#/c/461625/

This is really not a sustainable model.

And this is the _easy_ case, since Ceph jobs run in OpenStack infra and
are easily visible and trackable.  I'm not sure what the impact is on
Cinder third-party CI for other drivers.




This is generally why we have config options in Tempest to skip tests
that certain backends don't implement, like all of the backup/snapshot
volume tests that the NFS job was failing on forever.


I think it's perfectly valid to have tests in Tempest for things that 
not all backends implement as long as they are configurable. It's up to 
the various CI jobs to configure Tempest properly for what they support 
and then work on reducing the number of things they don't support. We've 
been doing that for ages now.
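
(For illustration, the knobs in question live in tempest.conf; a Ceph job
might carry feature flags along these lines - exact option names vary by
Tempest version, so treat this as a sketch:)

    [volume-feature-enabled]
    # Tempest skips the volume backup tests when this is False
    backup = False
    # ...and runs snapshot tests only when the backend supports them
    snapshot = True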


--

Thanks,

Matt



Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Matt Riedemann

On 6/16/2017 8:13 PM, Matt Riedemann wrote:
Yeah there is a distinction between the ceph nv job that runs on 
nova/cinder/glance changes and the ceph job that runs on os-brick and 
glance_store changes. When we made the tempest dsvm ceph job non-voting 
we failed to mirror that in the os-brick/glance-store jobs. We should do 
that.


Here you go:

https://review.openstack.org/#/c/475095/

--

Thanks,

Matt



Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Matt Riedemann

On 6/16/2017 3:32 PM, Sean McGinnis wrote:


So, before we go further, ceph seems to be -nv on all projects right
now, right? So I get there is some debate on that patch, but is it
blocking anything?



Ceph is voting on os-brick patches. So it does block some things when
we run into this situation.

But again, we should avoid getting into this situation in the first
place, voting or no.





Yeah there is a distinction between the ceph nv job that runs on 
nova/cinder/glance changes and the ceph job that runs on os-brick and 
glance_store changes. When we made the tempest dsvm ceph job non-voting 
we failed to mirror that in the os-brick/glance-store jobs. We should do 
that.


--

Thanks,

Matt



Re: [openstack-dev] [neutron] security group OVO change

2017-06-16 Thread Isaku Yamahata
It also broke networking-odl.
The patch [1] is needed to unbreak it.
[1] https://review.openstack.org/#/c/448420/

The necessary db info is taken from context.session.new, but with OVO
those objects expunge themselves in the create method, so that info
needs to be passed as a callback argument instead.
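
(A rough sketch of that idea, with callback API names written from memory -
the registry moved between neutron and neutron-lib around this time, so
treat the import path and payload keys as illustrative:)

    from neutron_lib.callbacks import events, registry, resources

    def create_security_group(context, sg_dict, trigger):
        # ... create the security group via the OVO create() method ...
        # Pass the created data explicitly in the notification, rather than
        # leaving subscribers to fish it out of context.session.new (which
        # the OVO create() has already expunged).
        registry.notify(resources.SECURITY_GROUP, events.AFTER_CREATE,
                        trigger, context=context, security_group=sg_dict)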

Thanks,

On Fri, Jun 16, 2017 at 01:25:28PM -0700,
Ihar Hrachyshka  wrote:

> To close the loop here,
> 
> - this also broke heat py3 job (https://launchpad.net/bugs/1698355)
> - we polished https://review.openstack.org/474575 to fix both
> vmware-nsx and heat issues
> - I also posted a patch for oslo.serialization for the bug that
> triggered MemoryError in heat gate:
> https://review.openstack.org/475052
> - the vmware-nsx adoption patch is at:
> https://review.openstack.org/#/c/474608/ and @boden is working on it,
> should be ready to go in due course.
> 
> Thanks and sorry for the inconvenience,
> Ihar
> 
> On Thu, Jun 15, 2017 at 6:17 AM, Gary Kotton  wrote:
> > Hi,
> >
> > The commit https://review.openstack.org/284738 has broken decomposed plugins
> > (those that extend security groups and rules). The reason for this is that
> > there is an extend callback that we use which expects to get a database
> > object and the aforementioned patch passes a new neutron object.
> >
> > I have posted [i] to temporarily address the issue. An alternative is to
> > revert the patch until the decomposed plugins can figure out how to
> > correctly address this.
> >
> > Thanks
> >
> > Gary
> >
> > [i] https://review.openstack.org/474575
> >
> >

-- 
Isaku Yamahata 



Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-16 Thread Michał Jastrzębski
So I'm trying to figure out how to actually use it.

We (and any other container-based deploy..) will run into a
chicken/egg problem - you need to deploy a container to generate the big
yaml with defaults, then you need to overload it with your
configurations, validate that they're not deprecated, run a container with
this ansible role (or module... really doesn't matter), spit out the final
config, lay it down, and deploy the container again. And that will have to be
done for every host class (as configs might differ host to host). IMHO
a bit too much for this to be appealing (but I might be wrong). I'd
much rather have:
1. Yaml as input to oslo.config instead of broken ini
2. A validator that throws an error if one of our regular,
template-rendered configs uses a deprecated option

We can run this validator in gate to have quick feedback when
something gets deprecated.
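
(A rough sketch of such a validator in Python - the layout of the
machine-readable schema assumed below is a guess, not a documented contract:)

    import configparser
    import sys

    import yaml

    def deprecated_options(schema_path):
        """Collect (group, option) pairs the schema marks as deprecated."""
        with open(schema_path) as f:
            schema = yaml.safe_load(f)
        deprecated = set()
        # Assumed layout: options -> {group: {'opts': [{'name': ...,
        # 'deprecated_for_removal': bool}, ...]}}
        for group, data in (schema.get('options') or {}).items():
            for opt in data.get('opts', []):
                if opt.get('deprecated_for_removal'):
                    deprecated.add((group, opt['name']))
        return deprecated

    def check(conf_path, schema_path):
        """Return 'group/option' strings for deprecated options still in use."""
        conf = configparser.ConfigParser()
        conf.read(conf_path)
        used = {('DEFAULT', name) for name in conf.defaults()}
        for section in conf.sections():
            used.update((section, name) for name in conf.options(section))
        dep = deprecated_options(schema_path)
        return sorted('%s/%s' % pair for pair in used & dep)

    if __name__ == '__main__':
        hits = check(sys.argv[1], sys.argv[2])
        if hits:
            sys.exit('deprecated options in use: %s' % ', '.join(hits))

A gate job could then run this against every rendered config, e.g.
"python validate.py rendered/cinder.conf cinder-schema.yaml" (file names
illustrative), and fail fast when something gets deprecated.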

Thoughts?
Michal

On 16 June 2017 at 13:24, Emilien Macchi  wrote:
> On Fri, Jun 16, 2017 at 11:09 AM, Jiří Stránský  wrote:
>> On 15.6.2017 19:06, Emilien Macchi wrote:
>>>
>>> I missed [tripleo] tag.
>>>
>>> On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi 
>>> wrote:

 If you haven't followed the "Configuration management with etcd /
 confd" thread [1], Doug found out that using confd to generate
 configuration files wouldn't work for the Cinder case where we don't
 know in advance of the deployment what settings to tell confd to look
 at.
 We are still looking for a generic way to generate *.conf files for
 OpenStack, that would be usable by Deployment tools and operators.
 Right now, Doug and I are investigating some tooling that would be
 useful to achieve this goal.

 Doug has prototyped an Ansible role that would generate configuration
 files by consuming 2 things:

 * Configuration schema, generated by Ben's work with Machine Readable
 Sample Config.
$ oslo-config-generator --namespace cinder --format yaml >
 cinder-schema.yaml

 It also needs: https://review.openstack.org/#/c/474306/ to generate
 some extra data not included in the original version.

 * Parameters values provided in config_data directly in the playbook:
 config_data:
   DEFAULT:
 transport_url: rabbit://user:password@hostname
 verbose: true

 There are 2 options disabled by default but which would be useful for
 production environments:
 * Set to true to always show all configuration values:
 config_show_defaults
 * Set to true to show the help text: config_show_help: true

 The Ansible module is available on github:
 https://github.com/dhellmann/oslo-config-ansible

 To try this out, just run:
$ ansible-playbook ./playbook.yml

 You can quickly see the output of cinder.conf:
  https://clbin.com/HmS58


 What are the next steps:

 * Getting feedback from Deployment Tools and operators on the concept
 of this module.
Maybe this module could replace what is done by Kolla with
 merge_configs and OpenStack Ansible with config_template.
 * On the TripleO side, we would like to see if this module could
 replace the Puppet OpenStack modules that are now mostly used for
 generating configuration files for containers.
A transition path would be having Heat to generate Ansible vars
 files and give it to this module. We could integrate the playbook into
 a new task in the composable services, something like
"os_gen_config_tasks", a bit like we already have for upgrade tasks,
 also driven by Ansible.
>>
>>
>> This sounds good to me, though one issue i can presently see is that Puppet
>> modules sometimes contain quite a bit of data processing logic ("smart"
>> variables which map 1-to-N rather than 1-to-1 to actual config values, and
>> often not just in openstack service configs, e.g. puppet-nova also
>> configures libvirt, etc.). Also we use some non-config aspects from the
>> Puppet modules (e.g. seeding Keystone tenants/services/endpoints/...). We'd
>> need to implement this functionality elsewhere when replacing the Puppet
>> modules. Not a blocker, but something to keep in mind.
>
> 2 interesting things:
>
> - For the logic that is done by puppet modules for some parameters:
> yes, I agree, this problem isn't solved now. This thread talks about
> config management with some data as input; it's a very small step, I
> know, but that's on purpose.
>   Once we figure out how to do that, we can think about the data
> generation and where to put the logic (I think the logic is too
> opinionated to be in a common project, but I might be wrong).
> 
> - Things like libvirt, mysql, etc. will be managed by something other
> than Puppet, I think; this is off topic for now. For Keystone
> resources, same thing: we could use some native python clients or
> Ansible modules if we switch to 

Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread Clint Byrum
Excerpts from John Dickinson's message of 2017-06-16 11:35:39 -0700:
> 
> On 16 Jun 2017, at 10:51, Clint Byrum wrote:
> 
> > This is great work.
> >
> > I'm sure you've already thought of this, but could you explain why
> > you've chosen not to put the small objects in the k/v store as part of
> > the value rather than in secondary large files?
> 
> I don't want to co-opt an answer from Alex, but I do want to point to some of 
> the other background on this LOSF work.
> 
> https://wiki.openstack.org/wiki/Swift/ideas/small_files
> https://wiki.openstack.org/wiki/Swift/ideas/small_files/experimentations
> https://wiki.openstack.org/wiki/Swift/ideas/small_files/implementation
> 

These are great. Thanks for sharing them, I understand a lot more now.

> Look at the second link for some context to your answer, but the summary is 
> "that means writing a file system, and writing a file system is really hard".
> 

I'm not sure we were thinking the same thing.

I was more asking, why not put the content of the object into the k/v
instead of the big_file_id:offset? My thinking was that for smaller
objects, you would just return the data immediately upon reading the k/v,
rather than then needing to go find the big file and read the offset.
However, I'm painfully aware that those directly involved with the problem
have likely thought of this. That said, the experiments don't seem to show
that this was attempted. Perhaps I'm zooming too far out to see the real
problem space. You can all tell me to take my spray paint can and stop
staring at the bike shed if this is just too annoying. Seriously.

Of course, one important thing is, what does one consider "small"? Seems
like there's a size where the memory footprint of storing it in the
k/v would be justifiable if reads just returned immediately from k/v
vs. needing to also go get data from a big file on disk. Perhaps that
size is too low to really matter. I was hoping that this had been
considered and there was documentation, but I don't really see it.
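
(To make the question concrete, a toy value layout - the tag bytes and
threshold are invented for illustration, and the threshold is exactly the
open question above:)

    SMALL_OBJECT_MAX = 4096  # invented cutoff, in bytes

    def make_value(obj_data, big_file_id, offset):
        # Small objects: inline the content in the k/v value itself,
        # so a read is satisfied by the k/v lookup alone.
        if len(obj_data) <= SMALL_OBJECT_MAX:
            return b'I' + obj_data
        # Large objects: store only a big_file_id:offset pointer.
        return b'P%d:%d' % (big_file_id, offset)

    def read_object(kv, key, read_big_file):
        value = kv.get(key)
        if value.startswith(b'I'):  # inline hit: no extra disk seek
            return value[1:]
        fid, off = value[1:].split(b':')
        return read_big_file(int(fid), int(off))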

Also the "writing your own filesystem" option in experiments seemed
more like a thing to do if you left the k/v stores out entirely.



Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Samuel Cassiba


> On Jun 16, 2017, at 07:28, Jay Pipes  wrote:
> 
> On 06/16/2017 09:57 AM, Emilien Macchi wrote:
>> On Thu, Jun 15, 2017 at 4:50 PM, Dean Troyer  wrote:
>>> On Thu, Jun 15, 2017 at 10:33 AM, Jay Pipes  wrote:
 I'd fully support the removal of all deployment projects from the "official
 OpenStack projects list".
>>> 
>>> Nice to hear Jay! :)
>>> 
>>> It was intentional from the beginning to not be in the deployment
>>> space, we allowed those projects in (not unanimously IIRC) and most of
>>> them did not evolve as expected.
>> Just for the record, it also happens out of the deployment space. We
>> allowed (not unanimously either, irrc) some projects to be part of the
>> Big Tent and some of them have died or are dying.
> 
> Sure, and this is a natural thing.
> 
> As I mentioned, I support removing Fuel from the official OpenStack projects 
> list because the project has lost the majority of its contributors and 
> Mirantis has effectively moved in a different direction, causing Fuel to be a 
> wilting flower (to use Thierry's delightful terminology).

It is not my intention to hijack this, but reading the thread compelled me to 
respond, as maintainers of the #4 deployment tool per the recent user survey and all. 
To be frank, Chef is almost right where Fuel is heading. I’m a little surprised 
we haven’t been shown the door yet, since people keep saying we’re dead. When 
Chef finally did cut Newton, we said our size as a team limited what we could 
produce, and that we were effectively keeping the lights on. To borrow the 
analogy, if Fuel is a wilting flower, Chef is a tumbleweed. Against all odds, 
it just keeps on tumbling. Some even think it’s dead. :)

> 
>>> I would not mind picking one winner and spending effort making an
>>> extremely easy, smooth, upgradable install that is The OneTrue
>>> OpenStack, I do not expect us to ever agree what that will look like
>>> so it is effectively never going to happen.  We've seen how far
>>> single-vendor projects have gone, and none of them reached that level.
>> Regarding all the company efforts to invest in one deployment tool,
>> it's going to be super hard to find The OneTrue and convince everyone
>> else to work on it.
> 
> Right, as Dean said above :)

Operators will pick what works best for their infrastructure and their needs, 
so let them, and be there for them when they fuck up and need help. Prescribing 
a One True Method will alienate those who might otherwise become the biggest 
cheerleaders. If OpenStack wants to become a distro, that’s one thing. If not, 
we’re swinging the pendulum pretty hard.

> 
>> Future will tell us but it's possible that deployments tools will be
>> reduced to 2 or 3 projects if it continues that way (Fuel is slowly
>> dying, Puppet OpenStack has less and less contributors, same for Chef
>> afik, etc).
> 
> Not sure about that. OpenStack Ansible and Kolla have emerged over the last 
> couple years as very strong communities with lots of momentum.
> 
> Sure, Chef has effectively died and yes, Puppet has become less shiny.

Ahem. We’re not dead, just few, super distributed, and way stretched. We’re 
asynchronous to the point where pretty much only IRC and Gerrit make sense to 
use. I can understand how you might misconstrue this as rigor mortis, so allow 
me to illuminate. We still manage to muddle through a review or three a month. 
Sure, there isn’t the rapid cadence we’d all hoped there would be, but my last 
rant highlighted some of those deficiencies. Deployment tools, in this case 
Chef, really lack a solid orchestration component to build from nothing, which 
has become more of an essential thing to have in the development of OpenStack. 
Compound this with an overly complex CI process needed to resemble something 
close to the real world, and you have what we have today. Please, don’t start 
sweeping Chef out the door with Fuel. I won’t sugar coat it: it’s bad, but not 
to the point where we should say Chef has “died”. To say Chef has “died”, when 
we’re still pushing reviews, is mighty exclusionary and disrespectful of those 
who still dedicate time and resources, even if that wasn’t the intention. I do 
what I can to help new users along their path when they come across my radar. We 
still have newcomers. We still have semi-active contributors, no matter how 
many days pass between change sets.

Some things need a One True Path, but more so in what goes into the tools than 
tooling options themselves.  A set of standards would go well in that 
direction, but I refer you to the XKCD on standards in that case. I say this, 
lest we start alienating operators that can’t easily change the universe to 
turn their $10MM+ production clouds on a dime. Even at the Boston Summit, there 
were whispers of some people still using Chef. Chef hasn’t “effectively died”, 
just become way less shiny, boring even, without marketing and a strong team 
advocating for it. 

Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Sean McGinnis
> 
> So, before we go further, ceph seems to be -nv on all projects right
> now, right? So I get there is some debate on that patch, but is it
> blocking anything?
> 

Ceph is voting on os-brick patches. So it does block some things when
we run into this situation.

But again, we should avoid getting into this situation in the first
place, voting or no.




Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Sean McGinnis
> 
> == Need for a TC meeting next Tuesday ==
> 
> In order to make progress on the Pike goal selection, I think a
> dedicated IRC meeting will be necessary. We have a set of valid goals
> proposed already: we need to decide how many we should have, and which
> ones. Gerrit is not great to have that ranking discussion, so I think we
> should meet to come up with a set, and propose it on the mailing-list
> for discussion. We could use the regular meeting slot on Tuesday,
> 20:00utc. How does that sound ?
> 
> 

I have a busy couple of weeks of travel coming up, so I'm not sure if I will
be there or not. I will try to attend, and if that does not work out, I will
try to provide input via the ML before or after the meeting.

Sean



[openstack-dev] [neutron] no CI and upgrades meetings in next two weeks

2017-06-16 Thread Ihar Hrachyshka
Hi all,

subject says it all. Also note that upgrades meeting moved to
Thursday: https://review.openstack.org/474347

Ihar



Re: [openstack-dev] [neutron] security group OVO change

2017-06-16 Thread Ihar Hrachyshka
To close the loop here,

- this also broke heat py3 job (https://launchpad.net/bugs/1698355)
- we polished https://review.openstack.org/474575 to fix both
vmware-nsx and heat issues
- I also posted a patch for oslo.serialization for the bug that
triggered MemoryError in heat gate:
https://review.openstack.org/475052
- the vmware-nsx adoption patch is at:
https://review.openstack.org/#/c/474608/ and @boden is working on it,
should be ready to go in due course.

Thanks and sorry for the inconvenience,
Ihar

On Thu, Jun 15, 2017 at 6:17 AM, Gary Kotton  wrote:
> Hi,
>
> The commit https://review.openstack.org/284738 has broken decomposed plugins
> (those that extend security groups and rules). The reason for this is that
> there is an extend callback that we use which expects to get a database
> object and the aforementioned patch passes a new neutron object.
>
> I have posted [i] to temporarily address the issue. An alternative is to
> revert the patch until the decomposed plugins can figure out how to
> correctly address this.
>
> Thanks
>
> Gary
>
> [i] https://review.openstack.org/474575
>
>
>



Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-16 Thread Emilien Macchi
On Fri, Jun 16, 2017 at 11:09 AM, Jiří Stránský  wrote:
> On 15.6.2017 19:06, Emilien Macchi wrote:
>>
>> I missed [tripleo] tag.
>>
>> On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi 
>> wrote:
>>>
>>> If you haven't followed the "Configuration management with etcd /
>>> confd" thread [1], Doug found out that using confd to generate
>>> configuration files wouldn't work for the Cinder case where we don't
>>> know in advance of the deployment what settings to tell confd to look
>>> at.
>>> We are still looking for a generic way to generate *.conf files for
>>> OpenStack, that would be usable by Deployment tools and operators.
>>> Right now, Doug and I are investigating some tooling that would be
>>> useful to achieve this goal.
>>>
>>> Doug has prototyped an Ansible role that would generate configuration
>>> files by consuming 2 things:
>>>
>>> * Configuration schema, generated by Ben's work with Machine Readable
>>> Sample Config.
>>>$ oslo-config-generator --namespace cinder --format yaml >
>>> cinder-schema.yaml
>>>
>>> It also needs: https://review.openstack.org/#/c/474306/ to generate
>>> some extra data not included in the original version.
>>>
>>> * Parameters values provided in config_data directly in the playbook:
>>> config_data:
>>>   DEFAULT:
>>> transport_url: rabbit://user:password@hostname
>>> verbose: true
>>>
>>> There are 2 options disabled by default but which would be useful for
>>> production environments:
>>> * Set to true to always show all configuration values:
>>> config_show_defaults
>>> * Set to true to show the help text: config_show_help: true
>>>
>>> The Ansible module is available on github:
>>> https://github.com/dhellmann/oslo-config-ansible
>>>
>>> To try this out, just run:
>>>$ ansible-playbook ./playbook.yml
>>>
>>> You can quickly see the output of cinder.conf:
>>>  https://clbin.com/HmS58
>>>
>>>
>>> What are the next steps:
>>>
>>> * Getting feedback from Deployment Tools and operators on the concept
>>> of this module.
>>>Maybe this module could replace what is done by Kolla with
>>> merge_configs and OpenStack Ansible with config_template.
>>> * On the TripleO side, we would like to see if this module could
>>> replace the Puppet OpenStack modules that are now mostly used for
>>> generating configuration files for containers.
>>>A transition path would be having Heat to generate Ansible vars
>>> files and give it to this module. We could integrate the playbook into
>>> a new task in the composable services, something like
>>>"os_gen_config_tasks", a bit like we already have for upgrade tasks,
>>> also driven by Ansible.
>
>
> This sounds good to me, though one issue i can presently see is that Puppet
> modules sometimes contain quite a bit of data processing logic ("smart"
> variables which map 1-to-N rather than 1-to-1 to actual config values, and
> often not just in openstack service configs, e.g. puppet-nova also
> configures libvirt, etc.). Also we use some non-config aspects from the
> Puppet modules (e.g. seeding Keystone tenants/services/endpoints/...). We'd
> need to implement this functionality elsewhere when replacing the Puppet
> modules. Not a blocker, but something to keep in mind.

2 interesting things:

- For the logic that is done by puppet modules for some parameters:
yes, I agree, this problem isn't solved now. This thread talks about
config management with some data as input; it's a very small step, I
know, but that's on purpose.
  Once we figure out how to do that, we can think about the data
generation and where to put the logic (I think the logic is too
opinionated to be in a common project, but I might be wrong).

- Things like libvirt, mysql, etc. will be managed by something other
than Puppet, I think; this is off topic for now. For Keystone
resources, same thing: we could use some native python clients or
Ansible modules if we switch to Ansible, etc.

Again, the topic is really "give me an ini file".
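
(For concreteness, a minimal playbook wiring up the variables described in
the quoted mail above might look like this - the role name is an assumption
based on the github repo, while config_data and the config_show_* options
come straight from the thread:)

    - hosts: localhost
      vars:
        config_data:
          DEFAULT:
            transport_url: rabbit://user:password@hostname
            verbose: true
        config_show_defaults: false
        config_show_help: false
      roles:
        - oslo-config-ansible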

>>> * Another similar option to what Doug did is to write a standalone
>>> tool that would generate configuration, and for Ansible users we would
>>> write a new module to use this tool.
>>>Example:
>>>Step 1. oslo-config-generator --namespace cinder --format yaml >
>>> cinder-schema.yaml (note this tool already exists)
>>>Step 2. Create config_data.yaml in a specific format with
>>> parameters values for what we want to configure (note this format
>>> doesn't exist yet but look at what Doug did in the role, we could use
>>> the same kind of schema).
>>>Step 3. oslo-gen-config -i config_data.yaml -s schema.yaml >
>>> cinder.conf (note this tool doesn't exist yet)
>
>
> +1 on standalone tool which can be used in different contexts (by different
> higher level tools), this sounds generally useful.

Ack, good feedback.

>>>
>>>For Ansible users, we would write an Ansible module that would
>>> take in entry 2 files: the schema and 

Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Davanum Srinivas
Mikhail,

I have a TODO on my list - " adding a job that looks for new releases
and uploads them to tarballs periodically "

Thanks,
-- Dims

On Fri, Jun 16, 2017 at 3:32 PM, Mikhail Medvedev  wrote:
> On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague  wrote:
>> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>>> Hi All,
>>>   I just pushed a review [1] to bump the minimum etcd version to
>>> 3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
>>> cycle to be making changes like this, but releasing pike with a dependency
>>> on 3.1.x makes it harder for users on ppc64le (not many but a few :D)
>>>
>>> Yours Tony.
>>>
>>> [1] https://review.openstack.org/474825
>>
>> It should be fine, no one is really using these much at this point.
>> However it looks like mirroring is not happening automatically? The
>> patch fails on not existing in the infra mirror.
>>
>> -Sean
>>
>
> It appears so. Also, IIRC, the infra mirror would only host x86 binaries.
> Right now PowerKVM CI works by patching devstack-gate to override the
> infra etcd download URL. The fix [2] still needs to get merged to make
> it a bit easier to use d-g with your own etcd mirror.
>
> [2] https://review.openstack.org/#/c/467437/
>
> ---
> Mikhail Medvedev
> IBM OpenStack CI for KVM on Power
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Mikhail Medvedev
On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague  wrote:
> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>> Hi All,
>>   I just pushed a review [1] to bump the minimum etcd version to
>> 3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
>> cycle to be making changes like this, but releasing pike with a dependency
>> on 3.1.x makes it harder for users on ppc64le (not many but a few :D)
>>
>> Yours Tony.
>>
>> [1] https://review.openstack.org/474825
>
> It should be fine, no one is really using these much at this point.
> However it looks like mirroring is not happening automatically? The
> patch fails on not existing in the infra mirror.
>
> -Sean
>

It appears so. Also, IIRC, the infra mirror would only host x86 binaries.
Right now PowerKVM CI works by patching devstack-gate to override the
infra etcd download URL. The fix [2] still needs to get merged to make
it a bit easier to use d-g with your own etcd mirror.

[2] https://review.openstack.org/#/c/467437/

---
Mikhail Medvedev
IBM OpenStack CI for KVM on Power



Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread John Dickinson


On 16 Jun 2017, at 10:51, Clint Byrum wrote:

> This is great work.
>
> I'm sure you've already thought of this, but could you explain why
> you've chosen not to put the small objects in the k/v store as part of
> the value rather than in secondary large files?

I don't want to co-opt an answer from Alex, but I do want to point to some of 
the other background on this LOSF work.

https://wiki.openstack.org/wiki/Swift/ideas/small_files
https://wiki.openstack.org/wiki/Swift/ideas/small_files/experimentations
https://wiki.openstack.org/wiki/Swift/ideas/small_files/implementation

Look at the second link for some context to your answer, but the summary is 
"that means writing a file system, and writing a file system is really hard".

--John



>
> Excerpts from Alexandre Lécuyer's message of 2017-06-16 15:54:08 +0200:
>> Swift stores objects on a regular filesystem (XFS is recommended), one file 
>> per object. While it works fine for medium or big objects, when you have 
>> lots of small objects you can run into issues: because of the high count of 
>> inodes on the object servers, they can’t stay in cache, implying a lot of 
>> memory usage and IO operations to fetch inodes from disk.
>>
>> In the past few months, we’ve been working on implementing a new storage 
>> backend in Swift. It is highly inspired by haystack[1]. In a few words, 
>> objects are stored in big files, and a Key/Value store provides information 
>> to locate an object (object hash -> big_file_id:offset). As the mapping in 
>> the K/V consumes less memory than an inode, it is possible to keep all 
>> entries in memory, saving a lot of IO to locate the object. It also allows some 
>> performance improvements by limiting the XFS meta updates (e.g.: almost no 
>> inode updates as we write objects by using fdatasync() instead of fsync())
>>
>> One of the questions that was raised during discussions about this design 
>> is: do we want one K/V store per device, or one K/V store per Swift 
>> partition (= multiple K/V per device). The concern was about failure domain. 
>> If the only K/V gets corrupted, the whole device must be reconstructed. 
>> Memory usage is a major point in making a decision, so we did some benchmark.
>>
>> The key-value store is implemented over LevelDB.
>> Given a single disk with 20 million files (could be either one object 
>> replica or one fragment, if using EC)
>>
>> I have tested three cases :
>>- single KV for the whole disk
>>- one KV per partition, with 100 partitions per disk
>>- one KV per partition, with 1000 partitions per disk
>>
>> Single KV for the disk :
>>- DB size: 750 MB
>>- bytes per object: 38
>>
>> One KV per partition :
>> Assuming :
>>- 100 partitions on the disk (=> 100 KV)
>>- 16 bits part power (=> all keys in a given KV will have the same 16 bit 
>> prefix)
>>
>>- 7916 KB per KV, total DB size: 773 MB
>>- bytes per object: 41
>>
>> One KV per partition :
>> Assuming :
>>- 1000 partitions on the disk (=> 1000 KV)
>>- 16 bits part power (=> all keys in a given KV will have the same 16 bit 
>> prefix)
>>
>>- 1388 KB per KV, total DB size: 1355 MB total
>>- bytes per object: 71
>>
>>
>> A typical server we use for swift clusters has 36 drives, which gives us :
>> - Single KV : 26 GB
>> - Split KV, 100 partitions : 28 GB (+7%)
>> - Split KV, 1000 partitions : 48 GB (+85%)
>>
>> So, splitting seems reasonable if you don't have too many partitions.
>>
>> Same test, with 10 million files instead of 20
>>
>> - Single KV : 13 GB
>> - Split KV, 100 partitions : 18 GB (+38%)
>> - Split KV, 1000 partitions : 24 GB (+85%)
>>
>>
>> Finally, if we run a full compaction on the DB after the test, you get the
>> same memory usage in all cases, about 32 bytes per object.
>>
>> We have not made enough tests to know what would happen in production. 
>> LevelDB
>> does trigger compaction automatically on parts of the DB, but continuous 
>> change
>> means we probably would not reach the smallest possible size.
>>
>>
>> Beyond the size issue, there are other things to consider :
>> File descriptors limits : LevelDB seems to keep at least 4 file descriptors 
>> open during operation.
>>
>> Having one KV per partition also means you have to move entries between KVs 
>> when you change the part power. (if we want to support that)
>>
>> A compromise may be to split KVs on a small prefix of the object's hash, 
>> independent of swift's configuration.
>>
>> As you can see we're still thinking about this. Any ideas are welcome !
>> We will keep you updated about more "real world" testing. Among the tests we 
>> plan to check how resilient the DB is in case of a power loss.
>>
>

Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Mike Perez
On 11:17 Jun 16, Thierry Carrez wrote:
 


> == Need for a TC meeting next Tuesday ==
> 
> In order to make progress on the Pike goal selection, I think a
> dedicated IRC meeting will be necessary. We have a set of valid goals
> proposed already: we need to decide how many we should have, and which
> ones. Gerrit is not great to have that ranking discussion, so I think we
> should meet to come up with a set, and propose it on the mailing-list
> for discussion. We could use the regular meeting slot on Tuesday,
> 20:00utc. How does that sound ?

I will be there since I started facilitating this back at the forum.

-- 
Mike Perez




Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread Clint Byrum
This is great work.

I'm sure you've already thought of this, but could you explain why
you've chosen not to put the small objects in the k/v store as part of
the value rather than in secondary large files?

Excerpts from Alexandre Lécuyer's message of 2017-06-16 15:54:08 +0200:
> Swift stores objects on a regular filesystem (XFS is recommended), one file 
> per object. While it works fine for medium or big objects, when you have lots 
> of small objects you can run into issues: because of the high count of inodes 
> on the object servers, they can’t stay in cache, implying lot of memory usage 
> and IO operations to fetch inodes from disk.
> 
> In the past few months, we’ve been working on implementing a new storage 
> backend in Swift. It is highly inspired by haystack[1]. In a few words, 
> objects are stored in big files, and a Key/Value store provides information 
> to locate an object (object hash -> big_file_id:offset). As the mapping in 
> the K/V consumes less memory than an inode, it is possible to keep all 
> entries in memory, saving a lot of IO to locate the object. It also allows some 
> performance improvements by limiting the XFS meta updates (e.g.: almost no 
> inode updates as we write objects by using fdatasync() instead of fsync())
> 
> One of the questions that was raised during discussions about this design is: 
> do we want one K/V store per device, or one K/V store per Swift partition (= 
> multiple K/V per device). The concern was about failure domain. If the only 
> K/V gets corrupted, the whole device must be reconstructed. Memory usage is a 
> major point in making a decision, so we did some benchmark.
> 
> The key-value store is implemented over LevelDB.
> Given a single disk with 20 million files (could be either one object replica 
> or one fragment, if using EC)
> 
> I have tested three cases :
>- single KV for the whole disk
>- one KV per partition, with 100 partitions per disk
>- one KV per partition, with 1000 partitions per disk
> 
> Single KV for the disk :
>- DB size: 750 MB
>- bytes per object: 38
> 
> One KV per partition :
> Assuming :
>- 100 partitions on the disk (=> 100 KV)
>- 16 bits part power (=> all keys in a given KV will have the same 16 bit 
> prefix)
> 
>- 7916 KB per KV, total DB size: 773 MB
>- bytes per object: 41
> 
> One KV per partition :
> Assuming :
>- 1000 partitions on the disk (=> 1000 KV)
>- 16 bits part power (=> all keys in a given KV will have the same 16 bit 
> prefix)
> 
>- 1388 KB per KV, total DB size: 1355 MB total
>- bytes per object: 71
>
> 
> A typical server we use for swift clusters has 36 drives, which gives us :
> - Single KV : 26 GB
> - Split KV, 100 partitions : 28 GB (+7%)
> - Split KV, 1000 partitions : 48 GB (+85%)
> 
> So, splitting seems reasonable if you don't have too many partitions.
> 
> Same test, with 10 million files instead of 20
> 
> - Single KV : 13 GB
> - Split KV, 100 partitions : 18 GB (+38%)
> - Split KV, 1000 partitions : 24 GB (+85%)
> 
> 
> Finally, if we run a full compaction on the DB after the test, you get the
> same memory usage in all cases, about 32 bytes per object.
> 
> We have not made enough tests to know what would happen in production. LevelDB
> does trigger compaction automatically on parts of the DB, but continuous 
> change
> means we probably would not reach the smallest possible size.
> 
> 
> Beyond the size issue, there are other things to consider :
> File descriptors limits : LevelDB seems to keep at least 4 file descriptors 
> open during operation.
> 
> Having one KV per partition also means you have to move entries between KVs 
> when you change the part power. (if we want to support that)
> 
> A compromise may be to split KVs on a small prefix of the object's hash, 
> independent of swift's configuration.
> 
> As you can see we're still thinking about this. Any ideas are welcome !
> We will keep you updated about more "real world" testing. Among the tests we 
> plan to check how resilient the DB is in case of a power loss.
> 



Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread John Dickinson
Alex, this is fantastic work and great info. Thanks for sharing it.

Additional comments inline.

On 16 Jun 2017, at 6:54, Alexandre Lécuyer wrote:

> Swift stores objects on a regular filesystem (XFS is recommended), one file 
> per object. While it works fine for medium or big objects, when you have lots 
> of small objects you can run into issues: because of the high count of inodes 
> on the object servers, they can’t stay in cache, implying a lot of memory usage 
> and IO operations to fetch inodes from disk.
>
> In the past few months, we’ve been working on implementing a new storage 
> backend in Swift. It is highly inspired by haystack[1]. In a few words, 
> objects are stored in big files, and a Key/Value store provides information 
> to locate an object (object hash -> big_file_id:offset). As the mapping in 
> the K/V consumes less memory than an inode, it is possible to keep all 
> entries in memory, saving a lot of IO to locate the object. It also allows some 
> performance improvements by limiting the XFS meta updates (e.g.: almost no 
> inode updates as we write objects by using fdatasync() instead of fsync())
>
> One of the questions that was raised during discussions about this design is: 
> do we want one K/V store per device, or one K/V store per Swift partition (= 
> multiple K/V per device). The concern was about failure domain. If the only 
> K/V gets corrupted, the whole device must be reconstructed. Memory usage is a 
> major point in making a decision, so we did some benchmark.
>
> The key-value store is implemented over LevelDB.
> Given a single disk with 20 million files (could be either one object replica 
> or one fragment, if using EC)
>
> I have tested three cases :
>   - single KV for the whole disk
>   - one KV per partition, with 100 partitions per disk
>   - one KV per partition, with 1000 partitions per disk
>
> Single KV for the disk :
>   - DB size: 750 MB
>   - bytes per object: 38
>
> One KV per partition :
> Assuming :
>   - 100 partitions on the disk (=> 100 KV)
>   - 16 bits part power (=> all keys in a given KV will have the same 16 bit 
> prefix)
>
>   - 7916 KB per KV, total DB size: 773 MB
>   - bytes per object: 41
>
> One KV per partition :
> Assuming :
>   - 1000 partitions on the disk (=> 1000 KV)
>   - 16 bits part power (=> all keys in a given KV will have the same 16 bit 
> prefix)
>
>   - 1388 KB per KV, total DB size: 1355 MB total
>   - bytes per object: 71
>
> A typical server we use for swift clusters has 36 drives, which gives us :
> - Single KV : 26 GB
> - Split KV, 100 partitions : 28 GB (+7%)
> - Split KV, 1000 partitions : 48 GB (+85%)
>
> So, splitting seems reasonable if you don't have too many partitions.
>
> Same test, with 10 million files instead of 20
>
> - Single KV : 13 GB
> - Split KV, 100 partitions : 18 GB (+38%)
> - Split KV, 1000 partitions : 24 GB (+85%)
>
>
> Finally, if we run a full compaction on the DB after the test, you get the
> same memory usage in all cases, about 32 bytes per object.
>
> We have not made enough tests to know what would happen in production. LevelDB
> does trigger compaction automatically on parts of the DB, but continuous 
> change
> means we probably would not reach the smallest possible size.

This is likely a very good assumption (that the KV will continuously change and 
never get to minimum size).

My initial instinct is to go for one KV per drive.

One per partition does sound nice, but it is more sensitive to proper cluster 
configuration and deployment. For example, if an operator were to deploy a 
relatively small cluster but have a part power that's too big for the capacity, 
the KV strategy would end up with many thousands of mostly-empty partitions 
(imagine a 5-node cluster, 60 drives with a part power of 18 -- you're looking 
at more than 13k parts per drive per storage policy). Going for one KV per 
whole drive means that poor ring settings won't impact this area of storage as 
much.
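
(For the arithmetic behind that figure, assuming 3 replicas: 2^18 = 262,144 
partitions, times 3 replicas is 786,432 partition-replicas, spread across 60 
drives is roughly 13,100 parts per drive.)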

>
>
> Beyond the size issue, there are other things to consider :
> File descriptors limits : LevelDB seems to keep at least 4 file descriptors 
> open during operation.
>
> Having one KV per partition also means you have to move entries between KVs 
> when you change the part power. (if we want to support that)

Yes, let's support that (in general)! But doing one KV per drive means it 
already works for this LOSF work.

>
> A compromise may be to split KVs on a small prefix of the object's hash, 
> independent of swift's configuration.

This is an interesting idea to explore. It will allow for smaller individual KV 
stores without being as sensitive to the ring parameters.
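
(A sketch of that prefix-based split - the prefix length here is arbitrary:)

    def kv_index(object_hash, prefix_bits=8):
        """Route an object to one of 2**prefix_bits KVs on the disk using a
        fixed prefix of its hex hash, independent of the ring's part power."""
        return int(object_hash[:8], 16) >> (32 - prefix_bits)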

>
> As you can see we're still thinking about this. Any ideas are welcome !
> We will keep you updated about more "real world" testing. Among the tests we 
> plan to check how resilient the DB is in case of a power loss.

I'd also be very interested in other tests around concurrent access to the KV 
store. If we've only got one per whole 

Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Michał Jastrzębski
Since there are 2 topics that are very very important to me:
1. binary resolution waiting for votes
2. kolla stable:follows-policy tag

Is there anything I can do to help with either?

On 16 June 2017 at 09:23, Thierry Carrez  wrote:
> Clay Gerrard wrote:
>> I'm loving this new ML thing the TC is doing!  Like... I'm not going to
>> come to the meeting.  I'm not a helpful person in general and probably
>> wouldn't have anything productive to say.
>>
>> But I love the *idea* that I know *when and where* this is being decided
>> so that if I *did* care enough about community goals to come make a
>> stink about it I know exactly what I should do - _show up and say my
>> piece_!  Just this *idea* is going to help a *ton* later when John tells
>> me "shut up clay; just review the patch" [1] - because if I had
>> something to say about it i should have been there when it was time to
>> say something about it!
>
> FWIW the "decision" won't be made at the meeting, but we'll try to reach
> consensus on the set of goals we find reasonable to propose. Expect
> another heated thread as a result of the meeting :)
>
> --
> Thierry Carrez (ttx)
>



Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Thierry Carrez
Clay Gerrard wrote:
> I'm loving this new ML thing the TC is doing!  Like... I'm not going to
> come to the meeting.  I'm not a helpful person in general and probably
> wouldn't have anything productive to say.
> 
> But I love the *idea* that I know *when and where* this is being decided
> so that if I *did* care enough about community goals to come make a
> stink about it I know exactly what I should do - _show up and say my
> piece_!  Just this *idea* is going to help a *ton* later when John tells
> me "shut up clay; just review the patch" [1] - because if I had
> something to say about it i should have been there when it was time to
> say something about it!

FWIW the "decision" won't be made at the meeting, but we'll try to reach
consensus on the set of goals we find reasonable to propose. Expect
another heated thread as a result of the meeting :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Keystone][Mistral][Devstack] Confusion between auth_url and auth_uri in keystone middleware

2017-06-16 Thread Brant Knudson
On Thu, Jun 15, 2017 at 1:12 PM, Harry Rybacki  wrote:

> On Thu, Jun 15, 2017 at 1:57 PM, Brant Knudson  wrote:
> >
> >
> > On Thu, Jun 15, 2017 at 5:14 AM, Mikhail Fedosin 
> wrote:
> >>
> >> Recently I decided to remove deprecated parameters from the keystone_authtoken
> >> section of mistral config and replace them with the recommended function of
> >> devstack [1]. In doing so, I discovered a strange behavior of the configuration
> >> mechanism, specifically the parameters auth_uri and auth_url.
> >>
> >> 1. The parameter auth_url is not included in the list of the middleware
> >> parameters, there is auth_uri only [2]. Nevertheless, it must be present,
> >> because it is required by the identity plugin [3]. Attempts to remove or
> >> replace it with the recommended auth_uri result in these stack traces [4].
> >>
> >> 2. Even if auth_url is set, it can't be used later, because it is not
> >> registered in oslo_config [5]
> >>
> >> So I would like to get advice from the keystone team and understand what I
> >> should do in such cases. The official documentation doesn't add clarity on the
> >> matter because it recommends using auth_uri in some cases and auth_url in
> >> others.
> >
> >
> > While to a human auth_uri and auth_url might look very similar, they're
> > treated completely differently by auth_token / keystoneauth. One doesn't
> > replace the other in any way. So it shouldn't be surprising that
> > documentation would say to use auth_uri for one thing and auth_url for
> > something else.
> >
> In this case it's probably worth filing a docs bug against Keystone.
> If one person is confused by this, others likely are or will be.
>
> - Harry
>
>
I created a bug against keystonemiddleware:
https://bugs.launchpad.net/keystonemiddleware/+bug/1698401 . HTH.

- Brant
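
(For readers hitting the same confusion: the two options serve different
purposes, so a typical [keystone_authtoken] section carries both. The
endpoints and ports below are illustrative:)

    [keystone_authtoken]
    # auth_uri: the public identity endpoint advertised to API users,
    # e.g. in the WWW-Authenticate header on a 401
    auth_uri = http://controller:5000
    # auth_url: consumed by the keystoneauth identity plugin selected via
    # auth_type; used by the service itself to fetch and validate tokens
    auth_type = password
    auth_url = http://controller:35357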


> >  - Brant
> >
> >
> >>
> >> My suggestion is to add auth_url to the list of keystone authtoken
> >> middleware config options, so that the parameter can be used by the others.
> >>
> >> Best,
> >> Mike
> >>
> >> [1] https://review.openstack.org/#/c/473796/
> >> [2] https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/_opts.py#L31
> >> [3] https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/loading/identity.py#L37
> >> [4] http://paste.openstack.org/show/612662/
> >> [5] http://paste.openstack.org/show/612664/
> >>



-- 
- Brant


Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Clay Gerrard
On Fri, Jun 16, 2017 at 7:19 AM, Doug Hellmann 
wrote:

> Excerpts from Thierry Carrez's message of 2017-06-16 11:17:30 +0200:
>
> > == Need for a TC meeting next Tuesday ==
> >
> > In order to make progress on the Pike goal selection, I think a
> > dedicated IRC meeting will be necessary. We have a set of valid goals
> > proposed already: we need to decide how many we should have, and which
> > ones. Gerrit is not great to have that ranking discussion, so I think we
> > should meet to come up with a set, and propose it on the mailing-list
> > for discussion. We could use the regular meeting slot on Tuesday,
> > 20:00utc. How does that sound ?
> >
>
> +1
>
>
I'm loving this new ML thing the TC is doing!  Like... I'm not going to
come to the meeting.  I'm not a helpful person in general and probably
wouldn't have anything productive to say.

But I love the *idea* that I know *when and where* this is being decided so
that if I *did* care enough about community goals to come make a stink
about it I know exactly what I should do - _show up and say my piece_!
Just this *idea* is going to help a *ton* later when John tells me "shut up
clay; just review the patch" [1] - because if I had something to say about
it i should have been there when it was time to say something about it!

Obvs, if anyone *else* has a passion about community goals and how
OpenStack uses them to push for positive change in the broader ecosystem
(and thinks they can elucidate that on IRC to positive results), *YOU*
should *totally* be there!

Y'all have fun,

-Clay

1. N.B. john is *not* a high conflict guy; but he's dealt with me for
~20 years so he gets a pass


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-06-16 Thread Joshua Hesketh
On Sat, Jun 17, 2017 at 12:14 AM, Jeremy Stanley  wrote:

> On 2017-06-16 15:12:36 +1000 (+1000), Tony Breeds wrote:
> [...]
> > It seems a little odd to be following up so long after I first started
> > this thread, but can someone on infra please process the EOLs as
> > described in [1].
> [...]
>
> I thought in prior discussions it had been determined that the
> Stable Branch team was going to start taking care of abandoning open
> changes and tagging branch tips (and eventually deleting the
> branches once we upgrade to a newer Gerrit release). Looks like some
> of these still have open changes and nothing has been tagged, so you
> want the Infra team to take those tasks back over for this round?
> Did you have any scripts you wanted used for this, or should I just
> wing it for now like I did in the past?
>

I'm happy to help do this if you'd like. Otherwise the script I've used for
the last few retirements is here:
http://git.openstack.org/cgit/openstack-infra/release-tools/tree/eol_branch.sh

I believe the intention was to add some hardening around that script and
automate it. However, I think it was put on hold awaiting a new Gerrit...
either that, or nobody took it up.

Cheers,
Josh



> --
> Jeremy Stanley
>


Re: [openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-16 Thread Pavlo Shchelokovskyy
HI all,

On Fri, Jun 16, 2017 at 4:33 PM, Zane Bitter  wrote:

> On 16/06/17 05:09, Kaz Shinohara wrote:
>
>> I still use `deferred_auth_method=password` instead of trusts because
>> we don't enable trusts on the Keystone side due to some internal reason.
>>
>
> Free advice: whatever reason you have for not enabling trusts, storing
> user passwords in the Heat database is 100x worse.
>
>> The issues you pointed out are correct (e.g. user_domain_id); we don't use
>> the domain properly and also added some patches to skip those issues.
>>
>
> Why aren't those upstream?
>
>> But I guess that the majority of heat users have already moved to trusts, and it
>> is obviously a better solution in terms of security and granular role control.
>> As an edge case (perhaps), if a user wants to use password auth, it would
>> be too tricky for them to introduce it, therefore I agree with your 2nd option.
>>
>> If we will remove the `deferred_auth_method=password` from heat.conf,
>> should we keep `deferred_auth_method` self or will replace it to a new
>> config option just to specify the trusts enable/disable ?  Do you have any
>> idea on this?
>> Also I'm thinking that `reauthentication_method` also might be
>> changed/merged ?
>>
>> Regards,
>> Kaz Shinohara
>>
>>
>> 2017-06-16 14:11 GMT+09:00 Rabi Mishra  ramis...@redhat.com>>:
>>
>
> [snip]
>
> I'm not sure whether this works with keystone v2, or whether anyone is using
>> it or not. Keeping in mind that heat-cli is deprecated and keystone
>> v3 is now the default, we have 2 options:
>>
>> 1. Continue to support the 'deferred_auth_method=password' option and
>> fix all the above issues.
>>
>> 2. Remove/deprecate the option in Pike itself.
>>
>> I would prefer option 2, but probably I'm missing some history and use
>> cases for it.
>>
>
> Am I right in thinking that any user (i.e. not just the [heat] service
> user) can create a trust? I still see occasional requests about 'standalone
> mode' for clouds that don't have Heat available to users (which I suspect
> is broken, otherwise people wouldn't be asking), and I'm guessing that
> standalone mode has heretofore required deferred_auth_method=password.
>

When trusts are enabled, generally any user can create a trust to any other
user, but only with itself as the trustor - there's a strict rule for that in
the default keystone policy.json [0]. The only other reason that might block
this is when the user is already a trustee, and trust chaining is disabled
or already exhausted for this trustee. A tiny problem might be that it
seems you need to know both the user_id/project_id of the trustor (which can
be resolved by the trustor itself) and the user_id of the trustee - which is
generally impossible for non-admin users to look up, so the trustee must give
the trustor its id.

[0]
http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json#n138
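
To make that concrete, here's a minimal sketch of creating such a trust with
python-keystoneclient; the endpoint, credentials and trustee id below are
placeholder assumptions, not a recipe:

    # Hedged sketch: a trustor delegates one of its roles on its own
    # project to a trustee whose user id was obtained out of band,
    # as described above.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    keystone = client.Client(session=sess)

    trust = keystone.trusts.create(
        trustor_user=sess.get_user_id(),  # must be the caller itself
        trustee_user='TRUSTEE_USER_ID',   # placeholder, provided by trustee
        project=sess.get_project_id(),
        role_names=['member'],
        impersonation=True)
    print(trust.id)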


> So if we're going to remove the option then we should probably either
> officially disown standalone mode or rewrite the instructions such that it
> can be used with the trusts method.
>
> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Cheers,
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-16 Thread Jiří Stránský

On 15.6.2017 19:06, Emilien Macchi wrote:

I missed [tripleo] tag.

On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi  wrote:

If you haven't followed the "Configuration management with etcd /
confd" thread [1], Doug found out that using confd to generate
configuration files wouldn't work for the Cinder case where we don't
know in advance of the deployment what settings to tell confd to look
at.
We are still looking for a generic way to generate *.conf files for
OpenStack, that would be usable by Deployment tools and operators.
Right now, Doug and I are investigating some tooling that would be
useful to achieve this goal.

Doug has prototyped an Ansible role that would generate configuration
files by consuming 2 things:

* Configuration schema, generated by Ben's work with Machine Readable
Sample Config.
   $ oslo-config-generator --namespace cinder --format yaml > cinder-schema.yaml

It also needs: https://review.openstack.org/#/c/474306/ to generate
some extra data not included in the original version.

* Parameters values provided in config_data directly in the playbook:
config_data:
  DEFAULT:
transport_url: rabbit://user:password@hostname
verbose: true

There are 2 options, disabled by default, which would be useful for
production environments:
* config_show_defaults: set to true to always show all configuration values
* config_show_help: set to true to show the help text

The Ansible module is available on github:
https://github.com/dhellmann/oslo-config-ansible

To try this out, just run:
   $ ansible-playbook ./playbook.yml

You can quickly see the output of cinder.conf:
 https://clbin.com/HmS58


What are the next steps:

* Getting feedback from Deployment Tools and operators on the concept
of this module.
   Maybe this module could replace what is done by Kolla with
merge_configs and OpenStack Ansible with config_template.
* On the TripleO side, we would like to see if this module could
replace the Puppet OpenStack modules that are now mostly used for
generating configuration files for containers.
   A transition path would be having Heat generate Ansible vars
files and give them to this module. We could integrate the playbook into
a new task in the composable services, something like
   "os_gen_config_tasks", a bit like we already have for upgrade tasks,
also driven by Ansible.


This sounds good to me, though one issue I can presently see is that 
Puppet modules sometimes contain quite a bit of data processing logic 
("smart" variables which map 1-to-N rather than 1-to-1 to actual config 
values, and often not just in openstack service configs, e.g. 
puppet-nova also configures libvirt, etc.). Also we use some non-config 
aspects from the Puppet modules (e.g. seeding Keystone 
tenants/services/endpoints/...). We'd need to implement this 
functionality elsewhere when replacing the Puppet modules. Not a 
blocker, but something to keep in mind.



* Another similar option to what Doug did is to write a standalone
tool that would generate configuration, and for Ansible users we would
write a new module to use this tool.
   Example:
   Step 1. oslo-config-generator --namespace cinder --format yaml >
cinder-schema.yaml (note this tool already exists)
   Step 2. Create config_data.yaml in a specific format with
parameters values for what we want to configure (note this format
doesn't exist yet but look at what Doug did in the role, we could use
the same kind of schema).
   Step 3. oslo-gen-config -i config_data.yaml -s schema.yaml >
cinder.conf (note this tool doesn't exist yet)
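
To illustrate the idea, here's a minimal sketch of what step 3 could do; the
schema and data layouts below are simplified assumptions based on the examples
in this thread, not the actual machine-readable sample-config format:

    # Hedged sketch of an oslo-gen-config-style renderer: overlay the
    # values from the data file on the schema's defaults and emit INI.
    # Assumes a schema shaped like {section: {option: {'default': ...}}}.
    import sys
    import yaml

    def render(schema_path, data_path, out=sys.stdout):
        with open(schema_path) as f:
            schema = yaml.safe_load(f)
        with open(data_path) as f:
            data = yaml.safe_load(f) or {}
        for section, opts in schema.items():
            out.write('[%s]\n' % section)
            for name, opt in opts.items():
                # an explicitly provided value wins over the default
                value = data.get(section, {}).get(name, opt.get('default'))
                if value is not None:
                    out.write('%s = %s\n' % (name, value))
            out.write('\n')

    if __name__ == '__main__':
        render('cinder-schema.yaml', 'config_data.yaml')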


+1 on a standalone tool which can be used in different contexts (by 
different higher-level tools); this sounds generally useful.




   For Ansible users, we would write an Ansible module that would
take in entry 2 files: the schema and the data. The module would just
run the tool provided by oslo.config.
   Example:
   - name: Generate cinder.conf
 oslo-gen-config: schema=cinder-schema.yaml
data=config_data.yaml


+1 for a module rather than a role. "Take these inputs and produce that 
output" fits module semantics better than role semantics IMO.


FWIW, as I see it right now, this ^^ + ConfigMaps + immutable-config 
containers could result in a nicer/safer/more-debuggable containerized 
OpenStack setup than etcd + confd in daemon mode + mutable-config 
containers.





Please bring feedback and thoughts, it's really important to know what
folks from Installers think about this idea; again the ultimate goal
is to provide a reference tool to generate configuration in OpenStack,
in a way that scales and is friendly for our operators.

Thanks,

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118176.html
--
Emilien Macchi






Have a good day,

Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Sean Dague
On 06/16/2017 10:46 AM, Eric Harney wrote:
> On 06/16/2017 10:21 AM, Sean McGinnis wrote:
>>
>> I don't think merging tests that are showing failures, then blacklisting
>> them, is the right approach. And as Eric points out, this isn't
>> necessarily just a failure with Ceph. There is a legitimate logical
>> issue with what this particular test is doing.
>>
>> But in general, to get back to some of the earlier points, I don't think
>> we should be merging tests with known breakages until those breakages
>> can be first addressed.
>>
> 
> As another example, this was the last round of this, in May:
> 
> https://review.openstack.org/#/c/332670/
> 
> which is a new tempest test for a Cinder API that is not supported by
> all drivers.  The Ceph job failed on the tempest patch, correctly, the
> test was merged, then the Ceph jobs broke:
> 
> https://bugs.launchpad.net/glance/+bug/1687538
> https://review.openstack.org/#/c/461625/
> 
> This is really not a sustainable model.
> 
> And this is the _easy_ case, since Ceph jobs run in OpenStack infra and
> are easily visible and trackable.  I'm not sure what the impact is on
> Cinder third-party CI for other drivers.

Ah, so the issue is that
gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial is
voting, because when the regex was made to stop Ceph jobs from voting
(which they aren't on Nova, Tempest, Glance, or Cinder), it wasn't
applied there.

It's also a question of why a library is doing different back-end
testing through full-stack testing, instead of more targeted and
controlled behavior, which I think is probably also less than ideal.

Both would be good things to fix.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Sean Dague
On 06/16/2017 09:51 AM, Sean McGinnis wrote:
>>
>> It would be useful to provide detailed examples. Everything is trade
>> offs, and having the conversation in the abstract is very difficult to
>> understand those trade offs.
>>
>>  -Sean
>>
> 
> We've had this issue in Cinder and os-brick. Usually around Ceph, but if
> you follow the user survey, that's the most popular backend.
> 
> The problem we see is the tempest test that covers this is non-voting.
> And there have been several cases so far where this non-voting job does
> not pass, due to a legitimate failure, but the tempest patch merges anyway.
> 
> 
> To be fair, these failures usually do point out actual problems that need
> to be fixed. Not always, but at least in a few cases. But instead of it
> being addressed first to make sure there is no disruption, it's suddenly
> a blocking issue that holds up everything until it's either reverted, skipped,
> or the problem is resolved.
> 
> Here's one recent instance: https://review.openstack.org/#/c/471352/

So, before we go further, ceph seems to be -nv on all projects right
now, right? So I get there is some debate on that patch, but is it
blocking anything?

Again, we seem to be missing specifics and a sequence of events here;
lacking those, everyone is trying to guess what the problems are, which I
don't think is effective.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Eric Harney
On 06/16/2017 10:21 AM, Sean McGinnis wrote:
> 
> I don't think merging tests that are showing failures, then blacklisting
> them, is the right approach. And as Eric points out, this isn't
> necessarily just a failure with Ceph. There is a legitimate logical
> issue with what this particular test is doing.
> 
> But in general, to get back to some of the earlier points, I don't think
> we should be merging tests with known breakages until those breakages
> can be first addressed.
> 

As another example, this was the last round of this, in May:

https://review.openstack.org/#/c/332670/

which is a new tempest test for a Cinder API that is not supported by
all drivers.  The Ceph job failed on the tempest patch, correctly, the
test was merged, then the Ceph jobs broke:

https://bugs.launchpad.net/glance/+bug/1687538
https://review.openstack.org/#/c/461625/

This is really not a sustainable model.

And this is the _easy_ case, since Ceph jobs run in OpenStack infra and
are easily visible and trackable.  I'm not sure what the impact is on
Cinder third-party CI for other drivers.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Dean Troyer
On Fri, Jun 16, 2017 at 8:57 AM, Emilien Macchi  wrote:
> Regarding all the company efforts to invest in one deployment tool,
> it's going to be super hard to find The OneTrue and convince everyone
> else to work on it.

The idea is not that everyone works on it, it is simply that OpenStack
_does_ have one single known, tested, common way to do things, i.e.
fulfilling the role that many people think DevStack should have filled when
not used for dev/testing. I make this point not to suggest that this
is the way forward, but as the starting point from which we hear a LOT
of requests and questions.

> Future will tell us but it's possible that deployments tools will be
> reduced to 2 or 3 projects if it continues that way (Fuel is slowly
> dying, Puppet OpenStack has less and less contributors, same for Chef
> afik, etc).

These are the market effects that ttx talks about, and this is the setting
for another example of where we should be careful with documentation
and 'officialness' around projects that disappear, lest we repeat the
experiences with PostgreSQL and have deployers make choices based on
our docs that do not reflect reality.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Jay Pipes

On 06/16/2017 09:57 AM, Emilien Macchi wrote:

On Thu, Jun 15, 2017 at 4:50 PM, Dean Troyer  wrote:

On Thu, Jun 15, 2017 at 10:33 AM, Jay Pipes  wrote:

I'd fully support the removal of all deployment projects from the "official
OpenStack projects list".


Nice to hear Jay! :)

It was intentional from the beginning to not be in the deployment
space, we allowed those projects in (not unanimously IIRC) and most of
them did not evolve as expected.


Just for the record, it also happens outside of the deployment space. We
allowed (not unanimously either, IIRC) some projects to be part of the
Big Tent and some of them have died or are dying.


Sure, and this is a natural thing.

As I mentioned, I support removing Fuel from the official OpenStack 
projects list because the project has lost the majority of its 
contributors and Mirantis has effectively moved in a different 
direction, causing Fuel to be a wilting flower (to use Thierry's 
delightful terminology).



I would not mind picking one winner and spending effort making an
extremely easy, smooth, upgradable install that is The OneTrue
OpenStack, I do not expect us to ever agree what that will look like
so it is effectively never going to happen.  We've seen how far
single-vendor projects have gone, and none of them reached that level.


Regarding all the company efforts to invest in one deployment tool,
it's going to be super hard to find The OneTrue and convince everyone
else to work on it.


Right, as Dean said above :)


Future will tell us but it's possible that deployments tools will be
reduced to 2 or 3 projects if it continues that way (Fuel is slowly
dying, Puppet OpenStack has less and less contributors, same for Chef
afik, etc).


Not sure about that. OpenStack Ansible and Kolla have emerged over the 
last couple years as very strong communities with lots of momentum.


Sure, Chef has effectively died and yes, Puppet has become less shiny.

But the deployment and packaging space will always (IMHO) be the domain 
of the Next Shiny Thing.


Witness containers vs. VMs (as deployment targets).

Witness OS packages vs. virtualenv/pip installs vs. application 
container images.


Witness Pacemaker/OCF resource agents vs. an orchestrated level-based 
convergence system like k8s or New Heat.


Witness LTS releases vs. A/B deployments vs. continuous delivery.

Witness PostgreSQL vs. MySQL vs. NoSQL vs. NewSQL.

Witness message queue brokers vs. 0mq vs. etcd-as-system-bus.

As new tools, whether fads or long-lasting, come and go, so do 
deployment strategies and tooling. I'm afraid this won't change any time 
soon :)


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] nominating Abhishek Kekane for glance core

2017-06-16 Thread Brian Rosmaita
I'm nominating Abhishek Kekane (abhishekk on IRC) to be a Glance core
for the Pike cycle.  Abhishek has been around the Glance community for
a long time and is familiar with the architecture and design patterns
used in Glance and its related projects.  He's contributed code,
triaged bugs, provided bugfixes, and done quality reviews for Glance.

Abhishek has been proposed for Glance core before, but some members of
the community were concerned that he wasn't able to devote sufficient
time to Glance.  Given the current situation with the project,
however, it would be an enormous help to have someone as knowledgeable
about Glance as Abhishek to have +2 powers.  I discussed this with
Abhishek, he's aware that some in the community have that concern, and
he's agreed to be a core reviewer for the Pike cycle.  The community
can revisit his status early in Queens.

Now that I've written that down, that puts Abhishek in the same boat
as all core reviewers, i.e., their levels of participation and
commitment are assessed at the beginning of each cycle and adjustments
made.

In any case, I'd like to put Abhishek to work as soon as possible!  So
please reply to this message with comments or concerns before 23:59
UTC on Monday 19 June.  I'd like to confirm Abhishek as a core on
Tuesday 20 June.

thanks,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Sean McGinnis
> 
> Yeah, we had such cases and decided to have a blacklist of tests not
> suitable for Ceph; the Ceph job will exclude the tests failing on Ceph.
> Jon is working on this - https://review.openstack.org/#/c/459774/
> 

I don't think merging tests that are showing failures, then blacklisting
them, is the right approach. And as Eric points out, this isn't
necessarily just a failure with Ceph. There is a legitimate logical
issue with what this particular test is doing.

But in general, to get back to some of the earlier points, I don't think
we should be merging tests with known breakages until those breakages
can be first addressed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-06-16 11:17:30 +0200:

> == Need for a TC meeting next Tuesday ==
> 
> In order to make progress on the Pike goal selection, I think a
> dedicated IRC meeting will be necessary. We have a set of valid goals
> proposed already: we need to decide how many we should have, and which
> ones. Gerrit is not great to have that ranking discussion, so I think we
> should meet to come up with a set, and propose it on the mailing-list
> for discussion. We could use the regular meeting slot on Tuesday,
> 20:00utc. How does that sound ?
> 

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Doug Hellmann
Excerpts from Ghanshyam Mann's message of 2017-06-16 23:05:08 +0900:
> On Fri, Jun 16, 2017 at 10:57 PM, Sean Dague  wrote:
> > On 06/16/2017 09:51 AM, Sean McGinnis wrote:
> >>>
> >>> It would be useful to provide detailed examples. Everything is trade
> >>> offs, and having the conversation in the abstract is very difficult to
> >>> understand those trade offs.
> >>>
> >>>  -Sean
> >>>
> >>
> >> We've had this issue in Cinder and os-brick. Usually around Ceph, but if
> >> you follow the user survey, that's the most popular backend.
> >>
> >> The problem we see is the tempest test that covers this is non-voting.
> >> And there have been several cases so far where this non-voting job does
> >> not pass, due to a legitimate failure, but the tempest patch merges anyway.
> >>
> >>
> >> To be fair, these failures usually do point out actual problems that need
> >> to be fixed. Not always, but at least in a few cases. But instead of it
> >> being addressed first to make sure there is no disruption, it's suddenly
> >> a blocking issue that holds up everything until it's either reverted, 
> >> skipped,
> >> or the problem is resolved.
> >>
> >> Here's one recent instance: https://review.openstack.org/#/c/471352/
> >
> > Sure, if ceph is the primary concern, that feels like it should be a
> > reasonable specific thing to fix. It's not a grand issue, it's a
> > specific mismatch on what configs should be common.
> 
> Yeah, we had such cases and decided to have a blacklist of tests not
> suitable for Ceph; the Ceph job will exclude the tests failing on Ceph.
> Jon is working on this - https://review.openstack.org/#/c/459774/
> 
> This approach solves the problem without limiting test scope. [1]
> 
> ..1 http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html
> 
> -gmann

Is ceph behaving in an unexpected way, or are the tests making
implicit assumptions that might also cause trouble for other backends
if these tests ever make it into the suite used by the interop team?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-06-16 Thread Jeremy Stanley
On 2017-06-16 15:12:36 +1000 (+1000), Tony Breeds wrote:
[...]
> It seems a little odd to be following up so long after I first started
> this thread, but can someone on infra please process the EOLs as
> described in [1].
[...]

I thought in prior discussions it had been determined that the
Stable Branch team was going to start taking care of abandoning open
changes and tagging branch tips (and eventually deleting the
branches once we upgrade to a newer Gerrit release). Looks like some
of these still have open changes and nothing has been tagged, so you
want the Infra team to take those tasks back over for this round?
Did you have any scripts you wanted used for this, or should I just
wing it for now like I did in the past?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Ghanshyam Mann
On Fri, Jun 16, 2017 at 10:57 PM, Sean Dague  wrote:
> On 06/16/2017 09:51 AM, Sean McGinnis wrote:
>>>
>>> It would be useful to provide detailed examples. Everything is trade
>>> offs, and having the conversation in the abstract is very difficult to
>>> understand those trade offs.
>>>
>>>  -Sean
>>>
>>
>> We've had this issue in Cinder and os-brick. Usually around Ceph, but if
>> you follow the user survey, that's the most popular backend.
>>
>> The problem we see is the tempest test that covers this is non-voting.
>> And there have been several cases so far where this non-voting job does
>> not pass, due to a legitimate failure, but the tempest patch merges anyway.
>>
>>
>> To be fair, these failures usually do point out actual problems that need
>> to be fixed. Not always, but at least in a few cases. But instead of it
>> being addressed first to make sure there is no disruption, it's suddenly
>> a blocking issue that holds up everything until it's either reverted, 
>> skipped,
>> or the problem is resolved.
>>
>> Here's one recent instance: https://review.openstack.org/#/c/471352/
>
> Sure, if ceph is the primary concern, that feels like it should be a
> reasonable specific thing to fix. It's not a grand issue, it's a
> specific mismatch on what configs should be common.

Yeah, we had such cases and decided to have a blacklist of tests not
suitable for Ceph; the Ceph job will exclude the tests failing on Ceph.
Jon is working on this - https://review.openstack.org/#/c/459774/

This approach solves the problem without limiting test scope. [1]

..1 http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html

-gmann

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Eric Harney
On 06/15/2017 10:51 PM, Ghanshyam Mann wrote:
> On Fri, Jun 16, 2017 at 9:43 AM,   wrote:
>> https://review.openstack.org/#/c/471352/   may be an example
> 
> If this is a Ceph-related case, I think we already discussed
> these kinds of cases, where functionality depends on the backend storage,
> and how to handle the corresponding test failures [1].
> 
> The solution there was that the Ceph job should exclude, by regex, test
> cases whose functionality is not implemented/supported in Ceph. Jon
> Bernard is working on this test blacklist [2].
> 
> If there is any other job or case, then we can discuss/think of having
> the job run on the Tempest gate also, which I think we do in most cases.
> 
> And about making the Ceph job voting, I remember we did not do that due
> to the stability of the job. The Ceph job fails frequently; once Jon's
> patches merge and the job is consistently stable, then we can make it voting.
> 

I'm not convinced yet that this failure is purely Ceph-specific, at a
quick look.

I think what happens here is, unshelve performs an asynchronous delete
of a glance image, and returns as successful before the delete has
necessarily completed.  The check in tempest then sees that the image
still exists, and fails -- but this isn't valid, because the unshelve
API doesn't guarantee that this image is no longer there at the time it
returns.  This would fail on any image delete that isn't instantaneous.

Is there a guarantee anywhere that the unshelve API behaves how this
tempest test expects it to?
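
If there isn't, the test would have to poll for the image to disappear instead
of asserting immediately. A rough sketch of such a waiter (the client call is
an assumption for illustration, not actual tempest code):

    # Hedged sketch: tolerate an asynchronous delete by polling until
    # the image 404s or a timeout expires.
    import time

    from tempest.lib import exceptions as lib_exc

    def wait_for_image_deleted(images_client, image_id,
                               timeout=60, interval=2):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                images_client.show_image(image_id)  # assumed client method
            except lib_exc.NotFound:
                return
            time.sleep(interval)
        raise AssertionError('image %s still present after %ss'
                             % (image_id, timeout))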

>>
>>
>> Original Mail
>> Sender:  ;
>> To:  ;
>> Date: 2017/06/16 05:25
>> Subject: Re: [openstack-dev] [all][qa][glance] some recent tempest problems
>>
>>
>> On 06/15/2017 01:04 PM, Brian Rosmaita wrote:
>>> This isn't a glance-specific problem though we've encountered it quite
>>> a few times recently.
>>>
>>> Briefly, we're gating on Tempest jobs that tempest itself does not
>>> gate on.  This leads to a situation where new tests can be merged in
>>> tempest, but wind up breaking our gate. We aren't claiming that the
>>> added tests are bad or don't provide value; the problem is that we
>>> have to drop everything and fix the gate.  This interrupts our current
>>> work and forces us to prioritize bugs to fix based not on what makes
>>> the most sense for the project given current priorities and resources,
>>> but based on whatever we can do to get the gates un-blocked.
>>>
>>> As we said earlier, this situation seems to be impacting multiple
>>> projects.
>>>
>>> One solution for this is to change our gating so that we do not run
>>> any Tempest jobs against Glance repositories that are not also gated
>>> by Tempest.  That would in theory open a regression path, which is why
>>> we haven't put up a patch yet.  Another way this could be addressed is
>>> by the Tempest team changing the non-voting jobs causing this
>>> situation into voting jobs, which would prevent such changes from
>>> being merged in the first place.  The key issue here is that we need
>>> to be able to prioritize bugs based on what's most important to each
>>> project.
>>>
>>> We want to be clear that we appreciate the work the Tempest team does.
>>> We abhor bugs and want to squash them too.  The problem is just that
>>> we're stretched pretty thin with resources right now, and being forced
>>> to prioritize bug fixes that will get our gate un-blocked is
>>> interfering with our ability to work on issues that may have a higher
>>> impact on end users.
>>>
>>> The point of this email is to find out whether anyone has a better
>>> suggestion for how to handle this situation.
>>
>> It would be useful to provide detailed examples. Everything is trade
>> offs, and having the conversation in the abstract is very difficult to
>> understand those trade offs.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
> 
> 
> ..1 http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html
> 
> ..2 https://review.openstack.org/#/c/459774/ ,
> https://review.openstack.org/#/c/459445/
> 
> 
> -gmann
> 
>> __

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Emilien Macchi
On Thu, Jun 15, 2017 at 4:50 PM, Dean Troyer  wrote:
> On Thu, Jun 15, 2017 at 10:33 AM, Jay Pipes  wrote:
>> I'd fully support the removal of all deployment projects from the "official
>> OpenStack projects list".
>
> Nice to hear Jay! :)
>
> It was intentional from the beginning to not be in the deployment
> space, we allowed those projects in (not unanimously IIRC) and most of
> them did not evolve as expected.

Just for the record, it also happens outside of the deployment space. We
allowed (not unanimously either, IIRC) some projects to be part of the
Big Tent and some of them have died or are dying.

> I would not mind picking one winner and spending effort making an
> extremely easy, smooth, upgradable install that is The OneTrue
> OpenStack, I do not expect us to ever agree what that will look like
> so it is effectively never going to happen.  We've seen how far
> single-vendor projects have gone, and none of them reached that level.

Regarding all the company efforts to invest in one deployment tool,
it's going to be super hard to find The OneTrue and convince everyone
else to work on it.
Future will tell us but it's possible that deployments tools will be
reduced to 2 or 3 projects if it continues that way (Fuel is slowly
dying, Puppet OpenStack has less and less contributors, same for Chef
afik, etc).

> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Sean Dague
On 06/16/2017 09:51 AM, Sean McGinnis wrote:
>>
>> It would be useful to provide detailed examples. Everything is trade
>> offs, and having the conversation in the abstract is very difficult to
>> understand those trade offs.
>>
>>  -Sean
>>
> 
> We've had this issue in Cinder and os-brick. Usually around Ceph, but if
> you follow the user survey, that's the most popular backend.
> 
> The problem we see is the tempest test that covers this is non-voting.
> And there have been several cases so far where this non-voting job does
> not pass, due to a legitimate failure, but the tempest patch merges anyway.
> 
> 
> To be fair, these failures usually do point out actual problems that need
> to be fixed. Not always, but at least in a few cases. But instead of it
> being addressed first to make sure there is no disruption, it's suddenly
> a blocking issue that holds up everything until it's either reverted, skipped,
> or the problem is resolved.
> 
> Here's one recent instance: https://review.openstack.org/#/c/471352/

Sure, if ceph is the primary concern, that feels like it should be a
reasonable specific thing to fix. It's not a grand issue, it's a
specific mismatch on what configs should be common.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread Alexandre Lécuyer

Swift stores objects on a regular filesystem (XFS is recommended), one file per 
object. While this works fine for medium or big objects, when you have lots of 
small objects you can run into issues: because of the high count of inodes on 
the object servers, they can't stay in cache, implying a lot of memory usage and 
IO operations to fetch inodes from disk.

In the past few months, we've been working on implementing a new storage backend 
in Swift. It is highly inspired by haystack [1]. In a few words, objects are stored 
in big files, and a key/value store provides the information needed to locate an 
object (object hash -> big_file_id:offset). As the mapping in the K/V consumes less 
memory than an inode, it is possible to keep all entries in memory, saving a lot of 
IO to locate objects. It also allows some performance improvements by limiting 
XFS metadata updates (e.g. almost no inode updates, as we write objects using 
fdatasync() instead of fsync()).
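
As a rough sketch of the index idea (the key/value encoding below is an
illustrative assumption, not our actual on-disk format):

    # Hedged sketch: a LevelDB index (via the plyvel binding) mapping an
    # object hash to (volume file id, offset), about a dozen value bytes
    # per entry.
    import struct
    import plyvel

    db = plyvel.DB('/srv/node/sda/objects.kv', create_if_missing=True)

    def put_location(obj_hash, volume_id, offset):
        # 4-byte volume id + 8-byte offset
        db.put(bytes.fromhex(obj_hash),
               struct.pack('>IQ', volume_id, offset))

    def get_location(obj_hash):
        raw = db.get(bytes.fromhex(obj_hash))
        if raw is None:
            return None  # object unknown on this device
        return struct.unpack('>IQ', raw)  # (volume_id, offset)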

One of the questions raised during discussions about this design is: 
do we want one K/V store per device, or one K/V store per Swift partition (= 
multiple K/Vs per device)? The concern was about the failure domain. If the only 
K/V gets corrupted, the whole device must be reconstructed. Memory usage is a 
major point in making a decision, so we ran some benchmarks.

The key-value store is implemented over LevelDB.
Given a single disk with 20 million files (each could be either one object
replica or one fragment, if using EC), I have tested three cases:
  - single KV for the whole disk
  - one KV per partition, with 100 partitions per disk
  - one KV per partition, with 1000 partitions per disk

Single KV for the whole disk:
  - DB size: 750 MB
  - bytes per object: 38

One KV per partition, assuming 100 partitions on the disk (=> 100 KVs) and a
16-bit part power (=> all keys in a given KV will have the same 16-bit prefix):
  - 7916 KB per KV, total DB size: 773 MB
  - bytes per object: 41

One KV per partition, assuming 1000 partitions on the disk (=> 1000 KVs) and a
16-bit part power (=> all keys in a given KV will have the same 16-bit prefix):
  - 1388 KB per KV, total DB size: 1355 MB
  - bytes per object: 71
  


A typical server we use for Swift clusters has 36 drives, which gives us:
- Single KV: 26 GB
- Split KV, 100 partitions: 28 GB (+7%)
- Split KV, 1000 partitions: 48 GB (+85%)

So, splitting seems reasonable if you don't have too many partitions.

The same test, with 10 million files instead of 20:

- Single KV: 13 GB
- Split KV, 100 partitions: 18 GB (+38%)
- Split KV, 1000 partitions: 24 GB (+85%)


Finally, if we run a full compaction on the DB after the test, we get the
same memory usage in all cases: about 32 bytes per object.

We have not run enough tests to know what would happen in production. LevelDB
does trigger compaction automatically on parts of the DB, but continuous change
means we probably would not reach the smallest possible size.


Beyond the size issue, there are other things to consider:
File descriptor limits: LevelDB seems to keep at least 4 file descriptors 
open during operation.

Having one KV per partition also means you have to move entries between KVs 
when you change the part power (if we want to support that).

A compromise may be to split KVs on a small prefix of the object's hash, 
independent of Swift's configuration.
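
For instance, something as simple as the following, where the prefix length
and KV count are arbitrary assumptions:

    # Hedged sketch: route each object to one of N KV stores based on a
    # fixed prefix of its hash, independent of the ring's part power.
    NUM_KVS = 16

    def kv_index(obj_hash):
        return int(obj_hash[:2], 16) % NUM_KVS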

As you can see, we're still thinking about this. Any ideas are welcome!
We will keep you updated with more "real world" testing. Among other tests, we 
plan to check how resilient the DB is in case of a power loss.

--
Alex



[1]https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Sean McGinnis
> 
> It would be useful to provide detailed examples. Everything is trade
> offs, and having the conversation in the abstract is very difficult to
> understand those trade offs.
> 
>   -Sean
> 

We've had this issue in Cinder and os-brick. Usually around Ceph, but if
you follow the user survey, that's the most popular backend.

The problem we see is the tempest test that covers this is non-voting.
And there have been several cases so far where this non-voting job does
not pass, due to a legitimate failure, but the tempest patch merges anyway.


To be fair, these failures usually do point out actual problems that need
to be fixed. Not always, but at least in a few cases. But instead of it
being addressed first to make sure there is no disruption, it's suddenly
a blocking issue that holds up everything until it's either reverted, skipped,
or the problem is resolved.

Here's one recent instance: https://review.openstack.org/#/c/471352/

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-16 Thread Zane Bitter

On 16/06/17 05:09, Kaz Shinohara wrote:
I still use `deferred_auth_method=password` instead of trusts because 
we don't enable trusts on the Keystone side due to some internal reasons.


Free advice: whatever reason you have for not enabling trusts, storing 
user passwords in the Heat database is 100x worse.


The issues you pointed out are correct (e.g. user_domain_id); we don't 
use the domain well and also added some patches to skip those issues.


Why aren't those upstream?

But I guess that the majority of heat users have already moved to trusts, and 
it is obviously the better solution in terms of security and granular role 
control.
As an edge case (perhaps), if a user wants to use password auth, it 
would be too tricky for them to introduce it, therefore I agree with your 2nd 
option.


If we remove `deferred_auth_method=password` from heat.conf, 
should we keep `deferred_auth_method` itself or replace it with a new 
config option just to specify whether trusts are enabled/disabled? Do you have 
any idea on this?
I'm also thinking that `reauthentication_method` might be 
changed/merged?


Regards,
Kaz Shinohara


2017-06-16 14:11 GMT+09:00 Rabi Mishra <ramis...@redhat.com>:


[snip]


I'm not sure whether this works with keystone v2, or whether anyone is using
it or not. Keeping in mind that heat-cli is deprecated and keystone
v3 is now the default, we have 2 options:

1. Continue to support the 'deferred_auth_method=password' option and
fix all the above issues.

2. Remove/deprecate the option in Pike itself.

I would prefer option 2, but probably I'm missing some history and use
cases for it.


Am I right in thinking that any user (i.e. not just the [heat] service 
user) can create a trust? I still see occasional requests about 
'standalone mode' for clouds that don't have Heat available to users 
(which I suspect is broken, otherwise people wouldn't be asking), and 
I'm guessing that standalone mode has heretofore required 
deferred_auth_method=password.


So if we're going to remove the option then we should probably either 
officially disown standalone mode or rewrite the instructions such that 
it can be used with the trusts method.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Jeremy Stanley
On 2017-06-16 11:17:30 +0200 (+0200), Thierry Carrez wrote:
[...]
> In order to make progress on the Pike goal selection, I think a
> dedicated IRC meeting will be necessary. We have a set of valid goals
> proposed already: we need to decide how many we should have, and which
> ones. Gerrit is not great to have that ranking discussion, so I think we
> should meet to come up with a set, and propose it on the mailing-list
> for discussion. We could use the regular meeting slot on Tuesday,
> 20:00utc. How does that sound ?

Works for me; I agree this would be helpful. Thanks for proposing!
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage-dashboard] Alarm Header Blueprint

2017-06-16 Thread Waines, Greg
Actually, I just saw the horizon extensible header blueprint get approved;
see https://blueprints.launchpad.net/horizon/+spec/extensible-header

let me know what your thoughts are on the Vitrage-dashboard blueprint

Greg.

From: Greg Waines 
Date: Thursday, June 15, 2017 at 3:34 PM
To: "openstack-dev@lists.openstack.org" 
Cc: "Heller, Alon (Nokia - IL/Kfar Sava)" , "Afek, Ifat 
(Nokia - IL/Kfar Sava)" 
Subject: Re: [openstack-dev] [vitrage-dashboard] Alarm Header Blueprint

Alon,
Just checking if you’ve had a chance to look at the following blueprint:
https://blueprints.launchpad.net/vitrage-dashboard/+spec/alarm-header

The HORIZON blueprint that it depends on is, I believe, on the verge of
being approved ... Horizon maintainers want to move forward with it,
they’ve assigned a priority to the blueprint ... I think that means it will
move to Approved.

anyways,
would be interested in any comments you have on the Vitrage-Dashboard
blueprint,
Greg.



From: Greg Waines 
Date: Thursday, June 8, 2017 at 7:58 AM
To: "openstack-dev@lists.openstack.org" 
Cc: "Heller, Alon (Nokia - IL/Kfar Sava)" , "Afek, Ifat 
(Nokia - IL/Kfar Sava)" 
Subject: [openstack-dev] [vitrage-dashboard] Alarm Header Blueprint

I have registered a new blueprint in Vitrage-dashboard which leverages the 
proposed extensible headers of Horizon.

https://blueprints.launchpad.net/vitrage-dashboard/+spec/alarm-header

let me know your thoughts,
Greg


p.s proposed extensible header blueprint in horizon is here:
   https://blueprints.launchpad.net/horizon/+spec/extensible-header



From: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Date: Wednesday, June 7, 2017 at 4:52 AM
To: Greg Waines 
Cc: "Heller, Alon (Nokia - IL/Kfar Sava)" 
Subject: Re: vitrage-dashboard blueprints / specs

Hi Greg,

Adding Alon, a vitrage-dashboard core contributor.

In general, your plan seems great ☺ Indeed, we don't have many blueprints in 
the vitrage-dashboard Launchpad… the vitrage-dashboard team usually just 
implements the features and does code reviews without too many specs. You are 
welcome to write a blueprint-only description, and maybe send us (or add to 
the blueprint) a UI mock.

Let us know if you need any help with that.

Best Regards,
Ifat.


From: "Waines, Greg" 
Date: Wednesday, 7 June 2017 at 1:33
To: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Subject: vitrage-dashboard blueprints / specs

Ifat,
Vitrage-dashboard seems to have very short 1-2 sentence blueprints and no spec 
files.

The Vitrage Alarm Count in Horizon Header work has turned into 3x blueprints now
- alarm-counts-api  in  Vitrage
- extensible-headers  in  Horizon <-- I am in the process of 
submitting this to Horizon
- alarm-header  in  Vitrage-Dashboard

What do you suggest for the blueprint / spec for Vitrage-Dashboard ?
I could submit a blueprint using the blueprint-only template used by Horizon.

let me know what you think,
Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 24) - devmode issues, promotion progress

2017-06-16 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting


= Devmode OVB issues =

Devmode OVB (the one you launch with "./devmode.sh --ovb") is not able to 
deploy reliably on RDO Cloud due to DNS issues. This change [1] might 
help, but we're still having problems.



= Promotion job changes =

Moving the promotion jobs over to Quickstart is an important but 
difficult-to-achieve goal. It would be great to not have to debug jobs from 
the old system again. Here's the first step towards that.


We retired the "periodic-tripleo-ci-centos-7-ovb-nonha" job and 
transitioned the "ha" one to run with Quickstart. The new job's name is 
"periodic-tripleo-ci-centos-7-ovb-ha-oooq" and it's already used to 
promote new DLRN hashes.


There's still an issue with it, which is fixed in this[2] change and it 
should start working properly soon (it already got through an overcloud 
deployment). Thanks Gabriele for leading this effort!


Migrating the remaining "periodic-tripleo-ci-centos-7-ovb-updates" job 
is not straightforward, as we don't have feature parity in Quickstart 
with this original job. The job name is misleading, as there are a lot 
of things tested within this job. What we're missing is predictable placement, 
hostname mapping and predictable IPs, apart from the actual update part, which 
we will leave to the Lifecycle team.



= Where to put tripleo-ci env files? =

Currently we're using Ben's repo [3] for OVB environment files, while THT 
also has env files [4] that we don't test upstream. That's not ideal, and 
we started to discuss where these configs should really be stored and how to 
handle them properly. Should they be in the tripleo-ci repo? Should we have 
up-to-date and tested versions in THT? Can we backport those to stable 
branches?


We didn't really figure out the solution to this during the meeting, so 
feel free to continue the discussion here or next time.


Thank you for reading the summary. Have a great weekend!

Best regards,
Attila

[1] https://review.openstack.org/474334
[2] https://review.openstack.org/474504
[3] 
https://github.com/cybertron/openstack-virtual-baremetal/tree/master/network-templates
[4] 
https://github.com/rdo-management/tripleo-heat-templates/tree/mgt-master/environments


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Sean Dague
On 06/15/2017 10:06 PM, Tony Breeds wrote:
> Hi All,
>   I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on Intel and ppc64le.  I know we're pretty late in the
> cycle to be making changes like this, but releasing Pike with a dependency
> on 3.1.x makes it harder for users on ppc64le (not many, but a few :D)
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/474825

It should be fine, no one is really using these much at this point.
However, it looks like mirroring is not happening automatically? The
patch fails because the new version doesn't exist in the infra mirror.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Strict validation in query parameters

2017-06-16 Thread Sean Dague
On 06/15/2017 10:01 PM, Matt Riedemann wrote:
> On 6/15/2017 8:43 PM, Alex Xu wrote:
>> We added a new decorator, 'query_schema', to support validating query
>> parameters with JSON-Schema.
>>
>> It provides stricter validation, as below:
>> * setting 'additionalProperties=False' in the schema means that we
>> reject any invalid query parameters and return HTTPBadRequest 400 to
>> the user.
>> * using the macro function 'single_param' declares that a specific query
>> parameter only supports a single value. For example, the 'marker'
>> parameter for pagination actually accepts only one valid value. If
>> the user specifies multiple values, "marker=1&marker=2", the validation
>> will return 400 to the user.
>>
>> Currently there is a patch related to this:
>> https://review.openstack.org/#/c/459483/13/nova/api/openstack/compute/schemas/server_migrations.py
>>
>>
>> So my question is:
>> Are we all good with this strict validation in all future
>> microversions?
>>
>> I don't remember us explicitly agreeing on this anywhere; I just want
>> to double-check that this is the direction everybody wants to go.
>>
>> Thanks
>> Alex
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> I think this is fine and makes sense for new microversions. The spec for
> consistent query parameter validation does talk about it a bit:
> 
> https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/consistent-query-parameters-validation.html#proposed-change
> 
> 
> "The behaviour additionalProperties as below:
> 
> * When the value of additionalProperties is True means the extra query
> parameters are allowed. But those extra query parameters will be
> stripped out.
> * When the value of additionalProperties is False means the extra query
> aren’t allowed.
> 
> The value of additionalProperties will be True until we decide to
> restrict the parameters in the future, and it will be changed with new
> microversion."
> 
> I don't see a point in allowing someone to specify a query parameter
> multiple times if we only pick the first one from the list and use that.

Agreed. The point of doing strict validation and returning a 400 is to
help the user eliminate bugs in their program. If they specified marker
twice, either they thought it did something, or they made a mistake. Both
are wrong. When we are silent on that front, it means they may not be
getting the behavior they were expecting, which hurts their experience
with the API.
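
To illustrate the behavior under discussion, here's a generic sketch using the
jsonschema library (not nova's actual schema code; parse_qs yields lists, so
'maxItems': 1 plays the role of single_param):

    # Hedged sketch of strict query-parameter validation:
    # additionalProperties=False rejects unknown parameters, and
    # maxItems=1 rejects repeated ones -- both would map to a 400.
    from urllib.parse import parse_qs

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {
            'marker': {'type': 'array',
                       'items': {'type': 'string'},
                       'maxItems': 1},
        },
        'additionalProperties': False,
    }

    for qs in ('marker=1', 'marker=1&marker=2', 'bogus=x'):
        try:
            jsonschema.validate(parse_qs(qs), schema)
            print(qs, '-> OK')
        except jsonschema.ValidationError as exc:
            print(qs, '-> 400:', exc.message)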

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Vikash Kumar
On Fri, Jun 16, 2017 at 3:39 PM, Thierry Carrez 
wrote:

> Shake Chen wrote:
> > HI Vikash
> >
> > I think Kolla is suitable for official project for deployment
>
> Deployment tooling is, by nature, opinionated. You just can't enable
> everything and keep it manageable. As long as people will have differing
> opinions on how OpenStack pieces should be deployed, which drivers or
> components should actually be made available, or the level of
> fine-tuning that should be exposed, you will have different deployment
> tools.
>
> "Picking one" won't magically make everyone else stop working on their
> specific vision for it, and suddenly force everyone to focus on a single
> solution. It will, however, hamper open collaboration between
> organizations on alternative approaches.
>

​"Picking one" is not, but having the minimal one (core components) is
what I see in the best interest of Openstack. ​I agree that the
deployment is very opinionated but before some one go for deployment
they need to evaluate few things, get some confidence. Right now,
the only option is either turn to vendors or get the experts to do. Or Am i
missing something and Openstack can be evaluated by any organization
without any hassle ?
Having a minimal deployment software will ease this process and kind of
increase
the audience. I don't see having this will create any conflict or hinder any
collaboration.


> My personal view is that it's a space where we need to let flowers bloom
> and encourage open collaboration. We just need to clean up the garden
> when those flowers don't go anywhere.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-16 Thread Julien Danjou
On Fri, Jun 16 2017, Thierry Carrez wrote:

> I should have made it clearer in my original post that this discussion
> actually originated at the Board+TC+UC workshop in Boston, as part of
> the "better communicating what is openstack" subgroup.

This is still such a vague problem statement that it's hard to see how
any interesting outcome will emerge in this thread.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-16 Thread Thierry Carrez
Matt Riedemann wrote:
> On 6/15/2017 9:57 AM, Thierry Carrez wrote:
>> Obviously we are not the target audience for that term. I think we are
>> deep enough in OpenStack and technically-focused enough to see through
>> that. But reality is, the majority of the rest of the world is confused,
>> and needs help figuring it out. Giving the category a name is a way to
>> do that.
> 
> Maybe don't ask the inmates what the asylum/prison should be called. Why
> don't we have people that are confused about this weighing in on this
> thread? Oh right because they don't subscribe to, or read, or reply to a
> development mailing list.

I should have made it clearer in my original post that this discussion
actually originated at the Board+TC+UC workshop in Boston, as part of
the "better communicating what is openstack" subgroup.

We discussed producing better maps of OpenStack, but the most
marketing-minded people in the group also insisted that we need to move
on from "big tent" branding and explain our structure with new, less
confusing terminology.

This thread is just the TC part of the discussion (as it happens, the TC
owns the upstream project structure, so changes to this will have to go
through us). We said that the TC should discuss its affairs in open
threads, so here it is. That doesn't really make it a technical issue,
or an issue that we'd solely decide amongst the inmates in the asylum.
The discussion will ultimately feed back to that Board+TC+UC workgroup
to come up with a clearer plan. We just need to get general feedback on
the general issue, which is what this thread is about.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-16 Thread Thierry Carrez
Shake Chen wrote:
> Hi Vikash
> 
> I think Kolla is suitable as an official project for deployment.

Deployment tooling is, by nature, opinionated. You just can't enable
everything and keep it manageable. As long as people will have differing
opinions on how OpenStack pieces should be deployed, which drivers or
components should actually be made available, or the level of
fine-tuning that should be exposed, you will have different deployment
tools.

"Picking one" won't magically make everyone else stop working on their
specific vision for it, and suddenly force everyone to focus on a single
solution. It will, however, hamper open collaboration between
organizations on alternative approaches.

My personal view is that it's a space where we need to let flowers bloom
and encourage open collaboration. We just need to clean up the garden
when those flowers don't go anywhere.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] the driver composition and breaking changes to the supported interfaces

2017-06-16 Thread Dmitry Tantsur

Thanks!

I think this is what we seem to agree on so far: keep the old interface and 
deprecate its usage.


On 06/13/2017 01:39 PM, tie...@vn.fujitsu.com wrote:

Hi,

Dmitry: Thanks for bringing this issue into discussion.

For the iRMC patch, I would vote for the first option, as it is commonly used. 
Overall, though, I think it would be great if ironic could provide a mechanism 
like the second one; but as you said, that is technically challenging.

Regards
TienDC

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, June 12, 2017 20:44
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [ironic] the driver composition and breaking changes 
to the supported interfaces

Hi folks!

I want to raise something we haven't apparently thought about when working on 
the driver composition reform.

For example, an iRMC patch [0] replaces 'pxe' boot with 'irmc-pxe'. This is the 
correct thing to do in this case: they're extending PXE boot, and need a new 
class and a new entrypoint. We can expect more changes like this coming.

However, this change is breaking for users. Imagine a node explicitly created 
with:

   openstack baremetal node create --driver irmc --boot-interface pxe

On upgrade to Pike, such nodes will break and will require manual intervention 
to get them working again:

   openstack baremetal node set <node> --boot-interface irmc-pxe

What can we do about it? I see the following possibilities:

1. Keep "pxe" interface supported and issue a deprecation. This is relatively 
easy, but I'm not sure if it's always possible to keep the old interface working.

2. Change the driver composition reform to somehow allow the same names for different 
interfaces, e.g. "pxe" would point to PXEBoot for IPMI, but to IRMCPXEBoot for 
iRMC. This is technically challenging.

3. Only do a release note, and allow the breaking change to happen.
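
To make option 1 concrete, here is a minimal sketch of what it could look
like under the hardware type model (class and module names are approximate,
not taken from the actual patch):

    # Sketch only: keep the generic "pxe" boot interface supported
    # next to the new "irmc-pxe" one, so that existing nodes still
    # load after the upgrade; a deprecation warning would be logged
    # wherever the old name is actually used.
    from ironic.drivers import generic
    from ironic.drivers.modules import pxe
    from ironic.drivers.modules.irmc import boot as irmc_boot


    class IRMCHardware(generic.GenericHardware):

        @property
        def supported_boot_interfaces(self):
            # First entry is the preferred default; plain PXEBoot stays
            # valid for nodes created with "--boot-interface pxe".
            return [irmc_boot.IRMCPXEBoot, pxe.PXEBoot]

The 'pxe' entry could then be dropped from the list after a normal
deprecation period.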

WDYT?

[0] https://review.openstack.org/#/c/416403

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-16 Thread Graham Hayes
On 15/06/17 22:35, Ed Leafe wrote:
> On Jun 15, 2017, at 3:35 PM, Jeremy Stanley  wrote:
> 
>> For me it's one of the most annoying yet challenging/interesting
>> aspects: free software development is as much about community and
>> politics as it is actual software development (perhaps more so).
> 
> Another way to look at it is how we see ourselves (as a community) and how 
> people on the outside see OpenStack. I would imagine that someone looking at 
> OpenStack for the first time would not care a whit about governance, repo 
> locations, etc. They would certainly care about "what do I need to do to use 
> this thing?"
> 
> What we call things isn't confusing to those of us in the community - well, 
> at least to those of us who take the time to read big long email threads like 
> this. We need to be clearer in how we represent OpenStack to outsiders. To 
> that end, I think that limiting the term "OpenStack" to a handful of the core 
> projects would make things a whole lot clearer. We can continue to present 
> everything else as a marketplace, or an ecosystem, or however the more 
> marketing-minded want to label it, but we should *not* call those projects 
> "OpenStack".
> 
> Now I know, I work on Nova, so I'm expecting responses that "of course you 
> don't care", or "OpenStack is people, and you're hurting our feelings!". So 
> flame away!

Where to start.

Most of the small projects are not complaining about "hurt feelings".

If the community want to follow advice from a certain tweet, and limit
OpenStack to Nova + its spinouts, we should do that. Just let the rest
of us know, so we can either start shutting down the projects, or look
at moving the projects to another foundation.

Of course we should probably change the OpenStack mission statement,
and give the board a heads up that all these project teams they talk
about publicly will be going away.

And, yes, coming from different project teams does mean that we will
have differing views on what should be in OpenStack, and its level of
priority - but (in my personal, biased opinion) we should not throw the
baby out with the bath water because we cannot find two names to
describe things.

> 
> -- Ed Leafe
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Thierry Carrez
Hi!

Back on regular schedule, here is an update on the status of a number
of TC-proposed governance changes, in an attempt to rely less on a
weekly meeting to convey that information.

You can find the full status list of open topics at:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Follow-up precisions on office hours [1]
* New git repositories: fuxi-golang
* Updated links to Vitrage developer documentation
* Pike goal responses for freezer, cloudkitty

[1] https://review.openstack.org/#/c/470926/


== Open discussions ==

A new draft was produced for the TC vision, taking into account some of
the feedback we received on the initial draft. Please review:

* Begin integrating vision feedback and editing for style [1]

[1] https://review.openstack.org/#/c/473620/

A new, hopefully more consensual revision of the database support
resolution was posted by Dirk. Please review it at:

* Declare plainly the current state of PostgreSQL in OpenStack [2]

[2] https://review.openstack.org/427880

A new item was proposed for addition on our top-5 wanted list: Glance
contributors. Please review and comment at:

* Add "Glance Contributors" to top-5 wanted list [3]

[3] https://review.openstack.org/474604

We did not have that many comments on John Garbutt's revised resolution
on ensuring that decisions are globally inclusive. Please see:

* Decisions should be globally inclusive [4]

[4] https://review.openstack.org/#/c/460946/

The discussion on Queens goals is a bit stalled, as Gerrit is not the
ideal tool to select x picks out of n proposals. I think a dedicated IRC
meeting could help us sort through the proposals and propose a set (see
below). In the meantime, please review:

* Discovery alignment, with two options: [5] [6]
* Policy and docs in code [7]
* Migrate off paste [8]
* Continuing Python 3.5+ Support [9]

[5] https://review.openstack.org/#/c/468436/
[6] https://review.openstack.org/#/c/468437/
[7] https://review.openstack.org/#/c/469954/
[8] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117747.html
[9] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html

A proposal to remove "meritocracy" from the Four opens details was
posted by cdent. Please review at:

* Remove "meritocracy" from the opens [10]

[10] https://review.openstack.org/473510

Finally, a number of tag additions are still being discussed, so you
might want to give your opinion on them:

* assert:supports-rolling-upgrade for keystone [11]
* assert:supports-upgrade to Barbican [12]
* stable:follows-policy for Kolla [13]

[11] https://review.openstack.org/471427
[12] https://review.openstack.org/472547
[13] https://review.openstack.org/#/c/346455/


== Voting in progress ==

The Top 5 help wanted list (and the "doc owners" initial item) almost
have the votes necessary to merge. Please vote at:

* Introduce Top 5 help wanted list [14]
* Add "Doc owners" to top-5 wanted list [15]

[14] https://review.openstack.org/#/c/466684/
[15] https://review.openstack.org/#/c/469115/

The new "assert:supports-api-interoperability" tag also seems to have
broad support and be ready for merging. Please vote at:

* Introduce assert:supports-api-interoperability [16]

[16] https://review.openstack.org/418010

Finally, Doug's guidelines for managing releases of binary artifacts are
also missing a couple of votes to be approved:

* Guidelines for managing releases of binary artifacts [17]

[17] https://review.openstack.org/#/c/469265/


== TC member actions for the coming week(s) ==

johnthetubaguy to finalize updating "Describe what upstream support
means" with a new revision [https://review.openstack.org/440601]

flaper87 to update "Drop Technical Committee meetings" with a new
revision [https://review.openstack.org/459848]

Additionally, we are still looking for a volunteer TC member
sponsor/mentor to help the Gluon team navigate the OpenStack seas as
they engage to become an official project. Any volunteers?


== Need for a TC meeting next Tuesday ==

In order to make progress on the Queens goal selection, I think a
dedicated IRC meeting will be necessary. We have a set of valid goals
proposed already: we need to decide how many we should have, and which
ones. Gerrit is not a great place to have that ranking discussion, so I
think we should meet to come up with a set and propose it on the
mailing-list for discussion. We could use the regular meeting slot on
Tuesday, 20:00 UTC. How does that sound?


-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Deprecate/Remove deferred_auth_method=password config option

2017-06-16 Thread Kaz Shinohara
Hi Rabi,


We still use `deferred_auth_method=password` in place of trusts, because
we don't enable trusts on the Keystone side for some internal reasons.
The issues you pointed out are correct (e.g. user_domain_id); we don't make
much use of domains, and we have added some patches to work around those
issues. But I guess the majority of heat users have already moved to
trusts, and it is obviously the better solution in terms of security and
granular role control.
As for the (perhaps) edge case where a user wants to use password auth, it
would be too tricky for them to introduce it, so I agree with your 2nd
option.
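
Just to illustrate the user_domain gap: under keystone v3 a password
plugin cannot identify the user without the domain. A minimal
keystoneauth1 sketch (placeholder values, not heat's actual code):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Under keystone v3 the user cannot be identified without a domain,
    # and heat's user_creds table never stored user_domain_id/name.
    # All values below are placeholders.
    auth = v3.Password(
        auth_url='http://keystone.example.com:5000/v3',
        username='stack_user',
        password='secret',
        user_domain_name='Default',  # the piece heat never stored
        project_name='demo',
        project_domain_name='Default')
    sess = session.Session(auth=auth)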

If we remove `deferred_auth_method=password` from heat.conf, should we
keep `deferred_auth_method` itself, or replace it with a new config option
that just enables/disables trusts? Do you have any thoughts on this?
Also, I'm thinking that `reauthentication_method` might need to be
changed/merged as well?

Regards,
Kaz Shinohara


2017-06-16 14:11 GMT+09:00 Rabi Mishra :

> Hi All,
>
> As we know,  'deferred_auth_method=trusts' being the default, we use
> trust_auth_plugin whenever a resource requires deferred_auth (any resource
> derived from SignalResponder and StackResource). We also support
> 'deferred_auth_method=password' where  'X-Auth-User'/username and
> 'X-Auth-Key'/password is passed in the request header and we then store
> them in 'user_creds' (rather than 'trust_id') to create a 'password'
> auth_plugin when loading the stack with stored context for signalling. I
> assume it is for this very reason that we have the '--include-pass' option
> in the heat cli.
>
> However, when using keystone session(which is the default), we don't have
> the above implemented with SessionClient (i.e to pass the headers). There
> is a bug[1] and patch[2]  to add this to SessionClient in the review queue.
> Also, we don't have anything like '--include-pass' for osc.
>
> I've noticed that 'deferred_auth_method=password' is broken and does not
> work with keystone v3 at all. As we don't store the 'user_domain_id/name'
> in 'user_creds', we cannot even initialize the 'password' auth_plugin when
> creating the StoredContext, as it would not be able to authenticate the
> user without the user_domain[3].
>
> I'm not sure whether this works with keystone v2, or whether anyone is
> using it at all. Keeping in mind that the heat cli is deprecated and
> keystone v3 is now the default, we have 2 options:
>
> 1. Continue to support the 'deferred_auth_method=password' option and fix
> all the above issues.
>
> 2. Remove/deprecate the option in Pike itself.
>
> I would prefer option 2, but I'm probably missing some history and use
> cases for it.
>
> Thoughts?
>
>
> [1] https://bugs.launchpad.net/python-heatclient/+bug/1665321
>
> [2] https://review.openstack.org/435213
>
> [3] https://github.com/openstack/heat/blob/master/heat/common/
> context.py#L292
>
> --
> Regards,
> Rabi Mishra
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-16 Thread Julien Danjou
On Thu, Jun 15 2017, Doug Hellmann wrote:

> One of the *most* common complaints the TC gets from outside the
> contributor community is that people do not understand what projects
> are part of OpenStack and what parts are not. We have a clear
> definition of that in our minds (the projects that have said they
> want to be part of OpenStack, and agreed to put themselves under
> TC governance, with all of the policies that implies). That definition
> is so trivial to say that it seems like a tautology. However,
> looking in from the outside of the community, that definition isn't
> helpful.

I still wonder why they care. Who cares, really? Can we have some of the
people who care join this thread, so they can explain directly what we're
trying to solve here?

Everything is just a bunch of free software projects to me. The
governance made zero difference in my contributions or direction of the
projects I PTL'ed.

(There's so much effort put into trivial things like that. Why do I
read only 20 emails in the "Glance is dying" thread and already 55 in
this "let's rename big tent" one? Sigh.)

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-16 Thread Julien Danjou
On Fri, Jun 16 2017, gordon chung wrote:

> *sigh* so this is why we can't have nice things :p
>
> as an aside, in telemetry project, we did something somewhat similar 
> when we renamed/rebranded to telemetry from ceilometer. we wrote several 
> notes to the ML, had a few blog posts, fixed the docs, mentioned the new 
> project structure in our presentations... 2 years on, we still 
> occasionally get asked "what's ceilometer", "is xyz not ceilometer?", or 
> "so ceilometer is deprecated?". to a certain extent i think we'll have 
> to be prepared to do some hand holding and say "hey, that's not what the 
> "big tent/."

Yeah, even the Foundation kept talking about Ceilometer instead of
Telemetry in some internal/summit branding. I just gave up on that.

They say time helps.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-10 and R-9, June 16-30

2017-06-16 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-

Teams should be reconsidering how much they can deliver in the Pike
release, prioritizing critical items and wrapping up what was started.

Actions
---

We are getting closer to the final Pike release. A number of deadlines
will affect us next month:

* On R-7 week (before July 13) we'll close the extra-atcs lists, so if
you have a contributor whose work is not reflected in commit authorship,
you might want to submit a patch to the openstack/governance repository
(reference/projects.yaml file) to reflect that

* On R-6 week (before Jul 20) we'll have final releases for non-client
libraries. Only emergency bug fix updates are allowed after that date,
not releases for FFEs. So you should prioritize any feature work that
includes work in libraries to ensure that it can be completed and the
library released before the freeze.

* On R-5 week (before Jul 27) we'll have final releases for client
libraries, together with Pike-3 milestone releases (and Feature freeze)
for projects following the cycle-with-milestones model.

For projects following the cycle-with-intermediary release model that
haven't done a release during the Pike cycle yet, time is running
low to do more than one release this cycle! That would be aodh, bifrost,
ceilometer, cloudkitty, karbor, magnum, panko, and tacker.

Upcoming Deadlines & Dates
--

Extra-ATC addition deadline: July 13
Non-client libraries final releases: July 20
Client libraries final releases: July 27
Pike-3 milestone (and Feature freeze): July 27
Final Pike release: August 30
Queens PTG in Denver: Sept 11-15

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev