Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread Duncan Thomas
Cinder bugs list was far more manageable once this had been done.

Is it worth sharing the tool for this? I realise it's fairly trivial to
write one, but some standardisation on the comment format etc. seems
valuable, particularly for QA folks who work between different projects.

On 23 May 2016 at 14:02, Markus Zoeller  wrote:

> TL;DR: Automatic closing of 185 bug reports which are older than 18
> months in the week R-13. Skipping specific bug reports is possible. A
> bug report comment explains the reasons.
>
>
> I'd like to get rid of more clutter in our bug list to make it more
> comprehensible by a human being. For this, I'm targeting our ~185 bug
> reports which were reported 18 months ago and still aren't in progress.
> That's around 37% of open bug reports which aren't in progress. This
> post is about *how* and *when* I do it. If you have very strong reasons
> to *not* do it, let me hear them.
>
> When
> 
> I plan to do it in the week after the non-priority feature freeze.
> That's week R-13, at the beginning of July. Until this date you can
> comment on bug reports so they get spared from this cleanup (see below).
> Beginning from R-13 until R-5 (Newton-3 milestone), we should have
> enough time to gain some overview of the rest.
>
> I also think it makes sense to make this a repeated effort, maybe after
> each milestone/release or monthly or daily.
>
> How
> ---
> The bug reports which will be affected are:
> * in status: [new, confirmed, triaged]
> * AND without assignee
> * AND created more than 18 months ago
> A preview of them can be found at [1].
>
> You can spare bug reports if you leave a comment there which says
> one of these (case-sensitive flags):
> * CONFIRMED FOR: NEWTON
> * CONFIRMED FOR: MITAKA
> * CONFIRMED FOR: LIBERTY
>
> The expired bug report will have:
> * status: won't fix
> * assignee: none
> * importance: undecided
> * a new comment which explains *why* this was done
>
> The comment the expired bug reports will get:
> This is an automated cleanup. This bug report got closed because
> it is older than 18 months and there is no open code change to
> fix this. After this time it is unlikely that the circumstances
> which led to the observed issue can be reproduced.
> If you can reproduce it, please:
> * reopen the bug report
> * AND leave a comment "CONFIRMED FOR: <release>"
>   Only still supported release names are valid.
>   valid example: CONFIRMED FOR: LIBERTY
>   invalid example: CONFIRMED FOR: KILO
> * AND add the steps to reproduce the issue (if applicable)
>
>
> Let me know if you think this comment gives enough information on how to
> handle this situation.
>
>
> References:
> [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
>
> --
> Regards, Markus Zoeller (markus_z)
>
>



-- 
Duncan Thomas


Re: [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-24 Thread Duncan Thomas
On 24 May 2016 at 05:46, John Griffith  wrote:

>
> Just curious about a couple of things: Is this attempting to solve a
> problem in the actual Cinder Volume Service or is this trying to solve
> problems with backends that can't keep up and deliver resources under heavy
> load?
>

I would posit that no backend can cope with infinite load, and with things
like A/A c-vol on the way, cinder is likely to get efficient enough that it
will start stressing more backends. It is certainly worth thinking
about.

We have more than enough backend technologies with different but
entirely reasonable metadata performance limitations, and several pieces of
code outside the backends' control (examples: FC zoning, iSCSI multipath)
seem to have clear scalability issues.

I think I share the worry that putting limits everywhere becomes a band-aid
that avoids fixing deeper problems, whether in cinder or on the backends
themselves.


> I get the copy-image to volume, that's a special case that certainly does
> impact Cinder services and the Cinder node itself, but there's already
> throttling going on there, at least in terms of IO allowed.
>

Which is probably not the behaviour we want - beyond a certain point,
queuing gives a better user experience than fair sharing, since with fair
sharing even moderate loads can reach the point where *nothing* completes
in a reasonable amount of time.

It also seems to be a very common thing for customers to try to boot 300
instances from volume as an early smoke test of a new cloud deployment.
I've no idea why, but I've seen it many times, and others have reported the
same thing. While I'm not entirely convinced it is a reasonable test, we
should probably make sure that the usual behaviour for this is not horrible
breakage. The image cache, if turned on, certainly helps massively with
this, but I think some form of queuing is a good thing for both image cache
work and probably backups too eventually.
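
As a rough illustration of the queuing idea, a cap like nova's
max_concurrent_builds can be as simple as a semaphore in front of the
expensive operation. The sketch below is hypothetical - the option name and
helpers are made up for illustration, not cinder code:

    from eventlet import semaphore

    # Would come from a config option in practice; the name is illustrative.
    MAX_CONCURRENT_CREATES = 10

    _create_semaphore = semaphore.Semaphore(MAX_CONCURRENT_CREATES)

    def _do_create(context, volume):
        pass  # stand-in for the actual backend create call

    def create_volume(context, volume):
        # Excess requests wait their turn instead of hammering the backend,
        # so everything eventually completes rather than nothing.
        with _create_semaphore:
            _do_create(context, volume)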


> Also, I'm curious... would the existing API Rate Limit configuration
> achieve the same sort of thing you want to do here?  Granted it's not
> selective but maybe it's worth mentioning.
>

Certainly worth mentioning, since I'm not sure how many people are aware it
exists. My experiences of it were that it was too limited to be actually
useful (it only rate limits a single process, and we've usually got more
than enough API workers across multiple nodes that very significant
loads are possible before tripping any reasonable per-process rate limit).



> If we did do something like this I would like to see it implemented as a
> driver config; but that wouldn't help if the problem lies in the Rabbit or
> RPC space.  That brings me back to wondering about exactly where we want to
> solve problems and exactly which.  If delete is causing problems like you
> describe I'd suspect we have an issue in our DB code (too many calls to
> start with) and that we've got some overhead elsewhere that should be
> eradicated.  Delete is a super simple operation on the Cinder side of
> things (and most back ends) so I'm a bit freaked out thinking that it's
> taxing resources heavily.
>

I agree we should definitely do more analysis of where the breakage occurs
before adding many limits or queues. Image copy stuff is an easy-to-analyse
first case - iostat can tell you exactly where the problem is.

Using the fake backend and a large number of API workers / nodes with a
pathological load trivially finds breakages currently, though it depends
exactly which code version you're running as to where the issues are. The
compare & update changes (aka race avoidance patches) have removed a bunch
of these, but seem to have led to a significant increase in DB load that
means it is easier to get DB timeouts and other issues.

As for delete being resource heavy, our reference driver provides a
pathological example with the secure delete code. Now that we've got a high
degree of confidence in the LVM thin code (specifically, I'm not aware of
any instances where it is worse than the LVM-thick code and I don't see any
open bugs that disagree), is it time to dump the LVM-thick support
completely?


-- 
Duncan Thomas


Re: [openstack-dev] [magnum][kuryr] Nested containers networking

2016-05-24 Thread Gal Sagie
Hi Hongbin,

Thank you for starting this thread.
The person that is going to work on this integration is Fawad (CC'ed), and
hopefully others will help him (we have another person from Huawei who
showed interest in working on this).

I think Fawad, given that he is the primary person working on this, should
be the Kuryr liaison for this integration, and it would help a lot if he
has a contact in Magnum who can work with him closely on that.
I can also serve as the coordinator between these efforts if Fawad is too
busy.

The first task in my view is to start describing the action items for this
integration, split the work, and address any unknown issues during this
process.

I think that design-wise things are pretty close (Fawad, please correct me
if there are any open issues) and we are just waiting to start the work
(and solve any issues as they come).

Thanks
Gal.


On Mon, May 23, 2016 at 6:35 PM, Hongbin Lu  wrote:

> Hi Kuryr team,
>
>
>
> I want to start this ML thread to sync up on the latest status of the nested
> container networking implementation. Could I know who is implementing this
> feature on the Kuryr side and how the Magnum team could help in this effort? In
> addition, I wonder if it makes sense to establish cross-project liaisons
> between Kuryr and Magnum. Magnum relies on Kuryr to implement several
> important features, so I think it is helpful to set up a communication
> channel between both teams. Thoughts?
>
>
>
> Best regards,
>
> Hongbin
>


-- 
Best Regards,

The G.


[openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-24 Thread Shuu Mutou
Hi all,

Unfortunately "higgins" is used by a media server project on Launchpad and by CI 
software on PyPI. For now, we use "python-higgins" for our project on Launchpad.

IMO, we should rename the project before the number of places we would need to patch grows.

How about "Gatling"? It's the only association with Magnum that came to mind, and 
it's not used on either Launchpad or PyPI.
Are there any other ideas?

The next renaming opportunity (these seem to come only twice a year) is Friday, June 
3rd. A few projects will be renamed on that date.
http://markmail.org/thread/ia3o3vz7mzmjxmcx

And once the project name issue is fixed, I'd like to propose a UI subproject.

Thanks,
Shu




[openstack-dev] [nova] [infra] resolving issues with Intel NFV CI

2016-05-24 Thread Znoinski, Waldemar
Hi all,

There is an issue with jobs under Intel NFV CI not being registered.
The issue is currently being investigated. ThirdPartySystems Wiki was updated.

Updates to follow.


Waldemar Znoinski

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


[openstack-dev] [oslo] Log spool in the context

2016-05-24 Thread Alexis Lee
Hi,

I have a spec: https://review.openstack.org/227766
and implementation: https://review.openstack.org/316162
for adding a spooling logger to oslo.log. Neither is merged yet, reviews
welcome.

Looking at how I'd actually integrate this into Nova, most classes do:

LOG = logging.getLogger(__name__)

which is the recommended means of getting a logger. I need to get
certain code paths to use a different logger (if spooling is turned on).
This means I need to pass it around. If I modify method signatures I'm
bound to break back-compat for something.

Option 1: use a metaclass to register each SpoolManager as a singleton,
i.e. every call to SpoolManager('api') will return the same manager. I can
then do something like:

log = LOG
if CONF.spool_api:
    log = SpoolManager('api').get_spool(context.request_id)

in every method.

Option 2: Put the logger on the context. We're already passing this
everywhere so it'd be awfully convenient.

log = context.log or LOG

Option 3: ???

I like option 2, any objections to extending oslo.context like this?
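
For concreteness, a minimal sketch of option 2, assuming a project-local
RequestContext subclass; the 'log' attribute is an assumption here, not an
existing oslo.context field:

    from oslo_context import context as oslo_context
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    class RequestContext(oslo_context.RequestContext):
        def __init__(self, *args, **kwargs):
            # Hypothetical: carry an optional spooling logger on the context.
            self.log = kwargs.pop('log', None)
            super(RequestContext, self).__init__(*args, **kwargs)

    def some_api_method(context, instance_uuid):
        # Fall back to the module-level logger when no spool is active.
        log = getattr(context, 'log', None) or LOG
        log.info("Handling request for instance %s", instance_uuid)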


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79



[openstack-dev] [Nova] change console proxy URL for cells v2 ?

2016-05-24 Thread Murray, Paul (HP Cloud)
I am implementing the following spec and have encountered a problem related
to the form of the console proxy URL provided to users.

https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/convert-consoles-to-objects.html
 

The spec changes the representation of the console auth token connection
information to be an object stored in the database, instead of the existing
dict stored in memcache in the console auth server.

At the same time we are rolling out the changes for cells v2. After talking
to alaski we have decided that the tokens should go in the child cell
database because the tokens relate to instances.

This introduces a small problem. The console URL provided to users only includes
a token, so this is not enough to map to the cell and database that contains the
record related to the token.

The URL for the console proxy is determined by the compute nodes according
to a configuration parameter. So one solution to this problem is to have a
different proxy exposed for each cell and configure the compute nodes with
the URL for the appropriate proxy for its cell.

Another solution, which can actually be implemented after the first, could
be to add some information to the URL resource path that can be used by the
proxy to map to the cell and database containing the token record.
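
As a rough sketch of that second approach (the URL shape and helper below
are assumptions for illustration, not part of the spec), the proxy could
peel an identifier out of the resource path and use it to find the cell
before validating the token:

    from urllib.parse import urlparse

    def instance_uuid_from_console_url(console_url):
        # Assumed shape: https://proxy.example.com/console/<instance_uuid>?token=...
        return urlparse(console_url).path.rstrip('/').rsplit('/', 1)[-1]

    # The proxy would then look up the cell mapping for that instance UUID
    # (and thus the right database) before checking the token itself.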

So the question is: what would be an appropriate addition to the URL
resource path that can be used to map to the cell?
- it could be a UUID for the instance (my favourite)
- it could be something directly related to the cell
- it could be something else altogether

Does anyone have a better suggestion or a preference?

Paul


Paul Murray
Technical Lead, HPE Cloud
Hewlett Packard Enterprise
+44 117 316 2527







Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread Markus Zoeller
On 24.05.2016 09:34, Duncan Thomas wrote:
> Cinder bugs list was far more manageable once this had been done.
> 
> Is it worth sharing the tool for this? I realise it's fairly trivial to
> write one, but some standardisation on the comment format etc. seems
> valuable, particularly for QA folks who work between different projects.

A first draft (without the actual expiring) is at [1]. I'm going to
finish it this week. If there is a place for it in an OpenStack repo, just
give me a pointer and I'll push a change.
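
For anyone curious what such a script boils down to, here is a rough
launchpadlib sketch - not the linked draft, just an illustration. It assumes
searchTasks in your launchpadlib API version accepts created_before, and it
reuses the "CONFIRMED FOR:" flag from this thread:

    from datetime import datetime, timedelta
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('expire-old-bugs', 'production')
    project = lp.projects['nova']
    cutoff = datetime.utcnow() - timedelta(days=548)  # ~18 months

    for task in project.searchTasks(status=['New', 'Confirmed', 'Triaged'],
                                    created_before=cutoff):
        if task.assignee is not None:
            continue
        bug = task.bug
        # Spare reports where a reviewer left a "CONFIRMED FOR:" comment.
        if any('CONFIRMED FOR:' in (msg.content or '')
               for msg in bug.messages):
            continue
        task.status = "Won't Fix"
        task.lp_save()
        bug.newMessage(content="This is an automated cleanup. ...")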

> On 23 May 2016 at 14:02, Markus Zoeller  wrote:
> 
>> TL;DR: Automatic closing of 185 bug reports which are older than 18
>> months in the week R-13. Skipping specific bug reports is possible. A
>> bug report comment explains the reasons.
>> [...]

References:
[1]
https://github.com/markuszoeller/openstack/blob/master/scripts/launchpad/expire_old_bug_reports.py

-- 
Regards, Markus Zoeller (markus_z)




[openstack-dev] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Matthew Booth
During its periodic task, ImageCacheManager does a checksum of every image
in the cache. It verifies this checksum against a previously stored value,
or creates that value if it doesn't already exist.[1] Based on this
information it generates a log message if the image is corrupt, but
otherwise takes no action. Going by git, this has been the case since 2012.

The commit which added it was associated with 'blueprint
nova-image-cache-management phase 1'. I can't find this blueprint, but I
did find this page:
https://wiki.openstack.org/wiki/Nova-image-cache-management . This talks
about 'detecting images which are corrupt'. It doesn't explain why we would
want to do that, though. It also doesn't seem to have been followed through
in the last 4 years, suggesting that nobody's really that bothered.

I understand that corruption of bits on disks is a thing, but it's a thing
for more than just the image cache. I feel that this is a problem much
better solved at other layers, prime candidates being the block and
filesystem layers. There are existing robust solutions to bitrot at both of
these layers which would cover all aspects of data corruption, not just
this randomly selected slice.

As it stands, I think this code is regularly running a pretty expensive
task looking for something which will very rarely happen, only to generate
a log message which nobody is looking for. And it could be solved better in
other ways. Would anybody be sad if I deleted it?

Matt

[1] Incidentally, there also seems to be a bug in this implementation, in
that it doesn't hold the lock on the image itself at any point during the
hashing process, meaning that it cannot guarantee that the image has
finished downloading yet.
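
On that footnote, a hedged sketch of what taking the image lock around the
hash might look like with oslo.concurrency - illustrative only, and it
assumes the download path synchronizes on the same lock name:

    import hashlib

    from oslo_concurrency import lockutils

    def compute_md5(path):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()

    def verify_cached_image(path, image_id, expected_md5):
        # Holding the per-image lock means the hash cannot race an
        # in-progress download of the same image.
        with lockutils.lock(image_id, external=True,
                            lock_path='/var/lock/nova'):
            return compute_md5(path) == expected_md5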
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)


[openstack-dev] [Nova] Live migration meeting today

2016-05-24 Thread Murray, Paul (HP Cloud)
Agenda: https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Please add items if you want to guarantee time to talk about them.

Paul


[openstack-dev] [fuel] release version numbers: let's use semvers

2016-05-24 Thread Thomas Goirand
Hi,

A number of Fuel components were previously tagged as "9.0". This isn't
compatible with the semver scheme. Could we agree to switch to x.y.z
going forward?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [Openstack-operators] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread John Garbutt
On 24 May 2016 at 10:16, Matthew Booth  wrote:
> During its periodic task, ImageCacheManager does a checksum of every image
> in the cache. It verifies this checksum against a previously stored value,
> or creates that value if it doesn't already exist.[1] Based on this
> information it generates a log message if the image is corrupt, but
> otherwise takes no action. Going by git, this has been the case since 2012.
>
> The commit which added it was associated with 'blueprint
> nova-image-cache-management phase 1'. I can't find this blueprint, but I did
> find this page: https://wiki.openstack.org/wiki/Nova-image-cache-management
> . This talks about 'detecting images which are corrupt'. It doesn't explain
> why we would want to do that, though. It also doesn't seem to have been
> followed through in the last 4 years, suggesting that nobody's really that
> bothered.
>
> I understand that corruption of bits on disks is a thing, but it's a thing
> for more than just the image cache. I feel that this is a problem much
> better solved at other layers, prime candidates being the block and
> filesystem layers. There are existing robust solutions to bitrot at both of
> these layers which would cover all aspects of data corruption, not just this
> randomly selected slice.

+1

That might mean improved docs on the need to configure such a thing.

> As it stands, I think this code is regularly running a pretty expensive task
> looking for something which will very rarely happen, only to generate a log
> message which nobody is looking for. And it could be solved better in other
> ways. Would anybody be sad if I deleted it?

For completeness, we need to deprecate it using the usual cycles:
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

I like the idea of checking the md5 matches before each boot, as it
mirrors the check we do after downloading from glance. It's possible
that's very unlikely to spot anything that shouldn't already be worried
about by something else. It may just be my love of symmetry that makes
me like that idea?
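
A minimal sketch of that symmetric check, assuming the expected md5 is
already known from the original glance download (illustrative, not nova's
actual code):

    import hashlib

    def image_matches_checksum(path, expected_md5):
        # Stream the backing file in 1 MiB chunks rather than loading it
        # whole, since cached images can be large.
        md5 = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b''):
                md5.update(chunk)
        return md5.hexdigest() == expected_md5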

Thanks,
johnthetubaguy


> [1] Incidentally, there also seems to be a bug in this implementation, in
> that it doesn't hold the lock on the image itself at any point during the
> hashing process, meaning that it cannot guarantee that the image has
> finished downloading yet.
> --
> Matthew Booth
> Red Hat Engineering, Virtualisation Team
>
> Phone: +442070094448 (UK)
>
>



Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Thierry Carrez

Chris Dent wrote:

[...]
I don't really know. I'm firmly in the camp that OpenStack needs to
be smaller and more tightly focused if a unitary thing called OpenStack
expects to be any good. So I'm curious about and interested in
strategies for figuring out where the boundaries are.

So that, of course, leads back to the original question: Is OpenStack
supposed to be a unitary.


As a data point, since I heard that question rhetorically asked quite a 
few times over the past year... There is an old answer to that, from a 
vote of the PPB (the ancestor of our TC) in June 2011, which was never 
overruled or changed afterwards:


"OpenStack is a single product made of a lot of independent, but 
cooperating, components."


The log is an interesting read: 
http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-28-20.06.log.html


--
Thierry Carrez (ttx)



Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Thierry Carrez

Morgan Fainberg wrote:

[...]  If we are accepting golang, I want it to be clearly
documented that the expectation is it is used exclusively where there is
a demonstrable case (such as with swift) and not a carte blanche to use
it wherever-you-please.

I want this to be a social contract looked at and enforced by the
community, not special permissions that are granted by the TC (I don't
want the TC to need to step in and approve every single use case of
golang, or javascript ...). It's bottlenecking back to the TC for
special permissions or inclusion (see reasons for the dissolution of the
"integrated release").

This isn't strictly an all or nothing case, this is a "how would we
enforce this?" type deal. Lean on infra to enforce that only projects
with the golang-is-ok-here tag are allowed to use it? I don't want
people to write their APIs in javascript (and node.js) nor in golang. I
would like to see most of the work continue with python as the primary
language. I just think it's unreasonable to lock tools behind a gate
that is stronger than the social / community contract (and outlined in
the resolution including X language).


+1

I'd prefer if we didn't have to special-case anyone, and we could come 
up with general rules that every OpenStack project follows. Any other 
solution is an administrative nightmare and a source of tension between 
projects (why are they special and not me).


--
Thierry Carrez (ttx)



[openstack-dev] update congress

2016-05-24 Thread Yue Xin
Hi Tim and all,

May I ask how to update the congress version in the Tokyo Hands-on Lab
environment? When I use the command "openstack congress list version" it
shows that congress is a 2013 version.

The reason why I want to update it is that I wrote a demo driver, and I
want to push data into its datasource table, but when I use a 'curl -i -g -X
PUT' command, the error is "501 not implemented", and the same error comes with
'curl -X PUSH'. I am not sure whether it comes from the version of congress
or not (I thought maybe congress is too old so it doesn't support pushing data).

Thank you very much for your response.

*Regards,*
*Yue*


[openstack-dev] [Fuel] YAQL console for master node

2016-05-24 Thread Stanislaw Bogatkin
Hi all,

as you may know, new conditions for Fuel tasks were recently introduced (in
the master and mitaka branches). Right after this I got several questions
like 'hey, how can I check my new condition?' The answer could be 'use the
standard yaql console', but that console doesn't have the Fuel-internal yaql
functions which are the foundation for Fuel task conditions. As a result, I
have written a small utility to make it possible to check new conditions on
the fly: [0]. It is still in development but usable for most of the tasks a
developer usually needs when building a new yaql condition for a task.
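
For readers who haven't used yaql directly, evaluating a condition against
sample data looks roughly like this with the stock yaql library (Fuel's
internal functions, which fuyaql adds, are not available here):

    import yaql

    engine = yaql.factory.YaqlFactory().create()
    data = {'cluster': {'status': 'operational'}}

    # A task condition might check cluster state before running.
    expression = engine("$.cluster.status = 'operational'")
    print(expression.evaluate(data=data))  # True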

If you have any questions about using this tool or want to propose an
improvement, don't hesitate to contact me. Or just fork it and do what
you want - it is licensed under GPLv3. I would be glad if it helps someone.

[0] https://github.com/sorrowless/fuyaql

-- 
with best regards,
Stan.


Re: [openstack-dev] [Openstack-operators] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Matthew Booth
On Tue, May 24, 2016 at 11:06 AM, John Garbutt  wrote:

> On 24 May 2016 at 10:16, Matthew Booth  wrote:
> > During its periodic task, ImageCacheManager does a checksum of every
> image
> > in the cache. It verifies this checksum against a previously stored
> value,
> > or creates that value if it doesn't already exist.[1] Based on this
> > information it generates a log message if the image is corrupt, but
> > otherwise takes no action. Going by git, this has been the case since
> 2012.
> >
> > The commit which added it was associated with 'blueprint
> > nova-image-cache-management phase 1'. I can't find this blueprint, but I
> did
> > find this page:
> https://wiki.openstack.org/wiki/Nova-image-cache-management
> > . This talks about 'detecting images which are corrupt'. It doesn't
> explain
> > why we would want to do that, though. It also doesn't seem to have been
> > followed through in the last 4 years, suggesting that nobody's really
> that
> > bothered.
> >
> > I understand that corruption of bits on disks is a thing, but it's a
> thing
> > for more than just the image cache. I feel that this is a problem much
> > better solved at other layers, prime candidates being the block and
> > filesystem layers. There are existing robust solutions to bitrot at both
> of
> > these layers which would cover all aspects of data corruption, not just
> this
> > randomly selected slice.
>
> +1
>
> That might mean improved docs on the need to configure such a thing.
>
> > As it stands, I think this code is regularly running a pretty expensive
> task
> > looking for something which will very rarely happen, only to generate a
> log
> > message which nobody is looking for. And it could be solved better in
> other
> > ways. Would anybody be sad if I deleted it?
>
> For completeness, we need to deprecate it using the usual cycles:
>
> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html


I guess I'm arguing that it isn't a feature, and never has been: it really
doesn't do anything at all except generate a log message. Are log messages
part of the deprecation contract?

If operators are genuinely finding corrupt images to be a problem and find
this log message useful that would be extremely useful to know.


> I like the idea of checking the md5 matches before each boot, as it
> mirrors the check we do after downloading from glance. Its possible
> thats very unlikely to spot anything that shouldn't already be worried
> about by something else. It may just be my love of symmetry that makes
> me like that idea?
>

It just feels arbitrary to me for a few reasons. Firstly, it's only
relevant to storage schemes which use the file in the image cache as a
backing file. In the libvirt driver, this is just the qcow2 backend. While
this is the default, most users are actually using ceph. Assuming it isn't
cloning it directly from ceph-backed glance, the Rbd backend imports from
the image cache during spawn, and has nothing to do with it thereafter. So
for Rbd we'd want to check during spawn. Same for the Flat, Lvm and Ploop
backends.

Except that it's still arbitrary because we're not checking the Qcow
overlay on each boot. Or ephemeral or swap disks. Or Lvm, Flat or Rbd disks
at all. Or the operating system. And it's still expensive, and better done
by the block or filesystem layer.

I'm not personally convinced there's all that much point checking during
download either, but given that we're loading all the bits anyway that
check is essentially free. However, even if we decided we needed to defend
the system against bitrot above the block/filesystem layer (and I'm not at
all convinced of that) we'd want a coordinated design for it. Without one,
we risk implementing a bunch of disconnected/incomplete stuff that doesn't
meet anybody's needs, but burns resources anyway.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)


[openstack-dev] [all][oslo_config] Improving Config Option Help Texts

2016-05-24 Thread Erno Kuvaja
Hi all,

Based on the not-yet-merged spec for categorized config options [0], some
projects seem to have started improving the config option help texts. This
is great, but I noticed a scary trend of clutter being added in these
sections. Looking at individual changes it does not look bad at all in the
code: 20 lines of well-structured templating. Until you start comparing it
to the example config files. Lots of this data is redundant with what is
already generated into the example configs, and then the maths struck me.

In Glance alone we have ~120 config options (this does not include
glance_store nor any other dependencies we pull in for our configs, like
Keystone auth). Those +20 lines of templating just became over 2000 lines of
clutter in the example configs, and if all projects do that we can
multiply the issue. I think no-one with good intentions can say that it's
beneficial for our deployers and admins, who are already struggling with the
configs.

So I beg you, when you make these changes to the config option help fields,
keep them short and compact. We have the configuration docs for extended
descriptions and cutely formatted repetitive fields, but let's keep those
out of the generated (example) config files. At least I would like to be
able to fit more than 3 options on the screen at a time when reading
configs.
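
To make that concrete, this is the shape I'd hope for - an illustrative
option, not a real Glance one: one or two compact sentences, with the long
discussion left to the configuration docs:

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('image_cache_dir',
                   help='Base directory for the image cache. See the '
                        'configuration guide for sizing and tuning details.'),
    ]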

[0] https://review.openstack.org/#/c/295543/

Best regards,
Erno "jokke_" Kuvaja


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Chris Dent

On Tue, 24 May 2016, Thierry Carrez wrote:

Chris Dent wrote:

So that, of course, leads back to the original question: Is OpenStack
supposed to be a unitary.


As a data point, since I heard that question rhetorically asked quite a few 
times over the past year... There is an old answer to that, since a vote of 
the PPB (the ancestor of our TC) from June, 2011 which was never overruled or 
changed afterwards:


"OpenStack is a single product made of a lot of independent, but cooperating, 
components."


The log is an interesting read: 
http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-28-20.06.log.html


That is an interesting read, thank you for pointing that out. It all
feels very familiar.

As someone who has only entered the community in the last two years, and
when I did enter I was completely uninitiated (in the sense that I had
never really even heard of OpenStack so didn't have much in the way of
preconceived notions), I can say that the meaning and message behind
that quote was _not_ drilled into me. If we want it to be a true and
active part of our culture, it should have been.

I'm not certain I have a strong preference for how things should be
either way (reading the log I felt a lot of empathy for notmyname's
position and thought "we'd be a much different (maybe better, unclear)
thing now if we'd headed that way") but I can say unequivocally that
it would make decision processes, now, much better if we could commit
to one way.

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Chris Dent

On Tue, 24 May 2016, Matthew Booth wrote:


I understand that corruption of bits on disks is a thing, but it's a thing
for more than just the image cache. I feel that this is a problem much
better solved at other layers, prime candidates being the block and
filesystem layers. There are existing robust solutions to bitrot at both of
these layers which would cover all aspects of data corruption, not just
this randomly selected slice.


Completely agree that this a problem that is better solved by other
tools in a more generic fashion. The presence of the feature in
OpenStack code adds unnecessary cruft and load for artificial value:
it doesn't actually protect against the problem in any real way.


--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Davanum Srinivas
+1 ttx. ("I'd prefer if we didn't have to special-case anyone")

-- Dims

On Tue, May 24, 2016 at 6:24 AM, Thierry Carrez  wrote:
> Morgan Fainberg wrote:
>>
>> [...]  If we are accepting golang, I want it to be clearly
>> documented that the expectation is it is used exclusively where there is
>> a demonstrable case (such as with swift) and not a carte blanche to use
>> it wherever-you-please.
>>
>> I want this to be a social contract looked at and enforced by the
>> community, not special permissions that are granted by the TC (I don't
>> want the TC to need to step in and approve every single use case of
>> golang, or javascript ...). It's bottlenecking back to the TC for
>> special permissions or inclusion (see reasons for the dissolution of the
>> "integrated release").
>>
>> This isn't strictly an all or nothing case, this is a "how would we
>> enforce this?" type deal. Lean on infra to enforce that only projects
>> with the golang-is-ok-here tag are allowed to use it? I don't want
>> people to write their APIs in javascript (and node.js) nor in golang. I
>> would like to see most of the work continue with python as the primary
>> language. I just think it's unreasonable to lock tools behind a gate
>> that is stronger than the social / community contract (and outlined in
>> the resolution including X language).
>
>
> +1
>
> I'd prefer if we didn't have to special-case anyone, and we could come up
> with general rules that every OpenStack project follows. Any other solution
> is an administrative nightmare and a source of tension between projects (why
> are they special and not me).
>
> --
> Thierry Carrez (ttx)
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Duncan Thomas
On 24 May 2016 at 02:28, Gregory Haynes  wrote:

> On Mon, May 23, 2016, at 05:24 PM, Morgan Fainberg wrote:
>
> I really do not want to "special case" swift. It really doesn't go with
> the spirit of inclusion.
>
>
> I am not sure how inclusion is related to special casing. Inclusion here
> implies that some group is not being accepted in to our community. The
> excluded group here would be one writing software not in python, which is
> the same groups being excluded currently. This is an argument against the
> status quo, not against special casing.
>
>

It excludes any other project that might be possible under the more relaxed
rules. It says 'swift is a special snowflake that the rules don't apply to,
but you are not special enough, go away', which is the very definition of
exclusionary.


> I expect we will see a continued drive for whichever additional languages
> are supported once it's in place/allowed.
>
>
> That is the problem. The social and debugging costs of adding another
> language are a function of how much that other language is used. If one
> component of one project is written in another language then these costs
> should be fairly low. I agree that once we allow another language we should
> expect many projects to begin using it, and IMO most if not all of these
> cases except swift will not be warranted.
>

So designate have come up with a use-case detailed in this thread. Gnocchi
have suggested they might have one. Others quite possibly exist that
haven't even been explored yet because the rules were against it.

One alternative is for swift to split into two layers, a control plane and
a data plane, and for alternative data plane implementations (i.e. the go
one, maybe something ceph-based, etc.) to sit outside of the openstack
umbrella. This is the model nearly every other openstack service has.


-- 
Duncan Thomas


[openstack-dev] [Kuryr] - IRC Meeting time change

2016-05-24 Thread Gal Sagie
Hello Everyone,

Just wanted to let you all know we changed the time of our even-week
biweekly IRC meeting.
It is now one hour earlier.

The new meeting time is *every even Monday at 14:00 UTC in
#openstack-meeting-4*
You can see the patch with the change here [1].

Thanks
Gal.

[1] https://review.openstack.org/#/c/319739/2


[openstack-dev] [ironic][neutron] bonding?

2016-05-24 Thread Jim Rollenhagen
Hi,

There's rumors floating around about Neutron having a bonding model in
the near future. Are there any solid plans for that?

For context, as part of the multitenant networking work, ironic has a
portgroup concept proposed, where operators can configure bonding for
NICs in a baremetal machine. There are ML2 drivers that support this
model and will configure a bond.

Some folks have concerns about landing this code if Neutron is going to
support bonding as a first-class citizen. So before we delay any
further, I'd like to find out if there's any truth to this, and what the
timeline for that might look like.

Thanks!

// jim



Re: [openstack-dev] [Openstack-operators] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Fichter, Dane G.
Hi John and Matt,

I actually have a spec and patch up for review addressing some of what you’re 
referring to below.

https://review.openstack.org/#/c/314222/
https://review.openstack.org/#/c/312210/

I think you're quite right that the existing ImageCacheManager code serves 
little purpose. What I propose here is a cryptographically stronger 
verification meant to protect against both deliberate modification by an 
adversary and accidental sources of disk corruption. If you like, I can 
deprecate the checksum-based verification code in the image cache as part of 
this change. Feel free to email me back or ping me on IRC (dane-fichter) 
to discuss more.

Thanks,

Dane Fichter

From: Matthew Booth <mbo...@redhat.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, May 24, 2016 at 6:49 AM
To: John Garbutt <j...@johngarbutt.com>
Cc: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>,
    "openstack-operat...@lists.openstack.org" <openstack-operat...@lists.openstack.org>
Subject: Re: [openstack-dev] [Openstack-operators] [nova] Is verification of
    images in the image cache necessary?

On Tue, May 24, 2016 at 11:06 AM, John Garbutt <j...@johngarbutt.com> wrote:
On 24 May 2016 at 10:16, Matthew Booth <mbo...@redhat.com> wrote:
> During its periodic task, ImageCacheManager does a checksum of every image
> in the cache. It verifies this checksum against a previously stored value,
> or creates that value if it doesn't already exist.[1] Based on this
> information it generates a log message if the image is corrupt, but
> otherwise takes no action. Going by git, this has been the case since 2012.
>
> The commit which added it was associated with 'blueprint
> nova-image-cache-management phase 1'. I can't find this blueprint, but I did
> find this page: https://wiki.openstack.org/wiki/Nova-image-cache-management
> . This talks about 'detecting images which are corrupt'. It doesn't explain
> why we would want to do that, though. It also doesn't seem to have been
> followed through in the last 4 years, suggesting that nobody's really that
> bothered.
>
> I understand that corruption of bits on disks is a thing, but it's a thing
> for more than just the image cache. I feel that this is a problem much
> better solved at other layers, prime candidates being the block and
> filesystem layers. There are existing robust solutions to bitrot at both of
> these layers which would cover all aspects of data corruption, not just this
> randomly selected slice.

+1

That might mean improved docs on the need to configure such a thing.

> As it stands, I think this code is regularly running a pretty expensive task
> looking for something which will very rarely happen, only to generate a log
> message which nobody is looking for. And it could be solved better in other
> ways. Would anybody be sad if I deleted it?

For completeness, we need to deprecate it using the usual cycles:
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

I guess I'm arguing that it isn't a feature, and never has been: it really 
doesn't do anything at all except generate a log message. Are log messages part 
of the deprecation contract?

If operators are genuinely finding corrupt images to be a problem and find this 
log message useful that would be extremely useful to know.


I like the idea of checking the md5 matches before each boot, as it
mirrors the check we do after downloading from glance. Its possible
thats very unlikely to spot anything that shouldn't already be worried
about by something else. It may just be my love of symmetry that makes
me like that idea?

It just feels arbitrary to me for a few reasons. Firstly, it's only relevant to 
storage schemes which use the file in the image cache as a backing file. In 
the libvirt driver, this is just the qcow2 backend. While this is the default, 
most users are actually using ceph. Assuming it isn't cloning it directly from 
ceph-backed glance, the Rbd backend imports from the image cache during spawn, 
and has nothing to do with it thereafter. So for Rbd we'd want to check during 
spawn. Same for the Flat, Lvm and Ploop backends.

Except that it's still arbitrary because we're not checking the Qcow overlay on 
each boot. Or ephemeral or swap disks. Or Lvm, Flat or Rbd disks at all. Or the 
operating system. And it's still expensive, and better done by the block or 
filesystem layer.

I'm not personally convinced there's all that much point checking during 
download either, but given that we're loading all the bits anyway that check is 
essentially free. However, even if we decided we needed to defend the system 
against bitrot above the block/filesystem layer (and I'm not at all convinced 
of that) we'd want a coordinated design for it. Without one, we risk 
implementing a bunch of disconnected/incomplete stuff that doesn't meet 
anybody's needs, but burns resources anyway.

[openstack-dev] [Openstack-dev] scaling nova kvm and neutron l3-ha and ml2+openvswitch

2016-05-24 Thread Tobias Urdin
Hello,

(I didn't have any luck with this question on the openstack-operators
list so I'm throwing it out here to see if I can catch anyone, according
to Assaf he knew a couple of you devs running a similar environment, if
not feel free to do what you please with this email hehe)

I'm gonna give it a try here and see if anybody with a similar setup
could answer some questions about scaling.
We are running Liberty with Nova with KVM and Neutron L3 HA and
ML2+Openvswitch.

* How many nova instances do you have?
* How many nova compute nodes do you have?
* How many neutron nodes do you have? (Network nodes that is hosting l3
agents, dhcp agents, openvswitch-plugin etc)
* What is your overall impression of the manageability of Open vSwitch?
* What issues have you had related to scaling, performance, etc.?

Thankful for any data, I'm trying to give my employer real world usage
information on a similar setup.
Feel free to answer me privately if you prefer but I'm sure more people
here are curious if you want to share :)

Best regards
Tobias




Re: [openstack-dev] [fuel] release version numbers: let's use semvers

2016-05-24 Thread Igor Kalnitsky
Hey Zigo,

In the Python community there's PEP 440 [1], which defines a versioning
scheme. The thing you should know is that the PEP __is not__ compatible with
semver, and it's totally fine to have a two-component version.

So I don't think we should force version changes from a two-component to a
three-component scheme, since it won't be compatible with semver anyway.
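
A quick illustration with the 'packaging' library (any PEP 440 parser would
do):

    from packaging.version import Version

    # Both parse fine under PEP 440; semver.org would reject plain "9.0"
    # because it mandates a MAJOR.MINOR.PATCH triple.
    print(Version('9.0'))                     # 9.0
    print(Version('9.0.1'))                   # 9.0.1
    print(Version('9.0') < Version('9.0.1'))  # True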

Thanks,
Igor

[1]: https://www.python.org/dev/peps/pep-0440/




Re: [openstack-dev] [Infra][Cinder] Nominating Zadara Storage VPSA CI for voting permissions

2016-05-24 Thread Sean McGinnis
On Tue, May 24, 2016 at 09:47:23AM +0300, Shlomi Avihou wrote:
> Hello,
> 
> 
> 
> I’d like to nominate Zadara Storage VPSA CI system for voting permissions
> in the Cinder program.
> 
> The CI system has been posting comments (non-voting) for the past 24 days.
> 
> 
> 
> *Third Party system info: *
> 
> https://wiki.openstack.org/wiki/ThirdPartySystems/ZadaraStorage_VPSA_CI
> 
> 
> 
> 
> *Ci Logs can be publicly accessed in : *
> http://openstack-ci-logs.zadarastorage.com/
> 
> 
> 
> If there’s any information missing please let me know.
> 
> 
> 
> Thanks,
> 
> Shlomi.

Thanks Shlomi, but we do not have any third party CI systems voting in
Cinder. It was discussed a while back (maybe during the Juno cycle?) and
we couldn't find a good reason to enable voting other than maybe making
a couple of scripting things slightly easier.

So no need to make the system voting at this time. Just continue to have
it run against all patches and post the results as a non-voting job.

Thanks,
Sean (smcginnis)





Re: [openstack-dev] [Openstack-dev] scaling nova kvm and neutron l3-ha and ml2+openvswitch

2016-05-24 Thread Anna Kamyshnikova
Hi!

Recently I performed scale testing of Neutron L3 HA on Liberty. The test
plan and test results can be found at
http://docs.openstack.org/developer/performance-docs/index.html.

On Tue, May 24, 2016 at 3:21 PM, Tobias Urdin 
wrote:

> Hello,
>
> (I didn't have any luck with this question on the openstack-operators
> list so I'm throwing it out here to see if I can catch anyone, according
> to Assaf he knew a couple of you devs running a similar environment, if
> not feel free to do what you please with this email hehe)
>
> I'm gonna give it a try here and see if anybody has a similar setup that
> could answer some questions about scaling.
> We are running Liberty with Nova with KVM and Neutron L3 HA and
> ML2+Openvswitch.
>
> * How many nova instances do you have?
> * How many nova compute nodes do you have?
> * How many neutron nodes do you have? (Network nodes that is hosting l3
> agents, dhcp agents, openvswitch-plugin etc)
> * What is your overall thought on the management ability on Openvswitch?
> * What issue have you had related to scaling, performance etc?
>
> Thankful for any data, I'm trying to give my employer real world usage
> information on a similar setup.
> Feel free to answer me privately if you prefer but I'm sure more people
> here are curious if you want to share :)
>
> Best regards
> Tobias
>
>



-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [nova] Suggest to not deprecate os-migrations API

2016-05-24 Thread Sean Dague
On 05/23/2016 10:16 PM, Zhenyu Zheng wrote:
> Hi All,
> 
> According
> to 
> https://github.com/openstack/nova/blob/master/releasenotes/notes/os-migrations-ef225e5b309d5497.yaml
> , we are going to deprecate the old os-migrations API, and two new APIs:
> /servers/{server uuid}/migrations and /servers/{server
> uuid}/migrations/{migration id} is added.
> 
> As we can see, the newly added APIs cannot work if we don't know which
> instance is migrating. If our users use HA or DRS
> applications such as openstack-watcher, automatic migrations can take
> place and we may not know which instance is migrating. An API like the
> old os-migrations would be a really good choice to use: we can get all
> the currently running migrations in a single call. So I suggest not
> deprecating this API.
> 
> Any thoughts?

If watcher is migrating behind the scenes, is there a reason that
watcher is not providing the list of resources that it is migrating?

Right now os-migrations is not yet going away, we're just not going to
add any more details to it. It will only be a set of pointers to
/servers/{server uuid}/migrations/{migration id} where all the details
will exist. The language on "deprecation" is a little odd, and "frozen"
is probably what it should be called.

Long term /os-migrations is actually one of those weird meta resources
which we'd ideally like to not have. There are a lot of these things
which really come down to "give me this arbitrary slice of servers that
are x, y, z". Which is really a search engine. And to be fast, has to be
optimized as a search engine (i.e. all attributes are indexed for fast
access). Which is a different than a database. Fortunately there is a
project that is working on that space -
http://docs.openstack.org/developer/searchlight/.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread Steven Hardy
On Tue, May 24, 2016 at 11:00:35AM +0200, Markus Zoeller wrote:
> On 24.05.2016 09:34, Duncan Thomas wrote:
> > Cinder bugs list was far more manageable once this had been done.
> > 
> > Is it worth sharing the tool for this? I realise it's fairly trivial to
> > write one, but some standardisation on the comment format etc. seems
> > valuable, particularly for QA folks who work between different projects.
> 
> A first draft (without the actual expiring) is at [1]. I'm going to
> finish it this week. If there is a place in an OpenStack repo, just give
> me a pointer and I'll push a change.

FWIW I had to do a similar thing recently when I set all TripleO bugs
reported pre-liberty Incomplete - I ended up hacking up process_bugs.py
from release-tools:

https://github.com/openstack-infra/release-tools/blob/master/process_bugs.py

Perhaps we can adapt one of the scripts there (or add a new one if needed)
that can be used for several projects?

Here's a diff of my hacked-up version FWIW (I never got around to cleaning
it up and pushing it anywhere):

http://paste.fedoraproject.org/370318/14640938/

It shows how to add the comment and set the state, which you may be able to
reuse.

One thing I am unsure of: can you actually force-expire bugs, or can you
only mark them incomplete and wait for launchpad to expire them? (The exact
criteria for this aren't that clear to me.)

Thanks,

Steve



[openstack-dev] [nova] Stable disk device instance rescue reviews for libvirt and possible implementation by other virt drivers

2016-05-24 Thread Lee Yarwood
Hello all,

https://review.openstack.org/#/q/topic:bp/virt-rescue-stable-disk-devices

I've been aimlessly pushing my patches around for this spec for a while
now and would really appreciate reviews from the community. Tempest and
devstack patches are also included in the above topic; reviews would
again be really appreciated for these.

I'd also like to ask if any other virt driver maintainers are looking to
implement this spec for their backends in Newton? The spec itself is
pretty straightforward, but I'd be happy to help if there are questions
or concerns around getting this implemented outside of libvirt.

Thanks in advance,

Lee
-- 
Lee Yarwood
Red Hat

PGP : A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Corey Bryant
Hi All,

Are there any plans to converge on one ldap client across projects?  Some
projects have moved to ldap3 and others are using pyldap (both are in
global requirements).

The issue we're running into in Ubuntu is that we can only have one ldap
client in Ubuntu main, while the others will live in universe.

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread John Griffith
On Tue, May 24, 2016 at 1:34 AM, Duncan Thomas 
wrote:

> Cinder bugs list was far more manageable once this had been done.
>
> It is worth sharing the tool for this? I realise it's fairly trivial to
> write one, but some standardisation on the comment format etc seems
> valuable, particularly for Q/A folks who work between different projects.
>
Consistency sure seems like a nice thing to me.


>
> On 23 May 2016 at 14:02, Markus Zoeller 
> wrote:
>
>> TL;DR: Automatic closing of 185 bug reports which are older than 18
>> months in the week R-13. Skipping specific bug reports is possible. A
>> bug report comment explains the reasons.
>>
>>
>> I'd like to get rid of more clutter in our bug list to make it more
>> comprehensible by a human being. For this, I'm targeting our ~185 bug
>> reports which were reported 18 months ago and still aren't in progress.
>> That's around 37% of open bug reports which aren't in progress. This
>> post is about *how* and *when* I do it. If you have very strong reasons
>> to *not* do it, let me hear them.
>>
>> When
>> 
>> I plan to do it in the week after the non-priority feature freeze.
>> That's week R-13, at the beginning of July. Until this date you can
>> comment on bug reports so they get spared from this cleanup (see below).
>> Beginning from R-13 until R-5 (Newton-3 milestone), we should have
>> enough time to gain some overview of the rest.
>>
>> I also think it makes sense to make this a repeated effort, maybe after
>> each milestone/release or monthly or daily.
>>
>> How
>> ---
>> The bug reports which will be affected are:
>> * in status: [new, confirmed, triaged]
>> * AND without assignee
>> * AND created at: > 18 months
>> A preview of them can be found at [1].
>>
>> You can spare bug reports if you leave a comment there which says
>> one of these (case-sensitive flags):
>> * CONFIRMED FOR: NEWTON
>> * CONFIRMED FOR: MITAKA
>> * CONFIRMED FOR: LIBERTY
>>
>> The expired bug report will have:
>> * status: won't fix
>> * assignee: none
>> * importance: undecided
>> * a new comment which explains *why* this was done
>>
>> The comment the expired bug reports will get:
>> This is an automated cleanup. This bug report got closed because
>> it is older than 18 months and there is no open code change to
>> fix this. After this time it is unlikely that the circumstances
>> which lead to the observed issue can be reproduced.
>> If you can reproduce it, please:
>> * reopen the bug report
>> * AND leave a comment "CONFIRMED FOR: "
>>   Only still supported release names are valid.
>>   valid example: CONFIRMED FOR: LIBERTY
>>   invalid example: CONFIRMED FOR: KILO
>> * AND add the steps to reproduce the issue (if applicable)
>>
>>
>> Let me know if you think this comment gives enough information how to
>> handle this situation.
>>
>>
>> References:
>> [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
>>
>> --
>> Regards, Markus Zoeller (markus_z)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> --
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Clint Byrum
Excerpts from Geoff O'Callaghan's message of 2016-05-24 15:31:28 +1000:
> 
> > On 24 May 2016, at 3:13 PM, Clint Byrum  wrote:
> > 
> > 
> [snip]
> 
> > those other needs. Grab a python developer, land some code, and your
> > feature is there.
> 
> s/python/whateverlanguage/
> 

But that's just the point. If you have lots of languages, you have to
find developers who know whichever one your feature or bug fix needs to
be written in.

> > 
> >> I also never said, ship the source code and say ‘good luck’.  What I did
> >> imply was that, due to a relaxing of coding platform requirements, we
> >> might be able to deliver a function at this performance point that we may
> >> not have been able to deliver otherwise.  We should always provide support
> >> and the code, but as to what language it’s written in, I’m personally not
> >> fussed; I have to deal with a variety of languages already, so maybe
> >> that’s why I don’t see it as a big problem.
> > 
> > This again assumes that one only buys software and does not ever
> > participate in its development in an ongoing basis. There's nothing
> > wrong with that, but this particular community is highly focused on
> > people who do want to participate and think that the ability to
> > adapt this cloud they've invested in to their changing business needs is
> > more important than any one feature.
> 
> No, I didn’t say that at all and I don’t believe it’s assumed.  I just said
> I wasn’t fussed about what language it’s written in and just wanted
> developers to be able to contribute if they had something to contribute.
> 

Not being fussed about the language means not being fussed about who can
develop on it, so I took that to mean not being interested in developing
on it. I'm not sure either of us was "wrong" here, but I apologize for
assuming that's what you meant, if that's indeed not what you meant.

> > 
> >> 
> >> I understand there will be integration challenges and I agree with 
> >> cohesiveness being a good thing, but I also believe we must deliver value 
> >> more than cohesiveness.   The value is what operators want,  the 
> >> cohesiveness is what the developers may or may not want.
> >> 
> > 
> > We agree that delivering value to end users and operators is the #1
> > priority. I think we just disagree on the value of an open development
> > model and cohesion in the _community_.
> 
> It’s not open if you restrict developers based on programming language.
> Trust me, I get cohesion and its value; we’ve reached the stage now where
> cohesion is being questioned.  The questioning is a good thing and it is a
> measure of the health of the community.

So, there's a funny principle here, where the word open is _so open_
that one can use it to classify any number of aspects, while ignoring
others, and still be correct.

I qualified the development model with the word open, because the way
we govern it, the way code and change move through the system, are 100%
transparent and available to anyone who wants to participate. But I agree,
it is less available to those who want to participate using languages
we've chosen to avoid. They have to begin at the governance level,
which they have, in fact, done by approaching the TC. But they may be
shut out, and that would make developing on OpenStack closed to them.

However, I don't think the TC takes this lightly, and they understand
that having it open to _more_ contribution is the goal. What I think may
not make sense to all parties, is that closing it to some, will keep it
open to many others. And what I think Thierry did by opening this thread
was try to figure out how many stand on either side of that decision.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-24 Thread Adrian Otto
Before considering a project rename, I suggest you seek guidance from the
OpenStack technical committee, and/or the OpenStack-infra team. There is 
probably a simple workaround to the concern voiced below.

--
Adrian

> On May 24, 2016, at 1:37 AM, Shuu Mutou  wrote:
> 
> Hi all,
> 
> Unfortunately "higgins" is used by media server project on Launchpad and CI 
> software on PYPI. Now, we use "python-higgins" for our project on Launchpad.
> 
> IMO, we should rename the project to avoid accumulating more places that need patching.
> 
> How about "Gatling"? It's only association from Magnum. It's not used on both 
> Launchpad and PYPI.
> Is there any idea?
> 
> The renaming opportunity will come (it seems to happen only twice a year) on
> Friday, June 3rd. A few projects will rename on this date.
> http://markmail.org/thread/ia3o3vz7mzmjxmcx
> 
> And once the project name issue is fixed, I'd like to propose a UI subproject.
> 
> Thanks,
> Shu
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] YAQL console for master node

2016-05-24 Thread Aleksandr Didenko
Hi,

thank you Stas, a long-awaited tool :) I'm using it right now on the latest
Fuel-10.0; it's very helpful and saves a lot of time (switching between nodes
to test YAQL for different roles is super cool).
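(For anyone who wants to try plain YAQL first, outside of Fuel, a minimal
sketch with the bare yaql package follows -- note it won't have the
Fuel-specific functions fuyaql adds on top:)

    # Minimal sketch using the plain yaql package only; none of the
    # Fuel-internal functions are available here.
    import yaql

    engine = yaql.factory.YaqlFactory().create()
    data = {'roles': ['controller', 'compute'], 'debug': True}
    expression = engine('$.roles.len() > 0 and $.debug')
    print(expression.evaluate(data=data))  # True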

Regards,
Alex


On Tue, May 24, 2016 at 12:50 PM, Stanislaw Bogatkin  wrote:

> Hi all,
>
> as you may know, new conditions for Fuel tasks were recently introduced (in
> master and mitaka branches). Right after this I got several questions like
> 'hey, how can I check my new condition?' The answer could be 'use the
> standard yaql console', but that console doesn't have the Fuel-internal YAQL
> functions which are the foundation of Fuel task conditions. As a result, I
> have written a small utility to make it possible to check new conditions on
> the fly: [0]. It's still in development but usable for most tasks developers
> usually need when building a new YAQL condition for a task.
>
> If you have any questions about using this tool or want to propose an
> improvement, don't hesitate to contact me. Or just fork it and do what
> you want - it's licensed under GPLv3. I would be glad if it helps someone.
>
> [0] https://github.com/sorrowless/fuyaql
>
> --
> with best regards,
> Stan.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-24 Thread Gorka Eguileor
On 23/05, Ivan Kolodyazhny wrote:
> Hi developers and operators,
> I would like to get any feedback from you about my idea before I'll start
> work on spec.
> 
> In Nova, we've got the max_concurrent_builds option [1] to set the 'Maximum
> number of instance builds to run concurrently' per compute node. There is no
> equivalent in Cinder.

Hi,

First I want to say that I think this is a good idea because I know this
message will get diluted once I start with my mumbling.  ;-)

The first thing we should allow to control is the number of workers per
service, since we currently only allow setting it for the API nodes and
all other nodes will use a default of 1000.  I posted a patch [1] to
allow this and it's been sitting there for the last 3 months.  :'-(

As I see it, not all the mentioned problems are equal, and the main
distinction is caused by Cinder being not only in the control path but
also in the data path, resulting in some of the issues being
backend-specific limitations that I believe should be addressed
differently in the specs.

For operations where Cinder is in the control path we should be
limiting/queuing operations in the Cinder core code (for example the
manager), whereas when the limitation only applies to some drivers it
should be addressed by the drivers themselves.  The spec should still
provide a clear mechanism/pattern to solve it in the drivers, so that all
drivers can use a similar pattern; that will provide consistency, making
it easier to review and maintain.

The queuing should preserve the order of arrival of operations, which
file locks from Oslo concurrency and Tooz don't do.
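(For illustration only -- not the proposed design: a plain semaphore caps
parallelism but does not preserve arrival order, so an order-preserving
limiter needs an explicit FIFO feeding a fixed pool of workers, roughly
like this sketch:)

    # Sketch of an order-preserving concurrency limiter: requests are
    # queued FIFO and consumed by a fixed number of workers, so at most
    # MAX_CONCURRENT operations run at once, in arrival order.
    import queue
    import threading

    MAX_CONCURRENT = 10
    _requests = queue.Queue()

    def _worker():
        while True:
            fn, args = _requests.get()
            try:
                fn(*args)
            finally:
                _requests.task_done()

    for _ in range(MAX_CONCURRENT):
        threading.Thread(target=_worker, daemon=True).start()

    def submit(fn, *args):
        """Enqueue an operation; it runs when a worker frees up."""
        _requests.put((fn, args))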

> 
> Why do we need it for Cinder? IMO, it could help us to address the following
> issues:
> 
>- Creation of N volumes at the same time increases resource usage by
>the cinder-volume service a lot. The image caching feature [2] could help us
>a bit in the case when we create a volume from an image. But we still have
>to upload N images to the volume backend at the same time.

This is an example where we are in the data path.

>- Deletion of N volumes in parallel. Usually it's not a very hard task
>for Cinder, but if you have to delete 100+ volumes at once, you can hit
>different issues with DB connections, CPU and memory usage. In the case of
>LVM, it may also use the 'dd' command to clean up volumes.

This is a case where it is a backend limitation and should be handled by
the drivers.

I know some people say that deletion and attaching have problems when a
lot of them are requested to the c-vol nodes and that cinder cannot
handle the workload properly, but in my experience these cases are
always due to suboptimal cinder configuration, like a low number of DB
connections configured in cinder that makes operations fight for a DB
connection, creating big delays in completing operations.

>- It will be some kind of load balancing in HA mode: if a cinder-volume
>process is busy with current operations, it will not pick up messages from
>RabbitMQ and another cinder-volume service will do it.

I don't understand what you mean by this.  Do you mean that the Cinder
service will stop listening to the message queue when it reaches a
certain workload on the "heavy" operations?  Then wouldn't it also stop
processing "light" operations?

>- From the user's perspective, it seems better to create/delete N
>volumes a bit more slowly than to fail after X volumes were created/deleted.

I agree, it's better not to fail.  :-)

Cheers,
Gorka.

> 
> 
> [1]
> https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
> [2]
> https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Flavio Percoco

On 23/05/16 21:57 +0100, Chris Dent wrote:

[snip]


So, yet another way to frame the original question (in a loaded way)
may be: Are we trying to come up with a way of defining the community
that lets us carry on doing what we've been doing, haphazardly, or
are we trying to get the process of defining the community to bring
us to a point where we have some useful constraints that allow us to
more effectively reach goal X?

Begging, of course: What's X?

(To me, an unfettered big tent is a great way to keep riding the
great OpenStack enterprise boondoggle, but I'm not sure it's
resulting in a great experience for humans who aren't on that
train.)


The question I believe we're trying to answer is the second one. It seems clear
to me that what we're doing has some problems we need to fix and coming up with
a way to explain what we're doing without fixing those problems won't help us at
all.

I've expressed several times my concerns related to an unfettered big tent. My
belief is that we should focus on the goal and accept that we might need to set
stronger rules (for lack of a better word in my vocabulary) that will help
preserve the tent.

So, just to make sure I'm making myself clear, I believe we should go with
option #2 in Thierry's comment from May 23 11:3 on this[0] review. While I'm not
entirely opposed to #1 I think #2 is better for us at this point in time. Here's
a quote of Thierry's comment:

  "To summarize my view on this, I think our only two options here are (1)
  approve the addition of golang (with caveats on where it should be used
  to try to minimize useless churn), or (2) precise the line between
  'openstack projects' and 'dependencies of openstack projects' in a way
  that makes it obvious that components requiring such optimization as to
  require golang (or any other such language) should be developed as
  dependencies"

My main motivation is that I still believe option #1 will have a negative impact
on the community and, perhaps more importantly, I don't think it'll help
reaching the goal we've been talking about in this thread. Many people have been
asking for focus and I think #2 will do that, whereas #1 will open the doors to
a different set of problems and complexities that won't help with keeping the 
focus.

This is, of course, just my opinion.

[0] https://review.openstack.org/#/c/312267/

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [aodh] Tempest gate not working

2016-05-24 Thread Julien Danjou
Hi,

So it turns out we tried (especially Ryota) to add Tempest support via
https://review.openstack.org/#/c/303921/ for Aodh's gate, but it does
not actually run Tempest.

Otherwise, we would have noticed something wrong in
https://review.openstack.org/#/c/318052/. As EmilienM noticed in the Puppet
Aodh module, which actually does run Tempest, Aodh has a test checking the
combination alarm that now fails.

The tempest job log shows that it runs the functional tests, but not Tempest:
  
http://logs.openstack.org/52/318052/2/check/gate-aodh-dsvm-tempest-plugin-mongodb/5840d90/console.html.gz#_2016-05-19_14_01_49_072
  + ./post_test_hook.sh:main:56  :   sudo -E -H -u stack tox 
-efunctional

I've no idea how exactly this works, but if anyone has knowledge of the gate
and Tempest, it'd be awesome to fix it. Or finish it: as the job is
still non-voting, I imagine nobody finished Ryota's first attempt.

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Stable disk device instance rescue reviews for libvirt and possible implementation by other virt drivers

2016-05-24 Thread Jay Pipes

Hi Lee,

I'll try to get to these reviews later this afternoon. Thanks for your 
patience.


Best,
-jay

On 05/24/2016 08:53 AM, Lee Yarwood wrote:

Hello all,

https://review.openstack.org/#/q/topic:bp/virt-rescue-stable-disk-devices

I've been aimlessly pushing my patches around for this spec for a while
now and would really appreciate reviews from the community. Tempest and
devstack patches are also included in the above topic; reviews would
again be really appreciated for these.

I'd also like to ask if any other virt driver maintainers are looking to
implement this spec for their backends in Newton? The spec itself is
pretty straightforward, but I'd be happy to help if there are questions
or concerns around getting this implemented outside of libvirt.

Thanks in advance,

Lee



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Matthew Booth
On Tue, May 24, 2016 at 1:15 PM, Fichter, Dane G. 
wrote:

> Hi John and Matt,
>
> I actually have a spec and patch up for review addressing some of what
> you’re referring to below.
>
> https://review.openstack.org/#/c/314222/
> https://review.openstack.org/#/c/312210/
>
> I think you’re quite right that the existing ImageCacheManager code serves
> little purpose. What I propose here is a cryptographically stronger
> verification meant to protect against both deliberate modification by an
> adversary, as well as accidental sources of disk corruption. If you like, I
> can deprecate the checksum-based verification code in the image cache as a
> part of this change. Feel free to email me back or ping me on IRC
> (dane-fichter) to discuss further.
>

Thanks Dane, reviewed. I don't think the details are right yet, but I do
think this is the way to go. I also think we need to entirely divorce this
functionality from the image cache.
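(Conceptually the stronger check is tiny -- a digest comparison against
trusted metadata rather than a locally stored md5; the following is a
hedged sketch, not the code under review:)

    # Hedged sketch, not the code under review: the expected digest must
    # come from trusted metadata (e.g. signed Glance properties), not be
    # computed and stored locally where an attacker could update it too.
    import hashlib

    def verify_image(path, expected_sha256):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                h.update(chunk)
        if h.hexdigest() != expected_sha256:
            raise RuntimeError('digest mismatch for %s' % path)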

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-24 Thread Michał Dulko


On 05/24/2016 04:38 PM, Gorka Eguileor wrote:
> On 23/05, Ivan Kolodyazhny wrote:
>> Hi developers and operators,
>> I would like to get any feedback from you about my idea before I'll start
>> work on spec.
>>
>> In Nova, we've got the max_concurrent_builds option [1] to set the 'Maximum
>> number of instance builds to run concurrently' per compute node. There is no
>> equivalent in Cinder.
> Hi,
>
> First I want to say that I think this is a good idea because I know this
> message will get diluted once I start with my mumbling.  ;-)
>
> The first thing we should allow to control is the number of workers per
> service, since we currently only allow setting it for the API nodes and
> all other nodes will use a default of 1000.  I posted a patch [1] to
> allow this and it's been sitting there for the last 3 months.  :'-(
>
> As I see it, not all the mentioned problems are equal, and the main
> distinction is caused by Cinder being not only in the control path but
> also in the data path, resulting in some of the issues being
> backend-specific limitations that I believe should be addressed
> differently in the specs.
>
> For operations where Cinder is in the control path we should be
> limiting/queuing operations in the cinder core code (for example the
> manager) whereas when the limitation only applies to some drivers this
> should be addressed by the drivers themselves.  Although the spec should
> provide a clear mechanism/pattern to solve it in the drivers as well so
> all drivers can use a similar pattern which will provide consistency,
> making it easier to review and maintain.
>
> The queuing should preserve the order of arrival of operations, which
> file locks from Oslo concurrency and Tooz don't do.

I would be seriously opposed to queuing done inside Cinder code. It
makes draining a service harder and increases the impact of a failure of a
single service. We already have a queue system, and it is whatever you're
running under oslo.messaging (RabbitMQ mostly). Making our RPC worker
count configurable for each service sounds like the best shot to me.

>> Why do we need it for Cinder? IMO, it could help us to address the following
>> issues:
>>
>>- Creation of N volumes at the same time increases resource usage by
>>the cinder-volume service a lot. The image caching feature [2] could help us
>>a bit in the case when we create a volume from an image. But we still have
>>to upload N images to the volume backend at the same time.
> This is an example where we are in the data path.
>
>>- Deletion of N volumes in parallel. Usually it's not a very hard task
>>for Cinder, but if you have to delete 100+ volumes at once, you can hit
>>different issues with DB connections, CPU and memory usage. In the case of
>>LVM, it may also use the 'dd' command to clean up volumes.
> This is a case where it is a backend limitation and should be handled by
> the drivers.
>
> I know some people say that deletion and attaching have problems when a
> lot of them are requested to the c-vol nodes and that cinder cannot
> handle the workload properly, but in my experience these cases are
> always due to suboptimal cinder configuration, like a low number of DB
> connections configured in cinder that make operations fight for a DB
> connection creating big delays to complete operations.
>
>>- It will be some kind of load balancing in HA mode: if a cinder-volume
>>process is busy with current operations, it will not pick up messages from
>>RabbitMQ and another cinder-volume service will do it.
> I don't understand what you mean with this.  Do you mean that Cinder
> service will stop listening to the message queue when it reaches a
> certain workload on the "heavy" operations?  Then wouldn't it also stop
> processing "light" operations?
>
>>- From the user's perspective, it seems better to create/delete N
>>volumes a bit more slowly than to fail after X volumes were created/deleted.
> I agree, it's better not to fail.  :-)
>
> Cheers,
> Gorka.
>
>>
>> [1]
>> https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
>> [2]
>> https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-24 Thread Chris Hoge
+1

> On May 23, 2016, at 8:25 PM, Mike Perez  wrote:
> 
>> On 18:00 May 20, Nikhil Komawar wrote:
>> Hello all,
>> 
>> 
>> I want to propose having a dedicated virtual sync next week Thursday May
>> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
>> Glance. We are making a few updates to the spec; so it would be good to
>> have everyone on the same page and soon start merging those spec changes.
>> 
>> 
>> Also, I would like this sync to be a cross-project one so that all the
>> different stakeholders are aware of the updates to this work, even if you
>> just want to listen in.
>> 
>> 
>> Please vote with +1, 0, -1. Also, if the time doesn't work please
>> propose 2-3 additional time slots.
>> 
>> 
>> We can decide on the tool later, and I will set up an agenda if we have
>> enough interest.
> 
> +1
> 
> -- 
> Mike Perez
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Morgan Fainberg
On Tue, May 24, 2016 at 5:53 AM, Corey Bryant 
wrote:

> Hi All,
>
> Are there any plans to converge on one ldap client across projects?  Some
> projects have moved to ldap3 and others are using pyldap (both are in
> global requirements).
>
> The issue we're running into in Ubuntu is that we can only have one ldap
> client in Ubuntu main, while the others will live in universe.
>
> --
> Regards,
> Corey
>
>
Out of curiosity, what drives this requirement? pyldap and ldap3 do not
overlap in namespace and can co-install just fine. This is no different
than previously having python-ldap and ldap3.

It seems a little arbitrary to say only one of these can be in main, but
this is why I am asking.

--morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Dean Troyer
On Tue, May 24, 2016 at 8:20 AM, Flavio Percoco  wrote:
>
> So, just to make sure I'm making myself clear, I believe we should go with
> option #2 in Thierry's comment from May 23 11:3 on this[0] review. While
> I'm not
> entirely opposed to #1 I think #2 is better for us at this point in time.
> Here's
> a quote of Thierry's comment:
>
>   "To summarize my view on this, I think our only two options here are
> (1)
>   approve the addition of golang (with caveats on where it should be
> used
>   to try to minimize useless churn), or (2) precise the line between
>   'openstack projects' and 'dependencies of openstack projects' in a
> way
>   that makes it obvious that components requiring such optimization as
> to
>   require golang (or any other such language) should be developed as
>   dependencies"
>
> My main motivation is that I still believe option #1 will have a negative
> impact
> on the community and, perhaps more importantly, I don't think it'll help
> reaching the goal we've been talking about in this thread. Many people
> have been
> asking for focus and I think #2 will do that, whereas #1 will open the
> doors to
> a different set of problems and complexities that won't help with keeping
> the focus.
>

Option #2 without the followup of actually evaluating and removing things
that do not fit is really Option #3, do nothing. Which is what I am afraid
will happen.  No renewed focus, no growth, no goal.

On the language front, since we want focus, the existing decisions re
languages should also be part of that re-evaluation for focus.  It sure
feels like JavaScript is in exactly the same boat as folks fear Golang will
be here (a special case, domain-specific, division of community (ask
Horizon devs)).  And Bash, well, that isn't even a language.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-24 Thread Ihar Hrachyshka

> On 17 May 2016, at 13:07, Ihar Hrachyshka  wrote:
> 
> Hi stable-maint-core and all,
> 
> I would like to propose Brian for neutron specific stable team.
> 
> His stats for neutron stable branches are (last 120 days):
> 
> mitaka: 19 reviews; liberty: 68 reviews (3rd place in the top); kilo: 16 
> reviews.
> 
> Brian helped the project with stabilizing liberty neutron/DVR jobs, and with 
> other L3 related stable matters. In his stable reviews, he shows attention to 
> details.
> 
> If Brian is added to the team, I will make sure he is aware of all stable 
> policy intricacies.
> 
> Side note: recently I added another person to the team (Cedric Brandilly), 
> and now I realize that I haven’t followed the usual approval process. That 
> said, the person also has decent stable review stats, and is aware of the 
> policy. If someone has doubts about that addition to the team, please ping me 
> and we will discuss how to proceed.
> 
> Ihar

OK, it’s been a whole week for this thread, and there are no objections. I added
Brian to the neutron-stable-maint gerrit group.

Welcome Brian!

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovo] NeutronDbObject concurrency issues

2016-05-24 Thread John Schwarz
The integration of tooz into Neutron is being discussed as part of
https://bugs.launchpad.net/neutron/+bug/1552680 as an RFE for Newton.
Hopefully I'll have some news to share on this matter in the
upcoming days (and if I do, I'll update the launchpad bug to avoid
duplication).
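(For context, the tooz primitive in question here is a distributed lock;
basic usage looks roughly like the sketch below, where the backend URL and
the lock/member names are deployment-specific placeholders:)

    # Rough sketch of tooz distributed locking; the backend URL
    # (redis, zookeeper, ...) and names are deployment-specific.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'redis://localhost:6379', b'neutron-server-1')
    coordinator.start()

    with coordinator.get_lock(b'router-42'):
        # only one member across the cluster enters here at a time
        pass

    coordinator.stop()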

On Tue, May 24, 2016 at 8:54 AM, Gary Kotton  wrote:
> Hi,
>
> We have used tooz to enable concurrency. Zookeeper and Redis worked well. I
> think that it is certainly something that we need to consider. The challenge
> becomes deployment.
>
> Thanks
>
> Gary
>
>
>
> From: Damon Wang 
> Reply-To: OpenStack List 
> Date: Tuesday, May 24, 2016 at 5:58 AM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [neutron][ovo] NeutronDbObject concurrency
> issues
>
>
>
> Hi,
>
>
>
> I want to add an option which handle by another project Tooz.
>
>
>
> https://github.com/openstack/tooz
>
>
>
> with redis or some other drivers, it seems pretty a good choice.
>
>
>
> Any comments?
>
>
>
> Wei Wang
>
>
>
> 2016-05-17 6:53 GMT+08:00 Ilya Chukhnakov :
>
>
>
> On 16 May 2016, at 20:01, Michał Dulko  wrote:
>
>
> It's not directly related, but this reminds me of tests done by geguileo
> [1] some time ago that were comparing different methods of preventing DB
> race conditions in concurrent environment. Maybe you'll also find them
> useful as you'll probably need to do something like conditional update
> to increment a revision number.
>
> [1] https://github.com/Akrog/test-cinder-atomic-states
>
>
>
> Thanks for the link. The SQLA revisions are similar to the
> 'solutions/update_with_where',
>
> but they use the dedicated column for that [2]. And as long as it is
> properly configured,
>
> it happens 'automagically' (SQLA will take care of adding proper 'where' to
> 'update').
>
>
>
> [2] http://docs.sqlalchemy.org/en/latest/orm/versioning.html
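(Concretely, the mechanism behind [2] is a version column that SQLAlchemy
folds into the UPDATE's WHERE clause -- a minimal sketch, with the model
and column names being illustrative only:)

    # Minimal sketch of SQLAlchemy optimistic locking: the version column
    # is added to the UPDATE's WHERE clause, and a concurrent writer
    # triggers StaleDataError instead of a silent lost update.
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Router(Base):
        __tablename__ = 'routers'
        id = Column(Integer, primary_key=True)
        name = Column(String(255))
        revision = Column(Integer, nullable=False)
        __mapper_args__ = {'version_id_col': revision}

    # flushing an update emits roughly:
    #   UPDATE routers SET name=?, revision=? WHERE id=? AND revision=?
    # and raises orm.exc.StaleDataError if no row matched.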
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Ben Meyer
On 05/24/2016 11:13 AM, Dean Troyer wrote:
> On Tue, May 24, 2016 at 8:20 AM, Flavio Percoco wrote:
>
> So, just to make sure I'm making myself clear, I believe we should
> go with
> option #2 in Thierry's comment from May 23 11:3 on this[0] review.
> While I'm not
> entirely opposed to #1 I think #2 is better for us at this point
> in time. Here's
> a quote of Thierry's comment:
>
>   "To summarize my view on this, I think our only two options
> here are (1)
>   approve the addition of golang (with caveats on where it
> should be used
>   to try to minimize useless churn), or (2) precise the line
> between
>   'openstack projects' and 'dependencies of openstack
> projects' in a way
>   that makes it obvious that components requiring such
> optimization as to
>   require golang (or any other such language) should be
> developed as
>   dependencies"
>
> My main motivation is that I still believe option #1 will have a
> negative impact
> on the community and, perhaps more importantly, I don't think
> it'll help
> reaching the goal we've been talking about in this thread. Many
> people have been
> asking for focus and I think #2 will do that, whereas #1 will open
> the doors to
> a different set of problems and complexities that won't help with
> keeping the focus.
>
>
> Option #2 without the followup of actually evaluating and removing
> things that do not fit is really Option #3, do nothing. Which is what
> I am afraid will happen.  No renewed focus, no growth, no goal.
>
> On the language front, since we want focus, the exiting decisions re
> languages should also be part of that re-evaluation for focus.  It
> sure feels like JavaScript is in exactly the same boat as folks fear
> Golang will be here (a special case, domain-specific, division of
> community (ask Horizon devs)).  And Bash, well, that isn't even a
> language.

Just $0.02 - if you want to support a language, then it would seem like
having a full SDK for that language would be a first step, so that people
inside and outside the community can use the language in a supported
manner. Without an SDK, it seems like everyone will just reinvent the
wheel. An SDK would also seem to further the goal of using the language as
the community intends - whether for services, clients, or UI - since the
SDK would be targeted appropriately. If there is no SDK, then special
casing would seem to be the proper approach.

Again, $0.02

Ben
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] release version numbers: let's use semvers

2016-05-24 Thread Roman Prykhodchenko
The only thing I would like to mention here is that the scripts for making
automatic releases on PyPI using OpenStack Infra won't work if the version is
not formatted according to semver.
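(The mismatch Igor describes below is easy to demonstrate -- a quick
sketch assuming the 'packaging' and 'semver' packages are installed:)

    # Quick demonstration: a two-component version is valid PEP 440
    # but not valid semver (which requires MAJOR.MINOR.PATCH).
    from packaging.version import Version
    import semver

    print(Version('9.0'))      # fine: PEP 440 allows two components
    try:
        semver.parse('9.0')    # semver rejects two components
    except ValueError as exc:
        print('not semver: %s' % exc)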

- romcheg

> 24 трав. 2016 р. о 14:34 Igor Kalnitsky  написав(ла):
> 
> Hey Zigo,
> 
> In the Python community there's PEP 440 [1], which defines a versioning scheme.
> The thing you should know is that the PEP __is not__ compatible with semver, and
> it's totally fine to have a two-component version.
> 
> So I don't think we should force version changes from a two-component to a
> three-component scheme, since it won't be compatible with semver anyway.
> 
> Thanks,
> Igor
> 
> [1]: https://www.python.org/dev/peps/pep-0440/
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Dan Smith
> I like the idea of checking that the md5 matches before each boot, as it
> mirrors the check we do after downloading from glance. It's possible
> that's very unlikely to spot anything that shouldn't already be worried
> about by something else. It may just be my love of symmetry that makes
> me like that idea?

IMHO, checking this at boot after we've already checked it on download
is not very useful. It supposes that the attacker was kind enough to
visit our system before an instance was booted and not after. If I have
rooted the system, it's far easier for me to show up after a bunch of
instances are booted and modify the base images (or even better, the
instance images themselves which are hard to validate from the host side).

I would also point out that if I'm going to root a compute node, the
first thing I'm going to do is disable the feature in nova-compute or in
some other way cripple it so it can't do its thing.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #82

2016-05-24 Thread Emilien Macchi
On Mon, May 23, 2016 at 1:24 PM, Emilien Macchi  wrote:
> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting-4.
>
> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160524
>
> Feel free to add more topics, and any outstanding bug and patch.
>
> See you tomorrow!

Our notes: 
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-05-24-15.00.html

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Corey Bryant
On Tue, May 24, 2016 at 11:11 AM, Morgan Fainberg  wrote:

>
>
> On Tue, May 24, 2016 at 5:53 AM, Corey Bryant 
> wrote:
>
>> Hi All,
>>
>> Are there any plans to converge on one ldap client across projects?  Some
>> projects have moved to ldap3 and others are using pyldap (both are in
>> global requirements).
>>
>> The issue we're running into in Ubuntu is that we can only have one ldap
>> client in Ubuntu main, while the others will live in universe.
>>
>> --
>> Regards,
>> Corey
>>
>>
> Out of curiosity, what drives this requirement? pyldap and ldap3 do not
> overlap in namespace and can co-install just fine. This is no different
> than previously having python-ldap and ldap3.
>
> It seems a little arbitrary to say only one of these can be in main, but
> this is why i am asking.
>
>
No problem, thanks for asking.  I agree, it's no different than python-ldap
and ldap3 and it's not a co-installability issue.  This is just a policy
for Ubuntu main.  If we file a Main Inclusion Request (MIR) for a new ldap
client then we'll be asked to work on what's needed to get the other client
package out of main, which consists of patching out uses of one client in
favor of the other.

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-05-24 Thread Lucas Alvares Gomes
Hi,

> I'm working with Tien, who is the submitter of one [1] of the console specs.
> I joined the console session in Austin.
>
> In the session, we got the following consensus.
> - focus on serial console in Newton
> - use nova-serial proxy as is
>
> We also got some requirements[2] for this feature in the session.
> We have started cooperating with Akira and Yuiko who submitted another 
> similar spec[3].
> We're going to unite our specs and add solutions for the requirements ASAP.
>

Great stuff! So do we have an update on this?

I see [3] is now abandoned and a new spec was proposed recently [4].
Is [4] the result of the union of both specs?

> [1] ironic-ipmiproxy: https://review.openstack.org/#/c/296869/
> [2] https://etherpad.openstack.org/p/ironic-newton-summit-console
> [3] ironic-console-server: https://review.openstack.org/#/c/306755/

[4] https://review.openstack.org/#/c/319505

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][security] Finishing the job on threat analysis for Kolla

2016-05-24 Thread Steven Dake (stdake)
Rob and Doug,

At Summit we had 4 hours of highly productive work producing a list of "things" 
that can be "threatened".  We have about 4 or 5 common patterns where we follow 
the principle of least privilege.  On Friday of Summit we produced a list of 
all the things (in this case deployed containers).  I'm not sure who, I think 
it was Rob was working on a flow diagram for the least privileged case.  From 
there, the Kolla coresec team can produce the rest of the diagrams for 
increasing privileges.

I'd like to get that done, then move on to next steps.  Not sure what the next 
steps are, but let's cover the flow diagrams first since we know we need those.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Fox, Kevin M
OpenStack is more than the sum of its various pieces. Often features need to
cross more than one project. Cross-project work is already extremely hard
without having to change languages in between. A language change should be done
very carefully and deliberately.

Thanks,
Kevin


From: Geoff O'Callaghan
Sent: Monday, May 23, 2016 5:59:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

Surely openstack is defined by its capabilities and interfaces rather than
its internals.  Given the simplistic view that openstack is a collection of
microservices connected by well-defined APIs, does it really matter what code
is used inside that microservice (or macro service)?  If there is a
community willing to bring and support code in language ‘x’, isn’t that better
than worrying about the off chance of existing developers wishing to move
projects and not knowing the target language?  Is there a fear that we’ll end
up with a fork of nova (or others) written in Rust?
If we’re not open to evolving then we’ll die.

Just throwing in a different perspective.

Sorry about the top-post but it applies in general to the below discussion
Tks
Geoff

On 24 May 2016, at 9:28 AM, Gregory Haynes wrote:

On Mon, May 23, 2016, at 05:24 PM, Morgan Fainberg wrote:


On Mon, May 23, 2016 at 2:57 PM, Gregory Haynes wrote:
On Fri, May 20, 2016, at 07:48 AM, Thierry Carrez wrote:
> John Dickinson wrote:
> > [...]
> >> So the real question we need to answer is... where does OpenStack
> >> stop, and where does the wider open source community start ? If
> >> OpenStack is purely an "integration engine", glue code for other
> >> lower-level technologies like hypervisors, databases, or distributed
> >> block storage, then the scope is limited, Python should be plenty
> >> enough, and we don't need to fragment our community. If OpenStack is
> >> "whatever it takes to reach our mission", then yes we need to add one
> >> language to cover lower-level/native optimization, because we'll
> >> need that... and we need to accept the community cost as a
> >> consequence of that scope choice. Those are the only two options on
> >> the table.
> >>
> >> I'm actually not sure what is the best answer. But I'm convinced we,
> >> as a community, need to have a clear answer to that. We've been
> >> avoiding that clear answer until now, creating tension between the
> >> advocates of an ASF-like collection of tools and the advocates of a
> >> tighter-integrated "openstack" product. We have created silos and
> >> specialized areas as we got into the business of developing time-
> >> series databases or SDNs. As a result, it's not "one community"
> >> anymore. Should we further encourage that, or should we focus on
> >> what the core of our mission is, what we have in common, this
> >> integration engine that binds all those other open source projects
> >> into one programmable infrastructure solution ?
> >
> > You said the answer in your question. OpenStack isn't defined as an
> > integration engine[3]. The definition of OpenStack is whatever it
> > takes to fulfill our mission[4][5]. I don't mean that as a tautology.
> > I mean that we've already gone to the effort of defining OpenStack. It's
> > our mission statement. We're all about building a cloud platform upon
> > which people can run their apps ("cloud-native" or otherwise), so we
> > write the software needed to do that.
> >
> > So where does OpenStack stop and the wider community start? OpenStack
> > includes the projects needed to fulfill its mission.
>
> I'd totally agree with you if OpenStack was developed in a vacuum. But
> there is a large number of open source projects and libraries that
> OpenStack needs to fulfill its mission that are not in "OpenStack": they
> are external open source projects we depend on. Python, MySQL, libvirt,
> KVM, Ceph, OpenvSwitch, RabbitMQ... We are not asking that those should
> be included in OpenStack, and we are not NIHing replacements for those
> in OpenStack either.
>
> So it is not as clear-cut as you present it, and you can approach this
> dependency question from two directions.
>
> One is community-centric: "anything produced by our community is
> OpenStack". If we are missing a lower-level piece to achieve our mission
> and are developing it ourselves as a result, then it is OpenStack, even
> if it ends up being a message queue or a database.
>
> The other approach is product-centric: "lower-level pieces are OpenStack
> dependencies, rather than OpenStack itself". If we are missing a
> lower-level piece to achieve our mission and are developing it as a
> result, it could be developed on OpenStack infrastructure by members of
> the OpenStack community but it is not "OpenStack the product", it's an
> OpenStack *dependency*. It is not governed by the TC, it can use an

Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-24 Thread Brian Haley

On 05/24/2016 11:17 AM, Ihar Hrachyshka wrote:



On 17 May 2016, at 13:07, Ihar Hrachyshka  wrote:

Hi stable-maint-core and all,

I would like to propose Brian for neutron specific stable team.

His stats for neutron stable branches are (last 120 days):

mitaka: 19 reviews; liberty: 68 reviews (3rd place in the top); kilo: 16 
reviews.

Brian helped the project with stabilizing liberty neutron/DVR jobs, and with 
other L3 related stable matters. In his stable reviews, he shows attention to 
details.

If Brian is added to the team, I will make sure he is aware of all stable 
policy intricacies.

Side note: recently I added another person to the team (Cedric Brandilly), and 
now I realize that I haven’t followed the usual approval process. That said, 
the person also has decent stable review stats, and is aware of the policy. If 
someone has doubts about that addition to the team, please ping me and we will 
discuss how to proceed.

Ihar


OK, it’s been a whole week for this thread, and there are no objections. I added
Brian to the neutron-stable-maint gerrit group.

Welcome Brian!


Thanks Ihar, and thanks to all for the +1's!

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Jay Pipes

On 05/24/2016 06:19 AM, Thierry Carrez wrote:

Chris Dent wrote:

[...]
I don't really know. I'm firmly in the camp that OpenStack needs to
be smaller and more tightly focused if a unitary thing called OpenStack
expects to be any good. So I'm curious about and interested in
strategies for figuring out where the boundaries are.

So that, of course, leads back to the original question: Is OpenStack
supposed to be a unitary thing?


As a data point, since I heard that question rhetorically asked quite a
few times over the past year... There is an old answer to that, since a
vote of the PPB (the ancestor of our TC) from June, 2011 which was never
overruled or changed afterwards:

"OpenStack is a single product made of a lot of independent, but
cooperating, components."

The log is an interesting read:
http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-28-20.06.log.html


Hmm, blast from the past. I'm sad I didn't make it to that meeting.

I would (now at least) have voted for #2: OpenStack is "a collection of 
independent projects that work together for some level of integration 
and releases".


This is how I believe OpenStack should be seen, as I wrote on Twitter 
relatively recently:


https://twitter.com/jaypipes/status/705794815338741761
https://twitter.com/jaypipes/status/705795095262441472

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Is verification of images in the image cache necessary?

2016-05-24 Thread Sean Dague
On 05/24/2016 11:54 AM, Dan Smith wrote:
>> I like the idea of checking that the md5 matches before each boot, as it
>> mirrors the check we do after downloading from glance. It's possible
>> that's very unlikely to spot anything that shouldn't already be worried
>> about by something else. It may just be my love of symmetry that makes
>> me like that idea?
> 
> IMHO, checking this at boot after we've already checked it on download
> is not very useful. It supposes that the attacker was kind enough to
> visit our system before an instance was booted and not after. If I have
> rooted the system, it's far easier for me to show up after a bunch of
> instances are booted and modify the base images (or even better, the
> instance images themselves which are hard to validate from the host side).
> 
> I would also point out that if I'm going to root a compute node, the
> first thing I'm going to do is disable the feature in nova-compute or in
> some other way cripple it so it can't do its thing.

Right, we're way outside of an attestation chain here.

It does seem that once Nova has validated "once" that it moved the bits
from glance to "local" storage, it's job is done. Are there specific
issues that happened before that made this regular check something that
was needed?

If people are really concerned that things might get accidentally
written out from underneath them, doing a chattr +i after download makes
the base images immutable, so stray processes at least have to try
harder to change the data.
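(In Python terms that is a one-liner after the download completes -- a
sketch only, assuming Linux, e2fsprogs' chattr and sufficient privileges:)

    # Sketch: mark a freshly downloaded base image immutable so stray
    # writers at least have to clear the flag first.
    import subprocess

    def make_immutable(path):
        subprocess.check_call(['chattr', '+i', path])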

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [infra] resolving issues with Intel NFV CI

2016-05-24 Thread Znoinski, Waldemar
The issue is now resolved.

Apparently the job wasn't registered with the gearman server, as there weren't 
any slaves available in Jenkins to run it at the time of the zuul restart (some 
old nodepool nodes were left over and no new VMs were spawned).



From: Znoinski, Waldemar [mailto:waldemar.znoin...@intel.com]
Sent: Tuesday, May 24, 2016 9:31 AM
To: openstack-in...@lists.openstack.org; OpenStack Development Mailing List 
(not for usage questions) 
Subject: [openstack-dev] [nova] [infra] resolving issues with Intel NFV CI

Hi all,

There is an issue with jobs under Intel NFV CI not being registered.
The issue is currently being investigated. ThirdPartySystems Wiki was updated.

Updates to follow.


Waldemar Znoinski


--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread Doug Hellmann
Excerpts from Markus Zoeller's message of 2016-05-24 11:00:35 +0200:
> On 24.05.2016 09:34, Duncan Thomas wrote:
> > Cinder bugs list was far more manageable once this had been done.
> > 
> > Is it worth sharing the tool for this? I realise it's fairly trivial to
> > write one, but some standardisation on the comment format etc seems
> > valuable, particularly for Q/A folks who work between different projects.
> 
> A first draft (without the actual expiring) is at [1]. I'm going to
> finish it this week. If there is a place in an OpenStack repo, just give
> me a pointer and I'll push a change.
> 
> > On 23 May 2016 at 14:02, Markus Zoeller  wrote:
> > 
> >> TL;DR: Automatic closing of 185 bug reports which are older than 18
> >> months in the week R-13. Skipping specific bug reports is possible. A
> >> bug report comment explains the reasons.
> >> [...]
> 
> References:
> [1]
> https://github.com/markuszoeller/openstack/blob/master/scripts/launchpad/expire_old_bug_reports.py
> 

Feel free to submit that to the openstack-infra/release-tools repo. We
have some other tools in that repo for managing launchpad bugs.

Doug
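For anyone wanting to adapt the approach, a minimal sketch of such an expiry
pass with launchpadlib follows; the project name, cutoff, and comment text are
illustrative, and the canonical script is the one linked in [1] above:

    # Hedged sketch of an automated bug-expiry pass using launchpadlib.
    # Illustrative only -- see the script referenced in [1] for the real one.
    from datetime import datetime, timedelta, timezone

    from launchpadlib.launchpad import Launchpad

    COMMENT = ("This is an automated cleanup. This bug report got closed "
               "because it is older than 18 months and there is no open "
               "code change to fix this.")

    lp = Launchpad.login_with("expire-old-bugs", "production")
    project = lp.projects["nova"]
    cutoff = datetime.now(timezone.utc) - timedelta(days=18 * 30)

    for task in project.searchTasks(status=["New", "Confirmed", "Triaged"]):
        if task.assignee is not None or task.date_created > cutoff:
            continue  # assigned or recent enough: leave it alone
        bug = task.bug
        # Honor the "CONFIRMED FOR: <release>" opt-out from the proposal.
        if any("CONFIRMED FOR:" in (m.content or "") for m in bug.messages):
            continue
        task.status = "Won't Fix"
        task.importance = "Undecided"
        task.lp_save()
        bug.newMessage(content=COMMENT)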

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Log spool in the context

2016-05-24 Thread Doug Hellmann
Excerpts from Alexis Lee's message of 2016-05-24 09:34:36 +0100:
> Hi,
> 
> I have a spec: https://review.openstack.org/227766
> and implementation: https://review.openstack.org/316162
> for adding a spooling logger to oslo.log. Neither is merged yet, reviews
> welcome.
> 
> Looking at how I'd actually integrate this into Nova, most classes do:
> 
> LOG = logging.getLogger(__name__)
> 
> which is the recommended means of getting a logger. I need to get
> certain code paths to use a different logger (if spooling is turned on).
> This means I need to pass it around. If I modify method signatures I'm
> bound to break back-compat for something.
> 
> Option 1: use a metaclass to register each SpoolManager as a singleton,
> IE every call to SpoolManager('api') will return the same manager. I can
> then do something like:
> 
> log = LOG
> if CONF.spool_api:
> log = SpoolManager('api').get_spool(context.request_id)
> 
> in every method.
> 
> Option 2: Put the logger on the context. We're already passing this
> everywhere so it'd be awful convenient.
> 
> log = context.log or LOG
> 
> Option 3: ???
> 
> I like option 2, any objections to extending oslo.context like this?
> 
> 
> Alexis (lxsli)

Do you need more than one? The spec talks about different types of
requests having different SpoolManagers (api and scheduler are the 2
examples given).

What happens when the context is serialized and sent across an RPC call?
Is there some representation of the logger that the messaging code can
use on the other side to reconstruct the logger?

Doug
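For concreteness, a minimal sketch of what option 1's per-name singleton could
look like (names are illustrative, not the API proposed in the reviews above):

    # Hedged sketch of option 1: one SpoolManager per name, enforced via a
    # metaclass registry. Not the interface proposed in the spec.
    import logging

    class Singleton(type):
        _instances = {}

        def __call__(cls, name):
            # Hand back the instance already registered under this name.
            key = (cls, name)
            if key not in cls._instances:
                cls._instances[key] = super().__call__(name)
            return cls._instances[key]

    class SpoolManager(metaclass=Singleton):
        def __init__(self, name):
            self.name = name
            self._spools = {}

        def get_spool(self, request_id):
            # One buffered logger per request id.
            return self._spools.setdefault(
                request_id,
                logging.getLogger("spool.%s.%s" % (self.name, request_id)))

    assert SpoolManager("api") is SpoolManager("api")  # same manager each call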

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Summit evolution online town halls

2016-05-24 Thread Jonathan Bryce

Hi everyone,

You might have seen the FAQ we posted last week about the continuing work 
on evolving the format and structure of the Summits: 
http://www.openstack.org/blog/2016/05/faq-evolving-the-openstack-design-summit/


I wanted to send a reminder note out to highlight that Thierry and I will 
be hosting 2 online town halls tomorrow at 1130 and 1900 UTC to talk 
through the current thinking and answer any new questions. Links to the 
webinars are at the top of the blog post. If you have specific questions 
not covered in the FAQ that you'd like us to talk about, feel free to email 
either of us directly beforehand and we will add them to the list.


Jonathan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-24 Thread Brian Rosmaita
+1

On 5/24/16, 11:11 AM, "Chris Hoge"  wrote:

>+1
>
>> On May 23, 2016, at 8:25 PM, Mike Perez  wrote:
>> 
>>> On 18:00 May 20, Nikhil Komawar wrote:
>>> Hello all,
>>> 
>>> 
>>> I want to propose having a dedicated virtual sync next week Thursday
>>>May
>>> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
>>> Glance. We are making a few updates to the spec; so it would be good to
>>> have everyone on the same page and soon start merging those spec
>>>changes.
>>> 
>>> 
>>> Also, I would like for this sync to be cross project one so that all
>>>the
>>> different stakeholders are aware of the updates to this work even if
>>>you
>>> just want to listen in.
>>> 
>>> 
>>> Please vote with +1, 0, -1. Also, if the time doesn't work please
>>> propose 2-3 additional time slots.
>>> 
>>> 
>>> We can decide later on the tool and I will setup agenda if we have
>>> enough interest.
>> 
>> +1
>> 
>> -- 
>> Mike Perez
>> 
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Morgan Fainberg
On Tue, May 24, 2016 at 8:55 AM, Corey Bryant 
wrote:

>
>
> On Tue, May 24, 2016 at 11:11 AM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>>
>>
>> On Tue, May 24, 2016 at 5:53 AM, Corey Bryant > > wrote:
>>
>>> Hi All,
>>>
>>> Are there any plans to converge on one ldap client across projects?
>>> Some projects have moved to ldap3 and others are using pyldap (both are in
>>> global requirements).
>>>
>>> The issue we're running into in Ubuntu is that we can only have one ldap
>>> client in Ubuntu main, while the others will live in universe.
>>>
>>> --
>>> Regards,
>>> Corey
>>>
>>>
>> Out of curiosity, what drives this requirement? pyldap and ldap3 do not
>> overlap in namespace and can co-install just fine. This is no different
>> than previously having python-ldap and ldap3.
>>
>> It seems a little arbitrary to say only one of these can be in main, but
>> this is why i am asking.
>>
>>
> No problem, thanks for asking.  I agree, it's no different than
> python-ldap and ldap3 and it's not a co-installability issue.  This is just
> a policy for Ubuntu main.  If we file a Main Inclusion Request (MIR) for a
> new ldap client then we'll be asked to work on what's needed to get the
> other client package out of main, which consists of patching use of one
> client for the other.
>
>
Ah, ok sure; still sounds a little silly imho, but only so much we can do
on that front ;). So the reality is keystone is considering ldap3, but
there have been concerns about ldap3's interface compared to the relatively
tried-and-true pyldap (a clean fork+py3 support of python-ldap). Long term
we may move to ldap3. Short term, we wanted python3 support, so the drop-in
replacement for python-ldap was the clear winner (ldap3 is significantly
more work to support, and even when/if we support it there will be a period
where we support both, just in different drivers).

Basically, if we add ldap3 to keystone, we will be supporting both for a
not-insignificant time. For now we're leaning on pyldap for the foreseeable
future.
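The "drop-in" part is the key point: pyldap keeps the python-ldap module name,
so existing code runs unchanged against either. A trivial sketch (the server
URI and credentials are made up):

    # The same `import ldap` works with python-ldap and its pyldap fork,
    # which is what makes pyldap a drop-in swap. Connection details invented.
    import ldap

    conn = ldap.initialize("ldap://ldap.example.org")
    conn.simple_bind_s("cn=admin,dc=example,dc=org", "secret")
    people = conn.search_s("dc=example,dc=org", ldap.SCOPE_SUBTREE,
                           "(objectClass=person)", ["cn", "mail"])
    conn.unbind_s()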


> --
> Regards,
> Corey
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fw: [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-24 Thread Catherine Cuong Diep

+1

- Catherine Diep
----- Forwarded by Catherine Cuong Diep/San Jose/IBM on 05/24/2016 10:31 AM -----

From:   Brian Rosmaita 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   05/24/2016 10:30 AM
Subject:Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a
virtual sync dedicated to Import Refactor May 26th



+1

On 5/24/16, 11:11 AM, "Chris Hoge"  wrote:

>+1
>
>> On May 23, 2016, at 8:25 PM, Mike Perez  wrote:
>>
>>> On 18:00 May 20, Nikhil Komawar wrote:
>>> Hello all,
>>>
>>>
>>> I want to propose having a dedicated virtual sync next week Thursday
>>>May
>>> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
>>> Glance. We are making a few updates to the spec; so it would be good to
>>> have everyone on the same page and soon start merging those spec
>>>changes.
>>>
>>>
>>> Also, I would like for this sync to be cross project one so that all
>>>the
>>> different stakeholders are aware of the updates to this work even if
>>>you
>>> just want to listen in.
>>>
>>>
>>> Please vote with +1, 0, -1. Also, if the time doesn't work please
>>> propose 2-3 additional time slots.
>>>
>>>
>>> We can decide later on the tool and I will setup agenda if we have
>>> enough interest.
>>
>> +1
>>
>> --
>> Mike Perez
>>
>>
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Corey Bryant
On Tue, May 24, 2016 at 1:23 PM, Morgan Fainberg 
wrote:

>
>
> On Tue, May 24, 2016 at 8:55 AM, Corey Bryant 
> wrote:
>
>>
>>
>> On Tue, May 24, 2016 at 11:11 AM, Morgan Fainberg <
>> morgan.fainb...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, May 24, 2016 at 5:53 AM, Corey Bryant <
>>> corey.bry...@canonical.com> wrote:
>>>
 Hi All,

 Are there any plans to converge on one ldap client across projects?
 Some projects have moved to ldap3 and others are using pyldap (both are in
 global requirements).

 The issue we're running into in Ubuntu is that we can only have one
 ldap client in Ubuntu main, while the others will live in universe.

 --
 Regards,
 Corey


>>> Out of curiosity, what drives this requirement? pyldap and ldap3 do not
>>> overlap in namespace and can co-install just fine. This is no different
>>> than previously having python-ldap and ldap3.
>>>
>>> It seems a little arbitrary to say only one of these can be in main, but
>>> this is why i am asking.
>>>
>>>
>> No problem, thanks for asking.  I agree, it's no different than
>> python-ldap and ldap3 and it's not a co-installability issue.  This is just
>> a policy for Ubuntu main.  If we file a Main Inclusion Request (MIR) for a
>> new ldap client then we'll be asked to work on what's needed to get the
>> other client package out of main, which consists of patching use of one
>> client for the other.
>>
>>
> Ah, ok sure; still sounds a little silly imho, but only so much we can do
> on that front ;). So the reality is keystone is
>

Everything in main is fully supported, so limiting those efforts to a
single client makes sense.


> considering ldap3, but there have been concerns about ldap3's interface
> compared to the relatively tried-and-true pyldap (a clean fork+py3 support
> of python-ldap). Long term we may move to ldap3. Short term, we wanted
> python3 support, so the drop-in replacement for python-ldap was the clear
> winner (ldap3 is significantly more work to support, and even when/if we
> support it there will be a period where we support both, just in different
> drivers).
>

I like the approach of having different drivers, at least for a transition
period.  That would be very useful from a distro perspective.


>
> Basically, if we add ldap3 to keystone, we will be supporting both for a
> not-insignificant time. For now we're leaning on pyldap for the foreseeable
> future.
>
>
>>
Thanks. Appreciate the information!

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: keystone federation user story

2016-05-24 Thread Alexander Makarov
Colleagues,

here is an actual use case for shadow users assignments, let's discuss
possible solutions: all suggestions are appreciated.

-- Forwarded message --
From: Andrey Grebennikov 
Date: Tue, May 24, 2016 at 9:43 AM
Subject: keystone federation user story
To: Alexander Makarov 


Main production usecase:
As a system administrator I need to create assignments for federated users
into the projects when the user has not authenticated for the first time.

Two different approaches.
1. A user has to be assigned directly into the project with the role Role1.
Since shadow users were implemented, the Keystone database has a record of
the user once the federated user authenticates for the first time. When that
happens, the user gets an unscoped token and Keystone registers the user in
the database with a generated ID (the result of hashing the name and the
domain). At this point the user cannot get a scoped token yet, since the user
has not been assigned to any project.
Nonetheless there was a bug https://bugs.launchpad.net/keystone/+bug/1313956
which was abandoned, and the reporter says that currently it is possible to
assign a role in a project to a non-existing user (API only, no CLI). That
doesn't help much though, since it is hardly possible to predict the ID of
a user that doesn't exist yet.
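To illustrate the prediction problem: if the shadow-user ID really is just a
hash of the name and the domain, a client could in principle derive it up
front, along the lines of this sketch (not necessarily keystone's exact
algorithm; check the keystone source before relying on it):

    # Rough illustration only -- NOT keystone's verified hashing algorithm.
    import hashlib

    def predicted_shadow_user_id(user_name, domain_id):
        payload = ("%s%s" % (user_name, domain_id)).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    print(predicted_shadow_user_id("jdoe", "federated_domain"))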

Potential solution - allow per-user project auto-creation. This will allow
the user to get a scoped token with a pre-defined role (mentioned either
in the config or in the mapping) and execute operations right away.

Disadvantages: less control and order (we will potentially end up with
an ever-growing number of empty projects).
Benefits: the user is authorized right away.

Another potential solution - clearly document the possibility of assigning a
shadow user to a project (the client should generate the ID correctly), even
though the user has never authenticated yet.

Disadvantages: high risk of an administrator's mistake when typing the user's ID.
Benefits: the user doesn't have to execute a first dummy authentication in
order to be registered.

2. Operate with groups. The user is a member of a remote group, and we
propose assigning the groups to the projects instead of the users.
There is no concept of shadow groups yet, so it still has to be implemented.

Same problem - in order to be able to assign the group to the project, it
currently has to exist in the Keystone database.

Either it should be allowed to pre-create the project for a group (based on
some specific flags in the mappings), or it should be allowed to assign
non-existing groups to the projects.

I'd personally prefer to allow some special attribute to be specified in
either the config or the mapping which will allow project auto-creation.
For example, a user is added to the group "openstack" in the backend. In
this case the group is part of the SAML assertions (when SAML2 is used as
the protocol), and Keystone should recognize the group through the mapping.
When the user makes a login attempt, Keystone should pre-create the project
and assign a pre-defined role in it. The user gets access right away.


-- 
Andrey Grebennikov
Deployment Engineer
Mirantis Inc, Mountain View, CA



-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-24 Thread Sean Dague
The team working on live migration testing started with an experimental
job on Ubuntu 16.04 to try to use the latest and greatest libvirt +
qemu, under the assumption that a set of issues we were seeing would be
solved. The short answer is, it doesn't look like this is going to work.

We run tests on a bunch of different clouds. Those clouds expose
different cpu flags to us. These are not standard things that map to
"Haswell". It means live migration in the multinode cases can hit cpus
with different flags. So we found the requirement was to come up with a
least common denominator of cpu flags, which we call gate64, and push
that into the libvirt cpu_map.xml in devstack, and set it whenever we are
in a multinode scenario.
(https://github.com/openstack-dev/devstack/blob/master/tools/cpu_map_update.py)
 Not ideal, but with libvirt 1.2.2 it works fine.
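To make the "least common denominator" idea concrete, the computation is
essentially a set intersection over the flags each provider advertises (the
flag sets below are invented for illustration):

    # Sketch of how a gate64-style common cpu model can be derived: keep
    # only the flags that every provider's hardware advertises.
    provider_flags = {
        "cloud-a": {"fpu", "sse", "sse2", "ssse3", "cx16", "pse36"},
        "cloud-b": {"fpu", "sse", "sse2", "ssse3", "cx16"},
        "cloud-c": {"fpu", "sse", "sse2", "cx16", "aes"},
    }

    common = set.intersection(*provider_flags.values())
    print(sorted(common))  # the only flags safe to require on every cloud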

It turns out it works fine because libvirt *actually* seems to take the
data from cpu_map.xml and do a translation to what it believes qemu will
understand. On these systems apparently this turns into "-cpu
Opteron_G1,-pse36"
(http://logs.openstack.org/29/42529/24/check/gate-tempest-dsvm-multinode-full/5f504c5/logs/libvirt/qemu/instance-000b.txt.gz)

At some point between libvirt 1.2.2 and 1.3.1, this changed. Now libvirt
seems to be passing our cpu_model directly to qemu, and assumes that as
a user you will be responsible for writing all the <feature> stanzas to
add/remove flags yourself. When libvirt sends 'gate64' to qemu, this explodes,
as qemu has no idea what we are talking about.
http://logs.openstack.org/34/319934/2/experimental/gate-tempest-dsvm-multinode-live-migration/b87d689/logs/screen-n-cpu.txt.gz#_2016-05-24_15_59_12_531

Unlike libvirt, which has a text file (xml) that configures the cpus
that could exist in the world, qemu builds this in statically at compile
time:
http://git.qemu.org/?p=qemu.git;a=blob;f=target-i386/cpu.c;h=895a386d3b7a94e363ca1bb98821d3251e70c0e0;hb=HEAD#l694


So, the existing cpu_map.xml workaround for our testing situation will
no longer work.

So, we have a number of open questions:

* Have our cloud providers standardized enough that we might get away
without this custom cpu model? (Have some of them done it and only use
those for multinode?)
* Is there any way to get this feature back in libvirt to do the cpu
computation?
* Would we have to build a whole nova feature around setting libvirt xml
  to be able to test live migration in our clouds?
* Other options?
* Do we give up and go herd goats?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

2016-05-24 Thread Ian Cordasco
-Original Message-
From: Erno Kuvaja 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: May 24, 2016 at 06:06:14
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

> Hi all,
>
> Based on the not yet merged spec of categorized config options [0], some
> projects seem to have started improving the config option help texts. This
> is great, but I noticed a scary trend of clutter being added in these
> sections. Looking at individual changes it does not look that bad at all:
> in the code, 20 lines of well structured templating. Until you start
> comparing it to the example config files. Lots of this data is redundant
> with what is already generated into the example configs, and then the
> maths struck me.
>
> In Glance alone we have ~120 config options (this does not include
> glance_store nor any other dependencies we pull in for our configs, like
> Keystone auth). Those +20 lines of templating just became over 2000 lines
> of clutter in the example configs, and if all projects do that we can
> multiply the issue. I think no-one with good intentions can say that it's
> beneficial for our deployers and admins, who are already struggling with
> the configs.
>
> So I beg you, when you make these changes to the config option help fields,
> keep them short and compact. We have the configuration docs for extended
> descriptions and cutely formatted repetitive fields, but let's keep those
> out of the generated (example) config files. At least I would like to be
> able to fit more than 3 options on the screen at a time when reading
> configs.
>
> [0] https://review.openstack.org/#/c/295543/

Hey Erno,

So here's where I have to very strongly disagree with you. That spec
was caused by operator feedback, specifically for projects that
provide multiple services that may or may not have separate config
files, and which already have "short and compact" descriptions
that are not very helpful to operators.

The *example* config files will have a lot more detail in them. Last I
saw (I've stopped driving that specification) there was going to be a
way to generate config files without all of the descriptions. That
means that for operators who don't care about that can ignore it when
they generate configuration files. Maybe the functionality doesn't
work right this instant, but I do believe that's a goal and it will be
implemented.

Beyond that, I don't think example/sample configuration files should
be treated differently from documentation, nor do I think that our
documentation team couldn't make use of the improved documentation
we're adding to each option. In short, I think this effort will
benefit many different groups of people in and around OpenStack.
Simply arguing that this is going to make the sample config files have
more lines of code is not a good argument against this. Please do
reconsider.
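For concreteness, the style of change under discussion looks roughly like
this (the option, values, and wording are invented for illustration, not
taken from an actual project):

    # Hedged sketch of an expanded help text; not a real glance option.
    from oslo_config import cfg

    opts = [
        cfg.IntOpt('image_cache_max_size',
                   default=10737418240,
                   min=0,
                   help="""
    Maximum size of the image cache, in bytes.

    Once the cache grows past this threshold, the pruner removes the
    least recently used entries until the cache fits again. Set to 0
    to disable the size cap entirely.

    Possible values:
        * Any non-negative integer (bytes)

    Related options:
        * image_cache_dir
    """),
    ]

    cfg.CONF.register_opts(opts, group='cache')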

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly sub team status report

2016-05-24 Thread Loo, Ruby

Hi,

We are rambunctious to present this week's subteam report for Ironic. As usual, 
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 16 May 2016)
- Ironic: 192 bugs (+2) + 178 wishlist items (+1). 0 new, 138 in progress (+4), 
0 critical, 32 high and 24 incomplete
- Inspector: 8 bugs (-1) + 20 wishlist items. 0 new, 7 in progress (-1), 0 
critical, 1 high (-1) and 0 incomplete (-1)
- Nova bugs with Ironic tag: 17 (+1). 3 new (+2), 0 critical, 1 high (+1)

Upgrade (aka Grenade) testing (jlvillal/mgould)
===============================================
- Really awesome work this week!! We now have had a passing Grenade job (with 
about 15 unmerged patches).
- vsaienko figured out the networking issue that had been blocking us for over 
a week at the 'resource phase create' stage
- Big thanks to vsaienko and vdrok for all their work
- Now need to finish this up by getting the various patches cleaned up and 
merged into multiple projects.

Network isolation (Neutron/Ironic work) (jroll, TheJulia, devananda)
=====================================================================
- patches were split, need review but are in merge conflict :(
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:im_full-2016-05-16_04

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
- the old ramdisk jobs no longer run on git master, proceeding with removing 
the old ramdisk support

Node search API (jroll, lintan, rloo)
=
- none - may not be a new priority with new multi-compute spec

Node claims API (jroll, lintan)
===
- none - may not be a new priority with new multi-compute spec
Multiple compute hosts (jroll, devananda)
=
- wrote a new spec: https://review.openstack.org/#/c/320016/

Generic boot-from-volume (TheJulia, dtantsur, lucasagomes)
==
- none - spec requires update.

Driver composition (dtantsur)
=
- The spec https://review.openstack.org/188370 progresses well and there are a 
lot of bits requiring your comments already there:
- The high-level change description is there.
- API changes are there (unless I forgot something).
- The upgrade path is there.

Inspector (dtansur)
===
- python-ironic-inspector-client 1.7.0 released for Newton

Bifrost (TheJulia)
==
- TheJulia has been away; please ping her if any items are needed asap. She is 
aware; no issues at this time.

.

Until June (no meeting next week),
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Log spool in the context

2016-05-24 Thread Sean Dague
On 05/24/2016 01:18 PM, Doug Hellmann wrote:
> Excerpts from Alexis Lee's message of 2016-05-24 09:34:36 +0100:
>> Hi,
>>
>> I have a spec: https://review.openstack.org/227766
>> and implementation: https://review.openstack.org/316162
>> for adding a spooling logger to oslo.log. Neither is merged yet, reviews
>> welcome.
>>
>> Looking at how I'd actually integrate this into Nova, most classes do:
>>
>> LOG = logging.getLogger(__name__)
>>
>> which is the recommended means of getting a logger. I need to get
>> certain code paths to use a different logger (if spooling is turned on).
>> This means I need to pass it around. If I modify method signatures I'm
>> bound to break back-compat for something.
>>
>> Option 1: use a metaclass to register each SpoolManager as a singleton,
>> IE every call to SpoolManager('api') will return the same manager. I can
>> then do something like:
>>
>> log = LOG
>> if CONF.spool_api:
>> log = SpoolManager('api').get_spool(context.request_id)
>>
>> in every method.
>>
>> Option 2: Put the logger on the context. We're already passing this
>> everywhere so it'd be awful convenient.
>>
>> log = context.log or LOG
>>
>> Option 3: ???
>>
>> I like option 2, any objections to extending oslo.context like this?
>>
>>
>> Alexis (lxsli)
> 
> Do you need more than one? The spec talks about different types of
> requests having different SpoolManagers (api and scheduler are the 2
> examples given).
> 
> What happens when the context is serialized and sent across an RPC call?
> Is there some representation of the logger that the messaging code can
> use on the other side to reconstruct the logger?

Right, the serialization of the context makes this something I'd rather
avoid. While in the past I've argued for adding more to context, it is
sort of useful that it's mostly just a container of attributes, which is
what keeps it universal. Doug corrected my thinking there, and I thank him for
it.

I'd rather have it be more explicit.
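To see why, a small sketch of the round trip oslo.context objects make over
RPC (to_dict/from_dict are real oslo.context methods; the attached attribute
is illustrative):

    # Only what survives to_dict()/from_dict() reaches the far side of an
    # RPC call; an attached logger object silently falls off.
    from oslo_context import context

    ctx = context.RequestContext(request_id="req-123")
    ctx.log = object()  # stand-in for a spool logger

    payload = ctx.to_dict()            # what actually gets serialized
    assert "log" not in payload        # the logger never makes the trip

    remote = context.RequestContext.from_dict(payload)
    print(getattr(remote, "log", None))  # None: must be rebuilt remotely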

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] usage of ironic-lib

2016-05-24 Thread Loo, Ruby
Thanks for the feedback everyone. Lucas has submitted a patch to ironic-lib's 
README to clarify this: https://review.openstack.org/#/c/319251/.

--ruby


On 2016-05-20, 8:46 AM, "Jim Rollenhagen"  wrote:

>On Thu, May 19, 2016 at 01:21:35PM -0700, Devananda van der Veen wrote:
>> On 05/16/2016 07:14 AM, Lucas Alvares Gomes wrote:
>> > On Mon, May 16, 2016 at 2:57 PM, Loo, Ruby  wrote:
>> >> Hi,
>> >>
>> >> A patch to ironic-lib made me wonder about what is our supported usage of
>> >> ironic-lib. Or even the intent/scope of it. This patch changes a method,
>> >> a 'bootable' parameter is removed and a 'boot_flag' parameter is added [1].
>> >>
>> >> If this library/method is used by some out-of-tree thing (or even some
>> >> in-tree but outside of ironic), this will be a breaking change. If this
>> >> library is meant to be internal to ironic program itself, and to e.g. only
>> >> be used by ironic and IPA, then that is different. I was under the
>> >> impression that it was a library and meant to be used by whatever, no
>> >> restrictions on what that whatever was. It would be WAY easier if we 
>> >> limited
>> >> this for usage by only a few specified projects.
>> >>
>> >> What do people think?
>> >>
>> > 
>> > I still believe that the ironic-lib project was designed to share code
>> > between the Ironic projects _only_. Otherwise, if it the code was
>> > supposed to be shared across multiple projects we should have put it
>> > in oslo instead.
>> 
>> I agree, and don't see a compelling reason, today, for anyone to do the work 
>> to
>> make ironic-lib into a stable library. So...
>> 
>> I think we should keep ironic-lib where it is (in ironic, not oslo) and keep 
>> the
>> scope we intended (only for use within the Ironic project group [1]).
>> 
>> We should more clearly signal that intent within the library (eg, in the 
>> README)
>> and the project description (eg. on PyPI).
>> 
>> [1]
>> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L1915
>
>+1, let's not put extra burden on ourselves at this time.
>
>// jim
>
>> 
>> 
>> my 2c,
>> Devananda
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-24 Thread Morgan Fainberg
I want to welcome Rodrigo Duarte (rodrigods) to the keystone core team.
Rodrigo has been a consistent contributor to keystone and has been
instrumental in the federation implementations. Over the last cycle he has
shown an understanding of the code base and contributed quality reviews.

I am super happy (as proxy for Steve) to welcome Rodrigo to the Keystone
Core team.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Ian Cordasco
-Original Message-
From: Joshua Harlow 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 23, 2016 at 15:23:32
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

> Sean Dague wrote:
> > On 05/23/2016 03:34 PM, Gregory Haynes wrote:
> >> On Mon, May 23, 2016, at 11:48 AM, Doug Hellmann wrote:
> >>> Excerpts from Chris Dent's message of 2016-05-23 17:07:36 +0100:
>  On Mon, 23 May 2016, Doug Hellmann wrote:
> > Excerpts from Chris Dent's message of 2016-05-20 14:16:15 +0100:
> >> I don't think language does (or should) have anything to do with it.
> >>
> >> The question is whether or not the tool (whether service or
> >> dependent library) is useful to and usable outside the openstack-stack.
> >> For example gnocchi is useful to openstack but you can use it with 
> >> other
> >> stuff, therefore _not_ openstack. More controversially: swift can be
> >> usefully used all by its lonesome: _not_ openstack.
> >> Making a tool which is useful outside of the OpenStack context just
> >> seems like good software engineering - it seems odd that we would try
> >> and ensure our tools do not fit this description. Fortunately, many (or
> >> even most) of the tools we create *are* useful outside of the OpenStack
> >> world - pbr, git-review, diskimage-builder, (I hope) many of the oslo
> >> libraries. This is really a question of defining useful interfaces more
> >> than anything else, not a statement of whether a tool is part of our
> >> community.
> >
> > Only if you are willing to pay the complexity and debt cost of having
> > optional backends all over the place.
>  
> We seem to do this quite well even without building tools/projects that
> work outside of openstack ;)
>  
> Or are you saying that using such library/project(s) you have to accept
> that there will be many optional backends that you will likely not/never
> use that exist in said library/project (and thus are akin to dead weight)?
>  
> >
> > For instance, I think we're well beyond that point that Keystone being
> > optional should be a thing anywhere (and it is a thing in a number of
> > places). Keystone should be our auth system, all projects 100% depend on
> > it, and if you have different site needs, put that into a Keystone backend.
> >
> > Most of the oslo libraries require other oslo libraries, which is fine.
> > They aren't trying to solve the general purpose case of logging or
> > configuration or db access. They are trying to solve a specific set of
> > patterns that are applicable to OpenStack projects.
>  
> I just took a quick stab at annotating which ones (I think are) useable
> outside of openstack (without say bringing in the configuration
> ideology/pattern that oslo.config adds) and made the following:
>  
> automaton (useable)
> cliff (useable)
> cookiecutter (useable)

I'm catching up on this thread, but cookiecutter most certainly is *NOT* an 
OpenStack project: https://pypi.io/project/cookiecutter/

It was created by Audrey Roy Greenfeld.

> debtcollector (useable)
> futurist (useable)
> osprofiler (useable?)
> oslo.cache
> oslo.concurrency
> oslo.context
> oslo.config
> oslo-cookiecutter
> oslo.db
> oslo.i18n
> oslo.log
> oslo.messaging
> oslo.middleware
> oslo.policy (useable?)
> oslo.privsep (useable?)
> oslo.reports
> oslo.rootwrap
> oslo.serialization (useable)
> oslo.service
> oslosphinx
> oslotest (useable)
> oslo.utils (useable)
> oslo.versionedobjects
> oslo.vmware
> hacking (useable)
> pbr (useable)
> pyCADF (useable?)
> stevedore (useable)
> taskflow (useable)
> tooz (useable)
>  
> So out of 33 that's about half (~17) that I think are useable outside
> without too many patterns/ideologies being forced on non-openstack folks
> (if your external project accepts the pattern of oslo.config then the
> number increases).
>  
> >
> > -Sean
> >
>  
> __  
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  

--  
Ian Cordasco
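To make the "useable outside" claim concrete for at least one entry on that
list, here is a hedged sketch using stevedore with no other OpenStack
dependency (the namespace and plugin name are invented; you would register
them as setuptools entry points in your own app):

    # stevedore only needs setuptools entry points, so it works fine
    # outside OpenStack. Names below are invented for illustration.
    from stevedore import driver

    mgr = driver.DriverManager(
        namespace="myapp.formatters",  # entry-point group your app defines
        name="json",                   # which registered plugin to load
        invoke_on_load=True,
    )
    print(mgr.driver)  # the instantiated plugin, no oslo.config required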


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Log spool in the context

2016-05-24 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-05-24 14:16:13 -0400:
> On 05/24/2016 01:18 PM, Doug Hellmann wrote:
> > Excerpts from Alexis Lee's message of 2016-05-24 09:34:36 +0100:
> >> Hi,
> >>
> >> I have a spec: https://review.openstack.org/227766
> >> and implementation: https://review.openstack.org/316162
> >> for adding a spooling logger to oslo.log. Neither is merged yet, reviews
> >> welcome.
> >>
> >> Looking at how I'd actually integrate this into Nova, most classes do:
> >>
> >> LOG = logging.getLogger(__name__)
> >>
> >> which is the recommended means of getting a logger. I need to get
> >> certain code paths to use a different logger (if spooling is turned on).
> >> This means I need to pass it around. If I modify method signatures I'm
> >> bound to break back-compat for something.
> >>
> >> Option 1: use a metaclass to register each SpoolManager as a singleton,
> >> IE every call to SpoolManager('api') will return the same manager. I can
> >> then do something like:
> >>
> >> log = LOG
> >> if CONF.spool_api:
> >> log = SpoolManager('api').get_spool(context.request_id)
> >>
> >> in every method.
> >>
> >> Option 2: Put the logger on the context. We're already passing this
> >> everywhere so it'd be awful convenient.
> >>
> >> log = context.log or LOG
> >>
> >> Option 3: ???
> >>
> >> I like option 2, any objections to extending oslo.context like this?
> >>
> >>
> >> Alexis (lxsli)
> > 
> > Do you need more than one? The spec talks about different types of
> > requests having different SpoolManagers (api and scheduler are the 2
> > examples given).
> > 
> > What happens when the context is serialized and sent across an RPC call?
> > Is there some representation of the logger that the messaging code can
> > use on the other side to reconstruct the logger?
> 
> Right, the serialization of the context makes this something I'd rather
> avoid. While in the past I've argued for adding more to context, it is
> sort of useful that it's mostly just a container of attributes for being
> a universal thing. Doug corrected my thinking there, and I thank him for
> it.
> 
> I'd rather have it be more explicit.
> 
> -Sean
> 

Rather than forcing SpoolManager to be a singleton, maybe the thing
to do is build some functions for managing a singleton instance (or
one per type or whatever), and making that API convenient enough
that using the spool logger doesn't require adding a bunch of logic
and import_opt() calls all over the place.  Since it looks like the
convenience function would require looking at a config option owned
by the application, it probably shouldn't live in oslo.log, but
putting it in a utility module in nova might make sense.

Doug
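A minimal sketch of that utility-module shape (all names invented; this is
not a merged oslo.log or nova interface):

    # Hedged sketch: hide the "spool or plain logger" decision behind one
    # helper so call sites stay one-liners. Names are illustrative.
    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.BoolOpt("spool_api", default=False)])

    _spools = {}  # request_id -> buffering logger


    def get_logger(name, context):
        """Per-request spool logger when enabled, else the module logger."""
        if not CONF.spool_api:
            return logging.getLogger(name)
        return _spools.setdefault(
            context.request_id,
            logging.getLogger("%s.spool.%s" % (name, context.request_id)))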

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Joshua Harlow

Ok, probably should be chopped out of the oslo lib list then ;)

I'll do that, since it really isn't the right fit for that list :)


Ian Cordasco wrote:

-Original Message-
From: Joshua Harlow
Reply: OpenStack Development Mailing List (not for usage 
questions)
Date: May 23, 2016 at 15:23:32
To: OpenStack Development Mailing List (not for usage 
questions)
Subject:  Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"


Sean Dague wrote:

On 05/23/2016 03:34 PM, Gregory Haynes wrote:

On Mon, May 23, 2016, at 11:48 AM, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2016-05-23 17:07:36 +0100:

On Mon, 23 May 2016, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2016-05-20 14:16:15 +0100:

I don't think language does (or should) have anything to do with it.

The question is whether or not the tool (whether service or
dependent library) is useful to and usable outside the openstack-stack.
For example gnocchi is useful to openstack but you can use it with other
stuff, therefore _not_ openstack. More controversially: swift can be
usefully used all by its lonesome: _not_ openstack.

Making a tool which is useful outside of the OpenStack context just
seems like good software engineering - it seems odd that we would try
and ensure our tools do not fit this description. Fortunately, many (or
even most) of the tools we create *are* useful outside of the OpenStack
world - pbr, git-review, diskimage-builder, (I hope) many of the oslo
libraries. This is really a question of defining useful interfaces more
than anything else, not a statement of whether a tool is part of our
community.

Only if you are willing to pay the complexity and debt cost of having
optional backends all over the place.


We seem to do this quite well even without building tools/projects that
work outside of openstack ;)

Or are you saying that using such library/project(s) you have to accept
that there will be many optional backends that you will likely not/never
use that exist in said library/project (and thus are akin to dead weight)?


For instance, I think we're well beyond that point that Keystone being
optional should be a thing anywhere (and it is a thing in a number of
places). Keystone should be our auth system, all projects 100% depend on
it, and if you have different site needs, put that into a Keystone backend.

Most of the oslo libraries require other oslo libraries, which is fine.
They aren't trying to solve the general purpose case of logging or
configuration or db access. They are trying to solve a specific set of
patterns that are applicable to OpenStack projects.


I just took a quick stab at annotating which ones (I think are) useable
outside of openstack (without say bringing in the configuration
ideology/pattern that oslo.config adds) and made the following:

automaton (useable)
cliff (useable)
cookiecutter (useable)


I'm catching up on this thread, but cookiecutter most certainly is *NOT* an 
OpenStack project: https://pypi.io/project/cookiecutter/

It was created by Audrey Roy Greenfeld.


debtcollector (useable)
futurist (useable)
osprofiler (useable?)
oslo.cache
oslo.concurrency
oslo.context
oslo.config
oslo-cookiecutter
oslo.db
oslo.i18n
oslo.log
oslo.messaging
oslo.middleware
oslo.policy (useable?)
oslo.privsep (useable?)
oslo.reports
oslo.rootwrap
oslo.serialization (useable)
oslo.service
oslosphinx
oslotest (useable)
oslo.utils (useable)
oslo.versionedobjects
oslo.vmware
hacking (useable)
pbr (useable)
pyCADF (useable?)
stevedore (useable)
taskflow (useable)
tooz (useable)

So out of 33 that's about half (~17) that I think are useable outside
without too many patterns/ideologies being forced on non-openstack folks
(if your external project accepts the pattern of oslo.config then the
number increases).


-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Ian Cordasco



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Mike Perez
On 12:24 May 24, Thierry Carrez wrote:
> Morgan Fainberg wrote:
> >[...]  If we are accepting golang, I want it to be clearly
> >documented that the expectation is it is used exclusively where there is
> >a demonstrable case (such as with swift) and not a carte blanche to use
> >it wherever-you-please.
> >
> >I want this to be a social contract looked at and enforced by the
> >community, not special permissions that are granted by the TC (I don't
> >want the TC to need to step in an approve every single use case of
> >golang, or javascript ...). It's bottlenecking back to the TC for
> >special permissions or inclusion (see reasons for the dissolution of the
> >"integrated release").
> >
> >This isn't strictly an all or nothing case, this is a "how would we
> >enforce this?" type deal. Lean on infra to enforce that only projects
> >with the golang-is-ok-here tag are allowed to use it? I don't want
> >people to write their APIs in javascript (and node.js) nor in golang. I
> >would like to see most of the work continue with python as the primary
> >language. I just think it's unreasonable to lock tools behind a gate
> >that is stronger than the social / community contract (and outlined in
> >the resolution including X language).
> 
> +1
> 
> I'd prefer if we didn't have to special-case anyone, and we could come up
> with general rules that every OpenStack project follows. Any other solution
> is an administrative nightmare and a source of tension between projects (why
> are they special and not me).

I'm in agreement that I don't want to see the TC enforcing this. In fact as
Thierry has said, lets not special case anyone.

As soon as a special case is accepted, as notoriously happens, people are going
to go in a corner and rewrite things in Go. They will be upset later for not
communicating well on their intentions upfront, and the TC or a few strongly
opinionated folks in the community are going to be made the bad people just
about every time.

Community enforcement or not, I predict this will get out of hand, and it's
going to create more community divide regardless.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Ian Cordasco
-Original Message-
From: Jay Pipes 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 24, 2016 at 11:35:42
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

> On 05/24/2016 06:19 AM, Thierry Carrez wrote:
> > Chris Dent wrote:
> >> [...]
> >> I don't really know. I'm firmly in the camp that OpenStack needs to
> >> be smaller and more tightly focused if a unitary thing called OpenStack
> >> expects to be any good. So I'm curious about and interested in
> >> strategies for figuring out where the boundaries are.
> >>
> >> So that, of course, leads back to the original question: Is OpenStack
> >> supposed to be a unitary thing?
> >
> > As a data point, since I heard that question rhetorically asked quite a
> > few times over the past year... There is an old answer to that, since a
> > vote of the PPB (the ancestor of our TC) from June, 2011 which was never
> > overruled or changed afterwards:
> >
> > "OpenStack is a single product made of a lot of independent, but
> > cooperating, components."
> >
> > The log is an interesting read:
> > http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-28-20.06.log.html
> >   
>  
> Hmm, blast from the past. I'm sad I didn't make it to that meeting.
>  
> I would (now at least) have voted for #2: OpenStack is "a collection of
> independent projects that work together for some level of integration
> and releases".
>  
> This is how I believe OpenStack should be seen, as I wrote on Twitter
> relatively recently:
>  
> https://twitter.com/jaypipes/status/705794815338741761
> https://twitter.com/jaypipes/status/705795095262441472

I'm honestly in the same boat as Chris. And I've constantly heard both. I also 
frankly am not sure I agree with the idea that OpenStack is one product. I 
think more along the lines of the way DefCore specifies OpenStack Compute as a 
Product, etc. I feel like if every project contributed to the OpenStack 
product, we might have a better adoption rate and a better knowledge base for 
how to make new services scale from day 1. Instead, we are definitely a loose 
collection of projects that integrate on some levels and produce what various 
people might combine to create a cloud.

I'm also not entirely sure that the answer remains true with the different defcore 
programs. It seems like DefCore makes us define a minimum viable OpenStack 
{Compute,Object Storage} and then you can add to that. But those two things are 
"OpenStack" and everything else is a nice additional feature. There's nothing 
that makes Barbican or Magnum or Ceilometer a core part of OpenStack. Yet 
they're projects of varying popularity that different people choose whether or 
not to deploy. If OpenStack were a product, I'd think that not deploying 
Ceilometer would be the exception.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] bonding?

2016-05-24 Thread Armando M.
On 24 May 2016 at 04:51, Jim Rollenhagen  wrote:

> Hi,
>
> There's rumors floating around about Neutron having a bonding model in
> the near future. Are there any solid plans for that?
>

Who spreads these rumors :)?

To the best of my knowledge I have not seen any RFE proposed recently along
these lines.


> For context, as part of the multitenant networking work, ironic has a
> portgroup concept proposed, where operators can configure bonding for
> NICs in a baremetal machine. There are ML2 drivers that support this
> model and will configure a bond.
>
> Some folks have concerns about landing this code if Neutron is going to
> support bonding as a first-class citizen. So before we delay any
> further, I'd like to find out if there's any truth to this, and what the
> timeline for that might look like.
>
> Thanks!
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

2016-05-24 Thread Hemanth Makkapati
+1 to what Ian said.

We had an Ops session at the Austin summit on this and we didn't hear any 
concerns about the clutter from what I can recall.
Some notes from that session are here [0].
Maybe the clutter is not a problem after all? At least, not yet.

If it does become a problem down the line, we can probably move the 
descriptions around or have ways to not generate them at all, as Ian 
suggested. 
But keeping the help texts themselves short and compact won't solve anything. 
It is, in fact, contrary to what we are trying to solve here, I think.

-Hemanth

[0] https://etherpad.openstack.org/p/AUS-ops-Config-Options

From: Ian Cordasco 
Sent: Tuesday, May 24, 2016 1:03 PM
To: Erno Kuvaja; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][oslo_config] Improving Config Option Help
Texts

-Original Message-
From: Erno Kuvaja 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: May 24, 2016 at 06:06:14
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

> Hi all,
>
> Based on the not yet merged spec of categorized config options [0], some
> projects seem to have started improving the config option help texts. This
> is great, but I noticed a scary trend of clutter being added in these
> sections. Looking at individual changes it does not look that bad at all:
> in the code, 20 lines of well structured templating. Until you start
> comparing it to the example config files. Lots of this data is redundant
> with what is already generated into the example configs, and then the
> maths struck me.
>
> In Glance alone we have ~120 config options (this does not include
> glance_store nor any other dependencies we pull in for our configs, like
> Keystone auth). Those +20 lines of templating just became over 2000 lines
> of clutter in the example configs, and if all projects do that we can
> multiply the issue. I think no-one with good intentions can say that it's
> beneficial for our deployers and admins, who are already struggling with
> the configs.
>
> So I beg you, when you make these changes to the config option help fields,
> keep them short and compact. We have the configuration docs for extended
> descriptions and cutely formatted repetitive fields, but let's keep those
> out of the generated (example) config files. At least I would like to be
> able to fit more than 3 options on the screen at a time when reading
> configs.
>
> [0] https://review.openstack.org/#/c/295543/

Hey Erno,

So here's where I have to very strongly disagree with you. That spec
was caused by operator feedback, specifically for projects that
provide multiple services that may or may not have separate config
files, and which already have "short and compact" descriptions
that are not very helpful to operators.

The *example* config files will have a lot more detail in them. Last I
saw (I've stopped driving that specification) there was going to be a
way to generate config files without all of the descriptions. That
means that for operators who don't care about that can ignore it when
they generate configuration files. Maybe the functionality doesn't
work right this instant, but I do believe that's a goal and it will be
implemented.

Beyond that, I don't think example/sample configuration files should
be treated differently from documentation, nor do I think that our
documentation team couldn't make use of the improved documentation
we're adding to each option. In short, I think this effort will
benefit many different groups of people in and around OpenStack.
Simply arguing that this is going to make the sample config files have
more lines of code is not a good argument against this. Please do
reconsider.

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Fox, Kevin M
Frankly, this is one of the major negatives we've felt from the Big Tent idea...

OpenStack used to be more of a product than it is now. When there were common 
problems to be solved, there was pressure applied to solve them in a way 
everyone (OpenStack projects, OpenStack users, and OpenStack operators) would 
benefit from.

For a recent example, there has been a lot of talk about reimplementing features 
from Barbican in Magnum, Keystone, etc., and not wanting to depend on Barbican. 
In the pre-tent days, we'd just fix Barbican to do the things we all need it 
to, and then start depending on it. Then, everyone could count on a solid 
secret store being there, since everyone would deploy it because they would 
want at least one thing that depends on it, say LBaaS or COE orchestration, 
and then adding more projects that depend on it would be easier for the 
operator. Instead I see a lot of trying to implement a hack in each project to 
avoid depending on it, solving the problem for one project but for no one else.

It's a vicious chicken-and-egg cycle we have created. Projects don't want to 
depend on things that aren't commonly deployed. Operators don't want to deploy 
something if there's no direct reason to, or nothing they care about depends 
on it. So our projects are encouraged to do bad things now, and I think we're 
all suffering for it.

Cross-project work became much harder after the big tent because there was less 
reason to play nice with each other. OpenStack projects are already fairly 
insular, and this has made it worse. Opening up additional languages makes it 
yet harder to work on the common stuff. I'm not against picking one additional 
language for performance-critical stuff, but it should be carefully considered, 
and the need for it should be carefully reasoned about.

Thanks,
Kevin


From: Ian Cordasco [sigmaviru...@gmail.com]
Sent: Tuesday, May 24, 2016 12:12 PM
To: Jay Pipes; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

-Original Message-
From: Jay Pipes 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 24, 2016 at 11:35:42
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

> On 05/24/2016 06:19 AM, Thierry Carrez wrote:
> > Chris Dent wrote:
> >> [...]
> >> I don't really know. I'm firmly in the camp that OpenStack needs to
> >> be smaller and more tightly focused if a unitary thing called OpenStack
> >> expects to be any good. So I'm curious about and interested in
> >> strategies for figuring out where the boundaries are.
> >>
> >> So that, of course, leads back to the original question: Is OpenStack
> >> supposed to be a unitary thing?
> >
> > As a data point, since I heard that question rhetorically asked quite a
> > few times over the past year... There is an old answer to that, since a
> > vote of the PPB (the ancestor of our TC) from June, 2011 which was never
> > overruled or changed afterwards:
> >
> > "OpenStack is a single product made of a lot of independent, but
> > cooperating, components."
> >
> > The log is an interesting read:
> > http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-28-20.06.log.html
>
> Hmm, blast from the past. I'm sad I didn't make it to that meeting.
>
> I would (now at least) have voted for #2: OpenStack is "a collection of
> independent projects that work together for some level of integration
> and releases".
>
> This is how I believe OpenStack should be seen, as I wrote on Twitter
> relatively recently:
>
> https://twitter.com/jaypipes/status/705794815338741761
> https://twitter.com/jaypipes/status/705795095262441472

I'm honestly in the same boat as Chris. And I've constantly heard both. I also 
frankly am not sure I agree with the idea that OpenStack is one product. I 
think more along the lines of the way DefCore specifies OpenStack Compute as a 
Product, etc. I feel like if every project contributed to the OpenStack 
product, we might have a better adoption rate and a better knowledge base for 
how to make new services scale from day 1. Instead, we are definitely a loose 
collection of projects that integrate on some levels and produce what various 
people might combine to create a cloud.

I'm also not entirely sure that the answer remains true with the different 
DefCore programs. It seems like DefCore makes us define a minimum viable 
OpenStack {Compute, Object Storage} and then you can add to that. But those two 
things are "OpenStack" and everything else is a nice additional feature. 
There's nothing that makes Barbican or Magnum or Ceilometer a core part of 
OpenStack. Yet they're projects of varying popularity that different people 
choose whether or not to deploy. If OpenStack were a product, I'd think that 
not deploying Ceilometer would be the exception

Re: [openstack-dev] [Mistral][Zaqar][Ceilometer][Searchlight] Triggering Mistral workflows from Zaqar messages

2016-05-24 Thread Zane Bitter

On 22/05/16 22:38, Lingxian Kong wrote:

Hi, Zane, I think you must be interested in this:
https://review.openstack.org/#/c/308664/


Oh! Yes that is very interesting. I'm glad that other people are 
thinking about these kinds of problems also.


My blueprint has a different focus, in that it's more about allowing the 
_user_ to configure these actions. Listening to oslo_messaging 
notifications instead of to Zaqar messages limits you to operator use 
cases only. (It's also likely much more efficient - listening to a large 
number of queues simultaneously is not the use case Zaqar is optimised 
for, which is essentially why I changed my mind about whether this 
should be implemented within Mistral.)


At least two projects that I know of (Ceilometer and Searchlight) are 
looking at implementing proxying of oslo_messaging notifications into 
Zaqar queues, and when that is combined with the Zaqar notification 
trigger we'll open up this functionality to everyone - other OpenStack 
services, operators and end-users.


e.g. a user will be able to subscribe (via Ceilometer or Searchlight) to 
a notification about a particular server from Nova in a Zaqar queue and 
have that trigger a Mistral workflow that marks the server unhealthy in 
Heat and triggers a reconvergence of the stack.
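
To make that concrete, the subscription step would be a plain Zaqar v2 
call like the sketch below (the endpoint and headers are Zaqar's real 
subscriptions API; the queue name and the Mistral trigger URL are 
illustrative only, since the trigger format is exactly what the spec 
above still has to settle):

  curl -X POST http://zaqar.example.com:8888/v2/queues/server-events/subscriptions \
    -H "Content-Type: application/json" \
    -H "Client-ID: 355186cd-d1e8-4108-a3ac-a2183697232a" \
    -H "X-Auth-Token: $TOKEN" \
    -d '{"subscriber": "http://mistral.example.com/hypothetical-trigger",
         "ttl": 3600, "options": {}}'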


Unfortunately, there's always a tension between getting something done 
and in users' hands right now - which is easier to do if you stay within 
the confines of one project - and building flexible, orthogonal, 
user-pluggable abstractions, which are IMHO in the long-term best 
interests of users but generally require the co-operation, co-ordination 
and deployment of multiple projects.


It seems like we, the OpenStack community, are failing not only at 
co-ordinating these kinds of features across projects, but even at knowing 
what is being planned in other projects. Perhaps we need some sort of 
"Autonomous Applications working group"?


cheers,
Zane.



Regards!
---
Lingxian Kong


On Fri, May 20, 2016 at 9:49 AM, Zane Bitter  wrote:

On 19/05/16 04:20, Thomas Herve wrote:


On Wed, May 18, 2016 at 8:49 PM, Zane Bitter  wrote:


I've been lobbying the Mistral developers for $SUBJECT since, basically,
forever.[1][2][3] I like to think after a couple of years I succeeded in
changing their view on it from "crazy" to merely "unrealistic".[4] In the
last few months I've had a couple of realisations though:

1) The 'pull' model I've been suggesting is the wrong one,
architecturally
speaking. It's asking Mistral to do too much to poll Zaqar queues.
2) A 'push' model is the correct architecture and it already exists in
the
form of Zaqar's Notifications, which suddenly makes this goal look very
realistic indeed.

I've posted a Zaqar spec for this here:

https://review.openstack.org/#/c/318202/



Commented. I certainly need some more time to think about it.



Thanks, I updated the spec based on your comments.


Not being super familiar with either project myself, I think this needs
close scrutiny from Mistral developers as well as Zaqar developers to
make
sure I haven't got any of the details wrong. I'd also welcome any
volunteers
interested in implementing this :)


One more long-term thing that I did *not* mention in the spec: there are
both Zaqar notifications and Mistral actions for sending email and
hitting
webhooks. These are two of the hardest things for a cloud operator to
secure. It would be highly advantageous if there were only _one_ place in
OpenStack where these were implemented. Either project would potentially
work - Zaqar notifications could call a simple, operator defined workflow
behind the scenes for email/webhook notifications; alternatively the
Mistral
email/webhook actions could drop a message on a Zaqar queue connected to
a
notification - although the former sounds easier to me. (And of course
clouds with only one of the services available could fall back to the
current plugins.) Something to think about for the future...



And you're forgetting about Aodh alarm notifications, which are
webhooks as well.



Yeah, those just need to go away :) Aodh can already post the notifications
to Zaqar instead, and hopefully we'll be able to migrate everything to that
method over time.


Unfortunately, I think none of them do a
particularly great job in terms of robustness and security.



Agree. The best you can do is still middling, and we're not there yet.


I
suggested moving away from doing webhook in Zaqar to Flavio in the
past, and I think he responded that he thought it was part of the core
functionality. It's hard to delegate such operations and create a
dependency on another project. Maybe we can start by creating some
library code.



I guess that's a start, but if I were running a large public cloud I'd
probably want to isolate this on its own network surrounded by some pretty
custom firewalls. A library doesn't really help make that easier.

cheers,
Zane.


__

Re: [openstack-dev] [ironic][neutron] bonding?

2016-05-24 Thread Sean M. Collins
The only thing I am remotely aware of that is relevant is:

https://bugs.launchpad.net/bugs/1558626

But that's really just in one agent.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-24 Thread Augustina Ragwitz
Hi Emily,

I'm the Nova Mentoring Czar and we have a Wiki page with a list of
projects that would be good for new contributors:
https://wiki.openstack.org/wiki/Nova/Mentoring

For Nova, I'd encourage potential contributors to get involved with a
specific project so that mentoring can happen organically. Interested
folks are more than welcome to reach out to me, preferably by email.

-- 
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Adrian Otto

> On May 24, 2016, at 12:09 PM, Mike Perez  wrote:
> 
> On 12:24 May 24, Thierry Carrez wrote:
>> Morgan Fainberg wrote:
>>> [...]  If we are accepting golang, I want it to be clearly
>>> documented that the expectation is it is used exclusively where there is
>>> a demonstrable case (such as with swift) and not a carte blanche to use
>>> it wherever-you-please.
>>> 
>>> I want this to be a social contract looked at and enforced by the
>>> community, not special permissions that are granted by the TC (I don't
>>> want the TC to need to step in an approve every single use case of
>>> golang, or javascript ...). It's bottlenecking back to the TC for
>>> special permissions or inclusion (see reasons for the dissolution of the
>>> "integrated release").
>>> 
>>> This isn't strictly an all or nothing case, this is a "how would we
>>> enforce this?" type deal. Lean on infra to enforce that only projects
>>> with the golang-is-ok-here tag are allowed to use it? I don't want
>>> people to write their APIs in javascript (and node.js) nor in golang. I
>>> would like to see most of the work continue with python as the primary
>>> language. I just think it's unreasonable to lock tools behind a gate
>>> that is stronger than the social / community contract (and outlined in
>>> the resolution including X language).
>> 
>> +1
>> 
>> I'd prefer if we didn't have to special-case anyone, and we could come up
>> with general rules that every OpenStack project follows. Any other solution
>> is an administrative nightmare and a source of tension between projects (why
>> are they special and not me).
> 
> I'm in agreement that I don't want to see the TC enforcing this. In fact, as
> Thierry has said, let's not special-case anyone.
> 
> As soon as a special case is accepted, as notoriously happens, people are
> going to go off in a corner and rewrite things in Go. They will be upset
> later for not communicating their intentions well upfront, and the TC or a
> few strongly opinionated folks in the community are going to be made the bad
> people just about every time.
> 
> Community-enforced or not, I predict this will get out of hand, and it's
> going to create more community divide regardless.

I remember that in 2010 our founding intent was to converge on two languages 
for OpenStack development: Python and C. We would prefer Python for things like 
control-plane API services, and when needed for performance or other reasons, 
we would use C as an alternative. To my knowledge, nothing since then was ever 
written in C. We have a clear trend of high-performance alternative solutions 
showing up in Golang. So, I suggest we go back to the original intent: we 
build things in Python as our preference, and allow teams to select a 
designated alternative when they have their own reasons to do so. I see no 
reason why that designated alternative cannot be Golang [1].

Programming styles and languages evolve over time. Otherwise we’d all still be 
using FORTRAN from 1954. OpenStack, as a community, needs to have a deliberate 
plan for how to track such evolution. Digging our heels in with a Python-only 
attitude is not progressive enough. Offering a choice of any option under the 
sun is not practical. We will strike a balance. Recognize that evolution 
requires duplication. There must be overlap (wasted effort maintaining common 
code in multiple languages) in order to allow evolution. This overlap is 
healthy. We don’t need our TC to decide when a project should be allowed to 
use a designated alternative. Set rules that allow for innovation, and then 
let projects decide on their own within such guidelines.

My proposal:

"OpenStack projects shall use Python as the preferred programming language. 
Golang may be used as an alternative if the project leadership decides it is 
justified."

Additional (non-regulatory) guidance can also be offered by the OpenStack 
community to indicate when individual projects should decide to use an 
alternative language. In the future as we notice evolution around us, we may 
add other alternatives to that list.

Thanks,

Adrian

[1] I categorically reject the previous rhetoric that casts Golang as an 
immature language that we can’t rely on. That’s FUD, plain and simple. Stop it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

2016-05-24 Thread John Garbutt
On 24 May 2016 at 19:03, Ian Cordasco  wrote:
> -Original Message-
> From: Erno Kuvaja 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: May 24, 2016 at 06:06:14
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  [openstack-dev] [all][oslo_config] Improving Config Option Help 
> Texts
>
>> Hi all,
>>
>> Based on the not-yet-merged spec of categorized config options [0], some
>> projects seem to have started improving the config option help texts. This
>> is great, but I noticed a scary trend of clutter being added in these
>> sections. Looking at individual changes it does not look that bad at all
>> in the code: ~20 lines of well-structured templating. Until you start
>> comparing it to the example config files. Much of this data is redundant
>> with what is already generated into the example configs, and then the
>> maths struck me.
>>
>> In Glance alone we have ~120 config options (this does not include
>> glance_store nor any other dependencies we pull in for our configs, like
>> Keystone auth). Those +20 lines of templating per option just became over
>> 2000 lines of clutter in the example configs, and if all projects do that
>> we multiply the issue. I think no-one with good intentions can say that
>> it's beneficial for our deployers and admins, who are already struggling
>> with the configs.
>>
>> So I beg you, when you make these changes to the config option help
>> fields, keep them short and compact. We have the Configuration Docs for
>> extended descriptions and cutely formatted repetitive fields, but let's
>> keep those out of the generated (example) config files. At least I would
>> like to be able to fit more than 3 options on the screen at a time when
>> reading configs.
>>
>> [0] https://review.openstack.org/#/c/295543/
>
> Hey Erno,
>
> So here's where I have to very strongly disagree with you. That spec
> was prompted by operator feedback, specifically for projects that
> provide multiple services that may or may not have separate config
> files, and which already have "short and compact" descriptions
> that are not very helpful to operators.

+1

The feedback at operator sessions in Manchester and Austin seemed to
back up the need for better descriptions.

More precisely, operators should not need to read the code to
understand how to use a configuration option.

Often that means descriptions get longer, but they shouldn't be too long.
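
To sketch the balance I mean in oslo.config terms (the option below is
loosely modelled on a real nova option, but treat the details as
illustrative, not as a proposal for any particular project):

  from oslo_config import cfg

  # Too terse; forces operators into the code to learn the units and
  # the meaning of special values:
  #   cfg.IntOpt('sync_power_state_interval', default=600,
  #              help='interval to sync power states')

  # Better; states what it does, the units, and what special values
  # mean, without pages of templated boilerplate:
  opt = cfg.IntOpt(
      'sync_power_state_interval',
      default=600,
      help='Interval (in seconds) between runs of the periodic task '
           'that syncs instance power states between the hypervisor '
           'and the database. Set to a negative value to disable the '
           'task.')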

> The *example* config files will have a lot more detail in them. Last I
> saw (I've stopped driving that specification) there was going to be a
> way to generate config files without all of the descriptions. That
> means that for operators who don't care about that can ignore it when
> they generate configuration files. Maybe the functionality doesn't
> work right this instant, but I do believe that's a goal and it will be
> implemented.

Different modes of the config generator should help us cater for
multiple use cases.

I am leaving that as a discussion in oslo specs for the moment.

> Beyond that, I don't think example/sample configuration files should
> be treated differently from documentation, nor do I think that our
> documentation team couldn't make use of the improved documentation
> we're adding to each option. In short, I think this effort will
> benefit many different groups of people in and around OpenStack.
> Simply arguing that this is going to make the sample config files have
> more lines of code is not a good argument against this. Please do
> reconsider.

Now I have been discussing a change in Nova's approach to reduce the
size of some of them, but that was really for different reasons:
http://lists.openstack.org/pipermail/openstack-dev/2016-May/095538.html

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-24 Thread Steven Dake (stdake)
From: Steven Dake <std...@cisco.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, May 17, 2016 at 12:00 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

Hello core reviewers,

I am proposing Mauricio (mlima on IRC) for the core review team.  He has done a 
fantastic job reviewing, appearing in the middle of the pack over the last 90 
days [1] and at #2 over the last 45 days [2].  His IRC participation is also 
fantastic, and he does a good job on technical work, including implementing 
Manila support from zero experience :) as well as code cleanup all over the 
code base and documentation.  Consider my proposal a +1 vote.

I will leave voting open for 1 week until May 24th.  Please vote +1 (approve), 
or -2 (veto), or abstain.  I will close voting early if there is a veto vote, 
or a unanimous vote is reached.

Thanks,
-steve

[1] http://stackalytics.com/report/contribution/kolla/90
[2] http://stackalytics.com/report/contribution/kolla/45

Mauricio,

Congratulations, the vote passed (with, IIRC, 10 votes).  A couple of people 
are out on vacation, so I wouldn't take offense :)

I've added you to the appropriate group in gerrit.  Ping me when you next see 
me and I'll give you a crash course on the policies for core reviewers.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage meeting on May 25 - SKIPPED

2016-05-24 Thread Afek, Ifat (Nokia - IL)
Hi,

Vitrage weekly meeting on May 25 will be skipped, as many Vitrage developers 
won't be able to attend.
We will meet next week as usual.

Thanks,
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] update congress

2016-05-24 Thread Tim Hinrichs
Hi Yue,

That version of Congress definitely doesn't have the Push driver.  The Push
driver code was implemented only in the latest release (Mitaka).

Here are the upgrade instructions.  They SHOULD work, but let us know if
you run into problems, both so we can help you and so we can correct the
instructions.

https://review.openstack.org/#/c/320652/2/README.rst
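
Once you're on Mitaka, pushing rows should look roughly like the sketch
below (written from memory, assuming the push driver exposes its table
through the v1 data-sources rows endpoint; substitute your own
datasource ID, table name, and row schema):

  curl -X PUT \
    http://127.0.0.1:1789/v1/data-sources/$DATASOURCE_ID/tables/$TABLE/rows \
    -H "Content-Type: application/json" \
    -H "X-Auth-Token: $TOKEN" \
    -d '[["alarm1", "critical"], ["alarm2", "warning"]]'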

Tim

On Tue, May 24, 2016 at 3:40 AM Yue Xin  wrote:

> Hi Tim and all,
>
> May I ask how to update the Congress version in the Tokyo hands-on-lab
> environment? When I use the command "openstack congress list version" it
> shows that Congress is a 2013 version.
>
> The reason why I want to update it is that I wrote a demo driver with which
> I want to push data into the datasource table, but when I use a 'curl -i -g
> -X PUT' command the error is "501 Not Implemented", and the same error comes
> with 'curl -X PUSH'. I am not sure whether it comes from the version of
> Congress or not (I thought maybe this Congress is too old, so it doesn't
> support pushing data).
>
> Thank you very much for your response.
>
> *Regards,*
> *Yue*
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [higgins] Continued discussion from the last team meeting

2016-05-24 Thread Hongbin Lu
Hi all,

At the last team meeting, we tried to define the scope of the Higgins project. 
In general, we agreed to focus on the following features as an initial start:

* Build a container abstraction and use docker as the first 
implementation.

* Focus on basic container operations (i.e. CRUD), and leave advanced 
operations (e.g. keeping containers alive, rolling upgrades, etc.) to users or 
other projects/services.

* Start with non-nested container use cases (e.g. containers on 
physical hosts), and revisit nested container use cases (e.g. containers on 
VMs) later.
The items below need further discussion, so I started this ML thread to discuss them.

1.   Container composition: implement a docker compose like feature

2.   Container host management: abstract container host
For #1, it seems we broadly agreed that this is a useful feature. The question 
is where this feature belongs. Some people think it belongs in other projects, 
such as Heat, and others think it belongs in Higgins, so we should implement 
it. For #2, we were mainly debating two things: where the container hosts come 
from (provisioned by Nova or provided by operators), and whether we should 
expose host management APIs to end users. Thoughts?

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-24 Thread Mauricio Lima
Thanks guys, it's a privilege to be part of the team. :)

2016-05-24 17:02 GMT-03:00 Steven Dake (stdake) :

> From: Steven Dake 
> Reply-To: "openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, May 17, 2016 at 12:00 PM
> To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer
>
> Hello core reviewers,
>
> I am proposing Mauricio (mlima on irc) for the core review team.  He has
> done a fantastic job reviewing appearing in the middle of the pack for 90
> days [1] and appearing as #2 in 45 days [2].  His IRC participation is also
> fantastic and does a good job on technical work including implementing
> Manila from zero experience :) as well as code cleanup all over the code
> base and documentation.  Consider my proposal a +1 vote.
>
> I will leave voting open for 1 week until May 24th.  Please vote +1
> (approve), or -2 (veto), or abstain.  I will close voting early if there is
> a veto vote, or a unanimous vote is reached.
>
> Thanks,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/90
> [2] http://stackalytics.com/report/contribution/kolla/45
>
>
> Mauricio,
>
> Congratulations, the vote passed with (iirc 10 votes).  A couple people
> are out on vacation, so I wouldn't take offense :)
>
> I've added you to the appropriate group in gerrit.  Ping me when you next
> see me and I'll give you a crash course on the policies for core reviewers.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron][cloud-init] Questions around 'network_data.json'

2016-05-24 Thread Joshua Harlow

Hi there all,

I am working through code/refactoring in cloud-init to enable 
translation of the network_data.json file[1] provided by OpenStack (via 
nova via neutron?) into the equivalent sysconfig files (Ubuntu files 
should already *mostly* work and systemd files are underway as well).


Code for this sysconfig (WIP) is @ 
https://gist.github.com/harlowja/d63a36de0b405d83be9bd3222a5454a7 
(requires base branch 
https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor 
which did some tweaks to make things easier to work with).


Sadly there have been some questions around certain format conversions 
and what they mean when certain formats are given, for example:


{
  "services": [
{
  "type": "dns",
  "address": "172.19.0.12"
}
  ],
  "networks": [
{
  "network_id": "dacd568d-5be6-4786-91fe-750c374b78b4",
  "type": "ipv4",
  "netmask": "255.255.252.0",
  "link": "tap1a81968a-79",
  "routes": [
{
  "netmask": "0.0.0.0",
  "network": "0.0.0.0",
  "gateway": "172.19.3.254"
}
  ],
  "ip_address": "172.19.1.34",
  "id": "network0"
}
  ],
  "links": [
{
  "ethernet_mac_address": "fa:16:3e:ed:9a:59",
  "mtu": null,
  "type": "bridge",
  "id": "tap1a81968a-79",
  "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f"
}
  ]
}

In the cloud-init code what happens is that the links get connected to 
the networks and this then gets internally parsed, but for the bridge 
type listed here information appears to be missing about exactly what to 
bridge to (eth0, ethX, something else?).


Should the 'bridge' type just be ignored and the link treated 
as a physical device type (which will cause a different set of logic in 
cloud-init to apply), or something else?
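
For illustration, if the 'bridge' link above is treated as a physical 
device and matched to a NIC by MAC address, I'd expect the rendered 
sysconfig file to come out roughly like this (a sketch of the intended 
output, not what the WIP branch emits today; the eth0 name is an 
assumption):

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  HWADDR=fa:16:3e:ed:9a:59
  BOOTPROTO=static
  IPADDR=172.19.1.34
  NETMASK=255.255.252.0
  GATEWAY=172.19.3.254
  DNS1=172.19.0.12
  ONBOOT=yes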


Thoughts would be appreciated,

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Xvisor support

2016-05-24 Thread Michael Still
On Tue, May 24, 2016 at 11:42 PM, Muneeb Ahmad 
wrote:

> If not, can I add it's support? any ideas how can I do that?
>
> On Sat, May 21, 2016 at 10:23 PM, Muneeb Ahmad 
> wrote:
>
>> Hey guys,
>>
>> Does OpenStack support Xvisor?
>>
>
So, given I'd never heard of Xvisor, I think we can say that no one has been
thinking about supporting it.

Nova has a pluggable layer for adding hypervisors, and it's certainly
possible to do such a thing. That said, Nova is pretty hesitant these days to
add new hypervisor drivers, as they're a lot of maintenance work for the team,
especially if the original authors of a driver wander off.

A first port of call is probably to work out if you could get support added
to libvirt, and then what exposing that in the libvirt driver for Nova
would look like.
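
For the standalone-driver route, the plug-in point looks roughly like
this (a sketch; the module path, class name, and nova.conf value are
hypothetical):

  from nova.virt import driver

  class XvisorDriver(driver.ComputeDriver):
      """Hypothetical driver, selected in nova.conf via
      compute_driver = xvisor.XvisorDriver."""

      def spawn(self, context, instance, image_meta, injected_files,
                admin_password, network_info=None,
                block_device_info=None):
          # Boot the instance on Xvisor here.
          raise NotImplementedError()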

Hope this helps,
Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cloud-init] Questions around 'network_data.json'

2016-05-24 Thread Mathieu Gagné
I think there is an implementation error.
The spec mentions that link type can be "vif", "phy" or (future) "bond".
Nova passes the "raw" Neutron VIF type instead.
IMO, "bridge" should be "vif" as per the spec.
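
To illustrate, assuming I'm reading the spec right, the link entry from
the example would instead be emitted as:

{
  "ethernet_mac_address": "fa:16:3e:ed:9a:59",
  "mtu": null,
  "type": "vif",
  "id": "tap1a81968a-79",
  "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f"
}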
--
Mathieu


On Tue, May 24, 2016 at 5:20 PM, Joshua Harlow  wrote:
> Hi there all,
>
> I am working through code/refactoring in cloud-init to enable translation of
> the network_data.json file[1] provided by openstack (via nova via neutron?)
> into the equivalent sysconfig files (ubuntu files should already *mostly*
> work and systemd files are underway as well).
>
> Code for this sysconfig (WIP) is @
> https://gist.github.com/harlowja/d63a36de0b405d83be9bd3222a5454a7 (requires
> base branch
> https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor
> which did some tweaks to make things easier to work with).
>
> Sadly there have been some questions around certain format conversions and
> what they mean when certain formats are given, for example:
>
> {
>   "services": [
> {
>   "type": "dns",
>   "address": "172.19.0.12"
> }
>   ],
>   "networks": [
> {
>   "network_id": "dacd568d-5be6-4786-91fe-750c374b78b4",
>   "type": "ipv4",
>   "netmask": "255.255.252.0",
>   "link": "tap1a81968a-79",
>   "routes": [
> {
>   "netmask": "0.0.0.0",
>   "network": "0.0.0.0",
>   "gateway": "172.19.3.254"
> }
>   ],
>   "ip_address": "172.19.1.34",
>   "id": "network0"
> }
>   ],
>   "links": [
> {
>   "ethernet_mac_address": "fa:16:3e:ed:9a:59",
>   "mtu": null,
>   "type": "bridge",
>   "id": "tap1a81968a-79",
>   "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f"
> }
>   ]
> }
>
> In the cloud-init code what happens is that the links get connected to the
> networks and this then gets internally parsed, but for the bridge type
> listed here information appears to be missing about exactly what to bridge
> to (eth0, ethX, something else?).
>
> Should the 'bridge' type just be ignored and the link treated as a
> physical device type (which will cause a different set of logic in
> cloud-init to apply), or something else?
>
> Thoughts would be appreciated,
>
> [1]
> http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cloud-init] Questions around 'network_data.json'

2016-05-24 Thread Ryan Harper
On Tue, May 24, 2016 at 4:20 PM, Joshua Harlow 
wrote:

> Hi there all,
>
> I am working through code/refactoring in cloud-init to enable translation
> of the network_data.json file[1] provided by openstack (via nova via
> neutron?) into the equivalent sysconfig files (ubuntu files should already
> *mostly* work and systemd files are underway as well).
>
> Code for this sysconfig (WIP) is @
> https://gist.github.com/harlowja/d63a36de0b405d83be9bd3222a5454a7
> (requires base branch
> https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor
> which did some tweaks to make things easier to work with).
>
> Sadly there have been some questions around certain format conversions and
> what they mean when certain formats are given, for example:
>
> {
>   "services": [
> {
>   "type": "dns",
>   "address": "172.19.0.12"
> }
>   ],
>   "networks": [
> {
>   "network_id": "dacd568d-5be6-4786-91fe-750c374b78b4",
>   "type": "ipv4",
>   "netmask": "255.255.252.0",
>   "link": "tap1a81968a-79",
>   "routes": [
> {
>   "netmask": "0.0.0.0",
>   "network": "0.0.0.0",
>   "gateway": "172.19.3.254"
> }
>   ],
>   "ip_address": "172.19.1.34",
>   "id": "network0"
> }
>   ],
>   "links": [
> {
>   "ethernet_mac_address": "fa:16:3e:ed:9a:59",
>   "mtu": null,
>   "type": "bridge",
>   "id": "tap1a81968a-79",
>   "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f"
> }
>   ]
> }
>
> In the cloud-init code what happens is that the links get connected to the
> networks and this then gets internally parsed, but for the bridge type
> listed here information appears to be missing about exactly what to bridge
> to (eth0, ethX, something else?).
>
> Should the 'bridge' type just be ignored and the link treated as
> a physical device type (which will cause a different set of logic in
> cloud-init to apply), or something else?
>

Thanks Josh,

In particular, I'm hoping to clarify that the spec is meant to describe
guest network configuration (be that a baremetal instance in Ironic or a
VM).  If that holds, then I think exposing the compute node config
(LinuxBridge in this case) in the guest network config is confusing, and
instead the network_data.json should have had type: vif or type: phy.

The same holds true for OVS setups; we've seen this network_data.json:

http://paste.openstack.org/show/498749/

OVS setups, too, should emit type: vif or type: phy to indicate a guest
'physical' interface.

Thanks,
Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-24 Thread Mike Perez
On 12:54 May 24, Augustina Ragwitz wrote:
> Hi Emily,
> 
> I'm the Nova Mentoring Czar and we have a Wiki page with a list of
> projects that would be good for new contributors:
> https://wiki.openstack.org/wiki/Nova/Mentoring
> 
> For Nova, I'd encourage potential contributors to get involved with a
> specific project so that mentoring can happen organically. Interested
> folks are more than welcome to reach out to me, preferably by email.

There's an assumption here that all projects have things in place to begin
mentoring people. For some of the people we've spoken to, just reaching out on
IRC gave no answers. This effort is actually about matching people to someone
who has the knowledge and is interested in and has time for mentoring. Even if
a match can't be made right away, communication is established. First
impressions during onboarding are key.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Need helps to implement the full baremetals support

2016-05-24 Thread Hongbin Lu
Hi all,

One of the most important features that the Magnum team wants to deliver in 
Newton is full baremetal support. There is a blueprint [1] created for that, 
and the blueprint was marked as "essential" (the highest priority). Spyros is 
the owner of the blueprint and he is looking for help from other contributors. 
For now, we immediately need help fixing the existing Ironic templates 
[2][3][4] that are used to provision a Kubernetes cluster on top of baremetal 
instances. These templates used to work, but they have become outdated. We 
need help fixing those Heat templates as an initial step of the 
implementation. Contributors are expected to follow the Ironic devstack guide 
to set up the environment, then exercise those templates in Heat.
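
As a starting point, exercising a template looks something like the
sketch below (the parameter names are assumptions; check each template's
parameters section first):

  heat stack-create k8s-baremetal \
    -f magnum/templates/kubernetes/kubecluster-fedora-ironic.yaml \
    -P "ssh_key_name=default;external_network=public"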

If you are interested in taking on this work, please contact Spyros or me and 
we will coordinate the efforts.

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
[2] 
https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster-fedora-ironic.yaml
[3] 
https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubemaster-fedora-ironic.yaml
[4] 
https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubeminion-fedora-ironic.yaml

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.context 2.4.0 release (newton)

2016-05-24 Thread no-reply
We are stoked to announce the release of:

oslo.context 2.4.0: Oslo Context library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.context

With package available at:

https://pypi.python.org/pypi/oslo.context

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.context

For more details, please see below.

Changes in oslo.context 2.3.0..2.4.0


cf33c02 Trivial: ignore openstack/common in flake8 exclude list
0511e11 Strip roles in from_environ
e192563 Allow deprecated headers in from_environ

Diffstat (except docs and test files)
-

oslo_context/context.py| 36 +++---
tox.ini|  2 +-
3 files changed, 81 insertions(+), 10 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.cache 1.8.0 release (newton)

2016-05-24 Thread no-reply
We are gleeful to announce the release of:

oslo.cache 1.8.0: Cache storage for Openstack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

Changes in oslo.cache 1.7.0..1.8.0
--

6251a15 Trivial: ignore openstack/common in flake8 exclude list

Diffstat (except docs and test files)
-

tox.ini | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

