Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Cory Benfield

> On 14 Oct 2015, at 23:23, Thomas Goirand  wrote:
> I do understand that you don't like being called this way, though this
> is still the reality. Vendorizing is still inflicting major pain on a
> lot of your users:
> - This thread is one demonstration of it.
> - You having to contact downstream distros is another.
> - The unbundling work inflicted on downstream package maintainers is a
> third.
> 
> So like it or not, it is a fact that it is difficult to work with
> requests because of the way it is released upstream.

As I said earlier, I’m not getting drawn into a debate about vendorizing in 
this forum. The last one of these was sufficiently toxic that I’m simply 
unwilling to have the discussion here. If you really want to talk about this 
again, I’m happy to take it out of this mailing list to somewhere where fewer 
people are going to make the discussion personal.

Note however that point 2 is not accurate. The main reason we have 
relationships with our downstream repackagers is for security release purposes. 
Per our security policy, we have exchanged GPG keys with them, and will make 
sure we contact them ahead of time so we can perform a synchronised release of 
security updates. In this instance we chose to use our relationship with our 
repackagers to get this change made, but it is not the main reason we 
communicate with them. (Also, they’re nice people!)

>> has had a policy in place for six months
>> that ensures that you can have the same result with pip and
>> system packages. For six months we have only updated to versions
>> of urllib3 that are actually released, and therefore that are
>> definitely available from pip (and potentially available from
>> the distribution).
>> 
>> The reason this has not been working is because the distributions,
>> when they unbundle us, have not been populating their setup.py to
>> reflect the dependency: only their own metadata. We’ve been in
>> contact with them, and this change is being made in the
>> distributions we have relationships with.
> 
> Though you could have avoided all of this pain if you were not bundling.
> Doesn't all of this make you re-think your vendorizing policy? Or still
> not? I'm asking because I still haven't read your answer to the
> important question: since you aren't using specially crafted versions of
> urllib3 anymore, and are now only using official releases, what is the
> reason that keeps you vendorizing? Not trying to convince you here, just
> trying to understand.

Again, I’m not being drawn into this discussion here.

Let me make one point, though. There are three people involved in a 
decision-making role on the requests project, and this is an important issue to 
every member of the team. This policy has been part of the requests project for 
a very long time, and we aren’t going to change it in a short space of time: 
I’m certainly not going to unilaterally do so. All I can promise you is that we 
continue to talk about this internally, and if we *unanimously* feel 
comfortable changing our policy we will do so. Until then, I’m happy to do my 
best to accommodate as many people as possible (which in this case I believe we 
have done).

Cory





Re: [openstack-dev] [tc][senlin] Request to senlin project to big tent

2015-10-15 Thread Sean Dague
On 10/15/2015 05:13 AM, Qiming Teng wrote:
> Dear TC members,
> 
> Your reviews on the proposal are highly appreciated.
> 
> Subject: Add senlin project to big tent
> Link: https://review.openstack.org/#/c/235172/
> 
> Regards,
>  Qiming

Great. Just as an FYI, it's 3 weeks until the next TC meeting because so
many folks are going to .jp early and then there is summit. So if you
don't get feedback right away, realize it's the schedule, and not
anything about your proposal.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-15 Thread Sean Dague
On 10/14/2015 08:22 PM, Ian Wienand wrote:
> On 10/14/2015 11:08 AM, Zaro wrote:
>> We are soliciting feedback so please let us know what you think.
> 
> Since you asked :)
> 
> Mostly it's just different which is fine.  Two things I noticed when
> playing around, shown in [1]
> 
> When reviewing, the order "-1 0 +1" is kind of counter-intuitive to
> the usual dialog layout of the "most positive" thing on the left;
> e.g. [OK] [Cancel] dialogs.  I just found it odd to interact with.
> 
> Maybe people default themselves to -1 though :)
> 
> The colours for +1/-1 seem to be missing.  You've got to think a lot
> more to parse the +1/-1 rather than just glance at the colours.

Oh, the -1 color is a site-local change to the CSS IIRC. I think I made that
in Paris. It's something we could do for the new change screen as well. The
CSS selector is weird, but seems to work fine:

/* css attribute selector to make -1s show up red in new screen */
[title="This patch needs further work before it can be merged"] {
color: red;
}

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [oslo] Require documenting changes with versionadded and versionchanged

2015-10-15 Thread Victor Stinner

Hi,

I propose that changes must now be documented in Oslo libraries. If a 
change is not documented, it must *not* be approved.


IMHO it's very important to document all changes. Otherwise, it becomes
really hard to tell, just by reading the doc, whether a specific parameter
or a specific function can be used :-/ And we should not force users to
always upgrade the Oslo libraries to the latest versions. It doesn't work
on stable branches :-)


Currently, ".. versionadded:: x.y" and ".. versionchanged:: x.y" are not 
(widely) used in Oslo libraries. A good start would be to dig the Git 
history to add these tags. I started to do this for the oslo.config library:

https://review.openstack.org/#/c/235232/

I'm interested to write similar changes for other Oslo libraries.

Since creating this change, I have started to vote -1 on all oslo.config
patches which change the API without documenting it.
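
To make this concrete, here is a minimal sketch of what such a tag looks like
in a docstring (the function name and version numbers are only placeholders):

    def fetch(url, timeout=30):
        """Fetch the given URL.

        .. versionadded:: 1.2

        .. versionchanged:: 1.4
           Added the *timeout* parameter.
        """

Sphinx then renders these notes next to the function documentation, so users
on a stable branch can see at a glance whether a parameter exists in the
version they have.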


What do you think?

Victor



Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-10-15 Thread Alan Pevec
2015-10-14 10:59 GMT+02:00 Thierry Carrez :
> Sean Dague wrote:
>> I think that whoever sets the tag should also push those fixes. We had
>> some kilo content bogging down the gate today because of this kind of
>> failure. Better to time this as close as possible with the tag setting.
>
> Right. The ideal process is to push the version bump and cut the tag
> from the commit just before that. That is how it's done on the main
> release when we start cutting the release branch at RC1.

I've now documented this step in StableReleaseBranch wiki:
https://wiki.openstack.org/w/index.php?title=StableBranchRelease=92628=92626

Cheers,
Alan



[openstack-dev] [Neutron] [Tempest] where fwaas tempest tests should be?

2015-10-15 Thread Takashi Yamamoto
hi,

i'm looking into the fwaas tempest tests and have a question about code location.

currently,

- fwaas api tests and its rest client are in neutron repo
- there are no fwaas scenario tests

eventually,

- fwaas api tests should be moved into neutron-fwaas repo
- fwaas scenario tests should be in neutron-fwaas repo too.
- the rest client will be in tempest-lib

is it right?



Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-15 Thread Sofer Athlan-Guyot
Gilles Dubreuil  writes:

> On 08/10/15 03:40, Rich Megginson wrote:
>> On 10/07/2015 09:08 AM, Sofer Athlan-Guyot wrote:
>>> Rich Megginson  writes:
>>>
 On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
> Rich Megginson  writes:
>
>> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
>>> Gilles Dubreuil  writes:
>>>
 On 30/09/15 03:43, Rich Megginson wrote:
> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>>> Gilles Dubreuil  writes:
>>>
 On 15/09/15 06:53, Rich Megginson wrote:
> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>> Hi,
>>
>> Gilles Dubreuil  writes:
>>
>>> A. The 'composite namevar' approach:
>>>
>>> keystone_tenant {'projectX::domainY': ... }
>>>
>>> B. The 'meaningless name' approach:
>>>
>>> keystone_tenant {'myproject': name => 'projectX',
>>>  domain => 'domainY',
>>>  ...}
>>>
>>> Notes:
>>>   - Actually using both combined should work too with
>>> the domain
>>> supposedly overriding the name part of the domain.
>>>   - Please look at [1] this for some background
>>> between the two
>>> approaches:
>>>
>>> The question
>>> -
>>> Decide between the two approaches, the one we would like to
>>> retain for
>>> puppet-keystone.
>>>
>>> Why it matters?
>>> ---
>>> 1. Domain names are mandatory in every user, group or
>>> project.
>>> Besides
>>> the backward compatibility period mentioned earlier, where
>>> no domain
>>> means using the default one.
>>> 2. Long term impact
>>> 3. Both approaches are not completely equivalent, which has
>>> different
>>> consequences on the future usage.
>> I can't see why they couldn't be equivalent, but I may be
>> missing
>> something here.
> I think we could support both.  I don't see it as an either/or
> situation.
>
>>> 4. Being consistent
>>> 5. Therefore the community to decide
>>>
>>> Pros/Cons
>>> --
>>> A.
>> I think it's the B: meaningless approach here.
>>
>>>Pros
>>>  - Easier names
>> That's subjective, creating unique and meaningful name
>> don't look
>> easy
>> to me.
> The point is that this allows choice - maybe the user
> already has some
> naming scheme, or wants to use a more "natural" meaningful
> name -
> rather
> than being forced into a possibly "awkward" naming scheme
> with "::"
>
>   keystone_user { 'heat domain admin user':
> name => 'admin',
> domain => 'HeatDomain',
> ...
>   }
>
>   keystone_user_role {'heat domain admin
> user@::HeatDomain':
> roles => ['admin']
> ...
>   }
>
>>>Cons
>>>  - Titles have no meaning!
> They have meaning to the user, not necessarily to Puppet.
>
>>>  - Cases where 2 or more resources could exists
> This seems to be the hardest part - I still cannot figure
> out how
> to use
> "compound" names with Puppet.
>
>>>  - More difficult to debug
> More difficult than it is already? :P
>
>>>  - Titles mismatch when listing the resources
>>> (self.instances)
>>>
>>> B.
>>>Pros
>>>  - Unique titles guaranteed
>>>  - No ambiguity between resource found and their
>>> title
>>>Cons
>>>  - More complicated titles
>>> My vote
>>> 
>>> I would love to have the approach A for easier name.
>>> But I've seen the challenge of maintaining the providers
>>> behind the
>>> curtains and the confusion it creates with name/titles and
>>> when
>>> not sure
>>> 

Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-15 Thread Vladimir Kuklin
Gilles,

5xx errors like 503 and 502/504 could always be intermittent operational
issues. E.g. when you access your keystone backends through some proxy and
there is a connectivity issue between the proxy and backends which
disappears in 10 seconds, you do not need to rerun the puppet completely -
just retry the request.

Regarding "REST interfaces for all Openstack API" - this is very close to
another topic that I raised ([0]) - using native Ruby application and
handle the exceptions. Otherwise whenever we have an OpenStack client
(generic or neutron/glance/etc. one) sending us a message like '[111]
Connection refused' this message is very much determined by the framework
that OpenStack is using within this release for clients. It could be
`requests` or any other type of framework which sends different text
message depending on its version. So it is very bothersome to write a bunch
of 'if' clauses or gigantic regexps instead of handling simple Ruby
exception. So I agree with you here - we need to work with the API
directly. And, by the way, if you also support switching to native Ruby
OpenStack API client, please feel free to support movement towards it in
the thread [0]

Matt and Gilles,

Regarding puppet-healthcheck - I do not think that puppet-healthcheck
handles exactly what I am mentioning here - it is not running at exactly
the same time as we run the request.

E.g. 10 seconds ago everything was OK, then we had a temporary connectivity
issue, then everything is ok again in 10 seconds. Could you please describe
how puppet-healthcheck can help us solve this problem?

Or another example - there was an issue with keystone accessing the token
database when you have several keystone instances running, or there was some
desync between these instances, e.g. you fetched the token at keystone #1 and
then you verify it against keystone #2. Keystone #2 fails to verify it, not
because the token is bad, but because keystone #2 itself has issues. We would
get a 401 error, and instead of re-running puppet we would just need to
handle this issue locally by retrying the request.
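
Just to illustrate the idea (the providers are Ruby, so this is only a Python
sketch of the retry pattern, not the code I am proposing):

    import time

    import requests

    # statuses discussed above; 401 is retried only for the token desync case
    RETRIABLE = {401, 502, 503, 504}

    def get_with_retry(url, headers=None, attempts=3, delay=10):
        # Retry a few times before giving up, instead of re-running puppet.
        for attempt in range(1, attempts + 1):
            resp = requests.get(url, headers=headers)
            if resp.status_code not in RETRIABLE or attempt == attempts:
                return resp
            time.sleep(delay)

Something like this wrapper, shared by all providers, is the kind of common
exception handling library I mean.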

[0] http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423

On Thu, Oct 15, 2015 at 12:23 PM, Gilles Dubreuil  wrote:

>
>
> On 15/10/15 12:42, Matt Fischer wrote:
> >
> >
> > On Thu, Oct 8, 2015 at 5:38 AM, Vladimir Kuklin wrote:
> >
> > Hi, folks
> >
> > * Intro
> >
> > Per our discussion at Meeting #54 [0] I would like to propose the
> > uniform approach of exception handling for all puppet-openstack
> > providers accessing any types of OpenStack APIs.
> >
> > * Problem Description
> >
> > While working on Fuel during deployment of multi-node HA-aware
> > environments we faced many intermittent operational issues, e.g.:
> >
> > 401/403 authentication failures when we were doing scaling of
> > OpenStack controllers due to difference in hashing view between
> > keystone instances
> > 503/502/504 errors due to temporary connectivity issues
>
> The 5xx errors are not connectivity issues:
>
> 500 Internal Server Error
> 501 Not Implemented
> 502 Bad Gateway
> 503 Service Unavailable
> 504 Gateway Timeout
> 505 HTTP Version Not Supported
>
> I believe nothing should be done to trap them.
>
> The connectivity issues are different matter (to be addressed as
> mentioned by Matt)
>
> > non-idempotent operations like deletion or creation - e.g. if you
> > are deleting an endpoint and someone is deleting on the other node
> > and you get 404 - you should continue with success instead of
> > failing. 409 Conflict error should also signal us to re-fetch
> > resource parameters and then decide what to do with them.
> >
> > Obviously, it is not optimal to rerun puppet to correct such errors
> > when we can just handle an exception properly.
> >
> > * Current State of Art
> >
> > There is some exception handling, but it does not cover all the
> > aforementioned use cases.
> >
> > * Proposed solution
> >
> > Introduce a library of exception handling methods which should be
> > the same for all puppet openstack providers as these exceptions seem
> > to be generic. Then, for each of the providers we can introduce
> > provider-specific libraries that will inherit from this one.
> >
> > Our mos-puppet team could add this into their backlog and could work
> > on that in upstream or downstream and propose it upstream.
> >
> > What do you think on that, puppet folks?
> >
>
> The real issue is that we're dealing with openstackclient, a CLI tool
> and not an API. Therefore no error propagation is expected.
>
> Using REST interfaces for all OpenStack APIs would provide all HTTP errors:
>
> Check for "HTTP Response Classes" in
> http://ruby-doc.org/stdlib-2.2.3/libdoc/net/http/rdoc/Net/HTTP.html
>
>
> > [0]
> 

Re: [openstack-dev] [puppet][ec2api] - First version of puppet module

2015-10-15 Thread Denis Egorenko
Hello Marcos,

I've looked through your module and it looks cool. Some code issues are
noted inline. Maybe just one nitpick from my side: can you add a link to the
documentation in the README file? Like it is done in the puppet-openstack
modules.

Also, thank you for taking care of this and keeping it consistent with the
puppet-openstack modules.

2015-10-15 10:49 GMT+03:00 Marcos Fermin Lobo :

> Hi all,
>
> I would like to report to all of you that a first version of OpenStack EC2
> API puppet module is available in order to get your feedback.
>
> This is the link https://github.com/cernops/puppet-ec2api
>
> Please feel free to contribute to this puppet module. All contributions
> and feedback are very welcome.
>
> Regards,
> Marcos.
>


-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis


Re: [openstack-dev] [Neutron] [Tempest] where fwaas tempest tests should be?

2015-10-15 Thread Assaf Muller
On Thu, Oct 15, 2015 at 7:25 AM, Takashi Yamamoto wrote:

> hi,
>
> i'm looking in fwaas tempest tests and have a question about code location.
>
> currently,
>
> - fwaas api tests and its rest client are in neutron repo
> - there are no fwaas scenario tests
>
> eventually,
>
> - fwaas api tests should be moved into neutron-fwaas repo
> - fwaas scenario tests should be in neutron-fwaas repo too.
>

I believe scenario tests that invoke APIs outside of Neutron should
stay in (or be introduced to) Tempest.


> - the rest client will be in tempest-lib
>
> is it right?
>


[openstack-dev] [Neutron] Neutron Meetup in Tokyo Summit

2015-10-15 Thread Takashi Yamamoto
hi,

we are planning to have a social meetup for neutron developers
on Thursday evening.
anyone who is interested in neutron development is welcome.

the place is not fixed yet.  we want to estimate how many people want to go.
please fill in the following RSVP if you are interested.  sooner is better.
http://neutrontokyo.app.rsvpify.com/

suggestions for the place are also welcome.
by default, it will probably be an izakaya. [1]

[1] https://en.wikipedia.org/wiki/Izakaya



[openstack-dev] [app-catalog] [glance] [murano] Data Assets API for App Catalog

2015-10-15 Thread Alexander Tivelkov
Hi folks,

I’ve noticed that the Community Application Catalog has begun to implement
its own API, and it seems to me that we are going to have some significant
duplication of efforts with the code which has already been implemented in
Glance as the Artifact Repository initiative (also known as Glance V3).
From the very beginning of the App Catalog project (and I’ve been involved
in it since February) I’ve been proposing to use Glance as its backend,
because from my point of view it covers like 90% of the needed
functionality. But it looks like we have some kind of miscommunication
here, as I am getting some confusing questions and assumptions, like the
vision of Glance V3 being a dedicated backend for Murano (which is definitely
incorrect).
So, I am writing this email to clarify my vision of what Glance V3 is and
how its features may be used to provide the REST API for Community App
Catalog.

1.  Versioned schema
First of all, Glance V3 operates on entities called “artifacts”, and these
ones perfectly map to the Data Assets of community app catalog. The
artifacts are strongly typed: there are artifact types for murano packages,
glance images, heat templates - and there may be (and will be) more. Each
artifact type is represented by a plugin, defining the schema of objects’
data and metadata and - optionally - custom logic. So, this thing is
extensible: when a new type of asset needs to be added to a catalog it can
be done really quickly by just defining the schema and putting that schema
into a plugin. Also, these plugins are versioned, so the possible changes
in the schema are handled properly.

2. Generic properties
Next, all the artifact types in Glance V3 have some generic metadata
properties (i.e. part of the schema which is common for all the types),
including the name, the version, description, authorship information and so
on. This also corresponds to the data schema of community app catalog. The
mapping is not 1:1, but we can work together on this to make sure that
these generic properties match the expectations of the catalog.

3. Versioning
Versions are very important for catalogs of objects, so Glance V3 was
initially designed keeping versioning questions in mind: each artifact has
a semver-based version assigned, so the artifacts having the same name but
different versions are grouped into the proper sequences. API is able to
query artifacts based on their version spec, e.g. it is possible to fetch
the latest artifact with the name “foo” having the version greater than 2.1
and less than 3.5.7 - or any other version spec, similar to pip or any
other similar tool. As far as I know, community app catalog does not have
such capabilities right now - and I strongly believe that it is really a
must have feature for a catalog to be successful. At least it is absolutely
mandatory for Murano packages, which are the only “real apps” among the
asset types right now.
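
To illustrate what a version spec selects (pip-style semantics; the actual
Glance code may differ, this is only a sketch):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    spec = SpecifierSet('>2.1,<3.5.7')
    available = [Version(v) for v in ('2.0.1', '2.4.0', '3.5.6', '3.6.0')]
    matching = sorted(v for v in available if v in spec)
    # the "latest matching" artifact would be 3.5.6 here
    print(matching[-1] if matching else None)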

4. Cross artifact dependencies
Glance V3 also has the dependency relations from the very beginning, so
they may be defined as part of artifact type schema. As a result, an
artifact may “reference” any number of other artifacts with various
semantic. For example, murano package may define a set of references to
other murano packages and call it “requires” - and this will act similar to
the requirements of a python package. Similar properties may be defined for
heat templates and glance images - they may reference each other with
various semantics.
Of course, the definitions of such dependencies may be done internally
inside the packages, so they may be resolved locally by the service which
is going to use it, but letting the catalog know about them will allow us
to do the import-export operations for any given artifacts and its
dependencies automatically, only by the means of the catalog itself.

5. Search and filtering API
Right now Glance V3 API is in experimental state (we plan to stabilize it
during the Mitaka cycle), but it already provides quite good capabilities
to discover things. It can search artifacts by their type, name and
(optionally) aforementioned version specs, by tag or even by arbitrary set
of metadata properties. We have plans to integrate Glance V3 with the
Searchlight project to have even more index and search capabilities using
its elastic search engine.

6. Data storage
As you probably know, Glance does not own the binary data of its images.
Instead, it provides an abstraction of the backend storage, which may be
swift, ceph, s3 or something else. The same approach is used in Glance V3
for artifacts data, but with more per-type control: particular artifact
types may be configured independently to store their blobs in different
backends. This may be of use for Community App Catalog which operates on
different storages for its assets.

7. Sharing and access control.
Glance V3 inherits the same access mechanics present in Glance V2: an
artifact may be visible to its owner tenant only, be public (i.e. visible
to all the 

Re: [openstack-dev] [Keystone] [Horizon] Pagination support for Identity dashboard entities

2015-10-15 Thread Boris Bobrov
Hey,

As I see it, we decided to go with limiting and filtering of the list.
Bug report https://bugs.launchpad.net/keystone/+bug/1501698 was opened about
this issue. I've written a chain of patches to fix the bug; please review:
https://review.openstack.org/#/c/234849/
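
For reference, the marker/limit pattern Horizon relied on is plain
client-driven paging; a rough sketch (parameter names follow the v2 API
referenced below, purely illustrative, not actual keystoneclient code):

    import requests

    def list_tenants(endpoint, token, page_size=20):
        # Page through /tenants using 'limit' and 'marker' query parameters.
        marker = None
        while True:
            params = {'limit': page_size}
            if marker:
                params['marker'] = marker
            resp = requests.get(endpoint + '/tenants', params=params,
                                headers={'X-Auth-Token': token})
            resp.raise_for_status()
            tenants = resp.json().get('tenants', [])
            if not tenants:
                return
            for tenant in tenants:
                yield tenant
            marker = tenants[-1]['id']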

On Friday 14 August 2015 12:46:40 Timur Sufiev wrote:
> Hello, Keystone folks!
> 
> I've just discovered an unfortunate fact that Horizon pagination for
> Tenants/Projects list that worked with Keystone v2 doesn't work
> with Keysone v3 anymore - its API call simply lacks the 'marker'
> and 'limit' parameters [1] that Horizon is relying upon. Meanwhile
> having looked through [2] and [3] I'm a bit confused: while
> Keystone v3 API states it supports [2] pagination for every kind of
> entities (by using 'page' and 'per_page' parameters), the related
> blueprint [3] is not yet approved, meaning that most likely the API
> implementation did not make it into existing Keystone codebase. So
> I wonder whether there are some plans to implement pagination for
> Keystone API calls that list its entities?
> 
> P.S. I'm aware of SearchLight project that tries to help Horizon
> with other OpenStack APIs (part of its mission), what I'm trying to
> understand here is are we going to have some fallback pagination
> mechanism for Horizon's Identity in a short-term/mid-term
> perspective.
> 
> [1] http://developer.openstack.org/api-ref-identity-admin-v2.html
> [2] http://developer.openstack.org/api-ref-identity-v3.html
> [3] https://blueprints.launchpad.net/keystone/+spec/pagination

-- 
Best regards,
Boris Bobrov



Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-10-15 Thread Tang Chen

Hi all,

The spec is now available here:
https://review.openstack.org/#/c/235169/

Please help to review.

Thanks.

On 10/14/2015 10:05 AM, Tang Chen wrote:

Hi, all,

Please help to review this BP.

https://blueprints.launchpad.net/nova/+spec/live-migration-state-machine


Currently, the migration_status field in the Migration object indicates the
status of the migration process. But in the current code, it is represented
as a plain string, like 'migrating', 'finished', and so on.

The strings can be confusing to different developers, e.g. there are 3
statuses representing that the migration finished successfully:
'finished', 'completed' and 'done',
and 2 for a migration in progress: 'running' and 'migrating'.

So I think we should use constants or enum for these statuses.
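
A minimal sketch of the idea (the names here are made up, just to show how
the duplicates would collapse into one constant each):

    import enum

    class MigrationStatus(enum.Enum):
        # one canonical value instead of 'finished'/'completed'/'done'
        COMPLETED = 'completed'
        # one canonical value instead of 'running'/'migrating'
        RUNNING = 'running'
        ERROR = 'error'

    # comparisons become explicit and typo-proof:
    # if migration.status is MigrationStatus.COMPLETED: ...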


Furthermore, Nikola has proposed to create a state machine for the statuses,
which is part of another, now abandoned, BP. This is also the work I'd like
to carry on with. Please refer to:
https://review.openstack.org/#/c/197668/
https://review.openstack.org/#/c/197669/


Another proposal is to introduce a new member named "state" into Migration,
use a state machine to handle this Migration.state, and leave the
migration_status field as a descriptive, human-readable free-form string.


So what do you think?

Thanks.




Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Dmitry Tantsur

On 10/15/2015 12:18 AM, Robert Collins wrote:

On 15 October 2015 at 11:11, Thomas Goirand  wrote:


One major pain point is unfortunately something ridiculously easy to
fix, but which nobody seems to care about: the long & short descriptions
format. These are usually buried into the setup.py black magic, which by
the way I feel is very unsafe (does PyPi actually execute "python
setup.py" to find out about description texts? I hope they are running
this in a sandbox...).

Since everyone uses the fact that PyPi accepts RST format for the long
description, there's nothing that can really easily fit the
debian/control. Probably a rst2txt tool would help, but still, the long
description would still be polluted with things like changelog, examples
and such (damned, why people think it's the correct place to put that...).

The only way I'd see to fix this situation, would be a PEP. This will
probably take a decade to have everyone switching to a new correct way
to write a long & short description...


Perhaps Debian (1 thing) should change, rather than trying to change
all the upstreams packaged in it (>20K) :)


+1. Both README and PyPI are for users, and I personally find detailed
descriptions (especially a couple of simple examples) on the PyPI page
to be of great value.




-Rob







[openstack-dev] [tc][senlin] Request to senlin project to big tent

2015-10-15 Thread Qiming Teng
Dear TC members,

Your reviews on the proposal are highly appreciated.

Subject: Add senlin project to big tent
Link: https://review.openstack.org/#/c/235172/

Regards,
 Qiming




[openstack-dev] [Nova] Host maintenance mode proposal.

2015-10-15 Thread Tang Chen

Hi all,

I tried to implement a common host maintenance mode handling
in nova compute.

BP: https://blueprints.launchpad.net/nova/+spec/host-maintenance-mode
spec: https://review.openstack.org/#/c/228689/
patches: https://review.openstack.org/#/q/topic:bp/host-maintenance-mode,n,z

But according to John (johnthetubaguy), host maintenance mode is
a functionality we are going to remove. So could anybody give me
some advice if I want to go on with this work?

BTW, if I want to talk about this in an IRC meeting, which meeting should I
attend? The Nova API meeting on Tuesday?

Thanks.


Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-15 Thread Gilles Dubreuil


On 15/10/15 12:42, Matt Fischer wrote:
> 
> 
> On Thu, Oct 8, 2015 at 5:38 AM, Vladimir Kuklin wrote:
> 
> Hi, folks
> 
> * Intro
> 
> Per our discussion at Meeting #54 [0] I would like to propose the
> uniform approach of exception handling for all puppet-openstack
> providers accessing any types of OpenStack APIs.
> 
> * Problem Description
> 
> While working on Fuel during deployment of multi-node HA-aware
> environments we faced many intermittent operational issues, e.g.:
> 
> 401/403 authentication failures when we were doing scaling of
> OpenStack controllers due to difference in hashing view between
> keystone instances
> 503/502/504 errors due to temporary connectivity issues

The 5xx errors are not connectivity issues:

500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported

I believe nothing should be done to trap them.

The connectivity issues are different matter (to be addressed as
mentioned by Matt)

> non-idempotent operations like deletion or creation - e.g. if you
> are deleting an endpoint and someone is deleting on the other node
> and you get 404 - you should continue with success instead of
> failing. 409 Conflict error should also signal us to re-fetch
> resource parameters and then decide what to do with them.
> 
> Obviously, it is not optimal to rerun puppet to correct such errors
> when we can just handle an exception properly.
> 
> * Current State of Art
> 
> There is some exception handling, but it does not cover all the
> aforementioned use cases.
> 
> * Proposed solution
> 
> Introduce a library of exception handling methods which should be
> the same for all puppet openstack providers as these exceptions seem
> to be generic. Then, for each of the providers we can introduce
> provider-specific libraries that will inherit from this one.
> 
> Our mos-puppet team could add this into their backlog and could work
> on that in upstream or downstream and propose it upstream.
> 
> What do you think on that, puppet folks?
> 

The real issue is that we're dealing with openstackclient, a CLI tool
and not an API. Therefore no error propagation is expected.

Using REST interfaces for all OpenStack APIs would provide all HTTP errors:

Check for "HTTP Response Classes" in
http://ruby-doc.org/stdlib-2.2.3/libdoc/net/http/rdoc/Net/HTTP.html


> [0] 
> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-06-15.00.html
> 
> 
> I think that we should look into some solutions here as I'm generally
> for something we can solve once and re-use. Currently we solve some of
> this at TWC by serializing our deploys and disabling puppet site wide
> while we do so. This avoids the issue of Keystone on one node removing
> and endpoint while the other nodes (who still have old code) keep trying
> to add it back.
> 
> For connectivity issues especially after service restarts, we're using
> puppet-healthcheck [0] and I'd like to discuss that more in Tokyo as an
> alternative to explicit retries and delays. It's in the etherpad so
> hopefully you can attend.

+1

> 
> [0] - https://github.com/puppet-community/puppet-healthcheck
> 
> 
> 


Re: [openstack-dev] [Nova] Host maintenance mode proposal.

2015-10-15 Thread Tang Chen


On 10/15/2015 05:23 PM, John Garbutt wrote:

On 15 October 2015 at 10:08, Tang Chen  wrote:

Hi all,

I tried to implement a common host maintenance mode handling
in nova compute.

BP: https://blueprints.launchpad.net/nova/+spec/host-maintenance-mode
spec: https://review.openstack.org/#/c/228689/
patches: https://review.openstack.org/#/q/topic:bp/host-maintenance-mode,n,z

But according to John (johnthetubaguy), host maintenance mode is
a functionality we are going to remove. So would anybody give me
some advice if I want to go on with this work ?

BTW, if I want to talk about this in IRC meeting, which meeting should I
attend ? Nova API meeting on Tuesday ?

Sorry, I hadn't spotted there was a spec up for review.
I will go take a look at that, and add my comments on there.


Thank you very much. :)



Certainly the XenAPI host maintenance mode was built for support of
XenServer pools.


Yes, I noticed that. And I was trying to do the same thing in the libvirt
driver.

But after digging into the code, I found that we can do this entirely in
nova compute.

So the latest plan is to implement a common one in compute as the default
handling. Xen implemented its own version, and that is OK. Drivers that
don't have one can use the default.

You can also refer to the v1 patch set. I think the code is the best
description.



That support is currently untested, so I want to push for that to be
deprecated, and eventually removed.


Please let me know your latest idea. And I'd like to help.

Thanks.



Thanks,
John



Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Thomas Goirand
On 10/15/2015 12:18 AM, Robert Collins wrote:
> On 15 October 2015 at 11:11, Thomas Goirand  wrote:
>>
>> One major pain point is unfortunately something ridiculously easy to
>> fix, but which nobody seems to care about: the long & short descriptions
>> format. These are usually buried into the setup.py black magic, which by
>> the way I feel is very unsafe (does PyPi actually execute "python
>> setup.py" to find out about description texts? I hope they are running
>> this in a sandbox...).
>>
>> Since everyone uses the fact that PyPi accepts RST format for the long
>> description, there's nothing that can really easily fit the
>> debian/control. Probably a rst2txt tool would help, but still, the long
>> description would still be polluted with things like changelog, examples
>> and such (damned, why people think it's the correct place to put that...).
>>
>> The only way I'd see to fix this situation, would be a PEP. This will
>> probably take a decade to have everyone switching to a new correct way
>> to write a long & short description...
> 
> Perhaps Debian (1 thing) should change, rather than trying to change
> all the upstreams packaged in it (>20K) :)
> 
> -Rob

Well, having the changelog (and other stuff) of packages merged into the
long description is not helpful, neither for Debian nor for upstream Python
packages.
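
For what it's worth, keeping the long description down to a short overview is
trivial on the setup.py side; a sketch only (names and paths are just an
example, not a recommendation for any particular project):

    # setup.py -- read only the overview into long_description; do not
    # concatenate the changelog or examples into it.
    from setuptools import setup

    with open('README.rst') as f:
        long_description = f.read()

    setup(
        name='example-lib',          # hypothetical project name
        version='1.0.0',
        description='One-line short description.',
        long_description=long_description,
        packages=['example_lib'],
    )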

Thomas




[openstack-dev] [puppet][ec2api] - First version of puppet module

2015-10-15 Thread Marcos Fermin Lobo
Hi all,

I would like to let all of you know that a first version of the OpenStack
EC2 API puppet module is available, in order to get your feedback.

This is the link https://github.com/cernops/puppet-ec2api

Please feel free to contribute to this puppet module. All contributions and 
feedback are very welcome.

Regards,
Marcos.


Re: [openstack-dev] [neutron] Neutron rolling upgrade - are we there yet?

2015-10-15 Thread Anna Kamyshnikova
Thanks a lot for bringing up this topic! I'm interested in working on online
data migration in Mitaka.

3.  Database migration

a.  Online schema migration was done in Liberty release, any work left
to do?

The work here is finished. The only thing I'm aware of is some extra tests:
https://review.openstack.org/#/c/220091/. But this needs some Alembic
changes. All the main functionality is implemented.

b.  TODO: Online data migration to be introduced in the Mitaka cycle.

i. Online data migration can be done during normal operation on the data.

ii. There should also be a script to invoke the data migration in the
background.

c.  Currently the contract phase is doing the data migration. But since
the contract phase should be run offline, we should move the data migration
to a preceding step. Also, the contract phase should be blocked if there is
still relevant data in removed entities.

i. The contract phase can be executed online, if all new code is running in
the setup.

d.  The other strategy is to not drop tables, alter names or remove columns
from the DB – what’s in, it’s in. We should pay more attention to code
reviews, merge only additive changes and avoid questionable DB
modifications.

Unfortunately, sometimes we may need such changes, even though we have
always tried to avoid them. Now that the plugins have been moved out of
Neutron it may be easier, but I'm still not sure we can have such a
restriction.

e.  The Neutron server should be updated first, in order to do data
translation from the old format into the new schema. When doing this, we can
be sure that old data will not be inserted into old DB structures.
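
Regarding (b), a rough sketch of what an online data migration helper could
look like (table and column names are invented, this is not existing Neutron
code):

    import sqlalchemy as sa

    def migrate_descriptions(engine, batch_size=50):
        # Move data to the new column a few rows at a time, while the
        # service keeps running; re-run until no rows are left.
        meta = sa.MetaData()
        networks = sa.Table('networks', meta, autoload_with=engine)
        while True:
            with engine.begin() as conn:
                rows = conn.execute(
                    sa.select(networks.c.id, networks.c.old_description)
                    .where(networks.c.new_description.is_(None))
                    .limit(batch_size)
                ).fetchall()
                if not rows:
                    return
                for row in rows:
                    conn.execute(
                        networks.update()
                        .where(networks.c.id == row.id)
                        .values(new_description=row.old_description)
                    )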

On Wed, Oct 14, 2015 at 9:27 PM, Dan Smith  wrote:

> > I would like to gather all upgrade activities in Neutron in one place,
> > in order to summarizes the current status and future activities on
> > rolling upgrades in Mitaka.
>
> Glad to see this work really picking up steam in other projects!
>
> > b.  TODO: To have the rolling upgrade we have to implement the RPC
> > version pinning in conf.
> >
> > i. I’m not a big
> > fan of this solution, but we can work out better idea if needed.
>
> I'll just point to this:
>
>   https://review.openstack.org/#/c/233289/
>
> and if you go check the logs for the partial-ncpu job, you'll see
> something like this:
>
>   nova.compute.rpcapi  Automatically selected compute RPC version 4.5
> from minimum service version 2
>
> I think that some amount of RPC pinning is probably going to be required
> for most people in most places, given our current model. But I assume
> the concern is around requiring this to be a manual task the operators
> have to manage. The above patch is the first step towards nova removing
> this as something the operators have to know anything about.
>
> --Dan
>



-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [nova-compute][nova][libvirt] Extending Nova-Compute for image prefetching

2015-10-15 Thread John Garbutt
On 10 October 2015 at 09:35, Alberto Geniola wrote:

> Hi Michael,
>
> and thank you for your answer.
>
> Indeed, what I want to do is to add methods to the ImageCache.py module
> (listing, adding, deleting). So far, this module only takes care of image
> deletion: this represents the "cache" of images. Now, I want to populate
> the cache with some images on the hypervisor (as you mention) without
> having any instance running it, yet. The method I would like to add should
> call the appropriate method from the hypervisor driver (let's say libvirt)
> to trigger the image download/creation without starting it (I guess
> something like calling _create_image() should do the trick).
>
> Your question is actually a good one: how will a user be able to trigger this
> image caching mechanism?
> My idea is to extend the HTTP Nova Compute API. I would like to add a
> resource, let's say "CachedImages", to the API tree. In this way,
> interacting via CRUD operations we should be able to CALL/CAST the rpc api
> and interact with the imagecache module.
>
> Is it clearer now? Do you see any problem in this approach?
>

I think prefetching is a problem that needs to be solved.

It would be great if you can submit a nova-spec on this, so we can discuss
it there:
https://github.com/openstack/nova-specs#readme

From previous discussions, I like the idea of an API to tell a specific
compute node to add an image into its cache. Probably need to specify an
expiry time, for when it gets deleted if never used, etc.

In terms of a CRUD API, I am less sure about the cost/benefit. That's mostly
because an RPC call in the API always goes badly (with timeouts, etc.), and
adding DB tables feels like overkill and creates a state sync problem. But
maybe I am not seeing the full picture there.

This seems like a good thing to discuss in the spec review, where there
will be more context around the use cases, etc.
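
Just to make the expiry idea concrete, the first iteration could be as small
as a per-node manifest; purely a sketch, nothing below is an existing Nova
interface or path:

    import json
    import time

    MANIFEST = '/var/lib/nova/prefetched_images.json'   # hypothetical path

    def record_prefetch(image_id, ttl_seconds):
        # Remember a prefetched image and when it may be purged if unused.
        try:
            with open(MANIFEST) as f:
                entries = json.load(f)
        except (IOError, ValueError):
            entries = {}
        now = time.time()
        entries[image_id] = {'fetched_at': now, 'expires_at': now + ttl_seconds}
        with open(MANIFEST, 'w') as f:
            json.dump(entries, f)

The image cache manager could then read the same file when deciding what to
purge, which keeps the two from stomping on each other.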

Thanks,
johnthetubaguy




>
> On Thu, Oct 8, 2015 at 11:45 PM, Michael Still  wrote:
>
>> I think I'd rephrase your definition of pre-fetched to be honest --
>> something more like "images on this hypervisor node without a currently
>> running instance". So, your operations would become:
>>
>>  - trigger an image prefetching
>>  - list unused base images (and perhaps when they were last used)
>>  - delete an unused image
>>
>> All of that would need to tie into the image cache management code so
>> that its not stomping on your images. In fact, you're probably best of
>> adding all of this as tweaks to the image cache manager anyways.
>>
>> One question though -- who is calling these APIs? Are you adding a
>> central service to orchestrate these calls?
>>
>> Michael
>>
>>
>>
>> On Thu, Oct 8, 2015 at 10:50 PM, Alberto Geniola <
>> albertogeni...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I'm considering to extend the Nova-Compute API in order to provide
>>> image-prefetching capabilities to OS.
>>>
>>> In order to allow image prefetching, I ended up with the need to add
>>> three different APIs on the nova-compute nodes:
>>>
>>>   1. Trigger an image prefetching
>>>
>>>   2. List prefetched images
>>>
>>>   3. Delete a prefetched image
>>>
>>>
>>>
>>> About the point 1 I saw I can re-use the libvirt driver function
>>> _create_image() to trigger the image prefetching. However, this approach
>>> will not store any information about the stored image locally. This leads
>>> to an issue: how do I retrieve a list of already fetched images? A quick
>>> and simple possibility would be having a local file, storing information
>>> about the fetched images. Would it be acceptable? Is there any best
>>> practice in OS community?
>>>
>>>
>>>
>>> Any ideas?
>>>
>>>
>>> Ty,
>>>
>>> Al.
>>>
>>> --
>>> Dott. Alberto Geniola
>>>
>>>   albertogeni...@gmail.com
>>>   +39-346-6271105
>>>   https://www.linkedin.com/in/albertogeniola
>>>
>>> Web: http://www.hw4u.it
>>>
>>>
>>>
>>
>>
>> --
>> Rackspace Australia
>>
>>
>
>
> --
> Dott. Alberto Geniola
>
>   albertogeni...@gmail.com
>   +39-346-6271105
>   https://www.linkedin.com/in/albertogeniola
>
> Web: http://www.hw4u.it
>

Re: [openstack-dev] [Nova] Host maintenance mode proposal.

2015-10-15 Thread John Garbutt
On 15 October 2015 at 10:08, Tang Chen  wrote:
> Hi all,
>
> I tried to implement a common host maintenance mode handling
> in nova compute.
>
> BP: https://blueprints.launchpad.net/nova/+spec/host-maintenance-mode
> spec: https://review.openstack.org/#/c/228689/
> patches: https://review.openstack.org/#/q/topic:bp/host-maintenance-mode,n,z
>
> But according to John (johnthetubaguy), host maintenance mode is
> a functionality we are going to remove. So would anybody give me
> some advice if I want to go on with this work ?
>
> BTW, if I want to talk about this in IRC meeting, which meeting should I
> attend ? Nova API meeting on Tuesday ?

Sorry, I hadn't spotted there was a spec up for review.
I will go take a look at that, and add my comments on there.

Certainly the XenAPI host maintenance mode was built for support of
XenServer pools.
That support is currently untested, so I want to push for that to be
deprecated, and eventually removed.

Thanks,
John



Re: [openstack-dev] [oslo] Require documenting changes with versionadded and versionchanged

2015-10-15 Thread Brant Knudson
On Thu, Oct 15, 2015 at 5:52 AM, Victor Stinner  wrote:

> Hi,
>
> I propose that changes must now be documented in Oslo libraries. If a
> change is not documented, it must *not* be approved.
>
> IMHO it's very important to document all changes. Otherwise, it becomes
> really hard to guess if a specific parameter or a specific function can be
> used just by reading the doc :-/ And we should not force users to always
> upgrading the Oslo libraries to the latest versions. It doesn't work on
> stable branches :-)
>
> Currently, ".. versionadded:: x.y" and ".. versionchanged:: x.y" are not
> (widely) used in Oslo libraries. A good start would be to dig the Git
> history to add these tags. I started to do this for the oslo.config library:
> https://review.openstack.org/#/c/235232/
>
> I'm interested to write similar changes for other Oslo libraries.
>
> Because I created this change, I started to vote -1 on all patches which
> oslo.config changes the API without documenting the doc.
>
> What do you think?
>
>
Victor
>
>
Submitters don't know what release their change is going to get into. They
might submit the review when version 1.1.0 is current, so they mark the
change as added in 1.2.0, but then it doesn't get merged until after 1.4.0 is
tagged. Also, the submitter doesn't know what the next release is going to
be tagged as, since it might be 1.2.0 or it might be 2.0.0.

So this will create extra churn as reviews have to be updated with the
release #, and the docs will likely be wrong when reviewers forget to check
it (unless this can be automated).

We have the same problem with documenting when something is deprecated.

I don't think it's worth it to document in which release something is added
in the oslo libs. We're not charging for this stuff, so if you want the
online docs to match your code, use the latest. Or check out the tag and
generate the docs for the release you're on to see if the feature is there.

:: Brant


[openstack-dev] [Tacker] Proposing Bob Haddleton to core team

2015-10-15 Thread Sridhar Ramaswamy
I would like to nominate Bob Haddleton to the Tacker core team.

In the current Liberty cycle Bob made significant, across-the-board
contributions to Tacker [1]. Starting with many usability enhancements and
squashed bugs, Bob has shown commitment and consistently produced
high-quality code. To cap it off, he recently landed Tacker's health
monitoring framework to enable loadable VNF monitoring. His knowledge of the
NFV area is a huge plus for Tacker as we embark on even greater challenges in
the Mitaka cycle.

Along those lines, we are actively looking to expand Tacker's core reviewer
team. If you are interested in the NFV Orchestration / VNF Manager space,
please stop by and explore the Tacker project [2].

Tacker team,

Please provide your -1 / +1 votes.

- Sridhar

[1] http://stackalytics.com/report/users/bob-haddleton
[2] https://wiki.openstack.org/wiki/Tacker


Re: [openstack-dev] [Sender Auth Failure] Re: [neutron] Neutron rolling upgrade - are we there yet?

2015-10-15 Thread Johnston, Nate

On Oct 15, 2015, at 11:23 AM, Ihar Hrachyshka wrote:

I also feel that upgrades are in a lot of ways not only a technical issue, but a
cultural one too. You need reviewers who are aware of all the moving
parts, and of how a seemingly innocent change can break the flow. That's why I
plan to start on a devref page specifically about upgrades, where we could lay
the groundwork on which scenarios we should support and which we should not (e.g.
we have plenty of compatibility code in agents to handle the old-controller
scenario, which should not be supported); how all the pieces interact and behave
in transition; and what to look for during reviews. Hopefully, once such a page
is up and read by folks, we will be able to have a more meaningful conversation
about our upgrade strategy.

That is a fantastic idea, and would really help me understand this part of the 
Neutron code as a reviewer.  Thanks!

—N.


[openstack-dev] [puppet] new kilo release: 6.1.0

2015-10-15 Thread Emilien Macchi
We're happy to announce the release of:
* puppet-ceilometer    6.1.0
* puppet-cinder        6.1.0
* puppet-designate     6.1.0
* puppet-glance        6.1.0
* puppet-heat          6.1.0
* puppet-horizon       6.1.0
* puppet-ironic        6.1.0
* puppet-keystone      6.1.0
* puppet-manila        6.1.0
* puppet-neutron       6.1.0
* puppet-nova          6.1.0
* puppet-openstacklib  6.1.0
* puppet-sahara        6.1.0
* puppet-swift         6.1.0
* puppet-tempest       6.1.0
* puppet-trove         6.1.0
* puppet-vswitch       2.1.0

More details about releases:
https://wiki.openstack.org/wiki/Puppet/releases

Kudos to folks who helped to make that happen!
-- 
Emilien Macchi





[openstack-dev] [fuel] OpenStack versioning in Fuel

2015-10-15 Thread Oleg Gelbukh
Hello,

I would like to highlight a problem that we are now going to have in Fuel
regarding versioning of OpenStack.

As you know, with the introduction of the Big Tent policy it was decided
that, starting with the Liberty dev cycle, the versioning schema of the whole
project changes. Year-based versions won't be assigned to individual
projects, nor will the coordinated release have a unified number [1].
Individual projects will have semver version numbers, while numbering of the
release itself seems to have been dropped.

However, in Fuel there are a lot of places where we use the year-based
version of the OpenStack release [2]. How are we going to handle this? Shall
we have openstack_version: 2015.2 all over the place? Should we come up with
something more sophisticated? Or just drop the OpenStack version component
from our versioning schema for good?

Please, share your opinions here or in corresponding reviews.

[1] http://ttx.re/new-versioning.html
[2] https://review.openstack.org/#/c/234296/

--
Best regards,
Oleg Gelbukh


Re: [openstack-dev] [oslo] Require documenting changes with versionadded and versionchanged

2015-10-15 Thread Joshua Harlow

Brant Knudson wrote:



On Thu, Oct 15, 2015 at 5:52 AM, Victor Stinner wrote:

Hi,

I propose that changes must now be documented in Oslo libraries. If
a change is not documented, it must *not* be approved.

IMHO it's very important to document all changes. Otherwise, it
becomes really hard to tell, just by reading the docs, whether a specific
parameter or function can be used :-/ And we should not
force users to always upgrade the Oslo libraries to the latest
versions. It doesn't work on stable branches :-)

Currently, ".. versionadded:: x.y" and ".. versionchanged:: x.y" are
not (widely) used in Oslo libraries. A good start would be to dig
through the Git history to add these tags. I started to do this for the
oslo.config library:
https://review.openstack.org/#/c/235232/
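
For anyone not familiar with the markup, here is a minimal sketch of what such
markers look like inside a docstring (the function, parameter and version
numbers below are made up, not taken from oslo.config):

    def frobnicate(data, timeout=None):
        """Process data and return the result.

        .. versionadded:: 1.2

        .. versionchanged:: 1.4
           Added the optional *timeout* parameter.
        """
        # Sphinx renders the directives above as "New in version 1.2" and
        # "Changed in version 1.4" notes in the generated API documentation.
        return data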

I'm interested to write similar changes for other Oslo libraries.

Since I created this change, I have started to vote -1 on all
oslo.config patches that change the API without documenting it.

What do you think?

Victor


Submitters don't know what release their change is going to get into.
They might submit the review when version 1.1.0 is current, so they mark
it as added in 1.2.0, but then the change doesn't get merged until
after 1.4.0 is tagged. Also, the submitter doesn't know what the next
release is going to be tagged as, since it might be 1.2.0 or it might be
2.0.0.

So this will create extra churn as reviews have to be updated with the
release #, and the docs will likely be wrong when reviewers forget to
check it (unless this can be automated).

We have the same problem with documenting when something is deprecated.


+1

I had this problem with deprecation versioning (the debtcollector 
library functions take version="XYZ" and removal_version="ABC" params, 
see 
http://docs.openstack.org/developer/debtcollector/examples.html#further-customizing-the-emitted-messages) 
and it is pretty hard to get those two numbers right, especially with 
weekly releases and not knowing when a review will merge... I'm not 
saying we shouldn't try to do this, but we just have to figure out how 
to do it in a smart way.


Perhaps there needs to be a gerrit-based way to add these, so for example 
a review submitter would write:


".. versionadded:: $FILL_ME_IN_WHEN_MERGED" and ".. versionchanged:: 
$FILL_ME_IN_WHEN_MERGED" or something, and gerrit when merging code 
would change those into real numbers...
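
As a toy illustration of the substitution step such a merge/release job could
run (nothing like this exists in gerrit today; the placeholder name is just the
example above):

    import fileinput
    import re

    PLACEHOLDER = re.compile(r'\$FILL_ME_IN_WHEN_MERGED')

    def stamp_version(paths, version):
        # Rewrite the files in place, turning the placeholder into the
        # version number that is actually being released.
        for line in fileinput.input(paths, inplace=True):
            print(PLACEHOLDER.sub(version, line), end='')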




I don't think it's worth it to document in which release something is
added in the oslo libs. We're not charging for this stuff, so if you want
the online docs to match your code, use the latest. Or check out the tag
and generate the docs for the release you're on to see if the
feature is there.

:: Brant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Require documenting changes with versionadded and versionchanged

2015-10-15 Thread Victor Stinner

Le 15/10/2015 12:52, Victor Stinner a écrit :

I started to do this for the oslo.config library:
https://review.openstack.org/#/c/235232/


More changes.

oslo.concurrency:
https://review.openstack.org/#/c/235416/

oslo.serialization:
https://review.openstack.org/#/c/235297/

oslo.utils:
https://review.openstack.org/#/c/235386/

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron rolling upgrade - are we there yet?

2015-10-15 Thread Ihar Hrachyshka
Hi Artur,

thanks a lot for caring about upgrades!

There are a lot of good points below. As you noted, surprisingly, we seem to 
have rolling upgrades working for the RPC layer. Before we go into complicating 
the database workflow with the oslo.versionedobjects transition heavy lifting, I 
would like us to spend cycles on making sure rolling upgrades work not just 
by accident, but are also covered by appropriate gating (I mean grenade).

I also feel that upgrades are in lots of ways not only a technical issue, but a 
cultural one too. You need reviewers who are aware of all the moving 
parts, and of how a seemingly innocent change can break the flow. That’s why I 
plan to start on a devref page specifically about upgrades, where we could lay 
out which scenarios we should support and which we should not (f.e. 
we have plenty of compatibility code in agents to handle the old-controller 
scenario, which should not be supported); how all the pieces interact and behave 
during the transition; and what to look for during reviews. Hopefully, once such a page is 
up and read by folks, we will be able to have a more meaningful conversation 
about our upgrade strategy.

> On 14 Oct 2015, at 20:10, Korzeniewski, Artur wrote:
> 
> Hi all,
> 
> I would like to gather all upgrade activities in Neutron in one place, in 
> order to summarizes the current status and future activities on rolling 
> upgrades in Mitaka.
> 

If you think it’s worth it, we can start up a new etherpad page to gather 
upgrade ideas and things to do.

> 
> 
> 1.  RPC versioning
> 
> a.  It is already implemented in Neutron.
> 
> b.  TODO: To have the rolling upgrade we have to implement the RPC 
> version pinning in conf.
> 
> i. I’m not a big fan 
> of this solution, but we can work out better idea if needed.

As Dan pointed out, and as I think Miguel was considering, we can have the pin 
defined by the agents in the cluster. Actually, we can have a per-agent pin.
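
For context, a rough sketch of what a conf-driven pin could look like on the
server side with oslo.messaging (the option name and topic are made up; neutron
has nothing like this today):

    from oslo_config import cfg
    import oslo_messaging as messaging

    cfg.CONF.register_opt(cfg.StrOpt(
        'agent_rpc_version_cap',
        help='Pin messages sent to agents to this RPC interface version.'))

    def get_agent_client(transport):
        target = messaging.Target(topic='agent-notifier', version='1.0')
        # version_cap makes the client refuse to emit messages newer than
        # what the (older) agents in the cluster can understand.
        return messaging.RPCClient(
            transport, target, version_cap=cfg.CONF.agent_rpc_version_cap)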

> 
> c.  Possible unit/functional tests to catch RPC version incompatibilities 
> between RPC revisions.
> 
> d.  TODO: Multi-node Grenade job to have rolling upgrades covered in CI.

That is not something to cover at the unit or functional test level.

As you mentioned, we already have the grenade project that is designed to test 
upgrades. To validate RPC compatibility on rolling upgrade we would need a so-called 
‘partial’ job (where different components are running with different 
versions; in the case of neutron it would mean a new controller and old agents). 
Such a job is present in the nova gate and validates RPC compatibility.

As far as I know, Russell Bryant was looking into introducing the job for 
neutron, but was blocked by ongoing grenade refactoring to support partial 
upgrades ‘the right way’ (using multinode setups). I think that we should check 
with the grenade folks on that matter; I have heard the start of Mitaka was the ETA for 
this work to complete.

> 
> 2.  Message content versioning – versioned objects
> 
> a.  TODO: implement Oslo Versionobject in Mitaka cycle. The interesting 
> entities to be implemented: network, subnet, port, security groups…

Though we haven’t touched the base neutron resources in Liberty, we introduced 
an oslo.versionedobjects-based NeutronObject class during Liberty as part of the QoS 
effort. I plan to expand on that work during Mitaka.

The existing code for QoS resources can be found at:

https://github.com/openstack/neutron/tree/master/neutron/objects

> 
> b.  Will OVO have impact on vendor plugins?

It surely can have a significant impact, but hopefully the dict-compat layer should 
make the transition smoother:

https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L50

> 
> c.  Be strict on changes in version objects in code review, any change in 
> object structure should increment the minor (backward-compatible) or major 
> (breaking change) RPC version.

That’s assuming we have a clear mapping of objects onto current RPC interfaces, 
which is not obvious. Another problem we would need to solve is core resource 
extensions (currently available in ml2 only), like qos or port_security, that 
modify resources based on controller configuration.

> 
> d.  Indirection API – message from newer format should be translated to 
> older version by neutron server.

For QoS, we used a new object-agnostic subscriber mechanism to propagate 
changes applied to QoS objects to the agents: 
http://docs.openstack.org/developer/neutron/devref/rpc_callbacks.html

It is already expected to downgrade objects based on the agent version (note it’s 
not implemented yet, but will surely be ready during Mitaka):

https://github.com/openstack/neutron/blob/master/neutron/api/rpc/handlers/resources_rpc.py#L142
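
For readers unfamiliar with what downgrading means in oslo.versionedobjects
terms, a minimal sketch (a hypothetical object, not one of neutron's real
classes):

    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields as ovo_fields

    class FakePolicy(ovo_base.VersionedObject):
        # Version 1.1 added the 'shared' field.
        VERSION = '1.1'

        fields = {
            'id': ovo_fields.UUIDField(),
            'shared': ovo_fields.BooleanField(default=False),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(FakePolicy, self).obj_make_compatible(
                primitive, target_version)
            if target_version == '1.0':
                # A 1.0 agent doesn't know about 'shared'; drop it before
                # putting the object on the wire.
                primitive.pop('shared', None)

Roughly, calling obj_to_primitive(target_version='1.0') on the server side would
then invoke this hook before the payload is handed to an older agent.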

> 
> 3.  Database migration
> 
> a.  Online schema migration was done in Liberty release, any work left to 
> do?

Nothing specific, maybe a bug or two here and there.

> 
> 

Re: [openstack-dev] [oslo] Require documenting changes with versionadded and versionchanged

2015-10-15 Thread Victor Stinner

Le 15/10/2015 16:34, Brant Knudson a écrit :

Submitters don't know what release their change is going to get into.
They might submit the review when version 1.1.0 is current so they mark
it with added in 1.2.0, but then the change doesn't get merged until
after 1.4.0 is tagged. Also, the submitter doesn't know what the next
release is going to be tagged as, since it might be 1.2.0 or it might be
2.0.0.


I propose to expect that the next release is X.Y+1 where X.Y is the 
current release. If the release manager wants to increase the major 
version number, it's very easy to fix all documentation at once. Example 
with sed in oslo.concurrency to replace 2.7 with 3.0:


   sed -i -e 's/\(versionadded\|versionchanged\):: 2.7/\1:: 3.0/g' \
  $(find oslo_concurrency/ -name "*.py")


So this will create extra churn as reviews have to be updated with the
release #, and the docs will likely be wrong when reviewers forget to
check it (unless this can be automated).


If the version X.Y+1 is released before the change is reviewed, the 
reviewer is responsible for complaining about outdated 
versionadded/versionchanged markup.


I know that it adds more work, but I consider that it's worth it.


I don't think it's worth it to document in which release something is
added in the oslo libs. We're not charging for this stuff so if you want
the online docs to match your code use the latest. Or check out the tag
and generate the docs for the release you're on to look at to see if the
feature is there.


Maybe it's time to think outside OpenStack. Oslo libraries can be used 
outside OpenStack, and API stability and documented changes matter even 
more there.


I wrote 4 patches to document all changes in 4 Oslo libraries 
(concurrency, config, serialization, utils). See my patches, they are 
quite small. It took me 2 hours to dig through the whole Git history since the 
creation of each library. Only some patches change the API. For 
example, bug fixes don't have to be documented.


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-15 Thread Markus Zoeller
Zaro wrote:
> We are soliciting feedback so please let us know what you think.

For spotting possible trivial bug fixes the new query option "delta"
will be useful. For example: "status:open delta:<100"

Would it be possible to create a "prio" label to help sort things out?
If I understand [1] correctly, we could have something like:

[label "Prio"]
function = NoOp
value = 0 Undecided
value = 1 critical
value = 2 high
value = 3 trivialfix

For example, this would allow creating a query like 

"status:open label:Prio=3" 

to get all reviews for trivial bug fixes.

Nevertheless, I'm looking forward to the upgrade.

References:
[1] 
https://gerrit-review.googlesource.com/Documentation/config-labels.html#label_custom

Regards,
Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Driver documentation in Ironic

2015-10-15 Thread Ramakrishnan G
Hi All,

This mail is related to driver-specific documentation in Ironic.

First a bit of context.  I work on iLO drivers in Ironic. Our team would
like to document both Ironic driver related stuff (which is related to
Ironic) and hardware related stuff like tested platforms, firmware
information, firmware issues, etc (which is not related to Ironic) in the
documentation. Today we keep it at two places - ironic related one in
ironic tree and (ironic + non-ironic) related one in wiki. It's hard for
both people who work on documentation and people who read this
documentation to update/refer information in two places.  Hence we decided
to raise the review [0] to move the content completely to wiki.  It got
mixed response.  While some people are okay with it, but some others
(including our ptl :)) feel it's worth putting it in-tree and try to
address the problems.

So what are the problems?
1) The ability to easily update driver documentation that is not related to
Ironic, without waiting.
2) To save the time of core reviewers who might not be familiar with the
hardware.

To solve the actual problem of updating the documentation easily while
keeping it in-tree, I checked with infra folks if a subset of a repository
can be +2ed/+Aed by another group.  They confirmed it's not possible
(unless there was a communication gap in that conversation, folks can
correct me if I am wrong).

The following are the options that I can think of to address this:

1) Easy approvals for patches solely related to driver documentation. Once
the driver team feels the documentation is ready, it can be +Aed by a core
team member, skipping the normal review process. Of course, any comments that
come in would still be addressed, but without waiting for the normal rule of 2x +2s.

2) A separate repository for driver documentation controlled by driver
developers (a bad idea ??)

3) Allow driver documentation to be pushed to the wiki, for those who wish to.

Thoughts ???

[0] https://review.openstack.org/#/c/225602/

Regards,
Ramesh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][ec2api] - First version of puppet module

2015-10-15 Thread Emilien Macchi
Hi Marcos,

On 10/15/2015 03:49 AM, Marcos Fermin Lobo wrote:
> Hi all,
> 
> I would like to report to all of you that a first version of OpenStack
> EC2 API puppet module is available in order to get your feedback.
> 
> This is the link https://github.com/cernops/puppet-ec2api
> 
> Please feel free to contribute to this puppet module. All contributions
> and feedback are very welcome.
> 

If you want this module to be part of OpenStack, you will need to read:
https://wiki.openstack.org/wiki/Puppet/New_module

It documents everything you need to know when creating new modules.
This is how we manage new modules; please follow the instructions and do
not hesitate to drop by #puppet-openstack on IRC to discuss it.
Everything you need to know about Puppet OpenStack is documented here:
https://wiki.openstack.org/wiki/Puppet


Thanks for your work!
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder authentication with authtoken

2015-10-15 Thread Kanthi P
Hi,

I need a way to access the cinder APIs using an auth token. I used the call below to
get the cinder client and tried listing the availability zones.


cinder_client = cc.Client(1, auth_token=self.ctxt.auth_token,
                          project_id=self.ctxt.tenant,
                          auth_url=self.ctxt.auth_url)
cinder_client.availability_zones.list()

But authentication fails with the exception:
"cinderclient.exceptions.BadRequest: Expecting to find username or userId
in passwordCredentials - the server could not comply with the request since
it is either malformed or otherwise incorrect. The client is assumed to be
in error. (HTTP 400)"

It works fine when I use username and api_key instead of auth_token

cinder_client = cc.Client(1, username="admin", api_key="X",
                          project_id="admin",
                          auth_url="http://X.X.X.X:5000/v2.0")
cinder_client.availability_zones.list()

Please help me find a way to use the cinder APIs with an auth_token.
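
One approach that may work (hedged: whether your python-cinderclient and
python-keystoneclient versions accept these exact keyword arguments, and whether
the v2 volume API is available, are assumptions to verify) is to wrap the
existing token in a keystone session and hand the session to the client:

    from cinderclient import client as cc
    from keystoneclient.auth.identity import v2
    from keystoneclient import session

    def get_cinder_client(auth_url, auth_token, tenant_id):
        # Re-use the already-issued token instead of username/password.
        auth = v2.Token(auth_url=auth_url, token=auth_token,
                        tenant_id=tenant_id)
        sess = session.Session(auth=auth)
        return cc.Client('2', session=sess)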

Thanks,
Kanthi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-10-15 Thread Neil Jerram
On 02/10/15 12:33, Neil Jerram wrote:
> On 02/10/15 11:42, Neil Jerram wrote:
>> Thanks Kyle! I'm looking at this now for networking-calico.
> Done, please see https://pypi.python.org/pypi/networking-calico.
>
> When you release, how will the version number be decided?  [...]

Excitingly, I believe networking-calico is ready now for its first
release.  Kyle - would you mind doing the honours?

(I'm assuming you're still the right person to ask - please do correct
me if not!)

Thanks,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-15 Thread Telles Nobrega
Congrats Vitaly!!

On Thu, Oct 15, 2015 at 11:40 AM Sergey Lukjanov 
wrote:

> I think we have a quorum.
>
> Vitaly, congrats!
>
> On Tue, Oct 13, 2015 at 6:39 PM, Matthew Farrellee 
> wrote:
>
>> +1!
>>
>> On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:
>>
>>> Hi folks,
>>>
>>> I'd like to propose Vitaly Gridnev as a member of the Sahara core
>>> reviewer team.
>>>
>>> Vitaly contributing to Sahara for a long time and doing a great job on
>>> reviewing and improving Sahara. Here are the statistics for reviews
>>> [0][1][2] and commits [3].
>>>
>>> Existing Sahara core reviewers, please vote +1/-1 for the addition of
>>> Vitaly to the core reviewer team.
>>>
>>> Thanks.
>>>
>>> [0]
>>>
>>> https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>>> [1] http://stackalytics.com/report/contribution/sahara-group/180
>>> [2] http://stackalytics.com/?metric=marks_id=vgridnev
>>> [3]
>>>
>>> https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Sahara Technical Lead
>>> (OpenStack Data Processing)
>>> Principal Software Engineer
>>> Mirantis Inc.
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Telles Nobrega
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-15 Thread Vladimir Kuklin
Matt

> You are right, it probably won't. At that point you are using puppet to
work around some fundamental issues in your OpenStack deployment.

Actually, as you know, with Fuel we are shipping our code to people who
have their own infrastructure. We do not have any control over that
infrastructure, nor any information about it. So we should expect the worst
- that sometimes such issues will happen and we need to take care of them
in the best possible way, e.g. someone tripped over a wire and then plugged it
back into the switch. And it seems that we can do that right in the puppet code
instead of making the user wait for a puppet rerun.

> Another one that is a deployment architecture problem. We solved this by
configuring the load balancer to direct keystone traffic to a single db
node, now we solve it with Fernet tokens. If you have this
> specific issue above it's going to manifest in all kinds of strange ways
and can even happen to control services like neutron/nova etc as well.
Which means even if we get puppet to pass with a bunch of
> retries, OpenStack is not healthy and the users will not be happy about
it.

Again, what you described is the case where the system was in some
undesirable state, like reading from an incorrect database, and then got into
a persistent working state. And you solve it by making the load balancer aware of
which backend to send requests to. But I am talking about sporadic failures
which, from a statistical point of view, look negligible and should not be
handled by the load balancer. Imagine a situation where the load balancer is OK
with a backend, but that backend faces an intermittent operational issue
like returning a garbled response or hitting some bug in the code. This is a
sporadic failure which will not be caught by the load balancer, because if you
make it sensitive enough to such issues it will behave poorly. So I think the
best option here is to handle such issues at the application level.
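
The pattern being argued for, sketched in Python for brevity (the real providers
are Ruby, and the set of status codes treated as transient is an assumption):
retry only a short, known list of transient failures and give up quickly so
genuine breakage still surfaces.

    import time

    TRANSIENT_STATUSES = (401, 502, 503, 504)

    def call_with_retries(request, attempts=3, delay=2):
        # 'request' is any callable returning (status_code, body).
        for attempt in range(1, attempts + 1):
            status, body = request()
            if status not in TRANSIENT_STATUSES or attempt == attempts:
                return status, body
            time.sleep(delay)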


On Thu, Oct 15, 2015 at 4:37 PM, Matt Fischer  wrote:

>
>
> On Thu, Oct 15, 2015 at 4:10 AM, Vladimir Kuklin 
> wrote:
>
>> Gilles,
>>
>> 5xx errors like 503 and 502/504 could always be intermittent operational
>> issues. E.g. when you access your keystone backends through some proxy and
>> there is a connectivity issue between the proxy and backends which
>> disappears in 10 seconds, you do not need to rerun the puppet completely -
>> just retry the request.
>>
>> Regarding "REST interfaces for all Openstack API" - this is very close
>> to another topic that I raised ([0]) - using native Ruby application and
>> handle the exceptions. Otherwise whenever we have an OpenStack client
>> (generic or neutron/glance/etc. one) sending us a message like '[111]
>> Connection refused' this message is very much determined by the framework
>> that OpenStack is using within this release for clients. It could be
>> `requests` or any other type of framework which sends different text
>> message depending on its version. So it is very bothersome to write a bunch
>> of 'if' clauses or gigantic regexps instead of handling simple Ruby
>> exception. So I agree with you here - we need to work with the API
>> directly. And, by the way, if you also support switching to native Ruby
>> OpenStack API client, please feel free to support movement towards it in
>> the thread [0]
>>
>> Matt and Gilles,
>>
>> Regarding puppet-healthcheck - I do not think that puppet-healtcheck
>> handles exactly what I am mentioning here - it is not running exactly at
>> the same time as we run the request.
>>
>> E.g. 10 seconds ago everything was OK, then we had a temporary
>> connectivity issue, then everything is ok again in 10 seconds. Could you
>> please describe how puppet-healthcheck can help us solve this problem?
>>
>
>
> You are right, it probably won't. At that point you are using puppet to
> work around some fundamental issues in your OpenStack deployment.
>
>
>>
>> Or another example - there was an issue with keystone accessing token
>> database when you have several keystone instances running, or there was
>> some desync between these instances, e.g. you fetched the token at keystone
>> #1 and then you verify it again keystone #2. Keystone #2 had some issues
>> verifying it not due to the fact that token was bad, but due to the fact
>> that that keystone #2 had some issues. We would get 401 error and instead
>> of trying to rerun the puppet we would need just to handle this issue
>> locally by retrying the request.
>>
>> [0] http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423
>>
>
> Another one that is a deployment architecture problem. We solved this by
> configuring the load balancer to direct keystone traffic to a single db
> node, now we solve it with Fernet tokens. If you have this specific issue
> above it's going to manifest in all kinds of strange ways and can even
> happen to control services like neutron/nova etc as well. Which means even
> if we get puppet to pass with a bunch of 

Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-15 Thread Matt Fischer
On Thu, Oct 15, 2015 at 4:10 AM, Vladimir Kuklin 
wrote:

> Gilles,
>
> 5xx errors like 503 and 502/504 could always be intermittent operational
> issues. E.g. when you access your keystone backends through some proxy and
> there is a connectivity issue between the proxy and backends which
> disappears in 10 seconds, you do not need to rerun the puppet completely -
> just retry the request.
>
> Regarding "REST interfaces for all Openstack API" - this is very close to
> another topic that I raised ([0]) - using native Ruby application and
> handle the exceptions. Otherwise whenever we have an OpenStack client
> (generic or neutron/glance/etc. one) sending us a message like '[111]
> Connection refused' this message is very much determined by the framework
> that OpenStack is using within this release for clients. It could be
> `requests` or any other type of framework which sends different text
> message depending on its version. So it is very bothersome to write a bunch
> of 'if' clauses or gigantic regexps instead of handling simple Ruby
> exception. So I agree with you here - we need to work with the API
> directly. And, by the way, if you also support switching to native Ruby
> OpenStack API client, please feel free to support movement towards it in
> the thread [0]
>
> Matt and Gilles,
>
> Regarding puppet-healthcheck - I do not think that puppet-healtcheck
> handles exactly what I am mentioning here - it is not running exactly at
> the same time as we run the request.
>
> E.g. 10 seconds ago everything was OK, then we had a temporary
> connectivity issue, then everything is ok again in 10 seconds. Could you
> please describe how puppet-healthcheck can help us solve this problem?
>


You are right, it probably won't. At that point you are using puppet to
work around some fundamental issues in your OpenStack deployment.


>
> Or another example - there was an issue with keystone accessing token
> database when you have several keystone instances running, or there was
> some desync between these instances, e.g. you fetched the token at keystone
> #1 and then you verify it again keystone #2. Keystone #2 had some issues
> verifying it not due to the fact that token was bad, but due to the fact
> that that keystone #2 had some issues. We would get 401 error and instead
> of trying to rerun the puppet we would need just to handle this issue
> locally by retrying the request.
>
> [0] http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423
>

This is another one that is a deployment-architecture problem. We solved it by
configuring the load balancer to direct keystone traffic to a single db
node; now we solve it with Fernet tokens. If you have this specific issue
above, it's going to manifest in all kinds of strange ways and can even
happen to control services like neutron/nova etc. as well. Which means that even
if we get puppet to pass with a bunch of retries, OpenStack is not healthy
and the users will not be happy about it.

I don't want to give the impression that I am completely opposed to
retries, but on the other hand, when my deployment is broken, I want to
know quickly, not after 10 minutes of retries, so we need to balance that.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-15 Thread michael mccune

congrats Vitaly!

On 10/15/2015 10:38 AM, Sergey Lukjanov wrote:

I think we have a quorum.

Vitaly, congrats!

On Tue, Oct 13, 2015 at 6:39 PM, Matthew Farrellee wrote:

+1!

On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Vitaly Gridnev as a member of the Sahara core
reviewer team.

Vitaly contributing to Sahara for a long time and doing a great
job on
reviewing and improving Sahara. Here are the statistics for reviews
[0][1][2] and commits [3].

Existing Sahara core reviewers, please vote +1/-1 for the
addition of
Vitaly to the core reviewer team.

Thanks.

[0]

https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/180
[2] http://stackalytics.com/?metric=marks_id=vgridnev
[3]

https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-15 Thread Vitaly Gridnev
Thank you! I'm very happy to be a part of the sahara core team.

On Thu, Oct 15, 2015 at 5:38 PM, Sergey Lukjanov 
wrote:

> I think we have a quorum.
>
> Vitaly, congrats!
>
> On Tue, Oct 13, 2015 at 6:39 PM, Matthew Farrellee 
> wrote:
>
>> +1!
>>
>> On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:
>>
>>> Hi folks,
>>>
>>> I'd like to propose Vitaly Gridnev as a member of the Sahara core
>>> reviewer team.
>>>
>>> Vitaly contributing to Sahara for a long time and doing a great job on
>>> reviewing and improving Sahara. Here are the statistics for reviews
>>> [0][1][2] and commits [3].
>>>
>>> Existing Sahara core reviewers, please vote +1/-1 for the addition of
>>> Vitaly to the core reviewer team.
>>>
>>> Thanks.
>>>
>>> [0]
>>>
>>> https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>>> [1] http://stackalytics.com/report/contribution/sahara-group/180
>>> [2] http://stackalytics.com/?metric=marks_id=vgridnev
>>> [3]
>>>
>>> https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Sahara Technical Lead
>>> (OpenStack Data Processing)
>>> Principal Software Engineer
>>> Mirantis Inc.
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-15 Thread Emilien Macchi
This thread is really huge and only 3 people are talking.
Why don't you continue on an etherpad and do some brainstorm on it?
If you do so, please share the link here.

It would be much more effective in my opinion.

On 10/15/2015 08:26 AM, Sofer Athlan-Guyot wrote:
> Gilles Dubreuil  writes:
> 
>> On 08/10/15 03:40, Rich Megginson wrote:
>>> On 10/07/2015 09:08 AM, Sofer Athlan-Guyot wrote:
 Rich Megginson  writes:

> On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
>> Rich Megginson  writes:
>>
>>> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 30/09/15 03:43, Rich Megginson wrote:
>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil  writes:
>>>
 A. The 'composite namevar' approach:

 keystone_tenant {'projectX::domainY': ... }
   B. The 'meaningless name' approach:

keystone_tenant {'myproject': name='projectX',
 domain=>'domainY',
 ...}

 Notes:
   - Actually using both combined should work too with
 the domain
 supposedly overriding the name part of the domain.
   - Please look at [1] this for some background
 between the two
 approaches:

 The question
 -
 Decide between the two approaches, the one we would like to
 retain for
 puppet-keystone.

 Why it matters?
 ---
 1. Domain names are mandatory in every user, group or
 project.
 Besides
 the backward compatibility period mentioned earlier, where
 no domain
 means using the default one.
 2. Long term impact
 3. Both approaches are not completely equivalent which
 different
 consequences on the future usage.
>>> I can't see why they couldn't be equivalent, but I may be
>>> missing
>>> something here.
>> I think we could support both.  I don't see it as an either/or
>> situation.
>>
 4. Being consistent
 5. Therefore the community to decide

 Pros/Cons
 --
 A.
>>> I think it's the B: meaningless approach here.
>>>
Pros
  - Easier names
>>> That's subjective, creating unique and meaningful name
>>> don't look
>>> easy
>>> to me.
>> The point is that this allows choice - maybe the user
>> already has some
>> naming scheme, or wants to use a more "natural" meaningful
>> name -
>> rather
>> than being forced into a possibly "awkward" naming scheme
>> with "::"
>>
>>   keystone_user { 'heat domain admin user':
>> name => 'admin',
>> domain => 'HeatDomain',
>> ...
>>   }
>>
>>   keystone_user_role {'heat domain admin
>> user@::HeatDomain':
>> roles => ['admin']
>> ...
>>   }
>>
Cons
  - Titles have no meaning!
>> They have meaning to the user, not necessarily to Puppet.
>>
  - Cases where 2 or more resources could exists
>> This seems to be the hardest part - I still cannot figure
>> out how
>> to use
>> "compound" names with Puppet.
>>
  - More difficult to debug
>> More difficult than it is already? :P
>>
  - Titles mismatch when listing the resources
 (self.instances)

 B.
Pros
  - Unique titles guaranteed
  - No ambiguity between resource found and their
 title
Cons
   

Re: [openstack-dev] [Neutron] [Tempest] where fwaas tempest tests should be?

2015-10-15 Thread Matthew Treinish
On Thu, Oct 15, 2015 at 08:25:40PM +0900, Takashi Yamamoto wrote:
> hi,
> 
> i'm looking in fwaas tempest tests and have a question about code location.
> 
> currently,
> 
> - fwaas api tests and its rest client are in neutron repo

This is a bad situation because it means that we're directly coupling fwaas to
the neutron repo. Everything that depends on fwaas should be separate and
self-contained outside of the neutron repo. The longer things stay like this,
the more headaches it will lead to down the road.

> - there are no fwaas scenario tests
> 
> eventually,
> 
> - fwaas api tests should be moved into neutron-fwaas repo
> - fwaaa scenario tests should be in neutron-fwaas repo too.

This is definitely the right direction.

> - the rest client will be in tempest-lib

The discussion on which clients are in scope for tempest-lib hasn't fully
happened yet. For right now we're taking a conservative approach and the clients
can live in tempest-lib if there are tests in the tempest tree using them
(which is not the case for fwaas). This might change eventually (there will be a summit
session on it), but for right now I'd say the fwaas clients should live in the
fwaas repo. (They can, and likely should, still be based on the tempest-lib rest
client.)

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Tempest] where fwaas tempest tests should be?

2015-10-15 Thread Matthew Treinish
On Thu, Oct 15, 2015 at 09:02:22AM -0400, Assaf Muller wrote:
> On Thu, Oct 15, 2015 at 7:25 AM, Takashi Yamamoto 
> wrote:
> 
> > hi,
> >
> > i'm looking in fwaas tempest tests and have a question about code location.
> >
> > currently,
> >
> > - fwaas api tests and its rest client are in neutron repo
> > - there are no fwaas scenario tests
> >
> > eventually,
> >
> > - fwaas api tests should be moved into neutron-fwaas repo
> > - fwaaa scenario tests should be in neutron-fwaas repo too.
> >
> 
> I believe scenario tests that invoke APIs outside of Neutron should
> stay in (or be introduced to) Tempest.

So testing the neutron advanced services was actually one of the first things
we decided was out of scope for tempest (well over a year ago). It took
some time to get equivalent testing set up elsewhere, but tests and support for
the advanced services were removed from tempest on purpose. I'd suggest that
you look at the tempest plugin interface:

http://docs.openstack.org/developer/tempest/plugin.html

if you'd like to make the fwaas tests (or any other adv. service tests)
integrate more cleanly with the rest of tempest.
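
For reference, a rough sketch of what that interface looks like (module path and
method names per the linked doc; treat the exact details, and the fwaas paths,
as assumptions to verify there). The class gets wired up through a
'tempest.test_plugins' entry point in the package's setup.cfg:

    import os

    from tempest.test_discover import plugins

    class FwaasTempestPlugin(plugins.TempestPlugin):

        def load_tests(self):
            # Tell tempest where this plugin's tests live on disk.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "neutron_fwaas/tests/tempest_plugin/tests"
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            # Register any fwaas-specific config options with tempest.
            pass

        def get_opt_lists(self):
            return []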

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team

2015-10-15 Thread Sergey Lukjanov
I think we have a quorum.

Vitaly, congrats!

On Tue, Oct 13, 2015 at 6:39 PM, Matthew Farrellee  wrote:

> +1!
>
> On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:
>
>> Hi folks,
>>
>> I'd like to propose Vitaly Gridnev as a member of the Sahara core
>> reviewer team.
>>
>> Vitaly contributing to Sahara for a long time and doing a great job on
>> reviewing and improving Sahara. Here are the statistics for reviews
>> [0][1][2] and commits [3].
>>
>> Existing Sahara core reviewers, please vote +1/-1 for the addition of
>> Vitaly to the core reviewer team.
>>
>> Thanks.
>>
>> [0]
>>
>> https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>> [1] http://stackalytics.com/report/contribution/sahara-group/180
>> [2] http://stackalytics.com/?metric=marks_id=vgridnev
>> [3]
>>
>> https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Monty Taylor

On 10/15/2015 02:53 AM, Thomas Goirand wrote:

On 10/15/2015 12:18 AM, Robert Collins wrote:

On 15 October 2015 at 11:11, Thomas Goirand  wrote:


One major pain point is unfortunately something ridiculously easy to
fix, but which nobody seems to care about: the long & short descriptions
format. These are usually buried into the setup.py black magic, which by
the way I feel is very unsafe (does PyPi actually execute "python
setup.py" to find out about description texts? I hope they are running
this in a sandbox...).


You should go look at $package.egg-info/PKG-INFO in the source tarball. 
The short and long description are in that and it's a parseable format


It's an RFC822 format information file - so the following python gets 
the short description inside of an unpacked shade tarball:


import rfc822
a = rfc822.Message(open('shade.egg-info/PKG-INFO', 'r'))
print a['summary']

I think this should solve your problem of getting the information. (you 
don't need python - rfc822 is pretty parseable by a human)



Since everyone uses the fact that PyPi accepts RST format for the long
description, there's nothing that can really easily fit the
debian/control. Probably a rst2txt tool would help, but still, the long
description would still be polluted with things like changelog, examples
and such (damned, why people think it's the correct place to put that...).


The reason they think that is because PyPI is a place where people go to 
download things, and often quick and easy examples of how the library 
works help people decide if it's for them or not. The use cases are 
different.



The only way I'd see to fix this situation, would be a PEP. This will
probably take a decade to have everyone switching to a new correct way
to write a long & short description...


Perhaps Debian (1 thing) should change, rather than trying to change
all the upstreams packaged in it (>20K) :)

-Rob


Well, having the changelog (and other stuff) of packages merged into the
long description is not helpful, not for Debian, nor for upstream Python
packages.


I actually have gotten multiple requests from people to get changelogs 
uploaded to PyPI (which involves appending to the long description) 
because it is helpful to them. I think right now people find value in 
the information being there, so I don't think this one is very winnable 
right now. Sorry ...


In

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Thomas Goirand
On 10/15/2015 03:59 PM, Monty Taylor wrote:
> On 10/15/2015 02:53 AM, Thomas Goirand wrote:
>> Well, having the changelog (and other stuff) of packages merged into the
>> long description is not helpful, not for Debian, nor for upstream Python
>> packages.
> 
> I actually have gotten multiple requests from people to get changelogs
> uploaded to PyPI (which involves appending to the long description)
> because it is helpful to them. . I think right now people find value in
> the information being there so I don't think this one is very winnable
> right now. Sorry ...

It could be a win if PyPI supported other (more relevant) places
to put the information published on the site.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] getting rid of tablib completely (Requests + urllib3 + distro packages)

2015-10-15 Thread Thomas Goirand
On 10/15/2015 07:35 PM, Doug Hellmann wrote:
> Excerpts from Thomas Goirand's message of 2015-10-15 00:31:44 +0200:
> I wasn't clear. The reason cliff-tablib exists is because of earlier
> complaints (maybe from you?) about tablib. I didn't have time to
> rewrite the formatters, so I pulled them into their own package so
> it wasn't necessary to ship them. Now that we have the useful ones
> included in cliff directly in a way that doesn't use tablib, you
> should not need either cliff-tablib or tablib. We may need patches
> to upstream packages that still have the cliff-tablib dependency.
> Nothing should be using tablib directly, AFAIK.
> 
> Doug

Thanks for clearing this up.

I'll see if I can get rid of tablib in the packages and whether everything still
works. Though I may work on this a bit later: I've currently
started the huge task of uploading 102 Liberty (and related) packages to
move them from Debian Experimental to Sid, and after that, I'll have a 5-day
break, then the Tokyo summit... But after that, removing tablib
from the equation will allow me to bring even more Python 3 support to the
packages, which is great.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [fuel] Life cycle management use cases

2015-10-15 Thread Adam Heczko
Hi All, in regard to lifecycle management, I've just submitted a blueprint
covering security assessment of OpenStack clouds [0].
Note that it doesn't aim to intersect with the Congress project. Congress is
focused on 'overlay' or cloud security policy.
The Nessus integration is 100% oriented towards cloud control plane security,
i.e. assessment of controller and hypervisor hosts.

[0] https://blueprints.launchpad.net/fuel/+spec/fuel-provision-nessus

On Fri, Oct 16, 2015 at 2:50 AM, Mike Scherbakov 
wrote:

> Hi Shamail,
> thank you for letting me know. I didn't come across the page when I was
> googling.
>
> We are collecting use cases for the underlay, though, so it's more relevant for
> projects such as Fuel, TripleO, and Puppet OpenStack than for VM-, DB-, etc.-related
> management.
>
> If you have any work started on underlay, I'll be more than happy to
> collaborate.
>
> PS. Meanwhile we've got lots of feedback in the etherpad, thanks all!
>
> On Wed, Oct 14, 2015 at 3:34 PM Shamail  wrote:
>
>> Great topic...
>>
>> Please note that the Product WG[1] also has a user story focused on
>> Lifecycle Management.  While FUEL is one aspect of the overall workflow, we
>> would also like the team to consider project level enhancements (e.g.
>> garbage collection inside the DB).
>>
>> The Product WG would welcome your insights on lifecycle management
>> tremendously.  Please help by posting comments to our existing user
>> story[2].
>>
>>
>> [1] https://wiki.openstack.org/wiki/ProductTeam
>> [2]
>> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/draft/lifecycle_management.html
>>
>> Thanks,
>> Shamail
>>
>> > On Oct 14, 2015, at 5:04 PM, Mike Scherbakov 
>> wrote:
>> >
>> > Hi fuelers,
>> > as we all know, Fuel lacks many of life cycle management (LCM) use
>> cases. It becomes a very hot issue for many of our users, as current LCM
>> capabilities are not very rich.
>> >
>> > In order to think how we can fix it, we need to collect use cases
>> first, and prioritize them if needed. So that whatever a change in
>> architecture we are about to make, we would need to ensure that we meet LCM
>> use cases or have a proposal how to close it in a foreseeable future.
>> >
>> > I started to collect use cases in the etherpad:
>> https://etherpad.openstack.org/p/lcm-use-cases.
>> >
>> > Please contribute in there.
>> >
>> > Thank you,
>> > --
>> > Mike Scherbakov
>> > #mihgen
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> ___
>> Product-wg mailing list
>> product...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg
>>
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Robert Collins
...
>> Now there is an explicit list of what works.
>>
>> Isn't that *better* ?
>>
> Yes, it's good to know what works, does that show multiple versions of
> the same package when multiple are known to work?

It shows the highest version we know to work. So you could in
principle take any lower version that is permitted by the
package-local requirements. E.g. consider oslo.messaging and Nova.

Nova's dep on oslo.messaging is:
oslo.messaging>=1.8.0 # Apache-2.0

upper-constraints is:
oslo.messaging===2.5.0

So any version >= 1.8.0 and <= 2.5.0

We haven't successfully tested 2.6.0 (or it would be in
upper-constraints), and we didn't have any failures with versions in
between (or they would be listed in Nova's requirements as a != line).

BUT: we haven't (ever!) tested that the lowest versions we specify
work. When folk know they are adding a hard dependency on a version we
do raise the lower versions, but that's ad hoc and best-effort today.
I'd like to see a lower-constraints.txt reflecting the oldest version
that works across all of OpenStack (as a good boundary case to test) -
but we need to fix pip first to teach it to choose lower versions over
higher versions (https://github.com/pypa/pip/issues/3188 - I thought
I'd filed it previously but couldn't find it...)

More generally, we don't [yet] have the testing setup to test multiple
versions on an ongoing basis, so we can't actually make any statement
other than 'upper-constraints.txt is known to work'. Note: before
constraints we couldn't even make *that* statement. The statement we
could make then was 'if you look up the change in gerrit and, from that,
the CI dsvm test run which got through the gate, then you can
figure out *a* version of the dependencies that worked'.

>  If so I can build a
> range from that.  It's not better as it is because I still don't know
> where liberty ends and mitaka begins.  Is there any place I can find that?

We don't always release every library in every release. See for
instance this warning from Juno's global-requirements.txt:

# NOTE(tonyb): the oslo.utils acceptable versions overlap in juno and kilo
# please ensure that any new releases are handled carefully
oslo.utils>=1.4.0,<1.5.0 # Apache-2.0

So - there may not in fact *be* a line where liberty ends and mitaka begins.

And if we can sort out the backwards compat stuff, we should be able
to actually get rid of that distinction, and instead say the much
simpler thing 'we're compatible with everything released while this
branch is supported'.

But in the interim, we're using semver everywhere, so at least for the
internal dependencies, you could just set a cap on the next major
version within Gentoo. It may be too restrictive, but it should work
(and if it doesn't then we've failed at semver and we need to figure
out how not to do that).

> What about a daily or weekly check without the constraints file so you
> know when something breaks?

We generate a new list daily with the latest releases on PyPI:
http://logs.openstack.org/periodic/propose-requirements-constraints-master/5b4e488/console.html
is an example log from the generation of such an update. That gets
proposed to the requirements project and can be reviewed here -
https://review.openstack.org/#/q/status:open+project:openstack/requirements+topic:openstack/requirements/constraints,n,z

So we see things that break, and then we can propose != exclusions for
them (or, if it's better, fixes to our projects - it depends on the nature
of the interaction and so on).

>  This would allow you to at least know when
> you need to place caps and I could consume that.

Ish - it tells us where things break, and from that we can decide on
exclusion (single bad release) or cap (deliberate API break they won't
roll back). Caps are massively more disruptive, so we try to avoid
them.

> I'm fine with leaving
> caps off.  If I can consume mitaka deps in liberty then that's great :D
...
> It's a step backwards from my perspective, before I had a clear
> delineation where support of something stops.  Now I don't know when it
> stops and any given update could break the system.  I'm not sure how it
> could be smoother for other systems.

The problem is, you didn't actually have such a line (as should be
evident from the fact we've broken juno again and again and again since
the release).

> So, does this mean that I can just leave the packages uncapped and know
> that they will work?  Are there tests being run for this scenario?

No, though I *want* us to get to that place, and thats the debate
we're having about backwards compat in clients and libraries.

{Note that we've never systematically capped third party libraries and
so they are always a potential surprise too - but it's again the
balance between presume-bad and presume-good behaviour from the
upstreams}.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__

Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Robert Collins
On 16 October 2015 at 12:09, Matthew Thode  wrote:
> On 10/15/2015 06:00 PM, Robert Collins wrote:

> Thanks for the replies, I think I have a way forward (using upper-reqs
> as a cap).  I might need to make a tool that munges the (test-)reqs.txt
> to generate one as an easier reference.  having to open up two files to
> reference is kinda annoying.

The parser in openstack_requirements for requirements files may be of
use to you in doing that. We use it to e.g. validate that patches to
requirements are compatible with the current upper-constraints.
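
As a rough illustration of the munging described above (this sketch uses the
'packaging' library rather than the openstack_requirements parser, and the file
names and the "pkg>=low,<=cap" output format are assumptions, not an existing
OpenStack tool):

    from packaging.requirements import Requirement

    def load_caps(path='upper-constraints.txt'):
        caps = {}
        for raw in open(path):
            line = raw.split('#')[0].strip()
            if not line or line.startswith('-'):
                continue
            req = Requirement(line)           # e.g. "oslo.messaging===2.5.0"
            caps[req.name.lower()] = next(iter(req.specifier)).version
        return caps

    def capped_requirements(path, caps):
        # Environment markers are ignored here for brevity.
        for raw in open(path):
            line = raw.split('#')[0].strip()
            if not line or line.startswith('-'):
                continue
            req = Requirement(line)
            cap = caps.get(req.name.lower())
            if cap is None:
                yield line
                continue
            spec = str(req.specifier)
            cap_spec = '<=%s' % cap
            yield req.name + (spec + ',' + cap_spec if spec else cap_spec)

    if __name__ == '__main__':
        for entry in capped_requirements('requirements.txt', load_caps()):
            print(entry)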

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-15 Thread Gilles Dubreuil


On 15/10/15 21:10, Vladimir Kuklin wrote:
> Gilles,
> 
> 5xx errors like 503 and 502/504 could always be intermittent operational
> issues. E.g. when you access your keystone backends through some proxy
> and there is a connectivity issue between the proxy and backends which
> disappears in 10 seconds, you do not need to rerun the puppet completely
> - just retry the request.
> 

Look, I don't have much experience with those errors in real-world
scenarios. And this is just a detail for my understanding: those
errors are coming from a running HTTP service, therefore this is not a
connectivity issue to the service but something wrong beyond it.

> Regarding "REST interfaces for all Openstack API" - this is very close
> to another topic that I raised ([0]) - using native Ruby application and
> handle the exceptions. Otherwise whenever we have an OpenStack client
> (generic or neutron/glance/etc. one) sending us a message like '[111]
> Connection refused' this message is very much determined by the
> framework that OpenStack is using within this release for clients. It
> could be `requests` or any other type of framework which sends different
> text message depending on its version. So it is very bothersome to write
> a bunch of 'if' clauses or gigantic regexps instead of handling simple
> Ruby exception. So I agree with you here - we need to work with the API
> directly. And, by the way, if you also support switching to native Ruby
> OpenStack API client, please feel free to support movement towards it in
> the thread [0]
> 

Yes, I totally agree with you on that approach (native Ruby lib).
That's why I mentioned it here: for me, the exception handling would
be solved at once.
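
Just to illustrate the kind of retry behaviour being discussed, here is
a rough sketch of the idea (in Python rather than the eventual Ruby
provider code; the request callable and the numbers are made up):

  # Sketch: retry transient HTTP failures a few times before giving up,
  # instead of failing the whole run on the first 502/503/504.
  import time

  TRANSIENT = {502, 503, 504}

  def call_with_retries(request, attempts=3, delay=10):
      for attempt in range(1, attempts + 1):
          status, body = request()      # hypothetical callable -> (status, body)
          if status not in TRANSIENT:
              return status, body
          if attempt < attempts:
              time.sleep(delay)         # let the intermittent issue clear
      raise RuntimeError('still failing with HTTP %d after %d attempts'
                         % (status, attempts))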

> Matt and Gilles,
> 
> Regarding puppet-healthcheck - I do not think that puppet-healtcheck
> handles exactly what I am mentioning here - it is not running exactly at
> the same time as we run the request.
> 
> E.g. 10 seconds ago everything was OK, then we had a temporary
> connectivity issue, then everything is ok again in 10 seconds. Could you
> please describe how puppet-healthcheck can help us solve this problem? 
> 
> Or another example - there was an issue with keystone accessing token
> database when you have several keystone instances running, or there was
> some desync between these instances, e.g. you fetched the token at
> keystone #1 and then you verify it again keystone #2. Keystone #2 had
> some issues verifying it not due to the fact that token was bad, but due
> to the fact that that keystone #2 had some issues. We would get 401
> error and instead of trying to rerun the puppet we would need just to
> handle this issue locally by retrying the request.
> 
> [0] http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423
> 
> On Thu, Oct 15, 2015 at 12:23 PM, Gilles Dubreuil  > wrote:
> 
> 
> 
> On 15/10/15 12:42, Matt Fischer wrote:
> >
> >
> > On Thu, Oct 8, 2015 at 5:38 AM, Vladimir Kuklin  
> > >> wrote:
> >
> > Hi, folks
> >
> > * Intro
> >
> > Per our discussion at Meeting #54 [0] I would like to propose the
> > uniform approach of exception handling for all puppet-openstack
> > providers accessing any types of OpenStack APIs.
> >
> > * Problem Description
> >
> > While working on Fuel during deployment of multi-node HA-aware
> > environments we faced many intermittent operational issues, e.g.:
> >
> > 401/403 authentication failures when we were doing scaling of
> > OpenStack controllers due to difference in hashing view between
> > keystone instances
> > 503/502/504 errors due to temporary connectivity issues
> 
> The 5xx errors are not connectivity issues:
> 
> 500 Internal Server Error
> 501 Not Implemented
> 502 Bad Gateway
> 503 Service Unavailable
> 504 Gateway Timeout
> 505 HTTP Version Not Supported
> 
> I believe nothing should be done to trap them.
> 
> The connectivity issues are different matter (to be addressed as
> mentioned by Matt)
> 
> > non-idempotent operations like deletion or creation - e.g. if you
> > are deleting an endpoint and someone is deleting on the other node
> > and you get 404 - you should continue with success instead of
> > failing. 409 Conflict error should also signal us to re-fetch
> > resource parameters and then decide what to do with them.
> >
> > Obviously, it is not optimal to rerun puppet to correct such errors
> > when we can just handle an exception properly.
> >
> > * Current State of Art
> >
> > There is some exception handling, but it does not cover all the
> > aforementioned use cases.
> >
> > * Proposed solution
>   

Re: [openstack-dev] [puppet] Proposing Denis Egorenko core

2015-10-15 Thread Jay Pipes

On 10/15/2015 02:44 PM, Dmitry Borodaenko wrote:

Congratulations Denis, keep it up!

On behalf of Fuel team, I'd like to thank Puppet OpenStack team for
inviting, welcoming, and enabling the massive expansion of collaboration
between our projects. We have gone a great distance from a monolithic
fork to a modularized downstream, from zero communication to having a
Fuel developer accepted as a Puppet OpenStack core reviewer, and it
wouldn't have been possible without Emilien's seminal email and without
help from all of you!


+1. As a TC member, this is exactly the kind of collaboration and 
cooperation that I was hoping to see emerge.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [fuel] Life cycle management use cases

2015-10-15 Thread Mike Scherbakov
Hi Shamail,
thank you for letting me know. I didn't come across the page when I was
googling.

We are collecting use cases for the underlay, though, so it's more relevant
for projects such as Fuel, TripleO, and Puppet OpenStack than for managing
VMs, DBs, etc.

If you have any work started on underlay, I'll be more than happy to
collaborate.

PS. Meanwhile we've got lots of feedback in the etherpad, thanks all!

On Wed, Oct 14, 2015 at 3:34 PM Shamail  wrote:

> Great topic...
>
> Please note that the Product WG[1] also has a user story focused on
> Lifecycle Management.  While FUEL is one aspect of the overall workflow, we
> would also like the team to consider project level enhancements (e.g.
> garbage collection inside the DB).
>
> The Product WG would welcome your insights on lifecycle management
> tremendously.  Please help by posting comments to our existing user
> story[2].
>
>
> [1] https://wiki.openstack.org/wiki/ProductTeam
> [2]
> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/draft/lifecycle_management.html
>
> Thanks,
> Shamail
>
> > On Oct 14, 2015, at 5:04 PM, Mike Scherbakov 
> wrote:
> >
> > Hi fuelers,
> > as we all know, Fuel lacks many of life cycle management (LCM) use
> cases. It becomes a very hot issue for many of our users, as current LCM
> capabilities are not very rich.
> >
> > In order to think how we can fix it, we need to collect use cases first,
> and prioritize them if needed. So that whatever a change in architecture we
> are about to make, we would need to ensure that we meet LCM use cases or
> have a proposal how to close it in a foreseeable future.
> >
> > I started to collect use cases in the etherpad:
> https://etherpad.openstack.org/p/lcm-use-cases.
> >
> > Please contribute in there.
> >
> > Thank you,
> > --
> > Mike Scherbakov
> > #mihgen
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> Product-wg mailing list
> product...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Revision of the RFE/blueprint submission process

2015-10-15 Thread Armando M.
A follow-up...

On 13 October 2015 at 18:38, Armando M.  wrote:

> Hi neutrinos,
>
> From last cycle, the team has introduced the concept of RFE bugs [0]. I
> have suggested a number of refinements over the past few days [1,2,3] to
> streamline/clarify the process a bit further, also in an attempt to deal
> with the focus and breadth of the project [4,5].
>
> Having said that, I would invite you not to take anything for granted, and use
> the ML and/or join us on the #openstack-neutron-release irc channel to tell
> us what works and what doesn't about the processes we have in place.
>
> There's no improvement without feedback!
>

We have 400+ blueprints lingering [1]; most of them are now rotten. To
try and improve the situation I have also refined/proposed changes to
how we register/use blueprints [2]. Comments are welcome.

I will go through the pile and take on the painstaking effort of cleaning
up the list.

Cheers,
Armando

[1] https://blueprints.launchpad.net/neutron/+specs
[2] https://review.openstack.org/#/c/235640


>
> Cheers,
> Armando
>
> [0]
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
> [1] https://review.openstack.org/#/c/231819/
> [2] https://review.openstack.org/#/c/234491/
> [3] https://review.openstack.org/#/c/234502/
> [4]
> http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/Neutron/Armando_Migliaccio.txt#n29
> [5]
> http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/Neutron/Armando_Migliaccio.txt#n38
> [6]
> http://docs.openstack.org/developer/neutron/policies/office-hours.html#neutron-ptl-office-hours
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Revision of the RFE/blueprint submission process

2015-10-15 Thread Armando M.
On 14 October 2015 at 00:17, Flavio Percoco  wrote:

> On 13/10/15 18:38 -0700, Armando M. wrote:
>
>> Hi neutrinos,
>>
>> From last cycle, the team has introduced the concept of RFE bugs [0]. I
>> have
>> suggested a number of refinements over the past few days [1,2,3] to
>> streamline/
>> clarify the process a bit further, also in an attempt to deal with the
>> focus
>> and breadth of the project [4,5].
>>
>> Having said that, I would invite you not to take anything for granted, and
>> use the
>> ML and/or join us on the #openstack-neutron-release irc channel to tell
>> us what
>> works and what doesn't about the processes we have in place.
>>
>> There's no improvement without feedback!
>>
>
> Thanks for sharing.
>
> In Glance, we're about to jump into a very similar workflow and I'm
> glad to know it's worked for the neutron team so far.
>
> I'll be posting this on the mailing list soon and we can share
> feedback and experiences as they happen.
>

Thanks Flavio, I'd be happy to exchange notes. Every project is different
so I would expect that you would make adjustments. I would be keen to see
what we can borrow/share.

Btw, I am still working on some fine tuning, so stay...tuned :)

Cheers,
Armando


>
> Cheers,
> Flavio
>
>
>> Cheers,
>> Armando
>>
>> [0] http://docs.openstack.org/developer/neutron/policies/blueprints.html#
>> neutron-request-for-feature-enhancements
>> [1] https://review.openstack.org/#/c/231819/
>> [2] https://review.openstack.org/#/c/234491/
>> [3] https://review.openstack.org/#/c/234502/
>> [4]
>> http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/
>> Neutron/Armando_Migliaccio.txt#n29
>> [5]
>> http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/
>> Neutron/Armando_Migliaccio.txt#n38
>> [6]
>> http://docs.openstack.org/developer/neutron/policies/office-hours.html#
>> neutron-ptl-office-hours
>>
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Thomas Goirand
On 10/15/2015 12:08 PM, Cory Benfield wrote:
>> On 14 Oct 2015, at 23:23, Thomas Goirand  wrote:
>> Though you could have avoid all of this pain if you were not bundling.
>> Isn't all of this make you re-think your vendorizing policy? Or still
>> not? I'm asking because I still didn't read your answer about the
>> important question: since you aren't using specially crafted versions of
>> urllib3 anymore, and now only using official releases, what's the reason
>> that keeps you vendorizing? Not trying to convince you here, just trying
>> to understand.
> 
> Again, I’m not being drawn into this discussion here.
> 
> Let me make one point, though. There are three people involved in a
> decision-making role on the requests project, and this is an important
> issue to every member of the team. This policy has been part of the
> requests project for a very long time, and we aren’t going to change
> it in a short space of time: I’m certainly not going to unilaterally
> do so. All I can promise you is that we continue to talk about this
> internally, and if we *unanimously* feel comfortable changing our policy
> we will do so. Until then, I’m happy to do my best to accommodate as
> many people as possible (which in this case I believe we have done).
> 
> Cory

Hi Cory,

Thanks for the above. However, it is still frustrating not to understand
your motivations, which was the only thing that pushed me to write these
lines (i.e., still not trying to convince you...).

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2015-10-15 14:35:08 -0500:
> On 10/15/2015 02:17 PM, Matt Riedemann wrote:
> > 
> > 
> > On 10/15/2015 2:10 PM, Matthew Thode wrote:
> >> On 10/15/2015 02:04 PM, Robert Collins wrote:
> >>> On 16 October 2015 at 08:01, Matthew Thode
> >>>  wrote:
>  So, this is my perspective in packing liberty for Gentoo.
> 
>  We can have multiple versions of a package available to install,
>  because
>  of this we generally directly translate the valid dependency versions
>  from requirements.
> >>>
> >>> Cool.
> >>>
>  this
>   oslo.concurrency>=2.3.0 # Apache-2.0
>  becomes this
>   >=dev-python/oslo-concurrency-2.3.0[${PYTHON_USEDEP}]
> 
>  Now what happens when I package something from mitaka (2.7.0 in this
>  case would be mitaka).  It's undefined behaviour as far as I know.
> 
>  Basically, I can see no reason why the policy of caps changed from kilo
>  to liberty, it was actually nice to package for liberty, I can see this
>  going very bad very quick.
> >>>
> >>> They changed because it was causing huge trauma and multiple day long
> >>> gate wedges around release times. Covered in detail here -
> >>> http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html
> >>>
> >>>
>  Where are my caps?
> >>>
> >>> The known good versions of dependencies for liberty are
> >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty
> >>>
> >>>
> >> That's good, does that represent an upper constraint for the lower
> >> constraint imposed by by the package?  Why is this kept separate?
> >> Keeping it separate means that it's not trivial to merge them with
> >> what's in each package's requirements.
> > 
> > I'm not sure I understand the first question but I believe that u-c is
> > automatically adjusted and if there is a conflict with the minimum
> > version required of a dependency then the cap is adjusted (or vice-versa).
> > 
> > It's separate, at least in part, because the changes to
> > global-requirements are synced to all projects in the projects.txt file
> > in the requirements repo, which causes a bunch of churn to get those
> > changes approved in the registered projects and then released
> > appropriately.
> > 
> > The global sync to the ecosystem isn't fun, I'll agree, but I do think
> > that thinks have been better since Juno/Kilo because (1) we don't allow
> > <= caps in g-r anymore (we were not allowing patch version updates which
> > wedged us several times) and (2) we're better about releasing things
> > with minor version updates per semver - whereas in the past the cats
> > were releasing on their own volition and picking the version they
> > thought was best, which creeped into having multiple versions that could
> > be acceptable across branches, and we'd have wedges those ways too. I
> > think a lot of that has been fixed by the openstack/releases repo so
> > that the cats now have to line up for the release of their library with
> > a centralized team.
> > 
> 
> I'd agree with this, I still don't know what to cap things to.  I need
> to figure out what the caps should be...  It could be hard to sync
> across projects but like you say, most of that's gotten better since then.
> 
> >>
> >>> You should be able to trivially pull those versions out and into your
> >>> liberty set of packages.
> >>>
> >>> Theres another iteration on this in discussion now, which has to do
> >>> with backwards compat *and testing of cap changes*, we'll be in the
> >>> backwards compat fishbowl session in Tokyo if you're interested.
> >>>
> >>> -Rob
> >>>
> >>
> >> I'll be at the fishbowl :D
> >>
> > 
> 
> Anyone have a sched link to that fishbowl, I can't find it :(
> 

We're going to try to cover it in
http://mitakadesignsummit.sched.org/event/27c17a3c35d72997b372ddf4759fe1be#.ViAE67SO820
but we have a lot of other things to fit into that session so I want to
put this one last so we don't derail the other discussions.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] opening stable/liberty

2015-10-15 Thread Sean McGinnis
On Thu, Oct 15, 2015 at 02:30:23PM -0400, Doug Hellmann wrote:
> One of the first steps for opening stable/liberty is to update the
> version settings in the branches to no longer use pre-versioning.
> Thierry submitted a bunch of patches to projects [0], and some are
> merging now.
> 
> Others are running into test failures, though, and need attention from
> project teams. We need release liaisons to look at them, or find someone
> to look at them, and take over the patches to add whatever fixes are
> needed so we can land them.
> 
> Please respond to this thread if you're taking over the patch for your
> project so we know who is working on each.

I'll monitor the Cinder patch and get in touch in relmgr irc if we run
into any trouble.

> 
> Doug
> 
> [0] 
> https://review.openstack.org/#/q/branch:stable/liberty+topic:next-liberty-stable,n,z
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Do all OpenStack daemons support sd_notify?

2015-10-15 Thread Thomas Goirand
On 12/16/2014 11:59 AM, Alan Pevec wrote:
> There's one issue with Type=notify that Dan Prince discovered and
> where Type=simple works better for his use-case:
> if service startup takes more than DefaultTimeoutStartSec (90s by
> default) and systemd does not get notification from the service, it
> will consider it failed and try to restart it if Restart is defined in
> the service unit file.
> I tried to fix that by disabling timeout (example in Nova package
> https://review.gerrithub.io/13054 ) but then systemctl start blocks
> until notification is received.
> Copying Dan's review comment: "This still isn't quite right. It is
> better in that the systemctl doesn't think the service fails... rather
> it just seems to hang indefinately on 'systemctl start
> openstack-nova-compute', as in I never get my terminal back.
> My test case is:
> 1) Stop Nova compute. 2) Stop RabbitMQ. 3) Try to start Nova compute
> via systemctl.
> Could the RabbitMQ retry loop be preventing the notification?
> "
> 
> Current implementation in oslo service sends notification only just
> before entering the wait loop, because at that point all
> initialization should be done and service ready to serve. Does anyone
> have a suggestion for the better place where to notify service
> readiness?
> Or should just Dan's test-case step 3) be modified as:
> 3) Start Nova compute via systemctl start ... &  (i.e. in the background) ?
> 
> 
> Cheers,
> Alan

What you describe above looks like a defect in the implementation. Of
course, waiting for more than 90s should be considered a failure, and I
wouldn't want this timeout increased in any case. Failed attempts to
connect to Rabbit shouldn't, IMO, be the reason for not sending the
notify signal.
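
For illustration, what I have in mind is roughly this (a sketch only,
assuming the python-systemd bindings and made-up service methods):

  # Sketch: signal readiness once local initialisation is done, rather
  # than holding READY=1 hostage to an external backend that may retry
  # for longer than the systemd start timeout.
  from systemd import daemon

  def run(service):
      service.setup_local_state()   # hypothetical: fast, local init
      daemon.notify('READY=1')      # tell systemd we are up
      service.connect_rabbit()      # hypothetical: may retry for a long time
      service.wait()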

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Large Deployment Team] Action/Work Items Ahead of Summit, please review

2015-10-15 Thread Mike Dorman
During our meeting [1] today we discussed our agenda [2] and action items for 
the LDT working session [3] at the summit.

We’ve compiled a list of a few things folks can be working on before getting to 
Tokyo to ensure this session is productive:


1. Neutron Segmented/Routed Networks:
   * Review the current WIP spec: [4]
   * Neutron will likely have their own session on this topic [5], so we may 
opt to have most of the discussion there.  The schedule for that is TBD; watch 
the etherpads [2] [5] for updates.

2. Neutron General
   * Dev team is seeking “pain points” feedback, please add yours to the 
etherpad [6]

3. Common Cells v1 Patch sets
   * If you have not already done so, please add any cells patches you are 
carrying to the etherpad [7]
   * We’d like to assign the low-hanging-fruit patches out to folks to get into 
Nova reviews as a step toward merging them into upstream

4. Public Clouds
   * Plan to focus discussion during the session on identifying specific gaps 
and/or RFEs needed by this constituency
   * Please add these to the etherpad [2] and come ready to discuss at the 
summit

5. Glance Asks
   * Similar to Public Clouds, this will focus on current gaps and RFEs
   * Fill them in on the etherpad [8] and come prepared to discuss

6. Performance Issues
   * The cross-project group has a session specifically for this [9] [10], so 
we will forgo this discussion in LDT in favor of that session


Thanks for your participation in getting these work items moved forward.  We 
have a big agenda with only 90 minutes.  We can accomplish more in Tokyo if we 
prepare some ahead of time!


[1]  
http://eavesdrop.openstack.org/meetings/large_deployments_team_monthly_meeting/2015/large_deployments_team_monthly_meeting.2015-10-15-16.01.html
[2]  https://etherpad.openstack.org/p/TYO-ops-large-deployments-team
[3]  http://sched.co/4Nl4
[4]  https://review.openstack.org/#/c/225384/
[5]  https://etherpad.openstack.org/p/mitaka-neutron-next-network-model
[6]  https://etherpad.openstack.org/p/mitaka-neutron-next-ops-painpoints
[7]  https://etherpad.openstack.org/p/PAO-LDT-cells-patches
[8]  https://etherpad.openstack.org/p/LDT-glance-asks
[9]  http://sched.co/4Qds
[10] 
https://etherpad.openstack.org/p/mitaka-cross-project-performance-team-kick-off
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2015-10-15 15:57:20 -0400:
> Excerpts from Matthew Thode's message of 2015-10-15 14:35:08 -0500:
> > On 10/15/2015 02:17 PM, Matt Riedemann wrote:
> > > 
> > > 
> > > On 10/15/2015 2:10 PM, Matthew Thode wrote:
> > >> On 10/15/2015 02:04 PM, Robert Collins wrote:
> > >>> On 16 October 2015 at 08:01, Matthew Thode
> > >>>  wrote:
> >  So, this is my perspective in packing liberty for Gentoo.
> > 
> >  We can have multiple versions of a package available to install,
> >  because
> >  of this we generally directly translate the valid dependency versions
> >  from requirements.
> > >>>
> > >>> Cool.
> > >>>
> >  this
> >   oslo.concurrency>=2.3.0 # Apache-2.0
> >  becomes this
> >   >=dev-python/oslo-concurrency-2.3.0[${PYTHON_USEDEP}]
> > 
> >  Now what happens when I package something from mitaka (2.7.0 in this
> >  case would be mitaka).  It's undefined behaviour as far as I know.
> > 
> >  Basically, I can see no reason why the policy of caps changed from kilo
> >  to liberty, it was actually nice to package for liberty, I can see this
> >  going very bad very quick.
> > >>>
> > >>> They changed because it was causing huge trauma and multiple day long
> > >>> gate wedges around release times. Covered in detail here -
> > >>> http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html
> > >>>
> > >>>
> >  Where are my caps?
> > >>>
> > >>> The known good versions of dependencies for liberty are
> > >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty
> > >>>
> > >>>
> > >> That's good, does that represent an upper constraint for the lower
> > >> constraint imposed by by the package?  Why is this kept separate?
> > >> Keeping it separate means that it's not trivial to merge them with
> > >> what's in each package's requirements.
> > > 
> > > I'm not sure I understand the first question but I believe that u-c is
> > > automatically adjusted and if there is a conflict with the minimum
> > > version required of a dependency then the cap is adjusted (or vice-versa).
> > > 
> > > It's separate, at least in part, because the changes to
> > > global-requirements are synced to all projects in the projects.txt file
> > > in the requirements repo, which causes a bunch of churn to get those
> > > changes approved in the registered projects and then released
> > > appropriately.
> > > 
> > > The global sync to the ecosystem isn't fun, I'll agree, but I do think
> > > that thinks have been better since Juno/Kilo because (1) we don't allow
> > > <= caps in g-r anymore (we were not allowing patch version updates which
> > > wedged us several times) and (2) we're better about releasing things
> > > with minor version updates per semver - whereas in the past the cats
> > > were releasing on their own volition and picking the version they
> > > thought was best, which creeped into having multiple versions that could
> > > be acceptable across branches, and we'd have wedges those ways too. I
> > > think a lot of that has been fixed by the openstack/releases repo so
> > > that the cats now have to line up for the release of their library with
> > > a centralized team.
> > > 
> > 
> > I'd agree with this, I still don't know what to cap things to.  I need
> > to figure out what the caps should be...  It could be hard to sync
> > across projects but like you say, most of that's gotten better since then.
> > 
> > >>
> > >>> You should be able to trivially pull those versions out and into your
> > >>> liberty set of packages.
> > >>>
> > >>> Theres another iteration on this in discussion now, which has to do
> > >>> with backwards compat *and testing of cap changes*, we'll be in the
> > >>> backwards compat fishbowl session in Tokyo if you're interested.
> > >>>
> > >>> -Rob
> > >>>
> > >>
> > >> I'll be at the fishbowl :D
> > >>
> > > 
> > 
> > Anyone have a sched link to that fishbowl, I can't find it :(
> > 
> 
> We're going to try to cover it in
> http://mitakadesignsummit.sched.org/event/27c17a3c35d72997b372ddf4759fe1be#.ViAE67SO820
> but we have a lot of other things to fit into that session so I want to
> put this one last so we don't derail the other discussions.

Looking back at the outline for that session, it's much more likely that
we'll postpone the discussion until Friday's meetup time.

http://mitakadesignsummit.sched.org/event/65564a6d98bb83d881b712fda033aaed#.ViAI0rSO820

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN 0057] DoS attack on Glance service can lead to interruption or disruption

2015-10-15 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

DoS attack on Glance service can lead to interruption or disruption
- ---

### Summary ###
The typical Glance workflow allows authenticated users to create an
image and upload the image content in a separate step. This can be
abused by malicious users to flood the Glance database with entries
for zero sized images.

### Affected Services / Software ###
Glance, Icehouse, Juno, Kilo, Liberty

### Discussion ###
Glance by default allows an authenticated user to create zero size
images. Those images do not consume resources on the storage backend
and do not hit any limits for size, but do take up space in the
database.

Malicious users can potentially cause database resource depletion with
an endless flood of 'image-create' requests.

### Recommended Actions ###
For current stable OpenStack releases, users can work around this
vulnerability by placing rate-limiting proxies in front of the
Glance API.  Rate limiting is a common mechanism to mitigate DoS and
brute-force attacks.  Rate limiting the API requests delays the
consequences of the attack, but does not prevent it.

For example, if you are using a proxy such as Repose, enable the rate
limiting feature by following these steps:

  https://repose.atlassian.net/wiki/display/REPOSE/Rate+Limiting+Filter

An alternative approach to mitigate this issue would be to restrict
image creates to trusted administrators within your deployed Glance
policy.json file.

  "add_image": "role:admin",

Another preventative action would be to monitor the logs to identify
excessive image create requests.  One example of such a log message
is as follows (single line, wrapped):

-  begin example glance-api.log snippet 
DEBUG glance.registry.client.v1.api
[req-da1cafc0-f41f-4587-a484-672ba7f3546e
admin 8b04efc28055428c940505838314f262 - - -]
Adding image metadata... add_image_metadata
/usr/lib/python2.7/dist-packages/glance/registry/client/v1/api.py:161
-  end example glance-api.log snippet 

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0057
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1401170
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJWICPsAAoJEJa+6E7Ri+EVDUkIAKdQX8YTAQqMTy/zrt3CqPBW
8onT/6zHsg5c/CvuHZE3t/sL6ycpXfOwONMbf/IYrGwazKyiJHOMWAXiVCPN+Itj
EncL8fqpqzKHyqCimZft1umBntypsGzwObMXlYk+0AU3CoLsu6PALSdFZ6Oe34wx
4m/ukz28q7iRS90DsFfJG4Qq+LG60W9pO2emxlAUo+b9KvzcjJDcHywi6sqL18BM
IcjuDxWbRnJRMFlp8EvG0mAyA+LdIVQOAMG242EPURplGtpJhoxjS/iWA28fyWQk
U/T3omTAicgOnYmkq3fPRuXAePLFCm937CsLgNji3RjScW/iUItB55zjJV+UZKM=
=nL7g
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Matthew Thode
On 10/15/2015 05:29 PM, Robert Collins wrote:
> On 16 October 2015 at 11:21, Matthew Thode  wrote:
>> On 10/15/2015 05:12 PM, Robert Collins wrote:
>>> On 16 October 2015 at 08:10, Matthew Thode  
>>> wrote:
 On 10/15/2015 02:04 PM, Robert Collins wrote:
>>> ...
>> Where are my caps?
>
> The known good versions of dependencies for liberty are
> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty
>
 That's good, does that represent an upper constraint for the lower
 constraint imposed by by the package?  Why is this kept separate?
 Keeping it separate means that it's not trivial to merge them with
 what's in each package's requirements.
>>>
>>> It represents *one* known good combination of packages. We know that
>>> that combination passed CI, and we then do all our tests with it. To
>>> change it, we run it past CI and only move onto using the new set when
>>> its passed CI.
>>>
>>> If we merged it to each packages requirements, we'd reintroduce the
>>> deadlock that caused so much grief only 6 months back - I don't see
>>> any reason, desire, or tolerance for doing that upstream.
>>>
>>> Its kept separate because it requires 2N commits to shift known-good
>>> caps around for N repos using per-repo rules.
>>>
>>> With hundreds of repositories, it takes us hundreds of commits in two
>>> batches - and a round trip time of 2 hours per batch (check + gate) to
>>> shift *a single* requirement. With hundreds of dependencies, thats an
>>> intolerable amount of overhead.
>>>
>>
>> ya, that is annoying, unfortunately packages don't need to know *ONE*
>> good combination, they need to know them all, or at least the cap.  What
>> you are basically doing is shifting all of this extra work onto the
>> packagers, in fact I wouldn't be surprised if they needed to start
>> vendoring all of openstack in a virtualenv instead of doing actual packages.
> 
> Here's an example of the havoc caps cause:
>  https://review.openstack.org/#/c/234862/
> 
> I don't understand the statement about shifting work. Previously the
> situation was that you had to guess whether a given release of a
> dependency (both internal and external) worked with e.g. liberty.
> 
> Now there is an explicit list of what works.
> 
> Isn't that *better* ?
> 
Yes, it's good to know what works. Does that show multiple versions of
the same package when multiple are known to work?  If so, I can build a
range from that.  It's not better as it is, because I still don't know
where liberty ends and mitaka begins.  Is there any place I can find that?

>> My question remains though, if someone pip installs nova liberty, will
>> it pull in deps from mitaka?  As it is now, without caps I cannot
>> reliably package openstack, do you have a solution? I should probably
>> start removing the liberty packages I did package since upstream seems
>> so hostile...
> 
> If you don't use the constraints file (which is pip consumable and
> published on the web so it can be used with pip install trivially)
> then yes, it will install the latest versions of all packages which
> are presumed to be doing backwards compatible changes. Things that we
> *know* are going to be broken still get caps - and we accept the cost
> of the 2-step dance needed to raise them.
> 

What about a daily or weekly check without the constraints file so you
know when something breaks?  This would allow you to at least know when
you need to place caps and I could consume that.  I'm fine with leaving
caps off.  If I can consume mitaka deps in liberty then that's great :D

> The big question here is, I think, 'should we assume OpenStack
> originated dependencies are better or worse than third party
> dependencies?' Third party dependencies we presume are backwards
> compatible (and will thus only break by mistake, which constraints
> guards against) unless we have reason not to - and we have open ended
> deps on them. This is where the spec about clients libraries is aimed.
> 
> It makes me very sad to know that you consider what we've done as
> hostile. We did it for a bunch of good reasons:
> 
>  - safer release process
>  - smoother updates for [apt, rpm at least] redistributors
>  - faster turnaround of dependency usage in CI
>  - step on the path to testing lower bounds
>  - step on the path to alignment with upstream packaging tooling
> 
> -Rob
> 
> 
It's a step backwards from my perspective: before, I had a clear
delineation of where support for something stops.  Now I don't know when it
stops, and any given update could break the system.  I'm not sure how it
could be smoother for other systems.

So, does this mean that I can just leave the packages uncapped and know
that they will work?  Are there tests being run for this scenario?

-- 
Matthew Thode (prometheanfire)

__

Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Thomas Goirand
On 10/15/2015 11:20 AM, Dmitry Tantsur wrote:
> On 10/15/2015 12:18 AM, Robert Collins wrote:
>> On 15 October 2015 at 11:11, Thomas Goirand  wrote:
>>>
>>> One major pain point is unfortunately something ridiculously easy to
>>> fix, but which nobody seems to care about: the long & short descriptions
>>> format. These are usually buried into the setup.py black magic, which by
>>> the way I feel is very unsafe (does PyPi actually execute "python
>>> setup.py" to find out about description texts? I hope they are running
>>> this in a sandbox...).
>>>
>>> Since everyone uses the fact that PyPi accepts RST format for the long
>>> description, there's nothing that can really easily fit the
>>> debian/control. Probably a rst2txt tool would help, but still, the long
>>> description would still be polluted with things like changelog, examples
>>> and such (damned, why people think it's the correct place to put
>>> that...).
>>>
>>> The only way I'd see to fix this situation, would be a PEP. This will
>>> probably take a decade to have everyone switching to a new correct way
>>> to write a long & short description...
>>
>> Perhaps Debian (1 thing) should change, rather than trying to change
>> all the upstreams packaged in it (>20K) :)
> 
> +1. Both README and PyPI are for users, and I personally find detailed
> descriptions (especially a couple of simple examples) on the PyPI page
> to be of so much value.

I do agree it is useful to have such a thing on the PyPi pages. But it
doesn't change my opinion that it'd be nice if this were not mixed with
the long description in Python modules.
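
For what it's worth, projects using pbr already keep this out of
setup.py and declare it in setup.cfg, roughly like this (the project
name is made up):

  [metadata]
  name = example-project
  summary = One-line short description
  description-file = README.rst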

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] puppet-neutron + vswitch gates are broken

2015-10-15 Thread Emilien Macchi
Ubuntu broke openvswitch last night because of:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1314887

I reached out to them on IRC and they are working on a new release with a fix.
We should expect our CI back later tonight or tomorrow morning.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Do all OpenStack daemons support sd_notify?

2015-10-15 Thread Alan Pevec
Thanks for responding. Does this make it the longest-running thread
ever? :)

2015-10-15 22:44 GMT+02:00 Thomas Goirand :
> On 12/16/2014 11:59 AM, Alan Pevec wrote:
...
>> Current implementation in oslo service sends notification only just
>> before entering the wait loop, because at that point all
>> initialization should be done and service ready to serve. Does anyone
>> have a suggestion for the better place where to notify service
>> readiness?
> What you describe above looks like a defect in the implementation. Of
> course, waiting for more than 90s should be considered as failed, and I
> wouldn't want in any case to have this timeout increased. Failed
> attempts to connect to Rabbit shouldn't, IMO, be the cause for not
> sending the notify signal.

But if the service requires the message bus for normal operations, it is not
ready to serve requests, is it?

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Robert Collins
On 16 October 2015 at 08:10, Matthew Thode  wrote:
> On 10/15/2015 02:04 PM, Robert Collins wrote:
...
>>> Where are my caps?
>>
>> The known good versions of dependencies for liberty are
>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty
>>
> That's good, does that represent an upper constraint for the lower
> constraint imposed by by the package?  Why is this kept separate?
> Keeping it separate means that it's not trivial to merge them with
> what's in each package's requirements.

It represents *one* known good combination of packages. We know that
that combination passed CI, and we then do all our tests with it. To
change it, we run it past CI and only move onto using the new set when
its passed CI.

If we merged it to each packages requirements, we'd reintroduce the
deadlock that caused so much grief only 6 months back - I don't see
any reason, desire, or tolerance for doing that upstream.

Its kept separate because it requires 2N commits to shift known-good
caps around for N repos using per-repo rules.

With hundreds of repositories, it takes us hundreds of commits in two
batches - and a round trip time of 2 hours per batch (check + gate) to
shift *a single* requirement. With hundreds of dependencies, thats an
intolerable amount of overhead.

>> You should be able to trivially pull those versions out and into your
>> liberty set of packages.
>>
>> Theres another iteration on this in discussion now, which has to do
>> with backwards compat *and testing of cap changes*, we'll be in the
>> backwards compat fishbowl session in Tokyo if you're interested.
>>
>> -Rob
>>
>
> I'll be at the fishbowl :D

Great!

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Robert Collins
On 16 October 2015 at 11:21, Matthew Thode  wrote:
> On 10/15/2015 05:12 PM, Robert Collins wrote:
>> On 16 October 2015 at 08:10, Matthew Thode  wrote:
>>> On 10/15/2015 02:04 PM, Robert Collins wrote:
>> ...
> Where are my caps?

 The known good versions of dependencies for liberty are
 http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty

>>> That's good, does that represent an upper constraint for the lower
>>> constraint imposed by by the package?  Why is this kept separate?
>>> Keeping it separate means that it's not trivial to merge them with
>>> what's in each package's requirements.
>>
>> It represents *one* known good combination of packages. We know that
>> that combination passed CI, and we then do all our tests with it. To
>> change it, we run it past CI and only move onto using the new set when
>> its passed CI.
>>
>> If we merged it to each packages requirements, we'd reintroduce the
>> deadlock that caused so much grief only 6 months back - I don't see
>> any reason, desire, or tolerance for doing that upstream.
>>
>> Its kept separate because it requires 2N commits to shift known-good
>> caps around for N repos using per-repo rules.
>>
>> With hundreds of repositories, it takes us hundreds of commits in two
>> batches - and a round trip time of 2 hours per batch (check + gate) to
>> shift *a single* requirement. With hundreds of dependencies, thats an
>> intolerable amount of overhead.
>>
>
> ya, that is annoying, unfortunately packages don't need to know *ONE*
> good combination, they need to know them all, or at least the cap.  What
> you are basically doing is shifting all of this extra work onto the
> packagers, in fact I wouldn't be surprised if they needed to start
> vendoring all of openstack in a virtualenv instead of doing actual packages.

Here's an example of the havoc caps cause:
 https://review.openstack.org/#/c/234862/

I don't understand the statement about shifting work. Previously the
situation was that you had to guess whether a given release of a
dependency (both internal and external) worked with e.g. liberty.

Now there is an explicit list of what works.

Isn't that *better* ?

> My question remains though, if someone pip installs nova liberty, will
> it pull in deps from mitaka?  As it is now, without caps I cannot
> reliably package openstack, do you have a solution? I should probably
> start removing the liberty packages I did package since upstream seems
> so hostile...

If you don't use the constraints file (which is pip consumable and
published on the web so it can be used with pip install trivially)
then yes, it will install the latest versions of all packages which
are presumed to be doing backwards compatible changes. Things that we
*know* are going to be broken still get caps - and we accept the cost
of the 2-step dance needed to raise them.
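
Concretely, something along these lines should do it, using pip's
-c/--constraint option and the cgit 'plain' view of the file linked
above:

  pip install -c "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/liberty" nova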

The big question here is, I think, 'should we assume OpenStack
originated dependencies are better or worse than third party
dependencies?' Third party dependencies we presume are backwards
compatible (and will thus only break by mistake, which constraints
guards against) unless we have reason not to - and we have open ended
deps on them. This is where the spec about clients libraries is aimed.

It makes me very sad to know that you consider what we've done as
hostile. We did it for a bunch of good reasons:

 - safer release process
 - smoother updates for [apt, rpm at least] redistributors
 - faster turnaround of dependency usage in CI
 - step on the path to testing lower bounds
 - step on the path to alignment with upstream packaging tooling

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] establishing release liaisons for mitaka

2015-10-15 Thread Doug Hellmann
Excerpts from Sergey Lukjanov's message of 2015-10-15 22:34:21 +0300:
> Doug, just to confirm - for Sahara I'll continue being release liaison.

Thanks, Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Do all OpenStack daemons support sd_notify?

2015-10-15 Thread Thomas Goirand
On 12/15/2014 04:21 PM, Ihar Hrachyshka wrote:
> On 14/12/14 09:45, Thomas Goirand wrote:
>> Hi,
> 
>> As I am slowing fixing all systemd issues for the daemons of
>> OpenStack in Debian (and hopefully, have this ready before the
>> freeze of Jessie), I was wondering what kind of Type= directive to
>> put on the systemd .service files. I have noticed that in Fedora,
>> there's Type=notify. So my question is:
> 
>> Do all OpenStack daemons, as a rule, support the DBus sd_notify
>> thing? Should I always use Type=notify for systemd .service files?
>> Can this be called a general rule with no exception?
> 
> (I will talk about neutron only.)
> 
> I guess Type=notify is supposed to be used with daemons that use
> Service class from oslo-incubator that provides systemd notification
> mechanism, or call to systemd.notify_once() otherwise.
> 
> In terms of Neutron, neutron-server process is doing it, metadata
> agent also seems to do it, while OVS agent seems to not. So it really
> should depend on each service and the way it's implemented. You cannot
> just assume that every Neutron service reports back to systemd.
> 
> In terms of Fedora, we have Type=notify for neutron-server service only.
> 
> BTW now that more distributions are interested in shipping unit files
> for services, should we upstream them and ship the same thing in all
> interested distributions?

In Debian & Ubuntu, we use a system which, from a sysv-rc init script,
generates startup files for systemd, sysv-rc, and upstart. So if you
were shipping .service files, I'm not sure I'd use them (though they
may serve as good documentation...).

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Matthew Thode
On 10/15/2015 05:12 PM, Robert Collins wrote:
> On 16 October 2015 at 08:10, Matthew Thode  wrote:
>> On 10/15/2015 02:04 PM, Robert Collins wrote:
> ...
 Where are my caps?
>>>
>>> The known good versions of dependencies for liberty are
>>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty
>>>
>> That's good, does that represent an upper constraint for the lower
>> constraint imposed by by the package?  Why is this kept separate?
>> Keeping it separate means that it's not trivial to merge them with
>> what's in each package's requirements.
> 
> It represents *one* known good combination of packages. We know that
> that combination passed CI, and we then do all our tests with it. To
> change it, we run it past CI and only move onto using the new set when
> its passed CI.
> 
> If we merged it to each packages requirements, we'd reintroduce the
> deadlock that caused so much grief only 6 months back - I don't see
> any reason, desire, or tolerance for doing that upstream.
> 
> Its kept separate because it requires 2N commits to shift known-good
> caps around for N repos using per-repo rules.
> 
> With hundreds of repositories, it takes us hundreds of commits in two
> batches - and a round trip time of 2 hours per batch (check + gate) to
> shift *a single* requirement. With hundreds of dependencies, thats an
> intolerable amount of overhead.
> 

ya, that is annoying, unfortunately packages don't need to know *ONE*
good combination, they need to know them all, or at least the cap.  What
you are basically doing is shifting all of this extra work onto the
packagers, in fact I wouldn't be surprised if they needed to start
vendoring all of openstack in a virtualenv instead of doing actual packages.

My question remains though, if someone pip installs nova liberty, will
it pull in deps from mitaka?  As it is now, without caps I cannot
reliably package openstack, do you have a solution? I should probably
start removing the liberty packages I did package since upstream seems
so hostile...

>>> You should be able to trivially pull those versions out and into your
>>> liberty set of packages.
>>>
>>> Theres another iteration on this in discussion now, which has to do
>>> with backwards compat *and testing of cap changes*, we'll be in the
>>> backwards compat fishbowl session in Tokyo if you're interested.
>>>
>>> -Rob
>>>
>>
>> I'll be at the fishbowl :D
> 
> Great!
> 
> -Rob
> 
> 


-- 
-- Matthew Thode (prometheanfire)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Robert Collins
On 16 October 2015 at 08:57, Doug Hellmann  wrote:
> Excerpts from Matthew Thode's message of 2015-10-15 14:35:08 -0500:
>> On 10/15/2015 02:17 PM, Matt Riedemann wrote:
>> >

>
> We're going to try to cover it in
> http://mitakadesignsummit.sched.org/event/27c17a3c35d72997b372ddf4759fe1be#.ViAE67SO820
> but we have a lot of other things to fit into that session so I want to
> put this one last so we don't derail the other discussions.

I thought it was
http://mitakadesignsummit.sched.org/event/0a2307779b4ab81892ba24de379e9dcc#.ViAMfd94u00
that it was slated for, where it's squarely on topic :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-15 Thread Joshua Harlow

Ed Leafe wrote:

Wow, I seem to have unleashed a bunch of pent-up frustration in the
community! It's great to see everyone coming forward with their ideas
and insights for improving the way Nova (and, by extension, all of
OpenStack) can potentially scale.

I do have a few comments on the discussion:

1) This isn't a proposal to simply add some sort of DLM to Nova as a
magic cure-all. The concerns about Nova's ability to scale have to do
a lot more with the overall internal communication design.

2) I really liked the comment about "made-up numbers". It's so true:
we are all impressed by such examples of speed that we sometimes
forget whether speeding up X will improve the overall process to any
significant degree. The purpose of my original email back in July,
and the question I asked at the Nova midcycle, is if we could get
some numbers that would be a target to shoot for with any of these
experiments. Sure, I could come up with a test that shows a zillion
transactions per second, but if that doesn't result in a cloud being
able to schedule more efficiently, what's the point?

3) I like the idea of something like ZooKeeper, but my concern is how
to efficiently query the data. If, for example, we had records for
100K compute nodes, would it be possible to do the equivalent of
"SELECT * FROM resources WHERE resource_type = 'compute' AND
free_ram_mb>= 2048 AND …" - well, you get the idea. Are complex data
queries possible in ZK? I haven't been able to find that information
anywhere.


The idea is that you wouldn't do these queries against any remote source
in the first place. Instead, a scheduler would get notified (via a
concept like
http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataMode_watches)
when a hypervisor updates its data in zookeeper (or another equivalent
system). When that notification happens, the scheduler reads the
data and updates some *local* data source with that information (this
could be an in-memory dict, a local sqlite, or something else better
optimized for fast searching), and from that point on that local
source is used for queries. This way a hypervisor (compute node) is
performing *nearly* the equivalent of a push notification (like on your
phone) to the schedulers.
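
A rough sketch of that watch-then-cache-locally pattern (assuming kazoo
and a made-up znode layout) might look like:

  # Sketch: keep a local, queryable copy of per-hypervisor resource
  # records, refreshed by ZooKeeper watches instead of remote queries.
  import json

  from kazoo.client import KazooClient

  local_view = {}   # hypervisor name -> latest resource record

  zk = KazooClient(hosts='127.0.0.1:2181')
  zk.start()

  def track(hypervisor):
      path = '/hypervisors/%s' % hypervisor     # made-up znode layout
      @zk.DataWatch(path)
      def _update(data, stat):
          if data is not None:
              local_view[hypervisor] = json.loads(data.decode('utf-8'))

  for name in zk.get_children('/hypervisors'):
      track(name)

  # Scheduling decisions then filter the *local* dict, e.g.:
  candidates = [h for h, r in local_view.items()
                if r.get('free_ram_mb', 0) >= 2048]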




4) It is true that even in a very large deployment, it is possible to
keep all the relevant data needed for scheduling in memory. My
concern is how to efficiently search that data, much like in the ZK
scenario.


See above.



5) Concerns about Cassandra running with OpenJDK instead of the
Oracle JVM are troubling. I sent an email about this to one of the
people I know at DataStax, but so far have not received a response.
And while it would be great to have people contribute to OpenJDK to
make it compatible, keep in mind that that would be an ongoing
commitment, not just a one-time effort.

6) I remember discussions back in the Austin-Bexar time frame about
what Thierry referred to as 'flavor-based schedulers', and they were
immediately discounted as not sophisticated enough to handle the sort
of complex scheduling requests that were expected. I'd be interested
in finding out from the big cloud providers what percentage of their
requests would fall into this simple structure, and what percent are
more complicated than that. Having hosts listening to queues that
they know they can satisfy removes the raciness from the process,
although it would require some additional handling for the situation
where no host accepts the request. Still, it has the advantage of
being dead simple. Unfortunately, this would probably require a
bigger architectural change than integrating Cassandra into the
Scheduler would.


Another discussion that should also be had, though it is again much
larger in scope: https://review.openstack.org/#/c/210549/ (still WIP, but
the idea/problem/issue hopefully is clear).




I hope that those of us who will be at the Tokyo Summit and are
interested in these ideas can get together for an informal
discussion, and come up with some ideas for grand experiments and
reality checks. ;-)

BTW, I started playing around with some ideas, and thought that if
anyone wanted to also try Cassandra, I'd write up a quick how-to for
setting up a small cluster:
http://blog.leafe.com/small-scale-cassandra/. Using docker images
makes it a breeze!
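
If anyone wants to poke at the query side as well, the DataStax Python driver
makes the equivalent of the query from point 3 fairly direct (the keyspace,
table and column names here are invented, and ALLOW FILTERING is only
acceptable for toy-sized data):

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])         # or the nodes from the docker cluster
    session = cluster.connect('scheduler')   # hypothetical keyspace

    rows = session.execute(
        "SELECT * FROM resources "
        "WHERE resource_type = 'compute' AND free_ram_mb >= 2048 "
        "ALLOW FILTERING")
    for row in rows:
        print(row.hostname, row.free_ram_mb)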


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Host maintenance mode proposal.

2015-10-15 Thread Tang Chen

Hi all,

I have updated the spec.

https://review.openstack.org/#/c/228689/

I think implementing it on the nova client side may be better.
Please help review.

Thanks.

On 10/15/2015 05:08 PM, Tang Chen wrote:

Hi all,

I tried to implement common host maintenance mode handling
in nova compute.

BP: https://blueprints.launchpad.net/nova/+spec/host-maintenance-mode
spec: https://review.openstack.org/#/c/228689/
patches: 
https://review.openstack.org/#/q/topic:bp/host-maintenance-mode,n,z


But according to John (johnthetubaguy), host maintenance mode is
functionality we are going to remove. So could anybody give me some
advice on whether I should go on with this work?

BTW, if I want to talk about this in an IRC meeting, which meeting should I
attend? The Nova API meeting on Tuesday?

Thanks.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Horizon] Cinder v2 endpoint name: volumev2 is required? volume is okay?

2015-10-15 Thread Ivan Kolodyazhny
Hi Akihiro,

Cinder itself doesn't care what keystone endpoint name is configured for
it. python-cinderclient uses 'cinderv2' as the service name by default [1], [2].

AFAIK, some other components like Rally expect the "volumev2" endpoint for API
v2. "volume" is used by Tempest by default.

[1]
https://github.com/openstack/python-cinderclient/blob/92f3d35d5b6ba54704096288fb2a5ae93f5e1985/cinderclient/v2/shell.py#L206
[2]
https://github.com/openstack/python-cinderclient/blob/92f3d35d5b6ba54704096288fb2a5ae93f5e1985/cinderclient/shell.py#L585

Regards,
Ivan Kolodyazhny

On Wed, Oct 14, 2015 at 2:46 PM, Akihiro Motoki  wrote:

> Hi Cinder team,
>
> What is the expected service name of Cinder v2 API in keystone catalog?
> DevStack configures "volumev2" for Cinder v2 API by default.
>
> Can we "volume" for Cinder v2 API or do we always need to use "volumev2"
> in deployments with only Cinder v2 API (i.e. no v1 API) ?
>
> The question was raised during the Horizon review [1].
> The current Horizon code assumes "volumev2" is configured for Cinder v2
> API.
> Note that review [1] itself makes Horizon work only with Cinder v2 API.
> I would like to know whether the Horizon team needs to consider the "volume"
> endpoint for the Cinder v2 API or not.
>
> [1] https://review.openstack.org/#/c/151081/
> [2] https://bugs.launchpad.net/horizon/+bug/1415712
>
> Thanks,
> Akihiro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ERROR : openstack Forbidden (HTTP 403)

2015-10-15 Thread Dhvanan Shah
Hi,
I have not been able to resolve it. The problem of "openstack Forbidden
(HTTP 403)" still persists.

ERROR : Error appeared during Puppet run: 10.16.37.221_keystone.pp
Error:
/Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]:
Could not evaluate: Execution of '/usr/bin/openstack token issue --format
value' returned 1: WARNING: keystoneclient.auth.identity.generic.base
Discovering versions from the identity service failed when creating the
password plugin. Attempting to determine version from URL.
You will find full trace in log
/var/tmp/packstack/20151015-212559-xwC1zD/manifests/10.16.37.221_keystone.pp.log

Could someone please help me with this?



On Mon, Oct 12, 2015 at 5:16 PM, Dhvanan Shah <dhva...@gmail.com> wrote:

> Resolved it.
> The no_proxy env var needs to be set if your computer is located behind an
> authenticating proxy infrastructure.
>
> Source :
>
> https://ask.openstack.org/en/question/67203/kilo-deployment-using-packstack-fails-with-403-error-on-usrbinopenstack-service-list/
>
> On Mon, Oct 12, 2015 at 3:12 PM, Dhvanan Shah <dhva...@gmail.com> wrote:
>
>> Hi,
>>
>> I am getting this error while installing Openstack on Centos.
>> ERROR : Error appeared during Puppet run: 10.16.37.221_keystone.pp
>> Error: Could not prefetch keystone_service provider 'openstack':
>> Execution of '/usr/bin/openstack service list --quiet --format csv --long'
>> returned 1: ERROR: openstack Forbidden (HTTP 403)
>>
>> I've checked the permissions of the executable files and they are not
>> the problem. So I'm not sure why I'm forbidden from executing this. Could
>> use some help!
>>
>> --
>> Dhvanan Shah
>>
>
>
>
> --
> Dhvanan Shah
>



-- 
Dhvanan Shah
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Design Summit Schedule

2015-10-15 Thread Matthew Treinish

Hi Everyone,

I just pushed up the QA schedule for design summit:

https://mitakadesignsummit.sched.org/overview/type/qa

Let me know if there are any big schedule conflicts or other issues, so we can
work through the problem.

Thanks,

Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

2015-10-15 Thread Stephen Wong
+1

- Stephen

On Thu, Oct 15, 2015 at 8:47 AM, Sridhar Ramaswamy 
wrote:

> I would like to nominate Bob Haddleton to the Tacker core team.
>
> In the current Liberty cycle Bob made significant, across-the-board
> contributions to Tacker [1]. Starting with many usability enhancements and
> bug fixes, Bob has shown commitment and consistently produced high
> quality code. To cap it off, he recently landed Tacker's health monitoring
> framework to enable loadable VNF monitoring. His knowledge of the NFV area is a
> huge plus for Tacker as we embark on even greater challenges in the
> Mitaka cycle.
>
> Along these lines, we are actively looking to expand Tacker's core reviewer
> team. If you are interested in the NFV Orchestration / VNF Manager space,
> please stop by and explore the Tacker project [2].
>
> Tacker team,
>
> Please provide your -1 / +1 votes.
>
> - Sridhar
>
> [1]  
> http://stackalytics.com/report/users/bob-haddleton
> [2]  
> https://wiki.openstack.org/wiki/Tacker
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

2015-10-15 Thread Bharath Thiruveedula
+1

-Bharath T

On Thu, Oct 15, 2015 at 10:36 PM, Stephen Wong 
wrote:

> +1
>
> - Stephen
>
> On Thu, Oct 15, 2015 at 8:47 AM, Sridhar Ramaswamy 
> wrote:
>
>> I would like to nominate Bob Haddleton to the Tacker core team.
>>
>> In the current Liberty cycle Bob made significant, across-the-board
>> contributions to Tacker [1]. Starting with many usability enhancements and
>> bug fixes, Bob has shown commitment and consistently produced high
>> quality code. To cap it off, he recently landed Tacker's health monitoring
>> framework to enable loadable VNF monitoring. His knowledge of the NFV area is a
>> huge plus for Tacker as we embark on even greater challenges in the
>> Mitaka cycle.
>>
>> Along these lines, we are actively looking to expand Tacker's core reviewer
>> team. If you are interested in the NFV Orchestration / VNF Manager space,
>> please stop by and explore the Tacker project [2].
>>
>> Tacker team,
>>
>> Please provide your -1 / +1 votes.
>>
>> - Sridhar
>>
>> [1]  
>> http://stackalytics.com/report/users/bob-haddleton
>> [2]  
>> https://wiki.openstack.org/wiki/Tacker
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

2015-10-15 Thread Vishwanath Jayaraman
+1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Denis Egorenko core

2015-10-15 Thread Emilien Macchi


On 10/13/2015 04:29 PM, Emilien Macchi wrote:
> Denis Egorenko (degorenko) has been working on Puppet OpenStack modules for
> quite some time now.
> 
> Some statistics [1] about his contributions (last 6 months):
> * 270 reviews
> * 49 negative reviews
> * 216 positive reviews
> * 36 disagreements
> * 30 commits
> 
> Besides the stats, Denis is always here on IRC, participating in meetings,
> helping our group discussions, and always being helpful to our community.
>
> I honestly think Denis is on the right path to become a good core team
> member: he has strong knowledge of OpenStack deployments, knows enough
> about our coding style, and his involvement in the project is really
> great. He's also a huge consumer of our modules since he's working on Fuel.
> 
> I would like to open the vote to promote Denis to the Puppet OpenStack
> core reviewer team.
> 

3 positive feedback from our core reviewers.
0 negative feedback.

Welcome Denis, we're happy to have you onboard!
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Denis Egorenko core

2015-10-15 Thread Denis Egorenko
Thank you! I will do my best!

2015-10-15 19:27 GMT+03:00 Emilien Macchi :

>
>
> On 10/13/2015 04:29 PM, Emilien Macchi wrote:
> > Denis Egorenko (degorenko) has been working on Puppet OpenStack modules for
> > quite some time now.
> >
> > Some statistics [1] about his contributions (last 6 months):
> > * 270 reviews
> > * 49 negative reviews
> > * 216 positive reviews
> > * 36 disagreements
> > * 30 commits
> >
> > Besides the stats, Denis is always here on IRC, participating in meetings,
> > helping our group discussions, and always being helpful to our community.
> >
> > I honestly think Denis is on the right path to become a good core team
> > member: he has strong knowledge of OpenStack deployments, knows enough
> > about our coding style, and his involvement in the project is really
> > great. He's also a huge consumer of our modules since he's working on Fuel.
> >
> > I would like to open the vote to promote Denis to the Puppet OpenStack
> > core reviewer team.
> >
>
> 3 positive feedback from our core reviewers.
> 0 negative feedback.
>
> Welcome Denis, we're happy to have you onboard!
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Mitaka design summit schedule

2015-10-15 Thread Sergey Kraynev
Hi folks, I am glad to announce the final version of the Heat schedule
for the design summit.

The online version is available here:
- http://mitakadesignsummit.sched.org/type/Heat#.Vh_V7NSlyko

It is useful for finding the rooms and times.

If you want to look at the description and etherpad for each session,
please use the wiki page:
- https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Heat

Heat core team, please do not hesitate to edit and add your ideas to the
corresponding etherpads.

-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Tempest] where fwaas tempest tests should be?

2015-10-15 Thread Madhusudhan Kandadai
On Thu, Oct 15, 2015 at 10:13 AM, Assaf Muller  wrote:

>
>
> On Thu, Oct 15, 2015 at 10:22 AM, Matthew Treinish 
> wrote:
>
>> On Thu, Oct 15, 2015 at 09:02:22AM -0400, Assaf Muller wrote:
>> > On Thu, Oct 15, 2015 at 7:25 AM, Takashi Yamamoto <
>> yamam...@midokura.com>
>> > wrote:
>> >
>> > > hi,
>> > >
>> > > I'm looking into the fwaas tempest tests and have a question about code
>> > > location.
>> > >
>> > > currently,
>> > >
>> > > - fwaas api tests and its rest client are in neutron repo
>> > > - there are no fwaas scenario tests
>> > >
>> > > eventually,
>> > >
>> > > - fwaas api tests should be moved into neutron-fwaas repo
>> > > - fwaas scenario tests should be in neutron-fwaas repo too.
>> > >
>> >
>> > I believe scenario tests that invoke APIs outside of Neutron should
>> > stay (Or be introduced to) Tempest.
>>
>> So testing the neutron advanced services was actually one of the first
>> things
>> we decided was out of scope for tempest. (like well over a year ago) It
>> took
>> some time to get equivalent testing setup elsewhere, but tests and
>> support for
>> the advanced services were removed from tempest on purpose.
>
>
> This is for both *aaS API tests and scenario tests?
>

I think yes. Scenario tests for LBaaS and VPNaaS are at these paths:
LBaaS:
https://github.com/openstack/neutron-lbaas/tree/master/neutron_lbaas/tests/tempest/v2/scenario
VPNaaS: https://github.com/openstack/neutron-vpnaas/blob/master/rally-jobs/
Moving the VPN API tests into the VPNaaS tree: https://review.openstack.org/#/c/211381/
(WIP)

>
>
>> I'd suggest that
>> you look at the tempest plugin interface:
>>
>> http://docs.openstack.org/developer/tempest/plugin.html
>>
>> if you'd like to make the fwaas tests (or any other adv. service tests)
>> integrate more cleanly with the rest of tempest.
>>
>> -Matt Treinish
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-10-15 Thread Kyle Mestery
On Thu, Oct 15, 2015 at 7:03 AM, Neil Jerram 
wrote:

> On 02/10/15 12:33, Neil Jerram wrote:
> > On 02/10/15 11:42, Neil Jerram wrote:
> >> Thanks Kyle! I'm looking at this now for networking-calico.
> > Done, please see https://pypi.python.org/pypi/networking-calico.
> >
> > When you release, how will the version number be decided?  [...]
>
> Excitingly, I believe networking-calico is ready now for its first
> release.  Kyle - would you mind doing the honours?
>
>
Cool! Yes, I can help you. Can you follow the instructions here [1]?

Thanks!
Kyle

[1]
http://docs.openstack.org/developer/neutron/policies/bugs.html#plugin-and-driver-repositories

> (I'm assuming you're still the right person to ask - please do correct
> me if not!)
>
> Thanks,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] getting rid of tablib completely (Requests + urllib3 + distro packages)

2015-10-15 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2015-10-15 00:31:44 +0200:
> On 10/13/2015 05:14 PM, Doug Hellmann wrote:
> > Excerpts from Thomas Goirand's message of 2015-10-13 12:38:00 +0200:
> >> On 10/12/2015 11:09 PM, Steve Baker wrote:
> >>> On 13/10/15 02:05, Thomas Goirand wrote:
> 
>  BTW, the same applies to tablib, which is in an even more horrible state
>  that makes it impossible to package with Py3 support. But tablib could
>  be removed from our (build-)dependency list, if someone cares about
>  re-writing cliff-tablib, which IMO wouldn't be that much work. Doug, how
>  many beers shall I offer you for that work? :)
> 
> >>> Regarding tablib, cliff has had its own table formatter for some time,
> >>> and now has its own json and yaml formatters. I believe the only tablib
> >>> formatter left is the HTML one, which likely wouldn't be missed if it
> >>> was just dropped (or it could be simply reimplemented inside cliff).
> >>>
> >>> If the cliff deb depends on cliff-tablib
> >>
> >> It does.
> > 
> > That dependency is backwards. cliff-tablib should depend on cliff. Cliff
> > does not need cliff-tablib, but cliff-tablib is only useful if cliff is
> > installed.
> 
> My bad, sorry. python-cliff doesn't depend on cliff-tablib. Why did I
> say yes?
> 
> >> And also the below packages have a build-dependency on
> >> cliff-tablib:
> >>
> >> - python-neutronclient
> >> - python-openstackclient
> >>
> >> python-openstackclient also has a runtime depends on cliff-tablib.
> > 
> > Now that we have a cliff with the formatters provided by tablib, we can
> > update those dependencies to remove cliff-tablib. Someone just needs to
> > follow through on that with patches to the requirements files for the
> > clients.
> 
> Doug, the problem isn't cliff-tablib, the problem is tablib.
> 
> I don't really know how to describe the mess that this package is. It
> bundles so many outdated Python modules with hacks to force Py3 support
> into it, that it is impossible to package properly. Mostly, all the
> embedded Python modules in tablib have had newer upstream releases with
> real support for Py3 (instead of hacks in the bundled versions), though
> upgrading to them breaks tablib. Just doing "python3 setup.py install"
> fails on me because it's trying to install the Py2 version. It's just
> horrible... :(
> 
> So please don't just remove cliff-tablib, which itself is fine, but
> really get rid of tablib as per the subject...

I wasn't clear. The reason cliff-tablib exists is because of earlier
complaints (maybe from you?) about tablib. I didn't have time to
rewrite the formatters, so I pulled them into their own package so
it wasn't necessary to ship them. Now that we have the useful ones
included in cliff directly in a way that doesn't use tablib, you
should not need either cliff-tablib or tablib. We may need patches
to upstream packages that still have the cliff-tablib dependency.
Nothing should be using tablib directly, AFAIK.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-15 Thread John Dickinson
I would love to see this!

It would really help review priorities and could easily be worked into review 
dashboards.

--John



On 15 Oct 2015, at 8:42, Markus Zoeller wrote:

> Zaro wrote:
>> We are soliciting feedback so please let us know what you think.
>
> For spotting possible trivial bug fixes the new query option "delta"
> will be useful. For example: "status:open delta:<100"
>
> Would it be possible to create a "prio" label to help sorting out stuff?
> If I understand [1] correctly, we could have something like:
>
>  [label "Prio"]
>  function = NoOp
>  value = 0 Undecided
>  value = 1 critical
>  value = 2 high
>  value = 3 trivialfix
>
> For example, this would allow to create a query with
>
>  "status:open label:Prio=3"
>
> to get all reviews for trivial bug fixes.
>
> Nevertheless, I'm looking forward to the upgrade.
>
> References:
> [1]
> https://gerrit-review.googlesource.com/Documentation/config-labels.html#label_custom
>
> Regards,
> Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

2015-10-15 Thread Sripriya Seetharam
+1   ☺

-Sripriya

From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Thursday, October 15, 2015 8:47 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

I would like to nominate Bob Haddleton to the Tacker core team.

In the current Liberty cycle Bob made significant, across-the-board 
contributions to Tacker [1]. Starting with many usability enhancements and bug 
fixes, Bob has shown commitment and consistently produced high quality code. To 
cap it off, he recently landed Tacker's health monitoring framework to enable 
loadable VNF monitoring. His knowledge of the NFV area is a huge plus for Tacker 
as we embark on even greater challenges in the Mitaka cycle.

Along these lines, we are actively looking to expand Tacker's core reviewer team. 
If you are interested in the NFV Orchestration / VNF Manager space, please stop 
by and explore the Tacker project [2].

Tacker team,

Please provide your -1 / +1 votes.

- Sridhar

[1] http://stackalytics.com/report/users/bob-haddleton
[2] https://wiki.openstack.org/wiki/Tacker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Require documenting changes with versionadded and versionchanged

2015-10-15 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2015-10-15 08:54:47 -0700:
> Brant Knudson wrote:
> >
> >
> > On Thu, Oct 15, 2015 at 5:52 AM, Victor Stinner  > > wrote:
> >
> > Hi,
> >
> > I propose that changes must now be documented in Oslo libraries. If
> > a change is not documented, it must *not* be approved.
> >
> > IMHO it's very important to document all changes. Otherwise, it
> > becomes really hard to guess if a specific parameter or a specific
> > function can be used just by reading the doc :-/ And we should not
> > force users to always upgrade the Oslo libraries to the latest
> > versions. It doesn't work on stable branches :-)
> >
> > Currently, ".. versionadded:: x.y" and ".. versionchanged:: x.y" are
> > not (widely) used in Oslo libraries. A good start would be to dig
> > the Git history to add these tags. I started to do this for the
> > oslo.config library:
> > https://review.openstack.org/#/c/235232/
> >
> > I'm interested to write similar changes for other Oslo libraries.
> >
> > Because I created this change, I started to vote -1 on all oslo.config
> > patches which change the API without documenting the change.
> >
> > What do you think?
> >
> > Victor
> >
> >
> > Submitters don't know what release their change is going to get into.
> > They might submit the review when version 1.1.0 is current, so they mark
> > it as added in 1.2.0, but then the change doesn't get merged until
> > after 1.4.0 is tagged. Also, the submitter doesn't know what the next
> > release is going to be tagged as, since it might be 1.2.0 or it might be
> > 2.0.0.
> >
> > So this will create extra churn as reviews have to be updated with the
> > release #, and the docs will likely be wrong when reviewers forget to
> > check it (unless this can be automated).
> >
> > We have the same problem with documenting when something is deprecated.
> 
> +1
> 
> I had this problem with deprecation versioning (the debtcollector 
> library functions take version="XYZ" and removal_version="ABC" params, 
> see 
> http://docs.openstack.org/developer/debtcollector/examples.html#further-customizing-the-emitted-messages)
>  
> and it is pretty hard to get those two numbers right, especially with 
> weekly releases and not knowing when a review will merge... I'm not 
> saying we shouldn't try to do this, but we just have to figure out how 
> to do it in a smart way.
> 
> Perhaps there needs to be a gerrit-based way to add these, so for example 
> a review submitter would write:
> 
> ".. versionadded:: $FILL_ME_IN_WHEN_MERGED" and ".. versionchanged:: 
> $FILL_ME_IN_WHEN_MERGED" or something, and gerrit when merging code 
> would change those into real numbers...

Gerrit can't change the commit because then the hash won't match.

Stand by for some announcements about release notes management coming
next week that will help solve this problem in a different way.

Doug
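
For reference, the debtcollector pattern Josh mentions above looks roughly
like this (a sketch only; new_helper() is a made-up name, and the two version
numbers are exactly the guesses that are hard to get right before the change
actually merges):

    from debtcollector import removals

    @removals.remove(message="use new_helper() instead",
                     version="1.2.0", removal_version="2.0.0")
    def old_helper():
        """Deprecated helper kept only for backwards compatibility."""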

> 
> >
> > I don't think it's worth it to document in which release something is
> > added in the oslo libs. We're not charging for this stuff so if you want
> > the online docs to match your code use the latest. Or check out the tag
> > and generate the docs for the release you're on to look at to see if the
> > feature is there.
> >
> > :: Brant
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

2015-10-15 Thread Karthik Natarajan
+1

From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Thursday, October 15, 2015 8:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Tacker] Proposing Bob Haddleton to core team

I would like to nominate Bob Haddleton to the Tacker core team.

In the current Liberty cycle Bob made significant, across-the-board 
contributions to Tacker [1]. Starting with many usability enhancements and bug 
fixes, Bob has shown commitment and consistently produced high quality code. To 
cap it off, he recently landed Tacker's health monitoring framework to enable 
loadable VNF monitoring. His knowledge of the NFV area is a huge plus for Tacker 
as we embark on even greater challenges in the Mitaka cycle.

Along these lines, we are actively looking to expand Tacker's core reviewer team. 
If you are interested in the NFV Orchestration / VNF Manager space, please stop 
by and explore the Tacker project [2].

Tacker team,

Please provide your -1 / +1 votes.

- Sridhar

[1] http://stackalytics.com/report/users/bob-haddleton
[2] https://wiki.openstack.org/wiki/Tacker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Tempest] where fwaas tempest tests should be?

2015-10-15 Thread Assaf Muller
On Thu, Oct 15, 2015 at 10:22 AM, Matthew Treinish 
wrote:

> On Thu, Oct 15, 2015 at 09:02:22AM -0400, Assaf Muller wrote:
> > On Thu, Oct 15, 2015 at 7:25 AM, Takashi Yamamoto  >
> > wrote:
> >
> > > hi,
> > >
> > > I'm looking into the fwaas tempest tests and have a question about code
> > > location.
> > >
> > > currently,
> > >
> > > - fwaas api tests and its rest client are in neutron repo
> > > - there are no fwaas scenario tests
> > >
> > > eventually,
> > >
> > > - fwaas api tests should be moved into neutron-fwaas repo
> > > - fwaas scenario tests should be in neutron-fwaas repo too.
> > >
> >
> > I believe scenario tests that invoke APIs outside of Neutron should
> > stay (Or be introduced to) Tempest.
>
> So testing the neutron advanced services was actually one of the first
> things
> we decided was out of scope for tempest. (like well over a year ago) It
> took
> some time to get equivalent testing setup elsewhere, but tests and support
> for
> the advanced services were removed from tempest on purpose.


This is for both *aaS API tests and scenario tests?


> I'd suggest that
> you look at the tempest plugin interface:
>
> http://docs.openstack.org/developer/tempest/plugin.html
>
> if you'd like to make the fwaas tests (or any other adv. service tests)
> integrate more cleanly with the rest of tempest.
>
> -Matt Treinish
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] establishing release liaisons for mitaka

2015-10-15 Thread Sergey Lukjanov
Doug, just to confirm - for Sahara I'll continue being the release liaison.

On Thu, Oct 15, 2015 at 3:40 AM, Lana Brindley 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 15/10/15 01:25, Doug Hellmann wrote:
> > As with the other cross-project teams, the release management team
> > relies on liaisons from each project to be available for coordination of
> > work across all teams. It's the start of a new cycle, so it's time to
> > find those liaison volunteers.
> >
> > We are working on updating the release documentation as part of the
> > Project Team Guide. Release liaison responsibilities are documented in
> > [0], and we will update that page with more details over time.
> >
> > In the past we have defaulted to having the PTL act as liaison if no
> > alternate is specified, and we will continue to do that during Mitaka.
> > If you plan to delegate release work to a liaison, especially for
> > submitting release requests, please update the wiki [1] to provide their
> > contact information. If you plan to serve as your team's liaison, please
> > add your contact details to the page.
>
> While PTLs are considering this important role, (and editing the
> cross-project liaison wiki page!), please consider who you would like to be
> your documentation liaison as well[1]. The docs team relies on the docs
> CPLs to provide technical depth on the things we write, so having a subject
> matter expert means we are going to be providing better documentation for
> your project.
>
> Thanks,
> Lana
>
> 1: https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
>
>
> - --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQEcBAEBCAAGBQJWHvYAAAoJELppzVb4+KUy9ysH/i/sksBb128N7uBfhOpHukDM
> 2Z1e8Dmx1rS5F2x4k4QIBqMPIpgAhtyryjCWTwi5rz/62bVG1rT/ArTJiqV+nQT0
> iLXBVVI++iZL5eAkaR3/VVyeOUUXoRIe/t+5MMrTZaVOB1nM1UkNLAWcoS6xaATf
> Y96dcv7EAhyw2Mmd8TlLL3/VBTZ4DYO1aQaQbAupGlNdkzOeHnLwU4kdH0T3ajKc
> ooSi5PYTY8blFTw/F1LfbPM8HkaCvV81YU8eSfRtLQeNg9WjgE5cMTXn7HIupKgA
> YmKCaQfE2rGGZpGF7d16C5A8UbGdKLnsZ6XjtjcVVbxB5diAMFSk3xMgjDfVdXM=
> =EZFY
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-10-15 Thread Matthew Thode
On 10/15/2015 02:17 PM, Matt Riedemann wrote:
> 
> 
> On 10/15/2015 2:10 PM, Matthew Thode wrote:
>> On 10/15/2015 02:04 PM, Robert Collins wrote:
>>> On 16 October 2015 at 08:01, Matthew Thode
>>>  wrote:
>>>> So, this is my perspective in packaging liberty for Gentoo.
>>>>
>>>> We can have multiple versions of a package available to install; because
>>>> of this we generally directly translate the valid dependency versions
>>>> from requirements.
>>>
>>> Cool.
>>>
>>>> this
>>>>   oslo.concurrency>=2.3.0 # Apache-2.0
>>>> becomes this
>>>>   >=dev-python/oslo-concurrency-2.3.0[${PYTHON_USEDEP}]
>>>>
>>>> Now what happens when I package something from mitaka (2.7.0 in this
>>>> case would be mitaka)?  It's undefined behaviour as far as I know.
>>>>
>>>> Basically, I can see no reason why the policy of caps changed from kilo
>>>> to liberty; it was actually nice to package for liberty. I can see this
>>>> going very bad very quickly.
>>>
>>> They changed because it was causing huge trauma and multiple day long
>>> gate wedges around release times. Covered in detail here -
>>> http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html
>>>
>>>
>>>> Where are my caps?
>>>
>>> The known good versions of dependencies for liberty are
>>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/liberty
>>>
>>>
>> That's good. Does that represent an upper constraint for the lower
>> constraint imposed by the package?  Why is this kept separate?
>> Keeping it separate means that it's not trivial to merge them with
>> what's in each package's requirements.
> 
> I'm not sure I understand the first question but I believe that u-c is
> automatically adjusted and if there is a conflict with the minimum
> version required of a dependency then the cap is adjusted (or vice-versa).
> 
> It's separate, at least in part, because the changes to
> global-requirements are synced to all projects in the projects.txt file
> in the requirements repo, which causes a bunch of churn to get those
> changes approved in the registered projects and then released
> appropriately.
> 
> The global sync to the ecosystem isn't fun, I'll agree, but I do think
> that things have been better since Juno/Kilo because (1) we don't allow
> <= caps in g-r anymore (we were not allowing patch version updates which
> wedged us several times) and (2) we're better about releasing things
> with minor version updates per semver - whereas in the past the cats
> were releasing on their own volition and picking the version they
> thought was best, which crept into having multiple versions that could
> be acceptable across branches, and we'd have wedges those ways too. I
> think a lot of that has been fixed by the openstack/releases repo so
> that the cats now have to line up for the release of their library with
> a centralized team.
> 

I'd agree with this. I still don't know what to cap things to, so I need
to figure out what the caps should be.  It could be hard to sync
across projects, but like you say, most of that has gotten better since then.
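
For my own notes, the way I currently read it, a distro-side cap would come
from combining the lower bound in a project's requirements with the pin in
the stable/liberty upper-constraints.txt; the versions below are made up just
to show the shape:

    # project requirements (lower bound only)
    oslo.concurrency>=2.3.0 # Apache-2.0

    # stable/liberty upper-constraints.txt (exact known-good pin)
    oslo.concurrency===2.6.0

    # derived range for the Gentoo package
    >=dev-python/oslo-concurrency-2.3.0[${PYTHON_USEDEP}]
    <=dev-python/oslo-concurrency-2.6.0[${PYTHON_USEDEP}]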

>>
>>> You should be able to trivially pull those versions out and into your
>>> liberty set of packages.
>>>
>>> Theres another iteration on this in discussion now, which has to do
>>> with backwards compat *and testing of cap changes*, we'll be in the
>>> backwards compat fishbowl session in Tokyo if you're interested.
>>>
>>> -Rob
>>>
>>
>> I'll be at the fishbowl :D
>>
> 

Does anyone have a sched link to that fishbowl? I can't find it :(

-- 
Matthew Thode

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Mitaka Summit Schedule

2015-10-15 Thread Jesse Pretorius
Hi everyone,

I've added the final details for the summit sessions to the etherpad:
https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit

Our sessions are open to anyone with an interest in deploying OpenStack
with Ansible.

Note that our two primary topics for discussion are:

Image-based Deployments:
We ask the questions:
 - What benefits are there to using image-based deployments?
 - How are people doing it today?

Production-ready Upgrades
We ask the questions:
 - What does it take to do upgrades in a production OpenStack environment?
 - How can we orchestrate it successfully?

We are very interested in gathering input from a broad group of
OpenStack operators on the above topics and would love to have your
feedback!

-- 
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-15 Thread Yuriy Taraday
On Wed, Oct 14, 2015 at 3:08 AM Zaro  wrote:

> Hello All,
>
> The openstack-infra team would like to upgrade from our Gerrit 2.8 to
> Gerrit 2.11.  We are proposing to do the upgrade shortly after the
> Mitaka summit.  The main motivation behind the upgrade is to allow us
> to take advantage of some of the new REST api, ssh commands, and
> stream events features.  Also we wanted to stay closer to upstream so
> it will be easier to pick up more recent features and fixes.
>
> We want to let everyone know that there is a big UI change in Gerrit
> 2.11.  The change screen (CS), which is the main view for a patchset,
> has been completely replaced with a new change screen (CS2).  While
> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
> Openstack land is really just using the old CS.  CS2 really wasn't
> ready in 2.8 and really should never be used in that version.  The CS2
> has come a long way since then and many other big projects have moved
> to using Gerrit 2.11 so it's not a concern any longer.  If you would
> like a preview of Gerrit 2.11 and maybe help us test it, head over to
> http://review-dev.openstack.org.  If you are very opposed to CS2 then
> you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
> neither option works for you then maybe you can help us create a new
> alternative :)
>
> We are soliciting feedback so please let us know what you think.
>

I think that's great news!
I've been using CS2 since it became an option and it even (mostly) worked
fine for me, so I've been waiting a long time for this upgrade.

Where should I direct issues I find in review-dev.openstack.org? I've found
two so far:
- "Unified diff" button in diff view (next to navigation arrows) always
leads to Internal Server Error;
- cgit links in change screen have "%2F" in URL instead of "/" which leads
to Apache's Not Found instead of cgit's one.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] opening stable/liberty

2015-10-15 Thread Doug Hellmann
One of the first steps for opening stable/liberty is to update the
version settings in the branches to no longer use pre-versioning.
Thierry submitted a bunch of patches to projects [0], and some are
merging now.

Others are running into test failures, though, and need attention from
project teams. We need release liaisons to look at them, or find someone
to look at them, and take over the patches to add whatever fixes are
needed so we can land them.

Please respond to this thread if you're taking over the patch for your
project so we know who is working on each.

Doug

[0] 
https://review.openstack.org/#/q/branch:stable/liberty+topic:next-liberty-stable,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-15 Thread Zaro
Thanks for the suggestion Markus.  I think the path for those types of
suggestions would be through a spec:
http://specs.openstack.org/openstack-infra/infra-specs/


Thanks for reporting these Yuriy.  I've created a checklist[1] to
track things to verify before we actually do anything.  I think at
this point, the etherpad might be the most appropriate place to track
these types of issues.
>
> Where should I direct issues I find in review-dev.openstack.org? I've found
> two so far:
> - "Unified diff" button in diff view (next to navigation arrows) always
> leads to Internal Server Error;

Known issue, fixed here: https://gerrit-review.googlesource.com/#/c/71500/

> - cgit links in change screen have "%2F" in URL instead of "/" which leads
> to Apache's Not Found instead of cgit's one.

I believe the fix for this is to just disable the gitweb.urlEncode
configuration: 
https://gerrit-review.googlesource.com/Documentation/config-gerrit.html
Will try it out shortly.
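
Presumably that just means something like this in gerrit.config (still a
guess until it's actually tried):

    [gitweb]
      urlEncode = false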


[1] https://etherpad.openstack.org/p/test-gerrit-2.11

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

