Re: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

2015-10-07 Thread Antoni Segura Puimedon
On Wed, Oct 7, 2015 at 9:40 PM, Egor Guz  wrote:

> Gal, thx a lot. I have created the poll
> http://doodle.com/poll/udpdw77evdpnsaq6 where everyone can vote for a time
> slot.
>

Thanks Egor


>
> —
> Egor
>
>
> From: Gal Sagie >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >>
> Date: Tuesday, October 6, 2015 at 12:08
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Eran Gampel >,
> Antoni Segura Puimedon >,
> Irena Berezovsky >,
> Mohammad Banikazemi >, Taku Fukushima
> >, Salvatore
> Orlando >, sky fei <
> feisk...@gmail.com>, "
> digambarpati...@yahoo.co.in" <
> digambarpati...@yahoo.co.in>,
> Digambar Patil >
> Subject: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks
>
> Hello All,
>
> I have opened a Trello board to track all assigned Kuryr tasks and their
> assignees, in addition to all the unassigned tasks we have defined.
>
> You can visit and look at the board here [1].
> Please email back if I missed you or any task that you are working on, or a
> task that you think needs to be on that list.
>
> This is only a temporary solution until we get everything organised; we
> plan to track everything with Launchpad bugs (and the assigned blueprints).
>
> If you see any task from this list which doesn't have an assignee and you
> feel you have the time and the desire to contribute, please contact me and
> I will provide guidance.
>
> Thanks
> Gal
>
> [1] https://trello.com/b/cbIAXrQ2/project-kuryr
>


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-10-07 12:28:36 -0700:
> On 07/10/15 13:36, Ed Leafe wrote:
> > Several months ago I proposed an experiment [0] to see if switching the 
> > data model for the Nova scheduler to use Cassandra as the backend would be 
> > a significant improvement as opposed to the current design using multiple 
> > copies of the same data (compute_node in MySQL DB, HostState in memory in 
> > the scheduler, ResourceTracker in memory in the compute node) and trying to 
> > keep them all in sync via passing messages.
> 
> It seems to me (disclaimer: not a Nova dev) that which database to use 
> is completely irrelevant to your proposal, which is really about moving 
> the scheduling from a distributed collection of Python processes with 
> ad-hoc (or sometimes completely missing) synchronisation into the 
> database to take advantage of its well-defined semantics. But you've 
> framed it in such a way as to guarantee that this never gets discussed, 
> because everyone will be too busy arguing about whether or not Cassandra 
> is better than Galera.
> 

Your point is valid, Zane: the idea is more about having a
synchronized view of the scheduling state, and not about Cassandra.

I think Cassandra makes the proposal more realistic and easier to think
about, though, as Cassandra is focused on problems of the scale that this
represents. Galera won't do this well at any kind of scale without
the added complexity and inefficiency of cells. So whatever capacity a single
Galera node has to handle the write churn of a truly synchronized scheduler
would be the maximum capacity of one cell.

I like the concrete nature of this proposal, and suggest people review
it as a whole, and not try to reduce it to its components without an
extremely strong reason to do so.



Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Monty Taylor

On 10/07/2015 09:24 AM, Sean Dague wrote:

On 10/07/2015 08:57 AM, Thierry Carrez wrote:

Sean Dague wrote:

We're starting to make plans for the next cycle. Long term plans are
getting made for details that would happen in one or two cycles.

As we already have the locations for the N and O summits I think we
should do the naming polls now and have names we can use for this
planning instead of letters. It's pretty minor but it doesn't seem like
there is any real reason to wait and have everyone come up with working
names that turn out to be confusing later.


That sounds fair. However the release naming process currently states[1]:

"""
The process to choose the name for a release begins once the location of
the design summit of the release to be named is announced and no sooner
than the opening of development of the previous release.
"""

...which if I read it correctly means we could pick N now, but not O. We
might want to change that (again) first.

[1] http://governance.openstack.org/reference/release-naming.html


Right, it seems like we should change it so that we can do naming as
soon as the location is announced.

For projects like Nova that are trying to plan things more than one
cycle out, having those names to hang those features on is massively
useful (as danpb also stated). Delaying for bureaucratic reasons just
seems silly. :)


So, for what it's worth, I remember discussing this when we discussed 
the current process, and the change you are proposing was one of the 
options put forward when we talked about it.


The reason for not doing all of them as soon as we know them was to keep 
a sense of ownership by the people who are actually working on the 
thing. Barcelona is a long way away and we'll all likely have rage quit 
by then, leaving the electorate for the name largely disjoint from the 
people working on the release.


Now, I hear you - and I'm not arguing that position. (In fact, I believe 
my original thought was in line with what you said here) BUT - I mostly 
want to point out that we have had this discussion, the discussion was 
not too long ago, it covered this point, and I sort of feel like if we 
have another discussion on naming process people might kill us with 
pitchforks.


Monty




Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Matthias Runge
On Wed, Oct 07, 2015 at 03:02:47PM +0200, Christian Berendt wrote:
> Is this list correct?
> 
> M = Tokyo
> N = Atlanta
> O = Barcelona
> P = ?

IIRC N should be Austin instead of Atlanta.
-- 
Matthias Runge 



Re: [openstack-dev] [Fuel] Core Reviewers groups restructure

2015-10-07 Thread Dmitry Borodaenko
While we're waiting for the openstack-infra team to finish the stackforge
migration and review our ACL changes, I implemented the rest of the
changes agreed in this thread:

- Fuel-core group removed everywhere.

- Per-project core groups populated with individual reviewers as quoted
  below. Exceptions:

  - Dennis Dmitriev was approved as core in fuel-qa, fuel-devops, and
fuel-ostf after this thread was started;

  - fuel-upgrades already excludes fuel-core so I couldn't modify it,
and the current list doesn't match Mike's email. It is up to current
cores [0] to bring it up to date.

[0] https://review.openstack.org/#/admin/groups/1004,members

fuel-specs and fuel-*-release groups will have to wait until ACL update
is merged (i.e. after October 17).

-- 
Dmitry Borodaenko

On Thu, Oct 01, 2015 at 03:59:47PM -0700, Dmitry Borodaenko wrote:
> This commit brings Fuel ACLs in sync with each other and in line with
> the agreement on this thread:
> https://review.openstack.org/230195
> 
> Please review carefully. Note that I intentionally didn't touch any of
> the plugins ACLs, primarily to save time for us and the
> openstack-infra team until after the stackforge->openstack namespace
> migration.
> 
> On Mon, Sep 21, 2015 at 4:17 PM, Mike Scherbakov
>  wrote:
> > Thanks guys.
> > So for fuel-octane then there are no actions needed.
> >
> > For fuel-agent-core group [1], looks like we are already good (it doesn't
> > have fuel-core group nested). But it would need to include fuel-infra group
> > and remove Aleksandra Fedorova (she will be a part of fuel-infra group).
> >
> > python-fuel-client-core [2] is good as well (no nested fuel-core). However,
> > there is another group python-fuelclient-release [3], which has to be
> > eliminated, and main python-fuelclient-core would just have fuel-infra group
> > included for maintenance purposes.
> >
> > [1] https://review.openstack.org/#/admin/groups/995,members
> > [2] https://review.openstack.org/#/admin/groups/551,members
> > [3] https://review.openstack.org/#/admin/groups/552,members
> >
> >
> > On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh  wrote:
> >>
> >> FYI, we have a separate core group for stackforge/fuel-octane repository
> >> [1].
> >>
> >> I'm supporting the move to modularization of Fuel with cleaner separation
> >> of authority and better defined interfaces. Thus, I'm +1 to such a change 
> >> as
> >> a part of that move.
> >>
> >> [1] https://review.openstack.org/#/admin/groups/1020,members
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov
> >>  wrote:
> >>>
> >>> Hi all,
> >>> As part of my larger proposal on improvements to the code review workflow
> >>> [1], we need to have cores for repositories, not for the whole of Fuel.
> >>> It is the path we have been taking for a while, with new core reviewers
> >>> added to specific repos only. Now we need to complete this work.
> >>>
> >>> My proposal is:
> >>>
> >>> 1. Get rid of the one common fuel-core [2] group, members of which can
> >>>    merge code anywhere in Fuel. Some members of this group may cover a
> >>>    couple of repositories, but can't really be cores in all repos.
> >>> 2. Extend existing groups, such as fuel-library [3], with members from
> >>>    fuel-core who are keeping up with a large number of reviews / merges.
> >>>    This data can be queried at Stackalytics.
> >>> 3. Establish a new group "fuel-infra", and ensure that it's included in
> >>>    every other core group. This is for maintenance purposes; it is
> >>>    expected to be used only in exceptional cases. The Fuel Infra team
> >>>    will have to decide whom to include in this group.
> >>> 4. Ensure that fuel-plugin-* repos will not be affected by removal of
> >>>    the fuel-core group.
> >>>
> >>> #2 needs specific details. Stackalytics can show active cores easily; we
> >>> can look at people marked with *:
> >>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
> >>> fuel-web; change the link for other repos accordingly. If people were
> >>> added specifically to a particular group, leave them as is (some of them
> >>> are no longer active, but let's clean them up separately from this group
> >>> restructure process).
> >>>
> >>> fuel-library-core [3] group will have following members: Bogdan D.,
> >>> Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
> >>> fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin, Vitaly
> >>> Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
> >>> fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
> >>> fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
> >>> fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
> >>> Urlapova
> >>> fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
> >>> Konstantinov, Olga Gusarenko
> >>> fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry Pyzhov,
> >>> Sergii Golovatyuk, Vladimir 

Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-07 Thread Joshua Harlow

Isn't #2 the right approach?

Even if it might be more 'work', I personally would prefer #2 and do it
right (and make it easier to do the right thing in the future via
scripts, automation, or other tooling) vs. the other mentioned approaches.


If we were consuming, say, a 3rd party library and that 3rd party library
had/has a security issue, isn't the above the same thing that you would
have to do?


My 2 cents.

Matt Riedemann wrote:

Here's why:

https://review.openstack.org/#/c/220622/

That's marked as fixing an OSSA which means we'll have to backport the
fix in nova but it depends on a change to strutils.mask_password in
oslo.utils, which required a release and a minimum version bump in
global-requirements.

To backport the change in nova, we either have to:

1. Copy mask_password out of oslo.utils and add it to nova in the
backport or,

2. Backport the oslo.utils change to a stable branch, release it as a
patch release, bump minimum required version in stable g-r and then
backport the nova change and depend on the backported oslo.utils stable
release - which also makes it a dependent library version bump for any
packagers/distros that have already frozen libraries for their stable
releases, which is kind of not fun.

So I'm thinking this is one of those things that should ultimately live
in oslo-incubator so it can live in the respective projects. If
mask_password were in oslo-incubator, we'd have just fixed and
backported it there and then synced to nova on master and stable
branches, no dependent library version bumps required.

Plus I miss the good old days of reviewing oslo-incubator
syncs...(joking of course).
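
As a side note for anyone who hasn't used the helper being discussed, below
is a rough sketch of how strutils.mask_password from oslo.utils is typically
called. The exact set of keys that get masked varies between oslo.utils
releases, so treat the output shown in the comment as approximate:

# Rough illustration of the oslo.utils helper under discussion; the set of
# fields that get masked depends on the oslo.utils release in use.
from oslo_utils import strutils

payload = "req body: {'user': 'admin', 'password': 'S3cr3t!'}"

# Replace the password value with the given mask before logging it.
safe = strutils.mask_password(payload, secret='***')
print(safe)   # roughly: req body: {'user': 'admin', 'password': '***'}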





Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Anita Kuno
On 10/07/2015 06:22 PM, Monty Taylor wrote:
> On 10/07/2015 09:24 AM, Sean Dague wrote:
>> On 10/07/2015 08:57 AM, Thierry Carrez wrote:
>>> Sean Dague wrote:
 We're starting to make plans for the next cycle. Long term plans are
 getting made for details that would happen in one or two cycles.

 As we already have the locations for the N and O summits I think we
 should do the naming polls now and have names we can use for this
 planning instead of letters. It's pretty minor but it doesn't seem like
 there is any real reason to wait and have everyone come up with working
 names that turn out to be confusing later.
>>>
>>> That sounds fair. However the release naming process currently
>>> states[1]:
>>>
>>> """
>>> The process to choose the name for a release begins once the location of
>>> the design summit of the release to be named is announced and no sooner
>>> than the opening of development of the previous release.
>>> """
>>>
>>> ...which if I read it correctly means we could pick N now, but not O. We
>>> might want to change that (again) first.
>>>
>>> [1] http://governance.openstack.org/reference/release-naming.html
>>
>> Right, it seems like we should change it so that we can do naming as
>> soon as the location is announced.
>>
>> For projects like Nova that are trying to plan things more than one
>> cycle out, having those names to hang those features on is massively
>> useful (as danpb also stated). Delaying for bureaucratic reasons just
>> seems silly. :)
> 
> So, for what it's worth, I remember discussing this when we discussed
> the current process, and the change you are proposing was one of the
> options put forward when we talked about it.
> 
> The reason for not doing all of them as soon as we know them was to keep
> a sense of ownership by the people who are actually working on the
> thing. Barcelona is a long way away and we'll all likely have rage quit
> by then, leaving the electorate for the name largely disjoint from the
> people working on the release.
> 
> Now, I hear you - and I'm not arguing that position. (In fact, I believe
> my original thought was in line with what you said here) BUT - I mostly
> want to point out that we have had this discussion, the discussion was
> not too long ago, it covered this point, and I sort of feel like if we
> have another discussion on naming process people might kill us with
> pitchforks.

You are assuming that not having this conversation might shield you from
the pitchforks.

Anita.

> 
> Monty
> 
> 




[openstack-dev] [nova] Intel PCI CI appears lost in the weeds

2015-10-07 Thread Matt Riedemann
Was seeing immediate posts on changes which I knew were bogus, and 
getting 404s on the logs:


http://52.27.155.124/232252/1

Anyone know what's going on?

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [puppet] WARNING - breaking backwards compatibility in puppet-keystone

2015-10-07 Thread Rich Megginson

On 10/07/2015 03:54 PM, Matt Fischer wrote:


I thought the agreement was that default would be assumed so that we 
didn't break backwards compatibility?




puppet-heat had already started using domains, and had already written 
their code based on the implementation where an unqualified name was 
allowed if it was unique among all domains.  That code will need to 
change to specify the domain.  Any other code that was already using 
domains (which I'm assuming is hardly any, if at all) will also need to 
change.



On Oct 7, 2015 10:35 AM, "Rich Megginson" > wrote:


tl;dr You must specify a domain when using domain scoped resources.

If you are using domains with puppet-keystone, there is a proposed
patch that will break backwards compatibility.

https://review.openstack.org/#/c/226624/ Replace indirection calls

"Indirection calls are replaced with #fetch_project and
#fetch_user methods
using python-openstackclient (OSC).

Also removes the assumption that if a resource is unique within a
domain space
then the domain doesn't have to be specified."

It is the last part which is causing backwards compatibility to be
broken.  This patch requires that a domain scoped resource _must_
be qualified with the domain name if _not_ in the 'Default'
domain.  Previously, you did not have to qualify a resource name
with the domain if the name was unique in _all_ domains.  The
problem was this code relied heavily on puppet indirection, and
was complex and difficult to maintain.  We removed it in favor of
a very simple implementation: if the name is not qualified with a
domain, it must be in the 'Default' domain.

Here is an example from puppet-heat - the 'heat_admin' user has
been created in the 'heat_stack' domain previously.

ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
  'roles' => ['admin'],
})

This means "assign the user 'heat_admin' in the unspecified domain
to have the domain scoped role 'admin' in the 'heat_stack'
domain". It is a domain scoped role, not a project scoped role,
because in "@::heat_stack" there is no project, only a domain.
Note that the domain for the 'heat_admin' user is unspecified. In
order to specify the domain you must use
'heat_admin::heat_stack@::heat_stack'. This is the recommended fix
- to fully qualify the user + domain.

The breakage manifests itself like this, from the logs::

2015-10-02 06:07:39.574 | Debug: Executing '/usr/bin/openstack
user show --format shell heat_admin --domain Default'
2015-10-02 06:07:40.505 | Error:
/Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat]:
Could not evaluate: No user heat_admin with domain  found

This is from the keystone_user_role code. Since the role user was
specified as 'heat_admin' with no domain, the keystone_user_role
code looks for 'heat_admin' in the 'Default' domain and can't find
it, and raises an error.

Right now, the only way to specify the domain is by adding
'::domain_name' to the user name, as
'heat_admin::heat_stack@::heat_stack'.  Sofer is working on a way
to add the domain name as a parameter of keystone_user_role -
https://review.openstack.org/226919 - so in the near future you
will be able to specify the resource like this:


ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
  'roles' => ['admin'],
  'user_domain_name' => 'heat_stack',
})









Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-07 Thread Robert Collins
On 8 October 2015 at 08:38, Matt Riedemann  wrote:
> Here's why:
>
> https://review.openstack.org/#/c/220622/
>
> That's marked as fixing an OSSA which means we'll have to backport the fix
> in nova but it depends on a change to strutils.mask_password in oslo.utils,
> which required a release and a minimum version bump in global-requirements.
>
> To backport the change in nova, we either have to:
>
> 1. Copy mask_password out of oslo.utils and add it to nova in the backport
> or,
>
> 2. Backport the oslo.utils change to a stable branch, release it as a patch
> release, bump minimum required version in stable g-r and then backport the
> nova change and depend on the backported oslo.utils stable release - which
> also makes it a dependent library version bump for any packagers/distros
> that have already frozen libraries for their stable releases, which is kind
> of not fun.
>
> So I'm thinking this is one of those things that should ultimately live in
> oslo-incubator so it can live in the respective projects. If mask_password
> were in oslo-incubator, we'd have just fixed and backported it there and
> then synced to nova on master and stable branches, no dependent library
> version bumps required.
>
> Plus I miss the good old days of reviewing oslo-incubator syncs...(joking of
> course).

What's wrong with 2?  I mean, other than the work needed *because* we
made branches of oslo.utils: something I hope we can stop doing in M
(I have a draft spec up about this...)

Libraries have security bugs too, and packagers/distros need to update
them as well as the API servers: this is one of the reasons we have
backpressure on libraries being admitted into our dependency chain.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Chris Friesen

On 10/07/2015 11:36 AM, Ed Leafe wrote:


I've finally gotten around to finishing writing up that proposal [1], and I'd
like to hope that it would be the basis for future discussions about
addressing some of the underlying issues that exist in OpenStack for
historical reasons, and how we might rethink these choices today. I'd prefer
comments and discussion here on the dev list, so that all can see your ideas,
but I will be in Tokyo for the summit, and would also welcome some informal
discussion there, too.

-- Ed Leafe

 [1] http://blog.leafe.com/reimagining_scheduler/


I've wondered for a while (ever since I looked at the scheduler code, really) 
why we couldn't implement more of the scheduler as database transactions.


I haven't used Cassandra, so maybe you can clarify something about updates 
across a distributed DB.  I just read up on lightweight transactions, and it 
says that they're restricted to a single partition.  Is that an acceptable 
limitation for this usage?


Some points that might warrant further discussion:

1) Some resources (RAM) only require tracking amounts.  Other resources (CPUs, 
PCI devices) require tracking allocation of specific individual host resources 
(for CPU pinning, PCI device allocation, etc.).  Presumably for the latter we 
would have to actually do the allocation of resources at the time of the 
scheduling operation in order to update the database with the claimed resources 
in a race-free way.


2) Are you suggesting that all of nova switch to Cassandra, or just the 
scheduler and resource tracking portions?  If the latter, how would we handle 
things like pinned CPUs and PCI devices that are currently associated with 
specific instances in the nova DB?


3) The concept of the compute node updating the DB when things change is really 
orthogonal to the new scheduling model.  The current scheduling model would 
benefit from that as well.


4) It seems to me that to avoid races we need to do one of the following.  Which 
are you proposing?
a) Serialize the entire scheduling operation so that only one instance can 
schedule at once.
b) Make the evaluation of filters and claiming of resources a single atomic DB 
transaction.
c) Do a loop where we evaluate the filters, pick a destination, try to claim the 
resources in the DB, and retry the whole thing if the resources have already 
been claimed.
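
To make option (c) a bit more concrete, here is a minimal, hypothetical sketch 
of an optimistic claim loop built on a Cassandra lightweight transaction (a 
conditional UPDATE ... IF, which is indeed limited to a single partition, here 
the host row). The keyspace, table and column names are invented for 
illustration and are not part of Ed's proposal:

# Hypothetical sketch of option (c): evaluate a simple RAM filter, pick a
# host, then claim the RAM with a Cassandra lightweight transaction and
# retry if another scheduler raced us.  Schema names are illustrative.
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('scheduler')

def claim_host(ram_mb, candidate_hosts, max_retries=5):
    for _ in range(max_retries):
        for host in candidate_hosts:
            rows = list(session.execute(
                "SELECT free_ram_mb FROM hosts WHERE host_id = %s",
                (host,)))
            if not rows or rows[0].free_ram_mb < ram_mb:
                continue  # host does not pass the RAM filter
            free = rows[0].free_ram_mb
            # Conditional update: only applies if free_ram_mb is still the
            # value we read, i.e. nobody else claimed RAM in the meantime.
            result = list(session.execute(
                "UPDATE hosts SET free_ram_mb = %s "
                "WHERE host_id = %s IF free_ram_mb = %s",
                (free - ram_mb, host, free)))
            if result[0][0]:  # first column of an LWT result is [applied]
                return host
        # Every candidate failed the filter or lost the race; try again.
    return None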


Chris



Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-07 Thread Davanum Srinivas
Matt,

My vote is for #1, as we should kill oslo-incubator in Mitaka.

Thanks,
Dims

On Wed, Oct 7, 2015 at 3:38 PM, Matt Riedemann 
wrote:

> Here's why:
>
> https://review.openstack.org/#/c/220622/
>
> That's marked as fixing an OSSA which means we'll have to backport the fix
> in nova but it depends on a change to strutils.mask_password in oslo.utils,
> which required a release and a minimum version bump in global-requirements.
>
> To backport the change in nova, we either have to:
>
> 1. Copy mask_password out of oslo.utils and add it to nova in the backport
> or,
>
> 2. Backport the oslo.utils change to a stable branch, release it as a
> patch release, bump minimum required version in stable g-r and then
> backport the nova change and depend on the backported oslo.utils stable
> release - which also makes it a dependent library version bump for any
> packagers/distros that have already frozen libraries for their stable
> releases, which is kind of not fun.
>
> So I'm thinking this is one of those things that should ultimately live in
> oslo-incubator so it can live in the respective projects. If mask_password
> were in oslo-incubator, we'd have just fixed and backported it there and
> then synced to nova on master and stable branches, no dependent library
> version bumps required.
>
> Plus I miss the good old days of reviewing oslo-incubator syncs...(joking
> of course).
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [puppet] WARNING - breaking backwards compatibility in puppet-keystone

2015-10-07 Thread Matt Fischer
I thought the agreement was that default would be assumed so that we didn't
break backwards compatibility?
On Oct 7, 2015 10:35 AM, "Rich Megginson"  wrote:

> tl;dr You must specify a domain when using domain scoped resources.
>
> If you are using domains with puppet-keystone, there is a proposed patch
> that will break backwards compatibility.
>
> https://review.openstack.org/#/c/226624/ Replace indirection calls
>
> "Indirection calls are replaced with #fetch_project and #fetch_user methods
> using python-openstackclient (OSC).
>
> Also removes the assumption that if a resource is unique within a domain
> space
> then the domain doesn't have to be specified."
>
> It is the last part which is causing backwards compatibility to be
> broken.  This patch requires that a domain scoped resource _must_ be
> qualified with the domain name if _not_ in the 'Default' domain.
> Previously, you did not have to qualify a resource name with the domain if
> the name was unique in _all_ domains.  The problem was this code relied
> heavily on puppet indirection, and was complex and difficult to maintain.
> We removed it in favor of a very simple implementation: if the name is not
> qualified with a domain, it must be in the 'Default' domain.
>
> Here is an example from puppet-heat - the 'heat_admin' user has been
> created in the 'heat_stack' domain previously.
>
> ensure_resource('keystone_user_role',  'heat_admin@::heat_stack', {
>   'roles' => ['admin'],
> })
>
> This means "assign the user 'heat_admin' in the unspecified domain to have
> the domain scoped role 'admin' in the 'heat_stack' domain". It is a domain
> scoped role, not a project scoped role, because in "@::heat_stack" there is
> no project, only a domain. Note that the domain for the 'heat_admin' user
> is unspecified. In order to specify the domain you must use
> 'heat_admin::heat_stack@::heat_stack'. This is the recommended fix - to
> fully qualify the user + domain.
>
> The breakage manifests itself like this, from the logs::
>
> 2015-10-02 06:07:39.574 | Debug: Executing '/usr/bin/openstack user
> show --format shell heat_admin --domain Default'
> 2015-10-02 06:07:40.505 | Error:
> /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat]:
> Could not evaluate: No user heat_admin with domain  found
>
> This is from the keystone_user_role code. Since the role user was
> specified as 'heat_admin' with no domain, the keystone_user_role code looks
> for 'heat_admin' in the 'Default' domain and can't find it, and raises an
> error.
>
> Right now, the only way to specify the domain is by adding '::domain_name'
> to the user name, as 'heat_admin::heat_stack@::heat_stack'.  Sofer is
> working on a way to add the domain name as a parameter of
> keystone_user_role - https://review.openstack.org/226919 - so in the near
> future you will be able to specify the resource like this:
>
>
> ensure_resource('keystone_user_role',  'heat_admin@::heat_stack', {
>   'roles' => ['admin'],
>   'user_domain_name' => 'heat_stack',
> })
>
>
>
>


Re: [openstack-dev] [Cinder] Google Hangout recording of volume manger locks

2015-10-07 Thread Dulko, Michal
On Wed, 2015-10-07 at 11:01 -0700, Walter A. Boring IV wrote:
> Hello folks,
>I just wanted to post up the YouTube link for the video hangout that 
> the Cinder team just had.
> 
> We had a good discussion about the local file locks in the volume
> manager and how they affect the interaction of Nova with Cinder in
> certain cases.  We are trying to iron out how to proceed with removing
> the volume manager locks in a way that doesn't break the world.  The
> hope is to eventually allow Cinder to run active/active HA c-vol
> services.
> 
> The Youtube.com link for the recording is here on my personal account:
> https://www.youtube.com/watch?v=D_iXpNcWDv8
> 
> 
> We discussed several things in the meeting:
> * The etherpad that was used as a basis for discussion:
> https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues
> * What to do with the current volume manager locks and how do we remove 
> them?
> * How do we move forward with checking 'ING' states for volume actions?
> * What is the process for moving forward with the compare/swap patches 
> that Gorka has in gerrit.
> 
> 
> Action Items:
> *  We agreed to take a deeper look into the main compare/swap changes 
> that Gorka has in gerrit and see if we can get those to land.
>* https://review.openstack.org/#/c/205834/
>* https://review.openstack.org/#/c/218012/
> * Gorka is to update the patches and add the references to the 
> specs/blueprints for reference.
> * Gorka is going to post up follow up patch sets to test the removal of 
> each lock and see if it is sufficient to remove each individual lock.
> 
> 
> Follow up items:
> * Does it make sense for the community to create an OpenStack Cinder
> YouTube account, where the PTL owns the account, and we run each of our
> Google Hangouts through that?  The advantage of this is to allow the
> community to participate openly, as well as to record each of our Cinder
> hangouts for folks that can't attend the live event.  We could use this
> account for the meetups as well as the conference sessions, and have
> them all recorded and saved in one spot.

Unfortunately I wasn't able to attend, but after watching the video I
feel like I'm on the same page, so to me it seems like a brilliant
idea! I think recordings are very beneficial in a cross-timezone
community.
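
As an aside for anyone who has not looked at the compare/swap reviews listed
in the action items: the general idea, as I understand it, is to replace the
file lock with an atomic conditional UPDATE so that only one request can move
a volume out of a given state. A rough sketch of that pattern with SQLAlchemy
follows; the table layout is illustrative, not the actual Cinder schema or
the code in Gorka's patches:

# Illustrative compare-and-swap via a conditional UPDATE; not Cinder code.
from sqlalchemy import Column, MetaData, String, Table, create_engine, update

metadata = MetaData()
volumes = Table('volumes', metadata,
                Column('id', String(36), primary_key=True),
                Column('status', String(255)))

engine = create_engine('sqlite://')
metadata.create_all(engine)

def begin_delete(conn, volume_id):
    # UPDATE volumes SET status='deleting'
    #  WHERE id=:id AND status='available'
    # The WHERE clause is the "compare"; rowcount tells us if we "swapped".
    result = conn.execute(
        update(volumes)
        .where(volumes.c.id == volume_id)
        .where(volumes.c.status == 'available')
        .values(status='deleting'))
    return result.rowcount == 1   # False: someone else changed the status

with engine.begin() as conn:
    conn.execute(volumes.insert().values(id='vol-1', status='available'))
    print(begin_delete(conn, 'vol-1'))   # True for exactly one caller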


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-07 Thread Chris Friesen

On 10/07/2015 03:14 AM, Daniel P. Berrange wrote:


For suspended instances, the scenario is really the same as with completely
offline instances. The only extra step is that you need to migrate the saved
image state file, as well as the disk images. This is trivial once you have
done the code for migrating disk images offline, since it's "just one more file"
to care about.  Officially apps aren't supposed to know where libvirt keeps
the managed save files, but I think it is fine for Nova to peek behind the
scenes to get them. Alternatively I'd be happy to see an API added to libvirt
to allow the managed save files to be uploaded & downloaded via a libvirt
virStreamPtr object, in the same way we provide APIs to  upload & download
disk volumes. This would avoid the need to know explicitly about the file
location for the managed save image.


Assuming we were using libvirt with the storage pools API, could we currently 
(with existing libvirt) migrate domains that have been suspended with 
virDomainSave()?  Or is the only current option to have nova move the file over 
using passwordless access?


I'm assuming we want to work towards using storage pools to get away from the 
need for passwordless access between hypervisors, so having libvirt support 
would be useful.
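
For reference, a small sketch of the libvirt-python calls involved; the
connection URI, domain name and file path are placeholders, and as Daniel
notes there is currently no stream-based upload/download API for the managed
save image itself:

# Placeholder names throughout; a sketch of the calls discussed above.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')

# virDomainSave(): we choose the state file, so Nova (not libvirt) would
# have to copy it to the destination host before restoring there.
dom.save('/var/lib/nova/instances/instance-00000001/state.save')

# Alternatively, managed save: libvirt picks and owns the file location,
# which is why an upload/download API would be needed to migrate it
# without peeking behind the scenes.
# dom.managedSave(0)
# print(dom.hasManagedSaveImage(0))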


Chris




Re: [openstack-dev] [Neutron] [openstack-infra] stable changes to python-neutronclient unable to merge

2015-10-07 Thread Armando M.
On 6 October 2015 at 20:06, Armando M.  wrote:

> Hi folks,
>
> We are unable to merge stable changes to python-neutronclient (as shown in
> [1,2]) because of the missing master fixes [3,4]. We should be able to
> untangle Liberty with [5], but to unblock Kilo, I may need to squash [6]
> with a cherry pick of [3] and wait [5] to merge.
>
> Please bear with us until we get the situation sorted.
>
> Cheers,
> Armando
>
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient+branch:stable/kilo,n,z
> [2]
> https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient+branch:stable/liberty,n,z
> [3] https://review.openstack.org/#/c/231731/
> [4] https://review.openstack.org/#/c/231797/
> [5] https://review.openstack.org/#/c/231796/
> [6] https://review.openstack.org/#/c/231797/
>
>
An update: Liberty is unjammed, but Kilo is still in progress.


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-07 Thread Shinobu Kinjo
Yes, let's discuss this in Tokyo.

Shinobu

- Original Message -
From: "Clinton Knight" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, October 8, 2015 2:15:52 AM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

Hi, John.  If you want to discuss this in Tokyo, I suggest you add it to
the etherpad:

https://etherpad.openstack.org/p/manila-mitaka-summit-topics


I look forward to meeting you at the Summit.  It'd be great to see a demo
of your Ceph driver.

Clinton


On 10/7/15, 6:56 AM, "John Spray"  wrote:

>On Tue, Oct 6, 2015 at 11:59 AM, Deepak Shetty 
>wrote:
>>>
>>> Currently, as you say, a share is accessible to anyone who knows the
>>> auth key (created at the time the share is created).
>>>
>>> For adding the allow/deny path, I'd simply create and remove new ceph
>>> keys for each entity being allowed/denied.
>>
>>
>> Ok, but how does that map to the existing Manila access types (IP, User,
>> Cert) ?
>
>None of the above :-)
>
>Compared with certs, the difference with Ceph is that ceph is issuing
>credentials, rather than authorizing existing credentials[1]. So
>rather than the tenant saying "Here's a certificate that Alice has
>generated and will use to access the filesystem, please authorize it",
>the tenant would say "Please authorize someone called Bob to access
>the share, and let me know the key he should use to prove he is Bob".
>
>As far as I can tell, we can't currently expose that in Manila: the
>missing piece is a way to tag that generated key onto a
>ShareInstanceAccessMapping, so that somebody with the right to read
>from the Manila API can go read Bob's key, and give it to Bob so that
>he can mount the filesystem.
>
>That's why the first-cut compromise is to create a single auth
>identity for accessing the share, and expose the key as part of the
>share's export location.  It's then the user application's job to
>share out that key to whatever hosts need to access it.  The lack of
>Manila-mediated 'allow' is annoying but not intrinsically insecure.
>The security problem with this approach is that we're not providing a
>way to revoke/rotate the key without destroying the share.
>
>So anyway.  This might be a good topic for a conversation at the
>summit (or catch me up on the list if it's already been discussed in
>depth) -- should drivers be allowed to publish generated
>authentication tokens as part of the API for allowing access to a
>share?
>
>John
>
>
>1. Aside: We *could* do a certificate-like model if it was assumed
>that the Manila API consumer knew how to go and talk to Ceph out of
>band to generate their auth identity.  That way, they could go and
>create their auth identity in Ceph, and then ask Manila to grant that
>identity access to the share.  However, it would be pointless, because
>in ceph, anyone who can create an identity can also set the
>capabilities of it (i.e. if they can talk directly to ceph, they don't
>need Manila's permission to access the share).
>




[openstack-dev] [app-catalog] IRC Meeting Thursday October 8th at 17:00UTC

2015-10-07 Thread Christopher Aedo
Greetings! Our next OpenStack App Catalog meeting will take place this
Thursday October 8th at 17:00 UTC in #openstack-meeting-3

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Please add agenda items if there's anything specific you would like to
discuss (or of course if the meeting time is not convenient for you
join us on IRC #openstack-app-catalog).

Please join us if you can!

-Christopher



Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-07 Thread Matt Riedemann



On 10/7/2015 6:00 PM, Robert Collins wrote:

On 8 October 2015 at 08:38, Matt Riedemann  wrote:

Here's why:

https://review.openstack.org/#/c/220622/

That's marked as fixing an OSSA which means we'll have to backport the fix
in nova but it depends on a change to strutils.mask_password in oslo.utils,
which required a release and a minimum version bump in global-requirements.

To backport the change in nova, we either have to:

1. Copy mask_password out of oslo.utils and add it to nova in the backport
or,

2. Backport the oslo.utils change to a stable branch, release it as a patch
release, bump minimum required version in stable g-r and then backport the
nova change and depend on the backported oslo.utils stable release - which
also makes it a dependent library version bump for any packagers/distros
that have already frozen libraries for their stable releases, which is kind
of not fun.

So I'm thinking this is one of those things that should ultimately live in
oslo-incubator so it can live in the respective projects. If mask_password
were in oslo-incubator, we'd have just fixed and backported it there and
then synced to nova on master and stable branches, no dependent library
version bumps required.

Plus I miss the good old days of reviewing oslo-incubator syncs...(joking of
course).


Whats wrong with 2?  I mean, other than the work needed *because* we
made branches of oslo.utils: something I hope we can stop doing in M
(I have a draft spec up about this...)

Libraries have security bugs too, and packagers/distros need to update
them as well as the API servers: this is one of the reasons we have
backpressure on libraries being admitted into our dependency chain.

-Rob




The work involved isn't the problem; I was more concerned about raising
the minimum required version of a library on stable. But I guess it can
happen, and packagers/deployers/distros can update their packages on
stable or patch them as needed (that's probably what we'd do internally,
since we have to legally clear each package we ship ourselves, and
version bumps are generally not fun for us on stable).


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Flavio Percoco

On 06/10/15 12:11 -0400, Nikhil Komawar wrote:

Overall I think this is a good idea and the time frame proposal also looks
good. Few suggestions in-line.

On 10/6/15 10:36 AM, Flavio Percoco wrote:

   Greetings,

   Not so long ago, Erno started a thread[0] in this list to discuss the
   abandon policies for patches that haven't been updated in Glance.

   I'd like to go forward and start following that policy with some
   changes that you can find below:

    1) Let's do this on patches that haven't had any activity in the last 2
    months. This adds one more month to Erno's proposal. The reason being
    that during the last cycle, there were some ups and downs in the review
    flow that caused some patches to get stuck.



+2 . I think 2 months is a reasonable time frame. Though, I think this should
be done on the glance, python-glanceclient and glance-store repos and not
glance-specs. Specs can sometimes need to sit and wait while discussion
happens in other places and then a gist is added back to the spec.


Yup, no plans to apply this to glance-specs, just code.

Thanks for the feedback,
Flavio




    2) Do this just on master, for all patches regardless of whether they fix
    a bug or implement a spec, and regardless of their review status.



+2 . No comments, looks clean.


    3) The patch will be first marked as a WIP and then abandoned if the
    patch is not updated in 1 week. This will put these patches at the
    beginning of the queue, but using the Glance review dashboard should
    help keep focus.



While I think one may give someone an email/IRC heads-up if the proposer
doesn't show up, and we will use the context and wisdom of feedback, this sorta
seems to imply a general case where a developer is new and their intent to
get a patch in within one cycle isn't clear.


    Unless there are some critical things missing in the above or strong
    opinions against this, I'll make this effective starting next Monday,
    October 12th.



I added some comments above for possible brainstorming. No serious objections,
looking forward to this cleanup process.


   Best regards,
   Flavio

   [0] http://lists.openstack.org/pipermail/openstack-dev/2015-February/
   056829.html



  




--

Thanks,
Nikhil



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Flavio Percoco

On 06/10/15 17:54 +0200, Victor Stinner wrote:

Hi,

Le 06/10/2015 16:36, Flavio Percoco a écrit :

Not so long ago, Erno started a thread[0] in this list to discuss the
abandon policies for patches that haven't been updated in Glance.
(...)
1) Let's do this on patches that haven't had any activity in the last 2
months. This adds one more month to Erno's proposal. The reason being
that during the last cycle, there were some ups and downs in the review
flow that caused some patches to get stuck.


Please don't do that. I sent a patch in June (20) and it was only
reviewed in October (4)... There was no activity simply because I had
nothing to add; everything was explained in the commit message, and I was
only waiting for a review...


I came on #openstack-glance to ask for reviews several times between
August and September but nobody reviewed my patches (there was al.


Example of patch: https://review.openstack.org/#/c/193786/ (now merged)


Yes, I'm very aware of this case and it's great feedback. What
happened with these patches could have happened (or did happen) with other
patches. The review response time is something that I'd definitely like us
to improve, and this is not the solution to that.



It would be very frustrating to have to resend the same patch over and over.


There's no need to resend the patch. Just commenting to say that it's
still valid will keep the patch from being abandoned.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Flavio Percoco

On 06/10/15 23:36 +0900, Flavio Percoco wrote:

Greetings,

Not so long ago, Erno started a thread[0] in this list to discuss the
abandon policies for patches that haven't been updated in Glance.

I'd like to go forward and start following that policy with some
changes that you can find below:

1) Let's do this on patches that haven't had any activity in the last 2
months. This adds one more month to Erno's proposal. The reason being
that during the last cycle, there were some ups and downs in the review
flow that caused some patches to get stuck.

2) Do this just on master, for all patches regardless of whether they fix a
bug or implement a spec, and regardless of their review status.

3) The patch will be first marked as a WIP and then abandoned if the
patch is not updated in 1 week. This will put these patches at the
beginning of the queue, but using the Glance review dashboard should
help keep focus.

Unless there are some critical things missing in the above or strong
opinions against this, I'll make this effective starting next Monday,
October 12th.


I'd like to provide some extra data here. This is our current status:

==
Total patches without activity in the last 2 months: 73
Total patches closing a bug: 30
Total patches with negative review by core reviewers: 62
Total patches with negative review by non-core reviewers: 75
Total patches without a core review in the last patchset: 13
Total patches with negative review from Jenkins: 50
==

It's not ideal but it's also not a lot. I'd like to recover as many
patches as possible from the above and I'm happy to do that manually
if necessary.
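
In case anyone wants to reproduce or track numbers like these, here is a
rough sketch of the kind of Gerrit REST query they can be derived from; the
query terms and the count logic are illustrative, not exactly how the figures
above were produced:

# Illustrative only: count open glance patches with no activity for 2 months.
import json
import requests

GERRIT = 'https://review.openstack.org'
QUERY = 'project:openstack/glance status:open age:2months'

resp = requests.get(GERRIT + '/changes/', params={'q': QUERY, 'n': 500})
# Gerrit prefixes JSON responses with ")]}'" to prevent XSSI; strip it.
changes = json.loads(resp.text[4:])

print('Patches without activity in the last 2 months: %d' % len(changes))
for change in changes[:10]:
    print(change['_number'], change['subject'])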

Cheers,
Flavio


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Flavio Percoco

On 06/10/15 17:52 +0200, Julien Danjou wrote:

On Tue, Oct 06 2015, Flavio Percoco wrote:

I send patches to Glance from time to time, and they usually get 0
reviews for *weeks* (sometimes months), because, well, there are no
reviewers active in Glance, so:


1) Let's do this on patches that haven't had any activity in the last 2
months. This adds one more month to Erno's proposal. The reason being
that during the last cycle, there were some ups and downs in the review
flow that caused some patches to get stuck.


This is going to expire my patches that nobody cares about and that are
improving the code or fixing stuff people didn't encounter (yet).


3) The patch will be first marked as a WIP and then abandoned if the
patch is not updated in 1 week. This will put these patches at the
beginning of the queue, but using the Glance review dashboard should
help keep focus.


Why WIP? If a patch is complete and waiting for reviewers I'm not sure
it helps.


WIP because I don't think we should abandon them right away - since
there are patches like yours and Victor's that matter - and there's no
status to say: "I'm sorry we screwed up and we didn't review your
patch. Please come to us and throw all your amazing patches in our
faces so that we'll review them... for realz"


The problem is that nobody is reviewing Glance patches (except you,
recently, it seems). That's not going to solve that. That's just going to
hide the issues under the carpet by lowering the total number of patches that
need review…


I'm not trying to solve the lack of reviews in Liberty by removing
patches. What I'd like to do, though, is help to keep around patches
that really matter.

I know there has been a huge lag on reviews, which is something that
we'll be working on with a different workflow. The dashboard mentioned
is one part of that.

We could certainly increase the number of months.

Thanks a lot for the feedback,
Flavio




My 2c,

--
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Flavio Percoco

On 06/10/15 13:53 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2015-10-06 23:36:53 +0900:

Greetings,

Not so long ago, Erno started a thread[0] in this list to discuss the
abandon policies for patches that haven't been updated in Glance.

I'd like to go forward and start following that policy with some
changes that you can find below:

1) Let's do this on patches that haven't had any activity in the last 2
months. This adds one more month to Erno's proposal. The reason being
that during the last cycle, there were some ups and downs in the review
flow that caused some patches to get stuck.

2) Do this just on master, for all patches regardless of whether they fix a
bug or implement a spec, and regardless of their review status.

3) The patch will be first marked as a WIP and then abandoned if the
patch is not updated in 1 week. This will put these patches at the
beginning of the queue, but using the Glance review dashboard should
help keep focus.

Unless there are some critical things missing in the above or strong
opinions against this, I'll make this effective starting next Monday,
October 12th.

Best regards,
Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-February/056829.html



In the past we've had discussions on the list about how abandoning
patches can be perceived as hostile to contributors, and that using
a review dashboard with good filters is a better solution. Since
you already have a dashboard, I suggest adding a section for patches
that are old but have no review comments (maybe you already have
that) and another for patches where the current viewer has voted
-1. The first highlights the patches for reviewers, and ignores
them when they are in a state where we're waiting for feedback or
an update, and the latter provides a list of patches the current
reviewer is involved in and may need to recheck for new comments.


We definitely don't want anyone to feel bad about this, especially
when it's our fault their patches are still hanging around. This is the
reason why the WIP phase is being proposed as a way to ask the user to
come to us and remind us to do our job. Not ideal, sure, but it
happens to everyone.

The current dashboard has that section already and I really hope we
don't get to the point where abandoning patches ourselves is
necessary. The section is called "5 Days Without Feedback" and I gotta
be honest, it's worked (the dashboard in general) very well for me and
I hope for others as well.

There are patches from old contributors that we just know are never
going to be worked on again. I'd like to take a close look at these patches
before abandoning them, so that we can rebase them ourselves if they
are still relevant. Making this list shorter is what I'd like to
achieve with the proposed plan.

Hope the above makes sense,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] 答复: [Congress] Tokyo sessions

2015-10-07 Thread Rui Chen
As I recall, there are 4 topics: OPNFV, Congress gating, distributed
arch, and Monasca.

Some details in IRC meeting log
http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-10-01-00.01.log.html

2015-10-08 9:48 GMT+08:00 zhangyali (D) :

> Hi Tim,
>
>
>
> Thanks for sharing the meeting information. But does the meeting have
> some topics scheduled? I think it’s better to know what we are going to
> talk about. Thanks so much!
>
>
>
> Yali
>
>
>
> *From:* Tim Hinrichs [mailto:t...@styra.com]
> *Sent:* October 2, 2015 2:52
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Congress] Tokyo sessions
>
>
>
> Hi all,
>
>
>
> We just got a tentative assignment for our meeting times in Tokyo.  Our 3
> meetings are scheduled back-to-back-to-back on Wed afternoon from
> 2:00-4:30p.  I don't think there's much chance of getting the meetings
> moved, but does anyone have a hard conflict?
>
>
>
> Here's our schedule for Wed:
>
>
>
> Wed 11:15-12:45 HOL
>
> Wed 2:00-2:40 Working meeting
>
> Wed 2:50-3:30 Working meeting
>
> Wed 3:40-4:20 Working meeting
>
>
>
> Tim
>
>
>
>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Intel PCI CI appears lost in the weeds

2015-10-07 Thread yongli he

Hi mriedem and all,

Sorry for the CI problem. We are now back from holiday, have found the
problem, and have a solution. The CI will be back soon.

Summary:
the log server connection was lost, so the test results failed to upload.

Yongli He



On 2015-10-08 07:03, Matt Riedemann wrote:
I was seeing immediate posts on changes which I knew were bogus, and
getting 404s on the logs:


http://52.27.155.124/232252/1

Anyone know what's going on?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Rochelle Grober
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Wednesday, October 07, 2015 3:48 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] naming N and O releases nowish
> 
> On 10/07/2015 06:22 PM, Monty Taylor wrote:
> > On 10/07/2015 09:24 AM, Sean Dague wrote:
> >> On 10/07/2015 08:57 AM, Thierry Carrez wrote:
> >>> Sean Dague wrote:
>  We're starting to make plans for the next cycle. Long term plans
> are
>  getting made for details that would happen in one or two cycles.
> 
>  As we already have the locations for the N and O summits I think
> we
>  should do the naming polls now and have names we can use for this
>  planning instead of letters. It's pretty minor but it doesn't seem
> like
>  there is any real reason to wait and have everyone come up with
> working
>  names that turn out to be confusing later.
> >>>
> >>> That sounds fair. However the release naming process currently
> >>> states[1]:
> >>>
> >>> """
> >>> The process to chose the name for a release begins once the
> location of
> >>> the design summit of the release to be named is announced and no
> sooner
> >>> than the opening of development of the previous release.
> >>> """
> >>>
> >>> ...which if I read it correctly means we could pick N now, but not
> O. We
> >>> might want to change that (again) first.
> >>>
> >>> [1] http://governance.openstack.org/reference/release-naming.html
> >>
> >> Right, it seems like we should change it so that we can do naming as
> >> soon as the location is announced.
> >>
> >> For projects like Nova that are trying to plan things more than one
> >> cycle out, having those names to hang those features on is massively
> >> useful (as danpb also stated). Delaying for bureaucratic reasons
> just
> >> seems silly. :)
> >
> > So, for what it's worth, I remember discussing this when we discussed
> > the current process, and the change you are proposing was one of the
> > options put forward when we talked about it.
> >
> > The reason for not doing all of them as soon as we know them was to
> keep
> > a sense of ownership by the people who are actually working on the
> > thing. Barcelona is a long way away and we'll all likely have rage
> quit
> > by then, leaving the electorate for the name largely disjoint from
> the
> > people working on the release.
> >
> > Now, I hear you - and I'm not arguing that position. (In fact, I
> believe
> > my original thought was in line with what you said here) BUT - I
> mostly
> > want to point out that we have had this discussion, the discussion
> was
> > not too long ago, it covered this point, and I sort of feel like if
> we
> > have another discussion on naming process people might kill us with
> > pitchforks.
> 
> You are assuming that not having this conversation might shield you
> from
> the pitchforks.
 
I myself favor war hammers (a very useful tool for separating plaster from
lath), but if we all rage quit, the new guard can always change the name as a
middle-finger salute to the old guard.  Let's be daring!  Let's name O, too!

--Rocky

> Anita.
> 
> >
> > Monty
> >
> >
> >
> ___
> ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][stable][release] 2015.1.2

2015-10-07 Thread Chuck Short
Hi,
stable/kilo is now frozen. I expect to do a release on Tuesday. If you need
to include something please let me know.

Thanks
chuck
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Ed Leafe
On Oct 7, 2015, at 6:00 PM, Chris Friesen  wrote:

> I've wondered for a while (ever since I looked at the scheduler code, really) 
> why we couldn't implement more of the scheduler as database transactions.
> 
> I haven't used Cassandra, so maybe you can clarify something about updates 
> across a distributed DB.  I just read up on lightweight transactions, and it 
> says that they're restricted to a single partition.  Is that an acceptable 
> limitation for this usage?

An implementation detail. A partition is defined by the partition key, not by 
any physical arrangement of nodes. The partition key would have to depend on 
the resource type, and whatever other columns would make such a query unique.

> Some points that might warrant further discussion:
> 
> 1) Some resources (RAM) only require tracking amounts.  Other resources 
> (CPUs, PCI devices) require tracking allocation of specific individual host 
> resources (for CPU pinning, PCI device allocation, etc.).  Presumably for the 
> latter we would have to actually do the allocation of resources at the time 
> of the scheduling operation in order to update the database with the claimed 
> resources in a race-free way.

Yes, that's correct. A lot of thought would have to be put into how to best 
represent these different types of resources, and that's something that I have 
ideas about, but would feel a whole lot better defining only after talking 
these concepts over with others who understand the underlying concepts better 
than I do.

> 2) Are you suggesting that all of nova switch to Cassandra, or just the 
> scheduler and resource tracking portions?  If the latter, how would we handle 
> things like pinned CPUs and PCI devices that are currently associated with 
> specific instances in the nova DB?

I am only thinking of the scheduler as a separate service. Perhaps Nova as a 
whole might benefit from switching to Cassandra for its database needs, but I 
haven't really thought about that at all.

> 3) The concept of the compute node updating the DB when things change is 
> really orthogonal to the new scheduling model.  The current scheduling model 
> would benefit from that as well.

Actually, it isn't that different. Compute nodes send updates to the scheduler 
when instances are created/deleted/resized/etc., so this isn't much of a 
stretch.

> 4) It seems to me that to avoid races we need to do one of the following.  
> Which are you proposing?
> a) Serialize the entire scheduling operation so that only one instance can 
> schedule at once.
> b) Make the evaluation of filters and claiming of resources a single atomic 
> DB transaction.
> c) Do a loop where we evaluate the filters, pick a destination, try to claim 
> the resources in the DB, and retry the whole thing if the resources have 
> already been claimed.

Probably a combination of b) and c). Filters would, for lack of a better term,
add CQL WHERE clauses to the query, which would return a set of acceptable
hosts. Weighers would order these hosts in terms of desirability, and then the 
claim would be attempted. If the claim failed because the host had changed, the 
next acceptable host would be selected, etc. I don't imagine that "retrying the 
whole thing" would be an efficient option, unless there were no other 
acceptable hosts returned from the original filtering query.

Put another way: if we are in a racy situation, and two scheduler processes are 
trying to place a similar instance, both processes would most likely come up 
with the same set of hosts ordered in the same way. One of those processes 
would "win", and claim the first choice. The other would fail the transaction, 
and would then claim the second choice on the list. IMO, this is how you best 
deal with race conditions.
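
To make the claim step concrete, here is a minimal sketch of what such a
compare-and-set claim could look like with a Cassandra lightweight
transaction, using the DataStax Python driver. The keyspace, table, and
column names (scheduler, host_state, free_ram_mb) are made up for the
example and are not an actual Nova schema:

    from cassandra.cluster import Cluster

    cluster = Cluster(['cassandra-host'])
    session = cluster.connect('scheduler')   # hypothetical keyspace

    def try_claim(host, ram_mb_needed, free_ram_seen):
        # free_ram_seen is the value this scheduler read while filtering,
        # so the UPDATE only applies if nobody changed it in the meantime.
        result = session.execute(
            "UPDATE host_state SET free_ram_mb = %s "
            "WHERE host = %s IF free_ram_mb = %s",
            (free_ram_seen - ram_mb_needed, host, free_ram_seen))
        # The first column of a lightweight-transaction result row is the
        # [applied] boolean.
        return result[0][0]

    def schedule(ordered_hosts, ram_mb_needed):
        # ordered_hosts: (host, free_ram_seen) pairs from filtering/weighing.
        for host, free_ram_seen in ordered_hosts:
            if try_claim(host, ram_mb_needed, free_ram_seen):
                return host   # claim succeeded on this host
        return None           # every candidate was claimed by someone else

The scheduler that loses a race simply falls through to its next choice,
which is exactly the behaviour described above.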


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Chris Friesen

On 10/07/2015 07:23 PM, Ian Wells wrote:

On 7 October 2015 at 16:00, Chris Friesen > wrote:

1) Some resources (RAM) only require tracking amounts.  Other resources
(CPUs, PCI devices) require tracking allocation of specific individual host
resources (for CPU pinning, PCI device allocation, etc.).  Presumably for
the latter we would have to actually do the allocation of resources at the
time of the scheduling operation in order to update the database with the
claimed resources in a race-free way.


The whole process is inherently racy (and this is inevitable, and correct),
which is why the scheduler works the way it does:

- scheduler guesses at a host based on (guaranteed - hello distributed systems!)
outdated information
- VM is scheduled to a host that looks like it might work, and host attempts to
run it
- VM run may fail (because the information was outdated or has become outdated),
in which case we retry the schedule


Why is it inevitable?

Theoretically if the DB knew about what resources were originally available and 
what resources have been consumed, then it should be able to allocate resources 
race-free (possibly with some retries involved if racing against other 
schedulers updating the DB, but that would be internal to the scheduler itself).


Or does that just not scale enough and we need to use inherently racy models?

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Core Reviewers groups restructure

2015-10-07 Thread Dmitry Borodaenko
On Wed, Oct 07, 2015 at 02:04:52PM -0700, Dmitry Borodaenko wrote:
> While we're waiting for openstack-infra team to finish the stackforge
> migration and review our ACL changes, I implemented the rest of the
> changes agreed in this thread:
> 
> - Fuel-core group removed everywhere.
> 
> - Per-project core groups populated with individual reviewers as quoted
>   below. Exceptions:
> 
>   - Dennis Dmitriev was approved as core in fuel-qa, fuel-devops, and
> fuel-ostf after this thread was started;

Correction: I meant fuel-qa and fuel-devops here, not fuel-ostf.

>   - fuel-upgrades already excludes fuel-core so I couldn't modify it,
> and the current list doesn't match Mike's email. It is up to current
> cores [0] to bring it up to date.
> 
> [0] https://review.openstack.org/#/admin/groups/1004,members
> 
> fuel-specs and fuel-*-release groups will have to wait until ACL update
> is merged (i.e. after October 17).
> 
> -- 
> Dmitry Borodaenko
> 
> On Thu, Oct 01, 2015 at 03:59:47PM -0700, Dmitry Borodaenko wrote:
> > This commit brings Fuel ACLs in sync with each other and in line with
> > the agreement on this thread:
> > https://review.openstack.org/230195
> > 
> > Please review carefully. Note that I intentionally didn't touch any of
> > the plugins ACLs, primarily to save time for us and the
> > openstack-infra team until after the stackforge->openstack namespace
> > migration.
> > 
> > On Mon, Sep 21, 2015 at 4:17 PM, Mike Scherbakov
> >  wrote:
> > > Thanks guys.
> > > So for fuel-octane then there are no actions needed.
> > >
> > > For fuel-agent-core group [1], looks like we are already good (it doesn't
> > > have fuel-core group nested). But it would need to include fuel-infra 
> > > group
> > > and remove Aleksandra Fedorova (she will be a part of fuel-infra group).
> > >
> > > python-fuel-client-core [2] is good as well (no nested fuel-core). 
> > > However,
> > > there is another group python-fuelclient-release [3], which has to be
> > > eliminated, and main python-fuelclient-core would just have fuel-infra 
> > > group
> > > included for maintenance purposes.
> > >
> > > [1] https://review.openstack.org/#/admin/groups/995,members
> > > [2] https://review.openstack.org/#/admin/groups/551,members
> > > [3] https://review.openstack.org/#/admin/groups/552,members
> > >
> > >
> > > On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh  
> > > wrote:
> > >>
> > >> FYI, we have a separate core group for stackforge/fuel-octane repository
> > >> [1].
> > >>
> > >> I'm supporting the move to modularization of Fuel with cleaner separation
> > >> of authority and better defined interfaces. Thus, I'm +1 to such a 
> > >> change as
> > >> a part of that move.
> > >>
> > >> [1] https://review.openstack.org/#/admin/groups/1020,members
> > >>
> > >> --
> > >> Best regards,
> > >> Oleg Gelbukh
> > >>
> > >> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov
> > >>  wrote:
> > >>>
> > >>> Hi all,
> > >>> as of my larger proposal on improvements to code review workflow [1], we
> > >>> need to have cores for repositories, not for the whole Fuel. It is the 
> > >>> path
> > >>> we are taking for a while, and new core reviewers added to specific 
> > >>> repos
> > >>> only. Now we need to complete this work.
> > >>>
> > >>> My proposal is:
> > >>>
> > >>> Get rid of one common fuel-core [2] group, members of which can merge
> > >>> code anywhere in Fuel. Some members of this group may cover a couple of
> > >>> repositories, but can't really be cores in all repos.
> > >>> Extend existing groups, such as fuel-library [3], with members from
> > >>> fuel-core who are keeping up with large number of reviews / merges. This
> > >>> data can be queried at Stackalytics.
> > >>> Establish a new group "fuel-infra", and ensure that it's included into
> > >>> any other core group. This is for maintenance purposes, it is expected 
> > >>> to be
> > >>> used only in exceptional cases. Fuel Infra team will have to decide 
> > >>> whom to
> > >>> include into this group.
> > >>> Ensure that fuel-plugin-* repos will not be affected by removal of
> > >>> fuel-core group.
> > >>>
> > >>> #2 needs specific details. Stackalytics can show active cores easily, we
> > >>> can look at people with *:
> > >>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
> > >>> fuel-web, change the link for other repos accordingly. If people are 
> > >>> added
> > >>> specifically to the particular group, leaving as is (some of them are no
> > >>> longer active. But let's clean them up separately from this group
> > >>> restructure process).
> > >>>
> > >>> fuel-library-core [3] group will have following members: Bogdan D.,
> > >>> Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
> > >>> fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin, Vitaly
> > >>> Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
> > >>> fuel-astute-core [5]: 

Re: [openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-07 Thread Johnston, Nate
I can definitely help.

—N.

On Oct 7, 2015, at 8:11 PM, Edgar Magana 
> wrote:

Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress and we feel that the guide 
is ready to have even more contributions and formalize a bit more the team 
around it.
The first thing that I want to propose is to have a regular meeting over IRC to 
discuss the progress and to welcome new contributors. This is the same process 
that other guides like the operators one are following currently.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
Contribution process is the same that the rest of the OpenStack docs under the 
openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you could allocate some time
to help us make this guide a rock star like the other ones. Based on the
responses, I will propose a couple of time slots for the IRC meeting that could
accommodate everybody if possible; this is why it is very important to let me
know your time zone.

I am really looking forward to increasing the number of contributors to this guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-docs][Neutron] Networking Guide - Call for contributors

2015-10-07 Thread Edgar Magana
Hello,

I would like to invite everybody to become an active contributor for the 
OpenStack Networking Guide: http://docs.openstack.org/networking-guide/

During the Liberty cycle we made a lot of progress and we feel that the guide 
is ready to have even more contributions and formalize a bit more the team 
around it.
The first thing that I want to propose is to have a regular meeting over IRC to 
discuss the progress and to welcome new contributors. This is the same process 
that other guides like the operators one are following currently.

The networking guide is based on this ToC: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC
Contribution process is the same that the rest of the OpenStack docs under the 
openstack-manuals git repo: 
https://github.com/openstack/openstack-manuals/tree/master/doc/networking-guide/source

Please respond to this thread and let me know if you could allocate some time
to help us make this guide a rock star like the other ones. Based on the
responses, I will propose a couple of time slots for the IRC meeting that could
accommodate everybody if possible; this is why it is very important to let me
know your time zone.

I am really looking forward to increasing the number of contributors to this guide.

Thanks in advance!

Edgar Magana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Ian Wells
On 7 October 2015 at 16:00, Chris Friesen 
wrote:

> 1) Some resources (RAM) only require tracking amounts.  Other resources
> (CPUs, PCI devices) require tracking allocation of specific individual host
> resources (for CPU pinning, PCI device allocation, etc.).  Presumably for
> the latter we would have to actually do the allocation of resources at the
> time of the scheduling operation in order to update the database with the
> claimed resources in a race-free way.
>

The whole process is inherently racy (and this is inevitable, and correct),
which is why the scheduler works the way it does:

- scheduler guesses at a host based on (guaranteed - hello distributed
systems!) outdated information
- VM is scheduled to a host that looks like it might work, and host
attempts to run it
- VM run may fail (because the information was outdated or has become
outdated), in which case we retry the schedule

In fact, with PCI devices the code has been written rather carefully to
make sure that they fit into this model.  There is central per-device
tracking (which, fwiw, I argued against back in the day) but that's not how
allocation works (or, considering how long it is since I looked, worked).

PCI devices are actually allocated from pools of equivalent devices, and
allocation works in the same manner as other scheduling: you work out from
the nova boot call what constraints a host must satisfy (in this case, the
number of PCI devices in specific pools), you check your best guess at
global host state against those constraints, and you pick one of the hosts
that meets the constraints to schedule on.

So: yes, there is a central registry of devices, which we try to keep up to
date - but this is for admins to refer to, it's not a necessity of
scheduling.  The scheduler input is the pool counts, which work largely the
same way as the available memory works as regards scheduling and updating.
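
As an illustration of that pool-count matching (the pool keys and host
structure below are invented for the example; Nova's real PCI tracking is
more involved):

    def host_satisfies_pci_request(host_pools, requested):
        # host_pools: {(vendor_id, product_id): free_count} as last reported
        # requested:  {(vendor_id, product_id): count} from the boot request
        return all(host_pools.get(pool, 0) >= count
                   for pool, count in requested.items())

    hosts = {
        'compute1': {('8086', '10fb'): 2},
        'compute2': {('8086', '10fb'): 0},
    }
    request = {('8086', '10fb'): 1}

    candidates = [h for h, pools in hosts.items()
                  if host_satisfies_pci_request(pools, request)]
    # -> ['compute1']; the chosen host can still fail at claim time if the
    #    count was stale, which is the retry case described above.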

No idea on CPUs, sorry, but again I'm not sure why the behaviour would be
any different: compare suspected host state against needs, schedule if it
fits, hope you got it right and tolerate if you didn't.

That being the case, it's worth noting that the database can be eventually
consistent and doesn't need to be transactional.  It's also worth
considering that the database can have multiple (mutually inconsistent)
copies.  There's no need to use a central datastore if you don't want to -
one theoretical example is to run multiple schedulers and let each
scheduler attempt to collate cloud state from unreliable messages from the
compute hosts.  This is not quite what happens today, because messages we
send over Rabbit are reliable and therefore costly.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Ed Leafe
On Oct 7, 2015, at 2:28 PM, Zane Bitter  wrote:

> It seems to me (disclaimer: not a Nova dev) that which database to use is 
> completely irrelevant to your proposal,

Well, not entirely. The difference is that what Cassandra offers that separates 
it from other DBs is exactly the feature that we need. The solution to the 
scheduler isn't to simply "use a database".

> which is really about moving the scheduling from a distributed collection of 
> Python processes with ad-hoc (or sometimes completely missing) 
> synchronisation into the database to take advantage of its well-defined 
> semantics. But you've framed it in such a way as to guarantee that this never 
> gets discussed, because everyone will be too busy arguing about whether or 
> not Cassandra is better than Galera.

Understood - all one has to do is review the original thread from back in July 
to see this happening. But the reason that I framed it then as an experiment in 
which we would come up with measures of success we could all agree on up-front 
was so that if someone else thought that Product Foo would be even better, we 
could set up a similar test bed and try it out. IOW, instead of bikeshedding, 
if you want a different color, you build another shed and we can all have a 
look.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Fox, Kevin M
I think if you went ahead and did the experiment, and had good results from it, 
the discussion would start to progress whether or not folks were fond of 
Cassandra or ...

Thanks,
Kevin

From: Ed Leafe [e...@leafe.com]
Sent: Wednesday, October 07, 2015 5:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Scheduler proposal

On Oct 7, 2015, at 2:28 PM, Zane Bitter  wrote:

> It seems to me (disclaimer: not a Nova dev) that which database to use is 
> completely irrelevant to your proposal,

Well, not entirely. The difference is that what Cassandra offers that separates 
it from other DBs is exactly the feature that we need. The solution to the 
scheduler isn't to simply "use a database".

> which is really about moving the scheduling from a distributed collection of 
> Python processes with ad-hoc (or sometimes completely missing) 
> synchronisation into the database to take advantage of its well-defined 
> semantics. But you've framed it in such a way as to guarantee that this never 
> gets discussed, because everyone will be too busy arguing about whether or 
> not Cassandra is better than Galera.

Understood - all one has to do is review the original thread from back in July 
to see this happening. But the reason that I framed it then as an experiment in 
which we would come up with measures of success we could all agree on up-front 
was so that if someone else thought that Product Foo would be even better, we 
could set up a similar test bed and try it out. IOW, instead of bikeshedding, 
if you want a different color, you build another shed and we can all have a 
look.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reply: [Congress] Tokyo sessions

2015-10-07 Thread zhangyali (D)
Hi Tim,

Thanks for sharing the meeting information. But does the meeting have some
topics scheduled? I think it’s better to know what we are going to talk about.
Thanks so much!

Yali

From: Tim Hinrichs [mailto:t...@styra.com]
Sent: October 2, 2015 2:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] Tokyo sessions

Hi all,

We just got a tentative assignment for our meeting times in Tokyo.  Our 3 
meetings are scheduled back-to-back-to-back on Wed afternoon from 2:00-4:30p.  
I don't think there's much chance of getting the meetings moved, but does 
anyone have a hard conflict?

Here's our schedule for Wed:

Wed 11:15-12:45 HOL
Wed 2:00-2:40 Working meeting
Wed 2:50-3:30 Working meeting
Wed 3:40-4:20 Working meeting

Tim






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] backwards compat issue with PXEDeply and AgentDeploy drivers

2015-10-07 Thread Jim Rollenhagen
On Wed, Oct 07, 2015 at 12:41:55PM -0700, Devananda van der Veen wrote:
> Ramesh,
> 
> I thought about your points over night, and then looked at our in-tree
> driver code from stable/kilo and asked myself, "what if this driver was out
> of tree?" They'd all have broken -- for very similar reasons as what I
> encountered with my demo driver.
> 
> When we split the boot and deploy interfaces, we kept compatibility only at
> the boundary between ConductorManager and the Driver class. That's all we
> set out to do, because that's all we defined the interface to be. I can
> accept that, but I'd like us to think about whether the driver interface:
> a) is merely the interfaces defined in ironic/drivers/base.py, as we
> previously defined, or
> b) also includes one or more of the hardware-agnostic interface
> implementations (eg, PXEBoot, AgentDeploy, AgentRAID, inspector.Inspector)
> 
> As recent experience has taught me, these classes provide essential
> primitives for building new hardware drivers. If we want to support
> development of hardware drivers out of tree, but we don't want to include
> (b) in our definition of the API, then we are signalling that such drivers
> must be implemented entirely out of tree (iow, they're not allowed to
> borrow *any* functionality from ironic/drivers/modules/*).
> 
> And if we're signalling that, and someone goes and implements such a driver
> and then later wants to propose it upstream -- how will we feel about
> accepting a completely alternative implementation of, say, the pxe boot
> driver?

I agree; there's some hard things to think about here. I'd like to get a
definition of our driver API solidified and documented during Mitaka.
It's odd, because there are two pieces to (b) above; the names/methods
provided, and the actual behavior. I'm not sure that AgentDeploy is not
backwards compatible when it comes to names or methods, but we've
certainly changed the behavior. I've added this topic to our design
summit proposal list.

As for the original problem, I like ramesh's idea with the FakeBoot
driver. I think it will cause the least amount of breakage to
out-of-tree drivers, and still allow for an easy fix. We *do* need to be
very loud about this in release notes, ops mailing list, etc.

Deva, could you update the patch ASAP? I'd like to get it landed
tomorrow, so we can backport and get 4.2.1 out the door this week.

Thanks for digging into this! :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-07 Thread Zane Bitter

On 07/10/15 13:36, Ed Leafe wrote:

Several months ago I proposed an experiment [0] to see if switching the data 
model for the Nova scheduler to use Cassandra as the backend would be a 
significant improvement as opposed to the current design using multiple copies 
of the same data (compute_node in MySQL DB, HostState in memory in the 
scheduler, ResourceTracker in memory in the compute node) and trying to keep 
them all in sync via passing messages.


It seems to me (disclaimer: not a Nova dev) that which database to use 
is completely irrelevant to your proposal, which is really about moving 
the scheduling from a distributed collection of Python processes with 
ad-hoc (or sometimes completely missing) synchronisation into the 
database to take advantage of its well-defined semantics. But you've 
framed it in such a way as to guarantee that this never gets discussed, 
because everyone will be too busy arguing about whether or not Cassandra 
is better than Galera.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-07 Thread Matt Riedemann

Here's why:

https://review.openstack.org/#/c/220622/

That's marked as fixing an OSSA which means we'll have to backport the 
fix in nova but it depends on a change to strutils.mask_password in 
oslo.utils, which required a release and a minimum version bump in 
global-requirements.


To backport the change in nova, we either have to:

1. Copy mask_password out of oslo.utils and add it to nova in the 
backport or,


2. Backport the oslo.utils change to a stable branch, release it as a 
patch release, bump minimum required version in stable g-r and then 
backport the nova change and depend on the backported oslo.utils stable 
release - which also makes it a dependent library version bump for any 
packagers/distros that have already frozen libraries for their stable 
releases, which is kind of not fun.


So I'm thinking this is one of those things that should ultimately live 
in oslo-incubator so it can be copied into the respective projects. If 
mask_password were in oslo-incubator, we'd have just fixed and 
backported it there and then synced to nova on master and stable 
branches, no dependent library version bumps required.


Plus I miss the good old days of reviewing oslo-incubator 
syncs...(joking of course).
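
For anyone who hasn't used the helper in question, this is roughly what it
does (assuming oslo.utils is installed; the exact masking output can vary
between releases):

    from oslo_utils import strutils

    msg = "req: {'auth': {'password': 'super-secret'}}"
    print(strutils.mask_password(msg))
    # the password value is replaced with the mask, e.g.
    # req: {'auth': {'password': '***'}}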


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] backwards compat issue with PXEDeply and AgentDeploy drivers

2015-10-07 Thread Devananda van der Veen
Ramesh,

I thought about your points over night, and then looked at our in-tree
driver code from stable/kilo and asked myself, "what if this driver was out
of tree?" They'd all have broken -- for very similar reasons as what I
encountered with my demo driver.

When we split the boot and deploy interfaces, we kept compatibility only at
the boundary between ConductorManager and the Driver class. That's all we
set out to do, because that's all we defined the interface to be. I can
accept that, but I'd like us to think about whether the driver interface:
a) is merely the interfaces defined in ironic/drivers/base.py, as we
previously defined, or
b) also includes one or more of the hardware-agnostic interface
implementations (eg, PXEBoot, AgentDeploy, AgentRAID, inspector.Inspector)

As recent experience has taught me, these classes provide essential
primitives for building new hardware drivers. If we want to support
development of hardware drivers out of tree, but we don't want to include
(b) in our definition of the API, then we are signalling that such drivers
must be implemented entirely out of tree (iow, they're not allowed to
borrow *any* functionality from ironic/drivers/modules/*).
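
For context, the pattern at stake is roughly the following (a sketch only;
the module paths and class names are written from memory and may not match
a given release exactly). An out-of-tree driver registered via a stevedore
entry point typically composes in-tree interface implementations like this:

    from ironic.drivers import base
    from ironic.drivers.modules import agent
    from ironic.drivers.modules import pxe

    class MyVendorDriver(base.BaseDriver):
        """Out-of-tree driver reusing upstream boot/deploy code."""

        def __init__(self):
            # Everything borrowed from ironic/drivers/modules/* is exactly
            # the surface whose behaviour changed underneath such drivers.
            self.boot = pxe.PXEBoot()
            self.deploy = agent.AgentDeploy()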

And if we're signalling that, and someone goes and implements such a driver
and then later wants to propose it upstream -- how will we feel about
accepting a completely alternative implementation of, say, the pxe boot
driver?

Curious what others think...
-deva


On Mon, Oct 5, 2015 at 11:35 PM, Ramakrishnan G <
rameshg87.openst...@gmail.com> wrote:

>
> Well it's nice to fix, but I really don't know if we should be fixing it.
> As discussed in one of the Ironic meetings before, we might need to define
> what is our driver API or SDK or DDK or whatever we choose to call it .
> Please see inline for my thoughts.
>
> On Tue, Oct 6, 2015 at 5:54 AM, Devananda van der Veen <
> devananda@gmail.com> wrote:
>
>> tldr; the boot / deploy interface split we did broke an out of tree
>> driver. I've proposed a patch. We should get a fix into stable/liberty too.
>>
>> Longer version...
>>
>> I was rebasing my AMTTool driver [0] on top of master because the in-tree
>> one still does not work for me, only to discover that my driver suddenly
>> failed to deploy. I have filed this bug
>>   https://bugs.launchpad.net/ironic/+bug/1502980
>> because we broke at least one out of tree driver (mine). I highly suspect
>> we've broken many other out of tree drivers that relied on either the
>> PXEDeploy or AgentDeploy interfaces that were present in Kilo release. Both
>> classes in Liberty are making explicit calls to "task.driver.boot" -- and
>> kilo-era driver classes did not define this interface.
>>
>
>
> I would like to think more about what really our driver API is ? We have a
> couple of well defined interfaces in ironic/drivers/base.py which people
> may follow, implement an out-of-tree driver, make it a stevedore entrypoint
> and get it working with Ironic.
>
> But
>
> 1) Do we promise them that in-tree implementations of these interfaces
> will always exist?  For example, in the boot/deploy work done in Liberty, we
> removed the class PxeDeploy [1].  It actually got broken down to PXEBoot
> and ISCSIDeploy.  In the first place, do we guarantee that they will exist
> for ever in the same place with the same name ? :)
>
> 2) Do we really promise the in-tree implementations of these interfaces
> will behave the same way? For example, the broken stuff AgentDeploy, which
> is an implementation of our DeployInterface.  Do we guarantee that this
> implementation will always keep doing whatever it was doing every time code is
> rebased?
>
> [1]
> https://review.openstack.org/#/c/166513/19/ironic/drivers/modules/pxe.py
>
>
>
>>
>> I worked out a patch for the AgentDeploy driver and have proposed it here:
>>   https://review.openstack.org/#/c/231215/1
>>
>> I'd like to ask folks to review it quickly -- we should fix this ASAP and
>> backport it to stable/liberty before the next release, if possible. We
>> should also make a similar fix for the PXEDeploy class. If anyone gets to
>> this before I do, please reply here and let me know so we don't duplicate
>> effort.
>>
>
>
> This isn't going to be as same as above as there is no longer a PXEDeploy
> class any more.  We might need to create a new class PXEDeploy which
> probably inherits from ISCSIDeploy and has task.driver.boot worked around
> in the same way as the above patch.
>
>
>
>>
>> Also, Jim already spotted something in the review that is a bit
>> concerning. It seems like the IloVirtualMediaAgentVendorInterface class
>> expects the driver it is attached to *not* to have a boot interface and
>> *not* to call boot.clean_up_ramdisk. Conversely, other drivers may be
>> expecting AgentVendorInterface to call boot.clean_up_ramdisk -- since that
>> was its default behavior in Kilo. I'm not sure what the right way to fix
>> this is, but I lean towards updating the in-tree driver so we remain
>> 

Re: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

2015-10-07 Thread Egor Guz
Gal, thx a lot. I have created the poll 
http://doodle.com/poll/udpdw77evdpnsaq6 where everyone can vote for a time slot.

—
Egor


From: Gal Sagie >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, October 6, 2015 at 12:08
To: "OpenStack Development Mailing List (not for usage questions)" 
>, 
Eran Gampel >, Antoni 
Segura Puimedon >, Irena Berezovsky 
>, Mohammad Banikazemi 
>, Taku Fukushima 
>, Salvatore Orlando 
>, sky fei 
>, 
"digambarpati...@yahoo.co.in" 
>, Digambar 
Patil >
Subject: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

Hello All,

I have opened a Trello board to track all Kuryr assigned tasks and their 
assignee.
In addition to all the non assigned tasks we have defined.

You can visit and look at the board here [1].
Please email back if I missed you or any task that you are working on, or a task
that you think needs to be on that list.

This is only a temporary solution until we get everything organised, we plan to 
track everything with launchpad bugs (and the assigned blueprints)

If you see any task from this list which doesn't have an assignee and you feel
you have the time and the desire to contribute, please contact me and I will
provide guidance.

Thanks
Gal

[1] https://trello.com/b/cbIAXrQ2/project-kuryr

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-07 Thread Daniel P. Berrange
On Tue, Oct 06, 2015 at 11:43:52AM -0600, Chris Friesen wrote:
> On 10/06/2015 11:27 AM, Paul Carlton wrote:
> >
> >
> >On 06/10/15 17:30, Chris Friesen wrote:
> >>On 10/06/2015 08:11 AM, Daniel P. Berrange wrote:
> >>>On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:
> https://review.openstack.org/#/c/85048/ was raised to address the
> migration of instances that are not running but people did not warm to
> the idea of bringing a stopped/suspended instance to a paused state to
> migrate it.  Is there any work in progress to get libvirt enhanced to
> perform the migration of non active virtual machines?
> >>>
> >>>Libvirt can "migrate" the configuration of an inactive VM, but does
> >>>not plan todo anything related to storage migration. OpenStack could
> >>>already solve this itself by using libvirt storage pool APIs to
> >>>copy storage volumes across, but the storage pool worked in Nova
> >>>is stalled
> >>>
> >>>https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z
> >>>
> >>
> >>What is the libvirt API to migrate a paused/suspended VM? Currently nova 
> >>uses
> >>dom.managedSave(), so it doesn't know what file libvirt used to save the
> >>state.  Can libvirt migrate that file transparently?
> >>
> >>I had thought we might switch to virDomainSave() and then use the cold
> >>migration framework, but that requires passwordless ssh.  If there's a way 
> >>to
> >>get libvirt to handle it internally via the storage pool API then that would
> >>be better.
> 
> 
> >So my reading of this is the issue could be addressed in Mitaka by
> >implementing
> >http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html
> >
> >and
> >https://review.openstack.org/#/c/126979/4/specs/kilo/approved/migrate-libvirt-volumes.rst
> >
> >
> >is there any prospect of this being progressed?
> 
> Paul, that would avoid the need for cold migrations to use passwordless ssh
> between nodes.  However, I think there may be additional work to handle
> migrating paused/suspended instances--still waiting for Daniel to address
> that bit.

Migrating paused VMs should "just work" - certainly at the libvirt/QEMU
level there's no distinction between a paused & running VM wrt migration.
I know that historically Nova has blocked migration if the VM is paused
and I recall patches to remove that pointless restriction. I can't
remember if they ever merged.

For suspended instances, the scenario is really the same as with completely
offline instances. The only extra step is that you need to migrate the saved
image state file, as well as the disk images. This is trivial once you have
done the code for migrating disk images offline, since its "just one more file"
to care about.  Officially apps aren't supposed to know where libvirt keeps
the managed save files, but I think it is fine for Nova to peek behind the
scenes to get them. Alternatively I'd be happy to see an API added to libvirt
to allow the managed save files to be uploaded & downloaded via a libvirt
virStreamPtr object, in the same way we provide APIs to  upload & download
disk volumes. This would avoid the need to know explicitly about the file
location for the managed save image.
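
As a rough sketch (not Nova code; the domain name and save-file location are
illustrative assumptions), a management layer could detect the extra state
that needs to travel with a suspended instance like this:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # Nova's suspend path uses managedSave(), so a suspended instance has a
    # managed save image that must be copied along with the disk images.
    if dom.hasManagedSaveImage(0):
        # Today that file lives in a libvirt-internal location (often under
        # /var/lib/libvirt/qemu/save/ - an assumption, distro dependent);
        # the alternative discussed above is a libvirt stream API so the
        # path never needs to be known.
        print("managed save image present; transfer it with the disks")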

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 10:26:05AM +0100, Paul Carlton wrote:
> I'd be happy to take this on in Mitaka

Ok, first step would be to re-propose the old Kilo spec against Mitaka and
we should be able to fast-approve it.

> >>>So my reading of this is the issue could be addressed in Mitaka by
> >>>implementing
> >>>http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html
> >>>
> >>>and
> >>>https://review.openstack.org/#/c/126979/4/specs/kilo/approved/migrate-libvirt-volumes.rst

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Neil Jerram
On 07/10/15 12:12, Sean Dague wrote:
> We've had devstack plugins for about 10 months. They provide a very "pro
> user" experience by letting you enable arbitrary plugins with:
>
> enable_plugin $name git://git.openstack.org/openstack/$project [$branch]
>
> They have reasonable documentation here
> http://docs.openstack.org/developer/devstack/plugins.html

enable_plugin is indeed great.

A related question, if I may: has there been any discussion of
backporting enable_plugin support to e.g. DevStack's stable/juno
branch?  It would be cool to be able to use a DevStack plugin with
earlier OpenStack releases.

Thanks,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Thomas Herve
On Wed, Oct 7, 2015 at 12:59 PM, Roman Prykhodchenko  wrote:

> What I can extract now from this thread is that Fuel should switch to
> testr because of the following reasons:
>
> - Diversity of tools is a bad idea on a project scale
> - testrepository and related components are used in OpenStack Infra
> environment for much more tasks than just running tests
> - py.test won’t be added to global-requirements so there will always be a
> chance of another dependency hell
> - Sticking to global requirements is an idea which is in the scope of
> discussions around Fuel.
>
> Sounds like that’s the point when we should just file appropriate bugs and
> use testr in smaller components first, e.g., Fuel Client, and then try it
> in Nailgun.
>

I'd say that using testr in the default tox targets and thus in the gate is
the reasonable choice, for the reasons mentioned elsewhere. That said, I
also think that it's also fair to allow alternate test runners on the local
developer environment. You may have to make some tweaks from time to time,
but most of the time py.test should be able to run the test suite supported
by testr (this is not necessarily true for nose for example which doesn't
support testscenarios AFAIK).

This way you standardize on the tools (testr remains the "source of
truth"), but make local debugging much nicer.

-- 
Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-10-07 Thread Matthew Booth
On Fri, Sep 25, 2015 at 3:44 PM, Ihar Hrachyshka 
wrote:

> Hi all,
>
> releases are approaching, so it’s the right time to start some bike
> shedding on the mailing list.
>
> Recently I got pointed out several times [1][2] that I violate our commit
> message requirement [3] for the message lines that says: "Subsequent lines
> should be wrapped at 72 characters.”
>
> I agree that very long commit message lines can be bad, f.e. if they are
> 200+ chars. But <= 79 chars?.. Don’t think so. Especially since we have 79
> chars limit for the code.
>
> We had a check for the line lengths in openstack-dev/hacking before but it
> was killed [4] as per openstack-dev@ discussion [5].
>
> I believe commit message lines of <=80 chars are absolutely fine and
> should not get -1 treatment. I propose to raise the limit for the guideline
> on wiki accordingly.
>

IIUC, the lower limit for commit messages is because git displays them
indented by default, which means that lines which are 80 chars long will
wrap on a display which is 80 chars wide. I personally use terminal windows
which are 80 chars wide, and I do find long lines in commit messages
annoying, so I'm personally in favour of retaining the lower limit. Can't
say I'd storm any castles if it was changed, but if most people use git the
way I do[1] I guess it should stay.

Matt

[1] I have no idea if this is the case.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Tim Bell
> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 07 October 2015 13:02
> To: Sean Dague 
> Cc: OpenStack Development Mailing List (not for usage questions)
> ; openstack-
> operat...@lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-dev] [nova] Min libvirt for
> Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1
> 
> On Wed, Oct 07, 2015 at 06:55:44AM -0400, Sean Dague wrote:
> > On 10/07/2015 06:46 AM, Daniel P. Berrange wrote:
> > > In the Liberty version of OpenStack we had a min libvirt of 0.9.11
> > > and printed a warning on startup if you had < 0.10.2, to the effect
> > > that Mitaka will required 0.10.2
> > >
> > > This mail is a reminder that we will[1] mandate libvirt >= 0.10.2
> > > when Mitaka is released.
> > >
> > >
> > > Looking forward to the N release, I am suggesting that we target
> > > a new min libvirt of 1.1.1 for that cycle.
> > >
> > > Based on info in
> > >
> > >https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> > >
> > > this will exclude the following distros running running Nova Nxxx
> > > release:
> > >
> > >  - Fedora 20 - it will be end-of-life way before Nxxx is released
> > >
> > >  - RHEL 6 - Red Hat stopped shipping Nova on RHEL-6 after Icehouse
> > > and base distro only supports Python 2.6
> > >
> > >  - OpenSUSE 12 - this was end-of-life about 6 months ago now
> > >
> > >  - SLES 11 - base distro only supports Python 2.6
> > >
> > >  - Debian Wheezy - Debian Jessie is current stable, and
Wheezy-backports
> > >provides new enough libvirt for people who wish to
> > >  stay on Wheezy
> > >
> > > The min distros required would thus be Fedora 21, RHEL 7.0, OpenSUSE
> > > 13 SLES 12, Debian Wheezy and Ubuntu 14.04 (Trusty LTS)
> > >
> > > Regards,
> > > Daniel
> > >
> > > [1] https://review.openstack.org/#/c/231917/
> >
> > Isn't RHEL 7.1 just an update stream on RHEL 7.0? It seems a little
> > weird to keep the 1.1.1 support instead of just going up to 1.2.2.
> 
> Yes & no. There are in fact two different streams users can take with
RHEL.
> They can stick on a bugfix only stream, which would be 7.0.1, 7.0.2, etc,
or
> they can take the bugfix + features stream which is 7.1, 7.2, etc. They
can't
> stick on the bugfix only stream forever though, so given that by time Nxx
is
> released
> 7.2 will also be available, we are probably justified in dropping
> 7.0 support.
> 
> The next oldest distro libvirt would be Debian Wheezy-backports at 1.2.1.
> If we are happy to force Debian users to Jessie, then next oldest after
that is
> Ubuntu 14.04 LTS with 1.2.2.
> 

Although Red Hat is no longer supporting RHEL 6 after Icehouse, a number of
users such as GoDaddy and CERN are using Software Collections to run the
Python 2.7 code.

However, since this modification would only take place when Mitaka gets
released, this would realistically give those sites a year to complete
migration to RHEL/CentOS 7 assuming they are running from one of the
community editions.

What does the 1.1.1 version bring that is the motivation for raising the
limit?

Tim

> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
:|
> |: http://libvirt.org  -o- http://virt-manager.org
:|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
:|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
:|
> 
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] revisiting minimum libvirt version

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 06:32:53AM -0400, Sean Dague wrote:
> The following review https://review.openstack.org/#/c/171098 attempts to
> raise the minimum libvirt version to 1.0.3.
> 
> In May that was considered a no go -
> http://lists.openstack.org/pipermail/openstack-operators/2015-May/007012.html
> 
> Can we reconsider that decision and up this to 1.2 for what we're
> regularly testing with. It would also allow some cleaning out of a lot
> of conditional pathing, which is getting pretty deep in ifdefs -
> https://github.com/openstack/nova/blob/251e09ab69e5dd1ba2c917175bb408c708843f6e/nova/virt/libvirt/driver.py#L359-L424

I've actually just sent a thread suggesting we pick 1.1.1:

  http://lists.openstack.org/pipermail/openstack-dev/2015-October/076302.html

It is possible we could decide to pick a 1.2.x release, if we're willing to
drop further distros. Lets continue the discussion in that other thread
I created.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Roman Prykhodchenko
What I can extract now from this thread is that Fuel should switch to testr 
because of the following reasons:

- Diversity of tools is a bad idea on a project scale
- testrepository and related components are used in the OpenStack Infra environment 
for many more tasks than just running tests
- py.test won’t be added to global-requirements, so there will always be a chance of 
another dependency hell
- Sticking to global requirements is an idea which is in the scope of 
discussions around Fuel.

Sounds like that’s the point when we should just file appropriate bugs and use 
testr in smaller components first, e.g., Fuel Client, and then try it in 
Nailgun.


- romcheg

> 7 жовт. 2015 р. о 02:06 Monty Taylor  написав(ла):
> 
> On 10/06/2015 06:01 PM, Thomas Goirand wrote:
>> On 10/06/2015 01:14 PM, Yuriy Taraday wrote:
>>> On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko >> > wrote:
>>> 
>>> Atm I have the following pros. and cons. regarding testrepository:
>>> 
>>> pros.:
>>> 
>>> 1. It’s ”standard" in OpenStack so using it gives Fuel more karma
>>> and moves it more under big tent
>>> 
>>> 
>>> I don't think that big tent model aims at eliminating diversity of tools
>>> we use in our projects. A collection of web frameworks used in big tent
>>> is an example of that.
>> 
>> From the downstream distro point of view, I don't agree in general, and
>> with the web framework in particular. (though it's less a concern for
>> the testr vs pbr). We keep adding dependencies and duplicates, but never
>> remove them. For example, tablib and suds/sudsjurko need to be removed
>> because they are not maintainable, there's not much work to do so, but
>> nobody does the work...
> 
> The Big Tent has absolutely no change in opinion about eliminating diversity 
> of tools. OpenStack has ALWAYS striven to reduce diversity of tools. Big Tent 
> applies OpenStack to more things that request to be part of OpenStack.
> 
> Nothing has changed in the intent.
> 
> Diversity of tools in a project this size is a bad idea. Always has been. 
> Always will be.
> 
> The amount of web frameworks in use is a bug.
> 
>>> 2. It’s in global requirements, so it doesn’t cause dependency hell
>>> 
>>> That can be solved by adding py.test to openstack/requirements.
> 
> No, it cannot. py.test/testr is not about dependency management. It's about a 
> much bigger picture of how OpenStack does development and how that 
> development can be managed.
> 
>> I'd very much prefer if we could raise the barrier for getting a 3rd
>> party new dependency in. I hope we can talk about this in Tokyo. That
>> being said, indeed, adding py.test isn't so much of a problem, as it is
>> widely used, already packaged, and maintained upstream. I'd still prefer
>> if all projects were using the same testing framework and test runner
>> though.
> 
> As I said earlier in this thread, it has already been decided by the TC long 
> ago that we will use testr. Barring a (very unlikely) TC rescinding of that 
> decision, OpenStack projects use testr. There is zero value in expanding the 
> number of test runners.
> 
> 




Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Flavio Percoco

On 07/10/15 07:47 -0400, Sean Dague wrote:

We're starting to make plans for the next cycle. Long term plans are
getting made for details that would happen in one or two cycles.

As we already have the locations for the N and O summits I think we
should do the naming polls now and have names we can use for this
planning instead of letters. It's pretty minor but it doesn't seem like
there is any real reason to wait and have everyone come up with working
names that turn out to be confusing later.


Unless there's a good reason for not doing this, I'm ok with the
above.

Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [nova] revisiting minimum libvirt version

2015-10-07 Thread Sean Dague
The following review https://review.openstack.org/#/c/171098 attempts to
raise the minimum libvirt version to 1.0.3.

In May that was considered a no go -
http://lists.openstack.org/pipermail/openstack-operators/2015-May/007012.html

Can we reconsider that decision and up this to 1.2, which is what we're
regularly testing with? It would also allow some cleaning out of a lot
of conditional pathing, which is getting pretty deep in ifdefs -
https://github.com/openstack/nova/blob/251e09ab69e5dd1ba2c917175bb408c708843f6e/nova/virt/libvirt/driver.py#L359-L424
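
To make the "conditional pathing" concrete, the pattern in question looks
roughly like the sketch below (constants, names and versions here are
illustrative, not the actual nova code); every capability check gated on a
version below the new minimum becomes dead code that can be deleted:

    # Illustrative sketch only, not nova's driver.py
    MIN_LIBVIRT_VERSION = (1, 2, 1)              # assumed new minimum
    MIN_LIBVIRT_DISCARD_VERSION = (1, 0, 6)      # hypothetical feature gate

    def _version_to_int(ver):
        major, minor, micro = ver
        return major * 1000000 + minor * 1000 + micro

    class Driver(object):
        def __init__(self, conn):
            self._conn = conn   # a libvirt.virConnect

        def init_host(self):
            # getLibVersion() is the libvirt-python call returning the host
            # libvirt version packed into a single integer.
            if self._conn.getLibVersion() < _version_to_int(MIN_LIBVIRT_VERSION):
                raise RuntimeError("libvirt >= %s.%s.%s is required" % MIN_LIBVIRT_VERSION)

        def _supports_discard(self):
            # One of the many per-feature conditionals; raising the minimum
            # above the gated version lets this collapse to "return True".
            return self._conn.getLibVersion() >= _version_to_int(MIN_LIBVIRT_DISCARD_VERSION)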

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-07 Thread John Spray
On Tue, Oct 6, 2015 at 11:59 AM, Deepak Shetty  wrote:
>>
>> Currently, as you say, a share is accessible to anyone who knows the
>> auth key (created at the time the share is created).
>>
>> For adding the allow/deny path, I'd simply create and remove new ceph
>> keys for each entity being allowed/denied.
>
>
> Ok, but how does that map to the existing Manila access types (IP, User,
> Cert) ?

None of the above :-)

Compared with certs, the difference with Ceph is that ceph is issuing
credentials, rather than authorizing existing credentials[1]. So
rather than the tenant saying "Here's a certificate that Alice has
generated and will use to access the filesystem, please authorize it",
the tenant would say "Please authorize someone called Bob to access
the share, and let me know the key he should use to prove he is Bob".

As far as I can tell, we can't currently expose that in Manila: the
missing piece is a way to tag that generated key onto a
ShareInstanceAccessMapping, so that somebody with the right to read
from the Manila API can go read Bob's key, and give it to Bob so that
he can mount the filesystem.

That's why the first-cut compromise is to create a single auth
identity for accessing the share, and expose the key as part of the
share's export location.  It's then the user application's job to
share out that key to whatever hosts need to access it.  The lack of
Manila-mediated 'allow' is annoying but not intrinsically insecure.
The security problem with this approach is that we're not providing a
way to revoke/rotate the key without destroying the share.
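
For concreteness, the per-entity key creation/removal mentioned above is
just the usual cephx machinery, along these lines (the capability strings
are placeholders, not the exact caps a CephFS driver would grant):

    ceph auth get-or-create client.bob mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data'
    ceph auth get-key client.bob     # the secret Bob uses to mount
    ceph auth del client.bob         # deny/revoke again

The revoke/rotate problem above is exactly that: with a single shared
identity, "ceph auth del" would cut off every client of the share at once.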

So anyway.  This might be a good topic for a conversation at the
summit (or catch me up on the list if it's already been discussed in
depth) -- should drivers be allowed to publish generated
authentication tokens as part of the API for allowing access to a
share?

John


1. Aside: We *could* do a certificate-like model if it was assumed
that the Manila API consumer knew how to go and talk to Ceph out of
band to generate their auth identity.  That way, they could go and
create their auth identity in Ceph, and then ask Manila to grant that
identity access to the share.  However, it would be pointless, because
in ceph, anyone who can create an identity can also set the
capabilities of it (i.e. if they can talk directly to ceph, they don't
need Manila's permission to access the share).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Sean Dague
On 10/07/2015 07:02 AM, Daniel P. Berrange wrote:
> On Wed, Oct 07, 2015 at 06:55:44AM -0400, Sean Dague wrote:
>> On 10/07/2015 06:46 AM, Daniel P. Berrange wrote:
>>> In the Liberty version of OpenStack we had a min libvirt of 0.9.11 and
>>> printed a warning on startup if you had < 0.10.2, to the effect that
>>> Mitaka will required 0.10.2
>>>
>>> This mail is a reminder that we will[1] mandate libvirt >= 0.10.2 when
>>> Mitaka is released.
>>>
>>>
>>> Looking forward to the N release, I am suggesting that we target
>>> a new min libvirt of 1.1.1 for that cycle.
>>>
>>> Based on info in
>>>
>>>https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
>>>
>>> this will exclude the following distros running Nova Nxxx
>>> release:
>>>
>>>  - Fedora 20 - it will be end-of-life way before Nxxx is released
>>>
>>>  - RHEL 6 - Red Hat stopped shipping Nova on RHEL-6 after Icehouse
>>> and base distro only supports Python 2.6
>>>
>>>  - OpenSUSE 12 - this was end-of-life about 6 months ago now
>>>
>>>  - SLES 11 - base distro only supports Python 2.6
>>>
>>>  - Debian Wheezy - Debian Jessie is current stable, and Wheezy-backports
>>>provides new enough libvirt for people who wish to
>>>stay on Wheezy
>>>
>>> The min distros required would thus be Fedora 21, RHEL 7.0, OpenSUSE 13
>>> SLES 12, Debian Wheezy and Ubuntu 14.04 (Trusty LTS)
>>>
>>> Regards,
>>> Daniel
>>>
>>> [1] https://review.openstack.org/#/c/231917/
>>
>> Isn't RHEL 7.1 just an update stream on RHEL 7.0? It seems a little
>> weird to keep the 1.1.1 support instead of just going up to 1.2.2.
> 
> Yes & no. There are in fact two different streams users can take
> with RHEL. They can stick on a bugfix only stream, which would be
> 7.0.1, 7.0.2, etc, or they can take the bugfix + features stream
> which is 7.1, 7.2, etc. They can't stick on the bugfix only
> stream forever though, so given that by time Nxx is released
> 7.2 will also be available, we are probably justified in dropping
> 7.0 support.
> 
> The next oldest distro libvirt would be Debian Wheezy-backports at 1.2.1.
> If we are happy to force Debian users to Jessie, then next oldest after
> that is Ubuntu 14.04 LTS with 1.2.2.

1.2.1 seems reasonable, it's also probably worth asking the Debian folks
if they can put 1.2.2 into the backport stream.

I think it might also be worth pre-declaring the O minimum as well so
that instead of just following the distros we are signaling what we'd
like in there. Because in the O time frame it feels like 1.2.8 would be
a reasonable minimum, and that would give distros a year of warning to
ensure they got things there.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 07:17:09AM -0400, Sean Dague wrote:
> On 10/07/2015 07:02 AM, Daniel P. Berrange wrote:
> > On Wed, Oct 07, 2015 at 06:55:44AM -0400, Sean Dague wrote:
> >> Isn't RHEL 7.1 just an update stream on RHEL 7.0? It seems a little
> >> weird to keep the 1.1.1 support instead of just going up to 1.2.2.
> > 
> > Yes & no. There are in fact two different streams users can take
> > with RHEL. They can stick on a bugfix only stream, which would be
> > 7.0.1, 7.0.2, etc, or they can take the bugfix + features stream
> > which is 7.1, 7.2, etc. They can't stick on the bugfix only
> > stream forever though, so given that by time Nxx is released
> > 7.2 will also be available, we are probably justified in dropping
> > 7.0 support.
> > 
> > The next oldest distro libvirt would be Debian Wheezy-backports at 1.2.1.
> > If we are happy to force Debian users to Jessie, then next oldest after
> > that is Ubuntu 14.04 LTS with 1.2.2.
> 
> 1.2.1 seems reasonable, it's also probably worth asking the Debian folks
> if they can put 1.2.2 into the backport stream.
> 
> I think it might also be worth pre-declaring the O minimum as well so
> that instead of just following the distros we are signaling what we'd
> like in there. Because in the O time frame it feels like 1.2.8 would be
> a reasonable minimum, and that would give distros a year of warning to
> ensure they got things there.

FYI I extended the distro support wiki page with details of the min
libvirt we have required in each Nova release:

  
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Nova_release_min_version

We could just add a row for O release with an educated guess
as to a possible target, to give people an idea of where we're
likely to go.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Neil Jerram
On 07/10/15 12:26, Sean Dague wrote:
> On 10/07/2015 07:17 AM, Neil Jerram wrote:
>> On 07/10/15 12:12, Sean Dague wrote:
>>> We've had devstack plugins for about 10 months. They provide a very "pro
>>> user" experience by letting you enable arbitrary plugins with:
>>>
>>> enable_plugin $name git://git.openstack.org/openstack/$project [$branch]
>>>
>>> They have reasonable documentation here
>>> http://docs.openstack.org/developer/devstack/plugins.html
>> enable_plugin is indeed great.
>>
>> A related question, if I may: has there been any discussion of
>> backporting enable_plugin support to e.g. DevStack's stable/juno
>> branch?  It would be cool to be able to use a DevStack plugin with
>> earlier OpenStack releases.
>>
>> Thanks,
>> Neil
> stable/juno is very close to eol (I think there are only a couple of
> months left), so I don't think that it's worth the backporting effort at
> this time. If someone else did it, I'd review the code, but I don't
> think it's a priority.
>
>   -Sean
>

OK, thanks for this answer.

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Flavio Percoco

On 07/10/15 11:12 +0200, Julien Danjou wrote:

On Wed, Oct 07 2015, Flavio Percoco wrote:


I'm not trying to solve the lack of reviews in Liberty by removing
patches. What I'd like to do, though, is help to keep around patches
that really matter.


I think that's where you are making a mistake. There are contributors,
like me or Victor, who have been knocking on the Glance doors for months now,
sending patches that resolve technical debt rather than adding new
debt^Wfeatures. Currently, these patches are not seen as important and
are often "dismissed". So I'm pretty sure they are going to expire with
this new system.


I can't do anything about the past failures other than say I'm sorry.
As I mentioned in previous emails in this thread, the work to make the
review process better is unrelated to the topic of this email, really.

I wouldn't say that your patches (or Victor's) weren't important for
the team but I would like to avoid getting into the details of the
past, tbh.


Imagine that if you were merging patches from me, Victor, and people
like us, we would continue to send many of them, and mid-term, you'd get
some new blood on your core team.


I don't think this needs to be explained and I trust the whole Glance
core team to know this. Although, it's better to be explicit than
implicit so, thanks.



What is proposed here is really focusing on making life easier for the
current core team which is in large majority inactive.


This is where I think I'm failing to communicate the intention.
The dashboard[0] I've put up is the one intended to make the core
team's life easier.

[0] http://bit.ly/glance-review-dashboard



Don't read me wrong. I know you and Nikhil are both well-intentioned by
proposing that. I just think it's going to be worse, because it won't
improve much and you're going to push new contributors away.


Absolutely, I value everyone's feedback on this thread a lot. I hope
I'm explaining the goal correctly. If I'm not, I'm more than happy to
talk more about this (See my email with some stats for example).

Cheers,
Flavio


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [nova] revisiting minimum libvirt version

2015-10-07 Thread Sean Dague
On 10/07/2015 06:51 AM, Daniel P. Berrange wrote:
> On Wed, Oct 07, 2015 at 06:32:53AM -0400, Sean Dague wrote:
>> The following review https://review.openstack.org/#/c/171098 attempts to
>> raise the minimum libvirt version to 1.0.3.
>>
>> In May that was considered a no go -
>> http://lists.openstack.org/pipermail/openstack-operators/2015-May/007012.html
>>
>> Can we reconsider that decision and up this to 1.2 for what we're
>> regularly testing with. It would also allow some cleaning out of a lot
>> of conditional pathing, which is getting pretty deep in ifdefs -
>> https://github.com/openstack/nova/blob/251e09ab69e5dd1ba2c917175bb408c708843f6e/nova/virt/libvirt/driver.py#L359-L424
> 
> I've actually just sent a thread suggesting we pick 1.1.1:
> 
>   http://lists.openstack.org/pipermail/openstack-dev/2015-October/076302.html
> 
> It is possible we could decide to pick a 1.2.x release, if we're willing to
> drop further distros. Lets continue the discussion in that other thread
> I created.

Yep, no problem. I responded there and we can consider that the live thread.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Thomas Goirand
On 10/07/2015 02:06 AM, Monty Taylor wrote:
> The Big Tent has absolutely no change in opinion about eliminating
> diversity of tools. OpenStack has ALWAYS striven to reduce diversity of
> tools. Big Tent applies OpenStack to more things that request to be part
> of OpenStack.
> 
> Nothing has changed in the intent.
> 
> Diversity of tools in a project this size is a bad idea. Always has
> been. Always will be.
> 
> The amount of web frameworks in use is a bug.

Thanks a lot Monty. I am very happy that you have this opinion.

In such a case, could Zaqar get rid of Falcon? :)

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Sean Dague
On 10/07/2015 07:31 AM, Kai Qiang Wu wrote:
> Hi Sean,
> 
> 
> Do you mean all other projects, like Barbican (with its non-standard
> copy/paste implementation), would break in devstack?

Yes, that's what I mean. The copy/paste method will not work after that
point, which is why there is a heads up now to get things fixed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] FYI: Updated Mitaka specs template

2015-10-07 Thread Daniel P. Berrange
FYI anyone who is pushing specs for review against Mitaka should be aware
that yesterday we merged a change to the spec template. Specifically we
have removed the "Project priority" section of the template, since it has
been a source of much confusion, cannot be filled out until after the
summit decides on priorities and priority specs are already tracked via
etherpad.

So if anyone has a spec up for review, simply delete the "Project priority"
section of your template when pushing your next update of it. It should
have already only contained the word "None" in any case :-)

Once priorities are decided we will track priority specs via this page:

  https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking

Regards,
Daniel

[1] https://review.openstack.org/#/c/230916/
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Sean Dague
We're starting to make plans for the next cycle. Long term plans are
getting made for details that would happen in one or two cycles.

As we already have the locations for the N and O summits I think we
should do the naming polls now and have names we can use for this
planning instead of letters. It's pretty minor but it doesn't seem like
there is any real reason to wait and have everyone come up with working
names that turn out to be confusing later.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-07 Thread Paul Carlton

I'd be happy to take this on in Mitaka

On 07/10/15 10:14, Daniel P. Berrange wrote:

On Tue, Oct 06, 2015 at 11:43:52AM -0600, Chris Friesen wrote:

On 10/06/2015 11:27 AM, Paul Carlton wrote:


On 06/10/15 17:30, Chris Friesen wrote:

On 10/06/2015 08:11 AM, Daniel P. Berrange wrote:

On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:

https://review.openstack.org/#/c/85048/ was raised to address the
migration of instances that are not running but people did not warm to
the idea of bringing a stopped/suspended instance to a paused state to
migrate it.  Is there any work in progress to get libvirt enhanced to
perform the migration of non active virtual machines?

Libvirt can "migrate" the configuration of an inactive VM, but does
not plan todo anything related to storage migration. OpenStack could
already solve this itself by using libvirt storage pool APIs to
copy storage volumes across, but the storage pool worked in Nova
is stalled

https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z


What is the libvirt API to migrate a paused/suspended VM? Currently nova uses
dom.managedSave(), so it doesn't know what file libvirt used to save the
state.  Can libvirt migrate that file transparently?

I had thought we might switch to virDomainSave() and then use the cold
migration framework, but that requires passwordless ssh.  If there's a way to
get libvirt to handle it internally via the storage pool API then that would
be better.



So my reading of this is the issue could be addressed in Mitaka by
implementing
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html

and
https://review.openstack.org/#/c/126979/4/specs/kilo/approved/migrate-libvirt-volumes.rst


is there any prospect of this being progressed?

Paul, that would avoid the need for cold migrations to use passwordless ssh
between nodes.  However, I think there may be additional work to handle
migrating paused/suspended instances--still waiting for Daniel to address
that bit.

Migrating paused VMs should "just work" - certainly at the libvirt/QEMU
level there's no distinction between a paused & running VM wrt migration.
I know that historically Nova has blocked migration if the VM is paused
and I recall patches to remove that pointless restriction. I can't
remember if they ever merged.

For suspended instances, the scenario is really the same as with completely
offline instances. The only extra step is that you need to migrate the saved
image state file, as well as the disk images. This is trivial once you have
done the code for migrating disk images offline, since its "just one more file"
to care about.  Officially apps aren't supposed to know where libvirt keeps
the managed save files, but I think it is fine for Nova to peek behind the
scenes to get them. Alternatively I'd be happy to see an API added to libvirt
to allow the managed save files to be uploaded & downloaded via a libvirt
virStreamPtr object, in the same way we provide APIs to  upload & download
disk volumes. This would avoid the need to know explicitly about the file
location for the managed save image.
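
As a very rough illustration of the storage-pool/stream approach (a sketch
with python-libvirt, not Nova code; pool and volume names, URIs, the
temporary file and the lack of chunking/error handling are all
simplifications):

    import libvirt

    def _writer(stream, data, f):
        return f.write(data)

    def _reader(stream, nbytes, f):
        return f.read(nbytes)

    def copy_volume(src_uri, dst_uri, pool, name, tmp='/tmp/vol.img'):
        src = libvirt.open(src_uri)
        dst = libvirt.open(dst_uri)

        src_vol = src.storagePoolLookupByName(pool).storageVolLookupByName(name)
        dst_pool = dst.storagePoolLookupByName(pool)

        # Pull the source volume down through a stream...
        with open(tmp, 'wb') as f:
            st = src.newStream(0)
            src_vol.download(st, 0, 0, 0)
            st.recvAll(_writer, f)
            st.finish()

        # ...create a matching volume on the destination and push it back up.
        dst_vol = dst_pool.createXML(src_vol.XMLDesc(0), 0)
        with open(tmp, 'rb') as f:
            st = dst.newStream(0)
            dst_vol.upload(st, 0, 0, 0)
            st.sendAll(_reader, f)
            st.finish()

A managed save state file could be handled the same way if the
upload/download API suggested above existed, or by copying the file
directly if Nova is allowed to peek at its location.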

Regards,
Daniel


--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".






Re: [openstack-dev] FW: [Fuel] 8.0 Region name support / Multi-DC

2015-10-07 Thread Roman Sokolkov
Sheena, thanks. I agree with Chris that full Multi-DC is a different-scale task.

For now, Services just need one tiny step from Fuel/Product towards
supporting current Multi-DC deployment architectures (i.e. a shared
Keystone).

Andrew, Ruslan, Mike,

I've created a tiny blueprint:
https://blueprints.launchpad.net/fuel/+spec/expose-region-name-to-ui

We just need to expose the already existing functionality in the UI.

Can someone pick up this blueprint? And/or reassign it to the appropriate team.

Thanks

On Fri, Oct 2, 2015 at 7:41 PM, Sheena Gregson 
wrote:

> Forwarding since Chris isn’t subscribed.
>
>
>
> *From:* Chris Clason [mailto:ccla...@mirantis.com]
> *Sent:* Friday, October 02, 2015 6:30 PM
> *To:* Sheena Gregson ; OpenStack Development
> Mailing List (not for usage questions) 
> *Subject:* Re: [openstack-dev] [Fuel] 8.0 Region name support / Multi-DC
>
>
>
> We are doing some technology evaluations with the intent of publishing
> reference architectures at various scale points (500, 1500, 2000 etc). Part
> of this work will be to determine how to best partition the nodes in to
> regions based on scale limits of OpenStack components and workload
> characteristics. The work we are doing increased in scope significantly, so
> the first RA will be coming at the end of Q1 or early Q2.
>
>
>
> We do plan on using some components of Fuel for our testing but the main
> purpose is path finding. The work we do will eventually make it into Fuel,
> but we are going to run in front of it a bit.
>
>
>
> On Fri, Oct 2, 2015 at 9:19 AM Sheena Gregson 
> wrote:
>
> Plans for multi-DC: my understanding is that we are working on developing
> a whitepaper in Q4 that will provide a possible OpenStack multi-DC
> configuration, but I do not know whether or not we intend to include Fuel
> in the scope of this work (my guess would be no).  Chris – I copied you in
> case you wanted to comment here.
>
>
>
> Regarding specifying region names in UI, is it possible to specify region
> names in API?  And (apologies for my ignorance on this one) what is the
> relative equivalence to environments in Fuel (e.g. 1 environment : many
> regions, 1 environment == 1 region)?
>
>
>
> *From:* Roman Sokolkov [mailto:rsokol...@mirantis.com]
> *Sent:* Friday, October 02, 2015 5:26 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [Fuel] 8.0 Region name support / Multi-DC
>
>
>
> Folks,
>
>
>
> I've dug around the 8.0 roadmap and didn't find anything regarding Multi-DC
> support.
>
>
>
> My ask is about tiny(but useful) feature: give user ability to *specify
> Region name in UI.*
>
>
>
> Region name is already in every puppet module, so we just need to add this
> to UI.
>
>
>
> Do we have smth already?
>
>
>
> More general question: What are our plans in regards Multi-DC?
>
>
>
> Thanks
>
>
>
> --
>
> Roman Sokolkov,
>
> Deployment Engineer,
>
> Mirantis, Inc.
> Skype rsokolkov,
> rsokol...@mirantis.com
>
> --
>
> Chris Clason
>
> Director of Architecture
>
> ccla...@mirantis.com
>
> Mobile: +1.408.409.0295
>
>
>


-- 
Roman Sokolkov,
Deployment Engineer,
Mirantis, Inc.
Skype rsokolkov,
rsokol...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Glance review dashboard

2015-10-07 Thread Flavio Percoco

Greetings,

I brought this up at our meeting last week and I'd like to reach a
broader audience.

I've put together a dashboard[0] for Glance reviews to help increase
focus on relevant patches. The dashboard is generated using
gerrit-dash-creator[1], and contributions/feedback are more than
welcome.
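
For anyone curious about what feeds it, a gerrit-dash-creator definition is
just a small INI-style file that gets converted into a dashboard URL; the
snippet below is purely illustrative (these queries are not the actual
Glance dashboard definition):

    [dashboard]
    title = Glance Review Inbox (example)
    description = Example review dashboard
    foreach = project:openstack/glance status:open NOT label:Workflow<=-1

    [section "Needs final +2"]
    query = label:Code-Review>=2 NOT label:Code-Review<=-1

    [section "No negative feedback yet"]
    query = NOT label:Code-Review<=-1 NOT label:Code-Review>=2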

If you've been using the dashboard already, please do provide
feedback. Thanks!

Cheers,
Flavio

[0] http://bit.ly/glance-review-dashboard
[1] https://github.com/stackforge/gerrit-dash-creator

--
@flaper87
Flavio Percoco




[openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Sean Dague
Before we had devstack plugins, we had a kind of janky extras.d
mechanism. A bunch of projects implemented some odd copy / paste
mechanism in test jobs to use that in unexpected / unsupported ways.

We've had devstack plugins for about 10 months. They provide a very "pro
user" experience by letting you enable arbitrary plugins with:

enable_plugin $name git://git.openstack.org/openstack/$project [$branch]

They have reasonable documentation here
http://docs.openstack.org/developer/devstack/plugins.html
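
For anyone who hasn't made the switch yet, consuming a plugin is just a
local.conf entry; for example (project and branch below are placeholders):

    [[local|localrc]]
    enable_plugin barbican git://git.openstack.org/openstack/barbican master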

We're now getting to the point where some projects like Magnum are
getting into trouble trying to build jobs with projects like Barbican,
because Magnum uses devstack plugins, and Barbican has some odd non
plugin copy paste method. Building composite test jobs are thus really
wonky.

This is a heads up that at Mitaka 1 milestone the extras.d support will
be removed. The copy / paste method was never supported, and now it will
explicitly break. Now would be a great time for teams to prioritize
getting to the real plugin architecture.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Yuriy Taraday
On Wed, Oct 7, 2015 at 12:51 AM Monty Taylor  wrote:

> On 10/06/2015 10:52 AM, Sebastian Kalinowski wrote:
> > I've already wrote in the review that caused this thread that I do not
> want
> > to blindly follow rules for using one or another. We should always
> consider
> > technical requirements. And I do not see a reason to leave py.test (and
> > nobody
> > show me such reason) and replace it with something else.
>
> Hi!
>
> The reason is that testrepository is what OpenStack uses and as I
> understand it, Fuel wants to join the Big Tent.
>

It saddens me that once again choice of library is being forced upon a
project based on what other projects use, not on technical merit. py.test
is more than just a (way better) test runner, it allows to write tests with
less boilerplate and more power. While its features are not extensively
used in Fuel code, switching to testr would still require changing test
logic which is generally bad (that's why mox is still in use in OpenStack).
Can we avoid that?

> The use of testr is documented in the Project Testing Interface:
>
>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst#n78
>
> There are many reasons for it, but in large part we are continually
> adding more and more tools to process subunit output across the board in
> the Gate. subunit2sql is an important one, as it will be feeding into
> expanded test result dashboards.
>
> We also have zuul features in the pipeline to be able to watch the
> subunit streams in real time to respond more quickly to issues in test
> runs.
>

> We also have standard job builders based around tox and testr. Having
> project divergence in this area is a non-starter when there are over 800
> repositories.
>

So it seems that all that's needed to keep py.test as an option is a plugin
for py.test that generates a subunit stream, like Robert said. Is that right?

> In short, while I understand that this seems like an area where a
> project can do whatever it wants to, it really isn't. If it's causing
> you excessive pain, I recommend connecting with Robert on ways to make
> improvements to testrepository. Those improvements will also have the

> effect of improving life for the rest of OpenStack, which is also a
> great reason why we all use the same tools rather than foster an
> environment of per-project snowflakes.
>

I wouldn't call py.test a snowflake. It's a very well-established testing
tool and OpenStack projects could benefit from using it if we integrate it
with testr well.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Julien Danjou
On Wed, Oct 07 2015, Flavio Percoco wrote:

> I'm not trying to solve the lack of reviews in Liberty by removing
> patches. What I'd like to do, though, is help to keep around patches
> that really matter.

I think that's where you are making a mistake. There are contributors,
like me or Victor, who have been knocking on the Glance doors for months now,
sending patches that resolve technical debt rather than adding new
debt^Wfeatures. Currently, these patches are not seen as important and
are often "dismissed". So I'm pretty sure they are going to expire with
this new system.

Imagine that if you were merging patches from me, Victor, and people
like us, we would continue to send many of them, and mid-term, you'd get
some new blood on your core team.

What is proposed here is really focusing on making life easier for the
current core team which is in large majority inactive.

Don't get me wrong. I know you and Nikhil are both well-intentioned in
proposing this. I just think it's going to be worse, because it won't
improve much and you're going to push new contributors away.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-07 Thread Anna Kamyshnikova
I can't say that I have any great plans for this cycle, but I would like
to look into the L3 HA (L3 HA + DVR) feature, probably some bugfixes in this
area, and online data migration as a logical continuation of the online
migration support that was done in Liberty.

On Tue, Oct 6, 2015 at 8:34 PM, Ihar Hrachyshka  wrote:

> > On 06 Oct 2015, at 19:10, Thomas Goirand  wrote:
> >
> > On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
> >> Hi all,
> >>
> >> I talked recently with several contributors about what each of us plans
> for the next cycle, and found it’s quite useful to share thoughts with
> others, because you have immediate yay/nay feedback, and maybe find
> companions for next adventures, and what not. So I’ve decided to ask
> everyone what you see the team and you personally doing the next cycle, for
> fun or profit.
> >>
> >> That’s like a PTL nomination letter, but open to everyone! :) No
> commitments, no deadlines, just list random ideas you have in mind or in
> your todo lists, and we’ll all appreciate the huge pile of awesomeness no
> one will ever have time to implement even if scheduled for Xixao release.
> >>
> >> To start the fun, I will share my silly ideas in the next email.
> >>
> >> Ihar
> >
> > Could we have oslo-config-generator flat neutron.conf as a release goal
> > for Mitaka as well? The current configuration layout makes it difficult
> > for distributions to catch-up with working by default config.
>
> Good idea. I think we had some patches for that. I will try to keep it on
> my plate for M.
>
> Ihar
>
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Running Debian packages on top of Trusty

2015-10-07 Thread Sofer Athlan-Guyot
Hi,

On 2 Oct 2015, iberezovs...@mirantis.com wrote:

> Hello,
>
> thanks for bringing up this topic, that's what I wanted to discuss on
> next puppet-openstack irc meeting.
>
> So, user case is following: users may want to install Debian packages
> on Ubuntu host or vice versa,
> the same problem can probably happen with CentOS, RHEL, Fedora; or
> users may use non-official
> package repositories with their own package (service) naming strategy
> and so on.
> Current situation in puppet modules is following that package and
> service names are (let's say)
> hardcoded in 'params' class (e.g. [0]). But in situation that I've
> described it won't work.
> Puppet will try to use Ubuntu names on Ubuntu host and it won't allow
> to install and work with
> Debian packages.
>
> I've researched puppet modules and found an interesting example which
> can help to solve
> this issue. It's implemented in puppetlabs mongodb module:
> they have 'globals' class [1] that allows to override most part of
> parameters from 'params' class [2].
>
> So, I've decided to rework this solution and use it in OpenStack
> modules. As result I got draft patch
> for ceilometer module [3]. By default we use parameters from 'params'
> class, but every parameter
> can be now overridden using 'globals' class.
>
> OpenStack Puppet team, what do you think about this solution?

Here is another track that you may follow.  For instance, to have access
to the code variables there
https://github.com/openstack/puppet-nova/blob/master/manifests/params.pp#L100-L107
on an Ubuntu system you could just do this :

env FACTER_operatingsystem=Debian puppet agent -t 

You can override any fact on a system using an environment variable named
"FACTER_<fact_name>".

For instance on my system:

  $ facter -p 2>/dev/null | grep osfamily
  osfamily => RedHat

  $ env FACTER_osfamily=Ubuntu facter -p 2>/dev/null | grep osfamily

  osfamily => Ubuntu

Wouldn't this method be enough for your purpose?

Check https://puppetlabs.com/blog/facter-part-1-facter-101 for more
information.
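
If overriding facts feels too global, the globals/params pattern from the
mongodb module linked earlier in this thread boils down to roughly the
following sketch (class names, package names and the use of stdlib's pick()
are illustrative, not the actual puppet-ceilometer code):

    class ceilometer::globals (
      $package_name = undef,
    ) { }

    class ceilometer::params {
      include ::ceilometer::globals

      case $::osfamily {
        'Debian': { $default_package = 'ceilometer-common' }
        'RedHat': { $default_package = 'openstack-ceilometer-common' }
        default:  { fail("Unsupported osfamily: ${::osfamily}") }
      }

      # Use the override when it is set, the osfamily default otherwise.
      $package_name = pick($::ceilometer::globals::package_name, $default_package)
    }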

>
> Also, I'l bring up this topic on weekly puppet-openstack irc meeting.
>
> [0] - https://github.com/openstack/puppet-ceilometer/blob/master/manifests/params.pp
> [1] - https://github.com/puppetlabs/puppetlabs-mongodb/blob/master/manifests/globals.pp
> [2] - https://github.com/puppetlabs/puppetlabs-mongodb/blob/master/manifests/params.pp
> [3] - https://review.openstack.org/#/c/229918/
>
> 2015-10-02 15:43 GMT+03:00 Ivan Udovichenko
> :
>
> Hello,
>
> On 10/02/2015 03:15 PM, Emilien Macchi wrote:
>> Hey Thomas,
>>
>> On 10/02/2015 04:33 AM, Thomas Goirand wrote:
>> [...]
>>> We also may need, at some point, to add the type mosdebian and
> moscentos
>>> to the list of supported package suite, as there still will be
> some
>>> differences between the upstream Debian or CentOS packages.
> What is the
>>> best way to add this variable values?
>>>
>>> Could you Puppet experts explain to me and my Mirantis
> colleagues again?
>>
>> So we partially discussed about that during our last weekly
> meeting [1]
>> and it come out the best way to support both Debian & Ubuntu are
> Puppet
>> conditionals, like we already have in place.
>>
>> [1]
>>
> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-29-15.00.html
>
> It does not solve the original problem. Let's say you want to
> install
> Debian packages on-top of Ubuntu, it will fail and you will have
> to use
> workarounds, for example in the params.pp [1] you have specified.
>
> [1]
> https://github.com/openstack/puppet-nova/blob/master/manifests/params.pp#L100-L107
>
>>
>> See the example with puppet-nova |2] where we use
> $::operatingsystem
>> fact [3] to detect if we're running Ubuntu or Debian.
>> If we're running Ubuntu, we take reference from UCA packaging.
> If
>> Debian, we take your work as reference.
>>
>> [2]
>>
> https://github.com/openstack/puppet-nova/blob/master/manifests/params.pp#L100-L107
>> [3] https://puppetlabs.com/facter
>>
>
> What we need is some variable which can override the decision about
> which operating system is used, and thereby which packages will be
> installed. At least for Debian, that is what we really need.
> I'd be grateful if you look into it. Thank you.
>
>>
>>> Sorry that I didn't take notes about it and couldn't explain,
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)
>>>
>>> P.S: Where may I find the best tutorial to get up-to-speed
> about puppet,
>>> so that I know what I'm talking about next time?
>>>
>>
>> I personally learnt (and am still learning) by using official
>> documentation [4], that I suggest you to start with.
>>
>> [4] http://docs.puppetlabs.com/puppet/
>>
>> Hope it helps,
>>
>>
>>
>
>

Re: [openstack-dev] FW: [Fuel] 8.0 Region name support / Multi-DC

2015-10-07 Thread Adam Heczko
Hi, although I haven't been participating in this story since the very
beginning, let me add my 2 cents.
For scalability purposes, Nova considers use of the 'cells' construct rather
than 'regions'.
Regions, as the name suggests, deal with geographically dispersed data centre
locations.
In regards to the Fuel architecture, since Fuel supports only one PXE network,
it is IMO unable to deploy multi-region clouds.
Fuel uses the 'environments' construct, but again it doesn't map to either
'region' or 'cell', since a Fuel 'environment' deploys just another cluster
(with its own set of controllers, computes, etc.) over the shared PXE network.
It is probably quite affordable to add a 'cells' capability to Fuel, maybe
through the Fuel plugins mechanism, which could decouple nova-scheduler and
related roles from the 'main' controller role.
For true multi-region capability, it would be required to operate
multi-cobbler Fuel instances / multiple PXE networks with appropriate
'region' names provided.
An initial approach would probably be to deploy multiple Fuel instances
(one Fuel per region) and then bind them together through the RESTful API /
operate at scale through the API, at least when it comes to Keystone and
Galera cluster configuration.
There are several approaches to multi-region; maybe a good one would be a
plugin allowing the operator to select a remote data centre Galera cluster
as a replication partner.
I'm not sure at this moment how HA would be operated this way, since
Keystone utilizes memcached for various operations. Would multi-region
memcached memory states also be synchronized?
So a multi-region DC could raise a lot of related problems.

Regards,

A.



On Wed, Oct 7, 2015 at 11:49 AM, Roman Sokolkov 
wrote:

> Sheena, thanks. I agree with Chris full Multi-DC it's different scale task.
>
> For now Services just need +1 tiny step from Fuel/Product in favor
> supporting current Multi-DC deployments architectures. (i.e. shared
> Keystone)
>
> Andrew, Ruslan, Mike,
>
> i've created tiny blueprint
> https://blueprints.launchpad.net/fuel/+spec/expose-region-name-to-ui
>
> We just need to expose already existing functionality to UI.
>
> Can someone pickup this blueprint? And/Or reassign to appropriate team.
>
> Thanks
>
> On Fri, Oct 2, 2015 at 7:41 PM, Sheena Gregson 
> wrote:
>
>> Forwarding since Chris isn’t subscribed.
>>
>>
>>
>> *From:* Chris Clason [mailto:ccla...@mirantis.com]
>> *Sent:* Friday, October 02, 2015 6:30 PM
>> *To:* Sheena Gregson ; OpenStack Development
>> Mailing List (not for usage questions) > >
>> *Subject:* Re: [openstack-dev] [Fuel] 8.0 Region name support / Multi-DC
>>
>>
>>
>> We are doing some technology evaluations with the intent of publishing
>> reference architectures at various scale points (500, 1500, 2000 etc). Part
>> of this work will be to determine how to best partition the nodes in to
>> regions based on scale limits of OpenStack components and workload
>> characteristics. The work we are doing increased in scope significantly, so
>> the first RA will be coming at the end of Q1 or early Q2.
>>
>>
>>
>> We do plan on using some components of Fuel for our testing but the main
>> purpose is path finding. The work we do will eventually make it into Fuel,
>> but we are going to run in front of it a bit.
>>
>>
>>
>> On Fri, Oct 2, 2015 at 9:19 AM Sheena Gregson 
>> wrote:
>>
>> Plans for multi-DC: my understanding is that we are working on developing
>> a whitepaper in Q4 that will provide a possible OpenStack multi-DC
>> configuration, but I do not know whether or not we intend to include Fuel
>> in the scope of this work (my guess would be no).  Chris – I copied you in
>> case you wanted to comment here.
>>
>>
>>
>> Regarding specifying region names in UI, is it possible to specify region
>> names in API?  And (apologies for my ignorance on this one) what is the
>> relative equivalence to environments in Fuel (e.g. 1 environment : many
>> regions, 1 environment == 1 region)?
>>
>>
>>
>> *From:* Roman Sokolkov [mailto:rsokol...@mirantis.com]
>> *Sent:* Friday, October 02, 2015 5:26 PM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* [openstack-dev] [Fuel] 8.0 Region name support / Multi-DC
>>
>>
>>
>> Folks,
>>
>>
>>
>> I've dug around the 8.0 roadmap and didn't find anything regarding Multi-DC
>> support.
>>
>>
>>
>> My ask is about tiny(but useful) feature: give user ability to *specify
>> Region name in UI.*
>>
>>
>>
>> Region name is already in every puppet module, so we just need to add
>> this to UI.
>>
>>
>>
>> Do we have smth already?
>>
>>
>>
>> More general question: What are our plans in regards Multi-DC?
>>
>>
>>
>> Thanks
>>
>>
>>
>> --
>>
>> Roman Sokolkov,
>>
>> Deployment Engineer,
>>
>> Mirantis, Inc.
>> Skype rsokolkov,
>> rsokol...@mirantis.com
>>
>> --
>>
>> Chris Clason
>>
>> Director of Architecture

[openstack-dev] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Daniel P. Berrange
In the Liberty version of OpenStack we had a min libvirt of 0.9.11 and
printed a warning on startup if you had < 0.10.2, to the effect that
Mitaka will required 0.10.2

This mail is a reminder that we will[1] mandate libvirt >= 0.10.2 when
Mitaka is released.


Looking forward to the N release, I am suggesting that we target
a new min libvirt of 1.1.1 for that cycle.

Based on info in

   https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

this will exclude the following distros running Nova Nxxx
release:

 - Fedora 20 - it will be end-of-life way before Nxxx is released

 - RHEL 6 - Red Hat stopped shipping Nova on RHEL-6 after Icehouse
and base distro only supports Python 2.6

 - OpenSUSE 12 - this was end-of-life about 6 months ago now

 - SLES 11 - base distro only supports Python 2.6

 - Debian Wheezy - Debian Jessie is current stable, and Wheezy-backports
   provides new enough libvirt for people who wish to
   stay on Wheezy

The min distros required would thus be Fedora 21, RHEL 7.0, OpenSUSE 13,
SLES 12, Debian Wheezy and Ubuntu 14.04 (Trusty LTS)

Regards,
Daniel

[1] https://review.openstack.org/#/c/231917/
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 06:55:44AM -0400, Sean Dague wrote:
> On 10/07/2015 06:46 AM, Daniel P. Berrange wrote:
> > In the Liberty version of OpenStack we had a min libvirt of 0.9.11 and
> > printed a warning on startup if you had < 0.10.2, to the effect that
> > Mitaka will required 0.10.2
> > 
> > This mail is a reminder that we will[1] mandate libvirt >= 0.10.2 when
> > Mitaka is released.
> > 
> > 
> > Looking forward to the N release, I am suggesting that we target
> > a new min libvirt of 1.1.1 for that cycle.
> > 
> > Based on info in
> > 
> >https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> > 
> > this will exclude the following distros running Nova Nxxx
> > release:
> > 
> >  - Fedora 20 - it will be end-of-life way before Nxxx is released
> > 
> >  - RHEL 6 - Red Hat stopped shipping Nova on RHEL-6 after Icehouse
> > and base distro only supports Python 2.6
> > 
> >  - OpenSUSE 12 - this was end-of-life about 6 months ago now
> > 
> >  - SLES 11 - base distro only supports Python 2.6
> > 
> >  - Debian Wheezy - Debian Jessie is current stable, and Wheezy-backports
> >provides new enough libvirt for people who wish to
> >stay on Wheezy
> > 
> > The min distros required would thus be Fedora 21, RHEL 7.0, OpenSUSE 13
> > SLES 12, Debian Wheezy and Ubuntu 14.04 (Trusty LTS)
> > 
> > Regards,
> > Daniel
> > 
> > [1] https://review.openstack.org/#/c/231917/
> 
> Isn't RHEL 7.1 just an update stream on RHEL 7.0? It seems a little
> weird to keep the 1.1.1 support instead of just going up to 1.2.2.

Yes & no. There are in fact two different streams users can take
with RHEL. They can stick on a bugfix only stream, which would be
7.0.1, 7.0.2, etc, or they can take the bugfix + features stream
which is 7.1, 7.2, etc. They can't stick on the bugfix only
stream forever though, so given that by the time Nxxx is released
7.2 will also be available, we are probably justified in dropping
7.0 support.

The next oldest distro libvirt would be Debian Wheezy-backports at 1.2.1.
If we are happy to force Debian users to Jessie, then next oldest after
that is Ubuntu 14.04 LTS with 1.2.2.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Sean Dague
On 10/07/2015 07:17 AM, Neil Jerram wrote:
> On 07/10/15 12:12, Sean Dague wrote:
>> We've had devstack plugins for about 10 months. They provide a very "pro
>> user" experience by letting you enable arbitrary plugins with:
>>
>> enable_plugin $name git://git.openstack.org/openstack/$project [$branch]
>>
>> They have reasonable documentation here
>> http://docs.openstack.org/developer/devstack/plugins.html
> 
> enable_plugin is indeed great.
> 
> A related question, if I may: has there been any discussion of
> backporting enable_plugin support to e.g. DevStack's stable/juno
> branch?  It would be cool to be able to use a DevStack plugin with
> earlier OpenStack releases.
> 
> Thanks,
> Neil

stable/juno is very close to eol (I think there are only a couple of
months left), so I don't think that it's worth the backporting effort at
this time. If someone else did it, I'd review the code, but I don't
think it's a priority.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Roman Prykhodchenko
Yuri,

sticking to global requirements and interacting more deeply with OpenStack Infra are 
current objectives for Fuel, and those are pretty much technical questions. 
However, software development is not only about solving technical tasks; it also 
involves interaction between people and other teams, so you cannot separate 
those things, even if it sounds too much like politics.

- romcheg

> On 7 Oct 2015, at 13:20, Yuriy Taraday wrote:
> 
> On Wed, Oct 7, 2015 at 12:51 AM Monty Taylor  > wrote:
> On 10/06/2015 10:52 AM, Sebastian Kalinowski wrote:
> > I've already wrote in the review that caused this thread that I do not want
> > to blindly follow rules for using one or another. We should always consider
> > technical requirements. And I do not see a reason to leave py.test (and
> > nobody
> > show me such reason) and replace it with something else.
> 
> Hi!
> 
> The reason is that testrepository is what OpenStack uses and as I
> understand it, Fuel wants to join the Big Tent.
> 
> It saddens me that once again choice of library is being forced upon a 
> project based on what other projects use, not on technical merit. py.test is 
> more than just a (way better) test runner, it allows to write tests with less 
> boilerplate and more power. While its features are not extensively used in 
> Fuel code, switching to testr would still require changing test logic which 
> is generally bad (that's why mox is still in use in OpenStack). Can we avoid 
> that?
> 
> The use of testr is documented in the Project Testing Interface:
> 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst#n78
>  
> 
> 
> There are many reasons for it, but in large part we are continually
> adding more and more tools to process subunit output across the board in
> the Gate. subunit2sql is an important one, as it will be feeding into
> expanded test result dashboards.
> 
> We also have zuul features in the pipeline to be able to watch the
> subunit streams in real time to respond more quickly to issues in test runs.
> 
> We also have standard job builders based around tox and testr. Having
> project divergence in this area is a non-starter when there are over 800
> repositories.
> 
> So it seems that all that's needed to keep py.test as an option is a plugin 
> for py.test that generates subunit stream like Robert said, is that right?
> 
> In short, while I understand that this seems like an area where a
> project can do whatever it wants to, it really isn't. If it's causing
> you excessive pain, I recommend connecting with Robert on ways to make
> improvements to testrepository. Those improvements will also have the
> effect of improving life for the rest of OpenStack, which is also a
> great reason why we all use the same tools rather than foster an
> environment of per-project snowflakes.
> 
> I wouldn't call py.test a snowflake. It's a very well-established testing 
> tool and OpenStack projects could benefit from using it if we integrate it 
> with testr well.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-07 Thread Daniel P. Berrange
On Tue, Oct 06, 2015 at 06:27:12PM +0100, Paul Carlton wrote:
> 
> 
> On 06/10/15 17:30, Chris Friesen wrote:
> >On 10/06/2015 08:11 AM, Daniel P. Berrange wrote:
> >>On Tue, Oct 06, 2015 at 02:54:21PM +0100, Paul Carlton wrote:
> >>>https://review.openstack.org/#/c/85048/ was raised to address the
> >>>migration of instances that are not running but people did not warm to
> >>>the idea of bringing a stopped/suspended instance to a paused state to
> >>>migrate it.  Is there any work in progress to get libvirt enhanced to
> >>>perform the migration of non active virtual machines?
> >>
> >>Libvirt can "migrate" the configuration of an inactive VM, but does
> >>not plan to do anything related to storage migration. OpenStack could
> >>already solve this itself by using the libvirt storage pool APIs to
> >>copy storage volumes across, but the storage pool work in Nova
> >>is stalled
> >>
> >>https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/use-libvirt-storage-pools,n,z
> >>
> >
> >What is the libvirt API to migrate a paused/suspended VM? Currently nova
> >uses dom.managedSave(), so it doesn't know what file libvirt used to save
> >the state.  Can libvirt migrate that file transparently?
> >
> >I had thought we might switch to virDomainSave() and then use the cold
> >migration framework, but that requires passwordless ssh.  If there's a way
> >to get libvirt to handle it internally via the storage pool API then that
> >would be better.
> >
> >Chris
> >
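For reference, the two libvirt calls being compared differ roughly as in this
hedged libvirt-python sketch (the domain name and state-file path are made up,
and this is not Nova code):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # made-up domain name

    use_managed_save = True  # the two calls below are alternatives

    if use_managed_save:
        # managedSave(): libvirt picks the state file location itself, so
        # the caller never learns the path and cannot copy it to another host.
        dom.managedSave(0)
    else:
        # save(): the caller chooses the path, so the state file could in
        # principle be shipped with the instance directory during a cold
        # migration.
        dom.save('/var/lib/nova/instances/instance-00000001/instance.save')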
> >__
> >
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> So my reading of this is the issue could be addressed in Mitaka by
> implementing
> http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/use-libvirt-storage-pools.html
> and
> https://review.openstack.org/#/c/126979/4/specs/kilo/approved/migrate-libvirt-volumes.rst
> 
> is there any prospect of this being progressed?

The guy who started that work, Solly Ross, is no longer involved in the
Nova project. The overall idea is still sound, but the patches need more
work to get them into a state suitable for serious review & potential
merge. So it is basically waiting for someone motivated to take over
the existing patches Solly did...

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] ports management

2015-10-07 Thread Neil Jerram
Just some initial thoughts here, as I'm not yet clear on many of your
details.

On 06/10/15 16:55, Peter V. Saveliev wrote:
> …
>
>
> The problem.
> 
>
>
> There are use cases when it is necessary to attach a vnic to some specific 
> network interface instead of br-int.
>
> For example, when working with trunk ports it is better to attach the vnic to a 
> specific trunk bridge, and add that bridge to br-int. But this doesn't fit 
> in the current design.
I think this corresponds to a modified form of 'VIF plugging', which is
something that currently happens in Nova; see nova/virt/libvirt/vif.py

>
> There are several possible ways to solve the issue:
>
> 1. make the user responsible for passing a ready-to-use port to nova, so 
> nova will not care about libvirt adding the port to the bridge
> 2. make the neutron service synchronously call the agent to create the 
> required interface, e.g. the trunk bridge.
> 3. make neutron somehow delay the vif plug
Why would a delay help?  (Also note, as above, that VIF plugging is in
Nova.)

> 4. make nova create the required port
A port is a Neutron object, so not sure what you could mean here.

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 11:46:58AM +0100, Daniel P. Berrange wrote:
> In the Liberty version of OpenStack we had a min libvirt of 0.9.11 and
> printed a warning on startup if you had < 0.10.2, to the effect that
> Mitaka will required 0.10.2
> 
> This mail is a reminder that we will[1] mandate libvirt >= 0.10.2 when
> Mitaka is released.
> 
> 
> Looking forward to the N release, I am suggesting that we target
> a new min libvirt of 1.1.1 for that cycle.
> 
> Based on info in
> 
>https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> 
> this will exclude the following distros from running the Nova Nxxx
> release:
> 
>  - Fedora 20 - it will be end-of-life way before Nxxx is released
> 
>  - RHEL 6 - Red Hat stopped shipping Nova on RHEL-6 after Icehouse
> and base distro only supports Python 2.6
> 
>  - OpenSUSE 12 - this was end-of-life about 6 months ago now
> 
>  - SLES 11 - base distro only supports Python 2.6
> 
>  - Debian Wheezy - Debian Jessie is current stable, and Wheezy-backports
>provides new enough libvirt for people who wish to
>  stay on Wheezy
> 
> The min distros required would thus be Fedora 21, RHEL 7.0, OpenSUSE 13,
> SLES 12, Debian Wheezy and Ubuntu 14.04 (Trusty LTS)

If we want to be slightly more aggressive and target 1.2.1 we would
additionally lose RHEL-7.0 and OpenSUSE 13.1.  This is probably
not the end of the world, since by the time Nxxx is released, I
expect people will have moved to a newer minor update of those
distros (RHEL-7.1 / OpenSUSE 13.2).

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Sean Dague
On 10/07/2015 06:46 AM, Daniel P. Berrange wrote:
> In the Liberty version of OpenStack we had a min libvirt of 0.9.11 and
> printed a warning on startup if you had < 0.10.2, to the effect that
> Mitaka will required 0.10.2
> 
> This mail is a reminder that we will[1] mandate libvirt >= 0.10.2 when
> Mitaka is released.
> 
> 
> Looking forward to the N release, I am suggesting that we target
> a new min libvirt of 1.1.1 for that cycle.
> 
> Based on info in
> 
>https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> 
> this will exclude the following distros from running the Nova Nxxx
> release:
> 
>  - Fedora 20 - it will be end-of-life way before Nxxx is released
> 
>  - RHEL 6 - Red Hat stopped shipping Nova on RHEL-6 after Icehouse
> and base distro only supports Python 2.6
> 
>  - OpenSUSE 12 - this was end-of-life about 6 months ago now
> 
>  - SLES 11 - base distro only supports Python 2.6
> 
>  - Debian Wheezy - Debian Jessie is current stable, and Wheezy-backports
>provides new enough libvirt for people who wish to
>  stay on Wheezy
> 
> The min distros required would thus be Fedora 21, RHEL 7.0, OpenSUSE 13,
> SLES 12, Debian Wheezy and Ubuntu 14.04 (Trusty LTS)
> 
> Regards,
> Daniel
> 
> [1] https://review.openstack.org/#/c/231917/

Isn't RHEL 7.1 just an update stream on RHEL 7.0? It seems a little
weird to keep the 1.1.1 support instead of just going up to 1.2.2.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Yuriy Taraday
On Wed, Oct 7, 2015 at 3:14 AM Monty Taylor  wrote:

> On 10/06/2015 06:01 PM, Thomas Goirand wrote:
> > On 10/06/2015 01:14 PM, Yuriy Taraday wrote:
> >> On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko  >> > wrote:
> >>
> >>  Atm I have the following pros. and cons. regarding testrepository:
> >>
> >>  pros.:
> >>
> >>  1. It’s ”standard" in OpenStack so using it gives Fuel more karma
> >>  and moves it more under big tent
> >>
> >>
> >> I don't think that big tent model aims at eliminating diversity of tools
> >> we use in our projects. A collection of web frameworks used in big tent
> >> is an example of that.
> >
> >  From the downstream distro point of view, I don't agree in general, and
> > with the web framework in particular. (though it's less a concern for
> > the testr vs pbr). We keep adding dependencies and duplicates, but never
> > remove them. For example, tablib and suds/sudsjurko need to be removed
> > because they are not maintainable, there's not much work to do so, but
> > nobody does the work...
>
> The Big Tent has absolutely no change in opinion about eliminating
> diversity of tools. OpenStack has ALWAYS striven to reduce diversity of
> tools. Big Tent applies OpenStack to more things that request to be part
> of OpenStack.
>
> Nothing has changed in the intent.
>
> Diversity of tools in a project this size is a bad idea. Always has
> been. Always will be.
>
> The amount of web frameworks in use is a bug.
>

I'm sorry, that was my mistake. I just can't remember any project that was
denied a place under the big tent (or integrated status) because of a library in use.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 11:13:12AM +, Tim Bell wrote:
> 
> Although Red Hat is no longer supporting RHEL 6 after Icehouse, a number of
> users such as GoDaddy and CERN are using Software Collections to run the
> Python 2.7 code.

Do you have any educated guess as to when you might switch to deploying
new OpenStack versions exclusively on RHEL 7? I understand such a switch
is likely to take a while so you can test its performance and reliability
and so on, but I'm assuming you'll eventually switch?

> However, since this modification would only take place when Mitaka gets
> released, this would realistically give those sites a year to complete
> migration to RHEL/CentOS 7 assuming they are running from one of the
> community editions.
> 
> What does the 1.1.1 version bring that is the motivation for raising the
> limit ?

If we require 1.1.1 we could have unconditional support for

 - Hot-unplug of PCI devices (needs 1.1.1)
 - Live snapshots (needs 1.0.0)
 - Live volume snapshotting (needs 1.1.1)
 - Disk sector discard support (needs 1.0.6)
 - Hyper-V clock tunables (needs 1.0.0 & 1.1.0)

If you lack those versions then, in the case of hot-unplug and live volume
snapshots, we just refuse the corresponding API call. With live
snapshots we fall back to non-live snapshots. For disk discard and the
Hyper-V clock we just run with degraded functionality. The lack of
Hyper-V clock tunables means Windows guests will have unreliable
timekeeping and are likely to suffer random BSODs, which I think
is a particularly important issue.


And of course we remove a bunch of conditional logic from Nova
which simplifies the code paths and removes code paths which
rarely get testing coverage.
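For illustration only (this is not the actual Nova source), the kind of guard
that a raised minimum makes removable looks like:

    # Illustrative sketch: with a minimum libvirt of 1.1.1 this check is
    # always true, so both it and the fallback path it protects can go away.
    MIN_LIBVIRT_DISCARD_VERSION = (1, 0, 6)

    def supports_discard(libvirt_version):
        """Return True if the running libvirt supports disk sector discard."""
        return libvirt_version >= MIN_LIBVIRT_DISCARD_VERSION

    assert supports_discard((1, 1, 1))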

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Kai Qiang Wu
Hi Sean,


Do you mean all other projects, like Barbican (with its non-standard copy/paste
implementation), would break in devstack?




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Sean Dague 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   07/10/2015 07:13 pm
Subject:[openstack-dev] [all] devstack extras.d support going away at
M1 - your jobs may break if you rely on it in your dsvm jobs



Before we had devstack plugins, we had a kind of janky extras.d
mechanism. A bunch of projects implemented some odd copy / paste
mechanism in test jobs to use that in unexpected / unsupported ways.

We've had devstack plugins for about 10 months. They provide a very "pro
user" experience by letting you enable arbitrary plugins with:

enable_plugin $name git://git.openstack.org/openstack/$project [$branch]

They have reasonable documentation here
http://docs.openstack.org/developer/devstack/plugins.html

We're now getting to the point where some projects like Magnum are
getting into trouble trying to build jobs with projects like Barbican,
because Magnum uses devstack plugins, and Barbican has some odd non-plugin
copy/paste method. Building composite test jobs is thus really
wonky.

This is a heads up that at Mitaka 1 milestone the extras.d support will
be removed. The copy / paste method was never supported, and now it will
explicitly break. Now would be a great time for teams to prioritize
getting to the real plugin architecture.

 -Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Michal Rostecki
On Wed, Oct 7, 2015 at 12:59 PM, Roman Prykhodchenko  wrote:
> What I can extract now from this thread is that Fuel should switch to testr
> because of the following reasons:
>
> - Diversity of tools is a bad idea on a project scale

We already have diversity of frameworks (or a lack of them) in
OpenStack. We have Pecan, Flask, wsgiref, Django.

> - testrepository and related components are used in OpenStack Infra
> environment for much more tasks than just running tests

If by "more tasks" you mean parallel testing, py.test also has a
possibility to do that by pytest-xdist.

> - py.test won’t be added to global-requirements so there always be a chance
> of another dependency hell

As Igor Kalnitsky said, py.test doesn't have many requirements.
https://github.com/pytest-dev/pytest/blob/master/setup.py#L58
It's only argparse, which already is in global requirements without
any version pinned.

> - Sticking to global requirements is an idea which is in the scope of
> discussions around Fuel.
>
> Sounds like that’s the point when we should just file appropriate bugs and
> use testr in smaller components, e. g., Fuel Client, first and then try in
> in Nailgun.
>
>
> - romcheg
>

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-07 Thread Assaf Muller
On Wed, Oct 7, 2015 at 5:44 AM, Anna Kamyshnikova <
akamyshnik...@mirantis.com> wrote:

> I can't say that I have any great plans for this cycle, but I would like
> to look into the L3 HA (L3 HA + DVR) feature,
>

The agent side patch was merged yesterday, and the server side patch needs
reviews: https://review.openstack.org/#/c/143169/.
Your work in L3 HA land is greatly appreciated :)


> probably some bugfixes in this area, and online data migration as a logical
> continuation of the online migration support that was done in Liberty.
>
> On Tue, Oct 6, 2015 at 8:34 PM, Ihar Hrachyshka 
> wrote:
>
>> > On 06 Oct 2015, at 19:10, Thomas Goirand  wrote:
>> >
>> > On 10/01/2015 03:45 PM, Ihar Hrachyshka wrote:
>> >> Hi all,
>> >>
>> >> I talked recently with several contributors about what each of us
>> plans for the next cycle, and found it’s quite useful to share thoughts
>> with others, because you have immediate yay/nay feedback, and maybe find
>> companions for next adventures, and what not. So I’ve decided to ask
>> everyone what you see the team and you personally doing the next cycle, for
>> fun or profit.
>> >>
>> >> That’s like a PTL nomination letter, but open to everyone! :) No
>> commitments, no deadlines, just list random ideas you have in mind or in
>> your todo lists, and we’ll all appreciate the huge pile of awesomeness no
>> one will ever have time to implement even if scheduled for Xixao release.
>> >>
>> >> To start the fun, I will share my silly ideas in the next email.
>> >>
>> >> Ihar
>> >
>> > Could we have oslo-config-generator flat neutron.conf as a release goal
>> > for Mitaka as well? The current configuration layout makes it difficult
>> > for distributions to catch-up with working by default config.
>>
>> Good idea. I think we had some patches for that. I will try to keep it on
>> my plate for M.
>>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Min libvirt for Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1

2015-10-07 Thread Tim Bell


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 07 October 2015 13:25
> To: Tim Bell 
> Cc: Sean Dague ; OpenStack Development Mailing List
> (not for usage questions) ; openstack-
> operat...@lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-dev] [nova] Min libvirt for
> Mitaka is 0.10.2 and suggest Nxxx uses 1.1.1
>
> On Wed, Oct 07, 2015 at 11:13:12AM +, Tim Bell wrote:
> >
> > Although Red Hat is no longer supporting RHEL 6 after Icehouse, a
> > number of users such as GoDaddy and CERN are using Software
> > Collections to run the Python 2.7 code.
>
> Do you have any educated guess as to when you might switch to deploying
> new OpenStack version exclusively on RHEL 7 ? I understand such a switch is
> likely to take a while so you can test its performance and reliability and 
> so on,
> but I'm assuming you'll eventually switch ?
>

I think we'll be all 7 by spring next year (i.e. when we install Liberty). The 
software collections work is not for the faint hearted and 7 brings lots of 
good things with it for operations so we want to get there as soon as 
possible. Thus, I think we'd be fine with a change in Mitaka (especially given 
the points you mention below).

> > However, since this modification would only take place when Mitaka
> > gets released, this would realistically give those sites a year to
> > complete migration to RHEL/CentOS 7 assuming they are running from one
> > of the community editions.
> >
> > What does the 1.1.1 version bring that is the motivation for raising
> > the limit ?
>
> If we require 1.1.1 we could have unconditional support for
>
>  - Hot-unplug of PCI devices (needs 1.1.1)
>  - Live snapshots (needs 1.0.0)
>  - Live volume snapshotting (needs 1.1.1)
>  - Disk sector discard support (needs 1.0.6)
>  - Hyper-V clock tunables (needs 1.0.0 & 1.1.0)
>
> If you lack those versions, in case of hotunplug, and live volume snapshots
> we just refuse the corresponding API call. With live snapshots we fallback 
> to
> non-live snapshots. For disk discard and hyperv clock we just run with
> degraded functionality. The lack of hyperv clock tunables means Windows
> guests will have unreliable time keeping and are likely to suffer random
> BSOD, which I think is a particularly important issue.
>
> And of course we remove a bunch of conditional logic from Nova which
> simplifies the code paths and removes code paths which rarely get testing
> coverage.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ 
> :|
> |: http://libvirt.org  -o- http://virt-manager.org 
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ 
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc 
> :|


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Roman Prykhodchenko
Michał,

some comments in-line

>> - testrepository and related components are used in OpenStack Infra
>> environment for much more tasks than just running tests
> 
> If by "more tasks" you mean parallel testing, py.test also has a
> possibility to do that by pytest-xdist.

As Monty mentioned, it's not only about testing, it's more about deeper 
integration with OpenStack Infra.


>> - py.test won’t be added to global-requirements so there always be a chance
>> of another dependency hell
> 
> As Igor Kalnitsky said, py.test doesn't have much requirements.
> https://github.com/pytest-dev/pytest/blob/master/setup.py#L58
> It's only argparse, which already is in global requirements without
> any version pinned.

It’s not only about py.test, there is an up-to-date objective of sticking all 
requirements to global-requirements because we have big problems because of 
that every release.

> 
> Cheers,
> Michal
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Ed Leafe
On Oct 7, 2015, at 6:47 AM, Sean Dague  wrote:

> We're starting to make plans for the next cycle. Long term plans are
> getting made for details that would happen in one or two cycles.
> 
> As we already have the locations for the N and O summits I think we
> should do the naming polls now and have names we can use for this
> planning instead of letters. It's pretty minor but it doesn't seem like
> there is any real reason to wait and have everyone come up with working
> names that turn out to be confusing later.

That makes sense, and it also has the advantage that it might give sufficient 
time to weed out undesirable names, such as what happened with the M naming 
process.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators][nova] Nova DB archiving script

2015-10-07 Thread Matt Riedemann



On 10/6/2015 11:49 AM, Mike Dorman wrote:

I posted a patch against one of the Nova DB archiving scripts in the
osops-tools-generic repo a few days ago to support additional tables:

https://review.openstack.org/#/c/229013/2

We’d like a few more folks to review to make sure it looks good.  Please
take a few minutes and take a look.  Thanks!



___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



I've commented in the review, but I'd like to see this actually addressed 
in the nova-manage db archive_deleted_rows command if possible, or at 
least to track what is currently broken about that command so that we 
can attempt to investigate it.


I'm cross-posting this to the dev list for the nova team.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-07 Thread Sofer Athlan-Guyot
Rich Megginson  writes:

> On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
>> Rich Megginson  writes:
>>
>>> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 30/09/15 03:43, Rich Megginson wrote:
>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil  writes:
>>>
 A. The 'composite namevar' approach:

keystone_tenant {'projectX::domainY': ... }
  B. The 'meaningless name' approach:

   keystone_tenant {'myproject': name=>'projectX',
 domain=>'domainY',
 ...}

 Notes:
  - Actually using both combined should work too with the domain
 supposedly overriding the name part of the domain.
  - Please look at [1] this for some background between the two
 approaches:

 The question
 -
 Decide between the two approaches, the one we would like to
 retain for
 puppet-keystone.

 Why it matters?
 ---
 1. Domain names are mandatory in every user, group or project.
 Besides
 the backward compatibility period mentioned earlier, where no 
 domain
 means using the default one.
 2. Long term impact
 3. Both approaches are not completely equivalent, which has different
 consequences on the future usage.
>>> I can't see why they couldn't be equivalent, but I may be missing
>>> something here.
>> I think we could support both.  I don't see it as an either/or
>> situation.
>>
 4. Being consistent
 5. Therefore the community to decide

 Pros/Cons
 --
 A.
>>> I think it's the B: meaningless approach here.
>>>
   Pros
 - Easier names
>>> That's subjective; creating unique and meaningful names doesn't look
>>> easy
>>> to me.
>> The point is that this allows choice - maybe the user already has 
>> some
>> naming scheme, or wants to use a more "natural" meaningful name -
>> rather
>> than being forced into a possibly "awkward" naming scheme with "::"
>>
>>  keystone_user { 'heat domain admin user':
>>name => 'admin',
>>domain => 'HeatDomain',
>>...
>>  }
>>
>>  keystone_user_role {'heat domain admin user@::HeatDomain':
>>roles => ['admin']
>>...
>>  }
>>
   Cons
 - Titles have no meaning!
>> They have meaning to the user, not necessarily to Puppet.
>>
 - Cases where 2 or more resources could exist
>> This seems to be the hardest part - I still cannot figure out how
>> to use
>> "compound" names with Puppet.
>>
 - More difficult to debug
>> More difficult than it is already? :P
>>
 - Titles mismatch when listing the resources 
 (self.instances)

 B.
   Pros
 - Unique titles guaranteed
 - No ambiguity between resource found and their title
   Cons
 - More complicated titles
 My vote
 
 I would love to have the approach A for easier name.
 But I've seen the challenge of maintaining the providers behind the
 curtains and the confusion it creates with name/titles and when
 not sure
 about the domain we're dealing with.
 Also I believe that supporting self.instances consistently with
 meaningful name is saner.
 Therefore I vote B
>>> +1 for B.
>>>
>>> My view is that this should be the advertised way, but the other
>>> method
>>> (meaningless) should be there if the user need it.
>>>
>>> So as far as I'm concerned the two idioms should co-exist.  This
>>> would
>>> mimic what is possible with all puppet resources.  For instance
>>> you can:
>>>
>>>   file { '/tmp/foo.bar': ensure => present }
>>>
>>> and you can
>>>

Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Christian Berendt
On 10/07/2015 02:57 PM, Thierry Carrez wrote:
> ...which if I read it correctly means we could pick N now, but not O. We
> might want to change that (again) first.

Is this list correct?

M = Tokyo
N = Atlanta
O = Barcelona
P = ?

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Running Debian packages on top of Trusty

2015-10-07 Thread Thomas Goirand
On 10/07/2015 12:22 PM, Sofer Athlan-Guyot wrote:
> Hi,
> 
> On 2 Oct 2015, iberezovs...@mirantis.com wrote:
> 
>> Hello,
>>
>> thanks for bringing up this topic, that's what I wanted to discuss on
>> next puppet-openstack irc meeting.
>>
>> So, user case is following: users may want to install Debian packages
>> on Ubuntu host or vice versa,
>> the same problem can probably happen with CentOS, RHEL, Fedora; or
>> users may use non-official
>> package repositories with their own package (service) naming strategy
>> and so on.
>> Current situation in puppet modules is following that package and
>> service names are (let's say)
>> hardcoded in 'params' class (e.g. [0]). But in situation that I've
>> described it won't work.
>> Puppet will try to use Ubuntu names on Ubuntu host and it won't allow
>> to install and work with
>> Debian packages.
>>
>> I've researched puppet modules and found an interesting example which
>> can help to solve
>> this issue. It's implemented in puppetlabs mongodb module:
>> they have 'globals' class [1] that allows to override most part of
>> parameters from 'params' class [2].
>>
>> So, I've decided to rework this soltuion and use it in OpenStack
>> modules. As result I got draft patch
>> for ceilometer module [3]. By default we use parameters from 'params'
>> class, but every parameter
>> can be now overridden using 'globals' class.
>>
>> OpenStack Puppet team, what do you think about this solution?
> 
> Here is another track that you may follow.  For instance, to get access
> to the variables defined at
> https://github.com/openstack/puppet-nova/blob/master/manifests/params.pp#L100-L107
> on an Ubuntu system, you could just do this:
> 
> env FACTER_operatingsystem=Debian puppet agent -t 
> 
> You can override any facts on a system using the environment variable
> "FACTER_"
> 
> For instance on my system:
> 
>   $ facter -p 2>/dev/null | grep osfamily
>   osfamily => RedHat
> 
>   $ env FACTER_osfamily=Ubuntu facter -p 2>/dev/null | grep osfamily  
>   
>   osfamily => Ubuntu
> 
> Wouldn't this method be enough for your purpose?
>
> Check https://puppetlabs.com/blog/facter-part-1-facter-101 for more
> information.

I'm not sure, as I'm not a puppet specialist...

We don't want to overwrite the parameters about the distribution, because
some are really dependent on the distro. For example, the libvirt unix
group is libvirt in Debian, but libvirtd in Ubuntu. This difference has
to stay tied to the OS type, which we absolutely do not want to
overwrite. So we do want variables for the *OpenStack package type*
which is running on top of the operating system.

Will what you wrote above help in this regard?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] conflicting names in python-openstackclient: could we have some exception handling please?

2015-10-07 Thread Ryan Brown

On 10/06/2015 05:15 PM, Thomas Goirand wrote:

Hi,

tl;dr: let's add a exception handling so that python-*client having
conflicting command names isn't a problem anymore, and "openstack help"
always work as much as it can.


Standardizing on "openstack   verb" would likely be 
the best solution for both the immediate problem and for the broader 
"naming stuff" issue.


Sharing a flat namespace is a recipe for pain with a growing number of 
projects. Devs and users are unlikely to use every project, they 
probably won't notice conflicts naturally except in cases like horizon.


If we look over the fence at AWS, you'll note that their nice unified 
CLI that stops the non-uniform `awk` bloodshed is namespaced.


- aws s3 ...
- aws cloudformation ...
- aws ec2 ...

A flat namespace was a mostly-fine idea when all integrated projects 
were expected to put their CLI in-tree in openstackclient. There were 
reviews, and discussions about what noun belonged to whom.


Noun conflict will only get worse: lots of projects will share words 
like stack, domain, user, container, address, and so on.


Namespaces are one honking great idea -- let's do more of those!


Longer version:

This is just a suggestion for contributors to python-openstackclient.

I saw a few packages that had conflicts with the namespace of others
within openstackclient. To the point that typing "openstack help" just
fails. Here's an example:

# openstack help
[ ...]
   project create  Create new project
   project delete  Delete project(s)
   project list   List projects
   project setSet project properties
   project show   Display project details
Could not load EntryPoint.parse('ptr_record_list =
designateclient.v2.cli.reverse:ListFloatingIPCommand')
'ArgumentParser' object has no attribute 'debug'

This first happened to me with saharaclient. Luckily, upgrading to the latest
version fixed it. Then I had the problem with zaqarclient, which I fixed
with a few patches to its setup.cfg. Then now designate, but this time,
patching setup.cfg doesn't seem to cut it (ie: after changing the name
of the command, "openstack help" just fails).

Note: I don't care which project is at fault, this isn't the point here.
The point is that command name conflicts aren't handled (see below)
which is the problem.


+1, this isn't a problem specific to any project, it's systemic with 
flat namespacing.



With Horizon being a large consumer of nearly all python-*client
packages, removing one of them also removes Horizon in my CI, which is
not what I want to (or can) do to debug a tempest problem. End of the
story: since Liberty b3, I have never been able to get "openstack help" to work
correctly in my CI... :(


O.O That's unfortunate.


Which leads me to write this:

Since we have a very large number of projects, with each and every one of
them adding new commands to openstackclient, it would be really nice if we
could have some kind of check to make sure that conflicts are either 1/
not possible or 2/ handled gracefully.


To your (1) we could have a gate job that installs all the clients and 
fails on conflicts.
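Roughly, assuming OSC plugin commands register under setuptools entry point
groups whose names start with "openstack." (an untested, illustrative sketch):

    import collections

    import pkg_resources

    seen = collections.defaultdict(set)
    for dist in pkg_resources.working_set:
        for group, entries in dist.get_entry_map().items():
            if not group.startswith('openstack.'):
                continue
            for name in entries:
                # Entry point names map to command names (underscores
                # become spaces on the command line).
                seen[name.replace('_', ' ')].add(dist.project_name)

    conflicts = {cmd: owners for cmd, owners in seen.items() if len(owners) > 1}
    for cmd, owners in sorted(conflicts.items()):
        print('CONFLICT: "%s" provided by %s' % (cmd, ', '.join(sorted(owners))))

    raise SystemExit(1 if conflicts else 0)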


The downside of doing that without addressing the namespace problem is 
that there will be inconsistent conventions everywhere. Zaqar will have 
"openstack queue " but "openstack message flavor ..." which creates 
the sort of confusion a unified client is supposed to avoid.


A central reservation process for nouns won't really scale, but 
namespacing will because we *already* namespace projects.



Your thoughts?
Cheers,

Thomas Goirand (zigo)

P.S: It wasn't the point of this message, but do we have a fix for
designateclient? It'd be nice to have this fixed before Liberty is out.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] It's time to update the Liberty release notes

2015-10-07 Thread Alexis Lee
Now with committer names.

Matt Riedemann said on Thu, Oct 01, 2015 at 01:27:38PM -0500:
> Here are the commits in liberty that had the UpgradeImpact tag:

% git log --format='%h %<(18,trunc)%cn %s' -i --grep UpgradeImpact \
remotes/origin/stable/kilo..remotes/origin/stable/liberty
0b49934 Zhenyu Zheng   CONF.allow_resize_to_same_host should check only 
once in controller
4a9e14a Sylvain Bauza  Update ComputeNode values with allocation ratios in 
the RT
4a18f7d John Garbutt   api: use v2.1 only in api-paste.ini
11507ee John Garbutt   api: deprecate the concept of extensions in v2.1
9c91781 Eli Qiao   Add missing rules in policy.json
1b8a2e0 Dan Smith  Adding user_id handling to keypair index, show and 
create api calls
0283234 Maxim Nestratovlibvirt: Always default device names at boot
725c54e He Jie Xu  Remove db layer hard-code permission checks for 
quota_class_create/update
1dbb322 He Jie Xu  Remove db layer hard-code permission checks for 
quota_class_get_all_by_name
4d6a50a ShaoHe FengRemove db layer hard-code permission checks for 
floating_ip_dns
55e63f8 Davanum Srinivas.. Allow non-admin to list all tenants based on policy
92807d6 jichenjc   Remove redundant policy check from 
security_group_default_rule
2a01a1b Matt Riedemann Remove hv_type translation shim for powervm
dcd4be6 He Jie Xu  Remove db layer hard-code permission checks for 
quota_get_all_*
06e6056 jichenjc   Remove cell policy check
d03b716 Matt Riedemann libvirt: deprecate libvirt version usage < 0.10.2
5309120 Dan Smith  Update kilo version alias

> Here are the DocImpact changes:

% git log --format='%h %<(18,trunc)%cn %s' -i --grep DocImpact \
remotes/origin/stable/kilo..remotes/origin/stable/liberty
bc6f30d He Jie Xu  Give instance default hostname if hostname is empty
4ee4f9f Nikola Dipanov RT: track evacuation migrations
9095b36 Davanum Srinivas.. Expose keystoneclient's session and auth plugin 
loading parameters
4a9e14a Sylvain Bauza  Update ComputeNode values with allocation ratios in 
the RT
4a18f7d John Garbutt   api: use v2.1 only in api-paste.ini
11507ee John Garbutt   api: deprecate the concept of extensions in v2.1
45d1e3c ghanshyam  Expose VIF net-id attribute in os-virtual-interfaces
9d353e5 Michael Still  libvirt: take account of disks in migration data size
17e5911 Michael Still  Add deprecated_for_removal parm for deprecated 
neutron_ops
95940cc Michael Still  Don't allow instance to overcommit against itself
9cd9e66 Davanum Srinivas   Add rootwrap daemon mode support
c250aca Jay Pipes  Allow compute monitors in different namespaces
434ce2a Marian Horban  Added processing /compute URL
2c0a306 Dan Smith  Limit parallel live migrations in progress
da33ab4 Daniel P. Berrange libvirt: set caps on maximum live migration time
07c7e5c Daniel P. Berrange libvirt: support management of downtime during 
migration
60d08e6 Chuck Carmack  Add documentation for the nova-cells command.
ae5a329 Marian Horban  libvirt:Rsync remote FS driver was added
9a09674 Vladik Romanovsky  libvirt: enable virtio-net multiqueue
8a7b1e8 Chuck Carmack  :Add documentation for the nova-idmapshift command.
bf91d9f Sergey Nikitin Added missed '-' to the rest_api_version_history.rst
1b8a2e0 Dan Smith  Adding user_id handling to keypair index, show and 
create api calls
622a845 Gary KottonMetadata: support proxying loadbalancers
2f7403b Radoslav Gerganov  VMware: map one nova-compute to one VC cluster
ace11d3 Radoslav Gerganov  VMware: add serial port device
ab35779 Radomir Dopieral.. Handle SSL termination proxies for version list
6739df7 Dan Smith  Include DiskFilter in the default list
5e5ef99 Thang Pham VMware: Add support for swap disk
49a572a Ghanshyam Mann Show 'locked' information in server details
4252420 Gary KottonVMware: add resource limits for disk
f1f46a0 Gary KottonVMware: Resource limits for memory
7aec88c Gary KottonVMware: add support for cores per socket
bc3b6cc Maxim Nestratovlibvirt: rename parallels driver to virtuozzo
95f1d47 Mike DormanAdd console allowed origins setting
d0ee3ab Shiina, Hironori   libvirt:Add a driver API to inject an NMI
50c8f93 Radoslav Gerganov  Add MKS console support
abf20cd abhishekkekane Execute _poll_shelved_instances only if 
shelved_offload_time is > 0
973f312 Jay Pipes  Use stevedore for loading monitor extensions
9260ea1 andrewbogott   Include project_id in instance metadata.
d9c696a Dan Smith  Make evacuate leave a record for the source compute 
host to process
6fe967b Gary KottonCells: add instance cell registration utility to 
nova-manage
93a5a67 Sergey Nikitin Removed extra '-' from rest_api_version_history.rst
bad76e6 Gary KottonVMware: convert driver to use nova.objects.ImageMeta
56feb2b 

Re: [openstack-dev] conflicting names in python-openstackclient: could we have some exception handling please?

2015-10-07 Thread Hayes, Graham
On 07/10/15 14:42, Ryan Brown wrote:
> On 10/06/2015 05:15 PM, Thomas Goirand wrote:
>> Hi,
>>
>> tl;dr: let's add a exception handling so that python-*client having
>> conflicting command names isn't a problem anymore, and "openstack help"
>> always work as much as it can.
> 
> Standardizing on "openstack <project> <noun> verb" would likely be 
> the best solution for both the immediate problem and for the broader 
> "naming stuff" issue.
> 
> Sharing a flat namespace is a recipe for pain with a growing number of 
> projects. Devs and users are unlikely to use every project, they 
> probably won't notice conflicts naturally except in cases like horizon.
> 
> If we look over the fence at AWS, you'll note that their nice unified 
> CLI that stops the non-uniform `awk` bloodshed is namespaced.
> 
> - aws s3 ...
> - aws cloudformation ...
> - aws ec2 ...
> 
> A flat namespace was a mostly-fine idea when all integrated projects 
> were expected to put their CLI in-tree in openstackclient. There were 
> reviews, and discussions about what noun belonged to whom.
> 
> Noun conflict will only get worse: lots of projects will share words 
> like stack, domain, user, container, address, and so on.
> 
> Namespaces are one honking great idea -- let's do more of those!
> 
>> Longer version:
>>
>> This is just a suggestion for contributors to python-openstackclient.
>>
>> I saw a few packages that had conflicts with the namespace of others
>> within openstackclient. To the point that typing "openstack help" just
>> fails. Here's an example:
>>
>> # openstack help
>> [ ...]
>>project create  Create new project
>>project delete  Delete project(s)
>>project list   List projects
>>project setSet project properties
>>project show   Display project details
>> Could not load EntryPoint.parse('ptr_record_list =
>> designateclient.v2.cli.reverse:ListFloatingIPCommand')
>> 'ArgumentParser' object has no attribute 'debug'
>>
>> This first happened to me with saharaclient. Luckily, upgrading to the latest
>> version fixed it. Then I had the problem with zaqarclient, which I fixed
>> with a few patches to its setup.cfg. Then now designate, but this time,
>> patching setup.cfg doesn't seem to cut it (ie: after changing the name
>> of the command, "openstack help" just fails).
>>
>> Note: I don't care which project is at fault, this isn't the point here.
>> The point is that command name conflicts aren't handled (see below)
>> which is the problem.
> 
> +1, this isn't a problem specific to any project, it's systemic with 
> flat namespacing.
> 
>> With Horizon being a large consumer of nearly all python-*client
>> packages, removing one of them also removes Horizon in my CI, which is
>> not what I want to (or can) do to debug a tempest problem. End of the
>> story: since Liberty b3, I have never been able to get "openstack help" to work
>> correctly in my CI... :(
> 
> O.O That's unfortunate.
> 
>> Which leads me to write this:
>>
>> Since we have a very large number of projects, with each and every one of
>> them adding new commands to openstackclient, it would be really nice if we
>> could have some kind of check to make sure that conflicts are either 1/
>> not possible or 2/ handled gracefully.
> 
> To your (1) we could have a gate job that installs all the clients and 
> fails on conflicts.
> 
> The downside of doing that without addressing the namespace problem is 
> that there will be inconsistent conventions everywhere. Zaqar will have 
> "openstack queue " but "openstack message flavor ..." which creates 
> the sort of confusion a unified client is supposed to avoid.
> 
> A central reservation process for nouns won't really scale, but 
> namespacing will because we *already* namespace projects.
> 
>> Your thoughts?
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>> P.S: It wasn't the point of this message, but do we have a fix for
>> designateclient? It'd be nice to have this fixed before Liberty is out.

Is there a bug filed for it? I haven't seen this before and it seems to
be working for me locally :/

(I have openstackclient == 1.7.1 & designateclient == 1.5.0)

If we can find the issue we will try and get a fix out.



-- 
Graham Hayes
Software Engineer
DNS as a Service
Advanced Network Services
HP Helion Cloud - Platform Services

GPG Key: 7D28E972


graham.ha...@hpe.com
M +353 87 377 8315

P +353 1 525 1589
Dublin,
Ireland

HP

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] It's time to update the Liberty release notes

2015-10-07 Thread Juvonen, Tomi (Nokia - FI/Espoo)
This also had DocImpact, but the flag was not there.

ff80032 Roman Dobosz   New nova API call to mark nova-compute down

br,
Tomi

-Original Message-
From: EXT Alexis Lee [mailto:lx...@hpe.com] 
Sent: Wednesday, October 07, 2015 4:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] It's time to update the Liberty release 
notes

Now with committer names.

Matt Riedemann said on Thu, Oct 01, 2015 at 01:27:38PM -0500:
> Here are the commits in liberty that had the UpgradeImpact tag:

% git log --format='%h %<(18,trunc)%cn %s' -i --grep UpgradeImpact \
remotes/origin/stable/kilo..remotes/origin/stable/liberty
0b49934 Zhenyu Zheng   CONF.allow_resize_to_same_host should check only 
once in controller
4a9e14a Sylvain Bauza  Update ComputeNode values with allocation ratios in 
the RT
4a18f7d John Garbutt   api: use v2.1 only in api-paste.ini
11507ee John Garbutt   api: deprecate the concept of extensions in v2.1
9c91781 Eli Qiao   Add missing rules in policy.json
1b8a2e0 Dan Smith  Adding user_id handling to keypair index, show and 
create api calls
0283234 Maxim Nestratovlibvirt: Always default device names at boot
725c54e He Jie Xu  Remove db layer hard-code permission checks for 
quota_class_create/update
1dbb322 He Jie Xu  Remove db layer hard-code permission checks for 
quota_class_get_all_by_name
4d6a50a ShaoHe FengRemove db layer hard-code permission checks for 
floating_ip_dns
55e63f8 Davanum Srinivas.. Allow non-admin to list all tenants based on policy
92807d6 jichenjc   Remove redundant policy check from 
security_group_default_rule
2a01a1b Matt Riedemann Remove hv_type translation shim for powervm
dcd4be6 He Jie Xu  Remove db layer hard-code permission checks for 
quota_get_all_*
06e6056 jichenjc   Remove cell policy check
d03b716 Matt Riedemann libvirt: deprecate libvirt version usage < 0.10.2
5309120 Dan Smith  Update kilo version alias

> Here are the DocImpact changes:

% git log --format='%h %<(18,trunc)%cn %s' -i --grep DocImpact \
remotes/origin/stable/kilo..remotes/origin/stable/liberty
bc6f30d He Jie Xu  Give instance default hostname if hostname is empty
4ee4f9f Nikola Dipanov RT: track evacuation migrations
9095b36 Davanum Srinivas.. Expose keystoneclient's session and auth plugin 
loading parameters
4a9e14a Sylvain Bauza  Update ComputeNode values with allocation ratios in 
the RT
4a18f7d John Garbutt   api: use v2.1 only in api-paste.ini
11507ee John Garbutt   api: deprecate the concept of extensions in v2.1
45d1e3c ghanshyam  Expose VIF net-id attribute in os-virtual-interfaces
9d353e5 Michael Still  libvirt: take account of disks in migration data size
17e5911 Michael Still  Add deprecated_for_removal parm for deprecated 
neutron_ops
95940cc Michael Still  Don't allow instance to overcommit against itself
9cd9e66 Davanum Srinivas   Add rootwrap daemon mode support
c250aca Jay Pipes  Allow compute monitors in different namespaces
434ce2a Marian Horban  Added processing /compute URL
2c0a306 Dan Smith  Limit parallel live migrations in progress
da33ab4 Daniel P. Berrange libvirt: set caps on maximum live migration time
07c7e5c Daniel P. Berrange libvirt: support management of downtime during 
migration
60d08e6 Chuck Carmack  Add documentation for the nova-cells command.
ae5a329 Marian Horban  libvirt:Rsync remote FS driver was added
9a09674 Vladik Romanovsky  libvirt: enable virtio-net multiqueue
8a7b1e8 Chuck Carmack  :Add documentation for the nova-idmapshift command.
bf91d9f Sergey Nikitin Added missed '-' to the rest_api_version_history.rst
1b8a2e0 Dan Smith  Adding user_id handling to keypair index, show and 
create api calls
622a845 Gary KottonMetadata: support proxying loadbalancers
2f7403b Radoslav Gerganov  VMware: map one nova-compute to one VC cluster
ace11d3 Radoslav Gerganov  VMware: add serial port device
ab35779 Radomir Dopieral.. Handle SSL termination proxies for version list
6739df7 Dan Smith  Include DiskFilter in the default list
5e5ef99 Thang Pham VMware: Add support for swap disk
49a572a Ghanshyam Mann Show 'locked' information in server details
4252420 Gary KottonVMware: add resource limits for disk
f1f46a0 Gary KottonVMware: Resource limits for memory
7aec88c Gary KottonVMware: add support for cores per socket
bc3b6cc Maxim Nestratovlibvirt: rename parallels driver to virtuozzo
95f1d47 Mike DormanAdd console allowed origins setting
d0ee3ab Shiina, Hironori   libvirt:Add a driver API to inject an NMI
50c8f93 Radoslav Gerganov  Add MKS console support
abf20cd abhishekkekane Execute _poll_shelved_instances only if 
shelved_offload_time is > 0
973f312 Jay Pipes  Use stevedore for loading monitor extensions
9260ea1 andrewbogott 

Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Flavio Percoco

On 07/10/15 15:02 +0200, Christian Berendt wrote:

On 10/07/2015 02:57 PM, Thierry Carrez wrote:

...which if I read it correctly means we could pick N now, but not O. We
might want to change that (again) first.


Is this list correct?

M = Tokyo
N = Atlanta


Austin, Texas.


O = Barcelona
P = ?

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Daniel P. Berrange
On Wed, Oct 07, 2015 at 07:47:31AM -0400, Sean Dague wrote:
> We're starting to make plans for the next cycle. Long term plans are
> getting made for details that would happen in one or two cycles.
> 
> As we already have the locations for the N and O summits I think we
> should do the naming polls now and have names we can use for this
> planning instead of letters. It's pretty minor but it doesn't seem like
> there is any real reason to wait and have everyone come up with working
> names that turn out to be confusing later.

Yep, it would be nice to have names decided further in advance than
we have done in the past. It saves having to refer to N, O
all the time, or having people invent their own temporary names like
Lemming and Muppet...

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Process to clean up the review queue from non-active patches

2015-10-07 Thread Bunting, Niall
> From: Julien Danjou [jul...@danjou.info]
> Sent: 07 October 2015 10:12
> 
> On Wed, Oct 07 2015, Flavio Percoco wrote:
> 
> > I'm not trying to solve the lack of reviews in Liberty by removing
> > patches. What I'd like to do, though, is help to keep around patches
> > that really matter.
> 
> I think that's where you are making a mistake. There are contributors,
> like me or Victor, who have been knocking on the Glance doors for months now,
> sending patches that resolve technical debt rather than adding new
> debt^Wfeatures. Currently, these patches are not seen as important and
> are often dismissed. So I'm pretty sure they are going to expire with
> this new system.
> 
> Imagine if you were merging patches from me, Victor, and people
> like us: we would continue to send many of them, and mid-term, you'd get
> some new blood on your core team.
> 
> What is proposed here is really focused on making life easier for the
> current core team, which is largely inactive.
> 
> Don't read me wrong. I know you and Nikhil are both well-intentioned by
> proposing that. I just think it's going to be worse, because it won't
> improve much and you're going to push new contributors away.

If your patches are sitting there waiting for review, once they get off
the top couple of pages they are likely to become buried. This is
unlikely to benefit anyone.

How would an active user keep their patches in the active/up-for-review
state? By just posting bump comments?

Any type of change Julien is talking about could keep on being dismissed,
and this could end up in some sort of game of keeping your patch above
the non-review line just for it to be ignored. This would definitely be
a bad thing, therefore I think any patches that are picked up as old
should be reviewed before being marked as WIP. That way, if
the patch is moved out of WIP it would be less likely to get stuck in
some sort of loop, since it already has a review.

We also have to be careful about alienating contributors: we should make
sure they know why their work got marked WIP, with a link to a wiki page
explaining the process. However, if this forces a review, they may also
be happy that they eventually got a review on their patch.

My thoughts,
Niall

Edit: resending, as this did not go to the list originally.
Flavio points out that they aim to review patches that are unmarked WIP.
I think that system could work as long as it avoids the problem of patches
potentially becoming stuck in a circle.

It should also be kept in mind that even old reviews can still be relevant,
and the best course of action may not be to simply mark them WIP without
thought.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] naming N and O releases nowish

2015-10-07 Thread Thierry Carrez
Sean Dague wrote:
> We're starting to make plans for the next cycle. Long term plans are
> getting made for details that would happen in one or two cycles.
> 
> As we already have the locations for the N and O summits I think we
> should do the naming polls now and have names we can use for this
> planning instead of letters. It's pretty minor but it doesn't seem like
> there is any real reason to wait and have everyone come up with working
> names that turn out to be confusing later.

That sounds fair. However the release naming process currently states[1]:

"""
The process to choose the name for a release begins once the location of
the design summit of the release to be named is announced and no sooner
than the opening of development of the previous release.
"""

...which if I read it correctly means we could pick N now, but not O. We
might want to change that (again) first.

[1] http://governance.openstack.org/reference/release-naming.html

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-07 Thread Sofer Athlan-Guyot
Rich Megginson  writes:

> On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
>> Rich Megginson  writes:
>>
>>> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 30/09/15 03:43, Rich Megginson wrote:
>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil  writes:
>>>
 A. The 'composite namevar' approach:

    keystone_tenant {'projectX::domainY': ... }

 B. The 'meaningless name' approach:

    keystone_tenant {'myproject': name   => 'projectX',
                                  domain => 'domainY',
                                  ...}

 Notes:
  - Actually, using both combined should work too, with the domain
    parameter supposedly overriding the domain part of the name.
  - Please look at [1] for some background on the two approaches.

 The question
 ------------
 Decide between the two approaches which one we would like to retain
 for puppet-keystone.

 Why it matters?
 ---------------
 1. Domain names are mandatory for every user, group or project,
    aside from the backward-compatibility period mentioned earlier,
    where no domain means using the default one.
 2. Long-term impact.
 3. The two approaches are not completely equivalent, with different
    consequences for future usage.
>>> I can't see why they couldn't be equivalent, but I may be missing
>>> something here.
>> I think we could support both.  I don't see it as an either/or
>> situation.
>>
 4. Being consistent
 5. Therefore it is for the community to decide

 Pros/Cons
 --
 A.
>>> I think it's the B: meaningless approach here.
>>>
   Pros
 - Easier names
>>> That's subjective; creating unique and meaningful names doesn't
>>> look easy to me.
>> The point is that this allows choice - maybe the user already has 
>> some
>> naming scheme, or wants to use a more "natural" meaningful name -
>> rather
>> than being forced into a possibly "awkward" naming scheme with "::"
>>
>>  keystone_user { 'heat domain admin user':
>>name => 'admin',
>>domain => 'HeatDomain',
>>...
>>  }
>>
>>  keystone_user_role {'heat domain admin user@::HeatDomain':
>>roles => ['admin']
>>...
>>  }
>>
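
(For contrast, a rough sketch of how the same two resources might read with
the composite-namevar idiom (A), assuming the 'name::domain' title convention
above also carries into the user portion of the keystone_user_role title;
this is a hypothetical layout, not an agreed format:)

  # Title encodes both the user name and its domain, joined by '::'.
  keystone_user { 'admin::HeatDomain':
    ensure => present,
  }

  # The user portion of the role title carries its domain the same way.
  keystone_user_role { 'admin::HeatDomain@::HeatDomain':
    ensure => present,
    roles  => ['admin'],
  }
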
   Cons
 - Titles have no meaning!
>> They have meaning to the user, not necessarily to Puppet.
>>
 - Cases where 2 or more resources could exist
>> This seems to be the hardest part - I still cannot figure out how
>> to use
>> "compound" names with Puppet.
>>
 - More difficult to debug
>> More difficult than it is already? :P
>>
 - Titles mismatch when listing the resources 
 (self.instances)

 B.
   Pros
 - Unique titles guaranteed
 - No ambiguity between the resources found and their titles
   Cons
 - More complicated titles
 My vote
 
 I would love to have approach A for its easier names.
 But I've seen the challenge of maintaining the providers behind the
 curtains, and the confusion it creates with names/titles when we are
 not sure which domain we're dealing with.
 Also I believe that supporting self.instances consistently with
 meaningful names is saner.
 Therefore I vote B
>>> +1 for B.
>>>
>>> My view is that this should be the advertised way, but the other
>>> method (meaningless name) should be there if the user needs it.
>>>
>>> So as far as I'm concerned the two idioms should co-exist.  This
>>> would
>>> mimic what is possible with all puppet resources.  For instance
>>> you can:
>>>
>>>   file { '/tmp/foo.bar': ensure => present }
>>>
>>> and you can
>>>
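
A minimal sketch of the second idiom presumably being referred to, i.e. a
free-form, meaningful title with the namevar ('path') supplied explicitly as
a parameter; the title and path below are illustrative only:

  # Meaningful title; the file's actual location is given via the 'path'
  # parameter (the namevar), so the title does not have to be the path.
  file { 'foo bar scratch file':
    ensure => present,
    path   => '/tmp/foo.bar',
  }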
