Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-02 Thread Andrew Lazarev
https://review.openstack.org/#/c/93564/ makes no sense without
https://review.openstack.org/#/c/87573

+1 on merging DOC bugs you listed and these 2 EDP bugs

Andrew.


On Mon, Jun 2, 2014 at 11:08 PM, Sergey Lukjanov 
wrote:

> /me proposing to backport:
>
> Docs:
>
> https://review.openstack.org/#/c/87531/ Change IRC channel name to
> #openstack-sahara
> https://review.openstack.org/#/c/96621/ Added validate_edp method to
> Plugin SPI doc
> https://review.openstack.org/#/c/89647/ Updated architecture diagram in
> docs
>
> EDP:
>
> https://review.openstack.org/#/c/93564/
>
> On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov 
> wrote:
> > Hey folks,
> >
> > This Thu, June 5, is the date for the 2014.1.1 release. We already have
> > some backported patches on the stable/icehouse branch, so the question
> > is: do we need to backport some more patches? Please propose them here.
> >
> > 2014.1 - stable/icehouse diff:
> > https://github.com/openstack/sahara/compare/2014.1...stable/icehouse
> >
> > Thanks.
> >
> > --
> > Sincerely yours,
> > Sergey Lukjanov
> > Sahara Technical Lead
> > (OpenStack Data Processing)
> > Principal Software Engineer
> > Mirantis Inc.
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-02 Thread Andrew Lazarev
correction: https://review.openstack.org/#/c/96621/ (Added validate_edp
method to Plugin SPI doc) makes no sense without
https://review.openstack.org/#/c/87573 (Fix running EDP job on transient
cluster), where validate_edp was introduced.

Andrew.


On Mon, Jun 2, 2014 at 11:34 PM, Andrew Lazarev 
wrote:

> https://review.openstack.org/#/c/93564/ makes no sense without
> https://review.openstack.org/#/c/87573
>
> +1 on merging DOC bugs you listed and these 2 EDP bugs
>
> Andrew.
>
>
> On Mon, Jun 2, 2014 at 11:08 PM, Sergey Lukjanov 
> wrote:
>
>> /me proposing to backport:
>>
>> Docs:
>>
>> https://review.openstack.org/#/c/87531/ Change IRC channel name to
>> #openstack-sahara
>> https://review.openstack.org/#/c/96621/ Added validate_edp method to
>> Plugin SPI doc
>> https://review.openstack.org/#/c/89647/ Updated architecture diagram in
>> docs
>>
>> EDP:
>>
>> https://review.openstack.org/#/c/93564/
>>
>> On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov 
>> wrote:
>> > Hey folks,
>> >
>> > This Thu, June 5, is the date for the 2014.1.1 release. We already have
>> > some backported patches on the stable/icehouse branch, so the question
>> > is: do we need to backport some more patches? Please propose them here.
>> >
>> > 2014.1 - stable/icehouse diff:
>> > https://github.com/openstack/sahara/compare/2014.1...stable/icehouse
>> >
>> > Thanks.
>> >
>> > --
>> > Sincerely yours,
>> > Sergey Lukjanov
>> > Sahara Technical Lead
>> > (OpenStack Data Processing)
>> > Principal Software Engineer
>> > Mirantis Inc.
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-02 Thread Matthias Runge
On Tue, Jun 03, 2014 at 07:49:04AM +0200, Radomir Dopieralski wrote:
> On 06/02/2014 05:13 PM, Adam Nelson wrote:
> > I think that you would use the PyPI version anyway:
> > 
> > https://pypi.python.org/pypi/django-angular/0.7.2
> > 
> > That's how most of the other Python dependencies work, even in the
> > distribution packages.
> 
> That is not true. Like all components of OpenStack, Horizon has to be
> packaged at the end of the cycle, with all of its dependencies.
> 

I already packaged python-django-angular for Fedora (and EPEL), it's
just waiting for review [1]. 

From a distro standpoint, every dependency needs to be packaged, and
this is not limited to Horizon dependencies.
On the other hand, we don't break each time someone releases a new
setuptools or keystoneclient to PyPI.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1099473
-- 
Matthias Runge 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] PTL Candidacy Open

2014-06-02 Thread Noorul Islam K M
Anita Kuno  writes:

> On 05/27/2014 06:25 PM, Adrian Otto wrote:
>
>> Team,
>> 
>> If you would like to declare a candidacy for PTL for Solum, you may send an 
>> email with the subject "[Solum] Solum PTL Candidacy" to this mailing list
>> declaring your candidacy. Please respond with candidacy notices no later 
>> than 00:00 UTC on 2014-06-02. 
>> 
>> The following rules apply:
>> 
>> https://wiki.openstack.org/wiki/Solum/Elections
>> 
>> Thanks,
>> 
>> Adrian
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> According to the election guidelines set out in the above wiki url, no
> candidate came forward for the Solum PTL position and the current PTL
> retains his position.
>
> https://review.openstack.org/#/admin/groups/231,members
>
> Congratulations to Adrian Otto!
>

Congratulations Adrian!

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Review blueprint specs on gerrit

2014-06-02 Thread Mike Scherbakov
Thanks Dmitry.
I'm walking through one of the proposed designs:
https://review.openstack.org/#/c/96429/,
http://docs-draft.openstack.org/29/96429/4/check/gate-fuel-specs-docs/4590dac/doc/build/html/specs/5.1/access-control-master-node.html

I've noticed a few things in the process which I'd propose changing
(let's keep this email thread to discussing the process itself, and not the
content of the blueprint, which I'll comment on in a separate thread):

   1. It is unclear who the mandatory reviewers for a blueprint are.
   For example, we have two +1s in the review; can I merge it now?
   Probably not... I think core developers need to decide who must
   review a design, and put their names on the changeset. For any
   feature, we must have the QA lead approving it (whether the QA lead
   reviews it personally, or puts +1 on behalf of another QA team
   expert; in the latter case, that person has to be in the list of
   reviewers).
   2. As we want a more agile-like model with two-week iterations, in
   order to get feedback sooner on where we are in the release cycle and
   to keep some teams working on items that will land in the release
   after the current one, it makes sense to split the work into
   iterations. It is great to see that this design reflects multiple
   stages, which can be treated as iterations. I'm not sure that every
   stage can fit into a two-week cycle, but that's what I would love to
   achieve: a clear scope for every iteration, and the ability to
   re-arrange things after every iteration. See the proposed schedule
   with iterations at
   https://wiki.openstack.org/wiki/Fuel/5.1_Release_Schedule
   3. While I'm OK with anyone +1'ing a review request while it's not
   yet complete, I don't think a mandatory core reviewer should do that
   - I'd suggest that a core reviewer instead -1 it, providing comments
   and marking the areas that are not yet complete. In this particular
   review we have a few sections empty.

Does anyone have an opinion on this?

Thanks,


On Wed, May 28, 2014 at 5:35 AM, Dmitry Pyzhov  wrote:

> Guys,
>
> from now on we should keep all our 5.1 blueprint specs in one place:
> the fuel-specs repo. We do it the same way as nova, so you can use
> their instructions as a guideline.
>
> Once again. All specifications for 5.1 blueprints need to be moved to
> stackforge. Here is an example link:
> https://github.com/stackforge/fuel-specs/blob/master/specs/template.rst.
>
> Jenkins builds every request and adds a link to the HTML docs in the comments.
> For example: https://review.openstack.org/#/c/96145/.
>
> I propose sending feedback on this workflow in this mailing thread.
>
> Also, take a look at the review guidelines.
> They contain some useful information, you know.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-02 Thread Sergey Lukjanov
/me proposing to backport:

Docs:

https://review.openstack.org/#/c/87531/ Change IRC channel name to
#openstack-sahara
https://review.openstack.org/#/c/96621/ Added validate_edp method to
Plugin SPI doc
https://review.openstack.org/#/c/89647/ Updated architecture diagram in docs

EDP:

https://review.openstack.org/#/c/93564/

On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov  wrote:
> Hey folks,
>
> This Thu, June 5, is the date for the 2014.1.1 release. We already have
> some backported patches on the stable/icehouse branch, so the question
> is: do we need to backport some more patches? Please propose them here.
>
> 2014.1 - stable/icehouse diff:
> https://github.com/openstack/sahara/compare/2014.1...stable/icehouse
>
> Thanks.
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] 2014.1.1 preparation

2014-06-02 Thread Sergey Lukjanov
Hey folks,

This Thu, June 5, is the date for the 2014.1.1 release. We already have
some backported patches on the stable/icehouse branch, so the question
is: do we need to backport some more patches? Please propose them here.

2014.1 - stable/icehouse diff:
https://github.com/openstack/sahara/compare/2014.1...stable/icehouse

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] [Infra] Making Ironic vote as a third-party Nova driver

2014-06-02 Thread Joshua Hesketh

Howdy,

Here is my first pass at allowing different voters for different pipelines.
https://review.openstack.org/#/c/97391/2

Cheers,
Josh

Rackspace Australia

On 5/30/14 3:30 AM, Devananda van der Veen wrote:
On Wed, May 28, 2014 at 10:54 PM, Joshua Hesketh wrote:


On 5/29/14 8:52 AM, James E. Blair wrote:

Devananda van der Veen writes:

Hi all!

This is a follow-up to several summit discussions on
how-do-we-deprecate-baremetal, a summary of the plan
forward, a call to
raise awareness of the project's status, and hopefully
gain some interest
from folks on nova-core to help with spec and code reviews.

The nova.virt.ironic driver lives in Ironic's git tree
today [1]. We're
cleaning it up and submitting it to Nova again this cycle.
I've posted
specs [2] outlining the design and planned upgrade
process. Earlier today,
we enabled voting in Ironic's check and gate queues for the
tempest-dsvm-virtual-ironic job. This runs a tempest
scenario test [3]
against devstack, exercising Nova with the Ironic driver
to PXE boot a
virtual machine. It has been running for a few months on
Ironic, and has
been stable for more than a month. However, because Ironic
is not
integrated, we also can't vote in check/gate queues on
integrated projects
(like Nova). We can - and do - report the test result in a
non-voting way,
though that's easy to miss, since it looks like every
other non-voting test.

At the summit [4], it was suggested that we make this job
report as though
it were a third-party CI test for a Nova driver. This
would be removed at
the time that Ironic graduates and the job is allowed to
vote in the gate.
Until that time, I'm happy to have the nova.virt.ironic
driver reporting as
a third-party driver (even though it's not) simply to help
raise awareness
(third-party CI jobs are watched more closely than
non-voting jobs) and
decrease the likelihood that Nova developers will
inadvertently break
Ironic's gate.

Given that there's a concrete plan forward, why am I
sending this email to
all three teams? A few reasons:
- document the plan that we discussed
- many people from infra and nova were not present during
the discussion
and may not be aware of the details
- I may have gotten something wrong (it was a long week)
- and mostly because I don't technically know how to make
an upstream job
report as though it's a third-party job, and am hoping
someone wants to
volunteer to help figure that out

I think it's a reasonable plan.  To elaborate a bit, I think we
identified three categories of jobs that we run:

a) jobs that are voting
b) jobs that are non-voting because they are advisory
c) jobs that are non-voting for policy reasons but we feel fairly
strongly about

There's a pretty subtle distinction between b and c.  Ideally,
there
shouldn't be any.  We've tried to minimize the number of
non-voting jobs
to make sure that people don't ignore them.  Nonetheless, it
seems that
a large enough number of people still do that non-voting jobs are
considered ineffective in Nova.  I think it's worth noting the
potential
danger of de-emphasizing the actual results.  It may make other
non-voting jobs even less effective than they already are.

The intent is to make the jobs described by (c) into voting
jobs, but in
a way that they can still be overridden if need be.  The aim
is to help
new (eg, incubated) projects join the integrated gate in a way
that lets
them prove they are sufficiently mature to do so without
impacting the
currently integrated projects.  I believe we're currently
thinking that
point is after their integration approval.  If we are
comfortable with
incubated projects being able to block the integrated gate
earlier, we
could simply make the non-voting jobs voting instead.

Back to the proposal at hand.  I think we should call the
kinds of jobs
described in (c) as "non-binding".

The best way to do that is to register a second user with
Gerrit for
Zuul to use, and have 

Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-02 Thread Radomir Dopieralski
On 05/31/2014 11:13 PM, Jeremy Stanley wrote:
> On 2014-05-29 20:55:01 + (+), Lyle, David wrote:
> [...]
>> There are several more xstatic packages that horizon will pull in that are
>> maintained outside openstack. The packages added are only those that did
>> not have existing xstatic packages. These packages will be updated very
>> sparingly, only when updating say bootstrap or jquery versions.
> [...]
> 
> I'll admit that my Web development expertise is probably almost 20
> years stale at this point, so forgive me if this is a silly
> question: what is the reasoning against working with the upstreams
> who do not yet distribute needed Javascript library packages to help
> them participate in the distribution channels you need?

There is nothing stopping us from doing that. On the other hand,
I don't expect much success with that. Those are JavaScript and/or
style/resource libraries, and their authors usually don't know or
care about Python packaging. Some of those libraries don't even have
proper releases or versioning! We can reach out and ask them to
include the packages in their releases (where they have them), but
we need the packages now -- we are already using those libraries and
we need to clean up how we bundle them.
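
(For context, an xstatic package is little more than the unmodified
upstream files plus a small metadata module. The sketch below shows the
rough shape of such a module; treat the exact field names as
illustrative rather than as the precise xstatic convention.)

    # xstatic/pkg/jquery/__init__.py -- shape is illustrative
    import os

    NAME = 'jquery'        # upstream project name, unchanged
    VERSION = '1.10.2'     # upstream version, unchanged
    BUILD = '1'            # packaging-only revision
    PACKAGE_VERSION = VERSION + '.' + BUILD

    # the upstream files ship verbatim under data/; nothing is forked
    BASE_DIR = os.path.join(os.path.dirname(__file__), 'data')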

> This strikes
> me as similar to forking a Python library which doesn't publish to
> PyPI, just so you can publish it to PyPI.

There is no fork, as we are not modifying the source code.

> When some of these
> dependencies begin to publish xstatic packages themselves, do the
> equivalent repositories in Gerrit get decommissioned at that point?

Yes, we will hand over the keys to the pypi entries, and get rid of the
repositories on our side.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] PTL elections

2014-06-02 Thread Sergey Lukjanov
Only one candidate has been proposed for this position, so, due to the
election process we have the MagnetoDB PTL for the Juno cycle.

Congratulations to Ilya Sviridov!

Elections doc updated -
https://wiki.openstack.org/wiki/MagnetoDB/PTL_Elections_Juno

Thanks.

On Mon, May 26, 2014 at 4:16 PM, Sergey Lukjanov  wrote:
> Hi folks,
>
> due to the requirement to have PTL for the program, we're running
> elections for the MagnetoDB PTL for Juno cycle. Schedule and policies
> are fully aligned with official OpenStack PTLs elections.
>
> You can find more info in official Juno elections wiki page [0] and
> the same page for MagnetoDB elections [1], additionally some more info
> in official nominations opening email [2].
>
> Timeline:
>
> till 05:59 UTC May 30, 2014: Open candidacy to MagnetoDB PTL positions
> May 30, 2014 - 1300 UTC June 6, 2014: PTL elections
>
> To announce your candidacy please start a new openstack-dev at
> lists.openstack.org mailing list thread with the following subject:
> "[MagnetoDB] PTL Candidacy".
>
> [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
> [1] https://wiki.openstack.org/wiki/MagnetoDB/PTL_Elections_Juno
> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html
>
> Thank you.
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-02 Thread Radomir Dopieralski
On 06/02/2014 05:13 PM, Adam Nelson wrote:
> I think that you would use the PyPI version anyway:
> 
> https://pypi.python.org/pypi/django-angular/0.7.2
> 
> That's how most of the other Python dependencies work, even in the
> distribution packages.

That is not true. Like all components of OpenStack, Horizon has to be
packaged at the end of the cycle, with all of its dependencies.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] request for review

2014-06-02 Thread YAMAMOTO Takashi
Can anyone please review this small fix for ofagent?
https://review.openstack.org/#/c/88224/
It's unfortunate that a simple fix like this takes months to be merged.

YAMAMOTO Takashi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute rpc version

2014-06-02 Thread abhishek jain
Hi Russell

Below are the details...

controller node...

nova --version
2.17.0.122

nova-compute  --version
2014.2

compute node...

nova --version
2.17.0.122

nova-compute --version
2013.2.4

Can you help me understand what I need to change in order to achieve the
desired functionality?



Thanks


On Tue, Jun 3, 2014 at 2:16 AM, Russell Bryant  wrote:

> On 06/02/2014 08:20 AM, abhishek jain wrote:
> > Hi
> >
> > I'm getting the following error in nova-compute logs when trying to boot a VM
> > from the controller node onto the compute node ...
> >
> >  Specified RPC version, 3.23, not supported
> >
> > Please help regarding this.
>
> It sounds like you're using an older nova-compute with newer controller
> services (without the configuration to allow a live upgrade).  Check the
> versions of Nova services you have running.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-06-02 Thread Craig Vyvial
Sent that a little too quickly...

This is to be more inline with the other services we have ie. taskmanager
[1] that you can override if you see fit. We decided this was an oversight
from the original creation and should be added.

[1]
https://github.com/openstack/trove/blob/master/trove/cmd/taskmanager.py#L26
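
(For illustration, a minimal sketch of the config-driven loading being
discussed, assuming an oslo.config option plus the oslo-incubator
importutils helper; the option name here is hypothetical, not Trove's
actual code.)

    from oslo.config import cfg
    from trove.openstack.common import importutils

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('conductor_manager',
                   default='trove.conductor.manager.Manager',
                   help='Class implementing the conductor manager.'),
    ])

    def load_conductor_manager():
        # Import whichever manager class the operator configured --
        # this is the "pluggable" behavior under discussion.
        return importutils.import_object(CONF.conductor_manager)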

-Craig


On Mon, Jun 2, 2014 at 10:27 PM, Craig Vyvial  wrote:

> This is to be more inline with the other services we have ie. taskmanager
> [1] that you can override if you see fit.
>
>
>
> On Mon, Jun 2, 2014 at 3:55 PM, Russell Bryant  wrote:
>
>> On 06/02/2014 09:23 AM, boden wrote:
>> > On 4/28/2014 2:58 PM, Dan Smith wrote:
>> >>> I'd like to propose the ability to support a pluggable trove conductor
>> >>> manager. Currently the trove conductor manager is hard-coded [1][2]
>> and
>> >>> thus is always 'trove.conductor.manager.Manager'. I'd like to see this
>> >>> conductor manager class be pluggable like nova does [3].
>> >>
>> >> Note that most of us don't like this and we're generally trying to get
>> >> rid of these sorts of things. I actually didn't realize that
>> >> conductor.manager was exposed in the CONF, and was probably just done
>> to
>> >> mirror other similar settings.
>> >>
>> >> Making arbitrary classes pluggable like this without a structured and
>> >> stable API is really just asking for trouble when people think it's a
>> >> pluggable interface.
>> >>
>> >> So, you might not want to use "because nova does it" as a reason to add
>> >> it to trove like this :)
>> >>
>> >> --Dan
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >
>> > Thanks for the input Dan.
>> >
>> > Is the real concern here that the conductor API(s) and manager are
>> > coupled based on version?
>>
>> FWIW, I really don't like this either.
>>
>> I snipped some implementation detail proposals here.  I missed why you
>> want to do this in the first place.  This seems far from an obvious plug
>> point to me.
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-06-02 Thread Craig Vyvial
This is to be more inline with the other services we have ie. taskmanager
[1] that you can override if you see fit.



On Mon, Jun 2, 2014 at 3:55 PM, Russell Bryant  wrote:

> On 06/02/2014 09:23 AM, boden wrote:
> > On 4/28/2014 2:58 PM, Dan Smith wrote:
> >>> I'd like to propose the ability to support a pluggable trove conductor
> >>> manager. Currently the trove conductor manager is hard-coded [1][2] and
> >>> thus is always 'trove.conductor.manager.Manager'. I'd like to see this
> >>> conductor manager class be pluggable like nova does [3].
> >>
> >> Note that most of us don't like this and we're generally trying to get
> >> rid of these sorts of things. I actually didn't realize that
> >> conductor.manager was exposed in the CONF, and was probably just done to
> >> mirror other similar settings.
> >>
> >> Making arbitrary classes pluggable like this without a structured and
> >> stable API is really just asking for trouble when people think it's a
> >> pluggable interface.
> >>
> >> So, you might not want to use "because nova does it" as a reason to add
> >> it to trove like this :)
> >>
> >> --Dan
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >
> > Thanks for the input Dan.
> >
> > Is the real concern here that the conductor API(s) and manager are
> > coupled based on version?
>
> FWIW, I really don't like this either.
>
> I snipped some implementation detail proposals here.  I missed why you
> want to do this in the first place.  This seems far from an obvious plug
> point to me.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler sub-group meeting agenda 6/3

2014-06-02 Thread Dugger, Donald D
1) Forklift (tasks & status)
2) No-db scheduler discussion (BP ref - https://review.openstack.org/#/c/92128/)
3) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] HTTP REST API error status code for out of quota errors

2014-06-02 Thread Dolph Mathews
On Mon, Jun 2, 2014 at 7:21 PM, Christopher Yeoh  wrote:

> Hi,
>
> There have been a few patches like this floating around recently which fix
> the incorrect use of 413 as the HTTP error code when a request fails
> because the requester is out of quota.
>
> https://review.openstack.org/#/c/95671/
>
> Now 413 is definitely wrong, but sometimes the change is made to 400 or
> 403. Having had a look around at different REST APIs (non-OpenStack) out
> there, 403 does seem to be the most popular choice.
>
>
400 is wrong as well: "The request could not be understood by the server
due to malformed syntax." In this case the syntax is fine; it's the
requested action that the server is rejecting.

+1 for 403


> It's not totally clear from the RFC (http://www.ietf.org/rfc/rfc2616.txt)
> what it should be, but 403 seems to me the most appropriate, and
> whilst it's not a big thing I think we should at least be consistent about
> this across all our OpenStack REST APIs.
>
> Anyone have any objections to this?
>
> Regards,
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Cleaning up our bug list

2014-06-02 Thread Devananda van der Veen
Hi all!

I'd like to draw attention to our list of open bugs:
  https://bugs.launchpad.net/ironic/+bugs

And ask that, if you have a bug assigned to you, please ensure you're
actively working on it. If you're not, please un-assign yourself from the
bug so it becomes visible / available to others. You can see your
personalized list by adding your username to this link:

https://bugs.launchpad.net/ironic/+bugs?search=Search&field.assignee=YOURNAMEHERE

Dmitry has volunteered to help clean up our bugs list, and will start
pinging people // untargeting bugs soon.

Thanks!
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] HTTP REST API error status code for out of quota errors

2014-06-02 Thread Ken'ichi Ohmichi
2014-06-03 9:21 GMT+09:00 Christopher Yeoh :
> Hi,
>
> There have been a few patches like this floating around recently which fix the
> incorrect use of 413 as the HTTP error code when a request fails because
> the requester is out of quota.
>
> https://review.openstack.org/#/c/95671/
>
> Now 413 is definitely wrong, but sometimes the change is made to 400 or 403.
> Having had a look around at different REST APIs (non-OpenStack) out there,
> 403 does seem to be the most popular choice.
>
> It's not totally clear from the RFC (http://www.ietf.org/rfc/rfc2616.txt)
> what it should be, but 403 seems to me the most appropriate, and
> whilst it's not a big thing I think we should at least be consistent about
> this across all our OpenStack REST APIs.

+1 for 403.
The 403 wording,
  "The server understood the request, but is refusing to fulfill it.
  Authorization will not help and the request SHOULD NOT be repeated.",
fits the out-of-quota case well.
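
A rough sketch of the consistent behavior at the WSGI layer (the webob
exception class is real; the quota check itself is a stand-in):

    import webob.exc

    def reserve(requested, quota):
        # The request is well-formed (so not 400) and not oversized on
        # the wire (so not 413); the server simply refuses to fulfill
        # it, which is exactly what 403 means.
        if requested > quota:
            raise webob.exc.HTTPForbidden(
                explanation='Quota exceeded: requested %d, limit %d'
                            % (requested, quota))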


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Short term scaling strategies for large Heat stacks

2014-06-02 Thread Clint Byrum
Excerpts from Steve Baker's message of 2014-06-02 14:37:25 -0700:
> On 31/05/14 07:01, Zane Bitter wrote:
> > On 29/05/14 19:52, Clint Byrum wrote:
> >
> >> update-failure-recovery
> >> ===
> >>
> >> This is a blueprint I believe Zane is working on to land in Juno. It
> >> will
> >> allow us to retry a failed create or update action. Combined with the
> >> separate controller/compute node strategy, this may be our best option,
> >> but it is unclear whether that code will be available soon or not. The
> >> chunking is definitely required, because with 500 compute nodes, if
> >> node #250 fails, the remaining 249 nodes that are IN_PROGRESS will be
> >> cancelled, which makes the impact of a transient failure quite extreme.
> >> Also without chunking, we'll suffer from some of the performance
> >> problems we've seen where a single engine process will have to do all of
> >> the work to bring up a stack.
> >>
> >> Pros: * Uses blessed strategy
> >>
> >> Cons: * Implementation is not complete
> >>   * Still suffers from heavy impact of failure
> >>   * Requires chunking to be feasible
> >
> > I've already started working on this and I'm expecting to have this
> > ready some time between the j-1 and j-2 milestones.
> >
> > I think these two strategies combined could probably get you a long
> > way in the short term, though obviously they are not a replacement for
> > the convergence strategy in the long term.
> >
> >
> > BTW You missed off another strategy that we have discussed in the
> > past, and which I think Steve Baker might(?) be working on: retrying
> > failed calls at the client level.
> >
> As part of the client-plugins blueprint I'm planning on implementing
> retry policies on API calls. So when currently we call:
> self.nova().servers.create(**kwargs)
> 
> This will soon be:
> self.client().servers.create(**kwargs)
> 
> And with a retry policy (assuming the default unique-ish server name is
> used):
> self.client_plugin().call_with_retry_policy('cleanup_yr_mess_and_try_again',
> self.client().servers.create, **kwargs)
> 
> This should be suitable for handling transient errors on API calls such
> as 500s, response timeouts or token expiration. It shouldn't be used for
> resources which later come up in an ERROR state; convergence or
> update-failure-recovery would be better for that.
> 

Steve this is fantastic work and sorely needed. Thank you for working on
it.

Unfortunately, machines in the ERROR state are the majority of our problem.
IPMI and PXE can be unreliable in some environments, and sometimes machines
are broken in subtle ways. Also, the odd bug in Neutron, Nova, or Ironic
will cause this.

Convergence is not available to us for the short term, and really
update-failure-recovery is some time off too, so we need more solutions
unfortunately.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] HTTP REST API error status code for out of quota errors

2014-06-02 Thread Christopher Yeoh
Hi,

There have been a few patches like this floating around recently which fix the
incorrect use of 413 as the HTTP error code when a request fails because
the requester is out of quota.

https://review.openstack.org/#/c/95671/

Now 413 is definitely wrong, but sometimes the change is made to 400 or
403. Having had a look around at different REST APIs (non-OpenStack) out
there, 403 does seem to be the most popular choice.

It's not totally clear from the RFC (http://www.ietf.org/rfc/rfc2616.txt)
what it should be, but 403 seems to me the most appropriate, and whilst
it's not a big thing I think we should at least be consistent about
this across all our OpenStack REST APIs.

Anyone have any objections to this?

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-02 Thread Amir Sadoughi
Hi all,

In the Neutron weekly meeting today[0], we discussed the ovs-firewall-driver
blueprint[1]. Moving forward, OVS features today will give us "80%" of the
iptables security groups behavior. Specifically, OVS lacks connection tracking,
so it won't have a RELATED feature or stateful rules for non-TCP flows. (OVS
connection tracking is currently under development, to be released by 2015[2].)
To make the "20%" difference more explicit to the operator and end user, we
have proposed a feature configuration that provides security group rules API
validation based on connection-tracking ability, for example.
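
(As a sketch of what that validation could look like -- names here are
hypothetical, not the blueprint's code -- the idea is simply to reject
rules the stateless OVS driver cannot honor:)

    def validate_rule_for_ovs(rule, conntrack_available=False):
        # Without connection tracking there is no RELATED matching and
        # no stateful handling of non-TCP flows, so make the gap
        # explicit by refusing such rules at the API layer.
        protocol = rule.get('protocol')
        if not conntrack_available and protocol not in ('tcp', None):
            raise ValueError('OVS firewall driver cannot enforce '
                             'stateful rules for protocol %s' % protocol)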

Several ideas floated up during the chat today; I wanted to expand the
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as "experimental"?
- performance improvements under a new OVS firewall driver are untested so far
(vthapar is working on this)
- an incomplete implementation will cause confusion and an educational burden
- debugging OVS is new to users compared to debugging the old iptables driver
- waiting for upstream OVS to implement it (OpenStack K- or even L-cycle)

In my humble opinion, merging the blueprint for Juno will provide us with a viable,
more performant security groups implementation than what we have available 
today.

Amir


[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Bug Day on 6/4

2014-06-02 Thread Tracy Jones

Hi Folks - nova is going to have a bug day on Wednesday, 6/4.  During that day
we are asking people to take a break from feature work and help fix and/or
review bugs for the day. We hang out on #openstack-bugday.

We can admire our progress at
http://status.openstack.org/bugday/


Please help out - here are some handy links:

  *   All Nova Bugs: https://bugs.launchpad.net/nova
  *   Bugs that have gone stale:
https://bugs.launchpad.net/nova/+bugs?orderby=date_last_updated&field.status%3Alist=INPROGRESS&assignee_option=any
  *   Untriaged Bugs
  *   Critical Bugs
  *   Bugs without owners
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Create keystoneclient with existing token?

2014-06-02 Thread Steven Hardy
Hi all,

Looking for some help with $subject:

What I'm trying to do is take an existing token (a trust-scoped token,
which cannot be used to request another token), and initialize the auth_ref
correctly in a keystoneclient object.

The problem is keystoneclient always requests a new token, via the
TokenMethod auth plugin, and re-requesting a new token won't work for
trust-scoped tokens.

However, the API is already successfully validating the token, and adding
all the token details to keystone.token_info in the request environment, so
can I just store the token_info in the context, and use that to initialize
the auth_ref, without an additional call to keystone?

I've tried the latter approach, so far unsuccessfully.  The problem seems
to be that the token_info doesn't match the expected format for the kwargs
when validating the auth_ref in AccessInfo.factory.
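
For reference, the shape of what I'm attempting (AccessInfo.factory is
real, but whether these arguments line up with the stored token_info is
exactly the open question):

    from keystoneclient import access

    def auth_ref_from_env(environ):
        # auth_token middleware stores the validated token document
        # under 'keystone.token_info' in the request environment.
        token_info = environ['keystone.token_info']
        token_id = environ['HTTP_X_AUTH_TOKEN']
        # factory() picks the v2/v3 AccessInfo implementation based on
        # the body's top-level key ('access' vs. 'token').
        return access.AccessInfo.factory(body=token_info,
                                         auth_token=token_id)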

Are there any examples of reusing token_info from auth_token in this way?

Any assistance would be much appreciated!

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Short term scaling strategies for large Heat stacks

2014-06-02 Thread Mike Spreitzer
Steve Baker  wrote on 06/02/2014 05:37:25 PM:

> > BTW You missed off another strategy that we have discussed in the
> > past, and which I think Steve Baker might(?) be working on: retrying
> > failed calls at the client level.
> >
> As part of the client-plugins blueprint I'm planning on implementing
> retry policies on API calls. So when currently we call:
> self.nova().servers.create(**kwargs)
> 
> This will soon be:
> self.client().servers.create(**kwargs)
> 
> And with a retry policy (assuming the default unique-ish server name is
> used):
> self.client_plugin().call_with_retry_policy('cleanup_yr_mess_and_try_again',
> self.client().servers.create, **kwargs)
> 
> This should be suitable for handling transient errors on API calls such
> as 500s, response timeouts or token expiration. It shouldn't be used for
> resources which later come up in an ERROR state; convergence or
> update-failure-recovery would be better for that.

Response timeouts can be problematic here for non-idempotent operations, 
right?

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Short term scaling strategies for large Heat stacks

2014-06-02 Thread Steve Baker
On 31/05/14 07:01, Zane Bitter wrote:
> On 29/05/14 19:52, Clint Byrum wrote:
>
>> update-failure-recovery
>> ===
>>
>> This is a blueprint I believe Zane is working on to land in Juno. It
>> will
>> allow us to retry a failed create or update action. Combined with the
>> separate controller/compute node strategy, this may be our best option,
>> but it is unclear whether that code will be available soon or not. The
>> chunking is definitely required, because with 500 compute nodes, if
>> node #250 fails, the remaining 249 nodes that are IN_PROGRESS will be
>> cancelled, which makes the impact of a transient failure quite extreme.
>> Also without chunking, we'll suffer from some of the performance
>> problems we've seen where a single engine process will have to do all of
>> the work to bring up a stack.
>>
>> Pros: * Uses blessed strategy
>>
>> Cons: * Implementation is not complete
>>   * Still suffers from heavy impact of failure
>>   * Requires chunking to be feasible
>
> I've already started working on this and I'm expecting to have this
> ready some time between the j-1 and j-2 milestones.
>
> I think these two strategies combined could probably get you a long
> way in the short term, though obviously they are not a replacement for
> the convergence strategy in the long term.
>
>
> BTW You missed off another strategy that we have discussed in the
> past, and which I think Steve Baker might(?) be working on: retrying
> failed calls at the client level.
>
As part of the client-plugins blueprint I'm planning on implementing
retry policies on API calls. So when currently we call:
self.nova().servers.create(**kwargs)

This will soon be:
self.client().servers.create(**kwargs)

And with a retry policy (assuming the default unique-ish server name is
used):
self.client_plugin().call_with_retry_policy('cleanup_yr_mess_and_try_again',
self.client().servers.create, **kwargs)

This should be suitable for handling transient errors on API calls such
as 500s, response timeouts or token expiration. It shouldn't be used for
resources which later come up in an ERROR state; convergence or
update-failure-recovery would be better for that.

These policies can start out simple and hard-coded, but there is
potential for different policies to be specified in heat.conf to cater
for the specific failure modes of a given cloud.

Expected to be ready j-1 -> j-2
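
(For illustration only -- this is not the client-plugins code itself --
the policy idea reduces to something like:)

    import functools
    import time

    def call_with_retry(call, cleanup=None, retries=3, delay=2,
                        transient=(IOError,)):
        # `call` is a no-argument callable; wrap real calls with
        # functools.partial. `transient` stands in for whatever the
        # policy treats as retryable (500s, timeouts, token expiry).
        for attempt in range(retries + 1):
            try:
                return call()
            except transient:
                if attempt == retries:
                    raise
                if cleanup is not None:
                    cleanup()  # e.g. remove the half-created server
                time.sleep(delay)

    # usage sketch:
    # call_with_retry(
    #     functools.partial(self.client().servers.create, **kwargs),
    #     cleanup=cleanup_yr_mess)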

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] PTL Candidacy Open

2014-06-02 Thread Anita Kuno
On 05/27/2014 06:25 PM, Adrian Otto wrote:
> Team,
> 
> If you would like to declare a candidacy for PTL for Solum, you may send an 
> email with the subject "[Solum] Solum PTL Candidacy" to this mailing list
> declaring your candidacy. Please respond with candidacy notices no later than 
> 00:00 UTC on 2014-06-02. 
> 
> The following rules apply:
> 
> https://wiki.openstack.org/wiki/Solum/Elections
> 
> Thanks,
> 
> Adrian
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
According to the election guidelines set out in the above wiki url, no
candidate came forward for the Solum PTL position and the current PTL
retains his position.

https://review.openstack.org/#/admin/groups/231,members

Congratulations to Adrian Otto!

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Your suggestions in the BP

2014-06-02 Thread Stephen Balukoff
Hi y'all!

Comments inline:


On Sun, Jun 1, 2014 at 1:09 PM, Brandon Logan 
wrote:

> Hi Eugene and Sam,
>
> On Sun, 2014-06-01 at 12:07 +0400, Eugene Nikanorov wrote:
> > Hi Sam,
> >
> > Eugene, please comment on the migration process bellow.
> >
> > I think that closing down the "status" handling should be done
> > in phase 1.
> > I don't mind. If you're talking about provisioning status then such
> > status (if we're still going to maintain it per each kind of object)
> > goes to various associations: loadbalancer-listener, or
> > loadbalancer-listener-pool, etc.
> > Not a big deal of course, it was just my initial intent to limit phase
> > #1 as much as possible.
>
> I was hoping to limit it as well to keep it focused on just the
> refactoring portion.  I didn't want the scope to include all new
> features and changes under the sun.  It also makes reviewing much
> simpler.
>

I'm OK with limiting scope here so long as we implement something
that is effectively "forward compatible" with whatever we will probably
want to do in the future. (So, having a discussion around this is probably
worthwhile.)  To phrase this another way, what consumes the 'status'
information, and what do they really want to know?


>
> >
> >
> > Failing to do so will create tests and other dependent
> > workflows that assume the "current" status field, which will
> > add technical debt to this new code.
> > I'd say it would depend on the strategy options you're suggestion
> > below.
> > As far as bw compatibility is concerned (if it's concerned at all), we
> > have to support existing status field, so that would not be any
> > additional debt.
> >
> >
> > Migration and co-existence:
> > I think that it would be better to have the new object model
> > and API done in a way that does not "break" existing code, and
> > then switch the "old" api to redirect to the "new" api.
> > Basically this means creating another lbaas plugin that exposes the
> > existing lbaas api extension.
> > I'm not sure how this can be done considering the difference between
> > new proposed api and existing api.
> >
> > This might be done in one of the two ways bellow:
> > 1. Rename all objects in the "new" api so you have a clear
> > demarcation point. This might be sufficient.
> > I'm not sure how this could be done, can you explain?
> > I actually would consider changing the prefix to /v3/ to not to deal
> > with any renamings, that would require some minor refactoring on
> > extension framework side.
> His suggestion in the BP was to rename pool, healthmonitor, and member
> to group, healthcheck, and node respectively.  Since loadbalancer and
> listener are already new those don't have to be renamed.  This way the
> old object models and db tables remain intact and the old API can still
> function as before.
> >
> > 2. Copy the existing LBaaS "extension" and create a
> > "new-lbaas" extension with new object names, then create a
> > "new old lbaas" extension that has the "old API" but redirect
> > to the "new API"
> > I also don't fully understand this, please explain.
> I need more clarification on this as well.  Sounds like you're saying to
> create a lbaas extension v2 with the new object names.  Then copy the
> existing lbaas extension and change it to redirect to the v2 extension.
> If that is the case, why create a "new old lbaas" rather than just change
> the "old lbaas" extension?
> >
> >
> >
> > Doing 2, can allow "co-existence" of old code with old drivers
> > until new code with new drivers can take its place.
> > New extension + new plugin, is that what you are suggesting? To me it
> > would be the cleanest and the most simple way to execute the
> > transition, but... i'm not sure it was a consensus on design session.
>
> I agree this would be the cleanest but I was under the impression this
> was not an accepted way to go.  I'd honestly prefer a v2 extension and
> v2 plugin.  This would require different names for the object model and
> db tables since you don't want the old api and new api sharing the same
> tables.  We can either append v2 to the names or rename them entirely.
> Sam suggested group for pool, healthcheck for healthmonitor, and node
> for member.  I'd prefer nodepool for pool myself.
>

nodepool isn't a bad name, eh. To throw this into the pot, too: How about
'backend' for the renamed pool (or does that imply too much)?


>
> Either way, I need to know if we can go with this route or not.  I've
> started on writing the code a bit but relationship conversations has
> stalled that some.  I think if we can go with this route it will make
> the code much more clear.
>
> Thanks,
> Brandon
>
>
Yep, knowing this is going to be key to where we need to put engineering
time into this, eh.

Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
__

[openstack-dev] [Nova][Ceilometer] Need some advices for blueprint

2014-06-02 Thread Felix Lee

Dear all,
Due to our business model, and because we have various hardware platforms
supporting IaaS, we need more information from hypervisors, such as the CPU
model name (more specifically, it would look like "Intel E5-2630L") and the
memory frequency, in order to give our customers an objective value that is
as close to practical CPU power as possible. But it seems there is no
plugin available in ceilometer-compute-agent for collecting such
information, so I am about to submit a blueprint for this and implement it.
I noticed, though, that ceilometer-compute-agent relies heavily on
nova-compute (e.g. in order to get CPU information, we have to enable
compute_monitors = ComputeDriverCPUMonitor in Nova), so I am wondering: if
I want to submit a blueprint for adding such capabilities to
ceilometer-compute-agent, which project should I submit it to? Ceilometer
or Nova? Or both?
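
(Incidentally, the host CPU model is already visible through libvirt's
capabilities XML, so a compute-side pollster could read it roughly as
below -- a sketch assuming libvirt-python; memory frequency would still
need another source such as dmidecode:)

    import libvirt
    from xml.etree import ElementTree

    def get_host_cpu_model(uri='qemu:///system'):
        conn = libvirt.openReadOnly(uri)
        try:
            caps = ElementTree.fromstring(conn.getCapabilities())
            # the capabilities document carries <host><cpu><model>
            return caps.findtext('./host/cpu/model')
        finally:
            conn.close()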

I would be very grateful for any advice from you.


Thanks in advance
&
Best regards,
Felix Lee ~

--
Felix H.T Lee   Academia Sinica Grid & Cloud.
Tel: +886-2-27898308
Office: Room P111, Institute of Physics, 128 Academia Road, Section 2, 
Nankang, Taipei 115, Taiwan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-06-02 Thread Russell Bryant
On 06/02/2014 09:23 AM, boden wrote:
> On 4/28/2014 2:58 PM, Dan Smith wrote:
>>> I'd like to propose the ability to support a pluggable trove conductor
>>> manager. Currently the trove conductor manager is hard-coded [1][2] and
>>> thus is always 'trove.conductor.manager.Manager'. I'd like to see this
>>> conductor manager class be pluggable like nova does [3].
>>
>> Note that most of us don't like this and we're generally trying to get
>> rid of these sorts of things. I actually didn't realize that
>> conductor.manager was exposed in the CONF, and was probably just done to
>> mirror other similar settings.
>>
>> Making arbitrary classes pluggable like this without a structured and
>> stable API is really just asking for trouble when people think it's a
>> pluggable interface.
>>
>> So, you might not want to use "because nova does it" as a reason to add
>> it to trove like this :)
>>
>> --Dan
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
> 
> Thanks for the input Dan.
> 
> Is the real concern here that the conductor API(s) and manager are
> coupled based on version?

FWIW, I really don't like this either.

I snipped some implementation detail proposals here.  I missed why you
want to do this in the first place.  This seems far from an obvious plug
point to me.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-06-02 Thread Joe Gordon
On Mon, Jun 2, 2014 at 12:29 PM, Matt Riedemann 
wrote:

>
>
> On 6/2/2014 12:53 PM, Joe Gordon wrote:
>
>>
>>
>>
>> On Thu, May 29, 2014 at 10:46 AM, Matt Riedemann
>> wrote:
>>
>>
>>
>> On 5/27/2014 4:44 PM, Vishvananda Ishaya wrote:
>>
>> I’m not sure that this is the right approach. We really have to
>> add the old extension back for compatibility, so it might be
>> best to simply keep that extension instead of adding a new way
>> to do it.
>>
>> Vish
>>
>> On May 27, 2014, at 1:31 PM, Cazzolato, Sergio J
>> wrote:
>>
>> I have created a blueprint to add this functionality to nova.
>>
>> https://review.openstack.org/#/c/94519/
>>
>> 
>>
>>
>> -Original Message-
>> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
>> Sent: Tuesday, May 27, 2014 5:11 PM
>> To: OpenStack Development Mailing List (not for usage
>> questions)
>> Subject: Re: [openstack-dev] [nova] nova default quotas
>>
>> Phil,
>>
>> You are correct and this seems to be an error. I don't think
>> in the earlier ML thread[1] that anyone remembered that the
>> quota classes were being used for default quotas. IMO we
>> need to revert this removal as we (accidentally) removed a
>> Havana feature with no notification to the community. I've
>> reactivated a bug[2] and marked it critcal.
>>
>> Vish
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
>> [2] https://bugs.launchpad.net/nova/+bug/1299517
>>
>> 
>>
>> On May 27, 2014, at 12:19 PM, Day, Phil wrote:
>>
>> Hi Vish,
>>
>> I think quota classes have been removed from Nova now.
>>
>> Phil
>>
>>
>> Sent from Samsung Mobile
>>
>>
>>  Original message 
>> From: Vishvananda Ishaya
>> Date:27/05/2014 19:24 (GMT+00:00)
>> To: "OpenStack Development Mailing List (not for usage
>> questions)"
>> Subject: Re: [openstack-dev] [nova] nova default quotas
>>
>> Are you aware that there is already a way to do this
>> through the cli using quota-class-update?
>>
>> http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html
>>
>> (near the bottom)
>>
>> Are you suggesting that we also add the ability to use
>> just regular quota-update? I'm not sure I see the need
>> for both.
>>
>> Vish
>>
>> On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J
>> wrote:
>>
>> I would like to hear your thoughts about an idea to add a
>> way to manage the default quota values through the
>> API.
>>
>> The idea is to use the current quota API, but
>> sending 'default' instead of the tenant_id. This
>> change would apply to quota-show and quota-update
>> methods.
>>
>> This approach will help to simplify the
>> implementation of another blueprint named
>> per-flavor-quotas
>>
>> Feedback? Suggestions?
>>
>>
>> Sergio Juan Cazzolato
>> Intel Software Argentina
>>
>> _
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> _
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-02 Thread Robert Collins
I think you should add the constraints you have.

Realistically though, not everyone will be there, and that's fine. There
are some folk we'll need there (e.g. I suspect I'm one of those, but
maybe not!)

My constraints are:
- need to be in Sydney for the 1st-5th, remembering there is an
international date line between NC and Sydney.
- need to be home for most of a week between the mid cycle meetup and
PyCon AU (family).

So I can do anytime from Aug 11th on, or anytime ending before or on
July the 25th - and I'm going to put that in the etherpad now :0

-Rob



On 3 June 2014 04:51, Ben Nemec  wrote:
> On 05/30/2014 06:58 AM, Jaromir Coufal wrote:
>> On 2014/30/05 10:00, Thomas Spatzier wrote:
>>> Excerpt from Zane Bitter's message on 29/05/2014 20:57:10:
>>>
 From: Zane Bitter 
 To: openstack-dev@lists.openstack.org
 Date: 29/05/2014 20:59
 Subject: Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle
 collaborative meetup
>>> 
 BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
 model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants
 to be stuck in Raleigh, NC on a weekend (I've lived there, I understand
 ;), but for folks who have a long ways to travel it's one weekend lost
 instead of two.
>>>
>>> +1 - excellent idea!
>>
>> It looks that there is an interest in these dates, so I added 3rd option
>> to the etherpad [0].
>>
>> For one more time, I would like to ask potential attendees to put
>> yourselves to dates which would work for you.
>>
>> -- Jarda
>>
>> [0] https://etherpad.openstack.org/p/juno-midcycle-meetup
>
> Just to clarify, I should add my name to the list if I _can_ make it to
> a given proposal, even if I don't know for sure that I will be going?
>
> I don't know what the travel situation is yet so I can't commit to being
> there on any dates, but I can certainly say which dates would work for
> me if I can make it.
>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute rpc version

2014-06-02 Thread Russell Bryant
On 06/02/2014 08:20 AM, abhishek jain wrote:
> Hi
>
> I'm getting the following error in nova-compute logs when trying to boot a VM
> from the controller node onto the compute node ...
> 
>  Specified RPC version, 3.23, not supported
> 
> Please help regarding this.

It sounds like you're using an older nova-compute with newer controller
services (without the configuration to allow a live upgrade).  Check the
versions of Nova services you have running.
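
The configuration in question is the [upgrade_levels] section of
nova.conf on the newer nodes; roughly, and assuming your compute nodes
are on Havana (2013.2.x), it would look like:

    [upgrade_levels]
    # Pin the compute RPC API so newer controller services keep
    # talking a version the older nova-compute understands.
    compute = havana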

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] blueprint priorities

2014-06-02 Thread Doug Hellmann
Oslo team,

I've updated the blueprints listed for juno for oslo [1] and
oslo.messaging [2] based on the specs that have been submitted to the
oslo-specs repository [3] and taken a stab at priorities for all of
them. Please look over the list so we can discuss the priorities at
the meeting this week.

Thanks,
Doug

1. https://blueprints.launchpad.net/oslo/juno
2. https://blueprints.launchpad.net/oslo.messaging/juno
3. https://review.openstack.org/#/q/project:openstack/oslo-specs+status:open,n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-06-02 Thread Carl Baldwin
Paul,

I'm curious.  Have you been able to update to a client using requests?
Has it solved your problem?

Carl

On Thu, May 29, 2014 at 11:15 AM, Paul Ward  wrote:
> Yes, we're still on a code level that uses httplib2.  I noticed that as
> well, but wasn't sure if that would really
> help here as it seems like an ssl thing itself.  But... who knows??  I'm not
> sure how consistently we can
> recreate this, but if we can, I'll try using that patch to use requests and
> see if that helps.
>
>
>
> "Armando M."  wrote on 05/29/2014 11:52:34 AM:
>
>> From: "Armando M." 
>
>
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 05/29/2014 11:58 AM
>
>> Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient
>>
>> Hi Paul,
>>
>> Just out of curiosity, I am assuming you are using the client that
>> still relies on httplib2. Patch [1] replaced httplib2 with requests,
>> but I believe that a new client that incorporates this change has not
>> yet been published. I wonder if the failures you are referring to
>> manifest themselves with the former http library rather than the
>> latter. Could you clarify?
>>
>> Thanks,
>> Armando
>>
>> [1] - https://review.openstack.org/#/c/89879/
>>
>> On 29 May 2014 17:25, Paul Ward  wrote:
>> > Well, for my specific error, it was an intermittent ssl handshake error
>> > before the request was ever sent to the
>> > neutron-server.  In our case, we saw that 4 out of 5 resize operations
>> > worked, the fifth failed with this ssl
>> > handshake error in neutronclient.
>> >
>> > I certainly think a GET is safe to retry, and I agree with your
>> > statement
>> > that PUTs and DELETEs probably
>> > are as well.  This still leaves a change in nova needing to be made to
>> > actually a) specify a conf option and
>> > b) pass it to neutronclient where appropriate.
>> >
>> >
>> > Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:
>> >
>> >> From: Aaron Rosen 
>> >
>> >
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> ,
>> >> Date: 05/28/2014 07:44 PM
>> >
>> >> Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> >> neutronclient
>> >>
>> >> Hi,
>> >>
>> >> I'm curious if other openstack clients implement this type of retry
>> >> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
>> >>
>> >> What types of errors do you see in the neutron-server when it fails
>> >> to respond? I think it would be better to move the retry logic into
>> >> the server around the failures rather than the client (or better yet
>> >> if we fixed the server :)). Most of the times I've seen this type of
>> >> failure is due to deadlock errors caused between (sqlalchemy and
>> >> eventlet *i think*) which cause the client to eventually timeout.
>> >>
>> >> Best,
>> >>
>> >> Aaron
>> >>
>> >
>> >> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
>> >> Would it be feasible to make the retry logic only apply to read-only
>> >> operations?  This would still require a nova change to specify the
>> >> number of retries, but it'd also prevent invokers from shooting
>> >> themselves in the foot if they call for a write operation.
>> >>
>> >>
>> >>
>> >> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
>> >>
>> >> > From: Aaron Rosen 
>> >>
>> >> > To: "OpenStack Development Mailing List (not for usage questions)"
>> >> > ,
>> >> > Date: 05/27/2014 09:44 PM
>> >>
>> >> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> >> > neutronclient
>> >> >
>> >> > Hi,
>> >>
>> >> >
>> >> > Is it possible to detect when the ssl handshaking error occurs on
>> >> > the client side (and only retry for that)? If so I think we should
>> >> > do that rather than retrying multiple times. The danger here is
>> >> > mostly for POST operations (as Eugene pointed out) where it's
>> >> > possible for the response to not make it back to the client and for
>> >> > the operation to actually succeed.
>> >> >
>> >> > Having this retry logic nested in the client also prevents things
>> >> > like nova from handling these types of failures individually since
>> >> > this retry logic is happening inside of the client. I think it would
>> >> > be better not to have this internal mechanism in the client and
>> >> > instead make the user of the client implement retry so they are
>> >> > aware of failures.
>> >> >
>> >> > Aaron
>> >> >
>> >>
>> >> > On Tue, May 27, 2014 at 10:48 AM, Paul Ward 
>> >> > wrote:
>> >> > Currently, neutronclient is hardcoded to only try a request once in
>> >> > retry_request by virtue of the fact that it uses self.retries as the
>> >> > retry count, and that's initialized to 0 and never changed.  We've
>> >> > seen an issue where we get an ssl handshaking error intermittently
>> >> > (seems like more of an ssl bug) and a retry would probably have
>> >> > worked.  Yet, since neutronclient only tries once and gives up, it
>> >> > fails the entire operation.  Here is the code in question:
>> >> >
>> >> > https://githu

Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-06-02 Thread Carl Baldwin
+1.  After reading through this thread, I think that a blind --retries
N could be harmful and unwise given the current API definition.  Users
that need a retry for an SSL error are going to get into the habit of
adding --retries N to all their calls and they'll end up in trouble
because they really should be taking action on the particular error
that occurs, not just retrying on any error.
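
If the SSL handshake failure really is the only case worth retrying,
something like this on the caller side keeps the retry scoped to that one
error (an untested sketch; the exact exception type depends on whether the
client build is on httplib2 or requests):

    import ssl
    import time

    def list_networks_with_retry(neutron, retries=3, delay=1):
        # Retry only a read-only call, and only on the specific
        # SSL handshake failure we are willing to tolerate.
        for attempt in range(retries):
            try:
                return neutron.list_networks()
            except ssl.SSLError:
                if attempt == retries - 1:
                    raise
                time.sleep(delay)

That keeps the decision (and the idempotency judgement) with the caller,
as Aaron suggested.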

Carl

On Tue, May 27, 2014 at 8:40 PM, Aaron Rosen  wrote:
> Hi,
>
> Is it possible to detect when the ssl handshaking error occurs on the client
> side (and only retry for that)? If so I think we should do that rather than
> retrying multiple times. The danger here is mostly for POST operations (as
> Eugene pointed out) where it's possible for the response to not make it back
> to the client and for the operation to actually succeed.
>
> Having this retry logic nested in the client also prevents things like nova
> from handling these types of failures individually since this retry logic is
> happening inside of the client. I think it would be better not to have this
> internal mechanism in the client and instead make the user of the client
> implement retry so they are aware of failures.
>
> Aaron
>
>
> On Tue, May 27, 2014 at 10:48 AM, Paul Ward  wrote:
>>
>> Currently, neutronclient is hardcoded to only try a request once in
>> retry_request by virtue of the fact that it uses self.retries as the retry
>> count, and that's initialized to 0 and never changed.  We've seen an issue
>> where we get an ssl handshaking error intermittently (seems like more of an
>> ssl bug) and a retry would probably have worked.  Yet, since neutronclient
>> only tries once and gives up, it fails the entire operation.  Here is the
>> code in question:
>>
>>
>> https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L1296
>>
>> Does anybody know if there's some explicit reason we don't currently allow
>> configuring the number of retries?  If not, I'm inclined to propose a change
>> for just that.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Sean Dague
On 06/02/2014 01:05 PM, Sean Dague wrote:
> On 06/02/2014 09:21 AM, Matthew Treinish wrote:
> 
>>> The url for this is -  http://goo.gl/g4aMjM
>>>
>>> (the long url is very long:
>>> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
>>>
>>> The url can be regenerated easily using the gerrit-dash-creator.
>>>
>>
>> These generated URLs don't quite work as expected for me, I see a bunch of -1s
>> from jenkins in all the sections. Other things like reviews with -2s showing up
>> "in need final +2", or reviews with -2s and +2s from me being listed in the "but
>> haven't voted in the current revision". Also the top section just seems to list
>> every open QA program review regardless of its current review state.
>>
>> I'll take a look at the code and see if I can help figure out what's going on.
> 
> It appears that there is some issue in Firefox vs. Gerrit here where
> Firefox is incorrectly over-unescaping the URL, thus it doesn't work.
> Chrome works fine. As I'm on Linux that's the extent of what I can
> natively test.
> 
> I filed a Firefox bug here -
> https://bugzilla.mozilla.org/show_bug.cgi?id=1019073

The following updated URL seems to work for Firefox: https://goo.gl/oGYH4s

Thanks to dtantsur for figuring out the extra escaping on commas you
needed to work with Firefox.
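
For anyone who would rather edit the dashboard definition than the URL,
the .dash input to gerrit-dash-creator looks roughly like this (abridged
sketch; the real qa-program.dash has more sections):

    [dashboard]
    title = QA Review Inbox
    foreach = (project:openstack/tempest OR project:openstack-dev/grenade
               OR project:openstack/qa-specs) status:open NOT owner:self

    [section "QA Specs"]
    query = project:openstack/qa-specs

    [section "Needs final +2"]
    query = label:Code-Review>=2 NOT label:Code-Review<=-1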

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] [mistral] [heat] keystone chained trusts / oauth

2014-06-02 Thread Steve Martinelli
> 1. There doesn't seem to be any non-global way to prevent oauth accesskeys
> from expiring.  We need delegation to last the (indefinite) lifetime of the
> heat stack, so the delegation cannot expire.
> 2. Most (all?) of the oauth interfaces are admin-only.  I'm not clear if
> this is a blocker, but it seems like it's the opposite of what we currently
> do with trusts, where a (non-admin) user can delegate a subset of their
> roles via a trust, which is created using their token.

For issue #2, I think you're right, we should probably make the oauth interfaces
(https://github.com/openstack/keystone/blob/master/etc/policy.json#L109..L114)
have the same policy as the trust ones (user_id:%(trust.trustor_user_id)s).
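
To make that concrete, a rule like the following (rule name illustrative;
see the linked lines for the real ones):

    "identity:list_access_tokens": "rule:admin_required",

would become something like:

    "identity:list_access_tokens": "rule:admin_required or user_id:%(user_id)s",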

#1 is a bit more work, it would definitely be a new blueprint.

I'll see what I can do about providing a more realistic example.


Regards,

Steve Martinelli
Software Developer - OpenStack
Keystone Core Member

Phone: 1-905-413-2851
E-mail: steve...@ca.ibm.com

8200 Warden Ave
Markham, ON L6G 1C7
Canada


Steven Hardy  wrote on 06/02/2014 06:41:48 AM:

> From: Steven Hardy 
> To: "OpenStack Development Mailing List (not for usage questions)"
> Date: 06/02/2014 06:43 AM
> Subject: Re: [openstack-dev] [solum] [mistral] [heat] keystone
> chained trusts / oauth
> 
> Hi Angus,
> 
> On Wed, May 28, 2014 at 12:56:52AM +, Angus Salkeld wrote:
> > Hi all
> > 
> > During our Solum meeting it was felt we should make sure that all three
> > teams are on the same page wrt $subject.
> > 
> > I'll describe the use case we are trying to solve and hopefully get some
> > guidance from the keystone team about the best way forward.
> > 
> > Solum implements a ci/cd pipeline that we want to trigger based on a git
> > receive hook. What we do is generate a magic webhook (should be an
> > ec2signed url - on the todo list) and when it is hit we want
> > to call mistral-execution-create (which runs a workflow that calls
> > to other openstack services (heat is one of them).
> > 
> > We currently use a trust token and that fails because both mistral and
> > heat want to create trust tokens as well :-O (trust tokens can't be
> > rescoped).
> 
> So, I've been looking into this, and there are two issues:
> 
> 1. On stack-create, heat needs to create a trust so it can do deferred
> operations on behalf of the user.  To do this we will require explicit
> support for chained delegation in keystone, which does not currently exist.
> I've been speaking to ayoung about it, and plan to raise a spec for this
> work soon.  The best quick-fix is probably to always create a stack when
> the user calls Solum (even if it's an empty stack), using their
> non-trust-scoped token.
> 
> 2. Heat doesn't currently work (even for non-create operations) with a
> trust-scoped token.  The reason for this is primarily that keystoneclient
> always tries to request a new token to populate the auth_ref (e.g. service
> catalog etc), so there is no way to just validate the existing trust-scoped
> token.  AFAICS this requires a new keystoneclient auth plugin, which I'm
> working on right now; I already posted a patch for the heat part of the
> fix:
> 
> https://review.openstack.org/#/c/96452/
> 
> > So what is the best mechanism for this? I spoke to Steven Hardy at
> > summit and he suggested (after talking to some keystone folks) we all
> > move to using the new oauth functionality in keystone.
> > 
> > I believe there might be some limitations to oauth (are roles supported?).
> 
> I spent a bit of time digging into oauth last week, based on this example
> provided by Steve Martinelli:
> 
> https://review.openstack.org/#/c/80193/
> 
> Currently, I can't see how we can use this as a replacement for our current
> use-cases for trusts:
> 1. There doesn't seem to be any non-global way to prevent oauth accesskeys
> from expiring.  We need delegation to last the (indefinite) lifetime of the
> heat stack, so the delegation cannot expire.
> 2. Most (all?) of the oauth interfaces are admin-only.  I'm not clear if
> this is a blocker, but it seems like it's the opposite of what we currently
> do with trusts, where a (non-admin) user can delegate a subset of their
> roles via a trust, which is created using their token.
> 
> What would be *really* helpful, is if we could work towards another
> example, which demonstrates something closer to the Solum/Heat use-case for
> delegation (as opposed to the current example which just shows an admin
> delegating their admin-ness).
> 
> e.g. (these users/roles exist by default in devstack deployments):
> 
> 1. User "demo" delegates the "heat_stack_owner" role in project "demo" to the
> "heat" service user.  The resulting delegation-secret to be stored by heat
> must not expire, and it must be possible for the "heat" user to explicitly
> impersonate user "demo".
> 
> Until we can see how that use-case can be solved with oauth, I don't think
> we can make a

Re: [openstack-dev] [Fuel] Backporting bugfixes to stable releases

2014-06-02 Thread Mike Scherbakov
Thanks Dmitry.
Can we do #2 using the new cool gerrit feature (thanks to the Infra team!)?
When a patch is merged, a "Cherry Pick To" button appears near the Review
button, and you can easily choose the target branch by clicking on it.


On Mon, Jun 2, 2014 at 12:33 PM, Dmitry Borodaenko  wrote:

> Our experience in backporting leftover bugfixes from MOS 5.0 to 4.1
> was not pleasant, primarily because too many backport commits had to
> be dealt with at the same time.
>
> We can do better next time if we follow a couple of simple rules:
>
> 1) When you create a new bug with High or Critical priority or upgrade
> an existing bug, always check if this bug is present in the supported
> stable release series (at least one most recent stable release). If it
> is present there, target it to all affected series (even if you don't
> expect to be able to eventually backport a fix). If it's not present
> in stable releases, document that on the bug so that other people
> don't have to re-check.
>
> 2) When you propose a fix for a bug, cherry-pick the fix commit onto
> the stable/x.x branch for each series it is targeted to. Use the same
> Change-Id and topic (git review -t bug/) to make it easier to
> track down all backports for that bug.
>
> 3) When you approve a bugfix commit for master branch, use the
> information available so far on the bug and in the review request to
> review and maybe update backporting status of the bug. Is its priority
> high enough to need a backport? Is it targeted to all affected release
> series? Are there backport commits already? For all series where
> backport should exist and doesn't, create a backport review request
> yourself. For all other affected series, change bug status to Won't
> Fix and explain in bug comments.
>
> Needless to say, we should keep following the rule #0, too: do not
> merge commits into stable branches until it was merged into master or
> documented why it doesn't apply to master.
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-02 Thread Malini Kamalambal
+1 – Requiring specs for every blueprint is going to make the development 
process very cumbersome, and will take us back to waterfall days.
I like how the Marconi team operates now, with design decisions being made in 
IRC/ team meetings.
So specs might become more of an overhead than an added value, given how our team 
functions.

'If' we agree to use specs, we should use them only for the blueprints where 
they make sense.
For example, the unit test decoupling that we are working on now – this one 
will be a good candidate to use specs, since there is a lot of back and forth 
going on how to do this.
On the other hand something like Tempest Integration for Marconi will not 
warrant a spec, since it is pretty straightforward what needs to be done.
In the past we have had discussions around where to document certain design 
decisions (e.g. Which endpoint/verb is the best fit for pop operation?)
Maybe spec is the place for these?

We should leave it to the implementor to decide if the BP warrants a spec or 
not, and what should be in the spec.


From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 1:33 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs were non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are on staying on track and can address 
design problems early, often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more & more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more & start using 
spec.
We are kind of late to the party –so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Backporting bugfixes to stable releases

2014-06-02 Thread Dmitry Borodaenko
Our experience in backporting leftover bugfixes from MOS 5.0 to 4.1
was not pleasant, primarily because too many backport commits had to
be dealt with at the same time.

We can do better next time if we follow a couple of simple rules:

1) When you create a new bug with High or Critical priority or upgrade
an existing bug, always check if this bug is present in the supported
stable release series (at least one most recent stable release). If it
is present there, target it to all affected series (even if you don't
expect to be able to eventually backport a fix). If it's not present
in stable releases, document that on the bug so that other people
don't have to re-check.

2) When you propose a fix for a bug, cherry-pick the fix commit onto
the stable/x.x branch for each series it is targeted to. Use the same
Change-Id and topic (git review -t bug/) to make it easier to
track down all backports for that bug.

3) When you approve a bugfix commit for master branch, use the
information available so far on the bug and in the review request to
review and maybe update backporting status of the bug. Is its priority
high enough to need a backport? Is it targeted to all affected release
series? Are there backport commits already? For all series where
backport should exist and doesn't, create a backport review request
yourself. For all other affected series, change bug status to Won't
Fix and explain in bug comments.

Needless to say, we should keep following the rule #0, too: do not
merge commits into stable branches until it was merged into master or
documented why it doesn't apply to master.
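
For rule #2, the mechanics are roughly as follows (bug number, branch, and
SHA are placeholders):

    git checkout -b backport/1234567 origin/stable/4.1
    git cherry-pick -x <sha-from-master>    # keeps the same Change-Id
    git review -t bug/1234567 stable/4.1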

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-06-02 Thread Matt Riedemann



On 6/2/2014 12:53 PM, Joe Gordon wrote:

On Thu, May 29, 2014 at 10:46 AM, Matt Riedemann
<mrie...@linux.vnet.ibm.com> wrote:



On 5/27/2014 4:44 PM, Vishvananda Ishaya wrote:

I’m not sure that this is the right approach. We really have to
add the old extension back for compatibility, so it might be
best to simply keep that extension instead of adding a new way
to do it.

Vish

On May 27, 2014, at 1:31 PM, Cazzolato, Sergio J
<sergio.j.cazzol...@intel.com> wrote:

I have created a blueprint to add this functionality to nova.

https://review.openstack.org/#/c/94519/



-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Tuesday, May 27, 2014 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] nova default quotas

Phil,

You are correct and this seems to be an error. I don't think
in the earlier ML thread[1] that anyone remembered that the
quota classes were being used for default quotas. IMO we
need to revert this removal as we (accidentally) removed a
Havana feature with no notification to the community. I've
reactivated a bug[2] and marked it critcal.

Vish

[1]

http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html


[2] https://bugs.launchpad.net/nova/+bug/1299517


On May 27, 2014, at 12:19 PM, Day, Phil <philip@hp.com> wrote:

Hi Vish,

I think quota classes have been removed from Nova now.

Phil


Sent from Samsung Mobile


 Original message 
From: Vishvananda Ishaya
Date:27/05/2014 19:24 (GMT+00:00)
To: "OpenStack Development Mailing List (not for usage
questions)"
Subject: Re: [openstack-dev] [nova] nova default quotas

Are you aware that there is already a way to do this
through the cli using quota-class-update?


http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html


(near the bottom)

Are you suggesting that we also add the ability to use
just regular quota-update? I'm not sure i see the need
for both.

Vish

On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J
<sergio.j.cazzol...@intel.com> wrote:

I would to hear your thoughts about an idea to add a
way to manage the default quota values through the API.

The idea is to use the current quota api, but
sending ''default' instead of the tenant_id. This
change would apply to quota-show and quota-update
methods.

This approach will help to simplify the
implementation of another blueprint named
per-flavor-quotas

Feedback? Suggestions?


Sergio Juan Cazzolato
Intel Software Argentina

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [Marconi] Kafka support and high throughput

2014-06-02 Thread Janczuk, Tomasz
Keith,

Have you put HTTP protocol in front of Kafka, or are you using Kafka's
native protocol?

Can you also expand a little on your performance requirements? What does
"high throughput" mean to you in terms of the messaging patterns (# of
producers and consumers, # of queues and queue partitions, message size,
desired throughput)?

Thanks,
Tomasz Janczuk
@tjanczuk
HP

On 5/31/14, 11:56 PM, "Flavio Percoco"  wrote:

>On 30/05/14 06:03 -0700, Keith Newstadt wrote:
>>Has anyone given thought to using Kafka to back Marconi?  And has there
>>been discussion about adding high throughput APIs to Marconi.
>>
>>We're looking at providing Kafka as a messaging service for our
>>customers, in a scenario where throughput is a priority.  We've had good
>>luck using both streaming HTTP interfaces and long poll interfaces to
>>get high throughput for other web services we've built.  Would this use
>>case be appropriate in the context of the Marconi roadmap?
>
>
>Hi,
>
>Kafka would be a good store to back Marconi with. We've had some
>feedback from the community w.r.t. Kafka and there seems to be a lot of
>interest in it. The team is not currently targeting it but we could
>probably do something after the J release.
>
>That said, Marconi's plugin architecture allows people to create
>third-party drivers and obviously use them. It'd be really nice to see
>some work going on in that area as an external driver.
>
>Thanks,
>Flavio
>
>-- 
>@flaper87
>Flavio Percoco
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Removing Get and Delete Messages by ID

2014-06-02 Thread Janczuk, Tomasz
First of all, I think the removal of "get message[s] by ID" is a great
change, it moves Marconi APIs closer to typical messaging semantics.
However, I still see the "list messages" API in the spec
(https://wiki.openstack.org/wiki/Marconi/specs/api/v1.1#List_Messages). Is
it the plan to leave this API in, or is it also going to be removed? If
the motivation for removing the "get message[s] by ID" was to make it
easier to support different store backends (e.g. AMQP), I would expect the
same argument to apply to the "list messages" API which allows random
access to messages in a queue.

Regarding deleting claimed messages, I think it should be possible to
claim multiple messages but then delete any of them individually. For
reference, that is the semantics of a "batch claim" that SQS, Azure, and
IronMQ have - during a batch claim a number of messages can be claimed, but
each of them is assigned a unique "token" that can later be used to delete
just that message. I believe there is a good reason it is organized that
way: even if a batch of messages is claimed, their processing can be
completely unrelated, and executed in different time frames. It does not
make sense to make logical completion of message A conditional on the
success of processing of message B within a batch. It also does not make
sense to hold up completion of all messages in a batch until the
completion of the message that takes the most time to process.

Put another way, a batch is merely an optimization over the atomic
operation of claiming and deleting a single message. The optimization can
allow multiple messages to be claimed at once; it can also allow multiple
messages to be deleted at once (I believe AMQP has that semantics); but it
should not prevent the basic use case of claiming or deleting a single
message. 
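
For reference, here is roughly how that semantics maps onto the v1.0 HTTP
API (paths abridged, IDs are placeholders):

    # claim up to 5 messages; each message in the response carries its own href
    POST /v1/queues/demo/claims?limit=5
    {"ttl": 300, "grace": 60}

    # later, delete one processed message individually, scoped to the claim
    DELETE /v1/queues/demo/messages/<message-id>?claim_id=<claim-id>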

On 5/29/14, 1:22 AM, "Flavio Percoco"  wrote:

>On 28/05/14 17:01 +, Kurt Griffiths wrote:
>>Crew, as discussed in the last team meeting, I have updated the API v1.1
>>spec
>>to remove the ability to get one or more messages by ID. This was done to
>>remove unnecessary complexity from the API, and to make it easier to
>>support
>>different types of message store backends.
>>
>>However, this now leaves us with asymmetric semantics. On the one hand,
>>we do
>>not allow retrieving messages by ID, but we still support deleting them
>>by ID.
>>It seems to me that deleting a message only makes sense in the context
>>of a
>>claim or pop operation. In the case of a pop, the message is already
>>deleted by
>>the time the client receives it, so I don't see a need for including a
>>message
>>ID in the response. When claiming a batch of messages, however, the
>>client
>>still needs some way to delete each message after processing it. In this
>>case,
>>we either need to allow the client to delete an entire batch of messages
>>using
>>the claim ID, or we still need individual message IDs (hrefs) that can be
>>DELETEd. 
>>
>>Deleting a batch of messages can be accomplished in V1.0 using "delete
>>multiple
>>messages by ID". Regardless of how it is done, I've been wondering if it
>>is
>>actually an anti-pattern; if a worker crashes after processing N
>>messages, but
>>before deleting those same N messages, the system is left with several
>>messages
>>that another worker will pick up and potentially reprocess, although the
>>work
>>has already been done. If the work is idempotent, this isn't a big deal.
>>Otherwise, the client will have to have a way to check whether a message
>>has
>>already been processed, ignoring it if it has. But whether it is 1
>>message or N
>>messages left in a bad state by the first worker, the other worker has to
>>follow the same logic, so perhaps it would make sense after all to
>>simply allow
>>deleting entire batches of claimed messages by claim ID, and not
>>worrying about
>>providing individual message hrefs/IDs for deletion.
>
>There are some risks related to claiming a set of messages and process
>them in batch rather than processing 1 message at a time. However,
>some of those risks are valid for both scenarios. For instance, if a
>worker claims just 1 message and dies before deleting it, the server
>will be left with an already processed message.
>
>I believe this is very specific to the each use-case. Based on their
>needs, users will have to choose between 'pop'ng' messages out of the
>queue or caliming them. One way to provide more info to the user is by
>keeping track of how many times (or even the last time) a message has
>been claimed. I'm not a big fan of this because it'll add more
>complexity and more importantly we won't be able to support this on
>the AMQP driver.
>
>It's common to have this kind of 'tolerance' implemented on the
>client side. The server must guarantee the delivery mechanism whereas
>the client must be tolerant enough based on the use-case.
>
>>
>>With all this in mind, I¹m starting to wonder if I should revert my
>>changes to
>>the spec, and wait to address these changes in

Re: [openstack-dev] [Murano] [meetings] Murano bug scrub

2014-06-02 Thread Timur Nurlygayanov
Thanks for today's bug scrub meeting!

The meeting minutes are available by the following links:

Minutes:
http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-02-17.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-02-17.02.txt
Log:
http://eavesdrop.openstack.org/meetings/murano_bug_scrub/2014/murano_bug_scrub.2014-06-02-17.02.log.html

We plan to continue our meeting on 4 June, at 4:00 - 6:00 PM UTC, in the *#murano*
IRC channel.

You are welcome!



On Tue, May 27, 2014 at 2:41 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi all,
>
> We want to schedule the bug scrub meeting for Murano project to 06/02/14
> (June 2), at 1700 UTC.
> On this meeting we will discuss all new bugs, which we plan to fix in
> juno-1 release cycle.
>
> All actual descriptions of Murano bugs are available here:
> https://bugs.launchpad.net/murano
>
> If you want to participate, welcome to the IRC *#murano* chanel!
>
>
> Thank you!
>
> --
>
> Timur,
> QA Engineer
> OpenStack Projects
> Mirantis Inc
>



-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Passing flat_injected flag through instance metadata

2014-06-02 Thread Ben Nemec

Then I think the answer to the original question is that this requires
a new spec, unless one already exists?

-Ben

On 06/02/2014 12:50 PM, Vishvananda Ishaya wrote:
> We have discussed this a bunch in the past, and the right
> implementation here is to put the network configuration in a
> standard format (json?) in both the config drive and metadata.
> 
> cloud-init can be modified to read from that format and write out a
> proper /etc/network/interfaces (or appropriate files for the guest
> distro)
> 
> Vish
> 
> On Jun 2, 2014, at 10:20 AM, ebaysf, yvempati 
> wrote:
> 
>> Hi, Thanks for getting back to me.
>> 
>> The current flat_injected flag is set in the hypervisor
>> nova.conf. The config drive data uses this flag to set the static
>> network configuration. What I am trying to accomplish is to pass
>> the flat_injected flag through the instance metadata during
>> boot time and use it during the config drive network
>> configuration rather than setting the flag at the hypervisor level.
>> 
>> Regards, Yashwanth Vempati
>> 
>> On 6/2/14, 9:30 AM, "Ben Nemec"  wrote:
>> 
>>> On 05/30/2014 05:29 PM, ebaysf, yvempati wrote:
 Hello all, I am new to the openstack community and I am
 looking for feedback.
 
 We would like to implement a feature that allows user to
 pass flat_injected flag through instance metadata. We would
 like to enable this feature for images that support config
 drive. This feature helps us to decrease the dependency on
 dhcp server and  to maintain a uniform configuration across
 all the hypervisors running in our cloud. In order to enable
 this feature should I create a blue print and later
 implement or can this feature be implemented by filing a
 bug.
>>> 
>>> I'm not sure I understand what you're trying to do here.  As I
>>> recall, when flat_injected is set the static network
>>> configuration is already included in the config drive data.  I
>>> believe there have been some changes around file injection, but
>>> that shouldn't affect config drive as far as I know.
>>> 
>>> If you just need that functionality and it's not working
>>> anymore then a bug might be appropriate, but if you need
>>> something else then a blueprint/spec will be needed.
>>> 
>>> -Ben
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-06-02 Thread Joe Gordon
On Thu, May 29, 2014 at 10:46 AM, Matt Riedemann  wrote:

>
>
> On 5/27/2014 4:44 PM, Vishvananda Ishaya wrote:
>
>> I’m not sure that this is the right approach. We really have to add the
>> old extension back for compatibility, so it might be best to simply keep
>> that extension instead of adding a new way to do it.
>>
>> Vish
>>
>> On May 27, 2014, at 1:31 PM, Cazzolato, Sergio J <
>> sergio.j.cazzol...@intel.com> wrote:
>>
>>  I have created a blueprint to add this functionality to nova.
>>>
>>> https://review.openstack.org/#/c/94519/
>>>
>>>
>>> -Original Message-
>>> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
>>> Sent: Tuesday, May 27, 2014 5:11 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [nova] nova default quotas
>>>
>>> Phil,
>>>
>>> You are correct and this seems to be an error. I don't think in the
>>> earlier ML thread[1] that anyone remembered that the quota classes were
>>> being used for default quotas. IMO we need to revert this removal as we
>>> (accidentally) removed a Havana feature with no notification to the
>>> community. I've reactivated a bug[2] and marked it critical.
>>>
>>> Vish
>>>
>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-
>>> February/027574.html
>>> [2] https://bugs.launchpad.net/nova/+bug/1299517
>>>
>>> On May 27, 2014, at 12:19 PM, Day, Phil  wrote:
>>>
>>>  Hi Vish,

 I think quota classes have been removed from Nova now.

 Phil


 Sent from Samsung Mobile


  Original message 
 From: Vishvananda Ishaya
 Date:27/05/2014 19:24 (GMT+00:00)
 To: "OpenStack Development Mailing List (not for usage questions)"
 Subject: Re: [openstack-dev] [nova] nova default quotas

 Are you aware that there is already a way to do this through the cli
 using quota-class-update?

 http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html
 (near the bottom)

 Are you suggesting that we also add the ability to use just regular
 quota-update? I'm not sure i see the need for both.

 Vish

 On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J <
 sergio.j.cazzol...@intel.com> wrote:

  I would to hear your thoughts about an idea to add a way to manage the
> default quota values through the API.
>
> The idea is to use the current quota api, but sending ''default'
> instead of the tenant_id. This change would apply to quota-show and
> quota-update methods.
>
> This approach will help to simplify the implementation of another
> blueprint named per-flavor-quotas
>
> Feedback? Suggestions?
>
>
> Sergio Juan Cazzolato
> Intel Software Argentina
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> The reverted series for nova on master is here [1].
>

I don't think we want a full revert here; the feature that we broke is the
ability to easily update the default quota values without restarting any
services, not quota-classes themselves. Given that, I see 3 paths forward:

1. Provide an alternate way to do this. OpenStack already has an implicit
assumption that one has a way of rolling out config files across all
machines, so we can teach oslo.config to know which config options can be
updated without a restart.  While this definitely breaks the API, it is a
rarely used API and we can avoid breaking functionality at least.
2. Do a partial revert of this API to only support overriding the default
quota values. Hopefully while doing this we can simplify the quota logic
and reduce the number of DB calls needed. This way we can restore the
working part of the API and not the unimplemented quota-class logic itself.
3. Do a full revert and re-add all the unimplemented quota-class logic; we
would then have just re-added a non-working API.

While I would prefer to take path 1, as I think that gets us closer to where
we should be, path 2 is the safer approach.
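
For context, the user-visible bit that path 2 would restore is essentially
this (pre-removal novaclient syntax):

    # show the effective defaults for all tenants
    nova quota-class-show default

    # raise the default instance quota without touching config files
    nova quota-class-update --instances 20 default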


>
> Once that's merged I can work on backporting the revert for the API change
> to stable/icehouse, which will be a little tricky given conflicts from
> master.
>
> [1] https://review.openstack.org/#/q/

Re: [openstack-dev] [Nova]Passing flat_injected flag through instance metadata

2014-06-02 Thread Vishvananda Ishaya
We have discussed this a bunch in the past, and the right implementation here 
is to put the network configuration in a standard format (json?) in both the 
config drive and metadata.

cloud-init can be modified to read from that format and write out a proper 
/etc/network/interfaces (or appropriate files for the guest distro)
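
Something along these lines, purely to illustrate the idea (field names are
made up, not a proposed schema):

    {
        "networks": [
            {
                "id": "eth0",
                "type": "ipv4",
                "ip_address": "10.0.0.5",
                "netmask": "255.255.255.0",
                "gateway": "10.0.0.1",
                "dns_nameservers": ["8.8.8.8"]
            }
        ]
    }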

Vish

On Jun 2, 2014, at 10:20 AM, ebaysf, yvempati  wrote:

> Hi,
> Thanks for getting back to me.
> 
> The current flat_injected flag is set in the hypervisor nova.conf. The
> config drive data uses this flag to set the static network configuration.
> What I am trying to accomplish is to pass the flat_injected flag through
> the instance metadata during boot time and use it during the config
> drive network configuration rather than setting the flag at the hypervisor
> level.
> 
> Regards,
> Yashwanth Vempati
> 
> On 6/2/14, 9:30 AM, "Ben Nemec"  wrote:
> 
>> On 05/30/2014 05:29 PM, ebaysf, yvempati wrote:
>>> Hello all,
>>> I am new to the openstack community and I am looking for feedback.
>>> 
>>> We would like to implement a feature that allows user to pass
>>> flat_injected flag through instance metadata. We would like to enable
>>> this feature for images that support config drive. This feature helps us
>>> to decrease the dependency on dhcp server and  to maintain a uniform
>>> configuration across all the hypervisors running in our cloud. In order
>>> to enable this feature should I create a blue print and later implement
>>> or can this feature be implemented by filing a bug.
>> 
>> I'm not sure I understand what you're trying to do here.  As I recall,
>> when flat_injected is set the static network configuration is already
>> included in the config drive data.  I believe there have been some
>> changes around file injection, but that shouldn't affect config drive as
>> far as I know.
>> 
>> If you just need that functionality and it's not working anymore then a
>> bug might be appropriate, but if you need something else then a
>> blueprint/spec will be needed.
>> 
>> -Ben
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova default quotas

2014-06-02 Thread Joe Gordon
On Thu, May 29, 2014 at 5:45 AM, Day, Phil  wrote:

>
>
>
>
> *From:* Kieran Spear [mailto:kisp...@gmail.com]
> *Sent:* 28 May 2014 06:05
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [nova] nova default quotas
>
>
>
> Hi Joe,
>
>
>
> On 28/05/2014, at 11:21 AM, Joe Gordon  wrote:
>
>
>
>
>
>
>
> On Tue, May 27, 2014 at 1:30 PM, Kieran Spear  wrote:
>
>
> On 28/05/2014, at 6:11 AM, Vishvananda Ishaya 
> wrote:
>
> > Phil,
> >
>
> > You are correct and this seems to be an error. I don’t think in the
> earlier ML thread[1] that anyone remembered that the quota classes were
> being used for default quotas. IMO we need to revert this removal as we
> (accidentally) removed a Havana feature with no notification to the
> community. I’ve reactivated a bug[2] and marked it critical.
>
> +1.
>
> We rely on this to set the default quotas in our cloud.
>
>
>
> Hi Kieran,
>
>
>
> Can you elaborate on this point. Do you actually use the full quota-class
> functionality that allows for quota classes, if so what provides the quota
> classes? If you only use this for setting the default quotas, why do you
> prefer the API and not setting the config file?
>
>
>
> We just need the defaults. My comment was more to indicate that yes, this
> is being used by people. I'm sure we could switch to using the config file,
> and generally I prefer to keep configuration in code, but finding out about
> this half way through a release cycle isn't ideal.
>
>
>
> I notice that only the API has been removed in Icehouse, so I'm assuming
> the impact is limited to *changing* the defaults, which we don't do often.
> I was initially worried that after upgrading to Icehouse we'd be left with
> either no quotas or whatever the config file defaults are, but it looks
> like this isn't the case.
>
>
>
> Unfortunately the API removal in Nova was followed by similar changes in
> novaclient and Horizon, so fixing Icehouse at this point is probably going
> to be difficult.
>
>
>
> *[Day, Phil]  I think we should revert the changes in all three system
> then.   We have the rules about not breaking API compatibility in place for
> a reason, if we want to be taken seriously as a stable API then we need to
> be prepared to roll back if we goof-up.*
>
>
>
> *Joe – was there a nova-specs BP for the change ?  I’m wondering how this
> one slipped through*
>

That's a good question.

The API extension quota-classes has been around for a very long time and
never actually worked [0]. This was brought up again in February 2014 and
the original author chimed in saying it doesn't work [1]. When this came up
there was no discussion around the default quota value functionality, and
it didn't come up in any of the reviews. Because this was supposed to be
just removing dead code, there was no nova-specs BP for it.


[0] http://lists.openstack.org/pipermail/openstack-dev/2014-May/036053.html
[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html



>
>
>
>
> Cheers,
>
> Kieran
>
>
>
>
>
>
> Kieran
>
>
> >
> > Vish
> >
> > [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
> > [2] https://bugs.launchpad.net/nova/+bug/1299517
> >
> > On May 27, 2014, at 12:19 PM, Day, Phil  wrote:
> >
> >> Hi Vish,
> >>
> >> I think quota classes have been removed from Nova now.
> >>
> >> Phil
> >>
> >>
> >> Sent from Samsung Mobile
> >>
> >>
> >>  Original message 
> >> From: Vishvananda Ishaya
> >> Date:27/05/2014 19:24 (GMT+00:00)
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> Subject: Re: [openstack-dev] [nova] nova default quotas
> >>
> >> Are you aware that there is already a way to do this through the cli
> using quota-class-update?
> >>
> >> http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html
> (near the bottom)
> >>
> >> Are you suggesting that we also add the ability to use just regular
> quota-update? I’m not sure i see the need for both.
> >>
> >> Vish
> >>
> >> On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J <
> sergio.j.cazzol...@intel.com> wrote:
> >>
> >>> I would to hear your thoughts about an idea to add a way to manage the
> default quota values through the API.
> >>>
> >>> The idea is to use the current quota api, but sending ''default'
> instead of the tenant_id. This change would apply to quota-show and
> quota-update methods.
> >>>
> >>> This approach will help to simplify the implementation of another
> blueprint named per-flavor-quotas
> >>>
> >>> Feedback? Suggestions?
> >>>
> >>>
> >>> Sergio Juan Cazzolato
> >>> Intel Software Argentina
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://list

Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-02 Thread Kurt Griffiths
I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs were non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are staying on track and can address 
design problems early, often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more & more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more & start using 
spec.
We are kind of late to the party –so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Passing flat_injected flag through instance metadata

2014-06-02 Thread ebaysf, yvempati
Hi,
Thanks for getting back to me.

The current flat_injected flag is set in the hypervisor nova.conf. The
config drive data uses this flag to set the static network configuration.
What I am trying to accomplish is to pass the flat_injected flag through
the instance metadata during boot time and use it during the config
drive network configuration rather than setting the flag at the hypervisor
level.

Regards,
Yashwanth Vempati

On 6/2/14, 9:30 AM, "Ben Nemec"  wrote:

>On 05/30/2014 05:29 PM, ebaysf, yvempati wrote:
>> Hello all,
>> I am new to the openstack community and I am looking for feedback.
>> 
>> We would like to implement a feature that allows user to pass
>>flat_injected flag through instance metadata. We would like to enable
>>this feature for images that support config drive. This feature helps us
>>to decrease the dependency on dhcp server and  to maintain a uniform
>>configuration across all the hypervisors running in our cloud. In order
>>to enable this feature should I create a blue print and later implement
>>or can this feature be implemented by filing a bug.
>
>I'm not sure I understand what you're trying to do here.  As I recall,
>when flat_injected is set the static network configuration is already
>included in the config drive data.  I believe there have been some
>changes around file injection, but that shouldn't affect config drive as
>far as I know.
>
>If you just need that functionality and it's not working anymore then a
>bug might be appropriate, but if you need something else then a
>blueprint/spec will be needed.
>
>-Ben
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Sean Dague
On 06/02/2014 09:21 AM, Matthew Treinish wrote:

>> The url for this is -  http://goo.gl/g4aMjM
>>
>> (the long url is very long:
>> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
>>
>> The url can be regenerated easily using the gerrit-dash-creator.
>>
> 
> These generated URLs don't quite work as expected for me, I see a bunch of -1s
> from jenkins in all the sections. Other things like reviews with -2s showing up
> "in need final +2", or reviews with -2s and +2s from me being listed in the "but
> haven't voted in the current revision". Also the top section just seems to list
> every open QA program review regardless of its current review state.
> 
> I'll take a look at the code and see if I can help figure out what's going on.
It appears that there is some issue in Firefox vs. Gerrit here where
Firefox is incorrectly over unescaping the URL, thus it doesn't work.
Chrome works fine. As I'm on Linux that's the extent of what I can
natively test.

I filed a Firefox bug here -
https://bugzilla.mozilla.org/show_bug.cgi?id=1019073

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Meeting minutes

2014-06-02 Thread Timur Nurlygayanov
Hi all,

Thanks to all participants for visiting the Mistral meeting in
#openstack-meeting today!

The meeting minutes can be found by the following links:

Minutes:
http://eavesdrop.openstack.org/meetings/mistral_meeting/2014/mistral_meeting.2014-06-02-16.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/mistral_meeting/2014/mistral_meeting.2014-06-02-16.00.txt
Log:
http://eavesdrop.openstack.org/meetings/mistral_meeting/2014/mistral_meeting.2014-06-02-16.00.log.html


*Renat,* we discussed some questions at this meeting; let's review the open
ideas after your holiday.


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-02 Thread Ben Nemec
On 05/30/2014 06:58 AM, Jaromir Coufal wrote:
> On 2014/30/05 10:00, Thomas Spatzier wrote:
>> Excerpt from Zane Bitter's message on 29/05/2014 20:57:10:
>>
>>> From: Zane Bitter 
>>> To: openstack-dev@lists.openstack.org
>>> Date: 29/05/2014 20:59
>>> Subject: Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle
>>> collaborative meetup
>> 
>>> BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
>>> model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants
>>> to be stuck in Raleigh, NC on a weekend (I've lived there, I understand
>>> ;), but for folks who have a long ways to travel it's one weekend lost
>>> instead of two.
>>
>> +1 - excellent idea!
> 
> It looks like there is interest in these dates, so I added a 3rd option
> to the etherpad [0].
> 
> Once more, I would like to ask potential attendees to add yourselves to
> the dates which would work for you.
> 
> -- Jarda
> 
> [0] https://etherpad.openstack.org/p/juno-midcycle-meetup

Just to clarify, I should add my name to the list if I _can_ make it to
a given proposal, even if I don't know for sure that I will be going?

I don't know what the travel situation is yet so I can't commit to being
there on any dates, but I can certainly say which dates would work for
me if I can make it.

-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Sean Dague
On 06/02/2014 12:17 PM, Doug Hellmann wrote:
> On Mon, Jun 2, 2014 at 6:57 AM, Sean Dague  wrote:
>> Towards the end of the summit there was a discussion about us using a
>> shared review dashboard to see if a common view by the team would help
>> accelerate people looking at certain things. I spent some time this
>> weekend working on a tool to make building custom dashboard urls much
>> easier.
>>
>> My current proposal is the following, and would like comments on it:
>> https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash
>>
>> All items in the dashboard are content that you've not voted on in the
>> current patch revision, that you don't own, and that have passing
>> Jenkins test results.
>>
>> 1. QA Specs - these need more eyes, so we highlight them at top of page
>> 2. Patches that are older than 5 days, with no code review
>> 3. Patches that you are listed as a reviewer on, but haven't voted on the
>> current version
>> 4. Patches that already have a +2, so should be landable if you agree.
>> 5. Patches that have no negative code review feedback on them
>> 6. Patches older than 2 days, with no code review
>>
>> These are definitely a judgement call on what people should be looking
>> at, but this seems a pretty reasonable triaging list. I'm happy to have
>> a discussion on changes to this list.
>>
>> The url for this is -  http://goo.gl/g4aMjM
>>
>> (the long url is very long:
>> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
>>
>> The url can be regenerated easily using the gerrit-dash-creator.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> What do you think about tying this to the list of repositories in the
> governance repo (like https://review.openstack.org/#/c/92597/) and
> generating similar dashboards for all of the programs that way?

That would be possible; realistically, I expect that different programs
might think about their review flows differently. I consider the example
dashboards there to be just that, examples, which I find very useful (and
actively use most of them on a daily basis).

In the qa-program case we roughly agreed on those prioritization criteria
at the summit, so I feel like there is group buy-in. I wouldn't want to
assume that the review priorities we set automatically apply to other
programs' cultures.

IIRC the swift team had some other queries they found useful, hopefully
this would make it really easy for them to build a team dashboard (which
could be in the tree or not, however they feel like it).

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday June 3rd at 19:00 UTC

2014-06-02 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday June 3rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Passing flat_injected flag through instance metadata

2014-06-02 Thread Ben Nemec
On 05/30/2014 05:29 PM, ebaysf, yvempati wrote:
> Hello all,
> I am new to the openstack community and I am looking for feedback.
> 
> We would like to implement a feature that allows a user to pass the
> flat_injected flag through instance metadata. We would like to enable this
> feature for images that support config drive. This feature helps us to
> decrease the dependency on the DHCP server and to maintain a uniform
> configuration across all the hypervisors running in our cloud. In order to
> enable this feature, should I create a blueprint and implement it later, or
> can this feature be implemented by filing a bug?

I'm not sure I understand what you're trying to do here.  As I recall,
when flat_injected is set the static network configuration is already
included in the config drive data.  I believe there have been some
changes around file injection, but that shouldn't affect config drive as
far as I know.
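
For anyone following along, the behavior being discussed is driven by
nova.conf options roughly like the following (a sketch only, assuming the
Icehouse-era option names):

    [DEFAULT]
    # inject a static network template instead of relying on DHCP
    flat_injected = True
    # force a config drive so the network data is delivered with it
    force_config_drive = always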

If you just need that functionality and it's not working anymore then a
bug might be appropriate, but if you need something else then a
blueprint/spec will be needed.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Doug Hellmann
On Mon, Jun 2, 2014 at 6:57 AM, Sean Dague  wrote:
> Towards the end of the summit there was a discussion about us using a
> shared review dashboard to see if a common view by the team would help
> accelerate people looking at certain things. I spent some time this
> weekend working on a tool to make building custom dashboard urls much
> easier.
>
> My current proposal is the following, and would like comments on it:
> https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash
>
> All items in the dashboard are content that you've not voted on in the
> current patch revision, that you don't own, and that have passing
> Jenkins test results.
>
> 1. QA Specs - these need more eyes, so we highlight them at top of page
> 2. Patches that are older than 5 days, with no code review
> 3. Patches that you are listed as a reviewer on, but haven't voting on
> current version
> 4. Patches that already have a +2, so should be landable if you agree.
> 5. Patches that have no negative code review feedback on them
> 6. Patches older than 2 days, with no code review
>
> These are definitely a judgement call on what people should be looking
> at, but this seems a pretty reasonable triaging list. I'm happy to have
> a discussion on changes to this list.
>
> The url for this is -  http://goo.gl/g4aMjM
>
> (the long url is very long:
> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
>
> The url can be regenerated easily using the gerrit-dash-creator.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

What do you think about tying this to the list of repositories in the
governance repo (like https://review.openstack.org/#/c/92597/) and
generating similar dashboards for all of the programs that way?

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-06-02 Thread Mitsuhiro Tanino
Hi Deepak-san,

Thank you for your comment. Please see following comments.

>>1) There is a lot of manual work needed here.. like every time the new host
>>added.. admin needs to do FC zoning to ensure that LU is visible by the host.

Right. Compared to the LVMiSCSI driver, the proposed driver needs some
manual admin work.

>>Also the method you mentioend for refreshing (echo '---' > ...) doesn't work 
>>reliably across all storage types does it ?

The "echo" command is already used in rescan_hosts() at linuxfc.py before 
connecting new volume to a instance.
As you mentioned,  whether this command works properly or not depends on 
storage types.
Therefore, admin needs to confirm the command working properly.

>>2) In Slide 1-1 .. how ( and who?) ensures that the compute nodes don't step 
>>on each other is using the LVs ? In other words.. how is it ensured that LV1 
>>is not used by compute nodes 1 and 2 at the same time ?

In my understanding, Nova can't assign a single cinder volume (e.g. VOL1) to
multiple instances.
After attaching VOL1 to an instance, the status of VOL1 changes to "in-use"
and the user can't attach VOL1 to other instances.

>>3) In slide 1-2, you show that the LU1 is seen as /dev/sdx on all the nodes.. 
>>this is wrong.. it can be seen as anything (/dev/sdx on control node, sdn on 
>>compute 1, sdz on compute 2) so assumign sdx on all nodes is wrong. How does 
>>this different device names handled.. in short, how does compute node 2 knows 
>>that LU1 is actually sdn and not sdz (assuming you had > 1 LUs provisioned)

Right. The same device name may not be assigned on all nodes.
With my proposed driver, the admin needs to create the PV and VG manually.
Therefore, the nodes do not all have to recognize LU1 as /dev/sdx.

>>4) What abt multipath ? In most prod env.. the FC storage will be 
>>multipath'ed.. hence you will actually see sdx and sdy on each node and you 
>>actually need to use mpathN (which is multipathe'd to sdx anx sdy) device and 
>>NOT the sd? device to take adv of the customer multipath env. How does the 
>>nodes know which mpath? device to use and which mpath? device maps to which 
>>LU on the array ?

As I mentioned above, the admin creates the PV and VG manually with my
proposed driver. If a production environment uses multipath, the admin can
create the PV and VG on top of the mpath device, using
"pvcreate /dev/mpath/mpathX".

>>5) Doesnt this new proposal also causes the compute nodes to be physcially 
>>connected (via FC) to the array, which means more wiring and need for FC HBA 
>>on compute nodes. With LVMiSCSI, we don't need FC HBA on compute nodes so you 
>>are actualluy adding cost of each FC HBA to the compute nodes and slowly 
>>turning commodity system to non-commodity ;-) (in a way)

I think this depends on the customer's or cloud provider's requirements (in
slide P9).
If the requirement is a low-cost, non-FC cloud environment, LVMiSCSI is the
appropriate driver.
If better I/O performance is required, the proposed driver or a vendor
cinder storage driver with FC is appropriate, because these drivers can
issue I/O to volumes directly via FC.

>>6) Last but not the least... since you are using 1 BIG LU on the array to 
>>host multiple volumes, you cannot possibly take adv of the premium, efficient 
>>snapshot/clone/mirroring features of the array, since they are at LU level, 
>>not at the LV level. LV snapshots have limitations (as mentioned by you in 
>>other thread) and are always in-efficient compared to array snapshots. Why 
>>would someone want to use less efficient method when they invested on a 
>>expensive array ?

Right. If the user uses an array volume directly, they can take advantage of
the array's efficient snapshot/clone/mirroring features.

As I wrote in a reply e-mail to Avishay-san, in an OpenStack cloud
environment, storage workloads have been increasing and are difficult to
manage because every user has permission to execute storage operations via
cinder.
In order to use an expensive array more efficiently, I think it is better to
reduce the hardware-based storage workload by offloading it to
software-based volume operations on a case-by-case basis.

If we have two drivers for a given storage array, we can provide volumes
both ways as the situation demands.
Ex.
  For "Standard" type storage, use the proposed software-based LVM cinder
driver.
  For "High performance" type storage, use the hardware-based cinder driver
(e.g. at a higher charge than a "Standard" volume).

This is one of use-case of my proposed driver.

Regards,
Mitsuhiro Tanino 
 HITACHI DATA SYSTEMS
 c/o Red Hat, 314 Littleton Road, Westford, MA 01886

From: Deepak Shetty [mailto:dpkshe...@gmail.com]
Sent: Wednesday, May 28, 2014 3:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Support LVM on a shared LU

Mitsuhiro,
  Few questions that come to my mind based on your proposal

1) There is a lot of manual work needed here.. like every time the new host
added.

[openstack-dev] [barbican] Meeting Monday June 2nd at 20:00 UTC

2014-06-02 Thread Douglas Mendizabal
Hi Everyone,

The Barbican team is hosting our weekly meeting today, Monday June 2nd, at
20:00 UTC in #openstack-meeting-alt

Meeting agenda is available here
https://wiki.openstack.org/wiki/Meetings/Barbican and everyone is welcomed
to add agenda items.

You can check this link
http://time.is/0800PM_2_Jun_2014_in_UTC/CDT/EDT/PDT?Barbican_Weekly_Meeting
if you need to figure out what 20:00 UTC means in your time.

-Douglas Mendizábal





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-02 Thread Adam Nelson
I think that you would use the PyPI version anyway:

https://pypi.python.org/pypi/django-angular/0.7.2

That's how most of the other Python dependencies work, even in the
distribution packages.

--
Kili - Cloud for Africa: kili.io
Musings: twitter.com/varud 
More Musings: varud.com
About Adam: www.linkedin.com/in/adamcnelson


On Mon, Jun 2, 2014 at 6:01 PM, Musso, Veronica A <
veronica.a.mu...@intel.com> wrote:

> Hi,
>
> It seems there is an issue with the django-angular integration. The
> problem is it is not available in the Ubuntu/Fedora packages, and its
> developers are not planning to include it.
> What can I do in this case? Is there any workaround?
>
> Thanks!
> Veronica
>
> --
>
> Message: 16
> Date: Mon, 2 Jun 2014 11:42:46 +0200 (CEST)
> From: Maxime Vidori 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [Horizon] Use of AngularJS
> Message-ID:
> <194952217.9616469.1401702166482.javamail.zim...@enovance.com>
> Content-Type: text/plain; charset=utf-8
>
> Hello,
>
> This seems to be a good idea; I will take a look at this package, which
> seems to have a lot of features. The project seems pretty active, and I
> think it is a good idea to dig into this kind of package.
>
> - Original Message -
> From: "Veronica A Musso" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, May 29, 2014 5:30:04 PM
> Subject: [openstack-dev] [Horizon] Use of AngularJS
>
> Hello,
>
> During the last Summit the use of AngularJS in Horizon was discussed, and
> there is an intention to make better use of it in the dashboards.
> I think this blueprint could help:
> https://blueprints.launchpad.net/horizon/+spec/django-angular-integration,
> since it proposes the integration of Django-Angular (
> http://django-angular.readthedocs.org/en/latest/index.html).
> I would like to know the community's opinion about it, since I could start
> its implementation.
>
> Thanks!
>
> Best Regards,
> Veronica Musso
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-02 Thread Musso, Veronica A
Hi,

It seems there is an issue with the django-angular integration. The problem is 
it is not available in the Ubuntu/Fedora packages, and its developers are not 
planning to include it.
What can I do in this case? Is there any workaround?

Thanks!
Veronica

--

Message: 16
Date: Mon, 2 Jun 2014 11:42:46 +0200 (CEST)
From: Maxime Vidori 
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [Horizon] Use of AngularJS
Message-ID:
<194952217.9616469.1401702166482.javamail.zim...@enovance.com>
Content-Type: text/plain; charset=utf-8

Hello,

This seems to be a good idea; I will take a look at this package, which
seems to have a lot of features. The project seems pretty active, and I
think it is a good idea to dig into this kind of package.

- Original Message -
From: "Veronica A Musso" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, May 29, 2014 5:30:04 PM
Subject: [openstack-dev] [Horizon] Use of AngularJS

Hello,

During the last Summit the use of AngularJS in Horizon was discussed, and
there is an intention to make better use of it in the dashboards.
 I think this blueprint could help 
https://blueprints.launchpad.net/horizon/+spec/django-angular-integration, 
since it proposes the integration of Django-Angular 
(http://django-angular.readthedocs.org/en/latest/index.html).
I would like to know the community's opinion about it, since I could start
its implementation.

Thanks!

Best Regards,
Veronica Musso


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Kafka support and high throughput

2014-06-02 Thread Keith Newstadt
Thanks for the responses Flavio, Roland.

Some background on why I'm asking:  we're using Kafka as the message queue for 
a stream processing service we're building, which we're delivering to our 
internal customers as a service along with OpenStack.  We're considering 
building a high throughput ingest API to get the clients' data streams into the 
stream processing service.  It occurs to me that this API is simply a messaging 
API, and so I'm wondering if we should consider building this high throughput 
API as part of the Marconi project.

Has this topic come up in the Marconi team's discussions, and would it fit into 
the vision of the Marconi roadmap?

Thanks,
Keith Newstadt
keith_newst...@symantec.com
@knewstadt


Date: Sun, 1 Jun 2014 15:01:40 +
From: "Hochmuth, Roland M" 
To: OpenStack List 
Subject: Re: [openstack-dev] [Marconi] Kafka support and high
throughput
Message-ID: 
Content-Type: text/plain; charset="us-ascii"

There are some folks in HP evaluating different messaging technologies for
Marconi, such as RabbitMQ and Kafka. I'll ping them and maybe they can
share
some information.

On a related note, the Monitoring as a Service solution we are working
on uses Kafka. This was just open-sourced at,
https://github.com/hpcloud-mon,
and will be moving over to StackForge starting next week. The architecture
is at,
https://github.com/hpcloud-mon/mon-arch.

I haven't really looked at Marconi. If you are interested in
throughput, low latency, durability, scale, and fault tolerance, Kafka
seems like a great choice.

It has also been pointed out by various sources that Kafka could possibly
be another oslo.messaging transport. Are you looking into that? That would
be very interesting to me and is something on my task list that I haven't
gotten to yet.


On 5/30/14, 7:03 AM, "Keith Newstadt"  wrote:

>Has anyone given thought to using Kafka to back Marconi? And has there
>been discussion about adding high-throughput APIs to Marconi?
>
>We're looking at providing Kafka as a messaging service for our
>customers, in a scenario where throughput is a priority.  We've had good
>luck using both streaming HTTP interfaces and long poll interfaces to get
>high throughput for other web services we've built.  Would this use case
>be appropriate in the context of the Marconi roadmap?
>
>Thanks,
>Keith Newstadt
>keith_newst...@symantec.com
>




Keith Newstadt
Cloud Services Architect
Cloud Platform Engineering
Symantec Corporation 
www.symantec.com


Office: (781) 530-2299  Mobile: (617) 513-1321 
Email: keith_newst...@symantec.com
Twitter: @knewstadt




This message (including any attachments) is intended only for the use of the 
individual or entity to which it is addressed and may contain information that 
is non-public, proprietary, privileged, confidential, and exempt from 
disclosure under applicable law or may constitute as attorney work product. If 
you are not the intended recipient, you are hereby notified that any use, 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, notify us 
immediately by telephone and (i) destroy this message if a facsimile or (ii) 
delete this message immediately if this is an electronic communication.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Marconi] Adopt Spec

2014-06-02 Thread Malini Kamalambal
Hello all,

We are seeing more and more design questions in #openstack-marconi.
It would be a good idea to formalize our design process a bit more and
start using specs.
We are kind of late to the party - so we already have a lot of precedent
ahead of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] KMIP support

2014-06-02 Thread John Wood
Hello Robert,

Nathan Reller has created a blueprint for this effort here: 
https://blueprints.launchpad.net/barbican/+spec/kmip-secret-store

The first of several CRs to implement this feature is underway here: 
https://review.openstack.org/#/c/94710/

I'll defer to others regarding the open KMIP client status.

Thanks,
John



From: Clark, Robert Graham [robert.cl...@hp.com]
Sent: Sunday, June 01, 2014 2:17 PM
To: OpenStack List
Subject: [openstack-dev] [Barbican] KMIP support

All,

I’m researching a bunch of HSM applications and I’m struggling to find much
info. I was wondering about the progress of KMIP support in Barbican. Is
this waiting on an open Python KMIP implementation?

Also, is the “OpenStack KMIP Client” ever going to be a thing? 
(https://wiki.openstack.org/wiki/KMIPclient)

Cheers
-Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Ironic] [Ceilometer] [Horizon] [TripleO] Nodes Management UI - designs

2014-06-02 Thread Jay Dobies
Very nicely done, seeing this stuff laid out is really useful. A few 
comments:



= Page 3 =

* Nit: The rocker switch for power is a bit odd to me since it looks 
like it can be toggled.


* Can you show an example of a non-healthy node? Is it just an X instead 
of a check or are there different degrees/forms of unhealthy that can be 
discerned at this level?


* I didn't realize this until the next page and the nodes with bells on 
them, but there's no indication in this table of which node may have an 
alarm associated with it. Is there no way of viewing the node-alarm 
association from this view?



= Page 4 =

* I'm not trying to be a pain in the ass about the counts in the summary 
section, but they are kinda confusing me as I try to read this page 
without guidance.


** I see 26 nodes but it says 28. That's largely a test data nit that 
doesn't affect my understanding.


** It says 0 alarms, but I see three alarm bells. That one is a bit more
than test-data anal-retentiveness, since it's making me wonder if I'm
interpreting the bells correctly as alarms.


** It looks like this is a grid view, so I might be expecting too much,
but is there any sorting available based on status? I'm guessing the
columns in the previous view can be sorted (which will be very useful),
but without something similar here I question its effectiveness if I
can't group the alarmed or non-running machines together.



= Page 5 =

* I retract my previous statement about the sorting, the Group By 
example is what I was getting at. Can I drill into a particular group 
and see just those nodes?



= Page 6 =

* This is a cool idea, showing at the summary level why a node is 
unhealthy. What happens if it passes multiple thresholds? Do we just 
show one of the problematic values (assuming there's a priority to the 
metrics so we show the most important one)?



= Page 10 =

* Nit: The tags seem to take up prime screen real estate for something 
I'm not sure is terribly important on this page. Perhaps the intended 
use for them is more important than I'm giving credit.


* Is Flavors Consumption always displayed, or is that just the result of
the alarm? If it was unhealthy due to CPU usage, would that appear
instead or in addition?



= Page 11 =

* In this view, will we know about configured thresholds? I'm wondering 
if we can color or otherwise highlight more at-risk metrics to 
immediately grab the user's attention.



On 05/28/2014 05:18 PM, Jaromir Coufal wrote:

Hi All,

There are a lot of tags in the subject of this e-mail, but believe me, all
the listed projects (and even more) are relevant to the designs which I
am sending out.

The nodes management section in Horizon has been expected for a while, and
I am finally sharing the results of the design work around it.

http://people.redhat.com/~jcoufal/openstack/horizon/nodes/2014-05-28_nodes-ui.pdf


These views are based on modular approach and combination of multiple
services together; for example:
* Ironic - HW details and management
* Ceilometer - Monitoring graphs
* TripleO/Tuskar - Deployment Roles
etc.

Whenever some service is missing, that particular functionality should
be disabled and not displayed to a user.

I am sharing this without any longer description so that I can get
feedback on whether people can orient themselves in the UI without hints.
Of course you cannot get each and every detail without exploring, having
tooltips, etc. But the goal is for each view to express at least its main
purpose without explanation. If it does not, it needs to be fixed.

Next week I will organize a recorded broadcast where I will walk you
through the designs, explain high-level vision, details and I will try
to answer questions if you have any. So feel free to comment anything or
ask whatever comes to your mind here in this thread, so that I can cover
your concerns. Any feedback is very welcome - positive so that I know
what you think that works, as well as negative so that we can improve
the result before implementation.

Thank you all
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Sean Dague
On 06/02/2014 09:21 AM, Matthew Treinish wrote:
> On Mon, Jun 02, 2014 at 06:57:04AM -0400, Sean Dague wrote:
>> Towards the end of the summit there was a discussion about us using a
>> shared review dashboard to see if a common view by the team would help
>> accelerate people looking at certain things. I spent some time this
>> weekend working on a tool to make building custom dashboard urls much
>> easier.
>>
>> My current proposal is the following, and would like comments on it:
>> https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash
> 
> I like this idea; it's definitely a good idea to help prioritize certain
> reviews and streamline reviewing.
> 
> I'm wondering, do you think we'll eventually bring this into infra? I
> definitely get JJB vibes from this too, and I think having a similar
> workflow for creating dashboards would be awesome.

Site-level dashboard support for jeepyb is proposed here -
https://review.openstack.org/#/c/94260/

I'd also like project-level dashboards that could be approved by the
local core teams. A path to doing that well hasn't been sorted out yet;
until then, I figure we can work with client dashboards.

>> All items in the dashboard are content that you've not voted on in the
>> current patch revision, that you don't own, and that have passing
>> Jenkins test results.
>>
>> 1. QA Specs - these need more eyes, so we highlight them at top of page
>> 2. Patches that are older than 5 days, with no code review
>> 3. Patches that you are listed as a reviewer on, but haven't voted on the
>> current version
>> 4. Patches that already have a +2, so should be landable if you agree.
>> 5. Patches that have no negative code review feedback on them
>> 6. Patches older than 2 days, with no code review
>>
>> These are definitely a judgement call on what people should be looking
>> at, but this seems a pretty reasonable triaging list. I'm happy to have
>> a discussion on changes to this list.
> 
> I think this priority list is good for right now. I try to do this same basic
> prioritization when I'm doing reviews. (although maybe not using exact day 
> counts)
> Although, I'm hoping that eventually reviews on the qa-specs repo will be 
> active
> enough that we won't need to prioritize it over other repos. But, until then I
> think putting it at the top is the right move.
> 
>>
>> The url for this is -  http://goo.gl/g4aMjM
>>
>> (the long url is very long:
>> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
>>
>> The url can be regenerated easily using the gerrit-dash-creator.
>>
> These generated URLs don't quite work as expected for me, I see a bunch of -1s
> from jenkins in all the sections. 

They aren't Jenkins -1s; they are from other CI systems. Check "Display
Person Name In Review Category" under
https://review.openstack.org/#/settings/preferences to see that.

> Other things like reviews with -2s showing up in "need final +2",

I tended not to filter out -2s there, so it would be clearer that a
conflict is going on. Typically in those situations I vote -1 to say I
agree with the -2 that's there, and move on. Then it's been voted on, so it
drops from your list.

> or reviews with -2s and +2s from me being listed in the "but
> haven't voted in the current revision". 

That's odd, and definitely not intended.

> Also the top section just seems to list every open QA program review
> regardless of its current review state.

The top section does list every open qa-spec that you haven't voted on.
That was intentional: the point is that everyone should read them and vote
on them. Once you do, they disappear from your list.

Unlike normal reviews, I don't think masking qa-specs once they have a
single piece of negative feedback is the right workflow, especially as
there are few enough of them.

> I'll take a look at the code and see if I can help figure out what's going on.

Sure, pull requests welcomed. :)

-Sean

-- 
Sean Dague
http://dague.net




Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-02 Thread Matthias Runge
On Sat, May 31, 2014 at 09:13:35PM +, Jeremy Stanley wrote:
> I'll admit that my Web development expertise is probably almost 20
> years stale at this point, so forgive me if this is a silly
> question: what is the reasoning against working with the upstreams
> who do not yet distribute needed Javascript library packages to help
> them participate in the distribution channels you need? This strikes
> me as similar to forking a Python library which doesn't publish to
> PyPI, just so you can publish it to PyPI. When some of these
> dependencies begin to publish xstatic packages themselves, do the
> equivalent repositories in Gerrit get decommissioned at that point?

We need those libraries to be installable or provided as a Python package.
Using xstatic solves this for us in a very nice way.
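
To illustrate the pattern, here is a rough sketch of how an xstatic-packaged
library gets wired up in Django settings (module names are examples, not
necessarily what Horizon will use):

    import xstatic.main
    import xstatic.pkg.angular

    STATICFILES_DIRS = [
        ('horizon/lib/angular',
         xstatic.main.XStatic(xstatic.pkg.angular).base_dir),
    ]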

I must admit I'd prefer those (JavaScript) libraries to be installable as
(RPM) packages, but that makes things even more complicated, e.g. in the
gate. It seems many folks here try to avoid distro packages altogether.
-- 
Matthias Runge 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Proposal for shared review dashboard

2014-06-02 Thread Dmitry Tantsur
Hi folks,

Inspired by the great work by Sean Dague [1], I have created a review
dashboard for the Ironic projects. Main ideas:

Ordering:
0. Viewer's own patches, that have any kind of negative feedback
1. Specs
2. Changes w/o negative feedback, with +2 already
3. Changes that did not have any feedback for 5 days
4. Changes without negative feedback (no more than 50)
5. Other changes (no more than 20)

Shows only verified patches, except for 0 and 5.
Never shows WIP patches.

I'll be thankful for any tips on how to include prioritization from
Launchpad bugs.

Short link: http://goo.gl/hqRrRw
Long link: [2]

Source code (will create PR after discussion on today's meeting): 
https://github.com/Divius/gerrit-dash-creator
To generate a link, use:
$ ./gerrit-dash-creator dashboards/ironic.dash
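
For reference, the dash file itself is a small INI-style description of the
dashboard. An abridged sketch of dashboards/ironic.dash (queries shortened;
see the long link [2] for the full versions):

    [dashboard]
    title = Ironic Inbox
    description = Review inbox for the Ironic program
    foreach = (project:openstack/ironic OR
      project:openstack/python-ironicclient OR
      project:openstack/ironic-python-agent OR
      project:openstack/ironic-specs) status:open
      NOT label:Workflow<=-1 NOT label:Code-Review<=-2

    [section "My Patches Requiring Attention"]
    query = owner:self (label:Verified-1,jenkins OR label:Code-Review-1)

    [section "Ironic Specs"]
    query = NOT owner:self project:openstack/ironic-specs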

Dmitry.

[1] https://github.com/Divius/gerrit-dash-creator
[2] https://review.openstack.org/#/dashboard/?foreach=%28project%
3Aopenstack%2Fironic+OR+project%3Aopenstack%2Fpython-ironicclient+OR
+project%3Aopenstack%2Fironic-python-agent+OR+project%3Aopenstack%
2Fironic-specs%29+status%3Aopen+NOT+label%3AWorkflow%3C%3D-1+NOT+label%
3ACode-Review%3C%3D-2+NOT+label%3AWorkflow%3E%3D1&title=Ironic+Inbox&My
+Patches+Requiring+Attention=owner%3Aself+%28label%3AVerified-1%
252cjenkins+OR+label%3ACode-Review-1%29&Ironic+Specs=NOT+owner%3Aself
+project%3Aopenstack%2Fironic-specs&Needs+Approval=label%3AVerified%3E%
3D1%252cjenkins+NOT+owner%3Aself+label%3ACode-Review%3E%3D2+NOT+label%
3ACode-Review-1&5+Days+Without+Feedback=label%3AVerified%3E%3D1%
252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Fironic-specs+NOT
+label%3ACode-Review%3C%3D2+age%3A5d&No+Negative+Feedback=label%
3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%
2Fironic-specs+NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%
3E%3D2+limit%3A50&Other=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%
3Aself+NOT+project%3Aopenstack%2Fironic-specs+label%3ACode-Review-1
+limit%3A20



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New setuptools release seems to break some of the OS projects

2014-06-02 Thread Dina Belova
Folks,

setuptools 4.0.1 and 3.7.1 have been released - these should fix the issue.

-- Dina


On Mon, Jun 2, 2014 at 4:45 PM, Dina Belova  wrote:

> The newest setuptools has been removed from the gate mirror - the latest
> there now is 3.6 - so that *might* help in the gate.
> We'll see if it helps. It looks like there is a first successful Neutron
> job there :)
>
> -- Dina
>
>
> On Mon, Jun 2, 2014 at 4:36 PM, Eoghan Glynn  wrote:
>
>>
>>
>>
>> > Alex reported the bug against setuptools
>> > (
>> https://bitbucket.org/pypa/setuptools/issue/213/regression-setuptools-37-installation
>> )
>> > if you want to track progress.
>>
>>
>> Thanks Doug,
>>
>> In the meantime, I'm wondering: do we have any way of insulating
>> ourselves against breakages like this?
>>
>> (along the lines of a version-cap that we'd apply in the global
>> requirements.txt, for dependencies pulled in that way).
>>
>> Cheers,
>> Eoghan
>>
>>
>> > Doug
>> >
>> > On Mon, Jun 2, 2014 at 8:07 AM, Dina Belova 
>> wrote:
>> > > Folks, o/
>> > >
>> > > I did not find the appropriate discussion in the ML, so decided to
>> start it
>> > > myself - I see that new setuptools release seems to break at least
>> some of
>> > > the OpenStack gates and even more.
>> > >
>> > > Here is the bug: https://bugs.launchpad.net/ceilometer/+bug/1325514
>> > >
>> > > It hits Tempest, Ceilometer and Keystoneclient at least due to the
>> > > discussion in the bug.
>> > >
>> > > Some of the variants were discussed in the #openstack-infra channel,
>> but I
>> > > see no solution found.
>> > >
>> > > Do we have idea how to fix it?
>> > >
>> > > Best regards,
>> > >
>> > > Dina Belova
>> > >
>> > > Software Engineer
>> > >
>> > > Mirantis Inc.
>> > >
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-06-02 Thread boden

On 4/28/2014 2:58 PM, Dan Smith wrote:

I'd like to propose the ability to support a pluggable trove conductor
manager. Currently the trove conductor manager is hard-coded [1][2] and
thus is always 'trove.conductor.manager.Manager'. I'd like to see this
conductor manager class be pluggable like nova does [3].


Note that most of us don't like this and we're generally trying to get
rid of these sorts of things. I actually didn't realize that
conductor.manager was exposed in the CONF, and was probably just done to
mirror other similar settings.

Making arbitrary classes pluggable like this without a structured and
stable API is really just asking for trouble when people think it's a
pluggable interface.

So, you might not want to use "because nova does it" as a reason to add
it to trove like this :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Thanks for the input Dan.

Is the real concern here that the conductor API(s) and manager are 
coupled based on version?


If so, what if we took a versioned factory approach? For example:
(a) In conf, specify the version to use (by default) and a mapping from
version to conductor factory:


conductor_version = 1.1
conductor_factory = 1.0:trove.conductor.v1.factory.Factory, 
1.1:trove.conductor.v1_1.factory.Factory



(b) Define an abstract base factory which can create manager(s) and 
api(s) for a specific version:


class AbstractFactory(object):

    @staticmethod
    def manager_classname(manager_id=None):
        raise NotImplementedError()

    @staticmethod
    def api_classname(api_id=None):
        raise NotImplementedError()

    @staticmethod
    def version():
        raise NotImplementedError()


(c) For each version, define a concrete factory. For example in trove 
for version 1.0:


class Factory(AbstractFactory):

    @staticmethod
    def manager_classname(manager_id=None):
        return 'trove.conductor.manager.Manager'

    @staticmethod
    def api_classname(api_id=None):
        return 'trove.conductor.api.API'

    @staticmethod
    def version():
        return '1.0'


(d) Implement a simple interface to the factories so consumers can get a
factory instance to work with, e.g.:


factory = Factories.factory()  # factory selected via conductor_version
manager_cls = factory.manager_classname()  # 1.1 manager class name
api_cls = factory.api_classname()  # 1.1 API class name

# use standard import utils to load the actual classes...
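
For completeness, the Factories helper could be a thin wrapper over the conf
options in (a). A rough sketch, assuming an import_class() helper like the
one in openstack.common's importutils, and assuming conductor_factory has
already been parsed into a version-to-classpath dict (none of this exists in
trove today):

    class Factories(object):
        """Resolve a conductor factory from configuration."""

        @staticmethod
        def factory(version=None):
            # default to the configured conductor_version
            version = version or CONF.conductor_version
            classpath = CONF.conductor_factory[version]
            return importutils.import_class(classpath)()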


Would such an approach mitigate some of your concerns? If not, please
elaborate on the approach you'd prefer. Also please keep in mind that I'm
proposing the above for the trove conductor (at least initially).


Thanks



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Matthew Treinish
On Mon, Jun 02, 2014 at 06:57:04AM -0400, Sean Dague wrote:
> Towards the end of the summit there was a discussion about us using a
> shared review dashboard to see if a common view by the team would help
> accelerate people looking at certain things. I spent some time this
> weekend working on a tool to make building custom dashboard urls much
> easier.
> 
> My current proposal is the following, and would like comments on it:
> https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash

I like this idea; it's definitely a good idea to help prioritize certain
reviews and streamline reviewing.

I'm wondering, do you think we'll eventually bring this into infra? I
definitely get JJB vibes from this too, and I think having a similar
workflow for creating dashboards would be awesome.

> 
> All items in the dashboard are content that you've not voted on in the
> current patch revision, that you don't own, and that have passing
> Jenkins test results.
> 
> 1. QA Specs - these need more eyes, so we highlight them at top of page
> 2. Patches that are older than 5 days, with no code review
> 3. Patches that you are listed as a reviewer on, but haven't voted on the
> current version
> 4. Patches that already have a +2, so should be landable if you agree.
> 5. Patches that have no negative code review feedback on them
> 6. Patches older than 2 days, with no code review
> 
> These are definitely a judgement call on what people should be looking
> at, but this seems a pretty reasonable triaging list. I'm happy to have
> a discussion on changes to this list.

I think this priority list is good for right now. I try to do this same
basic prioritization when I'm doing reviews (although maybe not using exact
day counts). I'm hoping that eventually reviews on the qa-specs repo will be
active enough that we won't need to prioritize it over other repos. But,
until then, I think putting it at the top is the right move.

> 
> The url for this is -  http://goo.gl/g4aMjM
> 
> (the long url is very long:
> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
> 
> The url can be regenerated easily using the gerrit-dash-creator.
> 

These generated URLs don't quite work as expected for me, I see a bunch of
-1s from jenkins in all the sections. Other things like reviews with -2s
showing up in "need final +2", or reviews with -2s and +2s from me being
listed in the "but haven't voted in the current revision". Also the top
section just seems to list every open QA program review regardless of its
current review state.

I'll take a look at the code and see if I can help figure out what's going on.

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-06-02 Thread Sergii Golovatiuk
Hi crew,

Thank you for starting this topic. I've already done the research and
started a blueprint. Since we changed our blueprint strategy, I wrote it in
RST format and added it to the Gerrit workflow. Feel free to participate.

https://review.openstack.org/#/c/97191/
http://docs-draft.openstack.org/91/97191/8/check/gate-fuel-specs-docs/a1e7c72/doc/build/html/specs/5.1/pacemaker-galera-resource-agent.html

It's still a draft, as some discussions/tests are still going on.

> The PoC RA gets the GTID from SQL (SHOW STATUS LIKE 'wsrep_last_committed')
> if MySQL is running; otherwise the RA starts mysqld with --wsrep-recover. I
> skipped grastate.dat because in all my tests this file had commit_id set
> to -1.

The Percona way is more robust, as they can restore the state even when the
CIB becomes corrupted and the whole cluster goes down (power outage).
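
For readers following along, the GTID recovery described above boils down
to roughly the following (a sketch only - the exact output parsing is an
assumption, not the actual RA code):

    # if mysqld is up, ask it directly
    mysql -e "SHOW STATUS LIKE 'wsrep_last_committed'"
    # otherwise, recover the position from InnoDB
    mysqld --wsrep-recover 2>&1 | grep 'Recovered position'
    # then publish it as a pacemaker node attribute
    crm_attribute --node $HOSTNAME --lifetime forever --name gtid \
      --update $GTID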

~Sergii




On Mon, Jun 2, 2014 at 3:09 PM, Bartosz Kupidura 
wrote:

> Vladimir,
>
>
> Message written by Vladimir Kuklin on 2 Jun 2014, at 13:49:
>
> > Bartosz, if you look into what Percona guys are doing - you will see
> here:
> https://github.com/percona/percona-pacemaker-agents/blob/new_pxc_ra/agents/pxc_resource_agent#L516
> that they first try to use MySQL and then to get GTID from grastate.dat.
> Also, I am wondering if you are using cluster-wide attributes instead of
> node-attributes. If you use node-scoped attributes, then shadow/commit
> commands should not affect anything.
>
> The PoC RA gets the GTID from SQL (SHOW STATUS LIKE 'wsrep_last_committed')
> if MySQL is running; otherwise the RA starts mysqld with --wsrep-recover. I
> skipped grastate.dat because in all my tests this file had commit_id set
> to -1.
>
> In the PoC I use only node attributes (crm_attribute --node $HOSTNAME
> --lifetime forever --name gtid --update $GTID).
>
> >
> >
> > On Mon, Jun 2, 2014 at 2:34 PM, Bogdan Dobrelya 
> wrote:
> > On 05/29/2014 02:06 PM, Bartosz Kupidura wrote:
> > > Hello,
> > >
> > >
> > Message written by Vladimir Kuklin on 29 May 2014, at 12:09:
> > >
> > >> may be the problem is that you are using liftetime crm attributes
> instead of 'reboot' ones. shadow/commit is used by us because we need
> transactional behaviour in some cases. if you turn crm_shadow off, then you
> will experience problems with multi-state resources and
> location/colocation/order constraints. so we need to find a way to make
> commits transactional. there are two ways:
> > >> 1) rewrite corosync providers to use crm_diff command and apply it
> instead of shadow commit that can swallow cluster attributes sometimes
> > >
> > > In the PoC I removed all cs_commit/cs_shadow, and it looks like
> > > everything is working. But as you say, this can lead to problems with
> > > more complicated deployments.
> > > This needs to be verified.
> > >
> > >> 2) store 'reboot' attributes instead of lifetime ones
> > >
> > > I tested with --lifetime forever and reboot. No difference for the
> > > cs_commit/cs_shadow failure.
> > >
> > > Moreover, we need a method to store the GTID permanently (to support
> > > a whole-cluster reboot).
> >
> > Please note, the GTID can always be fetched from
> > /var/lib/mysql/grastate.dat on the galera node
> >
> > > If we want to stick to cs_commit/cs_shadow, we need a method other
> > > than crm_attribute to store the GTID.
> >
> > We could use a modified ocf::pacemaker:SysInfo resource. We could put
> > the GTID there and use it in a similar way as I did for the fencing
> > PoC [0] (for free space monitoring)
> >
> > [0]
> >
> https://github.com/bogdando/fuel-library-1/blob/ha_fencing_WIP/deployment/puppet/cluster/manifests/fencing_primitives.pp#L41-L70
> >
> > >
> > >>
> > >>
> > >>
> > >> On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya <
> bdobre...@mirantis.com> wrote:
> > >> On 05/27/14 16:44, Bartosz Kupidura wrote:
> > >>> Hello,
> > >>> Responses inline.
> > >>>
> > >>>
> > >>> Message written by Vladimir Kuklin on 27 May 2014, at 15:12:
> > >>>
> >  Hi, Bartosz
> > 
> >  First of all, we are using openstack-dev for such discussions.
> > 
> >  Second, there is also Percona's RA for Percona XtraDB Cluster,
> which looks like pretty similar, although it is written in Perl. May be we
> could derive something useful from it.
> > 
> >  Next, if you are working on this stuff, let's make it as open for
> the community as possible. There is a blueprint for Galera OCF script:
> https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script.
> It would be awesome if you wrote down the specification and sent  newer
> galera ocf code change request to fuel-library gerrit.
> > >>>
> > >>> Sure, I will update this blueprint.
> > >>> Change request in fuel-library:
> https://review.openstack.org/#/c/95764/
> > >>
> > >> That is a really nice catch, Bartosz, thank you. I believe we should
> > >> review the new OCF script thoroughly and consider omitting
> > >> cs_commits/cs_shadows as well. What would be the downsides?
> > >>
> > >>>
> > 
> >  Speaking of crm_attribute stu

Re: [openstack-dev] New setuptools release seems to break some of the OS projects

2014-06-02 Thread Dina Belova
The newest setuptools has been removed from the gate mirror - the latest
there now is 3.6 - so that *might* help in the gate.
We'll see if it helps. It looks like there is a first successful Neutron
job there :)

-- Dina


On Mon, Jun 2, 2014 at 4:36 PM, Eoghan Glynn  wrote:

>
>
>
> > Alex reported the bug against setuptools
> > (
> https://bitbucket.org/pypa/setuptools/issue/213/regression-setuptools-37-installation
> )
> > if you want to track progress.
>
>
> Thanks Doug,
>
> In the meantime, I'm wondering: do we have any way of insulating
> ourselves against breakages like this?
>
> (along the lines of a version-cap that we'd apply in the global
> requirements.txt, for dependencies pulled in that way).
>
> Cheers,
> Eoghan
>
>
> > Doug
> >
> > On Mon, Jun 2, 2014 at 8:07 AM, Dina Belova 
> wrote:
> > > Folks, o/
> > >
> > > I did not find the appropriate discussion in the ML, so decided to
> start it
> > > myself - I see that new setuptools release seems to break at least
> some of
> > > the OpenStack gates and even more.
> > >
> > > Here is the bug: https://bugs.launchpad.net/ceilometer/+bug/1325514
> > >
> > > It hits Tempest, Ceilometer and Keystoneclient at least due to the
> > > discussion in the bug.
> > >
> > > Some of the variants were discussed in the #openstack-infra channel,
> but I
> > > see no solution found.
> > >
> > > Do we have idea how to fix it?
> > >
> > > Best regards,
> > >
> > > Dina Belova
> > >
> > > Software Engineer
> > >
> > > Mirantis Inc.
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New setuptools release seems to break some of the OS projects

2014-06-02 Thread Eoghan Glynn



> Alex reported the bug against setuptools
> (https://bitbucket.org/pypa/setuptools/issue/213/regression-setuptools-37-installation)
> if you want to track progress.


Thanks Doug,

In the meantime, I'm wondering: do we have any way of insulating
ourselves against breakages like this?

(along the lines of a version-cap that we'd apply in the global
requirements.txt, for dependencies pulled in that way).
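
For illustration, I mean a pin along these lines in the global
requirements.txt (version numbers hypothetical, and setuptools itself is
pulled in by the tooling rather than our requirements files, so it may not
be that simple):

    setuptools!=3.7,<4.0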

Cheers,
Eoghan

 
> Doug
> 
> On Mon, Jun 2, 2014 at 8:07 AM, Dina Belova  wrote:
> > Folks, o/
> >
> > I did not find the appropriate discussion in the ML, so decided to start it
> > myself - I see that new setuptools release seems to break at least some of
> > the OpenStack gates and even more.
> >
> > Here is the bug: https://bugs.launchpad.net/ceilometer/+bug/1325514
> >
> > It hits Tempest, Ceilometer and Keystoneclient at least due to the
> > discussion in the bug.
> >
> > Some of the variants were discussed in the #openstack-infra channel, but I
> > see no solution found.
> >
> > Do we have idea how to fix it?
> >
> > Best regards,
> >
> > Dina Belova
> >
> > Software Engineer
> >
> > Mirantis Inc.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New setuptools release seems to break some of the OS projects

2014-06-02 Thread Dina Belova
Doug, thanks, I will track the bug's resolution there.


On Mon, Jun 2, 2014 at 4:17 PM, Doug Hellmann 
wrote:

> Alex reported the bug against setuptools
> (
> https://bitbucket.org/pypa/setuptools/issue/213/regression-setuptools-37-installation
> )
> if you want to track progress.
>
> Doug
>
> On Mon, Jun 2, 2014 at 8:07 AM, Dina Belova  wrote:
> > Folks, o/
> >
> > I did not find an appropriate discussion in the ML, so I decided to start
> > it myself - I see that the new setuptools release seems to break at least
> > some of the OpenStack gates, and more besides.
> >
> > Here is the bug: https://bugs.launchpad.net/ceilometer/+bug/1325514
> >
> > According to the discussion in the bug, it hits at least Tempest,
> > Ceilometer, and Keystoneclient.
> >
> > Some options were discussed in the #openstack-infra channel, but no
> > solution was found.
> >
> > Do we have an idea how to fix it?
> >
> > Best regards,
> >
> > Dina Belova
> >
> > Software Engineer
> >
> > Mirantis Inc.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova-compute rpc version

2014-06-02 Thread abhishek jain
Hi


I'm getting the following error in the nova-compute logs when trying to boot
a VM from the controller node onto a compute node:

 Specified RPC version, 3.23, not supported

Please help regarding this.


Thanks
Abhishek Jain
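(A common cause of this error is that the sender and nova-compute are running
different code versions, so the compute node does not understand RPC version
3.23. One hedged sketch of a workaround - assuming the compute node runs an
older release such as Havana; the cap value must match what it actually runs:

  $ openstack-config --set /etc/nova/nova.conf upgrade_levels compute havana
  $ service nova-conductor restart

Upgrading nova-compute to match the controller is the cleaner fix.)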
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New setuptools release seems to break some of the OS projects

2014-06-02 Thread Doug Hellmann
Alex reported the bug against setuptools
(https://bitbucket.org/pypa/setuptools/issue/213/regression-setuptools-37-installation)
if you want to track progress.

Doug

On Mon, Jun 2, 2014 at 8:07 AM, Dina Belova  wrote:
> Folks, o/
>
> I did not find an appropriate discussion in the ML, so I decided to start it
> myself - I see that the new setuptools release seems to break at least some
> of the OpenStack gates, and more besides.
>
> Here is the bug: https://bugs.launchpad.net/ceilometer/+bug/1325514
>
> According to the discussion in the bug, it hits at least Tempest,
> Ceilometer, and Keystoneclient.
>
> Some options were discussed in the #openstack-infra channel, but no
> solution was found.
>
> Do we have an idea how to fix it?
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-06-02 Thread Bartosz Kupidura
Vladimir,


Message from Vladimir Kuklin on 2 Jun 2014, at 13:49:

> Bartosz, if you look into what the Percona guys are doing, you will see here: 
> https://github.com/percona/percona-pacemaker-agents/blob/new_pxc_ra/agents/pxc_resource_agent#L516
> that they first try to use MySQL and then fall back to getting the GTID from 
> grastate.dat. Also, I am wondering if you are using cluster-wide attributes 
> instead of node attributes. If you use node-scoped attributes, then 
> shadow/commit commands should not affect anything.

The PoC RA gets the GTID from SQL (SHOW STATUS LIKE 'wsrep_last_committed') if
MySQL is running; otherwise the RA starts mysqld with --wsrep-recover. I
skipped grastate.dat because in all my tests this file had commit_id set to -1.

In the PoC I use only node attributes (crm_attribute --node $HOSTNAME
--lifetime forever --name gtid --update $GTID).
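For reference, that probe boils down to something like this sketch (the exact
--wsrep-recover output parsing is an assumption and would need checking):

  if mysqladmin ping >/dev/null 2>&1; then
      # server is up: ask it for the last committed seqno
      GTID=$(mysql -N -s -e "SHOW STATUS LIKE 'wsrep_last_committed'" | awk '{print $2}')
  else
      # server is down: recover the position from the data dir, parse the seqno
      GTID=$(mysqld --wsrep-recover 2>&1 | sed -n 's/.*Recovered position: .*:\(.*\)/\1/p')
  fi
  crm_attribute --node $HOSTNAME --lifetime forever --name gtid --update $GTID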

> 
> 
> On Mon, Jun 2, 2014 at 2:34 PM, Bogdan Dobrelya  
> wrote:
> On 05/29/2014 02:06 PM, Bartosz Kupidura wrote:
> > Hello,
> >
> >
> > Message from Vladimir Kuklin on 29 May 2014, at 12:09:
> >
> >> maybe the problem is that you are using lifetime crm attributes instead 
> >> of 'reboot' ones. shadow/commit is used by us because we need 
> >> transactional behaviour in some cases. if you turn crm_shadow off, then 
> >> you will experience problems with multi-state resources and 
> >> location/colocation/order constraints. so we need to find a way to make 
> >> commits transactional. there are two ways:
> >> 1) rewrite corosync providers to use crm_diff command and apply it instead 
> >> of shadow commit that can swallow cluster attributes sometimes
> >
> > In the PoC I removed all cs_commit/cs_shadow, and it looks like everything 
> > is working. But as you say, this can lead to problems with more complicated 
> > deployments.
> > This needs to be verified.
> >
> >> 2) store 'reboot' attributes instead of lifetime ones
> >
> > I tested with --lifetime forever and with reboot. No difference for the 
> > cs_commit/cs_shadow failure.
> >
> > Moreover, we need a method to store the GTID permanently (to support 
> > whole-cluster reboot).
> 
> Please note, GTID could always be fetched from the
> /var/lib/mysql/grastate.dat at the galera node
> 
> > If we want to stick to cs_commit/cs_shadow, we need a method other than 
> > crm_attribute to store the GTID.
> 
> We could use a modified ocf::pacemaker:SysInfo resource. We could put
> GTID there and use it the similar way as I did for fencing PoC[0] (for
> free space monitoring)
> 
> [0]
> https://github.com/bogdando/fuel-library-1/blob/ha_fencing_WIP/deployment/puppet/cluster/manifests/fencing_primitives.pp#L41-L70
> 
> >
> >>
> >>
> >>
> >> On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya  
> >> wrote:
> >> On 05/27/14 16:44, Bartosz Kupidura wrote:
> >>> Hello,
> >>> Responses inline.
> >>>
> >>>
> >>> Message from Vladimir Kuklin on 27 May 2014, at 15:12:
> >>>
>  Hi, Bartosz
> 
>  First of all, we are using openstack-dev for such discussions.
> 
>  Second, there is also Percona's RA for Percona XtraDB Cluster, which 
>  looks pretty similar, although it is written in Perl. Maybe we 
>  could derive something useful from it.
> 
>  Next, if you are working on this stuff, let's make it as open for the 
>  community as possible. There is a blueprint for Galera OCF script: 
>  https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script. 
>  It would be awesome if you wrote down the specification and sent the newer 
>  galera ocf code change request to fuel-library gerrit.
> >>>
> >>> Sure, I will update this blueprint.
> >>> Change request in fuel-library: https://review.openstack.org/#/c/95764/
> >>
> >> That is a really nice catch, Bartosz, thank you. I believe we should
> >> review the new OCF script thoroughly and consider omitting
> >> cs_commits/cs_shadows as well. What would be the downsides?
> >>
> >>>
> 
>  Speaking of crm_attribute stuff. I am very surprised that you are saying 
>  that node attributes are altered by crm shadow commit. We are using 
>  similar approach in our scripts and have never faced this issue.
> >>>
> >>> This is probably because you update crm_attribute very rarely, while with 
> >>> my approach the GTID attribute is updated every 60s on every node (3 
> >>> updates per 60s in a standard HA setup).
> >>>
> >>> You can try to update any attribute in loop during deploying cluster to 
> >>> trigger fail with corosync diff.
> >>
> >> It sounds reasonable and we should verify it.
> >> I've updated the statuses for related bugs and attached them to the
> >> aforementioned blueprint as well:
> >> https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
> >> https://bugs.launchpad.net/fuel/+bug/1281592/comments/6
> >>
> >>
> >>>
> 
>  Corosync 2.x support is in our roadmap, but we are not sure that we will 
>  use Corosync 2.x earlier than the start of the 6.x release series.
> >>

[openstack-dev] New setuptools release seems to break some of the OS projects

2014-06-02 Thread Dina Belova
Folks, o/

I did not find an appropriate discussion in the ML, so I decided to start it
myself - I see that the new setuptools release seems to break at least some of
the OpenStack gates, and more besides.

Here is the bug: https://bugs.launchpad.net/ceilometer/+bug/1325514

According to the discussion in the bug, it hits at least Tempest, Ceilometer,
and Keystoneclient.

Some options were discussed in the #openstack-infra channel, but no solution
was found.

Do we have an idea how to fix it?

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-06-02 Thread Vladimir Kuklin
Bartosz, if you look into what the Percona guys are doing, you will see here:
https://github.com/percona/percona-pacemaker-agents/blob/new_pxc_ra/agents/pxc_resource_agent#L516
that they first try to use MySQL and then fall back to getting the GTID from
grastate.dat. Also, I am wondering if you are using cluster-wide attributes
instead of node attributes. If you use node-scoped attributes, then
shadow/commit commands should not affect anything.


On Mon, Jun 2, 2014 at 2:34 PM, Bogdan Dobrelya 
wrote:

> On 05/29/2014 02:06 PM, Bartosz Kupidura wrote:
> > Hello,
> >
> >
> > Message from Vladimir Kuklin on 29 May 2014, at 12:09:
> >
> >> maybe the problem is that you are using lifetime crm attributes
> instead of 'reboot' ones. shadow/commit is used by us because we need
> transactional behaviour in some cases. if you turn crm_shadow off, then you
> will experience problems with multi-state resources and
> location/colocation/order constraints. so we need to find a way to make
> commits transactional. there are two ways:
> >> 1) rewrite corosync providers to use crm_diff command and apply it
> instead of shadow commit that can swallow cluster attributes sometimes
> >
> > In the PoC I removed all cs_commit/cs_shadow, and it looks like everything
> is working. But as you say, this can lead to problems with more complicated
> deployments.
> > This needs to be verified.
> >
> >> 2) store 'reboot' attributes instead of lifetime ones
> >
> > I tested with --lifetime forever and with reboot. No difference for the
> cs_commit/cs_shadow failure.
> >
> > Moreover, we need a method to store the GTID permanently (to support
> whole-cluster reboot).
>
> Please note, GTID could always be fetched from the
> /var/lib/mysql/grastate.dat at the galera node
>
> > If we want to stick to cs_commit/cs_shadow, we need a method other than
> crm_attribute to store the GTID.
>
> We could use a modified ocf::pacemaker:SysInfo resource. We could put
> GTID there and use it the similar way as I did for fencing PoC[0] (for
> free space monitoring)
>
> [0]
>
> https://github.com/bogdando/fuel-library-1/blob/ha_fencing_WIP/deployment/puppet/cluster/manifests/fencing_primitives.pp#L41-L70
>
> >
> >>
> >>
> >>
> >> On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya <
> bdobre...@mirantis.com> wrote:
> >> On 05/27/14 16:44, Bartosz Kupidura wrote:
> >>> Hello,
> >>> Responses inline.
> >>>
> >>>
> >>> Message from Vladimir Kuklin on 27 May 2014, at 15:12:
> >>>
>  Hi, Bartosz
> 
>  First of all, we are using openstack-dev for such discussions.
> 
>  Second, there is also Percona's RA for Percona XtraDB Cluster, which
> looks pretty similar, although it is written in Perl. Maybe we could
> derive something useful from it.
> 
>  Next, if you are working on this stuff, let's make it as open for the
> community as possible. There is a blueprint for Galera OCF script:
> https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script.
> It would be awesome if you wrote down the specification and sent the newer
> galera ocf code change request to fuel-library gerrit.
> >>>
> >>> Sure, I will update this blueprint.
> >>> Change request in fuel-library:
> https://review.openstack.org/#/c/95764/
> >>
> >> That is a really nice catch, Bartosz, thank you. I believe we should
> >> review the new OCF script thoroughly and consider omitting
> >> cs_commits/cs_shadows as well. What would be the downsides?
> >>
> >>>
> 
>  Speaking of crm_attribute stuff. I am very surprised that you are
> saying that node attributes are altered by crm shadow commit. We are using
> similar approach in our scripts and have never faced this issue.
> >>>
> >>> This is probably because you update crm_attribute very rarely, while
> with my approach the GTID attribute is updated every 60s on every node (3
> updates per 60s in a standard HA setup).
> >>>
> >>> You can try to update any attribute in loop during deploying cluster
> to trigger fail with corosync diff.
> >>
> >> It sounds reasonable and we should verify it.
> >> I've updated the statuses for related bugs and attached them to the
> >> aforementioned blueprint as well:
> >> https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
> >> https://bugs.launchpad.net/fuel/+bug/1281592/comments/6
> >>
> >>
> >>>
> 
>  Corosync 2.x support is in our roadmap, but we are not sure that we
> will use Corosync 2.x earlier than the start of the 6.x release series.
> >>>
> >>> Yeah, moreover corosync CMAP is not synced between cluster nodes (or
> maybe I'm doing something wrong?). So we need another solution for this...
> >>>
> >>
> >> We should use CMAN for Corosync 1.x, perhaps.
> >>
> 
> 
>  On Tue, May 27, 2014 at 3:08 PM, Bartosz Kupidura <
> bkupid...@mirantis.com> wrote:
>  Hello guys!
>  I would like to start discussion on a new resource agent for
> galera/pacemaker.
> 
>  Main features:
>  * Support cluster bootstrap
>  * Support reboot any node in cluster
>  * Support reboot whole cluster

[openstack-dev] [qa] shared review dashboard proposal

2014-06-02 Thread Sean Dague
Towards the end of the summit there was a discussion about us using a
shared review dashboard to see if a common view by the team would help
accelerate people looking at certain things. I spent some time this
weekend working on a tool to make building custom dashboard URLs much
easier.

My current proposal is the following, and I would like comments on it:
https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash

All items in the dashboard are content that you've not voted on in the
current patch revision, that you don't own, and that have passing
Jenkins test results.

1. QA Specs - these need more eyes, so we highlight them at top of page
2. Patches that are older than 5 days, with no code review
3. Patches that you are listed as a reviewer on, but haven't voted on the
current version
4. Patches that already have a +2, so should be landable if you agree.
5. Patches that have no negative code review feedback on them
6. Patches older than 2 days, with no code review

These are definitely a judgement call on what people should be looking
at, but this seems like a pretty reasonable triaging list. I'm happy to
have a discussion on changes to this list.

The URL for this is - http://goo.gl/g4aMjM

(the full URL is very long:
https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d

The URL can be regenerated easily using the gerrit-dash-creator.
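For example (a sketch, assuming the repo layout at the time of writing):

  $ git clone https://github.com/sdague/gerrit-dash-creator
  $ cd gerrit-dash-creator
  $ ./gerrit-dash-creator dashboards/qa-program.dash

which prints the dashboard URL to stdout.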

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] [mistral] [heat] keystone chained trusts / oauth

2014-06-02 Thread Steven Hardy
Hi Angus,

On Wed, May 28, 2014 at 12:56:52AM +, Angus Salkeld wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Hi all
> 
> During our Solum meeting it was felt we should make sure that all three
> teams are on the same page wrt $subject.
> 
> I'll describe the use case we are trying to solve and hopefully get some
> guidance from the keystone team about the best way forward.
> 
> Solum implements a ci/cd pipeline that we want to trigger based on a git
> receive hook. What we do is generate a magic webhook (it should be an
> ec2-signed URL - on the todo list), and when it is hit we want
> to call mistral-execution-create (which runs a workflow that calls
> to other openstack services (heat is one of them).
> 
> We currently use a trust token and that fails because both mistral and
> heat want to create trust tokens as well :-O (trust tokens can't be
> rescoped).

So, I've been looking into this, and there are two issues:

1. On stack-create, heat needs to create a trust so it can do deferred
operations on behalf of the user.  To do this we will require explicit
support for chained delegation in keystone, which does not currently exist.
I've been speaking to ayoung about it, and plan to raise a spec for this
work soon.  The best quick-fix is probably to always create a stack when
the user calls Solum (even if it's an empty stack), using their
non-trust-scoped token.

2. Heat doesn't currently work (even for non-create operations) with a
trust-scoped token.  The reason for this is primarily that keystoneclient
always tries to request a new token to populate the auth_ref (e.g service
catalog etc), so there is no way to just validate the existing trust-scoped
token.  AFAICS this requires a new keystoneclient auth plugin, which I'm
working on right now, I already posted a patch for the heat part of the
fix:

https://review.openstack.org/#/c/96452/

> 
> So what is the best mechanism for this? I spoke to Steven Hardy at
> summit and he suggested (after talking to some keystone folks) we all
> move to using the new oauth functionality in keystone.
> 
> I believe there might be some limitations to oauth (are roles supported?).

I spent a bit of time digging into oauth last week, based on this example
provided by Steve Martinelli:

https://review.openstack.org/#/c/80193/

Currently, I can't see how we can use this as a replacement for our current
use-cases for trusts:
1. There doesn't seem to be any non-global way to prevent oauth access keys
from expiring.  We need delegation to last the (indefinite) lifetime of the
heat stack, so the delegation cannot expire.
2. Most (all?) of the oauth interfaces are admin-only.  I'm not clear if
this is a blocker, but it seems like it's the opposite of what we currently
do with trusts, where a (non-admin) user can delegate a subset of their
roles via a trust, which is created using their token.

What would be *really* helpful is if we could work towards another
example, which demonstrates something closer to the Solum/Heat use-case for
delegation (as opposed to the current example which just shows an admin
delegating their admin-ness).

e.g (these users/roles exist by default in devstack deployments):

1. User "demo" delegates "heat_stack_owner" role in project "demo" to the
"heat" service user.  The resulting delegation-secret to be stored by heat
must not expire, and it must be possible for the "heat" user to explicitly
impersonate user "demo".
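Expressed against the raw v3 API, that delegation would look roughly like the
following sketch (endpoint, IDs, and token handling are assumed):

  curl -s http://127.0.0.1:5000/v3/OS-TRUST/trusts \
    -H "X-Auth-Token: $DEMO_TOKEN" -H "Content-Type: application/json" \
    -d '{"trust": {"trustor_user_id": "'$DEMO_USER_ID'",
                   "trustee_user_id": "'$HEAT_USER_ID'",
                   "project_id": "'$DEMO_PROJECT_ID'",
                   "impersonation": true,
                   "roles": [{"name": "heat_stack_owner"}]}}'

(no expires_at is given, so the trust never expires).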

Until we can see how that use-case can be solved with oauth, I don't think
we can make any progress on actually adopting it.

The next part of the use-case would be working out either how a delegation
secret can be shared between services (e.g. Solum/Heat), or how delegation
can be chained between services, but the first thing is working out the
basic user->service delegation model IMO.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-06-02 Thread Bogdan Dobrelya
On 05/29/2014 02:06 PM, Bartosz Kupidura wrote:
> Hello,
> 
> 
> Message from Vladimir Kuklin on 29 May 2014, at 12:09:
> 
>> maybe the problem is that you are using lifetime crm attributes instead of 
>> 'reboot' ones. shadow/commit is used by us because we need transactional 
>> behaviour in some cases. if you turn crm_shadow off, then you will 
>> experience problems with multi-state resources and location/colocation/order 
>> constraints. so we need to find a way to make commits transactional. there 
>> are two ways:
>> 1) rewrite corosync providers to use crm_diff command and apply it instead 
>> of shadow commit that can swallow cluster attributes sometimes
> 
> In the PoC I removed all cs_commit/cs_shadow, and it looks like everything is 
> working. But as you say, this can lead to problems with more complicated 
> deployments.
> This needs to be verified.
> 
>> 2) store 'reboot' attributes instead of lifetime ones
> 
> I tested with --lifetime forever and with reboot. No difference for the 
> cs_commit/cs_shadow failure.
> 
> Moreover, we need a method to store the GTID permanently (to support 
> whole-cluster reboot). 

Please note, GTID could always be fetched from the
/var/lib/mysql/grastate.dat at the galera node
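For example (a sketch, assuming the stock file layout):

  awk '/^seqno:/ {print $2}' /var/lib/mysql/grastate.dat

though note the stored seqno can be -1 if the server crashed or is still
running, which matches what Bartosz saw in his tests.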

> If we want to stick to cs_commit/cs_shadow, we need a method other than 
> crm_attribute to store the GTID.

We could use a modified ocf::pacemaker:SysInfo resource. We could put
GTID there and use it the similar way as I did for fencing PoC[0] (for
free space monitoring)

[0]
https://github.com/bogdando/fuel-library-1/blob/ha_fencing_WIP/deployment/puppet/cluster/manifests/fencing_primitives.pp#L41-L70
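A rough sketch of the shape of it (the GTID publishing itself would be the
custom modification; the rest follows the stock RA):

  crm configure primitive p_sysinfo ocf:pacemaker:SysInfo op monitor interval=60s
  crm configure clone c_sysinfo p_sysinfo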

> 
>>
>>
>>
>> On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya  
>> wrote:
>> On 05/27/14 16:44, Bartosz Kupidura wrote:
>>> Hello,
>>> Responses inline.
>>>
>>>
>>> Message from Vladimir Kuklin on 27 May 2014, at 15:12:
>>>
 Hi, Bartosz

 First of all, we are using openstack-dev for such discussions.

 Second, there is also Percona's RA for Percona XtraDB Cluster, which looks 
 pretty similar, although it is written in Perl. Maybe we could 
 derive something useful from it.

 Next, if you are working on this stuff, let's make it as open for the 
 community as possible. There is a blueprint for Galera OCF script: 
 https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script. It 
 would be awesome if you wrote down the specification and sent the newer 
 galera ocf code change request to fuel-library gerrit.
>>>
>>> Sure, I will update this blueprint.
>>> Change request in fuel-library: https://review.openstack.org/#/c/95764/
>>
>> That is a really nice catch, Bartosz, thank you. I believe we should
>> review the new OCF script thoroughly and consider omitting
>> cs_commits/cs_shadows as well. What would be the downsides?
>>
>>>

 Speaking of crm_attribute stuff. I am very surprised that you are saying 
 that node attributes are altered by crm shadow commit. We are using 
 similar approach in our scripts and have never faced this issue.
>>>
>>> This is probably because you update crm_attribute very rarely, while with my 
>>> approach the GTID attribute is updated every 60s on every node (3 updates 
>>> per 60s in a standard HA setup).
>>>
>>> You can try to update any attribute in loop during deploying cluster to 
>>> trigger fail with corosync diff.
>>
>> It sounds reasonable and we should verify it.
>> I've updated the statuses for related bugs and attached them to the
>> aforementioned blueprint as well:
>> https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
>> https://bugs.launchpad.net/fuel/+bug/1281592/comments/6
>>
>>
>>>

 Corosync 2.x support is in our roadmap, but we are not sure that we will 
 use Corosync 2.x earlier than the start of the 6.x release series.
>>>
>>> Yeah, moreover corosync CMAP is not synced between cluster nodes (or maybe 
>>> I'm doing something wrong?). So we need another solution for this...
>>>
>>
>> We should use CMAN for Corosync 1.x, perhaps.
>>


 On Tue, May 27, 2014 at 3:08 PM, Bartosz Kupidura  
 wrote:
 Hello guys!
 I would like to start discussion on a new resource agent for 
 galera/pacemaker.

 Main features:
 * Support cluster bootstrap
 * Support reboot any node in cluster
 * Support reboot whole cluster
 * To determine which node has the latest DB version, we should use the galera 
 GTID (Global Transaction ID)
 * The node with the latest GTID is the galera PC (primary component) in case 
 of re-election
 * The administrator can manually set a node as the PC

 GTID:
 * get GTID from mysqld --wsrep-recover or the SQL query SHOW STATUS 
 LIKE 'wsrep_local_state_uuid'
 * store GTID as crm_attribute for node (crm_attribute --node $HOSTNAME 
 --lifetime $LIFETIME --name gtid --update $GTID)
 * on every monitor/stop/start action update GTID for given node
 * GTID can have 3 formats:
>>

Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-02 Thread Maxime Vidori
Hello,

This seems to be a good idea; I will take a look at this package, which seems 
to have a lot of features. The project seems pretty active, and I think it is 
worth digging into this kind of package. 

- Original Message -
From: "Veronica A Musso" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, May 29, 2014 5:30:04 PM
Subject: [openstack-dev] [Horizon] Use of AngularJS

Hello,

During the last Summit the use of AngularJS in Horizon was discussed, and there 
is an intention to make better use of it in the dashboards.
I think this blueprint could help: 
https://blueprints.launchpad.net/horizon/+spec/django-angular-integration, 
since it proposes the integration of Django-Angular 
(http://django-angular.readthedocs.org/en/latest/index.html).
I would like to know the community's opinion about it, since I could then 
start its implementation.

Thanks!

Best Regards,
Verónica Musso

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] keystone

2014-06-02 Thread Alan Pevec
> After restarting keystone with the following command,
> $service openstack-keystone restart
> it is giving the message "Aborting wait for keystone to start". Could you
> please help with what the problem could be?

This is not an appropriate topic for the development mailing list;
please open a question on ask.openstack.org with relevant information
like operating system, package versions, log files, etc.
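That said, the usual first diagnostics look something like this (paths assume
a default packaged install):

  $ tail -n 50 /var/log/keystone/keystone.log
  $ keystone-all --debug    # run in the foreground to see the startup error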

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Murano API improvements

2014-06-02 Thread Stan Lagun
I think the API needs to be redesigned at some point. There is a blueprint for
this: https://blueprints.launchpad.net/murano/+spec/api-vnext
It seems reasonable to implement the new API on the new framework at once.
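On the workers blueprint quoted below: the user-visible knob would presumably
be tiny - something like this sketch (the option name and config path are
assumptions, mirroring similar options in other projects):

  $ openstack-config --set /etc/murano/murano-api.conf DEFAULT api_workers 4
  $ service murano-api restart

with the real work being the process-forking plumbing behind it.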

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 


On Mon, Jun 2, 2014 at 12:21 PM, Ruslan Kamaldinov  wrote:

> Let's follow the standard procedure. Both blueprints lack a specification of
> implementation details. There also has to be someone willing to implement
> these blueprints in the near future.
>
> I'm not opposed to these ideas and I'd really like to see Pecan added
> during
> Juno, but we still need to follow the procedure. I cannot approve an idea,
> it
> should be a specification. Let's work together on the new API specification
> first, then we'll need to find a volunteer to implement it on top of Pecan.
>
>
> --
> Ruslan
>
> On Mon, Jun 2, 2014 at 8:35 AM, Timur Nurlygayanov
>  wrote:
> > Hi all,
> >
> > We need to rewrite the Murano API on the new API framework, and we have
> > the commit: https://review.openstack.org/#/c/60787
> > (Sergey, sorry, but -1 from me, some small issues need fixing)
> >
> > Also, today I created a blueprint:
> > https://blueprints.launchpad.net/murano/+spec/murano-api-workers
> > this feature allows running many API threads on one host, which makes it
> > possible to scale the Murano API processes.
> >
> > I suggest updating and merging this commit with the migration to the
> > Pecan framework; after that we can easily implement this blueprint and
> > add many other improvements to the Murano API and the Murano python agent.
> >
> > Ruslan, could you please approve these blueprints and target them to some
> > milestone?
> >
> >
> > Thank you!
> >
> > --
> >
> > Timur,
> > QA Engineer
> > OpenStack Projects
> > Mirantis Inc
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] access-control-master-node

2014-06-02 Thread Lukasz Oles
After some discussion on IRC we updated the blueprint. It is now available for
review here: https://review.openstack.org/#/c/96429/2
A nicely rendered version is here:
http://docs-draft.openstack.org/29/96429/3/check/gate-fuel-specs-docs/d5b32d5/doc/build/html/specs/5.1/access-control-master-node.html

The blueprint was split into 4 stages, so now we can implement it in smaller
steps.

Please comment.

Regards,


On Tue, May 27, 2014 at 8:41 PM, Andrew Woodward  wrote:

> AFAIK, if we implement ironic as a replacement for cobbler, we will
> have Keystone on the fuel-master anyway. Supporting OAuth as an
> additional authentication entry would be awesome too, but I'm not sure
> there would be much demand for it over Keystone.
>
> On Tue, May 27, 2014 at 8:31 AM, Lukasz Oles  wrote:
> > There is some misunderstanding here. By using keystone I mean running
> > keystone on the fuel master node. After all, it's just a Python program.
> > It's used by OpenStack as an authorization tool, but it can also be used
> > as standalone software or by tools completely unconnected with OpenStack.
> > In the future, if we want to use an LDAP source, keystone already has a
> > plugin for it.
> >
> > Regards
> >
> >
> > On Tue, May 27, 2014 at 5:08 PM, David Easter 
> wrote:
> >>
> >> The other challenge of utilizing Keystone is which one to use.  Fuel
> >> enables the deployment of multiple cloud environments from one UI; so
> >> when accessing the Fuel Master Node, it would be ambiguous which
> >> already-deployed Keystone to contact for authentication.  If/when
> >> Triple-O is utilized, one could perhaps see designating the Keystone of
> >> the undercloud; but that's more a future requirement.
> >>
> >> For now, I’d suggest an internal authentication in the immediate short
> >> term.  External auth sources can be added in future milestones – most
> likely
> >> an LDAP source that’s outside the deployed clouds and designated by IT.
> >>
> >> Thanks,
> >>
> >> - David J. Easter
> >>   Director of Product Management, Mirantis
> >>
> >> From: Jesse Pretorius 
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Date: Tuesday, May 27, 2014 at 7:43 AM
> >>
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Subject: Re: [openstack-dev] [Fuel-dev] access-control-master-node
> >>
> >> On 27 May 2014 13:42, Lukasz Oles  wrote:
> >>>
> >>> Hello fuelers,
> >>>
> >>> we(I and Kamil) would like start discussion about "Enforce access
> control
> >>> for Fuel UI" blueprint
> >>> https://blueprints.launchpad.net/fuel/+spec/access-control-master-node
> .
> >>>
> >>> First question to David, as he proposed this bp. Do you want to add
> more
> >>> requirements?
> >>>
> >>> To all. What do you think about using keystone as authorization tool?
> We
> >>> described all pros/cons in the specification.
> >>
> >>
> >> I would suggest both an internal authentication database and the option
> of
> >> plugging additional options in, with keystone being one of them and
> perhaps
> >> something like oauth being another.
> >>
> >> Keystone may not be available at the time of the build, or accessible
> from
> >> the network that's used for the initial build.
> >> ___ OpenStack-dev mailing
> list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> --
> >> Mailing list: https://launchpad.net/~fuel-dev
> >> Post to : fuel-...@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~fuel-dev
> >> More help   : https://help.launchpad.net/ListHelp
> >>
> >
> >
> >
> > --
> > Łukasz Oleś
> >
> > --
> > Mailing list: https://launchpad.net/~fuel-dev
> > Post to : fuel-...@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~fuel-dev
> > More help   : https://help.launchpad.net/ListHelp
> >
>
>
>
> --
> Andrew
> Mirantis
> Ceph community
>



-- 
Łukasz Oleś
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Murano API improvements

2014-06-02 Thread Ruslan Kamaldinov
Let's follow the standard procedure. Both blueprints lack a specification of
implementation details. There also has to be someone willing to implement these
blueprints in the near future.

I'm not opposed to these ideas and I'd really like to see Pecan added during
Juno, but we still need to follow the procedure. I cannot approve an idea, it
should be a specification. Let's work together on the new API specification
first, then we'll need to find a volunteer to implement it on top of Pecan.


--
Ruslan

On Mon, Jun 2, 2014 at 8:35 AM, Timur Nurlygayanov
 wrote:
> Hi all,
>
> We need to rewrite the Murano API on the new API framework, and we have the
> commit: https://review.openstack.org/#/c/60787
> (Sergey, sorry, but -1 from me, some small issues need fixing)
>
> Also, today I created a blueprint:
> https://blueprints.launchpad.net/murano/+spec/murano-api-workers
> this feature allows running many API threads on one host, which makes it
> possible to scale the Murano API processes.
>
> I suggest updating and merging this commit with the migration to the Pecan
> framework; after that we can easily implement this blueprint and add many
> other improvements to the Murano API and the Murano python agent.
>
> Ruslan, could you please approve these blueprints and target them to some
> milestone?
>
>
> Thank you!
>
> --
>
> Timur,
> QA Engineer
> OpenStack Projects
> Mirantis Inc
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL][Design session 5.1] 28/05/2014 meeting minutes

2014-06-02 Thread Vladimir Kuklin
Oh, my bad. Sorry, June 5th, of course. Thank you, Lukasz.


On Fri, May 30, 2014 at 7:06 PM, Lukasz Oles  wrote:

> June 5th?
>
>
> On Fri, May 30, 2014 at 4:33 PM, Vladimir Kuklin 
> wrote:
>
>> Guys, we are going to have a more extended FUEL Library design meeting in
>> the IRC channel during the regular FUEL meeting on May 5th. So, feel free
>> to add blueprints to the meeting agenda and we will consider adding them
>> to the 5.1 roadmap.
>>
>>
>> On Wed, May 28, 2014 at 7:41 PM, Vladimir Kuklin 
>> wrote:
>>
>>> Hey, folks
>>>
>>> We had a meeting today regarding 5.1-targeted blueprints and design.
>>>
>>> Here is the document with the results:
>>>
>>> https://etherpad.openstack.org/p/fuel-library-5.1-design-session
>>>
>>> Obviously, we need several additional meetings to build up the roadmap for
>>> 5.1, but I think this was a really good start. Thank you all.
>>>
>>> We will continue to work on this during this and the next working week.
>>> Hope to see you all at the weekly IRC meeting tomorrow. Feel free to
>>> propose your blueprints and ideas for the 5.1 release.
>>> https://wiki.openstack.org/wiki/Meetings/Fuel
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 45bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 45bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com 
>> www.mirantis.ru
>> vkuk...@mirantis.com
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Łukasz Oleś
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Adding Tuskar to weekly IRC meetings agenda

2014-06-02 Thread Jaromir Coufal

On 2014/30/05 22:37, James Polley wrote:




On 30 May 2014, at 8:13 pm, Jaromir Coufal  wrote:

Hi All,

I would like to propose adding Tuskar as a permanent topic to the agenda for 
our weekly IRC meetings. It is an official TripleO project, there is quite a 
lot happening around it, and we are targeting Juno to have something solid. So 
I think it is important for us to regularly keep track of what is going on 
there.



Sounds good to me.

What do you think we would talk about under this topic? I'm thinking that a 
brief summary of changes since last week, plus any blockers tuskar is seeing 
from the broader project, would be a good start?


Yeah, I am thinking about something similar: communicate direction, 
discuss changes, progress, blockers. The same as for all other topics.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] keystone

2014-06-02 Thread Tizy Ninan
Hi,

After restarting keystone with the following command,
*$service openstack-keystone restart*
it is giving the message "*Aborting wait for keystone to start*". Could you
please help with what the problem could be?

Thanks,
Tizy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev