Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-16 Thread Steven Hardy
On Wed, Mar 16, 2016 at 01:57:34PM +0300, Sergey Kraynev wrote:
> Hi Heaters,
> 
> The Mitaka release is close to the finish, so it's a good time to review
> the results of our work.
> One of these results is an analysis of contributions over the last
> release cycle.
> According to the data [1] we have one good candidate for nomination to
> the core-review team:
> Oleksii Chuprykov.
> During this release he showed significant review metrics.
> His reviews were valuable and useful. He also has a sufficient level of
> expertise in the Heat code.
> So I think he is worthy of joining the core-reviewers team.
> 
> I ask you to vote and decide his destiny.
>  +1 - if you agree with his candidature
>  -1  - if you disagree with his candidature

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Attila Fazekas

NO to any kind of extra quota service.

Elsewhere I have seen other arguments for a quota service or similar,
but the actual cost of this approach is higher than most people would think, so NO.


Maybe a library,
but I do not want to see, for example, the bad pattern used in Nova spread
everywhere.

The quota usage handling MUST happen in the same DB transaction as the
create/update/delete of the resource record (volume, server..).

There is no need for:
- reservation-expirer services or periodic tasks
- quota-usage-correcting shell scripts or the like
- multiple commits


We have a transaction-capable DB to help us;
not using it would be lame.
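As a minimal sketch of the single-transaction approach described above (schema and names are illustrative only, with SQLite standing in for a real backend):

```python
import sqlite3

# Illustrative schema only: a per-project usage counter plus the resource
# records themselves, all touched inside one transaction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE quota_usage (project_id TEXT PRIMARY KEY,
                              in_use INTEGER NOT NULL,
                              hard_limit INTEGER NOT NULL);
    CREATE TABLE volumes (id INTEGER PRIMARY KEY, project_id TEXT);
    INSERT INTO quota_usage VALUES ('demo', 0, 2);
""")

def create_volume(project_id):
    """Bump usage and create the resource record atomically -- no
    reservations, no expirer tasks, no fix-up scripts needed."""
    with conn:  # one transaction: both statements commit or roll back
        cur = conn.execute(
            "UPDATE quota_usage SET in_use = in_use + 1 "
            "WHERE project_id = ? AND in_use < hard_limit", (project_id,))
        if cur.rowcount == 0:
            raise RuntimeError("quota exceeded")
        conn.execute("INSERT INTO volumes (project_id) VALUES (?)",
                     (project_id,))

create_volume("demo")
create_volume("demo")
try:
    create_volume("demo")
except RuntimeError as err:
    print(err)  # quota exceeded
usage = conn.execute(
    "SELECT in_use FROM quota_usage WHERE project_id = 'demo'").fetchone()[0]
print(usage)  # 2
```

The guard `in_use < hard_limit` inside the UPDATE makes the check and the increment one atomic statement, so concurrent requests cannot both slip under the limit.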


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061338.html

- Original Message -
> From: "Nikhil Komawar" 
> To: "OpenStack Development Mailing List" 
> Sent: Wednesday, March 16, 2016 7:25:26 AM
> Subject: [openstack-dev] [cross-project] [all] Quotas -- service vs. library
> 
> Hello everyone,
> 
> tl;dr;
> I'm writing to request some feedback on whether the cross-project Quotas
> work should move ahead as a service or a library -- or, going to a far
> extent, whether this should even be in a common repository. Would
> projects prefer to implement everything from scratch in-tree? Should we
> limit it to a guideline spec?
> 
> But before I ask any more, I want to specifically thank Doug Hellmann,
> Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and Andrew
> Laski for the early feedback that has helped give good shape to the
> discussions so far.
> 
> Some more context on the happenings:
> We have this in-progress spec [1] up for providing context and a platform
> for such discussions. I will rephrase it to say that we plan to
> introduce a new 'entity' in the OpenStack realm that may be a library or
> a service. Both concepts have trade-offs and the WG wanted to get more
> ideas around such trade-offs from the larger community.
> 
> Service:
> This would entail creating a new project that will manage the quota
> tables for all the projects that use this service. For
> example, if Nova, Glance, and Cinder decide to use it, this 'entity' will
> be responsible for handling the enforcement, management and DB upgrades
> of the quotas logic for all resources of all three projects. This means
> less pain for projects during the implementation and maintenance phases,
> a holistic view of the cloud, and almost a guarantee that best practices
> are followed (no clutter or guessing around what different projects are
> doing). However, it results in a big dependency: all projects rely on
> this one service for correct enforcement, for avoiding races (if they do
> not incline toward implementing some of that in-tree) and for DB
> migrations/upgrades. It will be at the core of the cloud and prone to
> attack vectors, bugs and margin of error.
> 
> Library:
> A library could be thought of in two different ways:
> 1) Something that does not deal with backend DB models but provides a
> generic enforcement and management engine. To think ahead a little bit,
> it may be an ABC, or even a few standard implementations, that can
> be imported into a project's space. The project will have its own API for
> quotas and the drivers will enforce different types of logic; say, a
> flat quota driver or a hierarchical quota driver, with custom/project-
> specific logic in the project tree. The project maintains its own DB and
> upgrades thereof.
> 2) A library that has models for DB tables that the project can import
> from. Thus the individual projects will have a handy outline of what the
> tables should look like, implicitly considering the right table values,
> arguments, etc. The project has its own API and implements drivers in-tree
> by importing this semi-defined structure. The project maintains its own
> upgrades but will be somewhat influenced by the common repo.
> 
> A library would keep things simple for the common repository, and sourcing
> of code can be done asynchronously as per project plans and priorities
> without a strong dependency. On the other hand, there is a
> likelihood of re-implementing similar patterns in different projects,
> with individual projects taking responsibility for keeping things up to
> date. Attack vectors, bugs and margin of error are project responsibilities.
> 
> A third option is to avoid all of this and simply give guidelines, best
> practices and the right packages to each project to implement quotas
> in-house. Somewhat undesirable at this point, I'd say. But we're all ears!
> 
> Thank you for reading and I anticipate more feedback.
> 
> [1] https://review.openstack.org/#/c/284454/
> 
> --
> 
> Thanks,
> Nikhil
> 
> 
> 
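To make the trade-off concrete, here is a rough sketch of what library option 1) could look like: a generic enforcement engine with pluggable drivers and no DB models. All names here are hypothetical, not an existing OpenStack API.

```python
import abc

class QuotaExceeded(Exception):
    pass

class QuotaDriver(abc.ABC):
    """Hypothetical base class a common quota library could export;
    projects keep their own DB and API and plug a driver in."""

    @abc.abstractmethod
    def limit_check(self, project_id, resource, requested, current):
        """Raise QuotaExceeded if current + requested passes the limit."""

class FlatQuotaDriver(QuotaDriver):
    """Simplest driver: one flat limit per resource class."""

    def __init__(self, limits):
        self._limits = limits  # e.g. {"volumes": 10, "snapshots": 20}

    def limit_check(self, project_id, resource, requested, current):
        limit = self._limits.get(resource)
        if limit is not None and current + requested > limit:
            raise QuotaExceeded(
                "%s: %d + %d > %d" % (resource, current, requested, limit))

driver = FlatQuotaDriver({"volumes": 2})
driver.limit_check("demo", "volumes", requested=1, current=1)  # passes
try:
    driver.limit_check("demo", "volumes", requested=2, current=1)
except QuotaExceeded as exc:
    print("denied:", exc)
```

A hierarchical driver would subclass the same ABC with different logic, which is what lets each project choose its enforcement style without the common repo owning any tables.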


Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-16 Thread Kanagaraj Manickam
+1

On Wed, Mar 16, 2016 at 4:27 PM, Sergey Kraynev wrote:

> Hi Heaters,
>
> The Mitaka release is close to the finish, so it's a good time to review
> the results of our work.
> One of these results is an analysis of contributions over the last
> release cycle.
> According to the data [1] we have one good candidate for nomination to
> the core-review team:
> Oleksii Chuprykov.
> During this release he showed significant review metrics.
> His reviews were valuable and useful. He also has a sufficient level of
> expertise in the Heat code.
> So I think he is worthy of joining the core-reviewers team.
>
> I ask you to vote and decide his destiny.
>  +1 - if you agree with his candidature
>  -1  - if you disagree with his candidature
>
> [1] http://stackalytics.com/report/contribution/heat-group/120
>
> --
> Regards,
> Sergey.
>
>


Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-16 Thread Thomas Herve
On Wed, Mar 16, 2016 at 11:57 AM, Sergey Kraynev  wrote:
> Hi Heaters,
>
> The Mitaka release is close to the finish, so it's a good time to review
> the results of our work.
> One of these results is an analysis of contributions over the last
> release cycle.
> According to the data [1] we have one good candidate for nomination to
> the core-review team:
> Oleksii Chuprykov.
> During this release he showed significant review metrics.
> His reviews were valuable and useful. He also has a sufficient level of
> expertise in the Heat code.
> So I think he is worthy of joining the core-reviewers team.
>
> I ask you to vote and decide his destiny.
>  +1 - if you agree with his candidature
>  -1  - if you disagree with his candidature

+1!

-- 
Thomas



[openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-16 Thread Sergey Kraynev
Hi Heaters,

The Mitaka release is close to the finish, so it's a good time to review
the results of our work.
One of these results is an analysis of contributions over the last release cycle.
According to the data [1] we have one good candidate for nomination to
the core-review team:
Oleksii Chuprykov.
During this release he showed significant review metrics.
His reviews were valuable and useful. He also has a sufficient level of
expertise in the Heat code.
So I think he is worthy of joining the core-reviewers team.

I ask you to vote and decide his destiny.
 +1 - if you agree with his candidature
 -1  - if you disagree with his candidature

[1] http://stackalytics.com/report/contribution/heat-group/120

-- 
Regards,
Sergey.



Re: [openstack-dev] [Fuel] Getting rid of cluster status

2016-03-16 Thread Vladimir Kuklin
Folks

While I generally support the idea of getting rid of cluster status, it
requires thorough design. My opinion here is that we should leave it as a
function of node states until we come up with a better way of calculating
cluster status. Nevertheless, it is true that cluster status
is actually a function of other primary data and should be calculated on
the client side. I suggest that we move towards a more fine-grained,
component-based architecture (the simplest example is OpenStack Fuel vs.
non-OpenStack Fuel) and figure out a way of calculating each component's
status. Then we should calculate each component's status, and the cluster
status should be an aggregate of those. For example, we could say that the
only components we have right now are nodes, and the aggregate is based on
the nodes' statuses and whether they are critical or not.
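As a toy illustration of that aggregation idea (the policy here is an assumption for the example, not Fuel's actual logic):

```python
from enum import Enum

class Status(Enum):
    OPERATIONAL = "operational"
    DEPLOYING = "deploying"
    ERROR = "error"

def cluster_status(components):
    """Aggregate (status, is_critical) pairs into one cluster status.

    Assumed policy: a failed critical component fails the cluster; any
    other non-operational component leaves the cluster 'deploying';
    all-operational means the cluster is operational.
    """
    if any(s is Status.ERROR and critical for s, critical in components):
        return Status.ERROR
    if all(s is Status.OPERATIONAL for s, _ in components):
        return Status.OPERATIONAL
    return Status.DEPLOYING

# A failed non-critical node does not fail the whole cluster.
nodes = [(Status.OPERATIONAL, True), (Status.ERROR, False)]
print(cluster_status(nodes).value)  # deploying
```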

On Tue, Mar 15, 2016 at 9:16 PM, Andrew Woodward  wrote:

>
>
> On Tue, Mar 15, 2016 at 4:04 AM Roman Prykhodchenko  wrote:
>
>> Fuelers,
>>
>> I would like to continue the series of "Getting rid of …" emails. This
>> time I’d like to talk about statuses of clusters.
>>
>> The issues with that attribute is that it is not actually related to real
>> world very much and represents nothing. A few month ago I proposed to make
>> it more real-world-like [1] by replacing a simple string by an aggregated
>> value. However, after task based deployment was introduced even that
>> approach lost its connection to the real world.
>>
>> My idea is to get rid of that attribute from a cluster and start working
>> with status of every single node in it. Nevertheless, we only have tasks
>> that are executed on nodes now, so we cannot apply the "status" term to
>> them. What if we replace that with a sort of boolean value called
>> maintenance_mode (or similar) that we will use to tell if the node is
>> operational or not. After that we will be able to use an aggregated
>> property for cluster and check, if there are any nodes that are under a
>> progress of performing some tasks on them.
>>
>
> Yes, we still need an operations attribute. I'm not sure a bool is enough,
> but you are quite correct: setting the status of the cluster, after
> operational == True, based on the result of a specific node failing is in
> practice invalid.
>
> At the same time, operational == True does not necessarily mean the
> deployment succeeded; it's more along the lines of "deployment validated",
> which may be further testing passing (like OSTF), or more manual if the
> operator wants to do more testing of their own prior to changing the state.
>
> As we adventure into the LCM flow, we actually need the status of each
> component in addition to the general status of the cluster to determine
> the proper course of action on the next operation.
>
> For example, nova-compute:
> if the cluster is not operational, then we can provision compute nodes
> and have them enabled, or active in the scheduler, automatically. However,
> if the cluster is operational, a new compute node must be disabled, or
> otherwise blocked from the default scheduler, until the node has received
> validation. In this case the interpretation of operational is quite simple.
>
> For example, ceph:
> Here we care less about the status of the cluster (slightly; this example
> ignores ceph's impact on nova-compute), and more about the status of the
> service. In the case that we deploy ceph-osds when there are fewer than
> replica-factor OSD hosts online (3), we can provision the OSDs similarly
> to nova-compute, in that we can bring them all online and active, and data
> could be placed on them immediately (more or less). But if the ceph status
> is operational, then we have to take a different action: the OSDs have to
> be brought in disabled and gradually (probably by the operator) have their
> data weight increased so they don't clog the network with data peering,
> which causes the clients many woes.
>
>
>> Thoughts, ideas?
>>
>>
>> References:
>>
>> 1. https://blueprints.launchpad.net/fuel/+spec/complex-cluster-status
>>
>>
>> - romcheg
>>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com

Re: [openstack-dev] [packstack] Update packstack core list

2016-03-16 Thread Javier Pena


- Original Message -
> 2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
> >> ...
> >> - Martin Mágr
> >> - Iván Chavero
> >> - Javier Peña
> >> - Alan Pevec
> >>
> >> I have a doubt about Lukas, he's contributed an awful lot to
> >> Packstack, just not over the last 90 days. Lukas, will you be
> >> contributing in the future? If so, I'd include him in the proposal as
> >> well.
> >
> > Thanks, yeah I do plan to contribute just haven't had time lately for
> > packstack.
> 
> I'm also adding David Simard who recently contributed integration tests.
> 
> Since there haven't been any -1 votes for a week, I went ahead and
> implemented the group membership changes in Gerrit.
> Thanks to the past core members, we will welcome you back on the next
> 
> One more topic to discuss is whether we need a PTL election. I'm not
> sure we need a formal election yet, and the de-facto PTL has been
> Martin Magr, so if there aren't other proposals, let's just name Martin
> our overlord?
> 

+1 for Martin ;)

> Cheers,
> Alan
> 
> 



Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-16 Thread Dmitry Tantsur

On 03/15/2016 01:53 PM, Serge Kovaleff wrote:

Dear All,

Let's compare functional abilities of both solutions.

Until the recent Mitaka release, ironic-inspector had only the
introspection ability.

The discovery part was proposed and implemented by Anton Arefiev. We
should align expectations with current and future functionality.

Adding Tags to attract the Inspector community.


Hi!

It would be great to see what we can do to fit the nailgun use case. 
Unfortunately, I don't know much about it right now. What are you missing?




Cheers,
Serge Kovaleff
http://www.mirantis.com 
cell: +38 (063) 83-155-70

On Tue, Mar 15, 2016 at 2:07 PM, Alexander Saprykin wrote:

Dear all,

Thank you for the opinions about this problem.

I would agree with Roman that it is always better to reuse
solutions than to re-invent the wheel. We should investigate the
possibility of using ironic-inspector and integrating it into Fuel.

Best regards,
Alexander Saprykin

2016-03-15 13:03 GMT+01:00 Sergii Golovatiuk:

My strong +1 to drop the nailgun-agent completely in favour of
ironic-inspector, even taking into consideration that we'll need to
extend ironic-inspector for Fuel's needs.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Mar 15, 2016 at 11:06 AM, Roman Prykhodchenko wrote:

My opinion on this is that we have too many re-invented
wheels in Fuel, and it's better to think about replacing them
with something we can re-use than to re-invent them one more
time.

Let’s take a look at Ironic and try to figure out how we can
use its features for the same purpose.


- romcheg
 > On 15 Mar 2016, at 10:38, Neil Jerram wrote:
 >
 > On 15/03/16 07:11, Vladimir Kozhukalov wrote:
 >> Alexander,
 >>
 >> We have many other places where we use Ruby (astute, puppet custom
 >> types, etc.). I don't think it is a good reason to re-write something
 >> just because it is written in Ruby. You are right about tests, about
 >> plugins, but let's look around. The Ironic community has already
 >> invented a discovery component (btw written in Python) and I can't
 >> see any reason why we should continue putting effort into the nailgun
 >> agent and not try to switch to ironic-inspector.
 >
 > +1 in general terms.  It's strange to me that there are so many
 > OpenStack deployment systems that each do each piece of the puzzle in
 > their own way (Fuel, Foreman, MAAS/Juju etc.) - and which also means
 > that I need substantial separate learning in order to use all these
 > systems.  It would be great to see some consolidation.
 >
 > Regards,
 >   Neil
 >
 >
 >


Re: [openstack-dev] [packstack] Update packstack core list

2016-03-16 Thread Alan Pevec
2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
>> ...
>> - Martin Mágr
>> - Iván Chavero
>> - Javier Peña
>> - Alan Pevec
>>
>> I have a doubt about Lukas, he's contributed an awful lot to
>> Packstack, just not over the last 90 days. Lukas, will you be
>> contributing in the future? If so, I'd include him in the proposal as
>> well.
>
> Thanks, yeah I do plan to contribute just haven't had time lately for
> packstack.

I'm also adding David Simard who recently contributed integration tests.

Since there haven't been any -1 votes for a week, I went ahead and
implemented the group membership changes in Gerrit.
Thanks to the past core members, we will welcome you back on the next

One more topic to discuss is whether we need a PTL election. I'm not
sure we need a formal election yet, and the de-facto PTL has been
Martin Magr, so if there aren't other proposals, let's just name Martin
our overlord?

Cheers,
Alan



Re: [openstack-dev] [tripleO][Neutron] neutron-lbaas agent service placement

2016-03-16 Thread Qasim Sarfraz
Thanks Ben.

On Tue, Mar 15, 2016 at 10:51 PM, Ben Nemec  wrote:

> On 03/14/2016 10:18 AM, Qasim Sarfraz wrote:
> > Hi Triple-O folks,
> >
> > I was planning to enable neutron-lbaas-agent on an overcloud deployment
> > but couldn't find any useful documentation. Can someone please point me
> > to the required documentation? Is there a heat/puppet workflow available
> > for this service?
> >
> > Also I had following questions regarding neutron-lbaas service placement:
> >
> >   * I am not able to find a network node or neutron node role in tripleo
> > templates [1] consequently the service will be placed on
> > controllers. Correct?
>
> Yeah, there's work under way to allow custom placement of services, but
> for the moment it would probably need to run on the controllers.
>
Makes sense. Is there a discussion going on for this or some patch set
adding this functionality? I will be happy to be part of that effort.

> >   * Is it possible to run multiple instances of this service and use
> > HAproxy to provide VIP to the services?
>
> Assuming the service supports this, it should be doable.


> >   * Is it possible to run the service on the compute nodes? If yes is
> > there a installation workflow for this.
>
> It's possible, but to my knowledge there isn't any existing support for
> LBaaS in TripleO.  To enable it, you would need to:
>
> -Add it to the TripleO loadbalancer puppet manifest:
>
> https://github.com/openstack/puppet-tripleo/blob/master/manifests/loadbalancer.pp
> -Add the necessary hieradata to enable it in tripleo-heat-templates.
>
> This is assuming there is existing puppet support for it.  If not, there
> would be some additional steps to get that into the puppet modules we use.
>
> Thanks for the pointer. I will have to add support in [1] and take care of
heat/puppet/hiera workflow for automated installation. Correct?
[1] -
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/manifests/overcloud_controller_pacemaker.pp


> -Ben
>
>



-- 
Regards,
Qasim Sarfraz


Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-16 Thread Thierry Carrez

Sean Dague wrote:

It's a trade-off. Would you rather keep the Wishlist mechanism and have ~30
extra bugs in every release, and have to hunt for a new bug lead twice
as often? That's my gut feel on the breakdown here.

To get the bug backlog under control, we have to make hard calls here.
This is one of them. Once we're working with < 400 open issues, deciding
to reopen the Wishlist mechanism is a thing we can and should revisit.


You're right that as soon as a project is resource-constrained (be it
patch authors, core reviewers' bandwidth or spec reviewers) and you can't
get everything on your own list done anyway, you're likely to gradually
stop looking at extra sources of inspiration. You start by ignoring the
unqualified "wishlist bugs", then if you still can't get your own things
done you'll likely ignore the more qualified "backlog specs", and if all
else fails you'll start ignoring the bug reports altogether.


In an ideal world you'd either grow the resources / bandwidth, or get to
the bottom of what you absolutely need to get done, and then start
paying attention to those feedback channels again. Those feedback
channels are essential to keep a pulse on users' problems and
needs, and to avoid echo-chamber effects. But then, if you just can't give
them any attention, having them exist and be ignored is worse than not
having them at all.


So if Nova currently is in that resource-constrained situation (and I 
think it is), it's better to clearly set expectations and close the 
wishlist bugs feedback mechanism, rather than keeping it open and 
completely ignore it.


--
Thierry Carrez (ttx)



Re: [openstack-dev] [packstack] Update packstack core list

2016-03-16 Thread Lukas Bezdicka
On Tue, 2016-03-08 at 06:41 -0500, Javier Pena wrote:
> > Hi,
> > 
> > [post originally sent on RDO-list but I've been told I should use
> > this
> > channel]
> > 
> > I've looked at packstack core-list [1] and I suggest we revisit to
> > keep
> > only active contributors [2] in the core members list.
> > 
> > The list seems super big compared to who is actually active on the
> > project; in a meritocratic world it would make sense to revisit that
> > list.
> > 
> > Thanks,
> > 
> > [1] https://review.openstack.org/#/admin/groups/124,members
> > [2] http://stackalytics.com/report/contribution/packstack/90
> > 
> 
> I agree with Emilien. Looking at the active contributors, my proposal
> for the core list would be:
> 
> - Martin Mágr
> - Iván Chavero
> - Javier Peña
> - Alan Pevec
> 
> I have a doubt about Lukas, he's contributed an awful lot to
> Packstack, just not over the last 90 days. Lukas, will you be
> contributing in the future? If so, I'd include him in the proposal as
> well.

Thanks, yeah I do plan to contribute just haven't had time lately for
packstack.

> 
> Thoughts?
> Javier
> 
> > --
> > Emilien Macchi
> > 
> > 
> > 



Re: [openstack-dev] [tripleO][Neutron] neutron-lbaas agent service placement

2016-03-16 Thread Qasim Sarfraz
Thanks Nir for pointing that out [1]. I am considering moving to LBaaSv2
going forward.

Also, responses inline for the other questions.

[1] - https://review.openstack.org/#/c/286381
[2] - http://docs.openstack.org/ha-guide/controller-ha-haproxy.html

On Tue, Mar 15, 2016 at 2:21 PM, Nir Magnezi  wrote:

> Hi Qasim,
>
> Replied inline to your non-triple-o specific related questions.
> Also regarding your following email, any specific reason you try to use
> LBaaSv1?
> I highly recommend that you go with LBaaSv2 (you may still choose haproxy
> if you wish), since LBaaSv1 is deprecated[1] and will be removed[2]
> sometime in the future.
>
> [1] https://wiki.openstack.org/wiki/ReleaseNotes/Liberty --> Deprecated
> Features
> [2] https://review.openstack.org/#/c/286381
>
> Nir
>
> On Mon, Mar 14, 2016 at 5:18 PM, Qasim Sarfraz 
> wrote:
>
>> Hi Triple-O folks,
>>
>> I was planning to enable neutron-lbaas-agent on an overcloud deployment
>> but couldn't find any useful documentation. Can someone please point me to
>> the required documentation? Is there a heat/puppet workflow available for
>> this service?
>>
>> Also I had following questions regarding neutron-lbaas service placement:
>>
>>- I am not able to find a network node or neutron node role in
>>tripleo templates [1] consequently the service will be placed on
>>controllers. Correct?
>>- Is it possible to run multiple instances of this service and use
>>HAproxy to provide VIP to the services?
>>
>> Do you mean HAProxy in front of the LBaaS agent? Could you elaborate?
> I have not tested this myself, but I suspect you will run into some
> difficulties. The VIP for your loadbalancer is actually a neutron port. A
> port contains a hostname in its binding information.
>
 I was talking about the HAProxy used to provide load balancing to
OpenStack services [2]. Sorry for the confusion; I hope that explains it.

>
>>- Is it possible to run the service on the compute nodes? If yes is
>>there a installation workflow for this.
>>
>> Neutron wise, it should work as long as you have L2 agent (which you
> should since it's a compute node) on your server.
>
>
Thanks. I will test it out.

> Tagged Neutron, as someone from the Neutron or sub-projects teams might
>> have already have answers for these.
>>
>> [1] - https://github.com/openstack/tripleo-heat-templates
>>
>> --
>> Regards,
>> Qasim Sarfraz
>>
>>
>>
>
>
>


-- 
Regards,
Qasim Sarfraz


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Sean Dague
On 03/16/2016 05:46 AM, Duncan Thomas wrote:
> On 16 March 2016 at 09:15, Tim Bell wrote:
> 
> Then, there were major reservations from the PTLs about the impacts in
> terms of latency, ability to reconcile, and loss of control
> (transactions are difficult, transactions across services more so).
> 
> 
> Not just PTLs :-)
>  
> 
> 
> I would favor a library, at least initially. If we cannot agree on a
> library, it is unlikely that we can get a service adopted (even if it
> is desirable).
> 
> A library (along the lines of 1 or 2 above) would allow consistent
> implementation of nested quotas and user quotas. Nested quotas are
> currently only implemented in Cinder, and user quota implementations
> vary between projects, which is confusing.
> 
> 
> It is worth noting that the cinder implementation has been found rather
> lacking in correctness, atomicity requirements and testing - I wouldn't
> suggest taking it as anything other than a PoC to be honest. Certainly
> it should not be cargo-culted into another project in its present state.

I think a library approach should probably start from scratch, with
lessons learned from Cinder, but not really copied code, for just that
reason.

This is hard code to get right, which is why it's various degrees of
wrong in every project in OpenStack.

A common library with its own DB tables and migration train is the only
way I can imagine this ever getting accomplished, given the atomicity
and two-phase-commit constraints of getting quota on long-lived,
async-created resources with sub-resources that also have quota. I
definitely think that's the nearest-term path to victory.
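A throwaway sketch of what a library owning its own tables and migration train could mean mechanically (SQLite and all names are purely illustrative):

```python
import sqlite3

# The library's own "migration train" (illustrative statements only).
MIGRATIONS = [
    "CREATE TABLE quotas (project_id TEXT, resource TEXT, hard_limit INTEGER)",
    "ALTER TABLE quotas ADD COLUMN in_use INTEGER NOT NULL DEFAULT 0",
]

def upgrade(conn):
    """Apply whichever migrations this database hasn't seen yet,
    tracking the schema version in SQLite's user_version pragma."""
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    for number, statement in enumerate(MIGRATIONS[version:], version + 1):
        conn.execute(statement)
        conn.execute("PRAGMA user_version = %d" % number)

conn = sqlite3.connect(":memory:")
upgrade(conn)
upgrade(conn)  # idempotent: nothing left to apply
columns = [row[1] for row in conn.execute("PRAGMA table_info(quotas)")]
print(columns)  # ['project_id', 'resource', 'hard_limit', 'in_use']
```

The point is that the consuming project just calls the library's `upgrade()` from its own migration hook, so the quota schema evolves with the library rather than being copy-pasted into each project.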

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-16 Thread Thierry Carrez

Hi PTLs,

Here is the proposed slot allocation for project teams at the Newton 
Design Summit in Austin. This is based on the requests the mitaka PTLs 
have made, space availability and project activity & collaboration needs.


| fb: fishbowl 40-min slots
| wr: workroom 40-min slots
| cm: Friday contributors meetup
| | full: full day, half: only morning or only afternoon

Neutron: 9fb, cm:full
Nova: 18fb, cm:full
Fuel: 3fb, 11wr, cm:full
Horizon: 1fb, 7wr, cm:half
Cinder: 4fb, 5wr, cm:full
Keystone: 5fb, 8wr, cm:full
Ironic: 5fb, 5wr, cm:half
Heat: 4fb, 8wr, cm:half
TripleO: 2fb, 3wr, cm:half
Kolla: 4fb, 10wr, cm:full
Oslo: 3fb, 5wr
Ceilometer: 2fb, 7wr, cm:half
Manila: 2fb, 4wr, cm:half
Murano: 1fb, 2wr
Rally: 2fb, 2wr
Sahara: 2fb, 6wr, cm:half
Glance: 3fb, 5wr, cm:full
Magnum: 5fb, 5wr, cm:full
Swift: 2fb, 12wr, cm:full
OpenStackClient: 1fb, 1wr, cm:half
Senlin: 1fb, 5wr, cm:half
Monasca: 5wr
Trove: 3fb, 6wr, cm:half
Dragonflow: 1fb, 4wr, cm:half*
Mistral: 1fb, 3wr
Zaqar: 1fb, 3wr, cm:half
Barbican: 2fb, 6wr, cm:half
Designate: 1fb, 5wr, cm:half
Astara: 1fb, cm:full
Freezer: 1fb, 2wr, cm:half
Congress: 1fb, 3wr
Tacker: 1fb, 3wr, cm:half
Kuryr: 1fb, 5wr, cm:half*
Searchlight: 1fb, 2wr
Cue: no space request received
Solum: 1fb, 1wr
Winstackers: 1wr
CloudKitty: 1fb
EC2API: 2wr

Infrastructure: 3fb, 4wr, cm:day**
Documentation: 4fb, 4wr, cm:half
Quality Assurance: 4fb, 4wr, cm:day**
PuppetOpenStack: 2fb, 3wr, cm:half
OpenStackAnsible: 1fb, 8wr, cm:half
Release mgmt: 1fb, cm:half
Security: 3fb, 2wr, cm:half
ChefOpenstack: 1fb, 2wr
Stable maint: 1fb
I18n: cm:half
Refstack: 3wr
OpenStack UX: 2wr
RpmPackaging: 1fb***, 1wr
App catalog: 1fb, 2wr
Packaging-deb: 1fb***, 1wr

*: shared meetup between Kuryr and Dragonflow
**: shared meetup between Infra and QA
***: shared fishbowl between RPM packaging and DEB packaging, for 
collecting wider packaging feedback


We'll start working on laying out those sessions over the available
rooms and time slots. Most of you have communicated constraints together 
with their room requests (like Manila not wanting overlap with Cinder 
sessions), and we'll try to accommodate them the best we can. If you 
have extra constraints you haven't communicated yet, please reply to me 
ASAP.


Now is time to think about the content you'd like to cover during those 
sessions and fire up those newton etherpads :)


Cheers,

--
Thierry Carrez (ttx)



Re: [openstack-dev] [tripleo] rabbitmq / ipv6 issue

2016-03-16 Thread Derek Higgins
On 16 March 2016 at 02:41, Emilien Macchi  wrote:
> I did some testing again and I'm still running in curl issues:
> http://paste.openstack.org/show/BU7UY0mUrxoMUGDhXgWs/
>
> I'll continue investigation tomorrow.

btw, tripleo-ci seems to be doing reasonably well this morning. I
don't see any failures over the last few hours, so the problem you're
seeing looks to be something that isn't a problem in all cases.
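For anyone else debugging this class of failure: one classic pitfall when a health check moves from localhost to the actual binding address is that a literal IPv6 address must be wrapped in square brackets in a URL (RFC 3986), while an IPv4 literal must not. Whether that is the failure in the paste above is speculation, but the general rule can be sketched as follows (the helper name is invented; 15672 is RabbitMQ's usual management port):

```python
import ipaddress

def health_url(host, port, path="/"):
    """Build a URL for a literal IP, bracketing IPv6 as RFC 3986 requires."""
    ip = ipaddress.ip_address(host)
    hostpart = "[%s]" % ip if ip.version == 6 else str(ip)
    return "http://%s:%d%s" % (hostpart, port, path)

print(health_url("192.0.2.10", 15672))   # http://192.0.2.10:15672/
print(health_url("2001:db8::5", 15672))  # http://[2001:db8::5]:15672/
```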


>
> On Tue, Mar 15, 2016 at 8:00 PM, Emilien Macchi  wrote:
>> Both Pull-requests got merged upstream (kudos to Puppetlabs).
>>
>> I rebased https://review.openstack.org/#/c/289445/ on master and
>> abandoned the pin. Let's see how CI works now.
>> If it still does not work, feel free to restore the pin and rebase
>> again on the pin, so we can make progress.
>>
>> On Tue, Mar 15, 2016 at 6:21 PM, Emilien Macchi  wrote:
>>> So this is an attempt to fix everything in Puppet modules:
>>>
>>> * https://github.com/puppetlabs/puppetlabs-stdlib/pull/577
>>> * https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/443
>>>
>>> If we have the patches like this, there will be no need to patch TripleO.
>>>
>>> Please review the patches if needed,
>>> Thanks
>>>
>>> On Tue, Mar 15, 2016 at 1:57 PM, Emilien Macchi  wrote:
 So from now, we pin [5] puppetlabs-rabbitmq to the commit before [3]
 and I rebased Attila's patch to test CI again.
 This pin is a workaround, in the meantime we are working on a fix in
 puppetlabs-rabbitmq.

 [5] https://review.openstack.org/293074

 I also reported the issue in TripleO Launchpad:
 https://bugs.launchpad.net/tripleo/+bug/1557680

 Also a quick note:
 Puppet OpenStack CI did not detect this failure because we don't
 deploy puppetlabs-rabbitmq from master but from the latest release
 (tag).

 On Tue, Mar 15, 2016 at 1:17 PM, Emilien Macchi  wrote:
> TL;DR;This e-mail tracks down the work done to make RabbitMQ working
> on IPv6 deployments.
> It's currently broken and we might need to patch different Puppet
> modules to make it work.
>
> Long story:
>
> Attila Darazs is currently working on [1] to get IPv6 tested by
> TripleO CI but is stuck because a RabbitMQ issue in Puppet catalog
> [2], reported by Dan Sneddon.
> [1] https://review.openstack.org/#/c/289445
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1317693
>
> [2] is caused by a patch in puppetlabs-rabbitmq [3] that changed the
> way we validate RabbitMQ is working, from testing localhost to testing
> the actual binding IP.
> [3] 
> https://github.com/puppetlabs/puppetlabs-rabbitmq/commit/dac8de9d95c5771b7ef7596b73a59d4108138e3a
>
> The problem is that when testing the actual IPv6 address, curl fails for
> several different reasons, explained in [4] by Sofer.
> [4] https://review.openstack.org/#/c/292664/
>
> So we need to investigate puppetlabs-rabbitmq and puppet-staging to
> see whether or not we need to change something there.
> For now, I don't think we need to patch anything in TripleO Heat
> Templates, but we'll see after the investigation.
>
> I'm currently working on this task, but any help is welcome,
> --
> Emilien Macchi



 --
 Emilien Macchi
>>>
>>>
>>>
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi
>



Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Duncan Thomas
On 16 March 2016 at 09:15, Tim Bell  wrote:

Then, there were major reservations from the PTLs at the impacts in terms of
> latency, ability to reconcile and loss of control (transactions are
> difficult, transactions
> across services more so).
>
>
Not just PTLs :-)


> 
> I would favor a library, at least initially. If we cannot agree on a
> library, it
> is unlikely that we can get a service adopted (even if it is desirable).
>
> A library (along the lines of 1 or 2 above) would allow consistent
> implementation
> of nested quotas and user quotas. Nested quotas is currently only
> implemented
> in Cinder and user quota implementations vary between projects which is
> confusing.


It is worth noting that the cinder implementation has been found rather
lacking in correctness, atomicity requirements and testing - I wouldn't
suggest taking it as anything other than a PoC to be honest. Certainly it
should not be cargo-culted into another project in its present state.

-- 
Duncan Thomas


[openstack-dev] [magnum-ui] Reorganization of Magnum-UI Driver

2016-03-16 Thread Shuu Mutou
Hi Bradley Jones, 

I propose a reorganization of the "Magnum-UI Drivers" team on Launchpad for Magnum-UI.
The team can contain both magnum core members and magnum-ui core members.
Please also review the following patch:
https://review.openstack.org/#/c/289584/
So I suggest cutting the mitaka release only after the existing patches and the following
fix by Rob Cresswell have landed:
https://bugs.launchpad.net/magnum-ui/+bug/1554527

Thanks, 

Shu Muto




Re: [openstack-dev] [neutron][taas] Proposal of Dashboard for TaaS

2016-03-16 Thread Andreas Scheuring
Just to make sure you're aware of that - there is this new Curvature
Network Topology view since Liberty [1]. Maybe you want to integrate
with it as well...

[1] https://www.openstack.org/software/liberty/

-- 
-
Andreas (IRC: scheuran) 

On Mi, 2016-03-16 at 12:30 +0900, Soichi Shigeta wrote:
>   Hi,
> 
>   Please find attached file.
> Yet another design of the Network Topology Tab.
> 
> # I couldn't upload pdf files to the Wiki.
>   When I tried, a message ".pdf file is not permitted"
>   was shown.
> 
> Regards,
> Soichi
> 
> >
> >   Hi Anil, and folks,
> >
> > Thank you for your comments.
> >
> >> Thanks Soichi and Kaz for your work on implementing Horizon
> >> (dashboard) support for TaaS. The proposal (with screen shots)
> >> discussed in our recent IRC meeting look very nice. Here are some
> >> additional suggestions for improvement.
> >>
> >>
> >>
> >> 1.   General
> >>
> >> a.   When a port is being selected (for a tap-service instance or
> >> a tap-flow) it would be nice to also provide some extra information
> >> associated with that port, such as the VM it belongs to and the IP
> >> address.  This will look very similar to what is being done today when
> >> associating a floating IP with a VM vNIC. The extra context will allow
> >> users to identify their source and destination end-points with more
> >> ease. If a VM is not currently associated with a port then the extra
> >> information is not necessary.
> >
> > I agree with you.
> > It is difficult for users to select an appropriate port
> > by seeing only the UUID.
> >
> > Though I didn't explain it in the submitted document, in the current
> > implementation the name is shown in addition to the UUID
> > if a port has been given a name.
> >
> > I agree to show IP address.
> > i.e., name, uuid, and IP address are shown for each port.
> > Please refer p.1 of the attached file.
> >
> > On the other hand, in terms of modification cost, I'd rather
> > not show the associated VM:
> > because Neutron doesn't know the association between a port and
> > a VM, we would need to send a query to Nova.
> > Of course, I agree to implement this if requested from the field.
> >
> >
> >> b.  When selecting the traffic monitoring direction, it would be
> >> nice to provide two check boxes, one for 'ingress' and the other for
> >> 'egress'. A user wishing to monitor a port in both directions can
> >> select both check boxes. I feel this looks better than having an
> >> option  called 'both'.
> >
> > In terms of consistency with the CLI option, I prefer to
> > choose one of both/ingress/egress from a pull-down menu.
> >
> > To avoid confusion, it would be better to say something like
> > "ingress (to instance)" and "egress (from instance)".
> >
> >
> >> 2.   Using the Tap Services Tab
> >>
> >> a.   Allow tap-flow-create and tap-flow-delete operations to also
> >> be carried out from here. This will let users who prefer working in
> >> this fashion get everything done from the same place.
> >
> > I will plan to add "tap-flow-create" and "tap-flow-delete" button
> > on the tap-service tab.
> >
> > But I'm afraid that a lot of ports will be listed as candidates
> > when a user starts tap-flow-create from here:
> > because no instance (VM) is selected here, we cannot filter the
> > list.
> >
> >
> >> b.  Provide a way to list tap flows currently associated with a
> >> tap service.
> >
> > Excuse me, I didn't mention it in the submitted document.
> > This is done in the overview of a tap-service.
> >
> >
> >> c.   Allow multiple tap-flows to be created at the same time. Let
> >> the user pick multiple source ports (and traffic monitoring
> >> directions) and have all of them attached to a designated tap-service.
> >
> > I'd like to consider this in the future,
> > because it seems to require a larger man-hour cost to realize
> > (given the man-hours we have).
> > Additionally, I think we need to take care of error cases,
> > such as when part of the tap-flow creation fails.
> >
> >
> >> 3.   Using the Network Topology Tab
> >>
> >> a.   Allow tap-create and tap-delete operations to be also carried
> >> out from here. This will allow users who prefer working in this
> >> fashion get everything done from the same place. The user can pick the
> >> destination port (from one of the existing VMs) in the same way that a
> >> source port is picked when creating a tap-flow.
> >
> > Yes, I think this is a good idea.
> > I will plan to add "tap-flow-create" and "tap-flow-delete" buttons
> > on the Network Topology tab.
> > # As mentioned in 2-a., I'm afraid that a lot of ports will be
> >   listed as candidates when a user starts tap-flow-create from
> >   here.
> >
> > Actually, I want to add "tap-flow-create" and "tap-flow-delete"
> 

Re: [openstack-dev] [tc] Question about electorate for project without gerrit contribution

2016-03-16 Thread Thierry Carrez

Tony Breeds wrote:

On Tue, Mar 15, 2016 at 06:28:14PM +0100, Thierry Carrez wrote:


The second issue is that we don't have any way to run an election on the
project, since we don't have a way to determine "contributors" (or rather,
the only voter and potential candidate under those rules would be Monty).
You can't even apply to be the PTL :) That is obviously an exceptional case
and if I read Tristan's answer correctly, it will naturally end up in the
process where the TC ends up picking the PTL. It feels natural if you're the
only candidate that we would pick you, but that will likely have to wait
until the end of the election period.


I don't think we have the luxury of waiting.  There needs to be a small
amount of corrective action taken now.

As pointed out elsewhere in this thread, the TC has the ability to appoint a PTL
should a project end up leaderless.

However, at the risk of sounding like a humorless automaton, we have a valid
candidate.
  * Monty Taylor for Packaging-Deb PTL https://review.openstack.org/#/c/292690/

In the interests of transparency I'll say this here. Monty, if you do *NOT*
have a genuine desire to lead the packaging-deb team, please abandon that review
ASAP.

Also by the rules I feel like we have no choice but to reject
  * Adding Packaging-Deb/Thomas_Goirand.txt 
https://review.openstack.org/#/c/292885/


Like others have said, it's impossible to change election rules 
mid-flight. So we seem to have two ways out of this hole that would not 
break the established rules:


1/ Monty abandons his candidacy, so we have no valid candidate and the 
TC ends up picking the PTL (and may pick up Thomas, as he is the only 
volunteer)


2/ Monty does not abandon his candidacy, and automatically ends up as 
Packaging-deb PTL (that does not prevent Thomas from working on it, but 
Monty gets the final call on disputes)


Thomas: I'd advise you have a talk with Monty - his candidacy is valid 
unless he abandons it (and he has a pretty valid history and experience 
around debian packaging in openstack, so it's not as if he was the 
craziest candidate ever).


--
Thierry Carrez (ttx)



Re: [openstack-dev] [Rally] single router per tenant in network context

2016-03-16 Thread Aleksandr Maretskiy
Hi,

the network context creates a router for each network automatically, so you
cannot reduce the number of routers with this context:
https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/network/networks.py#L79

However, you can create and use your own network context plugin, inherited from
https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/network/networks.py#L31
and override its setup() method to create a single router per tenant and then
attach it to each created network, like here:
https://github.com/openstack/rally/blob/master/rally/plugins/openstack/wrappers/network.py#L342-L343

Ask me if you need more help
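A self-contained sketch of that setup() override may help. The classes below are simplified stand-ins invented for this example (the real plugin would subclass Rally's NetworkContext at the second link above and use its Neutron clients), but they show the shape of the change: one create_router() call per tenant instead of one per network.

```python
class FakeNeutron:
    """Stand-in for the Neutron client used by Rally's network wrapper."""
    def __init__(self):
        self.routers, self.interfaces = [], []

    def create_router(self, tenant_id):
        router = {"id": "router-%d" % len(self.routers), "tenant_id": tenant_id}
        self.routers.append(router)
        return router

    def add_interface_router(self, router_id, subnet_id):
        self.interfaces.append((router_id, subnet_id))


class SingleRouterNetworkContext:
    """Setup: one router per tenant, every network's subnet attached to it."""
    def __init__(self, client, tenants):
        self.client = client
        self.tenants = tenants  # {tenant_id: [subnet_id, ...]}

    def setup(self):
        for tenant_id, subnets in self.tenants.items():
            router = self.client.create_router(tenant_id)  # single router
            for subnet_id in subnets:
                self.client.add_interface_router(router["id"], subnet_id)


if __name__ == "__main__":
    neutron = FakeNeutron()
    tenants = {"t1": ["s1", "s2", "s3"], "t2": ["s4", "s5"]}
    SingleRouterNetworkContext(neutron, tenants).setup()
    print(len(neutron.routers))      # 2 routers: one per tenant
    print(len(neutron.interfaces))   # 5 interfaces: one per network
```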


On Tue, Mar 15, 2016 at 7:58 PM, Akshay Kumar Sanghai <
akshaykumarsang...@gmail.com> wrote:

> Hi,
> I have a openstack setup with 1 controller node, 1 network node and 2
> compute nodes. I want to perform scale testing of the setup in the
> following manner:
>
> - Create 10 tenants
> - Create 1 router per tenant
> - Create 100 neutron networks across 10 tenants attached to the router
> - Create 500 VMs spread across 10 tenants attached to the networks
>
> I used the boot_server scenario and defined the number of networks and
> tenants in the network and users context respectively. But I want only one
> router to be created per tenant. In the current case, one router is created
> per network.
>
> Do i have an option to accomplish this using the existing rally code? Or
> should i go ahead and make some change in the network context for my use
> case?
>
> Thanks,
> Akshay
>
>
>


[openstack-dev] [jacket] Introduction to jacket, a new project

2016-03-16 Thread zs
Hi all,

There is a new project "jacket" to manage multiple clouds. The jacket wiki is: 
https://wiki.openstack.org/wiki/Jacket
Please review it and give your comments. Thanks.

Best Regards,

Kevin (Sen Zhang)


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Boris Pavlovic
Nikhil,

Thank you for raising this question.

IMHO quotas should be moved into a separate service (this is the right
microservices approach).

It will make a lot of things simpler:
1) it removes a lot of logic/code from the projects
2) cross-project atomic quota reservation becomes possible
   (e.g. if we would like to reserve all required quotas before running a
heat stack)
3) it will have a better UX (you can change project quotas from one place,
in a unified way)
4) simpler migrations for the projects (we don't need to maintain DB
migrations for each project);
just imagine a change in the quotas lib that requires DB migrations: we
would need to run that many projects' migrations.


Best regards,
Boris Pavlovic
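Point 2 above (reserving quotas in several services before running a heat stack) is essentially a two-phase reserve/commit protocol. The toy sketch below illustrates the idea only; the class and method names are invented, and real cross-service atomicity is considerably harder than this, as others note in the thread.

```python
class QuotaService:
    """Toy per-service quota endpoint supporting reserve/commit/rollback."""
    def __init__(self, limit):
        self.limit, self.used, self.reserved = limit, 0, {}

    def reserve(self, rid, amount):
        if self.used + sum(self.reserved.values()) + amount > self.limit:
            raise RuntimeError("over quota")
        self.reserved[rid] = amount

    def commit(self, rid):
        self.used += self.reserved.pop(rid)

    def rollback(self, rid):
        self.reserved.pop(rid, None)


def reserve_all(services, rid, amounts):
    """Reserve in every service, rolling back all of them on any failure."""
    done = []
    try:
        for svc, amount in zip(services, amounts):
            svc.reserve(rid, amount)
            done.append(svc)
    except RuntimeError:
        for svc in done:
            svc.rollback(rid)
        raise


nova, cinder = QuotaService(limit=10), QuotaService(limit=2)
reserve_all([nova, cinder], "stack-1", [5, 2])
for svc in (nova, cinder):
    svc.commit("stack-1")        # both reservations succeeded: commit
try:
    reserve_all([nova, cinder], "stack-2", [1, 1])   # cinder is full
except RuntimeError:
    pass                          # nova's reservation was rolled back too
print(nova.used, cinder.used)     # committed usage: 5 and 2
```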


On Tue, Mar 15, 2016 at 11:25 PM, Nikhil Komawar 
wrote:

> Hello everyone,
>
> tl;dr;
> I'm writing to request some feedback on whether the cross-project Quotas
> work should move ahead as a service or a library; or, going to a far
> extent, I'd ask whether this should even be in a common repository, or would
> projects prefer to implement everything from scratch in-tree? Should we
> limit it to a guideline spec?
>
> But before I ask anymore, I want to specifically thank Doug Hellmann,
> Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
> Laski for the early feedback that has helped provide some good shape to
> the already discussions.
>
> Some more context on the happenings:
> We have this in-progress spec [1] up for providing context and a platform
> for such discussions. I will rephrase it to say that we plan to
> introduce a new 'entity' in the Openstack realm that may be a library or
> a service. Both concepts have trade-offs and the WG wanted to get more
> ideas around such trade-offs from the larger community.
>
> Service:
> This would entail creating a new project and will introduce managing
> tables for quotas for all the projects that will use this service. For
> example if Nova, Glance, and Cinder decide to use it, this 'entity' will
> be responsible for handling the enforcement, management and DB upgrades
> of the quotas logic for all resources for all three projects. This means
> less pain for projects during the implementation and maintenance phase,
> holistic view of the cloud and almost a guarantee of best practices
> followed (no clutter or guessing around what different projects are
> doing). However, it results in a big dependency; all projects rely on
> this one service for correct enforcement, avoiding races (if they do not
> incline toward implementing some of that in-tree) and DB
> migrations/upgrades. It will be at the core of the cloud and prone to
> attack vectors, bugs and margin of error.
>
> Library:
> A library could be thought of in two different ways:
> 1) Something that does not deal with backend DB models; it provides a
> generic enforcement and management engine. To think ahead a little bit,
> it may be an ABC or even a few standard implementation vectors that can
> be imported into a project space. The project will have its own API for
> quotas and the drivers will enforce different types of logic, e.g. a
> flat quota driver or a hierarchical quota driver with custom/project-
> specific logic in the project tree. The project maintains its own DB and
> upgrades thereof.
> 2) A library that has models for DB tables that the project can import
> from. Thus the individual projects will have a handy outline of what the
> tables should look like, implicitly considering the right table values,
> arguments, etc. The project has its own API and implements drivers in-tree
> by importing this semi-defined structure. The project maintains its own
> upgrades but will be somewhat influenced by the common repo.
>
> Library would keep things simple for the common repository and sourcing
> of code can be done asynchronously as per project plans and priorities
> without having a strong dependency. On the other hand, there is a
> likelihood of re-implementing similar patterns in different projects
> with individual projects taking responsibility to keep things up to
> date. Attack vectors, bugs and margin of error are project responsibilities
>
> Third option is to avoid all of this and simply give guidelines, best
> practices, right packages to each projects to implement quotas in-house.
> Somewhat undesirable at this point, I'd say. But we're all ears!
>
> Thank you for reading and I anticipate more feedback.
>
> [1] https://review.openstack.org/#/c/284454/
>
> --
>
> Thanks,
> Nikhil
>
>
>

Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-16 Thread Kashyap Chamarthy
On Tue, Mar 15, 2016 at 05:59:32PM +, Tim Bell wrote:

[...]

> The bug process was very light weight for an operator who found
> something they would like enhanced. It could be done through the web
> and did not require git/gerrit knowledge. I went through the process
> for a change:
> 
> - Reported a bug for the need to add an L2 cache size option for QEMU
> (https://bugs.launchpad.net/nova/+bug/1509304) closed as invalid since
> this was a feature request - When this was closed, I followed the
> process and submitted a spec
> (https://blueprints.launchpad.net/nova/+spec/qcow2-l2-cache-size-configuration)
> 
> It was not clear how to proceed from here for me. 

Given the feature request is fairly self-contained (so, it wouldn't
warrant a specification), the next step would be someone feeling
motivated enough to submit a patch to implement it.

> The risk I see is that we are missing input to the development process
> in view of the complexity of submitting those requirements. Clearly,
> setting the bar too low means that there is no clear requirement
> statement etc. However, I think the combination of tools and
> assumption of knowledge of the process means that we are missing the
> opportunity for good quality input.

From an Operator's perspective, I think your concern seems reasonable:
having a friction-free way to provide input (like an RFE bug) vs. having
the process knowledge (Spec-less Blueprint vs. Blueprint with Spec).

But, given Markus' stats about 'Wishlist' implementation, that category
doesn't seem to be quite effective.

[...]

-- 
/kashyap



Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Tim Bell
On 16/03/16 07:25, "Nikhil Komawar"  wrote:



>Hello everyone,
>
>tl;dr;
>I'm writing to request some feedback on whether the cross-project Quotas
>work should move ahead as a service or a library; or, going to a far
>extent, I'd ask whether this should even be in a common repository, or would
>projects prefer to implement everything from scratch in-tree? Should we
>limit it to a guideline spec?
>
>But before I ask anymore, I want to specifically thank Doug Hellmann,
>Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
>Laski for the early feedback that has helped provide some good shape to
>the already discussions.
>
>Some more context on the happenings:
>We have this in-progress spec [1] up for providing context and a platform
>for such discussions. I will rephrase it to say that we plan to
>introduce a new 'entity' in the Openstack realm that may be a library or
>a service. Both concepts have trade-offs and the WG wanted to get more
>ideas around such trade-offs from the larger community.
>
>Service:
>This would entail creating a new project and will introduce managing
>tables for quotas for all the projects that will use this service. For
>example if Nova, Glance, and Cinder decide to use it, this 'entity' will
>be responsible for handling the enforcement, management and DB upgrades
>of the quotas logic for all resources for all three projects. This means
>less pain for projects during the implementation and maintenance phase,
>holistic view of the cloud and almost a guarantee of best practices
>followed (no clutter or guessing around what different projects are
>doing). However, it results in a big dependency; all projects rely on
>this one service for correct enforcement, avoiding races (if they do not
>incline toward implementing some of that in-tree) and DB
>migrations/upgrades. It will be at the core of the cloud and prone to
>attack vectors, bugs and margin of error.

This has been proposed a number of times in the past with projects such as Boson
(https://wiki.openstack.org/wiki/Boson) and an extended discussion at one of the
summits (I think it was San Diego).

Then, there were major reservations from the PTLs at the impacts in terms of
latency, ability to reconcile and loss of control (transactions are difficult, 
transactions
across services more so).

>Library:
>A library could be thought of in two different ways:
>1) Something that does not deal with backend DB models; it provides a
>generic enforcement and management engine. To think ahead a little bit,
>it may be an ABC or even a few standard implementation vectors that can
>be imported into a project space. The project will have its own API for
>quotas and the drivers will enforce different types of logic, e.g. a
>flat quota driver or a hierarchical quota driver with custom/project-
>specific logic in the project tree. The project maintains its own DB and
>upgrades thereof.
>2) A library that has models for DB tables that the project can import
>from. Thus the individual projects will have a handy outline of what the
>tables should look like, implicitly considering the right table values,
>arguments, etc. The project has its own API and implements drivers in-tree
>by importing this semi-defined structure. The project maintains its own
>upgrades but will be somewhat influenced by the common repo.
>
>Library would keep things simple for the common repository and sourcing
>of code can be done asynchronously as per project plans and priorities
>without having a strong dependency. On the other hand, there is a
>likelihood of re-implementing similar patterns in different projects
>with individual projects taking responsibility to keep things up to
>date. Attack vectors, bugs and margin of error are project responsibilities
>
>Third option is to avoid all of this and simply give guidelines, best
>practices, right packages to each projects to implement quotas in-house.
>Somewhat undesirable at this point, I'd say. But we're all ears!

I would favor a library, at least initially. If we cannot agree on a library, it
is unlikely that we can get a service adopted (even if it is desirable).

A library (along the lines of 1 or 2 above) would allow consistent 
implementation
of nested quotas and user quotas. Nested quotas are currently only implemented
in Cinder, and user quota implementations vary between projects, which is
confusing.

Now that we have Oslo (there was no similar structure when it was first 
discussed),
we have the possibility to implement these concepts in a consistent way across
OpenStack and give a better user experience as a result.
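As an illustration only, option 1 of the library idea (a generic engine with pluggable drivers) could start from something as small as the sketch below; the class and method names here are invented for this example, not a proposed API:

```python
import abc

class OverQuota(Exception):
    pass

class QuotaDriver(abc.ABC):
    """Hypothetical driver interface a shared quota library might expose."""
    @abc.abstractmethod
    def check(self, project, resource, requested, usage):
        """Raise OverQuota if the request would exceed the limit."""

class FlatQuotaDriver(QuotaDriver):
    """Simplest possible policy: a flat per-project limit per resource."""
    def __init__(self, limits):
        self.limits = limits  # {(project, resource): limit}

    def check(self, project, resource, requested, usage):
        limit = self.limits.get((project, resource), 0)
        if usage + requested > limit:
            raise OverQuota("%s/%s: %d + %d > %d"
                            % (project, resource, usage, requested, limit))

driver = FlatQuotaDriver({("demo", "volumes"): 10})
driver.check("demo", "volumes", requested=3, usage=5)   # OK: 8 <= 10
try:
    driver.check("demo", "volumes", requested=6, usage=5)
except OverQuota as e:
    print("rejected:", e)
```

A hierarchical or nested-quota driver would implement the same check() interface with different logic, which is what would let projects share the engine while keeping their own APIs and DB schemas.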

Tim

>
>Thank you for reading and I anticipate more feedback.
>
>[1] https://review.openstack.org/#/c/284454/
>
>-- 
>
>Thanks,
>Nikhil
>
>