Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-11 Thread Jesse Pretorius
On 12 August 2014 07:26, Amit Das  wrote:

> I would like some guidance in this regard, in the form of some links, wiki
> pages, etc.
>
> I am currently gathering the "driver cert test results", i.e. tempest test
> results from devstack in our environment; setting up CI would be my next step.
>

This should get you started:
http://ci.openstack.org/third_party.html

Then Jay Pipes' excellent two part series will help you with the details of
getting it done:
http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/
http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container

2014-08-11 Thread Jay Lau
Hi,

I'm doing some investigation into Docker + Heat integration and have come
up with a question I'd like your help with.

What is the best way for a docker container to run some user data once the
container has been provisioned?

I think there are two ways: using cloud-init or the "CMD" instruction in the
Dockerfile, right? I'm just wondering whether anyone has experience with
cloud-init for docker containers, and whether the configuration is the same
as for a VM?
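
For what it's worth, a minimal sketch of the second route (everything below -
image tag, script name and contents - is just a placeholder example, not
anything Heat-specific):

cat > user-data.sh <<'EOF'
#!/bin/sh
# one-time setup work to run when the container starts;
# a real image would usually exec its long-running service afterwards
echo "configured at $(date)" > /var/log/first-boot.log
EOF

cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
ADD user-data.sh /opt/user-data.sh
CMD ["/bin/sh", "/opt/user-data.sh"]
EOF

docker build -t my-app .
docker run -d my-app

As far as I know cloud-init needs a datasource (config drive or a metadata
service) that a plain container doesn't normally see, so whether the VM-style
configuration carries over probably depends on how the Docker driver/plugin
exposes user_data - I'd be interested in the answer too.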

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] introducing cyclops

2014-08-11 Thread Fei Long Wang
I think the situation described by Doug is what most OpenStack newcomers
are facing. TBH, the company I work for (Catalyst IT Ltd, a NZ-based open
source company) ran into the same challenge, so we had to cook something up
in our own kitchen and are currently running it for our cloud service. The
difference is that we created it as a 'Rating' rather than a 'Billing' system,
since we understand that most companies already have something in place
as the *real* billing system (in our case OpenERP).

We see this rating project as a bridge between Ceilometer and whatever
systems people are using to do the billing/invoicing. All the commercially
sensitive information and business rules are kept in the billing system
(for example: OpenERP). The rating project only raises the 'sales orders
/ bills / invoices' based on the usage reported by Ceilometer over a given
period. In our case this back-end is pluggable, allowing organizations
to integrate with their existing ERP systems or even generate invoices
in the form of spreadsheets if they don't have one.

Anyway, I believe there is a real requirement for a rating system in
OpenStack, no? Consider these projects [1][2][3][4] and
cyclops/cloudkitty mentioned in this thread. I would also like to refer to the
charts presented by Nicolas Barcet about Ceilometer [5], which defined
something like the breakdown below. Obviously, there is still a gap for rating
at least (after almost 2 years), which could be filled by an independent
service or an advanced plugin/service of Ceilometer (like LBaaS for Neutron,
depending on the scope).


*Metering* -- Collect usage data

*Rating* -- Transform usage data into billable items and calculate costs

*Billing* -- Create invoice, collect payment

So it seems the question is changing from 'do users need a rating
system?' to 'would OpenStack benefit from having an out-of-the-box rating
project?'. I would say yes, why not. More and more companies are trying
to leverage OpenStack as a public/private cloud solution and charge
their customers, so it would be great if there were an out-of-the-box rating
solution from the OpenStack community. Besides, as Eoghan suggested, it
would be great to have a rating/billing session in Paris. Before that,
I would like to suggest some regular IRC meetings on this topic
to work out the goal, scope, key use cases and roadmap.

Our suggestion for the first IRC meeting is 25 August, 8-10 PM UTC,
in Freenode's #openstack-rating channel.

Thoughts? Please reply with the best date/time for you so we can figure
out a time to start.

Cheers.


[1] Dough https://github.com/lzyeval/dough

[2] trystack.org billing https://github.com/trystack/dash_billing

[3] nova-billing https://github.com/griddynamics/nova-billing

[4] billingstack https://github.com/billingstack

[5]
https://docs.google.com/presentation/d/1ytYhQGR5SxoccZ-wuza2n0H2mS7HniP9HoFDUpuGDiM/edit#slide=id.g1f73edb4_0_2


On 09/08/14 06:13, Doug Hellmann wrote:
>
> On Aug 8, 2014, at 3:34 AM, Piyush Harsh  > wrote:
>
>> Dear Eoghan,
>>
>> Thanks for your comments. Although you are correct that rating,
>> charging, and billing policies are commercially sensitive to the
>> operators, still if an operator has an openstack installation, I do
>> not see why the stack could not offer a service that supports ways
>> for the operator to input desired policies, rules, etc to do charging
>> and billing out of the box. These policies could still only be
>> accessible to the operator.
>
> I think the point was more that most deployers we talked to at the
> beginning of the project already had tools that managed the rates and
> charging, but needed the usage data to feed into those tools. That was
> a while back, though, and, as you say, it’s quite possible we have new
> users without similar tools in place, so I’m glad to see a couple of
> groups working on taking billing integration one step further. At the
> very least working with teams building open source consumers of
> ceilometer’s API will help us understand if there are any ways to make
> it easier to use.
>
> Doug
>
>>
>> Furthermore, one could envision that using heat together with some
>> django magic, this could even be offered as a service for tenants of
>> the operators who could be distributors or resellers in his client
>> ecosystem, allowing them to set their own custom policies.
>>
>> I believe such stack based solution would be very much welcome by
>> SMEs, new entrants, etc.
>>
>> I am planning to attend the Kilo summit in Paris, and I would be very
>> glad to talk with you and others on this idea and on Cyclops :)
>>
>> Forking the codebase to stackforge is something which is definitely
>> possible and thanks a lot for suggesting it.
>>
>> Looking forward to more constructive discussions on this with you and
>> others.
>>
>> Kind regards,
>> Piyush.
>>
>>
>> ___
>> Dr. Piyush Harsh, Ph.D.
>> Researcher, InIT Cloud Computing Lab
>> Zurich University of Applied Sciences (ZHAW)
>> [Site] http://p

Re: [openstack-dev] [all] The future of the integrated release

2014-08-11 Thread Joe Gordon
On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery  wrote:

> On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon  wrote:
> >
> >
> >
> > On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez 
> > wrote:
> >>
> >> Hi everyone,
> >>
> >> With the incredible growth of OpenStack, our development community is
> >> facing complex challenges. How we handle those might determine the
> >> ultimate success or failure of OpenStack.
> >>
> >> With this cycle we hit new limits in our processes, tools and cultural
> >> setup. This resulted in new limiting factors on our overall velocity,
> >> which is frustrating for developers. This resulted in the burnout of key
> >> firefighting resources. This resulted in tension between people who try
> >> to get specific work done and people who try to keep a handle on the big
> >> picture.
> >>
> >> It all boils down to an imbalance between strategic and tactical
> >> contributions. At the beginning of this project, we had a strong inner
> >> group of people dedicated to fixing all loose ends. Then a lot of
> >> companies got interested in OpenStack and there was a surge in tactical,
> >> short-term contributions. We put on a call for more resources to be
> >> dedicated to strategic contributions like critical bugfixing,
> >> vulnerability management, QA, infrastructure... and that call was
> >> answered by a lot of companies that are now key members of the OpenStack
> >> Foundation, and all was fine again. But OpenStack contributors kept on
> >> growing, and we grew the narrowly-focused population way faster than the
> >> cross-project population.
> >>
> >>
> >> At the same time, we kept on adding new projects to incubation and to
> >> the integrated release, which is great... but the new developers you get
> >> on board with this are much more likely to be tactical than strategic
> >> contributors. This also contributed to the imbalance. The penalty for
> >> that imbalance is twofold: we don't have enough resources available to
> >> solve old, known OpenStack-wide issues; but we also don't have enough
> >> resources to identify and fix new issues.
> >>
> >> We have several efforts under way, like calling for new strategic
> >> contributors, driving towards in-project functional testing, making
> >> solving rare issues a more attractive endeavor, or hiring resources
> >> directly at the Foundation level to help address those. But there is a
> >> topic we haven't raised yet: should we concentrate on fixing what is
> >> currently in the integrated release rather than adding new projects ?
> >
> >
> > TL;DR: Our development model is having growing pains. Until we sort out
> > the growing pains, adding more projects spreads us too thin.
> >
> +100
>
> > In addition to the issues mentioned above, with the scale of OpenStack
> > today we have many major cross-project issues to address and no good
> > place to discuss them.
> >
> We do have the ML, as well as the cross-project meeting every Tuesday
> [1], but we as a project need to do a better job of actually bringing
> up relevant issues here.
>
> [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
>
> >>
> >>
> >> We seem to be unable to address some key issues in the software we
> >> produce, and part of it is due to strategic contributors (and core
> >> reviewers) being overwhelmed just trying to stay afloat of what's
> >> happening. For such projects, is it time for a pause ? Is it time to
> >> define key cycle goals and defer everything else ?
> >
> >
> >
> > I really like this idea. As Michael and others alluded to above, we are
> > attempting to set cycle goals for Kilo in Nova, but I think it is worth
> > doing for all of OpenStack. We would like to make a list of key goals
> > before the summit so that we can plan our summit sessions around the
> > goals. At a really high level, one way to look at this is: in Kilo we
> > need to pay down our technical debt.
> >
> > The slots/runway idea is somewhat separate from defining key cycle
> > goals; we can approve blueprints based on key cycle goals without doing
> > slots. But with so many concurrent blueprints up for review at any given
> > time, the review teams are doing a lot of multitasking, and humans are
> > not very good at multitasking. Hopefully slots can help address this
> > issue, and hopefully allow us to actually merge more blueprints in a
> > given cycle.
> >
> I'm not 100% sold on what the slots idea buys us. What I've seen this
> cycle in Neutron is that we have a LOT of BPs proposed. We approve
> them after review. And then we hit one of two issues: Slow review
> cycles, and slow code turnaround issues. I don't think slots would
> help this, and in fact may cause more issues. If we approve a BP and
> give it a slot for which the eventual result is slow review and/or
> code review turnaround, we're right back where we started. Even worse,
> we may have not picked a BP for which the code submitter would have
> turned around reviews faster. So we've now doubly hurt

Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-11 Thread Amit Das
Hi John,

I guess this is w.r.t 3rd party cinder drivers.

I would like some guidance in this regard, in the form of some links, wiki
pages, etc.

I am currently gathering the "driver cert test results", i.e. tempest test
results from devstack in our environment; setting up CI would be my next step.
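
(For reference, a minimal way to run just the volume API tests from a
devstack-built tempest checkout might look like the following; the path and
test selector are assumptions about a typical devstack layout, and the exact
invocation may differ in your environment.)

cd /opt/stack/tempest
testr init                    # only needed the first time
testr run tempest.api.volume  # run just the Cinder/volume API tests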


Regards,
Amit
*CloudByte Inc.* 


On Tue, Aug 12, 2014 at 6:31 AM, Anita Kuno  wrote:

> On 08/11/2014 06:26 PM, John Griffith wrote:
> > Hey Cinder folks that have their CI systems up and running; first off...
> > awesome!!!  I do have one favor to ask, though.  Please, please,
> > please monitor your jobs, and if they're not working either fix them or
> > disable them.
> >
> > Currently it seems that none of the implemented jobs are overly
> > reliable (mostly seeing startup failures).  Also, if your job systems
> > aren't actually ready (an accessible html link to the results files),
> > please disable those as well.
> >
> >
> > Thanks,
> > John
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> In support of John's email, I will remind everyone that the sandbox repo
> is available for testing your third-party CI system. Please don't
> automate the creation of patches; just submit them manually. Adding
> patchsets to current patches is fine.
>
> The sandbox repo: http://git.openstack.org/cgit/openstack-dev/sandbox/
>
> Do ask if you need some help or guidance using it.
>
> Thanks,
> Anita.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Eugene Nikanorov
Well, that's exactly what we've tried to solve with tags in the flavor.

Considering your example with the whole configuration being sent to the driver
- I think it would be fine to not apply the unsupported parts of the
configuration (like such an HM) and mark the HM object with an error
status/status description.

Thanks,
Eugene.


On Tue, Aug 12, 2014 at 12:33 AM, Brandon Logan  wrote:

> Hi Eugene,
> An example of the HM issue (and really this can happen with any entity)
> is if the driver the API sends the configuration to does not actually
> support the value of an attribute.
>
> For example: Provider A supports the PING health monitor type, Provider B
> does not.  The API allows the PING health monitor type to go through.  Once
> a load balancer has been linked with that health monitor and the
> LoadBalancer chooses to use Provider B, that entire configuration is then
> sent to the driver.  The driver errors out not on the LoadBalancer
> create, but on the health monitor create.
>
> I think that's the issue.
>
> Thanks,
> Brandon
>
> On Tue, 2014-08-12 at 00:17 +0400, Eugene Nikanorov wrote:
> > Hi folks,
> >
> >
> > That's actually going in the opposite direction to what the flavor framework
> > is trying to do (and for dispatching it's doing the same as providers).
> > REST call dispatching should really go via the root object.
> >
> >
> > I don't quite get the issue with health monitors. If HM is incorrectly
> > configured prior to association with a pool - API layer should handle
> > that.
> > I don't think driver implementations should differ in the
> > constraints they put on HM parameters.
> >
> >
> > So I'm -1 on adding provider (or flavor) to each entity. After all, it
> > looks just like data denormalization, which will actually affect lots
> > of API aspects in a negative way.
> >
> >
> > Thanks,
> > Eugene.
> >
> >
> >
> >
> > On Mon, Aug 11, 2014 at 11:20 PM, Vijay Venkatachalam
> >  wrote:
> >
> > Yes, the point was to say "the plugin need not restrict and
> > let driver decide what to do with the API".
> >
> > Even if the call was made to driver instantaneously, I
> > understand, the driver might decide to ignore
> > first and schedule later. But, if the call is present, there
> > is scope for validation.
> > Also, the driver might be scheduling an async-api to backend,
> > in which case  deployment error
> > cannot be shown to the user instantaneously.
> >
> > W.r.t. identifying a provider/driver, how would it be to make
> > tenant the default "root" object?
> > "tenantid" is already associated with each of these entities,
> > so no additional pain.
> > For the tenant who wants to override let him specify provider
> > in each of the entities.
> > If you think of this in terms of the UI, let's say if the
> > loadbalancer configuration is exposed
> > as a single wizard (which has loadbalancer, listener, pool,
> > monitor properties) then provider
> >  is chosen only once.
> >
> > Curious question, is flavour framework expected to address
> > this problem?
> >
> > Thanks,
> > Vijay V.
> >
> > -Original Message-
> > From: Doug Wiegley [mailto:do...@a10networks.com]
> >
> > Sent: 11 August 2014 22:02
> > To: OpenStack Development Mailing List (not for usage
> > questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on
> > "Calling driver interface on every API request"
> >
> > Hi Sam,
> >
> > Very true.  I think that Vijay’s objection is that we are
> > currently imposing a logical structure on the driver, when it
> > should be a driver decision.  Certainly, it goes both ways.
> >
> > And I also agree that the mechanism for returning multiple
> > errors, and the ability to specify whether those errors are
> > fatal or not, individually, is currently weak.
> >
> > Doug
> >
> >
> > On 8/11/14, 10:21 AM, "Samuel Bercovici" 
> > wrote:
> >
> > >Hi Doug,
> > >
> > >In some implementations Driver !== Device. I think this is
> > also true
> > >for HA Proxy.
> > >This might mean that there is a difference between creating a
> > logical
> > >object and when there is enough information to actually
> > schedule/place
> > >this into a device.
> > >The ability to express such errors (detecting an error on a
> > logical
> > >object after it was created but when it actually get
> > scheduled) should
> > >be discussed and addressed anyway.
> > >
> > >-Sam.
> > >
> > >
> > >-Original Message-
> > >From: Doug Wiegley [mailto:do...@a10networks.com]
> > >Sent: Monday, August 11, 2014 6:55 PM
> > >To: OpenStack Development Ma

[openstack-dev] [gantt] scheduler subgroup meeting agenda 8/12

2014-08-11 Thread Dugger, Donald D
1) Forklift status
a. Scheduler client library
b. Isolate Scheduler DB
2) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] devstack build failures

2014-08-11 Thread daya kamath
All,
is anyone seeing these failures in devstack -

create_keystone_accounts
2014-08-11 02:47:47.906 | ++ get_or_create_project admin
2014-08-11 02:47:47.908 | +++ openstack project show admin -f value -c id
2014-08-11 02:47:48.825 | +++ openstack project create admin -f value -c id
2014-08-11 02:47:49.581 | ERROR: cliff.app 'module' object has no attribute 
'get_trace_id_headers'

Basically, all CLI commands are failing with the above error.
This is the cliff pkg I have - cliff (1.6.1.19.g632fdd8) -
and the call itself seems to be from osprofiler - osprofiler (0.1.1).

I'm running the gate scripts on my CI setup. I had disabled a few projects
(swift, trove, sahara) intermittently, but I was getting successful runs
afterwards; these errors seem to have been triggered by some upstream change.
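
One thing I plan to try, on the assumption that cliff is calling into an
osprofiler release too old to provide get_trace_id_headers (that assumption
still needs verifying):

pip show osprofiler              # confirm the installed version (0.1.1 here)
pip show cliff
sudo pip install -U osprofiler   # then re-run a cli command, e.g. 'openstack project list'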

thanks!
daya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] congress-server fails to start

2014-08-11 Thread Aaron Rosen
Hi Rajdeep,

I think the issue you're facing here is because you have a non-ascii char in
your etc/congress.conf.sample file.  Could you try the following commands:

 mv congress/etc/congress.config.sample /tmp
 git checkout congress/etc/config.sample
 ./bin/congress-server --config-file etc/congress.conf.sample
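
If you want to find the offending byte first, something like this should
point straight at it (assumes GNU grep for the -P flag):

 grep -nP '[^\x00-\x7F]' etc/congress.conf.sample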

p.s: you shouldn't run congress or probably any openstack component as root
fwiw.



On Mon, Aug 11, 2014 at 7:58 PM, Rajdeep Dua  wrote:

> Thanks, I was running an older version of pip.
>
> All the requirements were installed successfully.
>
> Now getting the following error
>
> sudo ./bin/congress-server --config-file etc/congress.conf.sample
> 2014-08-12 08:26:23.417 31129 CRITICAL congress.service [-] 'ascii' codec
> can't decode byte 0xf3 in position 1: ordinal not in range(128)
>
>
>
> On Tue, Aug 12, 2014 at 2:05 AM, Peter Balland 
> wrote:
>
>>  Hi Rajdeep,
>>
>>  What version of pip are you running?  Please try installing the latest
>> version (https://pip.pypa.io/en/latest/installing.html) and run ‘sudo
>> pip install -r requirements.txt’.
>>
>>  - Peter
>>
>>   From: Rajdeep Dua 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, August 11, 2014 at 11:27 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: [openstack-dev] [Congress] congress-server fails to start
>>
>>  Hi All,
>>  command to start the congress-server fails
>>
>> $ ./bin/congress-server --config-file etc/congress.conf.sample
>>
>>  Error :
>> ImportError: No module named keystonemiddleware.auth_token
>>
>>  Installing keystonemiddleware manually also fails
>>
>>  $ sudo pip install keystonemiddleware
>>
>> Could not find a version that satisfies the requirement
>> oslo.config>=1.4.0.0a3 (from keystonemiddleware) (from versions: )
>> No distributions matching the version for oslo.config>=1.4.0.0a3 (from
>> keystonemiddleware)
>>
>>  Thanks
>> Rajdeep
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Jay Lau
Thanks Jay Pipes! I see, but setting metadata for a server group might be
more flexible for handling all of the policy cases, such as hard
affinity/anti-affinity, soft affinity/anti-affinity, topology
affinity/anti-affinity, etc.; we may have more use cases related to
server group metadata in the future.

Regarding getting rid of the instance_group table: yes, it is a good idea
to have "near", "not-near", "hard", and "soft", but it is a big change to
the current nova server group design, and I'm not sure we can reach a clear
conclusion in the coming one or two releases.
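
For anyone following along, this is roughly how the existing (hard) policies
are driven from the CLI today; the names/UUID below are placeholders, and a
soft policy would presumably appear as either an additional policy value or
the metadata flag discussed above:

# create a group with the existing hard anti-affinity policy
nova server-group-create db-cluster anti-affinity

# boot members into the group via the scheduler hint
nova boot --image cirros-0.3.2 --flavor m1.tiny \
  --hint group=<SERVER_GROUP_UUID> db-node-1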

Thanks!


2014-08-12 7:01 GMT+08:00 Jay Pipes :

> On 08/11/2014 05:58 PM, Jay Lau wrote:
>
>> I think the metadata in server group is an important feature and it
>> might be used by
>> https://blueprints.launchpad.net/nova/+spec/soft-affinity-
>> for-server-group
>>
>> Actually, we are now doing an internal development for above bp and want
>> to contribute this back to community later. We are now setting hard/soft
>> flags in server group metadata to identify if the server group want
>> hard/soft affinity.
>>
>> I prefer Dan's first suggestion, what do you think?
>> =
>> If we care to have this functionality, then I propose we change the
>> attribute on the object (we can handle this with versioning) and reflect
>> it as "metadata" in the API.
>> =
>>
>
> -1
>
> If hard and soft is something that really needs to be supported, then this
> should be a field in the instance_groups table, not some JSON blob in a
> random metadata field.
>
> Better yet, get rid of the instance_groups table altogether and have
> "near", "not-near", "hard", and "soft" be launch modifiers similar to the
> instance type. IMO, there's really no need to store a named group at all,
> but that goes back to my original ML post about the server groups topic:
>
> https://www.mail-archive.com/openstack-dev@lists.openstack.
> org/msg23055.html
>
> Best,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-11 Thread Dean Troyer
On Mon, Aug 11, 2014 at 5:34 PM, Duncan Thomas 
wrote:
>
> Making a previously mandatory parameter optional, at least on the
> command line, doesn't break backward compatibility though, does it?
> Everything that worked before will still work.
>

By itself, maybe that is ok.  You're right, nothing _should_ break.  But
then the following is legal:

cinder create

What does that do?
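
To spell that out (the image ID is a placeholder; today the size is a
required positional argument, so the bare form is rejected by the parser):

# works today: size is mandatory
cinder create --image-id $IMAGE_ID 10

# becomes syntactically valid once size is optional -- create what, and how big?
cinder create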

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] congress-server fails to start

2014-08-11 Thread Rajdeep Dua
Thanks, I was running an older version of pip.

All the requirements were installed successfully.

Now getting the following error

sudo ./bin/congress-server --config-file etc/congress.conf.sample
2014-08-12 08:26:23.417 31129 CRITICAL congress.service [-] 'ascii' codec
can't decode byte 0xf3 in position 1: ordinal not in range(128)



On Tue, Aug 12, 2014 at 2:05 AM, Peter Balland  wrote:

>  Hi Rajdeep,
>
>  What version of pip are you running?  Please try installing the latest
> version (https://pip.pypa.io/en/latest/installing.html) and run ‘sudo pip
> install -r requirements.txt’.
>
>  - Peter
>
>   From: Rajdeep Dua 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, August 11, 2014 at 11:27 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Congress] congress-server fails to start
>
>  Hi All,
>  command to start the congress-server fails
>
> $ ./bin/congress-server --config-file etc/congress.conf.sample
>
>  Error :
> ImportError: No module named keystonemiddleware.auth_token
>
>  Installing keystonemiddleware manually also fails
>
>  $ sudo pip install keystonemiddleware
>
> Could not find a version that satisfies the requirement
> oslo.config>=1.4.0.0a3 (from keystonemiddleware) (from versions: )
> No distributions matching the version for oslo.config>=1.4.0.0a3 (from
> keystonemiddleware)
>
>  Thanks
> Rajdeep
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sr-iov] Tomorrow's IRC meeting

2014-08-11 Thread Robert Li (baoli)
Hi,

I won’t be able to make it tomorrow. Please feel free having the meeting 
without me.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Pritesh Kothari (pritkoth)
>> 
>> 
> Thanks Henry:
> 
> Do we have a url for patch in gerrit for this or was this an internal
> code change?

I am in the process of uploading a patch for the same, and will update the
url here once I get it.

Regards,
Pritesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-11 Thread Anita Kuno
On 08/11/2014 06:26 PM, John Griffith wrote:
> Hey Cinder folks that have their CI systems up and running; first off...
> awesome!!!  I do have one favor to ask, though.  Please, please,
> please monitor your jobs, and if they're not working either fix them or
> disable them.
>
> Currently it seems that none of the implemented jobs are overly
> reliable (mostly seeing startup failures).  Also, if your job systems
> aren't actually ready (an accessible html link to the results files),
> please disable those as well.
> 
> 
> Thanks,
> John
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
In support of John's email, I will remind everyone that the sandbox repo
is available for testing your third-party CI system. Please don't
automate the creation of patches; just submit them manually. Adding
patchsets to current patches is fine.

The sandbox repo: http://git.openstack.org/cgit/openstack-dev/sandbox/

Do ask if you need some help or guidance using it.
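
For anyone doing this for the first time, a manual test submission is just
the normal git-review workflow against that repo; everything below other than
the repo location is an arbitrary example (and assumes git-review is
installed):

git clone https://git.openstack.org/openstack-dev/sandbox
cd sandbox
git checkout -b third-party-ci-test
echo "testing <your CI name here> - please ignore" > ci-test.txt
git add ci-test.txt
git commit -m "Test change for third-party CI (please ignore)"
git review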

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Anita Kuno
On 08/11/2014 06:31 PM, Henry Gessau wrote:
> On 8/11/2014 7:56 PM, Anita Kuno wrote:
>> On 08/11/2014 05:46 PM, Henry Gessau wrote:
>>> Anita Kuno  wrote:
 On 08/11/2014 05:05 PM, Edgar Magana wrote:
> Cisco Folks,
>
> I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?
>
 I don't ever recall seeing that as a name of a third party gerrit
 account in my list[0], Edgar.

 Do you happen to have a link to a patchset that has that name attached
 to a comment?
>>>
>>> The "Cisco Neutron CI" tests at least five different configurations. By
>>> "NX-OS" Edgar is referring to the Cisco Nexus switch configurations. The CI
>>> used to run both the "monolithic_nexus" and "ml2_nexus" configurations, but
>>> the monolithic cisco plugin for nexus is being deprecated for juno and its
>>> configuration has already been removed from testing.
>>>
>> Thanks Henry:
>>
>> Do we have a url for patch in gerrit for this or was this an internal
>> code change?
> 
> This was a change only in the internal 3rd party Jenkins/Zuul settings.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Okay.

Perhaps going forward this could be an item for the third party meeting
under the topic of Deadlines & Deprecations:
https://wiki.openstack.org/wiki/Meetings/ThirdParty Then at the very
least if someone missed the announcement we could have a log of it and
point someone to the conversation.

Thanks Henry,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Sandy Walsh
On 8/11/2014 6:49 PM, Eoghan Glynn wrote:
>
 On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
>> Hi Eoghan,
>>
>> Thanks for the note below. However, one thing the overview below does
>> not
>> cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged.
>> Many
>> folks feel that this technology is a viable solution for the problem
>> space
>> discussed below.
> Great question Brad!
>
> As it happens we've been working closely with Paul Dix (lead
> developer of InfluxDB) to ensure that this metrics store would be
> usable as a backend driver. That conversation actually kicked off
> at the Juno summit in Atlanta, but it really got off the ground
> at our mid-cycle meet-up in Paris on in early July.
 ...
> The InfluxDB folks have committed to implementing those features in
> over July and August, and have made concrete progress on that score.
>
> I hope that provides enough detail to answer to your question?
 I guess it begs the question, if influxdb will do what you want and it's
 open source (MIT) as well as commercially supported, how does gnocchi
 differentiate?
>>> Hi Sandy,
>>>
>>> One of the ideas behind gnocchi is to combine resource representation
>>> and timeseries-oriented storage of metric data, providing an efficient
>>> and convenient way to query for metric data associated with individual
>>> resources.
>> Doesn't InfluxDB do the same?
> InfluxDB stores timeseries data primarily.
>
> Gnocchi in intended to store strongly-typed OpenStack resource
> representations (instance, images, etc.) in addition to providing
> a means to access timeseries data associated with those resources.
>
> So to answer your question: no, IIUC, it doesn't do the same thing.

Ok, I think I'm getting closer on this.  Thanks for the clarification.
Sadly, I have more questions :)

Is this closer? "a metadata repo for resources (instances, images, etc)
+ an abstraction to some TSDB(s)"?

Hmm, thinking out loud ... if it's a metadata repo for resources, who is
the authoritative source for what the resource is? Ceilometer/Gnocchi or
the source service? For example, if I want to query instance power state
do I ask ceilometer or Nova?

Or is it metadata about the time-series data collected for that
resource? In which case, I think most tsdb's have some sort of "series
description" facilities. I guess my question is, what makes this
metadata unique and how would it differ from the metadata ceilometer
already collects?

Will it be using Glance, now that Glance is becoming a pure metadata repo?


> Though of course these things are not a million miles from each
> other, one is just a step up in the abstraction stack, having a
> wider and more OpenStack-specific scope.

Could it be a generic timeseries service? Is it "openstack specific"
because it uses stackforge/python/oslo? I assume the rules and schemas
will be data-driven (vs. hard-coded)? ... and since the ceilometer
collectors already do the bridge work, is it a pre-packaging of
definitions that target openstack specifically? (not sure about "wider
and more specific")

Sorry if this was already hashed out in Atlanta.

>  
>>> Also, having an API layered above the storage driver avoids locking in
>>> directly with a particular metrics-oriented DB, allowing for the
>>> potential to support multiple storage driver options (e.g. to choose
>>> between a canonical implementation based on Swift, an InfluxDB driver,
>>> and an OpenTSDB driver, say).
>> Right, I'm not suggesting to remove the storage abstraction layer. I'm
>> just curious what gnocchi does better/different than InfluxDB?
>>
>> Or, am I missing the objective here and gnocchi is the abstraction layer
>> and not an influxdb alternative? If so, my apologies for the confusion.
> No worries :)
>
> The intention is for gnocchi to provide an abstraction over
> timeseries, aggregation, downsampling and archiving/retention
> policies, with a number of drivers mapping onto real timeseries
> storage options. One of those drivers is based on Swift, another
> is in the works based on InfluxDB, and a third based on OpenTSDB
> has also been proposed.
>
> Cheers,
> Eoghan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Henry Gessau
On 8/11/2014 7:56 PM, Anita Kuno wrote:
> On 08/11/2014 05:46 PM, Henry Gessau wrote:
>> Anita Kuno  wrote:
>>> On 08/11/2014 05:05 PM, Edgar Magana wrote:
 Cisco Folks,

 I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?

>>> I don't ever recall seeing that as a name of a third party gerrit
>>> account in my list[0], Edgar.
>>>
>>> Do you happen to have a link to a patchset that has that name attached
>>> to a comment?
>>
>> The "Cisco Neutron CI" tests at least five different configurations. By
>> "NX-OS" Edgar is referring to the Cisco Nexus switch configurations. The CI
>> used to run both the "monolithic_nexus" and "ml2_nexus" configurations, but
>> the monolithic cisco plugin for nexus is being deprecated for juno and its
>> configuration has already been removed from testing.
>>
> Thanks Henry:
> 
> Do we have a url for patch in gerrit for this or was this an internal
> code change?

This was a change only in the internal 3rd party Jenkins/Zuul settings.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-11 Thread John Griffith
Hey Cinder folks that have their CI systems up and running; first off...
awesome!!!  I do have one favor to ask, though.  Please, please,
please monitor your jobs, and if they're not working either fix them or
disable them.

Currently it seems that none of the implemented jobs are overly
reliable (mostly seeing startup failures).  Also, if your job systems
aren't actually ready (an accessible html link to the results files),
please disable those as well.


Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Anita Kuno
On 08/11/2014 05:46 PM, Henry Gessau wrote:
> Anita Kuno  wrote:
>> On 08/11/2014 05:05 PM, Edgar Magana wrote:
>>> Cisco Folks,
>>>
>>> I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?
>>>
>> I don't ever recall seeing that as a name of a third party gerrit
>> account in my list[0], Edgar.
>>
>> Do you happen to have a link to a patchset that has that name attached
>> to a comment?
> 
> The "Cisco Neutron CI" tests at least five different configurations. By
> "NX-OS" Edgar is referring to the Cisco Nexus switch configurations. The CI
> used to run both the "monolithic_nexus" and "ml2_nexus" configurations, but
> the monolithic cisco plugin for nexus is being deprecated for juno and its
> configuration has already been removed from testing.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Thanks Henry:

Do we have a url for patch in gerrit for this or was this an internal
code change?

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Henry Gessau
Anita Kuno  wrote:
> On 08/11/2014 05:05 PM, Edgar Magana wrote:
>> Cisco Folks,
>>
>> I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?
>>
> I don't ever recall seeing that as a name of a third party gerrit
> account in my list[0], Edgar.
> 
> Do you happen to have a link to a patchset that has that name attached
> to a comment?

The "Cisco Neutron CI" tests at least five different configurations. By
"NX-OS" Edgar is referring to the Cisco Nexus switch configurations. The CI
used to run both the "monolithic_nexus" and "ml2_nexus" configurations, but
the monolithic cisco plugin for nexus is being deprecated for juno and its
configuration has already been removed from testing.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Joe Gordon
On Mon, Aug 11, 2014 at 3:53 PM, Devananda van der Veen <
devananda@gmail.com> wrote:

> On Mon, Aug 11, 2014 at 3:27 PM, Joe Gordon  wrote:
> >
> >
> >
> > On Mon, Aug 11, 2014 at 3:07 PM, Eoghan Glynn  wrote:
> >>
> >>
> >>
> >> > Ignoring the question of whether it is ok to say 'to run ceilometer in
> >> > any sort of non-trivial deployment you must manage yet another
> >> > underlying service, mongodb', I would prefer not adding an additional
> >> > gate variant to all projects. With the effort to reduce the number of
> >> > gate variants we have [0], I would prefer to see just ceilometer gate
> >> > on both mongodb and sqlalchemy, and the main integrated gate [1] pick
> >> > just one.
> >>
> >> Just checking to see that I fully understand what you mean there, Joe.
> >>
> >> So would we:
> >>
> >>  (a) add a new integrated-gate-ceilometer project-template to [1],
> >>  in the style of integrated-gate-neutron or integrated-gate-sahara,
> >>  which would replicate the main integrated-gate template but with
> >>  the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)
> >>
> >> or:
> >>
> >>  (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
> >>  the experimental column[2] in the openstack-ceilometer project,
> >>  to the gate column on that project
> >>
> >> or:
> >>
> >>  (c) something else
> >>
> >> Please excuse the ignorance of gate mechanics inherent in that question.
> >
> >
> >
> > Correct, AFAIK (a) or (b) would be sufficient.
> >
> > There is another option, which is to make the mongodb version the default
> > in integrated-gate and only run SQLA on ceilometer.
> >
>
> Joe,
>
> I believe this last option is equivalent to making mongodb the
> recommended implementation by virtue of suddenly being the most tested
> implementation. I would prefer not to see that.
>

Agreed, I included this option for completeness.


>
> Eoghan,
>
> IIUC (and I am not an infra expert) I would suggest (b) since this
> keeps the mongo tests within the ceilometer project only, which I
> think is fine from a "what we test is what we recommend" standpoint.
>
> Also, if there is a situation where a change in Nova passes with
> ceilometer+mysql and thus lands in Nova, but fails with
> ceilometer+mongodb, yes, that would break the ceilometer project's
> gate (but not the integrated gate). It would also indicate a
> substantial abstraction violation within ceilometer. I have proposed
> exactly this model for Ironic's deploy driver testing, and am willing
> to accept the consequences within the project if we break our own
> abstractions.
>
> Regards,
> Devananda
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-11 Thread Jay S. Bryant
John,

I spent the drive to the Minneapolis Airport thinking about this chain
of e-mails.  Hope these thoughts help ...

On Thu, 2014-08-07 at 07:55 -0600, John Griffith wrote:
> 
> 
> 
> 
> On Thu, Aug 7, 2014 at 7:33 AM, Anne Gentle 
> wrote:
> 
> 
> 
> On Thu, Aug 7, 2014 at 8:20 AM, Russell Bryant
>  wrote:
> On 08/07/2014 09:07 AM, Sean Dague wrote:> I think the
> difference is
> slot selection would just be Nova drivers. I
> > think there is an assumption in the old system that
> everyone in Nova
> > core wants to prioritize the blueprints. I think
> there are a bunch of
> > folks in Nova core that are happy having signaling
> from Nova drivers on
> > high priority things to review. (I know I'm in that
> camp.)
> >
> > Lacking that we all have picking algorithms to hack
> away at the 500 open
> > reviews. Which basically means it's a giant random
> queue.
> >
> > Having a few blueprints that *everyone* is looking
> at also has the
> > advantage that the context for the bits in question
> will tend to be
> > loaded into multiple people's heads at the same
> time, so is something
> > that's discussable.
> >
> > Will it fix the issue, not sure, but it's an idea.
> 
> 
> OK, got it.  So, success critically depends on
> nova-core being willing
> to take review direction and priority setting from
> nova-drivers.  That
> sort of assumption is part of why I think agile
> processes typically
> don't work in open source.  We don't have the ability
> to direct people
> with consistent and reliable results.
> 
> I'm afraid if people doing the review are not directly
> involved in at
> least ACKing the selection and commiting to review
> something, putting
> stuff in slots seems futile.
> 
> 
> 
> My original thinking was I'd set aside a "meeting time" to
> review specs especially for doc issues and API designs. What I
> found quickly was that the 400+ queue in one project alone was
> not only daunting but felt like I wasn't going to make a dent
> as a single person, try as I may.
> 
> 
> I did my best but would appreciate any change in process to
> help with prioritization. I'm pretty sure it will help someone
> like me, looking at cross-project queues of specs, to know
> what to review first, second, third, and what to circle back
> on. 
>  
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> Seems everybody that's been around a while has noticed "issues" this
> release and has talked about it; thanks Thierry for putting it
> together so well and kicking off the ML thread here.
> 
> 
> I'd agree with everything that you stated. I've also floated the idea
> this past week with a few members of the core Cinder team to have an
> "every other" release for new driver submissions in Cinder (I'm
> expecting this to be a HUGELY popular proposal [note sarcastic
> tone]).
> 
> 
> There are three things that have just crushed productivity and
> motivation in Cinder this release (IMO):
> 1. Overwhelming number of drivers (tactical contributions)

I totally agree with this statement.  We have so many large patches to
review that it is daunting.  I wish I had a good solution for dealing
with this issue.  I don't think quick reviews and just trying to get
them through is the right answer.  Perhaps a coordinated effort for some
portion of our code sprint days to knock things down?  Split up the work
and then work together to push some through?


> 2. Overwhelming amount of churn, literally hundreds of little changes
> to modify docstrings, comments etc but no real improvements to code

I can understand the frustration here, but do we 

Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Robert Collins
On 12 August 2014 08:35, Zane Bitter  wrote:

> This sounds like the same figure I heard at the design summit; did the DB
> call optimisation work that Steve Baker did immediately after that not have
> any effect?

It helped a lot - I'm not sure where heat tops out now; I'm not aware
of rigorous benchmarks at this stage. I'm hoping we can get a large-scale
integration test (virtual-machine based) running periodically soon.
Ideally we'd have a microtest in the gate.


>> That was the issue. So we fixed that bug, but we never un-reverted
>> the patch that forks enough engines to use up all the CPU's on a box
>> by default. That would likely help a lot with metadata access speed
>> (we could manually do it in TripleO but we tend to push defaults. :)
>
>
> Right, and we decided we wouldn't because it's wrong to do that to people by
> default. In some cases the optimal running configuration for TripleO will
> differ from the friendliest out-of-the-box configuration for Heat users in
> general, and in those cases - of which this is one - TripleO will need to
> specify the configuration.

So - thanks for being clear about this (is it in the deployer docs for heat?).

That said, nova, neutron and other projects are defaulting to
one-worker-per-core, so I'm surprised that heat considers this
inappropriate, but our other APIs consider it appropriate :) What's
different about heat that makes this a bad default?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Anita Kuno
On 08/11/2014 05:05 PM, Edgar Magana wrote:
> Cisco Folks,
> 
> I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?
> 
> Edgar
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
I don't ever recall seeing that as a name of a third party gerrit
account in my list[0], Edgar.

Do you happen to have a link to a patchset that has that name attached
to a comment?

Thanks,
Anita.

[0] http://paste.openstack.org/show/93571/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Eoghan Glynn


> >> So would we:
> >>
> >>  (a) add a new integrated-gate-ceilometer project-template to [1],
> >>  in the style of integrated-gate-neutron or integrated-gate-sahara,
> >>  which would replicate the main integrated-gate template but with
> >>  the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)
> >>
> >> or:
> >>
> >>  (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
> >>  the experimental column[2] in the openstack-ceilometer project,
> >>  to the gate column on that project
> >>
> >> or:
> >>
> >>  (c) something else
> >>
> >> Please excuse the ignorance of gate mechanics inherent in that question.
> >
> >
> >
> > Correct, AFAIK (a) or (b) would be sufficient.
> >
> > There is another option, which is to make the mongodb version the default
> > in integrated-gate and only run SQLA on ceilometer.
> >
> 
> Joe,
> 
> I believe this last option is equivalent to making mongodb the
> recommended implementation by virtue of suddenly being the most tested
> implementation. I would prefer not to see that.
> 
> Eoghan,
> 
> IIUC (and I am not an infra expert) I would suggest (b) since this
> keeps the mongo tests within the ceilometer project only, which I
> think is fine from a "what we test is what we recommend" standpoint.

Fair enough ... though I think (a) would also have that quality
of encapsulation, as long as the new integrated-gate-ceilometer
project-template was only referenced by the openstack/ceilometer
project.

I'm not sure it makes a great deal of difference though, so would
be happy enough to go with either (b) or (a).

> Also, if there is a situation where a change in Nova passes with
> ceilometer+mysql and thus lands in Nova, but fails with
> ceilometer+mongodb, yes, that would break the ceilometer project's
> gate (but not the integrated gate). It would also indicate a
> substantial abstraction violation within ceilometer. I have proposed
> exactly this model for Ironic's deploy driver testing, and am willing
> to accept the consequences within the project if we break our own
> abstractions.

Fair point.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Zane Bitter

On 11/08/14 16:21, Matthew Treinish wrote:

I'm sorry, but the fact that the
docs in the rally tree has a section for user testimonials [4] I feel speaks a
lot about the intent of the project.


What... does that even mean?

"They seem like just the type of guys that would help Keystone with 
performance benchmarking!"

"Burn them!"


I apologize if any of this is somewhat incoherent, I'm still a bit jet-lagged
so I'm not sure that I'm making much sense.


Ah.


[4] http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-11 Thread Steve Baker
On 09/08/14 11:15, Zane Bitter wrote:
> On 08/08/14 11:07, Tomas Sedovic wrote:
>> On 08/08/14 00:53, Zane Bitter wrote:
>>> On 07/08/14 13:22, Tomas Sedovic wrote:
 Hi all,

 I have a ResourceGroup which wraps a custom resource defined in
 another
 template:

   servers:
 type: OS::Heat::ResourceGroup
 properties:
   count: 10
   resource_def:
 type: my_custom_server
 properties:
   prop_1: "..."
   prop_2: "..."
   ...

 And a corresponding provider template and environment file.

 Now I can get say the list of IP addresses or any custom value of each
 server from the ResourceGroup by using `{get_attr: [servers,
 ip_address]}` and outputs defined in the provider template.

 But I can't figure out how to pass that list back to each server in
 the
 group.

 This is something we use in TripleO for things like building a MySQL
 cluster, where each node in the cluster (the ResourceGroup) needs the
 addresses of all the other nodes.
>>>
>>> Yeah, this is kind of the perpetual problem with clusters. I've been
>>> hoping that DNSaaS will show up in OpenStack soon and that that will be
>>> a way to fix this issue.
>>>
>>> The other option is to have the cluster members discover each other
>>> somehow (mDNS?), but people seem loath to do that.
>>>
 Right now, we have the servers ungrouped in the top-level template
 so we
 can build this list manually. But if we move to ResourceGroups (or any
 other scaling mechanism, I think), this is no longer possible.
>>>
>>> So I believe the current solution is to abuse a Launch Config resource
>>> as a store for the data, and then later retrieve it somehow? Possibly
>>> you could do something along similar lines, but it's unclear how the
>>> 'later retrieval' part would work... presumably it would have to
>>> involve
>>> something outside of Heat closing the loop :(
>>
>> Do you mean AWS::AutoScaling::LaunchConfiguration? I'm having trouble
>> figuring out how would that work. LaunchConfig represents an instance,
>> right?
>>
>>>
 We can't pass the list to ResourceGroup's `resource_def` section
 because
 that causes a circular dependency.

 And I'm not aware of a way to attach a SoftwareConfig to a
 ResourceGroup. SoftwareDeployment only allows attaching a config to a
 single server.
>>>
>>> Yeah, and that would be a tricky thing to implement well, because a
>>> resource group may not be a group of servers (but in many cases it may
>>> be a group of nested stacks that each contain one or more servers, and
>>> you'd want to be able to handle that too).
>>
>> Yeah, I worried about that, too :-(.
>>
>> Here's a proposal that might actually work, though:
>>
>> The provider resource exposes the reference to its inner instance by
>> declaring it as one of its outputs. A SoftwareDeployment would learn to
>> accept a list of Nova servers, too.
>>
>> Provider template:
>>
>>  resources:
>>my_server:
>>  type: OS::Nova::Server
>>  properties:
>>...
>>
>>... (some other resource hidden in the provider template)
>>
>>  outputs:
>>inner_server:
>>  value: {get_resource: my_server}
>>ip_address:
>>  value: {get_attr: [controller_server, networks, private, 0]}
>>
>> Based on my limited testing, this already makes it possible to use the
>> inner server with a SoftwareDeployment from another template that uses
>> "my_server" as a provider resource.
>>
>> E.g.:
>>
>>  a_cluster_of_my_servers:
>>type: OS::Heat::ResourceGroup
>>properties:
>>  count: 10
>>  resource_def:
>>type: custom::my_server
>>...
>>
>>  some_deploy:
>>type: OS::Heat::StructuredDeployment
>>properties:
>>  server: {get_attr: [a_cluster_of_my_servers,
>> resource.0.inner_server]}
>>  config: {get_resource: some_config}
>>
>>
>> So what if we allowed SoftwareDeployment to accept a list of servers in
>> addition to accepting just one server? Or add another resource that does
>> that.
>
> I approve of that in principle. Only Steve Baker can tell us for sure
> if there are any technical roadblocks in the way of that, but I don't
> see any.
>
> Maybe if we had a new resource type that was internally implemented as
> a nested stack... that might give us a way of tracking the individual
> deployment statuses for free.
>
> cheers,
> Zane.
>
>> Then we could do:
>>
>>  mysql_cluster_deployment:
>>type: OS::Heat::StructuredDeployment
>>properties:
>>  server_list: {get_attr: [a_cluster_of_my_servers,
>> inner_server]}
>>  config: {get_resource: mysql_cluster_config}
>>  input_values:
>>cluster_ip_addresses: {get_attr: [a_cluster_of_my_servers,
>> i

[openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore

2014-08-11 Thread Edgar Magana
Cisco Folks,

I don't see the CI for Cisco NX-OS anymore. Is this being deprecated?

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Jay Pipes

On 08/11/2014 05:58 PM, Jay Lau wrote:

I think the metadata in server group is an important feature and it
might be used by
https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group

Actually, we are now doing internal development for the above bp and want
to contribute this back to the community later. We are now setting hard/soft
flags in server group metadata to identify if the server group wants
hard/soft affinity.

I prefer Dan's first suggestion, what do you think?
=
If we care to have this functionality, then I propose we change the
attribute on the object (we can handle this with versioning) and reflect
it as "metadata" in the API.
=


-1

If hard and soft really need to be supported, then 
this should be a field in the instance_groups table, not some JSON blob 
in a random metadata field.
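
(Purely to illustrate -- a hypothetical sketch, not the actual Nova schema
or a proposed migration -- what "a field in the instance_groups table"
might look like, as opposed to a blob:)

    # Hypothetical sketch only; column and enum names are invented.
    from sqlalchemy import Column, Enum, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class InstanceGroup(Base):
        __tablename__ = 'instance_groups'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36), nullable=False)
        # 'hard': fail the boot if the affinity policy cannot be satisfied;
        # 'soft': treat the policy as a scheduler preference only.
        policy_mode = Column(Enum('hard', 'soft', name='ig_policy_mode'),
                             nullable=False, default='hard')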


Better yet, get rid of the instance_groups table altogether and have 
"near", "not-near", "hard", and "soft" be launch modifiers similar to 
the instance type. IMO, there's really no need to store a named group at 
all, but that goes back to my original ML post about the server groups 
topic:


https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg23055.html

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Steve Baker
On 12/08/14 06:20, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-08-11 08:16:56 -0700:
>> On 11/08/14 10:46, Clint Byrum wrote:
>>> Right now we're stuck with an update that just doesn't work. It isn't
>>> just about update-failure-recovery, which is coming along nicely, but
>>> it is also about the lack of signals to control rebuild, poor support
>>> for addressing machines as groups, and unacceptable performance in
>>> large stacks.
>> Are there blueprints/bugs filed for all of these issues?
>>
> Convergence addresses the poor performance for large stacks in general.
> We also have this:
>
> https://bugs.launchpad.net/heat/+bug/1306743
>
> Which shows how slow metadata access can get. I have worked on patches
> but haven't been able to complete them. We made big strides but we are
> at a point where 40 nodes polling Heat every 30s is too much for one CPU
> to handle. When we scaled Heat out onto more CPUs on one box by forking
> we ran into eventlet issues. We also ran into issues because even with
> many processes we can only use one to resolve templates for a single
> stack during update, which was also excessively slow.
>
> We haven't been able to come back around to those yet, but you can see
> where this has turned into a bit of a rat hole of optimization.

> action-aware-sw-config is sort of what we want for rebuild. We
> collaborated with the trove devs on how to also address it for resize
> a while back but I have lost track of that work as it has taken a back
> seat to more pressing issues.

We were discussing offloading metadata polling to a Swift tempURL
object; that would certainly address the metadata polling scaling problem.

But also, this could help with out-of-band ansible workflow too.
Anything (ie, Ansible) could push changed data to the swift object too.
And if you wanted to ensure that heat didn't overwrite that during an
accidental heat stack-update then you could configure os-collect-config
to poll from 2 swift objects, one for heat and one for manual updates.
The manual object could take precedence over the heat one for metadata
merging, which could give you a nice fine-grained override mechanism.
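
For illustration, the merge itself could be as simple as "later source wins"
(a rough sketch only, nothing to do with os-collect-config's real code --
the dict contents below are made up):

    # Toy precedence merge: manual/out-of-band metadata overrides Heat's.
    def merge_metadata(heat_md, manual_md):
        merged = dict(heat_md)     # start from what Heat pushed
        merged.update(manual_md)   # manually-pushed keys win
        return merged

    heat_md = {"ntp_server": "10.0.0.1", "role": "compute"}
    manual_md = {"ntp_server": "10.0.0.99"}   # pushed by Ansible, say

    print(merge_metadata(heat_md, manual_md))
    # {'ntp_server': '10.0.0.99', 'role': 'compute'}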

> Addressing groups is a general problem that I've had a hard time
> articulating in the past. Tomas Sedovic has done a good job with this
> TripleO spec, but I don't know that we've asked for an explicit change
> in a bug or spec in Heat just yet:
>
> https://review.openstack.org/#/c/97939/
>
> There are a number of other issues noted in that spec which are already
> addressed in Heat, but require refactoring in TripleO's templates and
> tools, and that work continues.
I'll follow up the potential solutions in the other thread:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/042313.html

> The point remains: we need something that works now, and doing an
> alternate implementation for updates is actually faster than addressing
> all of these issues.
Thanks, that was a good summary of the issues, and I do appreciate the
need for both tactical and strategic solutions.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Devananda van der Veen
On Mon, Aug 11, 2014 at 3:27 PM, Joe Gordon  wrote:
>
>
>
> On Mon, Aug 11, 2014 at 3:07 PM, Eoghan Glynn  wrote:
>>
>>
>>
>> > Ignoring the question of is it ok to say: 'to run ceilometer in any sort
>> > of
>> > non-trivial deployment you must manage yet another underlying service,
>> > mongodb' I would prefer not adding an additional gate variant to all
>> > projects.
>> > With the effort to reduce the number of gate variants we have [0] I
>> > would
>> > prefer to see just ceilometer gate on both mongodb and sqlalchemy and
>> > the
>> > main integrated gate [1] pick just one.
>>
>> Just checking to see that I fully understand what you mean there, Joe.
>>
>> So would we:
>>
>>  (a) add a new integrated-gate-ceilometer project-template to [1],
>>  in the style of integrated-gate-neutron or integrated-gate-sahara,
>>  which would replicate the main integrated-gate template but with
>>  the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)
>>
>> or:
>>
>>  (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
>>  the experimental column[2] in the openstack-ceilometer project,
>>  to the gate column on that project
>>
>> or:
>>
>>  (c) something else
>>
>> Please excuse the ignorance of gate mechanics inherent in that question.
>
>
>
> Correct, AFAIK (a) or (b) would be sufficient.
>
> There is another option, which is make the mongodb version the default in
> integrated-gate and only run SQLA on ceilometer.
>

Joe,

I believe this last option is equivalent to making mongodb the
recommended implementation by virtue of suddenly being the most tested
implementation. I would prefer not to see that.

Eoghan,

IIUC (and I am not an infra expert) I would suggest (b) since this
keeps the mongo tests within the ceilometer project only, which I
think is fine from a "what we test is what we recommend" standpoint.

Also, if there is a situation where a change in Nova passes with
ceilometer+mysql and thus lands in Nova, but fails with
ceilometer+mongodb, yes, that would break the ceilometer project's
gate (but not the integrated gate). It would also indicate a
substantial abstraction violation within ceilometer. I have proposed
exactly this model for Ironic's deploy driver testing, and am willing
to accept the consequences within the project if we break our own
abstractions.

Regards,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] reverting the HOT migration? // dealing with lockstep changes

2014-08-11 Thread Robert Collins
On 12 August 2014 10:27, Zane Bitter  wrote:
> On 11/08/14 15:24, Dan Prince wrote:
>>
>> Hmmm. We blocked a good bit of changes to get these HOT templates in so
>> I hate to see us revert them. Also, It isn't clear to me how much work
>> it would be to fully support the non-HOT to HOT templates upgrade path.
>> How much work is this? And is that something we really want to spend
>> time on instead of all the other things?
>
>
> The fix in Heat is going through the gate as we speak, if that helps:
>
> https://review.openstack.org/#/c/112936/
>
> (BTW it would be great if threads about critical bugs in Heat had [Heat] in
> the subject - I almost missed this one.)

Sorry, was thinking this was more a TripleO issue - but I'm glad it's a
shallow heat issue instead :)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reverting the HOT migration? // dealing with lockstep changes

2014-08-11 Thread Robert Collins
On 12 August 2014 07:24, Dan Prince  wrote:
> On Tue, 2014-08-12 at 06:58 +1200, Robert Collins wrote:
>> Hi, so shortly after the HOT migration landed, we hit
>> https://bugs.launchpad.net/tripleo/+bug/1354305 which is that on even
>> quite recently deployed clouds, the migrated templates were just too
>> new. A partial revert (of just the list_join bit) fixes that, but a
>> deeper problem emerged which is that stack-update to get from a
>> non-HOT to HOT template appears broken
>> (https://bugs.launchpad.net/heat/+bug/1354962).
>>
>> I think we need to revert the HOT migration today, as forcing a
>> scorched earth recreation of a cloud is not a great answer for folk
>> that have deployed versions - its a backwards compat issue.
>>
>> Its true that our release as of icehouse isn't  really useable, so we
>> could try to wiggle our way past this one, but I think as the first
>> real test of our new backwards compat policy, that that would be a
>> mistake.
>
> Hmmm. We blocked a good bit of changes to get these HOT templates in so
> I hate to see us revert them. Also, It isn't clear to me how much work
> it would be to fully support the non-HOT to HOT templates upgrade path.
> How much work is this? And is that something we really want to spend
> time on instead of all the other things?

Following up with Heat folk, apparently the non-HOT->HOTness was a
distraction - I'll validate this on the hp1 region asap, since I too
would rather not revert stuff.

We may need to document a two-step upgrade process for the UC - step 1:
upgrade the UC image with the *same* template; step 2: use the new template
to get the new functionality.

> Why not just create a branch of the old templates that works on the
> existing underclouds? Users would then need to use these templates as a
> one-time upgrade step to be able to upgrade to a heat HOT capable
> undercloud first.

We could. But since we compile things, we could just keep a copy of
the last deployed-with-template.

> With regards to the seed...
>
> Would a tool that allowed us to migrate the stateful data from an old
> seed to a new seed be a better use of time here? A re-seeder might be a
> useful tool to future proof upgrade paths that rely on the software
> versions in the seed (Heat etc.) that aren't easily up-gradable.
> Packages would help here too, but not everyone uses them...

The plan we discussed at the midcycle to use a real stateful partition
for the seed will do this I think.

-Rob
-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-11 Thread Duncan Thomas
On 8 August 2014 07:55, Dean Troyer  wrote:
> In cinderclient I think you're stuck with size as a mandatory argument to
> the 'cinder create' command, as you must be backward-compatible for at least
> a deprecation period.[0]

Making a previously mandatory parameter optional, at least on the
command line, doesn't break backward compatibility though, does it?
Everything that worked before will still work.
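
To illustrate (plain argparse rather than the real cinderclient shell code,
so treat the details as a sketch): making the positional optional leaves
every existing invocation working.

    # Sketch only -- not cinderclient's actual parser.
    import argparse

    parser = argparse.ArgumentParser(prog='cinder create')
    parser.add_argument('size', type=int, nargs='?', default=None,
                        help='Volume size in GB; optional when a source '
                             'snapshot or volume implies it')
    parser.add_argument('--snapshot-id', default=None)

    print(parser.parse_args(['10']))                       # old style still works
    print(parser.parse_args(['--snapshot-id', 'abc123']))  # size now omissible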

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Joe Gordon
On Mon, Aug 11, 2014 at 3:07 PM, Eoghan Glynn  wrote:

>
>
> > Ignoring the question of is it ok to say: 'to run ceilometer in any sort
> of
> > non-trivial deployment you must manage yet another underlying service,
> > mongodb' I would prefer not adding an additional gate variant to all
> projects.
> > With the effort to reduce the number of gate variants we have [0] I would
> > prefer to see just ceilometer gate on both mongodb and sqlalchemy and the
> > main integrated gate [1] pick just one.
>
> Just checking to see that I fully understand what you mean there, Joe.
>
> So would we:
>
>  (a) add a new integrated-gate-ceilometer project-template to [1],
>  in the style of integrated-gate-neutron or integrated-gate-sahara,
>  which would replicate the main integrated-gate template but with
>  the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)
>
> or:
>
>  (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
>  the experimental column[2] in the openstack-ceilometer project,
>  to the gate column on that project
>
> or:
>
>  (c) something else
>
> Please excuse the ignorance of gate mechanics inherent in that question.
>


Correct, AFAIK (a) or (b) would be sufficient.

There is another option, which is make the mongodb version the default in
integrated-gate and only run SQLA on ceilometer.



>
> Cheers,
> Eoghan
>
>
> [1]
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238
> [2]
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n801
>
>
> > [0]
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
> > [1]
> >
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238
> >
> >
> >
> > Does that work for you Devananda?
> >
> > Cheers,
> > Eoghan
> >
> > > -Deva
> > >
> > >
> > > [1]
> > >
> https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage
> > >
> > > [2]
> > >
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html
> > > is a very articulate example of this objection
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] reverting the HOT migration? // dealing with lockstep changes

2014-08-11 Thread Zane Bitter

On 11/08/14 15:24, Dan Prince wrote:

Hmmm. We blocked a good bit of changes to get these HOT templates in so
I hate to see us revert them. Also, It isn't clear to me how much work
it would be to fully support the non-HOT to HOT templates upgrade path.
How much work is this? And is that something we really want to spend
time on instead of all the other things?


The fix in Heat is going through the gate as we speak, if that helps:

https://review.openstack.org/#/c/112936/

(BTW it would be great if threads about critical bugs in Heat had [Heat] 
in the subject - I almost missed this one.)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-11 Thread Joe Gordon
On Sun, Aug 10, 2014 at 11:59 PM, Mark McLoughlin  wrote:

> On Fri, 2014-08-08 at 09:06 -0400, Russell Bryant wrote:
> > On 08/07/2014 08:06 PM, Michael Still wrote:
> > > It seems to me that the tension here is that there are groups who
> > > would really like to use features in newer libvirts that we don't CI
> > > on in the gate. Is it naive to think that a possible solution here is
> > > to do the following:
> > >
> > >  - revert the libvirt version_cap flag
> >
> > I don't feel strongly either way on this.  It seemed useful at the time
> > for being able to decouple upgrading libvirt and enabling features that
> > come with that.
>
> Right, I suggested the flag as a more deliberate way of avoiding the
> issue that was previously seen in the gate with live snapshots. I still
> think it's a pretty elegant and useful little feature, and don't think
> we need to use it as proxy battle over testing requirements for new
> libvirt features.
>

Mark,

I am not sure if I follow.  The gate issue with live snapshots has been
worked around by turning it off [0], so presumably this patch is forward
facing.  I fail to see how this patch is needed to help the gate in the
future. Wouldn't it just delay the issues until we change the version_cap?

The issue I see with the libvirt version_cap [1] is best captured in its
commit message: "The end user can override the limit if they wish to opt-in
to use of untested features via the 'version_cap' setting in the 'libvirt'
group." This goes against the very direction nova has been moving in for
some time now. We have been moving away from merging untested (re: no
integration testing) features.  This patch changes the very direction the
project is going in over testing without so much as a discussion. While I
think it may be time that we revisited this discussion, the discussion
needs to happen before any patches are merged.

I am less concerned about the contents of this patch, and more concerned
with how such a big de facto change in nova policy (we accept untested code
sometimes) was made without any discussion or consensus. In your comment on the
revert [2], you say the 'whether not-CI-tested features should be allowed
to be merged' debate is 'clearly unresolved.' How did you get to that
conclusion? This was never brought up in the mid-cycles as an unresolved
topic to be discussed. In our specs template we say "Is this untestable in
gate given current limitations (specific hardware / software configurations
available)? If so, are there mitigation plans (3rd party testing, gate
enhancements, etc)" [3].  We have been blocking untested features for some
time now.

I am further perplexed by what Daniel Berrange, the patch author, meant
when he commented [2] "Regardless of the outcome of the testing discussion
we believe this is a useful feature to have." Who is 'we'? Because I don't
see how that can be nova-core or even nova-specs-core, especially
considering how many members of those groups are +2 on the revert. So if
'we' is neither of those groups then who is 'we'?

[0] https://review.openstack.org/#/c/102643/4/nova/virt/libvirt/driver.py
[1] https://review.openstack.org/#/c/107119/
[2] https://review.openstack.org/#/c/110754/
[3]
http://specs.openstack.org/openstack/nova-specs/specs/template.html#testing




>
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-08-11 13:35:44 -0700:
> On 11/08/14 14:49, Clint Byrum wrote:
> > Excerpts from Steven Hardy's message of 2014-08-11 11:40:07 -0700:
> >> On Mon, Aug 11, 2014 at 11:20:50AM -0700, Clint Byrum wrote:
> >>> Excerpts from Zane Bitter's message of 2014-08-11 08:16:56 -0700:
>  On 11/08/14 10:46, Clint Byrum wrote:
> > Right now we're stuck with an update that just doesn't work. It isn't
> > just about update-failure-recovery, which is coming along nicely, but
> > it is also about the lack of signals to control rebuild, poor support
> > for addressing machines as groups, and unacceptable performance in
> > large stacks.
> 
>  Are there blueprints/bugs filed for all of these issues?
> 
> >>>
> >>> Convergence addresses the poor performance for large stacks in general.
> >>> We also have this:
> >>>
> >>> https://bugs.launchpad.net/heat/+bug/1306743
> >>>
> >>> Which shows how slow metadata access can get. I have worked on patches
> >>> but haven't been able to complete them. We made big strides but we are
> >>> at a point where 40 nodes polling Heat every 30s is too much for one CPU
> 
> This sounds like the same figure I heard at the design summit; did the 
> DB call optimisation work that Steve Baker did immediately after that 
> not have any effect?
> 

Steve's work got us to 40. From 7.

> >>> to handle. When we scaled Heat out onto more CPUs on one box by forking
> >>> we ran into eventlet issues. We also ran into issues because even with
> >>> many processes we can only use one to resolve templates for a single
> >>> stack during update, which was also excessively slow.
> >>
> >> Related to this, and a discussion we had recently at the TripleO meetup is
> >> this spec I raised today:
> >>
> >> https://review.openstack.org/#/c/113296/
> >>
> >> It's following up on the idea that we could potentially address (or at
> >> least mitigate, pending the fully convergence-ified heat) some of these
> >> scalability concerns, if TripleO moves from the one-giant-template model
> >> to a more modular nested-stack/provider model (e.g what Tomas has been
> >> working on)
> >>
> >> I've not got into enough detail on that yet to be sure if it's acheivable
> >> for Juno, but it seems initially to be complex-but-doable.
> >>
> >> I'd welcome feedback on that idea and how it may fit in with the more
> >> granular convergence-engine model.
> >>
> >> Can you link to the eventlet/forking issues bug please?  I thought since
> >> bug #1321303 was fixed that multiple engines and multiple workers should
> >> work OK, and obviously that being true is a precondition to expending
> >> significant effort on the nested stack decoupling plan above.
> >>
> >
> > That was the issue. So we fixed that bug, but we never un-reverted
> > the patch that forks enough engines to use up all the CPU's on a box
> > by default. That would likely help a lot with metadata access speed
> > (we could manually do it in TripleO but we tend to push defaults. :)
> 
> Right, and we decided we wouldn't because it's wrong to do that to 
> people by default. In some cases the optimal running configuration for 
> TripleO will differ from the friendliest out-of-the-box configuration 
> for Heat users in general, and in those cases - of which this is one - 
> TripleO will need to specify the configuration.
> 

Whether or not the default should be to fork 1 process per CPU is a
debate for another time. The point is, we can safely use the forking in
Heat now to perhaps improve performance of metadata polling.
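
(For anyone wanting to experiment: the knob in question should just be the
engine worker count in heat.conf -- the option name below is from memory,
so double-check it against your Heat tree before relying on it:)

    [DEFAULT]
    # Fork one heat-engine worker per CPU so that os-collect-config
    # metadata polling is spread across cores rather than serialized
    # on a single process.
    num_engine_workers = 8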

Chasing that, and other optimizations, has not led us to a place where
we can get to, say, 100 real nodes _today_. We're chasing another way to
get to the scale and capability we need _today_, in much the same way
we did with merge.py. We'll find the way to get it done more elegantly
as time permits.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Eoghan Glynn


> Ignoring the question of is it ok to say: 'to run ceilometer in any sort of
> non-trivial deployment you must manage yet another underlying service,
> mongodb' I would prefer not adding an additional gate variant to all projects.
> With the effort to reduce the number of gate variants we have [0] I would
> prefer to see just ceilometer gate on both mongodb and sqlalchemy and the
> main integrated gate [1] pick just one.

Just checking to see that I fully understand what you mean there, Joe.

So would we:

 (a) add a new integrated-gate-ceilometer project-template to [1],
 in the style of integrated-gate-neutron or integrated-gate-sahara,
 which would replicate the main integrated-gate template but with
 the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)

or:

 (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
 the experimental column[2] in the openstack-ceilometer project,
 to the gate column on that project

or:

 (c) something else

Please excuse the ignorance of gate mechanics inherent in that question.
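
If it helps, my rough mental model of option (b) -- very much a sketch,
given said ignorance, so corrections welcome -- is just moving the job
between columns for the ceilometer project in layout.yaml:

    projects:
      - name: openstack/ceilometer
        check:
          - gate-tempest-dsvm-ceilometer-mongodb-full
        gate:
          - gate-tempest-dsvm-ceilometer-mongodb-full
        # ... and dropping it from the 'experimental' list where it
        # currently lives.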

Cheers,
Eoghan


[1] 
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238
[2] 
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n801

 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
> [1]
> http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238
> 
> 
> 
> Does that work for you Devananda?
> 
> Cheers,
> Eoghan
> 
> > -Deva
> > 
> > 
> > [1]
> > https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage
> > 
> > [2]
> > http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html
> > is a very articulate example of this objection
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Jay Lau
I think the metadata in server group is an important feature and it might
be used by
https://blueprints.launchpad.net/nova/+spec/soft-affinity-for-server-group

Actually, we are now doing internal development for the above bp and want to
contribute this back to the community later. We are now setting hard/soft flags
in server group metadata to identify if the server group wants hard/soft
affinity.

I prefer Dan's first suggestion, what do you think?
=
If we care to have this functionality, then I propose we change the
attribute on the object (we can handle this with versioning) and reflect
it as "metadata" in the API.
=

Thanks!


2014-08-12 0:50 GMT+08:00 Sylvain Bauza :

>
> Le 11/08/2014 18:03, Gary Kotton a écrit :
>
>
>> On 8/11/14, 6:06 PM, "Dan Smith"  wrote:
>>
>>  As the person who -2'd the review, I'm thankful you raised this issue on
 the ML, Jay. Much appreciated.

>>> The "metadetails" term isn't being invented in this patch, of course. I
>>> originally complained about the difference when this was being added:
>>>
>>> https://review.openstack.org/#/c/109505/1/nova/api/
>>> openstack/compute/contr
>>> ib/server_groups.py,cm
>>>
>>> As best I can tell, the response in that patch set about why it's being
>>> translated is wrong (backwards). I expect that the API extension at the
>>> time called it "metadetails" and they decided to make the object the
>>> same and do the translation there.
>>>
>>>  >From what I can tell, the actual server_group API extension that made
>> it
>>
>>> into the tree never got the ability to set/change/etc the
>>> metadata/metadetails anyway, so there's no reason (AFAICT) to add it in
>>> wrongly.
>>>
>>> If we care to have this functionality, then I propose we change the
>>> attribute on the object (we can handle this with versioning) and reflect
>>> it as "metadata" in the API.
>>>
>>> However, I have to ask: do we really need another distinct metadata
>>> store attached to server_groups? If not, how about we just remove it
>>>
>> >from the database and the object, clean up the bit of residue that is
>>
>>> still in the API extension and be done with it?
>>>
>> The initial version of the feature did not make use of this. The reason
>> was that we chose a very
>> limited subset to be used, that is, affinity and anti-affinity. Moving
>> forwards we would like to implement
>> a number of different policies with this. We can drop it at the moment due
>> to the fact that it is not used.
>>
>> I think that Yathi may be using this for the constraint scheduler. But I am
>> not 100% sure.
>>
>
>
> Unless I'm wrong, I can't see where this metadata is being used in the
> scheduler, either for filtering or for other reasons.
>
> So, please give us context why this is currently useful ?
>
> If this is something for the next future, I would love discussing it with
> regards to the current split.
>
>
> Thanks,
> -Sylvain
>
>
>  --Dan
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Eoghan Glynn


> >> On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
>  Hi Eoghan,
> 
>  Thanks for the note below. However, one thing the overview below does
>  not
>  cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged.
>  Many
>  folks feel that this technology is a viable solution for the problem
>  space
>  discussed below.
> >>> Great question Brad!
> >>>
> >>> As it happens we've been working closely with Paul Dix (lead
> >>> developer of InfluxDB) to ensure that this metrics store would be
> >>> usable as a backend driver. That conversation actually kicked off
> >>> at the Juno summit in Atlanta, but it really got off the ground
> >>> at our mid-cycle meet-up in Paris in early July.
> >> ...
> >>> The InfluxDB folks have committed to implementing those features
> >>> over July and August, and have made concrete progress on that score.
> >>>
> >>> I hope that provides enough detail to answer your question?
> >> I guess it begs the question, if influxdb will do what you want and it's
> >> open source (MIT) as well as commercially supported, how does gnocchi
> >> differentiate?
> > Hi Sandy,
> >
> > One of the ideas behind gnocchi is to combine resource representation
> > and timeseries-oriented storage of metric data, providing an efficient
> > and convenient way to query for metric data associated with individual
> > resources.
> 
> Doesn't InfluxDB do the same?

InfluxDB stores timeseries data primarily.

Gnocchi is intended to store strongly-typed OpenStack resource
representations (instances, images, etc.) in addition to providing
a means to access timeseries data associated with those resources.

So to answer your question: no, IIUC, it doesn't do the same thing.

Though of course these things are not a million miles from each
other, one is just a step up in the abstraction stack, having a
wider and more OpenStack-specific scope.
 
> > Also, having an API layered above the storage driver avoids locking in
> > directly with a particular metrics-oriented DB, allowing for the
> > potential to support multiple storage driver options (e.g. to choose
> > between a canonical implementation based on Swift, an InfluxDB driver,
> > and an OpenTSDB driver, say).
> Right, I'm not suggesting to remove the storage abstraction layer. I'm
> just curious what gnocchi does better/different than InfluxDB?
> 
> Or, am I missing the objective here and gnocchi is the abstraction layer
> and not an influxdb alternative? If so, my apologies for the confusion.

No worries :)

The intention is for gnocchi to provide an abstraction over
timeseries, aggregation, downsampling and archiving/retention
policies, with a number of drivers mapping onto real timeseries
storage options. One of those drivers is based on Swift, another
is in the works based on InfluxDB, and a third based on OpenTSDB
has also been proposed.
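
To make that concrete, the seam looks roughly like the following -- an
illustrative sketch only, with invented names rather than gnocchi's real
classes or API:

    import abc

    class TimeSeriesDriver(abc.ABC):
        """What the REST layer talks to; Swift/InfluxDB/OpenTSDB sit behind it."""

        @abc.abstractmethod
        def add_measures(self, metric_id, measures):
            """Store an iterable of (timestamp, value) pairs for a metric."""

        @abc.abstractmethod
        def get_measures(self, metric_id, start=None, stop=None):
            """Return datapoints for a metric, optionally bounded in time."""

    class InMemoryDriver(TimeSeriesDriver):
        """Toy stand-in for a real backend; no downsampling or retention."""

        def __init__(self):
            self._data = {}

        def add_measures(self, metric_id, measures):
            self._data.setdefault(metric_id, []).extend(measures)

        def get_measures(self, metric_id, start=None, stop=None):
            return [(ts, v) for ts, v in sorted(self._data.get(metric_id, []))
                    if (start is None or ts >= start)
                    and (stop is None or ts < stop)]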

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Boris Pavlovic
Hi stackers,


I would like to put some more details on current situation.

>
> The issue is with what Rally is in its
> current form. Its scope is too large and monolithic, and it duplicates
> much of
> the functionality we either already have or need in current QA or Infra
> projects. But, nothing in Rally is designed to be used outside of it. I
> actually
> feel pretty strongly that in its current form Rally should *not* be a
> part of
> any OpenStack program


Rally is not just a bunch of scripts like tempest; it's more like Nova,
Cinder, and other projects that work out of the box and resolve operator &
dev use cases in one click.

This architectural design is the main key to Rally's success, and why we have
such large adoption and community.

So I'm opposed to this option. It feels to me like this is only on the table
> because the Rally team has not done a great job of communicating or
> working with
> anyone else except for when it comes to either push using Rally, or this
> conversation about adopting Rally.


Actually, the Rally team has already done a bunch of useful work, including
cross-project and infra stuff.

Keystone, Glance, Cinder, Neutron and Heat are running Rally performance
jobs that can be used for performance testing, benchmarking and regression
testing (already now). These jobs support in-tree plugins for all
components (scenarios, load generators, benchmark context) and they can use
Rally fully without interaction with the Rally team at all. More about these
jobs:
https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
So I really don't see anything like this in tempest (even in the foreseeable
future).


I would like to mention work on OSprofiler (cross service/project profiler)
https://github.com/stackforge/osprofiler (that was done by Rally team)
https://review.openstack.org/#/c/105096/
(btw Glance already accepted it https://review.openstack.org/#/c/105635/ )


My primary concern is the timing for doing all of this work. We're
> approaching
> J-3 and honestly this feels like it would take the better part of an entire
> cycle to analyze, plan, and then implement. Starting an analysis of how to
> do
> all of the work at this point I feel would just distract everyone from
> completing our dev goals for the cycle. Probably the Rally team, if they
> want
> to move forward here, should start the analysis of this structural split
> and we
> can all pick this up together post-juno



Matt, Sean - seriously, community is about convincing people, not about
forcing people to do something against their will.  You are making huge
architectural decisions without deep knowledge of what Rally is, what its
use cases, road map, goals and audience are.

IMHO, community is about convincing people. So the QA program should
convince the Rally team (at least me) to make such changes. The key to
convincing me is to explain how this will help OpenStack perform better.

Currently Rally team see a lot of issues related to this decision:

1) It breaks already existing performance jobs (Heat, Glance, Cinder,
Neutron, Keystone)

2) It breaks functional testing of Rally (which is already done in the gates)

3) It makes the Rally team dependent on Tempest throughput, and what I heard
multiple times from the QA team is that performance work is very low priority
and that the major goal is to keep the gates working. So it will slow down the
work of the performance team.

4) It raises a ton of questions about what should be in Rally and what should
be in Tempest, which are at the moment largely resolved:
https://docs.google.com/a/pavlovic.ru/document/d/137zbrz0KJd6uZwoZEu4BkdKiR_Diobantu0GduS7HnA/edit#heading=h.9ephr9df0new

5) It breaks an existing OpenStack team that is working 100% on performance,
regression and SLA topics. Sorry, but there is no such team in Tempest; this
directory is not actively developed:
https://github.com/openstack/tempest/commits/master/tempest/stress


Matt, Sean, David - what are the real goals of merging Rally into Tempest?
I see huge harm for OpenStack and the companies that are using Rally, and
I don't actually see any benefits.
What I have heard so far is something like "this decision will make tempest
better"...
But do you care more about Tempest than about OpenStack?


Best regards,
Boris Pavlovic




On Tue, Aug 12, 2014 at 12:37 AM, David Kranz  wrote:

>  On 08/11/2014 04:21 PM, Matthew Treinish wrote:
>
> I apologize for the delay in my response to this thread, between travelling
> and having a stuck 'a' key on my laptop this is the earliest I could
> respond.
> I opted for a separate branch on this thread to summarize my views and I'll
> respond inline later on some of the previous discussion.
>
> On Wed, Aug 06, 2014 at 12:30:35PM +0200, Thierry Carrez wrote:
> > Hi everyone,
> >
> > At the TC meeting yesterday we discussed Rally program request and
> > incubation request. We quickly dismissed the incubation request, as
> > Rally appears to be able to live happily on top of OpenStack and would
> > bene

Re: [openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers

2014-08-11 Thread Stephen Balukoff
Susanne,

Are you asking in the context of Load Balancer services in general, or in
terms of the Neutron LBaaS project or the Octavia project?

Stephen


On Mon, Aug 11, 2014 at 9:04 AM, Doug Wiegley  wrote:

> Hi Susanne,
>
> While there are a few operators involved with LBaaS that would have good
> input, you might want to also ask this on the non-dev mailing list, for a
> larger sample size.
>
> Thanks,
> doug
>
> On 8/11/14, 3:05 AM, "Susanne Balle"  wrote:
>
> >Gang,
> >I was asked the following questions around our Neutron LBaaS use cases:
> >1.  Will there be a scenario where the "VIP" port will be in a different
> >Node, from all the Member "VMs" in a pool.
> >
> >
> >2.  Also how likely is it for the LBaaS configured subnet to not have a
> >"router" and just use the "extra_routes"
> > option.
> >3.  Is there a valid use case where customers will be using the
> >"extra_routes" with subnets instead of the "routers".
> > ( It would be great if you have some use case picture for this).
> >Feel free to chime in here and I'll summaries the answers.
> >Regards Susanne
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Mathieu Gagné

On 2014-08-11 5:13 PM, Sandy Walsh wrote:

Right, I'm not suggesting to remove the storage abstraction layer. I'm
just curious what gnocchi does better/different than InfluxDB?



I was at the OpenStack Design Summit when Gnocchi was presented.

Soon after the basic goals and technical details of Gnocchi were 
presented, people wondered why InfluxDB wasn't used. AFAIK, people 
presenting Gnocchi didn't know about InfluxDB so they weren't able to 
answer the question.


I don't really blame them. At that time, I didn't know anything about 
Gnocchi, even less about InfluxDB but rapidly learned that both are 
DataSeries databases/services.



What I would have answered to that question is (IMO):

Gnocchi is a new project tackling the need for a DataSeries 
database/storage as a service. Pandas/Swift is used as an implementation 
reference. Some people love Swift and will use it everywhere they can, 
nothing wrong with it. (or lets not go down that path)



> Or, am I missing the objective here and gnocchi is the abstraction layer
> and not an influxdb alternative? If so, my apologies for the confusion.
>

InfluxDB can't be used as-is by OpenStack services. There needs to be an 
abstraction layer somewhere.


As Gnocchi is (or will be) well written, people will be free to drop the 
Swift implementation and replace it by whatever they want: InfluxDB, 
Blueflood, RRD, Whisper, plain text files, in-memory, /dev/null, etc.


But we first need to start somewhere with one implementation and 
Pandas/Swift was chosen.


I'm confident people will soon start proposing alternative storage 
backends/implementations better fitting their needs and tastes.


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Joe Gordon
On Sat, Aug 9, 2014 at 6:29 AM, Eoghan Glynn  wrote:

>
>
> > > Hi Folks,
> > >
> > > Dina Belova has recently landed some infra patches[1,2] to create
> > > an experimental mongodb-based Tempest job. This effectively just
> > > overrides the ceilometer storage backend config so that mongodb
> > > is used instead of sql-alchemy. The new job has been running
> > > happily for a few days so I'd like now to consider the path
> > > forwards with this.
> > >
> > > One of our Juno goals under the TC gap analysis was to more fully
> > > gate against mongodb, given that this is the storage backend
> > > recommended/supported by many distros. The sql-alchemy backend,
> > > on the other hand, is more suited for proofs of concept or small
> > > deployments. However up to now we've been hampered from reflecting
> > > that reality in the gate, due to the gate being stuck on Precise
> > > for a long time, as befits LTS, and the version of mongodb needed
> > > by ceilometer (i.e. 2.4) effectively unavailable on that Ubuntu
> > > release (in fact it was limited to 2.0.4).
> > >
> > > So the orientation towards gating on sql-alchemy was mostly
> > > driven by legacy issues in the gate's usage of Precise, as
> > > opposed to this being considered the most logical basket in
> > > which to put all our testing eggs.
> > >
> > > However, we're now finally in the brave new world of Trusty :)
> > > So I would like to make the long-delayed change over soon.
> > >
> > > This would involve transposing the roles of sql-alchemy and
> > > mongodb in the gate - the mongodb variant becomes the "blessed"
> > > job run by default, whereas the sql-alchemy based job is
> > > relegated to the second tier.
> > >
> > > So my questions are:
> > >
> > > (a) would the QA side of the house be agreeable to this switch?
> > >
> > > and:
> > >
> > > (b) how long would the mongodb job need to be stable in this
> > > experimental mode before we pull the trigger on swicthing?
> > >
> > > If the answer to (a) is yes, we can get infra patches proposed
> > > early next week to make the swap.
> > >
> > > Cheers,
> > > Eoghan
> > >
> > > [1]
> > >
> https://review.openstack.org/#/q/project:openstack-infra/config+branch:master+topic:ceilometer-mongodb-job,n,z
> > > [2]
> > >
> https://review.openstack.org/#/q/project:openstack-infra/devstack-gate+branch:master+topic:ceilometer-backend,n,z
> > >
> >
> > My interpretation of the gap analysis [1] is merely that you have
> coverage,
> > not that you switch to it and relegate the SQLAlchemy tests to second
> chair.
> > I believe that's a dangerous departure from current standards. A
> dependency
> > on mongodb, due to it's AGPL license, and lack of sufficient support for
> a
> > non-AGPL storage back end, has consistently been raised as a blocking
> issue
> > for Marconi. [2]
>
> Sure, the main goal is to have full mongodb-based coverage in the gate.
>
> So, if the QA/infra folks are prepared to host *both* jobs, then I'd be
> happy to change my request to simply:
>
>   let's promote the mongodb-based Tempest variant to the first tier,
>   to run alongside the current sqlalchemy-based job
>


Ignoring the question of is it ok to say: 'to run ceilometer in any sort of
non-trivial deployment you must manage yet another underlying service,
mongodb' I would prefer not adding an additional gate variant to all
projects.  With the effort to reduce the number of gate variants we have
[0] I would prefer to see just ceilometer gate on both mongodb and
sqlalchemy and the main integrated gate [1] pick just one.

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
[1]
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238


>
> Does that work for you Devananda?
>
> Cheers,
> Eoghan
>
> > -Deva
> >
> >
> > [1]
> >
> https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage
> >
> > [2]
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html
> > is a very articulate example of this objection
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Sandy Walsh
On 8/11/2014 5:29 PM, Eoghan Glynn wrote:
>
>> On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
 Hi Eoghan,

 Thanks for the note below. However, one thing the overview below does not
 cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged.
 Many
 folks feel that this technology is a viable solution for the problem space
 discussed below.
>>> Great question Brad!
>>>
>>> As it happens we've been working closely with Paul Dix (lead
>>> developer of InfluxDB) to ensure that this metrics store would be
>>> usable as a backend driver. That conversation actually kicked off
>>> at the Juno summit in Atlanta, but it really got off the ground
>>> at our mid-cycle meet-up in Paris in early July.
>> ...
>>> The InfluxDB folks have committed to implementing those features
>>> over July and August, and have made concrete progress on that score.
>>>
>>> I hope that provides enough detail to answer your question?
>> I guess it begs the question, if influxdb will do what you want and it's
>> open source (MIT) as well as commercially supported, how does gnocchi
>> differentiate?
> Hi Sandy,
>
> One of the ideas behind gnocchi is to combine resource representation
> and timeseries-oriented storage of metric data, providing an efficient
> and convenient way to query for metric data associated with individual
> resources.

Doesn't InfluxDB do the same?

>
> Also, having an API layered above the storage driver avoids locking in
> directly with a particular metrics-oriented DB, allowing for the
> potential to support multiple storage driver options (e.g. to choose
> between a canonical implementation based on Swift, an InfluxDB driver,
> and an OpenTSDB driver, say).
Right, I'm not suggesting to remove the storage abstraction layer. I'm
just curious what gnocchi does better/different than InfluxDB?

Or, am I missing the objective here and gnocchi is the abstraction layer
and not an influxdb alternative? If so, my apologies for the confusion.

> A less compelling reason would be to provide a well-defined hook point
> to innovate with aggregation/analytic logic not supported natively
> in the underlying drivers (e.g. period-spanning statistics such as
> exponentially-weighted moving average or even Holt-Winters).
> Cheers,
> Eoghan
>
>  
>>> Cheers,
>>> Eoghan
>>>
 Thanks,

 Brad


 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet: bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680



 From: Eoghan Glynn 
 To: "OpenStack Development Mailing List (not for usage questions)"
 ,
 Date: 08/06/2014 11:17 AM
 Subject: [openstack-dev] [tc][ceilometer] Some background on the gnocchi
 project





 Folks,

 It's come to our attention that some key individuals are not
 fully up-to-date on gnocchi activities, so it being a good and
 healthy thing to ensure we're as communicative as possible about
 our roadmap, I've provided a high-level overview here of our
 thinking. This is intended as a precursor to further discussion
 with the TC.

 Cheers,
 Eoghan


 What gnocchi is:
 ===

 Gnocchi is a separate, but related, project spun up on stackforge
 by Julien Danjou, with the objective of providing efficient
 storage and retrieval of timeseries-oriented data and resource
 representations.

 The goal is to experiment with a potential approach to addressing
 an architectural misstep made in the very earliest days of
 ceilometer, specifically the decision to store snapshots of some
 resource metadata alongside each metric datapoint. The core idea
 is to move to storing datapoints shorn of metadata, and instead
 allow the resource-state timeline to be reconstructed more cheaply
 from much less frequently occurring events (e.g. instance resizes
 or migrations).


 What gnocchi isn't:
 ==

 Gnocchi is not a large-scale under-the-radar rewrite of a core
 OpenStack component along the lines of keystone-lite.

 The change is concentrated on the final data-storage phase of
 the ceilometer pipeline, so will have little initial impact on the
 data-acquiring agents, or on transformation phase.

 We've been totally open at the Atlanta summit and other forums
 about this approach being a multi-cycle effort.


 Why we decided to do it this way:
 

 The intent behind spinning up a separate project on stackforge
 was to allow the work progress at arms-length from ceilometer,
 allowing normalcy to be maintained on the core project and a
 rapid rate of innovation on gnocchi.

 Note that that the developers primarily contributing to gnocchi
 represent a cross-section of the core team, and there's a regu

Re: [openstack-dev] [TripleO][Nova][Neutron] multiple hypervisors on one compute host - neutron agent and compute hostnames

2014-08-11 Thread Joe Gordon
On Tue, Aug 5, 2014 at 4:17 PM, Robert Collins 
wrote:

> Hi!
>
> James has run into an issue implementing the multi-hypervisor spec
> (
> http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/juno/tripleo-juno-deploy-cloud-hypervisor-type.rst
> )
> which we're hoping to use to reduce infrastructure overheads by
> deploying OpenStack control plane services in VMs, without requiring
> dedicated VM hypervisor machines in the deploy cloud.
>
> The issue we've hit is that the Neutron messages for VIF plugging are
> sent to the Neutron agent with an exactly matching hostname to the
> Nova-compute process. However, we have unique hostnames for the
> nova-compute processes on one machine (one for -kvm, one for -docker,
> one for -ironic etc) for a variety of reasons: so we can see if all
> the processes are up, so that we don't get messages for the wrong
> process from nova-api etc.
>

So you are running multiple nova-computes on a single node? This goes
against the model that nova is operating under. Instead of hacking a
workaround, if we think this is a use case nova/openstack should support,
why not have that discussion before deciding that the best solution is a
shallow patch.



>
> I think a reasonable step might be to allow the agent host option to
> be a list - e.g.
>
>  [DEFAULT]
>  hosts={{nova.compute_hostname}}-libvirt,{{nova.compute_hostname}}-docker
>
> we'd just make it listen to all the nova-compute hostnames we may have
> on the machine.
> That seems like a fairly shallow patch to me: add a new hosts option
> with no default, change the code to listen to N queues when hosts is
> set, and to report state N times as well (for consistency).
> Alternatively, with a DB migration, we could record N hosts against
> one agent status.
>
> Alternatively we could run N ovs-agents on one machine (with a
> separate integration bridge each), but I worry that we'd encounter
> unexpected cross-chatter between them on things like external bridge
> flows.
>
> Thoughts?
>
> For now, we're going to have to run with a limitation of only one
> vif-plugging hypervisor type per machine - we'll make the agent
> hostname match that of the nova compute that needs VIFs plugged ;)
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Brandon Logan
Hi Eugene,
An example of the HM issue (and really this can happen with any entity)
is if the driver the API sends the configuration to does not actually
support the value of an attribute.

For example: Provider A supports the PING health monitor type, Provider B
does not.  The API allows the PING health monitor type to go through.  Once
a load balancer has been linked with that health monitor and the
LoadBalancer has chosen to use Provider B, that entire configuration is then
sent to the driver.  The driver errors out not on the LoadBalancer
create, but on the health monitor create.

I think that's the issue.
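
A contrived sketch of that flow (names invented, not the actual Neutron
LBaaS plugin or driver code):

    class ProviderBDriver(object):
        SUPPORTED_HM_TYPES = ('HTTP', 'TCP')   # no PING support

        def deploy(self, lb_config):
            hm = lb_config.get('healthmonitor')
            if hm and hm['type'] not in self.SUPPORTED_HM_TYPES:
                # The API already accepted and stored this health monitor;
                # the user only finds out here, at deploy time.
                raise ValueError("health monitor type %s not supported"
                                 % hm['type'])
            # ... push the whole configuration to the backend ...

    # The API treats PING as a legal type, so this was accepted earlier:
    lb_config = {'vip': '10.0.0.5',
                 'healthmonitor': {'type': 'PING', 'delay': 5, 'timeout': 3}}

    try:
        ProviderBDriver().deploy(lb_config)
    except ValueError as exc:
        print("deploy failed long after the HM create succeeded:", exc)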

Thanks,
Brandon

On Tue, 2014-08-12 at 00:17 +0400, Eugene Nikanorov wrote:
> Hi folks,
> 
> 
> That actually going in opposite direction to what flavor framework is
> trying to do (and for dispatching it's doing the same as providers).
> REST call dispatching should really go via the root object.
> 
> 
> I don't quite get the issue with health monitors. If HM is incorrectly
> configured prior to association with a pool - API layer should handle
> that.
> I don't think driver implementations should differ in their
> constraints on HM parameters.
> 
> 
> So I'm -1 on adding provider (or flavor) to each entity. After all, it
> looks just like data denormalization which actually will affect lots
> of API aspects in negative way.
> 
> 
> Thanks,
> Eugene.
> 
> 
> 
> 
> On Mon, Aug 11, 2014 at 11:20 PM, Vijay Venkatachalam
>  wrote:
> 
> Yes, the point was to say "the plugin need not restrict and
> let driver decide what to do with the API".
> 
> Even if the call was made to driver instantaneously, I
> understand, the driver might decide to ignore
> first and schedule later. But, if the call is present, there
> is scope for validation.
> Also, the driver might be scheduling an async-api to backend,
> in which case  deployment error
> cannot be shown to the user instantaneously.
> 
> W.r.t. identifying a provider/driver, how would it be to make
> tenant the default "root" object?
> "tenantid" is already associated with each of these entities,
> so no additional pain.
> For the tenant who wants to override let him specify provider
> in each of the entities.
> If you think of this in terms of the UI, let's say if the
> loadbalancer configuration is exposed
> as a single wizard (which has loadbalancer, listener, pool,
> monitor properties) then provider
>  is chosen only once.
> 
> Curious question, is flavour framework expected to address
> this problem?
> 
> Thanks,
> Vijay V.
> 
> -Original Message-
> From: Doug Wiegley [mailto:do...@a10networks.com]
> 
> Sent: 11 August 2014 22:02
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on
> "Calling driver interface on every API request"
> 
> Hi Sam,
> 
> Very true.  I think that Vijay’s objection is that we are
> currently imposing a logical structure on the driver, when it
> should be a driver decision.  Certainly, it goes both ways.
> 
> And I also agree that the mechanism for returning multiple
> errors, and the ability to specify whether those errors are
> fatal or not, individually, is currently weak.
> 
> Doug
> 
> 
> On 8/11/14, 10:21 AM, "Samuel Bercovici" 
> wrote:
> 
> >Hi Doug,
> >
> >In some implementations Driver !== Device. I think this is
> also true
> >for HA Proxy.
> >This might mean that there is a difference between creating a
> logical
> >object and when there is enough information to actually
> schedule/place
> >this into a device.
> >The ability to express such errors (detecting an error on a
> logical
> >object after it was created but when it actually get
> scheduled) should
> >be discussed and addressed anyway.
> >
> >-Sam.
> >
> >
> >-Original Message-
> >From: Doug Wiegley [mailto:do...@a10networks.com]
> >Sent: Monday, August 11, 2014 6:55 PM
> >To: OpenStack Development Mailing List (not for usage
> questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on
> "Calling
> >driver interface on every API request"
> >
> >Hi all,
> >
> >> Validations such as "timeout > delay" should be performed
> on the API
> >>level before it reaches the driver.
> >For a configuration tree (lb, listeners, pools, etc.), there
> should be
> >one p

Re: [openstack-dev] Which program for Rally

2014-08-11 Thread David Kranz

On 08/11/2014 04:21 PM, Matthew Treinish wrote:


I apologize for the delay in my response to this thread, between 
travelling
and having a stuck 'a' key on my laptop this is the earliest I could 
respond.
I opted for a separate branch on this thread to summarize my views and 
I'll

respond inline later on some of the previous discussion.

On Wed, Aug 06, 2014 at 12:30:35PM +0200, Thierry Carrez wrote:
> Hi everyone,
>
> At the TC meeting yesterday we discussed Rally program request and
> incubation request. We quickly dismissed the incubation request, as
> Rally appears to be able to live happily on top of OpenStack and would
> benefit from having a release cycle decoupled from the OpenStack
> "integrated release".
>
> That leaves the question of the program. OpenStack programs are created
> by the Technical Committee, to bless existing efforts and teams that are
> considered *essential* to the production of the "OpenStack" integrated
> release and the completion of the OpenStack project mission. There are 3
> ways to look at Rally and official programs at this point:
>
> 1. Rally as an essential QA tool
> Performance testing (and especially performance regression testing) is
> an essential QA function, and a feature that Rally provides. If the QA
> team is happy to use Rally to fill that function, then Rally can
> obviously be adopted by the (already-existing) QA program. That said,
> that would put Rally under the authority of the QA PTL, and that raises
> a few questions due to the current architecture of Rally, which is more
> product-oriented. There needs to be further discussion between the QA
> core team and the Rally team to see how that could work and if that
> option would be acceptable for both sides.

So ideally this is where Rally would belong, the scope of what Rally is
attempting to do is definitely inside the scope of the QA program. I 
don't see
any reason why that isn't the case. The issue is with what Rally is in 
its
current form. Its scope is too large and monolithic, and it 
duplicates much of

the functionality we either already have or need in current QA or Infra
projects. But, nothing in Rally is designed to be used outside of it. 
I actually
feel pretty strongly that in its current form Rally should *not* be a 
part of

any OpenStack program.

All of the points Sean was making in the other branch on this thread 
(which I'll
probably respond to later) are huge concerns I share with Rally. He 
basically
summarized most of my views on the topic, so I'll try not to rewrite 
everything.
But, the fact that all of this duplicate functionality was implemented 
in a
completely separate manner which is Rally specific and can't really be 
used

unless all of Rally is used is a large concern. What I think the path
forward here is to have both QA and Rally work together on getting common
functionality that is re-usable and shareable. Additionally, I have some
concerns over the methodology that Rally uses for its performance 
measurement.
But, I'll table that discussion because I think it would partially 
derail this

discussion.

So one open question is long-term where would this leave Rally if we 
want to
bring it in under the QA program. (after splitting up the 
functionality to be more
conducive with all our existing tools and projects) The one thing 
Rally does
here which we don't have an analogous solution for is, for lack of 
better term,
the post processing layer. The part that generates the performs the 
analysis on
the collected data and generates the graphs. That is something that 
we'll have
an eventual need for and that is something that we can work on 
turning Rally

into as we migrate everything to actually work together.

There are probably also other parts of Rally which don't fit into an 
existing

QA program project, (or the QA program in general) and in those cases we
probably should split them off as smaller projects to implement that 
bit. For
example, the SLA stuff Rally has that probably should be a separate 
entity as

well, but I'm unsure if that fits under QA program.

My primary concern is the timing for doing all of this work. We're 
approaching
J-3 and honestly this feels like it would take the better part of an 
entire
cycle to analyze, plan, and then implement. Starting an analysis of 
how to do

all of the work at this point I feel would just distract everyone from
completing our dev goals for the cycle. Probably the Rally team, if 
they want
to move forward here, should start the analysis of this structural 
split and we

can all pick this up together post-juno.

>
> 2. Rally as an essential operator tool
> Regular benchmarking of OpenStack deployments is a best practice for
> cloud operators, and a feature that Rally provides. With a bit of a
> stretch, we could consider that benchmarking is essential to the
> completion of the OpenStack project mission. That program could one day
> evolve to include more such "operations best practices" tools. In
> 

Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Zane Bitter

On 11/08/14 14:49, Clint Byrum wrote:

Excerpts from Steven Hardy's message of 2014-08-11 11:40:07 -0700:

On Mon, Aug 11, 2014 at 11:20:50AM -0700, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2014-08-11 08:16:56 -0700:

On 11/08/14 10:46, Clint Byrum wrote:

Right now we're stuck with an update that just doesn't work. It isn't
just about update-failure-recovery, which is coming along nicely, but
it is also about the lack of signals to control rebuild, poor support
for addressing machines as groups, and unacceptable performance in
large stacks.


Are there blueprints/bugs filed for all of these issues?



Convergence addresses the poor performance for large stacks in general.
We also have this:

https://bugs.launchpad.net/heat/+bug/1306743

Which shows how slow metadata access can get. I have worked on patches
but haven't been able to complete them. We made big strides but we are
at a point where 40 nodes polling Heat every 30s is too much for one CPU


This sounds like the same figure I heard at the design summit; did the 
DB call optimisation work that Steve Baker did immediately after that 
not have any effect?



to handle. When we scaled Heat out onto more CPUs on one box by forking
we ran into eventlet issues. We also ran into issues because even with
many processes we can only use one to resolve templates for a single
stack during update, which was also excessively slow.


Related to this, and a discussion we had recently at the TripleO meetup is
this spec I raised today:

https://review.openstack.org/#/c/113296/

It's following up on the idea that we could potentially address (or at
least mitigate, pending the fully convergence-ified heat) some of these
scalability concerns, if TripleO moves from the one-giant-template model
to a more modular nested-stack/provider model (e.g what Tomas has been
working on)

I've not got into enough detail on that yet to be sure if it's achievable
for Juno, but it seems initially to be complex-but-doable.

I'd welcome feedback on that idea and how it may fit in with the more
granular convergence-engine model.

Can you link to the eventlet/forking issues bug please?  I thought since
bug #1321303 was fixed that multiple engines and multiple workers should
work OK, and obviously that being true is a precondition to expending
significant effort on the nested stack decoupling plan above.



That was the issue. So we fixed that bug, but we never un-reverted
the patch that forks enough engines to use up all the CPU's on a box
by default. That would likely help a lot with metadata access speed
(we could manually do it in TripleO but we tend to push defaults. :)


Right, and we decided we wouldn't because it's wrong to do that to 
people by default. In some cases the optimal running configuration for 
TripleO will differ from the friendliest out-of-the-box configuration 
for Heat users in general, and in those cases - of which this is one - 
TripleO will need to specify the configuration.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] congress-server fails to start

2014-08-11 Thread Peter Balland
Hi Rajdeep,

What version of pip are you running?  Please try installing the latest version 
(https://pip.pypa.io/en/latest/installing.html) and run 'sudo pip install -r 
requirements.txt'.

- Peter

From: Rajdeep Dua mailto:rajdeep@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, August 11, 2014 at 11:27 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Congress] congress-server fails to start

Hi All,
command to start the congress-server fails

$ ./bin/congress-server --config-file etc/congress.conf.sample

Error :
ImportError: No module named keystonemiddleware.auth_token

Installing keystonemiddleware manually also fails

$ sudo pip install keystonemiddleware

Could not find a version that satisfies the requirement oslo.config>=1.4.0.0a3 
(from keystonemiddleware) (from versions: )
No distributions matching the version for oslo.config>=1.4.0.0a3 (from 
keystonemiddleware)

Thanks
Rajdeep
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Eoghan Glynn


> On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
> >
> >> Hi Eoghan,
> >>
> >> Thanks for the note below. However, one thing the overview below does not
> >> cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged.
> >> Many
> >> folks feel that this technology is a viable solution for the problem space
> >> discussed below.
> > Great question Brad!
> >
> > As it happens we've been working closely with Paul Dix (lead
> > developer of InfluxDB) to ensure that this metrics store would be
> > usable as a backend driver. That conversation actually kicked off
> > at the Juno summit in Atlanta, but it really got off the ground
> > at our mid-cycle meet-up in Paris on in early July.
> ...
> >
> > The InfluxDB folks have committed to implementing those features in
> > over July and August, and have made concrete progress on that score.
> >
> > I hope that provides enough detail to answer to your question?
> 
> I guess it begs the question, if influxdb will do what you want and it's
> open source (MIT) as well as commercially supported, how does gnocchi
> differentiate?

Hi Sandy,

One of the ideas behind gnocchi is to combine resource representation
and timeseries-oriented storage of metric data, providing an efficient
and convenient way to query for metric data associated with individual
resources.

Also, having an API layered above the storage driver avoids locking in
directly with a particular metrics-oriented DB, allowing for the
potential to support multiple storage driver options (e.g. to choose
between a canonical implementation based on Swift, an InfluxDB driver,
and an OpenTSDB driver, say).

A less compelling reason would be to provide a well-defined hook point
to innovate with aggregation/analytic logic not supported natively
in the underlying drivers (e.g. period-spanning statistics such as
exponentially-weighted moving average or even Holt-Winters).
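
Just to illustrate the kind of period-spanning statistic meant here, an
exponentially-weighted moving average is a simple one-pass fold over the
datapoints; a minimal sketch in plain Python (not gnocchi code):

    def ewma(datapoints, alpha=0.3):
        # alpha is the smoothing factor in (0, 1]; larger values weight
        # recent datapoints more heavily.
        average = None
        for value in datapoints:
            if average is None:
                average = value
            else:
                average = alpha * value + (1 - alpha) * average
        return average

    # e.g. ewma([10.0, 12.0, 11.0, 30.0]) reacts to the trailing spike
    # more strongly than the median, without jumping all the way to it.

The point is simply that this sort of logic spans periods, so it needs a
hook above the storage driver rather than a per-period aggregate computed
natively in the driver itself.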

Cheers,
Eoghan

 
> > Cheers,
> > Eoghan
> >
> >> Thanks,
> >>
> >> Brad
> >>
> >>
> >> Brad Topol, Ph.D.
> >> IBM Distinguished Engineer
> >> OpenStack
> >> (919) 543-0646
> >> Internet: bto...@us.ibm.com
> >> Assistant: Kendra Witherspoon (919) 254-0680
> >>
> >>
> >>
> >> From: Eoghan Glynn 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> ,
> >> Date: 08/06/2014 11:17 AM
> >> Subject: [openstack-dev] [tc][ceilometer] Some background on the gnocchi
> >> project
> >>
> >>
> >>
> >>
> >>
> >> Folks,
> >>
> >> It's come to our attention that some key individuals are not
> >> fully up-to-date on gnocchi activities, so it being a good and
> >> healthy thing to ensure we're as communicative as possible about
> >> our roadmap, I've provided a high-level overview here of our
> >> thinking. This is intended as a precursor to further discussion
> >> with the TC.
> >>
> >> Cheers,
> >> Eoghan
> >>
> >>
> >> What gnocchi is:
> >> ===
> >>
> >> Gnocchi is a separate, but related, project spun up on stackforge
> >> by Julien Danjou, with the objective of providing efficient
> >> storage and retrieval of timeseries-oriented data and resource
> >> representations.
> >>
> >> The goal is to experiment with a potential approach to addressing
> >> an architectural misstep made in the very earliest days of
> >> ceilometer, specifically the decision to store snapshots of some
> >> resource metadata alongside each metric datapoint. The core idea
> >> is to move to storing datapoints shorn of metadata, and instead
> >> allow the resource-state timeline to be reconstructed more cheaply
> >> from much less frequently occurring events (e.g. instance resizes
> >> or migrations).
> >>
> >>
> >> What gnocchi isn't:
> >> ==
> >>
> >> Gnocchi is not a large-scale under-the-radar rewrite of a core
> >> OpenStack component along the lines of keystone-lite.
> >>
> >> The change is concentrated on the final data-storage phase of
> >> the ceilometer pipeline, so will have little initial impact on the
> >> data-acquiring agents, or on transformation phase.
> >>
> >> We've been totally open at the Atlanta summit and other forums
> >> about this approach being a multi-cycle effort.
> >>
> >>
> >> Why we decided to do it this way:
> >> 
> >>
> >> The intent behind spinning up a separate project on stackforge
> >> was to allow the work progress at arms-length from ceilometer,
> >> allowing normalcy to be maintained on the core project and a
> >> rapid rate of innovation on gnocchi.
> >>
>> Note that the developers primarily contributing to gnocchi
> >> represent a cross-section of the core team, and there's a regular
> >> feedback loop in the form of a recurring agenda item at the
> >> weekly team meeting to avoid the effort becoming silo'd.
> >>
> >>
> >> But isn't re-architecting frowned upon?
> >> ==
> >>
> >> Well, the architecture of other OpenStack projects have also
> >> under-gone change as the community understanding of the
> >> implications of 

Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Matthew Treinish
I apologize for the delay in my response to this thread, between travelling
and having a stuck 'a' key on my laptop this is the earliest I could respond.
I opted for a separate branch on this thread to summarize my views and I'll
respond inline later on some of the previous discussion.

On Wed, Aug 06, 2014 at 12:30:35PM +0200, Thierry Carrez wrote:
> Hi everyone,
> 
> At the TC meeting yesterday we discussed Rally program request and
> incubation request. We quickly dismissed the incubation request, as
> Rally appears to be able to live happily on top of OpenStack and would
> benefit from having a release cycle decoupled from the OpenStack
> "integrated release".
> 
> That leaves the question of the program. OpenStack programs are created
> by the Technical Committee, to bless existing efforts and teams that are
> considered *essential* to the production of the "OpenStack" integrated
> release and the completion of the OpenStack project mission. There are 3
> ways to look at Rally and official programs at this point:
> 
> 1. Rally as an essential QA tool
> Performance testing (and especially performance regression testing) is
> an essential QA function, and a feature that Rally provides. If the QA
> team is happy to use Rally to fill that function, then Rally can
> obviously be adopted by the (already-existing) QA program. That said,
> that would put Rally under the authority of the QA PTL, and that raises
> a few questions due to the current architecture of Rally, which is more
> product-oriented. There needs to be further discussion between the QA
> core team and the Rally team to see how that could work and if that
> option would be acceptable for both sides.

So ideally this is where Rally would belong, the scope of what Rally is
attempting to do is definitely inside the scope of the QA program. I don't see
any reason why that isn't the case. The issue is with what Rally is in its
current form. Its scope is too large and monolithic, and it duplicates much of
the functionality we either already have or need in current QA or Infra
projects. But, nothing in Rally is designed to be used outside of it. I actually
feel pretty strongly that in its current form Rally should *not* be a part of
any OpenStack program.

All of the points Sean was making in the other branch on this thread (which I'll
probably respond to later) are huge concerns I share with Rally. He basically
summarized most of my views on the topic, so I'll try not to rewrite everything.
But, the fact that all of this duplicate functionality was implemented in a
completely separate, Rally-specific manner and can't really be used
unless all of Rally is used is a large concern. What I think the path
forward here is to have both QA and Rally work together on getting common
functionality that is re-usable and shareable. Additionally, I have some
concerns over the methodology that Rally uses for its performance measurement.
But, I'll table that discussion because I think it would partially derail this
discussion.

So one open question is, long-term, where this would leave Rally if we want to
bring it in under the QA program (after splitting up the functionality to be more
conducive with all our existing tools and projects). The one thing Rally does
here which we don't have an analogous solution for is, for lack of a better term,
the post-processing layer: the part that performs the analysis on
the collected data and generates the graphs. That is something that we'll have
an eventual need for, and it is something that we can work on turning Rally
into as we migrate everything to actually work together.

There are probably also other parts of Rally which don't fit into an existing
QA program project, (or the QA program in general) and in those cases we
probably should split them off as smaller projects to implement that bit. For
example, the SLA stuff Rally has that probably should be a separate entity as
well, but I'm unsure if that fits under QA program.

My primary concern is the timing for doing all of this work. We're approaching
J-3 and honestly this feels like it would take the better part of an entire
cycle to analyze, plan, and then implement. Starting an analysis of how to do
all of the work at this point I feel would just distract everyone from
completing our dev goals for the cycle. Probably the Rally team, if they want
to move forward here, should start the analysis of this structural split and we
can all pick this up together post-juno.

> 
> 2. Rally as an essential operator tool
> Regular benchmarking of OpenStack deployments is a best practice for
> cloud operators, and a feature that Rally provides. With a bit of a
> stretch, we could consider that benchmarking is essential to the
> completion of the OpenStack project mission. That program could one day
> evolve to include more such "operations best practices" tools. In
> addition to the slight stretch already mentioned, one concern here is
> that we still w

Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Eugene Nikanorov
Hi folks,

That is actually going in the opposite direction to what the flavor framework is
trying to do (and for dispatching it does the same as providers). REST
call dispatching should really go via the root object.

I don't quite get the issue with health monitors. If an HM is incorrectly
configured prior to association with a pool, the API layer should handle that.
I don't think driver implementations should differ in the constraints they
place on HM parameters.

So I'm -1 on adding provider (or flavor) to each entity. After all, it
looks just like data denormalization, which will actually affect lots of API
aspects in a negative way.

Thanks,
Eugene.



On Mon, Aug 11, 2014 at 11:20 PM, Vijay Venkatachalam <
vijay.venkatacha...@citrix.com> wrote:

>
> Yes, the point was to say "the plugin need not restrict and let driver
> decide what to do with the API".
>
> Even if the call was made to driver instantaneously, I understand, the
> driver might decide to ignore
> first and schedule later. But, if the call is present, there is scope for
> validation.
> Also, the driver might be scheduling an async-api to backend, in which
> case  deployment error
> cannot be shown to the user instantaneously.
>
> W.r.t. identifying a provider/driver, how would it be to make tenant the
> default "root" object?
> "tenantid" is already associated with each of these entities, so no
> additional pain.
> For the tenant who wants to override let him specify provider in each of
> the entities.
> If you think of this in terms of the UI, let's say if the loadbalancer
> configuration is exposed
> as a single wizard (which has loadbalancer, listener, pool, monitor
> properties) then provider
>  is chosen only once.
>
> Curious question, is flavour framework expected to address this problem?
>
> Thanks,
> Vijay V.
>
> -Original Message-
> From: Doug Wiegley [mailto:do...@a10networks.com]
> Sent: 11 August 2014 22:02
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling
> driver interface on every API request"
>
> Hi Sam,
>
> Very true.  I think that Vijay’s objection is that we are currently
> imposing a logical structure on the driver, when it should be a driver
> decision.  Certainly, it goes both ways.
>
> And I also agree that the mechanism for returning multiple errors, and the
> ability to specify whether those errors are fatal or not, individually, is
> currently weak.
>
> Doug
>
>
> On 8/11/14, 10:21 AM, "Samuel Bercovici"  wrote:
>
> >Hi Doug,
> >
> >In some implementations Driver !== Device. I think this is also true
> >for HA Proxy.
> >This might mean that there is a difference between creating a logical
> >object and when there is enough information to actually schedule/place
> >this into a device.
> >The ability to express such errors (detecting an error on a logical
> >object after it was created but when it actually get scheduled) should
> >be discussed and addressed anyway.
> >
> >-Sam.
> >
> >
> >-Original Message-
> >From: Doug Wiegley [mailto:do...@a10networks.com]
> >Sent: Monday, August 11, 2014 6:55 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling
> >driver interface on every API request"
> >
> >Hi all,
> >
> >> Validations such as "timeout > delay" should be performed on the API
> >>level before it reaches the driver.
> >For a configuration tree (lb, listeners, pools, etc.), there should be
> >one provider.
> >
> >You're right, but I think the point of Vijay's example was to highlight
> >the combo error problem with populating all of the driver objects at
> >once (in short, the driver interface isn't well suited to that model.)
> >That his one example can be covered by API validators is irrelevant.
> >Consider a backend that does not support APP_COOKIE's, or
> >HTTPS_TERMINATED (but has multiple listeners) instead.  Should the
> >entire load balancer create fail, or should it offer degraded service?
> >Do all drivers have to implement a transaction rollback; wait, the
> >interface makes that very hard.  That's his point.  The driver is no
> >longer just glue code between interfaces; it's now a mini-object error
> handler.
> >
> >
> >> Having provider defined in multiple places does not make sense.
> >
> >Channeling Brandon, who can yell if I get this wrong, the point is not
> >to have a potentially different provider on each object.  It's to allow
> >a provider to be assigned when the first object in the tree is created,
> >so that future related objects will always get routed to the same
> provider.
> >Not knowing which provider should get all the objects is why we have to
> >wait until we see a LoadBalancer object.
> >
> >
> >All of this sort of edge case nonsense is because we (the royal we, the
> >community), wanted all load balancer objects to be "root" objects, even
> >though only one of them is an actual root today, to support
> >many-to-

Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Joshua Harlow



On Mon, Aug 11, 2014 at 12:47 PM, Doug Hellmann  
wrote:


On Aug 11, 2014, at 3:26 PM, Joshua Harlow  
wrote:


 
 
 On Mon, Aug 11, 2014 at 11:02 AM, Yuriy Taraday 
 wrote:
 On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow 
 wrote:

 One question from me:
 Will there be later fixes to remove oslo.config dependency/usage 
from oslo.concurrency?
 I still don't understand how oslo.concurrency can be used as a 
library with the configuration being set in a static manner via 
oslo.config (let's use the example of `lock_path` @ 
https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For example:
 Library X inside application Z uses lockutils (via the nice 
oslo.concurrency library) and sets the configuration `lock_path` 
to its desired settings, then library Y (also a user of 
oslo.concurrency) inside same application Z sets the configuration 
for `lock_path` to its desired settings. Now both have some 
unknown set of configuration they have set and when library X (or 
Y) continues to use lockutils they will be using some mix of 
configuration (likely some mish mash of settings set by X and Y); 
perhaps to a `lock_path` that neither actually wants to be able to 
write to...
 This doesn't seem like it will end well; and will just cause 
headaches during debug sessions, testing, integration and more...
 The same question can be asked about the `set_defaults()` 
function, how is library Y or X expected to use this (are they?)??

 I hope one of the later changes is to remove/fix this??
 Thoughts?
 -Josh
 I'd be happy to remove lock_path config variable altogether. It's 
basically never used. There are two basic branches in code wrt 
lock_path:
 - when you provide lock_path argument to lock (and derivative 
functions), file-based lock is used and CONF.lock_path is ignored; 
- when you don't provide lock_path in arguments, semaphore-based 
lock is used and CONF.lock_path is just a prefix for its name 
(before hashing).
 
 Agreed, it just seems confusing (and bad) to have parts of the API 
come in from `CONF.lock_path` (or other `CONF.*` options) and other 
parts of the API come in via function parameters. This just makes 
understanding the API and knowing how to interact with it that much 
harder (after all what is the right way of using XYZ feature when it 
can be changed via a out-of-band *hidden* API call via configuration 
adjustments under the covers?)... This makes it really hard to use 
oslo.concurrency in taskflow (and likely other libraries that would 
like to consume oslo.concurrency, seeing that it will be on pypi, I 
would 


The libraries placed in the oslo namespace are very much NOT meant to 
be used by anything other than OpenStack. They are intended to be the 
glue layer between OpenStack and some other implementation libraries.


oslo.concurrency wraps pylockfile and the plan is to move the actual 
lock code into pylockfile without the oslo.config dependency. That 
will make pylockfile reusable by taskflow and tooz, and the locking 
stuff in oslo.concurrency a smaller API with consistent configuration 
for use by applications.


Sounds great, I've been wondering why 
https://github.com/stackforge/tooz/commit/f3e11e40f9871f8328 
happened/merged (maybe it should be changed?). I see that 
https://review.openstack.org/#/c/102202/ merged so that's good news and 
hopefully makes the underlying lockutils functionality more useful to 
outside of openstack users in the near-term future (which includes 
taskflow, being that it is being used in & outside openstack by various 
entities).





 expect this number to grow...) since taskflow would really 
appreciate and desire to have stable APIs that don't change by some 
configuration that can be set by some party via some out-of-band 
method (for example some other library or program calling 
`set_defaults()`). This kind of way of using an API (half of the 
settings from config, half of the settings from the functions 
API...) may be ok for applications but it's not IMHO ok for 
libraries (or clients) that want to use oslo.concurrency. 
 Hopefully it can be fixed so that it works both ways? Oslo.db 
I believe made this work better by allowing for configuration to 
come in via a configuration object that can be provided by the user 
of oslo.db, this makes the API that oslo.db exposes strongly tied to 
the attributes & documentation of that object. I still don't think 
that's perfect either since it's likely that the documentation for 
what that object's attributes should be is not as up to date or 
easy to change as updating function/method documentation…


That technique of having the configuration object passed to the oslo 
library will be repeated in the other new libraries we are creating 
if they already depend on configuration settings of some sort. The 
configuration options are not part of the public API of the library, 
so they and their definitions will be hidden from the caller, but the 
library has to b

Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Ben Nemec
On 08/11/2014 02:20 PM, Joshua Harlow wrote:
> 
> 
> On Mon, Aug 11, 2014 at 11:39 AM, Ben Nemec  
> wrote:
>> On 08/11/2014 01:02 PM, Yuriy Taraday wrote:
>>>  On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow 
>>>  wrote:
>>>  
  One question from me:

  Will there be later fixes to remove oslo.config dependency/usage 
 from
  oslo.concurrency?

  I still don't understand how oslo.concurrency can be used as a 
 library
  with the configuration being set in a static manner via 
 oslo.config (let's
  use the example of `lock_path` @ https://github.com/YorikSar/
  oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
 For
  example:

  Library X inside application Z uses lockutils (via the nice
  oslo.concurrency library) and sets the configuration `lock_path` 
 to its
  desired settings, then library Y (also a user of oslo.concurrency) 
 inside
  same application Z sets the configuration for `lock_path` to its 
 desired
  settings. Now both have some unknown set of configuration they 
 have set and
  when library X (or Y) continues to use lockutils they will be 
 using some
  mix of configuration (likely some mish mash of settings set by X 
 and Y);
  perhaps to a `lock_path` that neither actually wants to be able to 
 write
  to...

  This doesn't seem like it will end well; and will just cause 
 headaches
  during debug sessions, testing, integration and more...

  The same question can be asked about the `set_defaults()` 
 function, how is
  library Y or X expected to use this (are they?)??

  I hope one of the later changes is to remove/fix this??

  Thoughts?

  -Josh
>>>  
>>>  
>>>  I'd be happy to remove lock_path config variable altogether. It's 
>>> basically
>>>  never used. There are two basic branches in code wrt lock_path:
>>>  - when you provide lock_path argument to lock (and derivative 
>>> functions),
>>>  file-based lock is used and CONF.lock_path is ignored;
>>>  - when you don't provide lock_path in arguments, semaphore-based 
>>> lock is
>>>  used and CONF.lock_path is just a prefix for its name (before 
>>> hashing).
>>>  
>>>  I wonder if users even set lock_path in their configs as it has 
>>> almost no
>>>  effect. So I'm all for removing it, but...
>>>  From what I understand, every major change in lockutils drags along 
>>> a lot
>>>  of headache for everybody (and risk of bugs that would be 
>>> discovered very
>>>  late). So is such change really worth it? And if so, it will 
>>> require very
>>>  thorough research of lockutils usage patterns.
>>
>> Two things lock_path has to stay for: Windows and consumers who 
>> require
>> file-based locking semantics.  Neither of those use cases are trivial 
>> to
>> remove, so IMHO it would not be appropriate to do it as part of the
>> graduation.  If we were going to alter the API that much it needed to
>> happen in incubator.
>>
>>
>> As far as lock_path mismatches, that shouldn't be a problem unless a
>> consumer is doing something very unwise.  Oslo libs get their
>> configuration from the application using them, so unless the 
>> application
>> passes two separate conf objects to library X and Y they're both going
>> to get consistent settings.  If someone _is_ doing that, then I think
>> it's their responsibility to make sure the options in both config 
>> files
>> are compatible with each other.
> 
> Why would it be assumed they would pass the same settings (how is that 
> even possible to know ahead of time? especially if library X pulls in a 
> new library ZZ that requires a new configuration setting). For example, 
> one directory for `lock_path` may be reasonable for tooz and another 
> may be reasonable for taskflow (completely depends on their intended 
> usage); it would likely not be desirable to have them go to the same 
> location. 

The only reason I can see that you would want to do this is to avoid
lock name collisions, but I think I'd rather namespace lib locks than
require users to manage multiple lock paths.  Even the one lock path has
been a hassle.
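
For what it's worth, by namespacing I just mean a per-library prefix on the
lock name, e.g. something like the following sketch (assuming the incubator
signature synchronized(name, lock_file_prefix=None, external=False,
lock_path=None) survives graduation unchanged):

    from oslo.concurrency import lockutils

    # Each consumer prefixes its own locks, so 'engine-state' in taskflow
    # can never collide with 'engine-state' in some other library, even
    # though they share a single lock_path.
    @lockutils.synchronized('engine-state', lock_file_prefix='taskflow-',
                            external=True)
    def update_taskflow_state():
        pass

    @lockutils.synchronized('engine-state', lock_file_prefix='tooz-',
                            external=True)
    def update_tooz_state():
        pass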

> Forcing application Z to know the inner workings of library X 
> and library Y (or future unknown library ZZ) is just pushing the 
> problem onto the library user, which seems inappropriate and breaks the 
> whole point of having abstractions & APIs in the first place... This 
> IMHO is part of the problem with having statically set *action at a 
> distance* type of configuration, the libraries themselves are not in 
> control of their own configuration, which breaks abstractions & APIs 
> left and right. If some application Y can go under a library and pull 
> the rug out from under it, how is that a reasonable thing to expect the 
> library to be able to predict & handle?

The application doesn't have to, and shouldn't, know about lib options.
 The libraries have a list_opts method

Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Sandy Walsh
On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
>
>> Hi Eoghan,
>>
>> Thanks for the note below. However, one thing the overview below does not
>> cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged. Many
>> folks feel that this technology is a viable solution for the problem space
>> discussed below.
> Great question Brad!
>
> As it happens we've been working closely with Paul Dix (lead
> developer of InfluxDB) to ensure that this metrics store would be
> usable as a backend driver. That conversation actually kicked off
> at the Juno summit in Atlanta, but it really got off the ground
> at our mid-cycle meet-up in Paris on in early July.
...
>
> The InfluxDB folks have committed to implementing those features in
> over July and August, and have made concrete progress on that score.
>
> I hope that provides enough detail to answer to your question?

I guess it begs the question, if influxdb will do what you want and it's
open source (MIT) as well as commercially supported, how does gnocchi
differentiate?

> Cheers,
> Eoghan
>
>> Thanks,
>>
>> Brad
>>
>>
>> Brad Topol, Ph.D.
>> IBM Distinguished Engineer
>> OpenStack
>> (919) 543-0646
>> Internet: bto...@us.ibm.com
>> Assistant: Kendra Witherspoon (919) 254-0680
>>
>>
>>
>> From: Eoghan Glynn 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 08/06/2014 11:17 AM
>> Subject: [openstack-dev] [tc][ceilometer] Some background on the gnocchi
>> project
>>
>>
>>
>>
>>
>> Folks,
>>
>> It's come to our attention that some key individuals are not
>> fully up-to-date on gnocchi activities, so it being a good and
>> healthy thing to ensure we're as communicative as possible about
>> our roadmap, I've provided a high-level overview here of our
>> thinking. This is intended as a precursor to further discussion
>> with the TC.
>>
>> Cheers,
>> Eoghan
>>
>>
>> What gnocchi is:
>> ===
>>
>> Gnocchi is a separate, but related, project spun up on stackforge
>> by Julien Danjou, with the objective of providing efficient
>> storage and retrieval of timeseries-oriented data and resource
>> representations.
>>
>> The goal is to experiment with a potential approach to addressing
>> an architectural misstep made in the very earliest days of
>> ceilometer, specifically the decision to store snapshots of some
>> resource metadata alongside each metric datapoint. The core idea
>> is to move to storing datapoints shorn of metadata, and instead
>> allow the resource-state timeline to be reconstructed more cheaply
>> from much less frequently occurring events (e.g. instance resizes
>> or migrations).
>>
>>
>> What gnocchi isn't:
>> ==
>>
>> Gnocchi is not a large-scale under-the-radar rewrite of a core
>> OpenStack component along the lines of keystone-lite.
>>
>> The change is concentrated on the final data-storage phase of
>> the ceilometer pipeline, so will have little initial impact on the
>> data-acquiring agents, or on transformation phase.
>>
>> We've been totally open at the Atlanta summit and other forums
>> about this approach being a multi-cycle effort.
>>
>>
>> Why we decided to do it this way:
>> 
>>
>> The intent behind spinning up a separate project on stackforge
>> was to allow the work progress at arms-length from ceilometer,
>> allowing normalcy to be maintained on the core project and a
>> rapid rate of innovation on gnocchi.
>>
>> Note that the developers primarily contributing to gnocchi
>> represent a cross-section of the core team, and there's a regular
>> feedback loop in the form of a recurring agenda item at the
>> weekly team meeting to avoid the effort becoming silo'd.
>>
>>
>> But isn't re-architecting frowned upon?
>> ==
>>
>> Well, the architecture of other OpenStack projects have also
>> under-gone change as the community understanding of the
>> implications of prior design decisions has evolved.
>>
>> Take for example the move towards nova no-db-compute & the
>> unified-object-model in order to address issues in the nova
>> architecture that made progress towards rolling upgrades
>> unneccessarily difficult.
>>
>> The point, in my understanding, is not to avoid doing the
>> course-correction where it's deemed necessary. Rather, the
>> principle is more that these corrections happen in an open
>> and planned way.
>>
>>
>> The path forward:
>> 
>>
>> A subset of the ceilometer community will continue to work on
>> gnocchi in parallel with the ceilometer core over the remainder
>> of the Juno cycle and into the Kilo timeframe. The goal is to
>> have an initial implementation of gnocchi ready for tech preview
>> by the end of Juno, and to have the integration/migration/
>> co-existence questions addressed in Kilo.
>>
>> Moving the ceilometer core to using gnocchi will be contingent
>> on it demonstrating the required performance characteristics and
>> providing the semantics needed to support a v3 

Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Doug Hellmann

On Aug 11, 2014, at 3:26 PM, Joshua Harlow  wrote:

> 
> 
> On Mon, Aug 11, 2014 at 11:02 AM, Yuriy Taraday  wrote:
>> On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow  wrote:
>>> One question from me:
>>> Will there be later fixes to remove oslo.config dependency/usage from 
>>> oslo.concurrency?
>>> I still don't understand how oslo.concurrency can be used as a library with 
>>> the configuration being set in a static manner via oslo.config (let's use 
>>> the example of `lock_path` @ 
>>> https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41).
>>>  For example:
>>> Library X inside application Z uses lockutils (via the nice 
>>> oslo.concurrency library) and sets the configuration `lock_path` to its 
>>> desired settings, then library Y (also a user of oslo.concurrency) inside 
>>> same application Z sets the configuration for `lock_path` to its desired 
>>> settings. Now both have some unknown set of configuration they have set and 
>>> when library X (or Y) continues to use lockutils they will be using some 
>>> mix of configuration (likely some mish mash of settings set by X and Y); 
>>> perhaps to a `lock_path` that neither actually wants to be able to write 
>>> to...
>>> This doesn't seem like it will end well; and will just cause headaches 
>>> during debug sessions, testing, integration and more...
>>> The same question can be asked about the `set_defaults()` function, how is 
>>> library Y or X expected to use this (are they?)??
>>> I hope one of the later changes is to remove/fix this??
>>> Thoughts?
>>> -Josh
>> I'd be happy to remove lock_path config variable altogether. It's basically 
>> never used. There are two basic branches in code wrt lock_path:
>> - when you provide lock_path argument to lock (and derivative functions), 
>> file-based lock is used and CONF.lock_path is ignored; - when you don't 
>> provide lock_path in arguments, semaphore-based lock is used and 
>> CONF.lock_path is just a prefix for its name (before hashing).
> 
> Agreed, it just seems confusing (and bad) to have parts of the API come in 
> from `CONF.lock_path` (or other `CONF.*` options) and other parts of the API 
> come in via function parameters. This just makes understanding the API and 
> knowing how to interact with it that much harder (after all what is the right 
> way of using XYZ feature when it can be changed via a out-of-band *hidden* 
> API call via configuration adjustments under the covers?)... This makes it 
> really hard to use oslo.concurrency in taskflow (and likely other libraries 
> that would like to consume oslo.concurrency, seeing that it will be on pypi, 
> I would 

The libraries placed in the oslo namespace are very much NOT meant to be used 
by anything other than OpenStack. They are intended to be the glue layer 
between OpenStack and some other implementation libraries.

oslo.concurrency wraps pylockfile and the plan is to move the actual lock code 
into pylockfile without the oslo.config dependency. That will make pylockfile 
reusable by taskflow and tooz, and the locking stuff in oslo.concurrency a 
smaller API with consistent configuration for use by applications.

> expect this number to grow...) since taskflow would really appreciate and 
> desire to have stable APIs that don't change by some configuration that can 
> be set by some party via some out-of-band method (for example some other 
> library or program calling `set_defaults()`). This kind of way of using an 
> API (half of the settings from config, half of the settings from the 
> functions API...) may be ok for applications but it's not IMHO ok for 
> libraries (or clients) that want to use oslo.concurrency. 
> Hopefully it can be fixed so that it works both ways? Oslo.db I believe 
> made this work better by allowing for configuration to come in via a 
> configuration object that can be provided by the user of oslo.db, this makes 
> the API that oslo.db exposes strongly tied to the attributes & documentation 
> of that object. I still don't think that's perfect either since it's likely 
> that the documentation for what that object's attributes should be is not as 
> up to date or easy to change as updating function/method documentation…

That technique of having the configuration object passed to the oslo library 
will be repeated in the other new libraries we are creating if they already 
depend on configuration settings of some sort. The configuration options are 
not part of the public API of the library, so they and their definitions will 
be hidden from the caller, but the library has to be given a configuration 
object in order to load the settings for itself.
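
As a rough illustration of that pattern (the LockManager class and option
names are invented for the example; only the oslo.config calls are real),
the application owns the ConfigOpts instance and the library registers its
own options on whatever object it is handed:

    from oslo.config import cfg

    _opts = [
        cfg.StrOpt('lock_path',
                   help='Directory used for external lock files.'),
    ]

    class LockManager(object):
        """Hypothetical library object configured by the caller's conf."""

        def __init__(self, conf):
            # The option definitions stay private to the library; the
            # application only supplies the configuration object.
            self._conf = conf
            self._conf.register_opts(_opts, group='locking')

        @property
        def lock_path(self):
            return self._conf.locking.lock_path

    # Application side:
    conf = cfg.ConfigOpts()
    conf([])  # normally this parses argv and the config files
    manager = LockManager(conf)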

> 
>> I wonder if users even set lock_path in their configs as it has almost no 
>> effect. So I'm all for removing it, but...
>> From what I understand, every major change in lockutils drags along a lot of 
>> headache for everybody (and risk of bugs that would be discovered very 
>> late). So is such change re

Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Joshua Harlow



On Mon, Aug 11, 2014 at 11:02 AM, Yuriy Taraday  
wrote:
On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow  
wrote:

One question from me:

Will there be later fixes to remove oslo.config dependency/usage 
from oslo.concurrency?


I still don't understand how oslo.concurrency can be used as a 
library with the configuration being set in a static manner via 
oslo.config (let's use the example of `lock_path` @ 
https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For example:


Library X inside application Z uses lockutils (via the nice 
oslo.concurrency library) and sets the configuration `lock_path` to 
its desired settings, then library Y (also a user of 
oslo.concurrency) inside same application Z sets the configuration 
for `lock_path` to its desired settings. Now both have some unknown 
set of configuration they have set and when library X (or Y) 
continues to use lockutils they will be using some mix of 
configuration (likely some mish mash of settings set by X and Y); 
perhaps to a `lock_path` that neither actually wants to be able to 
write to...


This doesn't seem like it will end well; and will just cause 
headaches during debug sessions, testing, integration and more...


The same question can be asked about the `set_defaults()` function, 
how is library Y or X expected to use this (are they?)??


I hope one of the later changes is to remove/fix this??

Thoughts?

-Josh


I'd be happy to remove lock_path config variable altogether. It's 
basically never used. There are two basic branches in code wrt 
lock_path:
- when you provide lock_path argument to lock (and derivative 
functions), file-based lock is used and CONF.lock_path is ignored; 
- when you don't provide lock_path in arguments, semaphore-based lock 
is used and CONF.lock_path is just a prefix for its name (before 
hashing).


Agreed, it just seems confusing (and bad) to have parts of the API come 
in from `CONF.lock_path` (or other `CONF.*` options) and other parts of 
the API come in via function parameters. This just makes understanding 
the API and knowing how to interact with it that much harder (after all 
what is the right way of using XYZ feature when it can be changed via a 
out-of-band *hidden* API call via configuration adjustments under the 
covers?)... This makes it really hard to use oslo.concurrency in 
taskflow (and likely other libraries that would like to consume 
oslo.concurrency, seeing that it will be on pypi, I would expect this 
number to grow...) since taskflow would really appreciate and desire to 
have stable APIs that don't change by some configuration that can be 
set by some party via some out-of-band method (for example some other 
library or program calling `set_defaults()`). This kind of way of using 
an API (half of the settings from config, half of the settings from the 
functions API...) may be ok for applications but it's not IMHO ok for 
libraries (or clients) that want to use oslo.concurrency. 

Hopefully it can be fixed so that it works both ways? Oslo.db I 
believe made this work better by allowing for configuration to come in 
via a configuration object that can be provided by the user of oslo.db; 
this makes the API that oslo.db exposes strongly tied to the attributes 
& documentation of that object. I still don't think that's perfect 
either, since it's likely that the documentation for what that object's 
attributes should be is not as up to date or easy to change as 
updating function/method documentation...
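
To make the failure mode concrete, here's a toy example (hypothetical option
usage, not the real lockutils code) of what that *action at a distance* looks
like from a library's point of view when everything funnels through the
process-global config object:

    from oslo.config import cfg

    CONF = cfg.CONF  # one global configuration object per process

    # Registered once by whichever consumer happens to get there first.
    CONF.register_opts([cfg.StrOpt('lock_path')])

    # Library X, somewhere deep inside application Z:
    CONF.set_default('lock_path', '/var/lib/libx/locks')

    # Library Y, imported later in the same process, silently overrides it:
    CONF.set_default('lock_path', '/tmp/liby')

    CONF([])
    print(CONF.lock_path)  # -> '/tmp/liby'
    # Library X still believes it configured itself, but any lock it takes
    # from now on lands in library Y's directory, and neither library has
    # a way to notice that the rug moved.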


I wonder if users even set lock_path in their configs as it has 
almost no effect. So I'm all for removing it, but...
From what I understand, every major change in lockutils drags along a 
lot of headache for everybody (and risk of bugs that would be 
discovered very late). So is such change really worth it? And if so, 
it will require very thorough research of lockutils usage patterns.


Sounds like tech debt to me, it always requires work to make something 
better. Are we the type of community that will avoid changing things 
(for the better) because we fear introducing new bugs that may be found 
along the way? I for one hope that we are not that type of community 
(that type of community will die due to its own *fake* fears...).




--

Kind regards, Yuriy.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reverting the HOT migration? // dealing with lockstep changes

2014-08-11 Thread Dan Prince
On Tue, 2014-08-12 at 06:58 +1200, Robert Collins wrote:
> Hi, so shortly after the HOT migration landed, we hit
> https://bugs.launchpad.net/tripleo/+bug/1354305 which is that on even
> quite recently deployed clouds, the migrated templates were just too
> new. A partial revert (of just the list_join bit) fixes that, but a
> deeper problem emerged which is that stack-update to get from a
> non-HOT to HOT template appears broken
> (https://bugs.launchpad.net/heat/+bug/1354962).
> 
> I think we need to revert the HOT migration today, as forcing a
> scorched earth recreation of a cloud is not a great answer for folk
> that have deployed versions - its a backwards compat issue.
> 
> It's true that our release as of icehouse isn't really usable, so we
> could try to wiggle our way past this one, but I think as the first
> real test of our new backwards compat policy, that that would be a
> mistake.

Hmmm. We blocked a good bit of changes to get these HOT templates in so
I hate to see us revert them. Also, It isn't clear to me how much work
it would be to fully support the non-HOT to HOT templates upgrade path.
How much work is this? And is that something we really want to spend
time on instead of all the other things?

Why not just create a branch of the old templates that works on the
existing underclouds? Users would then need to use these templates as a
one-time upgrade step to be able to upgrade to a heat HOT capable
undercloud first.

With regards to the seed...

Would a tool that allowed us to migrate the stateful data from an old
seed to a new seed be a better use of time here? A re-seeder might be a
useful tool to future-proof upgrade paths that rely on the software
versions in the seed (Heat etc.) that aren't easily upgradable.
Packages would help here too, but not everyone uses them...

Dan

> 
> What we need to be able to land it again, is some way whereby an
> existing cloud can upgrade their undercloud (via stack-update against
> the old heat deployed in the seed [today, old heat in the undercloud
> itself in future]) and then once that is deployed subsequent templates
> can use the new features. We're likely to run into such lockstep
> changes in future, so we also need to be able to recognise them in
> review / design, and call them out so we can fix them early rather
> than deep down the pike.
> 
> -Rob
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Joshua Harlow



On Mon, Aug 11, 2014 at 11:39 AM, Ben Nemec  
wrote:

On 08/11/2014 01:02 PM, Yuriy Taraday wrote:
 On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow 
 wrote:
 

 One question from me:

 Will there be later fixes to remove oslo.config dependency/usage 
from

 oslo.concurrency?

 I still don't understand how oslo.concurrency can be used as a 
library
 with the configuration being set in a static manner via 
oslo.config (let's

 use the example of `lock_path` @ https://github.com/YorikSar/
 oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For

 example:

 Library X inside application Z uses lockutils (via the nice
 oslo.concurrency library) and sets the configuration `lock_path` 
to its
 desired settings, then library Y (also a user of oslo.concurrency) 
inside
 same application Z sets the configuration for `lock_path` to its 
desired
 settings. Now both have some unknown set of configuration they 
have set and
 when library X (or Y) continues to use lockutils they will be 
using some
 mix of configuration (likely some mish mash of settings set by X 
and Y);
 perhaps to a `lock_path` that neither actually wants to be able to 
write

 to...

 This doesn't seem like it will end well; and will just cause 
headaches

 during debug sessions, testing, integration and more...

 The same question can be asked about the `set_defaults()` 
function, how is

 library Y or X expected to use this (are they?)??

 I hope one of the later changes is to remove/fix this??

 Thoughts?

 -Josh
 
 
 I'd be happy to remove lock_path config variable altogether. It's 
basically

 never used. There are two basic branches in code wrt lock_path:
 - when you provide lock_path argument to lock (and derivative 
functions),

 file-based lock is used and CONF.lock_path is ignored;
 - when you don't provide lock_path in arguments, semaphore-based 
lock is
 used and CONF.lock_path is just a prefix for its name (before 
hashing).
 
 I wonder if users even set lock_path in their configs as it has 
almost no

 effect. So I'm all for removing it, but...
 From what I understand, every major change in lockutils drags along 
a lot
 of headache for everybody (and risk of bugs that would be 
discovered very
 late). So is such change really worth it? And if so, it will 
require very

 thorough research of lockutils usage patterns.


Two things lock_path has to stay for: Windows and consumers who 
require
file-based locking semantics.  Neither of those use cases are trivial 
to

remove, so IMHO it would not be appropriate to do it as part of the
graduation.  If we were going to alter the API that much it needed to
happen in incubator.


As far as lock_path mismatches, that shouldn't be a problem unless a
consumer is doing something very unwise.  Oslo libs get their
configuration from the application using them, so unless the 
application

passes two separate conf objects to library X and Y they're both going
to get consistent settings.  If someone _is_ doing that, then I think
it's their responsibility to make sure the options in both config 
files

are compatible with each other.


Why would it be assumed they would pass the same settings (how is that 
even possible to know ahead of time? especially if library X pulls in a 
new library ZZ that requires a new configuration setting). For example, 
one directory for `lock_path` may be reasonable for tooz and another 
may be reasonable for taskflow (completely depends on their intended 
usage); it would likely not be desirable to have them go to the same 
location. Forcing application Z to know the inner workings of library X 
and library Y (or future unknown library ZZ) is just pushing the 
problem onto the library user, which seems inappropriate and breaks the 
whole point of having abstractions & APIs in the first place... This 
IMHO is part of the problem with having statically set *action at a 
distance* type of configuration, the libraries themselves are not in 
control of their own configuration, which breaks abstractions & APIs 
left and right. If some application Y can go under a library and pull 
the rug out from under it, how is that a reasonable thing to expect the 
library to be able to predict & handle?


This kind of requirement has always made me wonder how other libraries 
(like tooz, or taskflow) actually interact with any of the oslo.* 
libraries in any predictable way (since those libraries could be 
interacting with oslo.* libraries that have configuration that can be 
switched out from underneath them, making those libraries have *secret* 
APIs that appear and disappear depending on which oslo.* library 
was newly added as a dependency and what newly added configuration that 
library sucked in/exposed...).
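
For concreteness, a minimal sketch of the two usage patterns in question,
assuming lockutils.lock() accepts an explicit lock_path argument, per the two
branches Yuriy describes above (the do_cleanup() calls are just placeholders):

    from oslo.concurrency import lockutils

    # Library X: external (file-based) lock whose directory comes from
    # whatever the application (or some other library) set in CONF.lock_path.
    with lockutils.lock('resource-cleanup', external=True):
        do_cleanup()

    # Library Y: pins its own lock directory instead of trusting the
    # process-wide configuration.
    with lockutils.lock('resource-cleanup', external=True,
                        lock_path='/var/lib/library-y/locks'):
        do_other_cleanup()

Whether the second form is what libraries should be doing is exactly the
question above.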


-Josh




-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Vijay Venkatachalam

Yes, the point was to say "the plugin need not restrict, and should let the
driver decide what to do with the API".

Even if the call were made to the driver immediately, I understand the driver
might decide to ignore it first and schedule later. But if the call is present,
there is scope for validation. Also, the driver might be making an async API
call to the backend, in which case a deployment error cannot be shown to the
user immediately.

W.r.t. identifying a provider/driver, how about making the tenant the default
"root" object? A "tenantid" is already associated with each of these entities,
so there is no additional pain. A tenant who wants to override that can specify
the provider in each of the entities.

If you think of this in terms of the UI: if the load balancer configuration is
exposed as a single wizard (which has loadbalancer, listener, pool and monitor
properties), then the provider is chosen only once.
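
For illustration, a rough sketch of what "provider chosen only once" could
look like at the API level, assuming the v2 provider attribute on the root
loadbalancer object (all values below are made up):

    POST /v2.0/lbaas/loadbalancers
    {"loadbalancer": {"name": "lb1",
                      "vip_subnet_id": "<subnet-uuid>",
                      "provider": "netscaler"}}

    POST /v2.0/lbaas/listeners
    {"listener": {"loadbalancer_id": "<lb-uuid>",
                  "protocol": "HTTP",
                  "protocol_port": 80}}

Everything attached under that loadbalancer would then be routed to the same
provider without repeating it on each entity.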

Curious question: is the flavour framework expected to address this problem?

Thanks,
Vijay V.

-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com] 
Sent: 11 August 2014 22:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver 
interface on every API request"

Hi Sam,

Very true.  I think that Vijay’s objection is that we are currently imposing a 
logical structure on the driver, when it should be a driver decision.  
Certainly, it goes both ways.

And I also agree that the mechanism for returning multiple errors, and the 
ability to specify whether those errors are fatal or not, individually, is 
currently weak.

Doug


On 8/11/14, 10:21 AM, "Samuel Bercovici"  wrote:

>Hi Doug,
>
>In some implementations Driver !== Device. I think this is also true 
>for HA Proxy.
>This might mean that there is a difference between creating a logical 
>object and when there is enough information to actually schedule/place 
>this into a device.
>The ability to express such errors (detecting an error on a logical 
>object after it was created but when it actually get scheduled) should 
>be discussed and addressed anyway.
>
>-Sam.
>
>
>-Original Message-
>From: Doug Wiegley [mailto:do...@a10networks.com]
>Sent: Monday, August 11, 2014 6:55 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling 
>driver interface on every API request"
>
>Hi all,
>
>> Validations such as "timeout > delay" should be performed on the API 
>>level before it reaches the driver.
>For a configuration tree (lb, listeners, pools, etc.), there should be 
>one provider.
>
>You're right, but I think the point of Vijay's example was to highlight 
>the combo error problem with populating all of the driver objects at 
>once (in short, the driver interface isn't well suited to that model.)  
>That his one example can be covered by API validators is irrelevant.  
>Consider a backend that does not support APP_COOKIE's, or 
>HTTPS_TERMINATED (but has multiple listeners) instead.  Should the 
>entire load balancer create fail, or should it offer degraded service?  
>Do all drivers have to implement a transaction rollback; wait, the 
>interface makes that very hard.  That's his point.  The driver is no 
>longer just glue code between interfaces; it's now a mini-object error handler.
>
>
>> Having provider defined in multiple places does not make sense.
>
>Channeling Brandon, who can yell if I get this wrong, the point is not 
>to have a potentially different provider on each object.  It's to allow 
>a provider to be assigned when the first object in the tree is created, 
>so that future related objects will always get routed to the same provider.
>Not knowing which provider should get all the objects is why we have to 
>wait until we see a LoadBalancer object.
>
>
>All of this sort of edge case nonsense is because we (the royal we, the 
>community), wanted all load balancer objects to be "root" objects, even 
>though only one of them is an actual root today, to support 
>many-to-many relationships among all of them, at some future date, 
>without an interface change.  If my bias is showing that I'm not a fan 
>of adding this complexity for that, I'm not surprised.
>
>Thanks,
>doug
>
>
>On 8/11/14, 7:57 AM, "Samuel Bercovici"  wrote:
>
>>Hi,
>> 
>>Validations such as "timeout > delay" should be performed on the API 
>>level before it reaches the driver.
>> 
>>For a configuration tree (lb, listeners, pools, etc.), there should be 
>>one provider.
>>
>>Having provider defined in multiple places does not make sense.
>> 
>> 
>>-San.
>> 
>> 
>>From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
>>
>>Sent: Monday, August 11, 2014 2:43 PM
>>To: OpenStack Development Mailing List
>>(openstack-dev@lists.openstack.org)
>>Subject: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling 
>>driver interface on every API request"
>>
>>
>> 
>>Hi:
>> 
>>

Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Eoghan Glynn


> Hi Eoghan,
> 
> Thanks for the note below. However, one thing the overview below does not
> cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged. Many
> folks feel that this technology is a viable solution for the problem space
> discussed below.

Great question Brad!

As it happens we've been working closely with Paul Dix (lead
developer of InfluxDB) to ensure that this metrics store would be
usable as a backend driver. That conversation actually kicked off
at the Juno summit in Atlanta, but it really got off the ground
at our mid-cycle meet-up in Paris on in early July.

I wrote a rough strawman version of an InfluxDB driver in advance
of the mid-cycle to frame the discussion, and Paul Dix traveled
to the meet-up so we could have the discussion face-to-face. The
conclusion was that InfluxDB would indeed potentially be a great
fit, modulo some requirements that we identified during the detailed
discussions:

 * shard-space-based retention & backgrounded deletion
 * capability to merge individual timeseries for cross-aggregation
 * backfill-aware downsampling

The InfluxDB folks have committed to implementing those features over
July and August, and have made concrete progress on that score.

I hope that provides enough detail to answer your question?

Cheers,
Eoghan

> Thanks,
> 
> Brad
> 
> 
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
> 
> 
> 
> From: Eoghan Glynn 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 08/06/2014 11:17 AM
> Subject: [openstack-dev] [tc][ceilometer] Some background on the gnocchi
> project
> 
> 
> 
> 
> 
> Folks,
> 
> It's come to our attention that some key individuals are not
> fully up-to-date on gnocchi activities, so it being a good and
> healthy thing to ensure we're as communicative as possible about
> our roadmap, I've provided a high-level overview here of our
> thinking. This is intended as a precursor to further discussion
> with the TC.
> 
> Cheers,
> Eoghan
> 
> 
> What gnocchi is:
> ===
> 
> Gnocchi is a separate, but related, project spun up on stackforge
> by Julien Danjou, with the objective of providing efficient
> storage and retrieval of timeseries-oriented data and resource
> representations.
> 
> The goal is to experiment with a potential approach to addressing
> an architectural misstep made in the very earliest days of
> ceilometer, specifically the decision to store snapshots of some
> resource metadata alongside each metric datapoint. The core idea
> is to move to storing datapoints shorn of metadata, and instead
> allow the resource-state timeline to be reconstructed more cheaply
> from much less frequently occurring events (e.g. instance resizes
> or migrations).
> 
> 
> What gnocchi isn't:
> ==
> 
> Gnocchi is not a large-scale under-the-radar rewrite of a core
> OpenStack component along the lines of keystone-lite.
> 
> The change is concentrated on the final data-storage phase of
> the ceilometer pipeline, so will have little initial impact on the
> data-acquiring agents, or on transformation phase.
> 
> We've been totally open at the Atlanta summit and other forums
> about this approach being a multi-cycle effort.
> 
> 
> Why we decided to do it this way:
> 
> 
> The intent behind spinning up a separate project on stackforge
> was to allow the work progress at arms-length from ceilometer,
> allowing normalcy to be maintained on the core project and a
> rapid rate of innovation on gnocchi.
> 
> Note that that the developers primarily contributing to gnocchi
> represent a cross-section of the core team, and there's a regular
> feedback loop in the form of a recurring agenda item at the
> weekly team meeting to avoid the effort becoming silo'd.
> 
> 
> But isn't re-architecting frowned upon?
> ==
> 
> Well, the architecture of other OpenStack projects have also
> under-gone change as the community understanding of the
> implications of prior design decisions has evolved.
> 
> Take for example the move towards nova no-db-compute & the
> unified-object-model in order to address issues in the nova
> architecture that made progress towards rolling upgrades
> unneccessarily difficult.
> 
> The point, in my understanding, is not to avoid doing the
> course-correction where it's deemed necessary. Rather, the
> principle is more that these corrections happen in an open
> and planned way.
> 
> 
> The path forward:
> 
> 
> A subset of the ceilometer community will continue to work on
> gnocchi in parallel with the ceilometer core over the remainder
> of the Juno cycle and into the Kilo timeframe. The goal is to
> have an initial implementation of gnocchi ready for tech preview
> by the end of Juno, and to have the integration/migration/
> co-existence questions addressed in Kilo.
> 

Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-11 Thread Doug Hellmann

On Aug 8, 2014, at 7:22 PM, Devananda van der Veen  
wrote:

> On Fri, Aug 8, 2014 at 12:41 PM, Doug Hellmann  wrote:
>> 
>> That’s right. The preferred approach is to put the register_opt() in
>> *runtime* code somewhere before the option will be used. That might be in
>> the constructor for a class that uses an option, for example, as described
>> in
>> http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options
>> 
>> Doug
> 
> Interesting.
> 
> I've been following the prevailing example in Nova, which is to
> register opts at the top of a module, immediately after defining them.
> Is there a situation in which one approach is better than the other?

The approach used in Nova is the “old” way of doing it. It works, but assumes 
that all of the application code is modifying a global configuration object. 
The runtime approach allows you to pass a configuration object to a library, 
which makes it easier to mock the configuration for testing and avoids having 
the configuration options bleed into the public API of the library. We’ve 
started using the runtime approach in new Oslo libraries that have 
configuration options, but changing the implementation in existing application 
code isn’t strictly necessary.
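
For illustration, a minimal sketch of the runtime-registration pattern (the
option and class names are invented for the example):

    from oslo.config import cfg

    _opts = [
        cfg.IntOpt('worker_count', default=4,
                   help='Number of worker threads the helper spawns.'),
    ]

    class Helper(object):
        def __init__(self, conf):
            # Register against the conf object we were handed, at runtime,
            # rather than against a global CONF at import time.
            self.conf = conf
            self.conf.register_opts(_opts, group='helper')

        def worker_count(self):
            return self.conf.helper.worker_count

In tests you can then pass in a private cfg.ConfigOpts() instance instead of
patching a module-level global.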

Doug

> 
> Thanks,
> Devananda
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] reverting the HOT migration? // dealing with lockstep changes

2014-08-11 Thread Robert Collins
Hi, so shortly after the HOT migration landed, we hit
https://bugs.launchpad.net/tripleo/+bug/1354305 which is that on even
quite recently deployed clouds, the migrated templates were just too
new. A partial revert (of just the list_join bit) fixes that, but a
deeper problem emerged which is that stack-update to get from a
non-HOT to HOT template appears broken
(https://bugs.launchpad.net/heat/+bug/1354962).
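
For reference, a hedged illustration of the sort of incompatibility involved,
assuming the newer list_join intrinsic versus the older Fn::Join form it
replaced (values are made up):

    # understood only by newer Heat
    config_string:
      list_join: [',', ['server1', 'server2']]

    # form accepted by the older, already-deployed Heat
    config_string:
      Fn::Join: [',', ['server1', 'server2']]

A template using the first form is rejected by an undercloud or seed running
the older Heat, which is the compatibility break described above.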

I think we need to revert the HOT migration today, as forcing a
scorched earth recreation of a cloud is not a great answer for folk
that have deployed versions - it's a backwards compat issue.

It's true that our release as of icehouse isn't really usable, so we
could try to wiggle our way past this one, but I think as the first
real test of our new backwards compat policy, that that would be a
mistake.

What we need to be able to land it again is some way whereby an
existing cloud can upgrade their undercloud (via stack-update against
the old heat deployed in the seed [today, old heat in the undercloud
itself in future]) and then once that is deployed subsequent templates
can use the new features. We're likely to run into such lockstep
changes in future, so we also need to be able to recognise them in
review / design, and call them out so we can fix them early rather
than deep down the pike.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-08-11 11:40:07 -0700:
> On Mon, Aug 11, 2014 at 11:20:50AM -0700, Clint Byrum wrote:
> > Excerpts from Zane Bitter's message of 2014-08-11 08:16:56 -0700:
> > > On 11/08/14 10:46, Clint Byrum wrote:
> > > > Right now we're stuck with an update that just doesn't work. It isn't
> > > > just about update-failure-recovery, which is coming along nicely, but
> > > > it is also about the lack of signals to control rebuild, poor support
> > > > for addressing machines as groups, and unacceptable performance in
> > > > large stacks.
> > > 
> > > Are there blueprints/bugs filed for all of these issues?
> > > 
> > 
> > Convergence addresses the poor performance for large stacks in general.
> > We also have this:
> > 
> > https://bugs.launchpad.net/heat/+bug/1306743
> > 
> > Which shows how slow metadata access can get. I have worked on patches
> > but haven't been able to complete them. We made big strides but we are
> > at a point where 40 nodes polling Heat every 30s is too much for one CPU
> > to handle. When we scaled Heat out onto more CPUs on one box by forking
> > we ran into eventlet issues. We also ran into issues because even with
> > many processes we can only use one to resolve templates for a single
> > stack during update, which was also excessively slow.
> 
> Related to this, and a discussion we had recently at the TripleO meetup is
> this spec I raised today:
> 
> https://review.openstack.org/#/c/113296/
> 
> It's following up on the idea that we could potentially address (or at
> least mitigate, pending the fully convergence-ified heat) some of these
> scalability concerns, if TripleO moves from the one-giant-template model
> to a more modular nested-stack/provider model (e.g what Tomas has been
> working on)
> 
> I've not got into enough detail on that yet to be sure if it's achievable
> for Juno, but it seems initially to be complex-but-doable.
> 
> I'd welcome feedback on that idea and how it may fit in with the more
> granular convergence-engine model.
> 
> Can you link to the eventlet/forking issues bug please?  I thought since
> bug #1321303 was fixed that multiple engines and multiple workers should
> work OK, and obviously that being true is a precondition to expending
> significant effort on the nested stack decoupling plan above.
> 

That was the issue. So we fixed that bug, but we never un-reverted
the patch that forks enough engines to use up all the CPUs on a box
by default. That would likely help a lot with metadata access speed
(we could manually do it in TripleO but we tend to push defaults. :)
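
For context, a rough sketch of the kind of default being discussed, assuming
heat.conf's num_engine_workers option is what that reverted patch tuned (the
value is only illustrative):

    [DEFAULT]
    # fork multiple heat-engine workers instead of a single process,
    # e.g. roughly one per core on the box
    num_engine_workers = 8

Whether that becomes the out-of-the-box default, rather than something each
deployment sets by hand, is the point being made above.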

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday August 12th at 19:00 UTC

2014-08-11 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday August 12th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Ben Nemec
On 08/11/2014 01:02 PM, Yuriy Taraday wrote:
> On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow  wrote:
> 
>> One question from me:
>>
>> Will there be later fixes to remove oslo.config dependency/usage from
>> oslo.concurrency?
>>
>> I still don't understand how oslo.concurrency can be used as a library
>> with the configuration being set in a static manner via oslo.config (let's
>> use the example of `lock_path` @ https://github.com/YorikSar/
>> oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). For
>> example:
>>
>> Library X inside application Z uses lockutils (via the nice
>> oslo.concurrency library) and sets the configuration `lock_path` to its
>> desired settings, then library Y (also a user of oslo.concurrency) inside
>> same application Z sets the configuration for `lock_path` to its desired
>> settings. Now both have some unknown set of configuration they have set and
>> when library X (or Y) continues to use lockutils they will be using some
>> mix of configuration (likely some mish mash of settings set by X and Y);
>> perhaps to a `lock_path` that neither actually wants to be able to write
>> to...
>>
>> This doesn't seem like it will end well; and will just cause headaches
>> during debug sessions, testing, integration and more...
>>
>> The same question can be asked about the `set_defaults()` function, how is
>> library Y or X expected to use this (are they?)??
>>
>> I hope one of the later changes is to remove/fix this??
>>
>> Thoughts?
>>
>> -Josh
> 
> 
> I'd be happy to remove lock_path config variable altogether. It's basically
> never used. There are two basic branches in code wrt lock_path:
> - when you provide lock_path argument to lock (and derivative functions),
> file-based lock is used and CONF.lock_path is ignored;
> - when you don't provide lock_path in arguments, semaphore-based lock is
> used and CONF.lock_path is just a prefix for its name (before hashing).
> 
> I wonder if users even set lock_path in their configs as it has almost no
> effect. So I'm all for removing it, but...
> From what I understand, every major change in lockutils drags along a lot
> of headache for everybody (and risk of bugs that would be discovered very
> late). So is such change really worth it? And if so, it will require very
> thorough research of lockutils usage patterns.

Two things lock_path has to stay for: Windows and consumers who require
file-based locking semantics.  Neither of those use cases are trivial to
remove, so IMHO it would not be appropriate to do it as part of the
graduation.  If we were going to alter the API that much it needed to
happen in incubator.

As far as lock_path mismatches, that shouldn't be a problem unless a
consumer is doing something very unwise.  Oslo libs get their
configuration from the application using them, so unless the application
passes two separate conf objects to library X and Y they're both going
to get consistent settings.  If someone _is_ doing that, then I think
it's their responsibility to make sure the options in both config files
are compatible with each other.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Steven Hardy
On Mon, Aug 11, 2014 at 11:20:50AM -0700, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-08-11 08:16:56 -0700:
> > On 11/08/14 10:46, Clint Byrum wrote:
> > > Right now we're stuck with an update that just doesn't work. It isn't
> > > just about update-failure-recovery, which is coming along nicely, but
> > > it is also about the lack of signals to control rebuild, poor support
> > > for addressing machines as groups, and unacceptable performance in
> > > large stacks.
> > 
> > Are there blueprints/bugs filed for all of these issues?
> > 
> 
> Convergnce addresses the poor performance for large stacks in general.
> We also have this:
> 
> https://bugs.launchpad.net/heat/+bug/1306743
> 
> Which shows how slow metadata access can get. I have worked on patches
> but haven't been able to complete them. We made big strides but we are
> at a point where 40 nodes polling Heat every 30s is too much for one CPU
> to handle. When we scaled Heat out onto more CPUs on one box by forking
> we ran into eventlet issues. We also ran into issues because even with
> many processes we can only use one to resolve templates for a single
> stack during update, which was also excessively slow.

Related to this, and a discussion we had recently at the TripleO meetup is
this spec I raised today:

https://review.openstack.org/#/c/113296/

It's following up on the idea that we could potentially address (or at
least mitigate, pending the fully convergence-ified heat) some of these
scalability concerns, if TripleO moves from the one-giant-template model
to a more modular nested-stack/provider model (e.g. what Tomas has been
working on)
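
As a rough sketch of the modularity being described (the resource type names
and file names below are placeholders, not the actual TripleO templates):

    # environment file
    resource_registry:
      OS::TripleO::Controller: controller.yaml
      OS::TripleO::Compute: compute.yaml

    # the top-level template then instantiates them via a group
    resources:
      compute_servers:
        type: OS::Heat::ResourceGroup
        properties:
          count: {get_param: ComputeCount}
          resource_def:
            type: OS::TripleO::Compute

Each nested stack can then be resolved and updated more independently than a
single giant template.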

I've not got into enough detail on that yet to be sure if it's achievable
for Juno, but it seems initially to be complex-but-doable.

I'd welcome feedback on that idea and how it may fit in with the more
granular convergence-engine model.

Can you link to the eventlet/forking issues bug please?  I thought since
bug #1321303 was fixed that multiple engines and multiple workers should
work OK, and obviously that being true is a precondition to expending
significant effort on the nested stack decoupling plan above.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] congress-server fails to start

2014-08-11 Thread Rajdeep Dua
Hi All,
command to start the congress-server fails

$ ./bin/congress-server --config-file etc/congress.conf.sample

Error :
ImportError: No module named keystonemiddleware.auth_token

Installing keystonemiddleware manually also fails

$ sudo pip install keystonemiddleware

Could not find a version that satisfies the requirement
oslo.config>=1.4.0.0a3 (from keystonemiddleware) (from versions: )
No distributions matching the version for oslo.config>=1.4.0.0a3 (from
keystonemiddleware)
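
One possibility, assuming the failure is simply that pip refuses pre-release
versions by default (1.4.0.0a3 is an alpha), would be to allow pre-releases
explicitly and retry:

    $ sudo pip install --pre 'oslo.config>=1.4.0.0a3'
    $ sudo pip install keystonemiddleware

That is only a guess at the cause, though.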

Thanks
Rajdeep
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-11 Thread Jay Pipes

Hi Li, comments inline.

On 08/08/2014 12:03 AM, Li Ma wrote:

Getting a massive amount of information from data storage to be displayed is
where most of the activity happens in OpenStack. The two activities of reading
data and writing (creating, updating and deleting) data are fundamentally
different.

The optimization for these two opposite database activities can be done by
physically separating the databases that service these two different
activities. All the writes go to database servers, which then replicates the
written data to the database server(s) dedicated to servicing the reads.

Currently, AFAIK, many OpenStack deployment in production try to take
advantage of MySQL (includes Percona or MariaDB) multi-master Galera cluster.
It is possible to design and implement a read/write separation schema
for such a DB cluster.


The above does not really make sense for MySQL Galera/PXC clusters *if 
only Galera nodes are used in the cluster*. Since Galera is 
synchronously replicated, there's no real point in segregating writers 
from readers, IMO. Better to just spread the write AND read load equally 
among all Galera cluster nodes.


However, if you have a Galera cluster that then slaves off to one or 
more standard MySQL slaves, then certainly doing writer/reader 
segregation could be useful, especially for directing readers of 
aggregate or report-type data to the read-only slaves.



Actually, OpenStack has a method for read scalability via defining
master_connection and slave_connection in configuration, but this method
lacks flexibility because the decision between master and slave is made in the
logical context (code). It's not transparent for the application developer.
As a result, it is not widely used across the OpenStack projects.

So, I'd like to propose a transparent read/write separation method
for oslo.db that every project may happily takes advantage of it
without any code modification.


I've never seen a writer/reader segregation proxy or middleware piece 
that was properly able to send the "right" reads to the slaves. 
Unfortunately, determining what are the "right" reads to send to the 
slaves is highly application-dependent, since the application knows when 
it can tolerate slave lags.



Moreover, I'd like to put it in the mailing list in advance to
make sure it is acceptable for oslo.db.


I think oslo.db is not the right place for this. I believe the efforts 
that Mike Wilson has been doing in the "slavification" blueprints are 
the more appropriate place to add this slave-aware code.
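
For reference, the existing mechanism Li Ma refers to boils down to a pair of
options in the [database] section, along these lines (values illustrative):

    [database]
    connection = mysql://nova:secret@db-master/nova
    slave_connection = mysql://nova:secret@db-slave/nova

The application code then has to opt in per call site to use the slave, which
is exactly the lack of transparency being discussed.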


Best,
-jay


I'd appreciate any comments.

br.
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [operstack-dev][Congress] congress-server fails

2014-08-11 Thread Rajdeep Dua
Following command gives an error

./*bin*/*congress*-*server* --config-file etc/congress.conf.sample

Error i am getting is ImportError: No module named
keystonemiddleware.auth_token

Tried installing keystonemiddleware manually, got the following error

$ sudo pip install keystonemiddleware
Downloading/unpacking keystonemiddleware
  Running setup.py egg_info for package keystonemiddleware
[pbr] Reusing existing SOURCES.txt
Requirement already satisfied (use --upgrade to upgrade): Babel>=1.3 in
/usr/local/lib/python2.7/dist-packages (from keystonemiddleware)
Requirement already satisfied (use --upgrade to upgrade): iso8601>=0.1.9 in
/usr/local/lib/python2.7/dist-packages (from keystonemiddleware)
Requirement already satisfied (use --upgrade to upgrade): netaddr>=0.7.6 in
/usr/local/lib/python2.7/dist-packages (from keystonemiddleware)
Downloading/unpacking oslo.config>=1.4.0.0a3 (from keystonemiddleware)
  Could not find a version that satisfies the requirement
oslo.config>=1.4.0.0a3 (from keystonemiddleware) (from versions: )
No distributions matching the version for oslo.config>=1.4.0.0a3 (from
keystonemiddleware)
Storing complete log in /home/hadoop/.pip/pip.log

Thanks
Rajdeep
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-08-11 08:16:56 -0700:
> On 11/08/14 10:46, Clint Byrum wrote:
> > Right now we're stuck with an update that just doesn't work. It isn't
> > just about update-failure-recovery, which is coming along nicely, but
> > it is also about the lack of signals to control rebuild, poor support
> > for addressing machines as groups, and unacceptable performance in
> > large stacks.
> 
> Are there blueprints/bugs filed for all of these issues?
> 

Convergnce addresses the poor performance for large stacks in general.
We also have this:

https://bugs.launchpad.net/heat/+bug/1306743

Which shows how slow metadata access can get. I have worked on patches
but haven't been able to complete them. We made big strides but we are
at a point where 40 nodes polling Heat every 30s is too much for one CPU
to handle. When we scaled Heat out onto more CPUs on one box by forking
we ran into eventlet issues. We also ran into issues because even with
many processes we can only use one to resolve templates for a single
stack during update, which was also excessively slow.

We haven't been able to come back around to those yet, but you can see
where this has turned into a bit of a rat hole of optimization.

action-aware-sw-config is sort of what we want for rebuild. We
collaborated with the trove devs on how to also address it for resize
a while back but I have lost track of that work as it has taken a back
seat to more pressing issues.

Addressing groups is a general problem that I've had a hard time
articulating in the past. Tomas Sedovic has done a good job with this
TripleO spec, but I don't know that we've asked for an explicit change
in a bug or spec in Heat just yet:

https://review.openstack.org/#/c/97939/

There are a number of other issues noted in that spec which are already
addressed in Heat, but require refactoring in TripleO's templates and
tools, and that work continues.

The point remains: we need something that works now, and doing an
alternate implementation for updates is actually faster than addressing
all of these issues.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Brad Topol
Hi Eoghan,

Thanks for the note below.  However, one thing the overview below does not
cover is why InfluxDB (http://influxdb.com/) is not being leveraged.
Many folks feel that this technology is a viable solution for the problem 
space discussed below.

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Eoghan Glynn 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   08/06/2014 11:17 AM
Subject:[openstack-dev] [tc][ceilometer] Some background on the 
gnocchi project




Folks,

It's come to our attention that some key individuals are not
fully up-to-date on gnocchi activities, so it being a good and
healthy thing to ensure we're as communicative as possible about
our roadmap, I've provided a high-level overview here of our
thinking. This is intended as a precursor to further discussion
with the TC.

Cheers,
Eoghan


What gnocchi is:
===

Gnocchi is a separate, but related, project spun up on stackforge
by Julien Danjou, with the objective of providing efficient
storage and retrieval of timeseries-oriented data and resource
representations.

The goal is to experiment with a potential approach to addressing
an architectural misstep made in the very earliest days of
ceilometer, specifically the decision to store snapshots of some
resource metadata alongside each metric datapoint. The core idea
is to move to storing datapoints shorn of metadata, and instead
allow the resource-state timeline to be reconstructed more cheaply
from much less frequently occurring events (e.g. instance resizes
or migrations).


What gnocchi isn't:
==

Gnocchi is not a large-scale under-the-radar rewrite of a core
OpenStack component along the lines of keystone-lite.

The change is concentrated on the final data-storage phase of
the ceilometer pipeline, so will have little initial impact on the
data-acquiring agents, or on transformation phase.

We've been totally open at the Atlanta summit and other forums
about this approach being a multi-cycle effort.


Why we decided to do it this way:


The intent behind spinning up a separate project on stackforge
was to allow the work progress at arms-length from ceilometer,
allowing normalcy to be maintained on the core project and a
rapid rate of innovation on gnocchi.

Note that that the developers primarily contributing to gnocchi
represent a cross-section of the core team, and there's a regular
feedback loop in the form of a recurring agenda item at the
weekly team meeting to avoid the effort becoming silo'd.


But isn't re-architecting frowned upon?
==

Well, the architecture of other OpenStack projects have also
under-gone change as the community understanding of the
implications of prior design decisions has evolved.

Take for example the move towards nova no-db-compute & the
unified-object-model in order to address issues in the nova
architecture that made progress towards rolling upgrades
unneccessarily difficult.

The point, in my understanding, is not to avoid doing the
course-correction where it's deemed necessary. Rather, the
principle is more that these corrections happen in an open
and planned way.


The path forward:


A subset of the ceilometer community will continue to work on
gnocchi in parallel with the ceilometer core over the remainder
of the Juno cycle and into the Kilo timeframe. The goal is to
have an initial implementation of gnocchi ready for tech preview
by the end of Juno, and to have the integration/migration/
co-existence questions addressed in Kilo.

Moving the ceilometer core to using gnocchi will be contingent
on it demonstrating the required performance characteristics and
providing the semantics needed to support a v3 ceilometer API
that's fit-for-purpose.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-11 Thread Zane Bitter

On 08/08/14 10:41, Anne Gentle wrote:

- Would have to ensure Rally is what we want "first" as getting to be PTL
since you are first to propose seems to be the model.


I know that at one time it was popular in the trade/gutter press to cast 
aspersions on new projects by saying that someone getting to be a PTL 
was the major motivation behind them. And although, having been there, I 
can tell you that this was grossly unfair to the people concerned, at 
least you could see where the impression might have come from in the 
days where being a PTL guaranteed you a seat on the TC.


These days with a directly elected TC, the job of a PTL is confined to 
administrative busywork. To the extent that a PTL holds any real ex 
officio power, which is not a great extent, it's probably a mistake that 
will soon be rectified. If anyone is really motivated to become a PTL by 
their dreams of avarice then I can guarantee that they will be disappointed.


It seems pretty clear to me that projects want their own programs 
because they don't think it wise to hand over control of all changes to 
the thing they've been working on for the past year or more to a group 
of people who have barely glanced at it before and already have other 
priorities. I submit that this is sufficient to completely explain the 
proliferation of programs without attributing to anyone any untoward 
motivations.


Finally, *yes*, the model is indeed that the project working in the open 
with the community eventually gets incubated, and the proprietary 
project working behind closed doors with a plan to "'open source' it one 
day, when it's perfect" is doomed to perpetual irrelevance. You'll note 
that anyone who is unhappy about that still has an obvious course of 
action that doesn't involve punishing the people who are trying to do 
the Right Thing by the community.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Yuriy Taraday
On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow  wrote:

> One question from me:
>
> Will there be later fixes to remove oslo.config dependency/usage from
> oslo.concurrency?
>
> I still don't understand how oslo.concurrency can be used as a library
> with the configuration being set in a static manner via oslo.config (let's
> use the example of `lock_path` @ https://github.com/YorikSar/
> oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). For
> example:
>
> Library X inside application Z uses lockutils (via the nice
> oslo.concurrency library) and sets the configuration `lock_path` to its
> desired settings, then library Y (also a user of oslo.concurrency) inside
> same application Z sets the configuration for `lock_path` to its desired
> settings. Now both have some unknown set of configuration they have set and
> when library X (or Y) continues to use lockutils they will be using some
> mix of configuration (likely some mish mash of settings set by X and Y);
> perhaps to a `lock_path` that neither actually wants to be able to write
> to...
>
> This doesn't seem like it will end well; and will just cause headaches
> during debug sessions, testing, integration and more...
>
> The same question can be asked about the `set_defaults()` function, how is
> library Y or X expected to use this (are they?)??
>
> I hope one of the later changes is to remove/fix this??
>
> Thoughts?
>
> -Josh


I'd be happy to remove lock_path config variable altogether. It's basically
never used. There are two basic branches in code wrt lock_path:
- when you provide lock_path argument to lock (and derivative functions),
file-based lock is used and CONF.lock_path is ignored;
- when you don't provide lock_path in arguments, semaphore-based lock is
used and CONF.lock_path is just a prefix for its name (before hashing).

I wonder if users even set lock_path in their configs as it has almost no
effect. So I'm all for removing it, but...
From what I understand, every major change in lockutils drags along a lot
of headache for everybody (and risk of bugs that would be discovered very
late). So is such change really worth it? And if so, it will require very
thorough research of lockutils usage patterns.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] [neutron] designate and neutron integration

2014-08-11 Thread Hayes, Graham
kazuhiro MIYASHITA,

As designate progresses with server pools, we aim to have support for a
'private' dns server, that could run within a neutron network - is that
the level of integration you were referring to?

That is, for the time being, a long term goal, and not covered by Carl's
Kilo blueprint.

We talked with both people from both Neutron and Nova in Atlanta, and
worked out the first steps for designate / neutron integration (auto
provisioning of records)

For that level of integration, we are assuming that a neutron router
will be involved in DNS queries within a network.

Long term I would prefer to see a 'private pool' connecting directly to
the Network2 (like any other service VM (LBaaS etc)) and have dnsmasq
pass on only records hosted by that 'private pool' to designate.

This is all yet to be fleshed out, so I am open to suggestions. It
requires that we complete server pools, and that work is only just
starting (it was the main focus of our mid-cycle 2 weeks ago).

Graham

On Mon, 2014-08-11 at 11:02 -0600, Carl Baldwin wrote:
> kazuhiro MIYASHITA,
> 
> I have done a lot of thinking about this.  I have a blueprint on hold
> until Kilo for Neutron/Designate integration [1].
> 
> However, my blueprint doesn't quite address what you are going after
> here.  An assumption that I have made is that Designate is an external
> or internet facing service so a Neutron router needs to be in the
> datapath to carry requests from dnsmasq to an external network.  The
> advantage of this is that it is how Neutron works today so there is no
> new development needed.
> 
> Could you elaborate on the advantages of connecting dnsmasq directly
> to the external network where Designate will be available?
> 
> Carl
> 
> [1] https://review.openstack.org/#/c/88624/
> 
> On Mon, Aug 11, 2014 at 7:51 AM, Miyashita, Kazuhiro
>  wrote:
> > Hi,
> >
> > I want to ask about neutron and designate integration.
> > I think it is better if dnsmasq forwards DNS requests from instances to designate.
> >
> >++
> >|DNS server(designate)   |
> >++
> > |
> > -+--+-- Network1
> >  |
> >   ++
> >   |dnsmasq |
> >   ++
> > |
> > -+--+-- Network2
> >  |
> > +-+
> > |instance |
> > +-+
> >
> > Because it's simpler than having a virtual router connect Network1 and Network2.
> > If a router connects the networks, the instance has to know where the DNS server is,
> > which is complicated.
> > dnsmasq ordinarily returns its own IP address as the DNS server in the DHCP reply, so
> > I think it is natural for dnsmasq to become a gateway to designate.
> >
> > But, I can't connect dnsmasq to Network1. because of today's neutron design.
> >
> > Question:
> >   Does designate design team have a plan such as above integration?
> >   or other integration design?
> >
> > *1: Network1 and Network2 are deployed by neutron.
> > *2: neutron deploys dnsmasq as a dhcp server.
> > dnsmasq can forward DNS request.
> >
> > Thanks,
> >
> > kazuhiro MIYASHITA
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-11 Thread Hemanth Ravi
On Fri, Aug 8, 2014 at 7:13 PM, Armando M.  wrote:

>
>> On Fri, Aug 8, 2014 at 5:38 PM, Armando M.  wrote:
>>
>>>
>>>
   One advantage of the service plugin is that one can leverage the
 neutron common framework such as Keystone authentication where common
 scoping is done. It would be important in the policy type of framework to
 have such scoping

>>>
>>> The framework you're referring to is common and already reusable, it's
>>> not a prerogative of Neutron.
>>>
>>
>>  Are you suggesting that Service Plugins, L3, IPAM etc become individual
>> endpoints, resulting in redundant authentication round-trips for each of
>> the components.
>>
>> Wouldn't this result in degraded performance and potential consistency
>> issues?
>>
>
> The endpoint - in the OpenStack lingo - that exposes the API abstractions
> (concepts and operations) can be, logically and physically, different from
> the worker that implements these abstractions; authentication is orthogonal
> to this and I am not suggesting what you mention.
>

From what I understand, you are saying that the implementation could be
done via a mechanism different than a service plugin. Would this be done by
implementing the service plugin as a different process? This would imply
making changes to the neutron server - plugin interface. If this is the
case, wouldn't it be better to use the existing mechanism to avoid
introducing any instability at this stage of the Juno cycle?


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] [neutron] designate and neutron integration

2014-08-11 Thread Carl Baldwin
kazuhiro MIYASHITA,

I have done a lot of thinking about this.  I have a blueprint on hold
until Kilo for Neutron/Designate integration [1].

However, my blueprint doesn't quite address what you are going after
here.  An assumption that I have made is that Designate is an external
or internet facing service so a Neutron router needs to be in the
datapath to carry requests from dnsmasq to an external network.  The
advantage of this is that it is how Neutron works today so there is no
new development needed.

Could you elaborate on the advantages of connecting dnsmasq directly
to the external network where Designate will be available?

Carl

[1] https://review.openstack.org/#/c/88624/

On Mon, Aug 11, 2014 at 7:51 AM, Miyashita, Kazuhiro
 wrote:
> Hi,
>
> I want to ask about neutron and designate integration.
> I think it is better if dnsmasq forwards DNS requests from instances to designate.
>
>++
>|DNS server(designate)   |
>++
> |
> -+--+-- Network1
>  |
>   ++
>   |dnsmasq |
>   ++
> |
> -+--+-- Network2
>  |
> +-+
> |instance |
> +-+
>
> Because it's simpler than having a virtual router connect Network1 and Network2.
> If a router connects the networks, the instance has to know where the DNS server is,
> which is complicated.
> dnsmasq ordinarily returns its own IP address as the DNS server in the DHCP reply, so
> I think it is natural for dnsmasq to become a gateway to designate.
>
> But, I can't connect dnsmasq to Network1. because of today's neutron design.
>
> Question:
>   Does designate design team have a plan such as above integration?
>   or other integration design?
>
> *1: Network1 and Network2 are deployed by neutron.
> *2: neutron deploys dnsmasq as a dhcp server.
> dnsmasq can forward DNS request.
>
> Thanks,
>
> kazuhiro MIYASHITA
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-11 Thread Robert Kukura


On 8/8/14, 6:28 PM, Salvatore Orlando wrote:
"If we want to keep everything the way it is, we have to change 
everything" [1]


This is pretty much how I felt after reading this proposal, and I felt 
that this quote, which Ivar will probably appreciate, was apt to the 
situation.
Recent events have spurred a discussion about the need for a change in 
process. It is not uncommon indeed to believe that by fixing the 
process, things will inevitably change for better. While no-one argues 
that flaws in processes need to be fixed, no process change will ever 
change anything, in my opinion, unless it is aimed at spurring change 
in people as well.


From what I understand, this proposal starts with the assumption that 
any new feature which is committed to Neutron (ie: has a blueprint 
approved), and is not a required neutron component should be 
considered as a preview. This is not different from the process, 
which, albeit more informally, has been adopted so far. Load 
Balancing, Firewall, VPN, have all been explicitly documented as 
experimental in their first release; I would argue that even if not 
experimental anymore, they may not be considered stable until their 
stability was proven by upstream QA with API and scenario tests - but 
this is not sanctioned anywhere currently, I think.
Correct, this proposal is not so much a new process or change in process 
as a formalization of what we've already been doing, and a suggested 
adaptation to clarify the current expectations around stability of new APIs.


According to this proposal, for preview features:
- all the code would be moved to a "preview" package

Yes.

- Options will be marked as "preview"

Yes.

- URIs should be prefixed with "preview"
That's what I suggested, but, as several people have pointed out, this
does seem like it would impose the cost of breaking API compatibility just
at the point when the feature is being declared stable. I'd like to withdraw
this item.

- CLIs will note the features are "preview" in their help strings

Yes.
- Documentation will explicitly state this feature is "preview" (I 
think we already mark them as experimental, frankly I don't think 
there are a lot of differences in terminology here)
Yes. Again to me, failure is one likely outcome of an "experiment". The 
term "preview" is intended to imply more of a commitment to quickly 
reach stability.

- Database migrations will be in the main alembic path as usual

Right.

- CLI, Devstack and Heat support will be available

Right, as appropriate for the feature.

- Can be used by non-preview neutron code
No, I suggested "No non-preview Neutron code should import code from 
anywhere under the neutron.preview module, ...".

- Will undergo the usual review process
Right. This is key for the code to not have to jump through a new major 
upheaval at right as it becomes stable.
- QA will be desirable, but will be done either with "WIP" tempest
patches or by merging the relevant scenario tests in the preview feature
itself
More than "desirable". We need a way to maintain and run the 
tempest-like API and scenario tests during the stabilization process, 
but to let them evolve with the feature.
- The feature might be promoted or removed, but the process for this 
is not yet defined.
Any suggestions? I did try to address preventing long-term stagnation of 
preview features. As a starting point, reviewing and merging a patch 
that moves the code from the preview sub-tree to its intended location 
could be a lightweight promotion process.


I don't think this change in process will actually encourage better 
behaviour both by contributors and core reviewers.
Encouraging better behavior might be necessary, but wasn't the main 
intent of this proposal. This proposal was intended to clarify and 
formalize the stability expectations around the initial releases of new 
features. It was specifically intended to address the conundrum 
currently faced by reviewers regarding patches that meet all applicable 
quality standards, but may not yet have (somehow, miraculously) achieved 
the maturity associated with stable APIs and features fully supported 
for widespread deployment.
I reckon that better behaviour might be encouraged by forcing
developers and reviewers to merge into the neutron source code tree only
code which meets the highest quality standards. A change in process
should enforce this - and when I think about the criteria, I think of
the same kind of criteria we are being held to in order to declare parity
with nova. Proven reliability and scalability should be a must. Proven
usability should be a requirement for all new APIs.
I agree regarding the quality standards for merging of code, and am not 
suggesting relaxing those one bit. But proving all of the desirable 
system-level attributes of a complex new feature before merging anything 
precludes any kind of iterative development process. I think we should 
consider enforcing things like proven reliability, scalability, and 
usability at th

Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Sylvain Bauza


On 11/08/2014 18:03, Gary Kotton wrote:


On 8/11/14, 6:06 PM, "Dan Smith"  wrote:


As the person who -2'd the review, I'm thankful you raised this issue on
the ML, Jay. Much appreciated.

The "metadetails" term isn't being invented in this patch, of course. I
originally complained about the difference when this was being added:

https://review.openstack.org/#/c/109505/1/nova/api/openstack/compute/contr
ib/server_groups.py,cm

As best I can tell, the response in that patch set about why it's being
translated is wrong (backwards). I expect that the API extension at the
time called it "metadetails" and they decided to make the object the
same and do the translation there.


From what I can tell, the actual server_group API extension that made it
into the tree never got the ability to set/change/etc the
metadata/metadetails anyway, so there's no reason (AFAICT) to add it in
wrongly.

If we care to have this functionality, then I propose we change the
attribute on the object (we can handle this with versioning) and reflect
it as "metadata" in the API.

However, I have to ask: do we really need another distinct metadata
store attached to server_groups? If not, how about we just remove it
from the database and the object, clean up the bit of residue that is
still in the API extension and be done with it?

The initial version of the feature did not make use of this. The reason
was that we chose a very limited subset to be used, that is, affinity and
anti-affinity. Moving forwards we would like to implement a number of
different policies with this. We can drop it at the moment due to the fact
that it is not used.

I think that Yathi may be using this for the constraint scheduler. But I am
not 100% sure.



Unless I'm wrong, I can't see where this metadata is being used in the 
scheduler, either for filtering or for other reasons.


So, please give us some context on why this is currently useful?

If this is something for the near future, I would love to discuss it
with regard to the current split.



Thanks,
-Sylvain


--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Doug Wiegley
Hi Sam,

Very true.  I think that Vijay’s objection is that we are currently
imposing a logical structure on the driver, when it should be a driver
decision.  Certainly, it goes both ways.

And I also agree that the mechanism for returning multiple errors, and the
ability to specify whether those errors are fatal or not, individually, is
currently weak.

Doug


On 8/11/14, 10:21 AM, "Samuel Bercovici"  wrote:

>Hi Doug,
>
>In some implementations Driver !== Device. I think this is also true for
>HA Proxy.
>This might mean that there is a difference between creating a logical
>object and when there is enough information to actually schedule/place
>this into a device.
>The ability to express such errors (detecting an error on a logical
>object after it was created but when it actually get scheduled) should be
>discussed and addressed anyway.
>
>-Sam.
>
>
>-Original Message-
>From: Doug Wiegley [mailto:do...@a10networks.com]
>Sent: Monday, August 11, 2014 6:55 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling
>driver interface on every API request"
>
>Hi all,
>
>> Validations such as "timeout > delay" should be performed on the API
>>level before it reaches the driver.
>For a configuration tree (lb, listeners, pools, etc.), there should be
>one provider.
>
>You're right, but I think the point of Vijay's example was to highlight
>the combo error problem with populating all of the driver objects at once
>(in short, the driver interface isn't well suited to that model.)  That
>his one example can be covered by API validators is irrelevant.  Consider
>a backend that does not support APP_COOKIE's, or HTTPS_TERMINATED (but
>has multiple listeners) instead.  Should the entire load balancer create
>fail, or should it offer degraded service?  Do all drivers have to
>implement a transaction rollback; wait, the interface makes that very
>hard.  That's his point.  The driver is no longer just glue code between
>interfaces; it's now a mini-object error handler.
>
>
>> Having provider defined in multiple places does not make sense.
>
>Channeling Brandon, who can yell if I get this wrong, the point is not to
>have a potentially different provider on each object.  It's to allow a
>provider to be assigned when the first object in the tree is created, so
>that future related objects will always get routed to the same provider.
>Not knowing which provider should get all the objects is why we have to
>wait until we see a LoadBalancer object.
>
>
>All of this sort of edge case nonsense is because we (the royal we, the
>community), wanted all load balancer objects to be "root" objects, even
>though only one of them is an actual root today, to support many-to-many
>relationships among all of them, at some future date, without an
>interface change.  If my bias is showing that I'm not a fan of adding
>this complexity for that, I'm not surprised.
>
>Thanks,
>doug
>
>
>On 8/11/14, 7:57 AM, "Samuel Bercovici"  wrote:
>
>>Hi,
>> 
>>Validations such as "timeout > delay" should be performed on the API
>>level before it reaches the driver.
>> 
>>For a configuration tree (lb, listeners, pools, etc.), there should be
>>one provider.
>>
>>Having provider defined in multiple places does not make sense.
>> 
>> 
>>-Sam.
>> 
>> 
>>From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
>>
>>Sent: Monday, August 11, 2014 2:43 PM
>>To: OpenStack Development Mailing List
>>(openstack-dev@lists.openstack.org)
>>Subject: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver
>>interface on every API request"
>>
>>
>> 
>>Hi:
>> 
>>Continuing from last week's LBaaS meeting…
>> 
>>Currently an entity cannot be sent to driver unless it is linked to
>>loadbalancer because loadbalancer is the root object and driver
>>information is only available with loadbalancer.
>>
>> 
>>The request to the driver is delayed because of which error propagation
>>becomes tricky.
>> 
>>Let's say a monitor was configured with timeout > delay there would be
>>no error then.
>>When a listener is configured there will be a monitor
>>creation/deployment error like "timeout configured greater than delay".
>> 
>>Unless the error is very clearly crafted the user won't be able to
>>understand the error.
>> 
>>I am half-heartedly OK with current approach.
>>
>> 
>>But, I would prefer Brandon's Solution - make provider an attribute in
>>each of the entities to get rid of this problem.
>>
>> 
>>What do others think?
>> 
>>Thanks,
>>Vijay V.
>>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list

Re: [openstack-dev] devstack local.conf file

2014-08-11 Thread Asselin, Ramy
Hi Nikesh,

You need to set the enabled_backends in the local.conf file.
e.g.

[[post-config|$CINDER_CONF]]
[DEFAULT]
enabled_backends=hp_msa_driver
[hp_msa_driver]
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
…

Ramy
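
For reference, a fuller set of sections built from the values in your original local.conf might look like the sketch below (illustrative only; adjust names and credentials to your setup). Without the enabled_backends override, stack.sh leaves only the default lvmdriver-1 backend enabled, which matches the cinder.conf you pasted.

[[post-config|$CINDER_CONF]]
[DEFAULT]
enabled_backends = hp_msa_driver,lvmdriver-1

[hp_msa_driver]
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip = 192.168.2.192
san_login = manage
san_password = !manage
volume_backend_name = HPMSA_FC

[lvmdriver-1]
volume_group = stack-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

(Alternatively, as you found, setting CINDER_ENABLED_BACKENDS=hp_msa:hp_msa_driver,lvm:lvmdriver-1 in the [[local|localrc]] section also gets the backend into enabled_backends.)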

From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
Sent: Monday, August 11, 2014 8:55 AM
To: openst...@lists.openstack.org; openstack-dev@lists.openstack.org
Subject: [openstack-dev] devstack local.conf file

Hi,
I have gone through the devstack links.
They are not as clear as the openstack.org documents.


For example:
when I am using the below local.conf file in devstack, "hp_msa_driver" is not showing 
up in "enabled_backends" in cinder.conf after running stack.sh.

[[local|localrc]]
ADMIN_PASSWORD=vedams123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.170/29
HOST_IP=192.168.2.151
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
TEMPEST_VOLUME_DRIVER=hp_msa_fc
TEMPEST_VOLUME_VENDOR="Hewlett-Packard"
TEMPEST_STORAGE_PROTOCOL=FC


[[post-config|$CINDER_CONF]]
[hp_msa_driver]
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip = 192.168.2.192
san_login = manage
san_password =!manage
volume_backend_name=HPMSA_FC


[lvmdriver-1]
volume_group=stack-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI



I am getting below cinder.conf file after running stack.sh script

[keystone_authtoken]
auth_uri = http://192.168.2.151:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = vedams123
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.151:35357

[DEFAULT]
rabbit_password = vedams123
rabbit_hosts = 192.168.2.151
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
default_volume_type = lvmdriver-1
enabled_backends = lvmdriver-1
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://root:vedams123@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.151
verbose = True
debug = True
auth_strategy = keystone

[lvmdriver-1]
volume_group = stack-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[hp_msa_driver]
volume_backend_name = HPMSA_FC
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver



Then I analyzed the source code of stack.sh and added this line in local.conf:
CINDER_ENABLED_BACKENDS=hp_msa:hp_msa_driver,lvm:lvmdriver-1


Now I am getting hp_msa_fc in enabled_backends in cinder.conf



Regards
Nikesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Samuel Bercovici
Hi Doug,

In some implementations Driver !== Device. I think this is also true for HA 
Proxy.
This might mean that there is a difference between creating a logical object 
and when there is enough information to actually schedule/place this into a 
device.
The ability to express such errors (detecting an error on a logical object 
after it was created but when it actually get scheduled) should be discussed 
and addressed anyway.

-Sam.
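
For illustration, the kind of API-level check I mean could be as simple as the following sketch (plain Python; the function name and exception are made up here, not the actual Neutron validation hooks):

def validate_health_monitor(body):
    # Reject an invalid monitor before anything is handed to a driver.
    timeout = body.get('timeout')
    delay = body.get('delay')
    if timeout is not None and delay is not None and timeout > delay:
        # At the API layer this would surface as a 400 Bad Request.
        raise ValueError("health monitor timeout (%s) must not be greater "
                         "than delay (%s)" % (timeout, delay))
    return body

That way the user gets the error at monitor creation time, regardless of which driver the configuration eventually lands on.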


-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com] 
Sent: Monday, August 11, 2014 6:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver 
interface on every API request"

Hi all,

> Validations such as "timeout > delay" should be performed on the API 
>level before it reaches the driver.
For a configuration tree (lb, listeners, pools, etc.), there should be one 
provider.

You're right, but I think the point of Vijay's example was to highlight the 
combo error problem with populating all of the driver objects at once (in 
short, the driver interface isn't well suited to that model.)  That his one 
example can be covered by API validators is irrelevant.  Consider a backend 
that does not support APP_COOKIE's, or HTTPS_TERMINATED (but has multiple 
listeners) instead.  Should the entire load balancer create fail, or should it 
offer degraded service?  Do all drivers have to implement a transaction 
rollback; wait, the interface makes that very hard.  That's his point.  The 
driver is no longer just glue code between interfaces; it's now a mini-object 
error handler.


> Having provider defined in multiple places does not make sense.

Channeling Brandon, who can yell if I get this wrong, the point is not to have 
a potentially different provider on each object.  It's to allow a provider to 
be assigned when the first object in the tree is created, so that future 
related objects will always get routed to the same provider.
Not knowing which provider should get all the objects is why we have to wait 
until we see a LoadBalancer object.


All of this sort of edge case nonsense is because we (the royal we, the 
community), wanted all load balancer objects to be "root" objects, even though 
only one of them is an actual root today, to support many-to-many relationships 
among all of them, at some future date, without an interface change.  If my 
bias is showing that I'm not a fan of adding this complexity for that, I'm not 
surprised.

Thanks,
doug


On 8/11/14, 7:57 AM, "Samuel Bercovici"  wrote:

>Hi,
> 
>Validations such as "timeout > delay" should be performed on the API 
>level before it reaches the driver.
> 
>For a configuration tree (lb, listeners, pools, etc.), there should be 
>one provider.
>
>Having provider defined in multiple places does not make sense.
> 
> 
>-Sam.
> 
> 
>From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
>
>Sent: Monday, August 11, 2014 2:43 PM
>To: OpenStack Development Mailing List 
>(openstack-dev@lists.openstack.org)
>Subject: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver 
>interface on every API request"
>
>
> 
>Hi:
> 
>Continuing from last week's LBaaS meeting…
> 
>Currently an entity cannot be sent to driver unless it is linked to 
>loadbalancer because loadbalancer is the root object and driver 
>information is only available with loadbalancer.
>
> 
>The request to the driver is delayed because of which error propagation 
>becomes tricky.
> 
>Let's say a monitor was configured with timeout > delay there would be 
>no error then.
>When a listener is configured there will be a monitor 
>creation/deployment error like "timeout configured greater than delay".
> 
>Unless the error is very clearly crafted the user won't be able to 
>understand the error.
> 
>I am half-heartedly OK with current approach.
>
> 
>But, I would prefer Brandon's Solution - make provider an attribute in 
>each of the entities to get rid of this problem.
>
> 
>What do others think?
> 
>Thanks,
>Vijay V.
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Freescale CI log site is being blocked

2014-08-11 Thread trinath.soman...@freescale.com
Hi-

Today I contacted the service provider regarding the reported malware on the 
website. I got a response that the website is fully functional and malware-free.

I have also manually verified all the directories, subdirectories and log files 
for any malware injected into the website. I have detected none.

There is no anonymous login enabled for FTP or cPanel login to the server. The 
FTP is protected with a strong passcode.

This is an update from my end.

Like all other CIs, normal browsing of logs is now available.

The CI was taken down while the above changes were made. Now the 
CI is active and running jobs.

All the old logs are still in place.

You may browse the old logs using the URLs below:

For ML2 Mechanism driver : 
http://fslopenstackci.com/{change_number}/{change_patchset}/Freescale-ML2-Mechanism-Driver
For FWaaS Plugin : 
http://fslopenstackci.com/{change_number}/{change_patchset}/Freescale-FWaaS-Plugin

Now I have updated the CI to create a BUILD directory as well to showcase logs 
for "rechecks".

With this new change the log URL will be 

For ML2 Mechanism driver : 
http://fslopenstackci.com/{build_number}/{change_number}/{change_patchset}/Freescale-ML2-Mechanism-Driver
For FWaaS Plugin : 
http://fslopenstackci.com/{build_number}/{change_number}/{change_patchset}/Freescale-FWaaS-Plugin


Hi Mestery-

Kindly verify access to the site.

Also, if it is still being blocked, kindly mail me the Cisco WSA logs so I can 
verify the reason behind the blocking.

Kindly help me with your review of my code at 
https://review.openstack.org/#/c/109659/

Thank you all.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | Mob: +91 9866 235 130

-Original Message-
From: Anita Kuno [mailto:ante...@anteaya.info] 
Sent: Saturday, August 09, 2014 11:09 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [third-party] Freescale CI log site is 
being blocked

On 08/08/2014 11:27 PM, trinath.soman...@freescale.com wrote:
> Thanks anita for the reply.
> 
> Previously the existing server is accessible by kyle. But now its not being 
> accessible. 
> 
> For the paid hosting I have its administered by godaddy
If you are paying godaddy to administer the server, have you asked them why one 
of your users has acknowledged your site is blacklisted by Cisco WSA appliances?

If you are paying them to administer your server, answering your question falls 
within their job.

You need to find out the reason behind Cisco security blocking, that is what I 
am asking you to do. It is fine if you don't know, but it is your 
responsibility to find out.

Thanks Trinath,
Anita.

> and the FTP is only accessed by Jenkins. 
> 
> I can try relocating FTP web based file browser script and provide a normal 
> view of files. 
> 
> Don't know the reason behind Cisco Security blocking the access where it has 
> given access to view the website before.
> 
> Thanks a lot again for the brief email.
> 
> 
> --
> Trinath Somanchi - B39208
> trinath.soman...@freescale.com | extn: 4048
> 
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Saturday, August 09, 2014 10:21 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [neutron] [third-party] Freescale CI log 
> site is being blocked
> 
> On 08/08/2014 10:06 PM, trinath.soman...@freescale.com wrote:
>> Hi Sumit-
>>
>> When I try to paste a large log text into paste.openstack, it gives me 
>> image verification and says it's spam.
> Let's not confuse paste.openstack.org's spam blocker with spam blockers on 
> servers. They are two separate functionalities and the conversation does not 
> move forward if we try to pretend they are the same thing or even remotely 
> related, which they are not.
> 
> If you recall, Trinath, the first server you had got hacked since you had not 
> hardened it appropriately. Having hosting via go daddy or any other paid 
> hosting service does not absolve you of the responsibility of having a well 
> maintained server. If you need help maintaining your server, I suggest you 
> contract a server administrator to advise you or do the work. We have to 
> assume a certain level of competence here, due to the responsibility involved 
> I don't think you are going to get many responses to questions if you don't 
> know how to maintain your server.
> This isn't really the the place to ask. Running your third party ci system 
> and copying the logs, sure this is the place, basic server maintenance is 
> your responsibility.
> 
> If you recall, a basic evaluation of your server logs told you you had been 
> hacked the last time. This might be a place to start now.
> 
> In any case, please maintain your server and please address Kyle's concerns.
> 
> Thank you Trinath,
> Anita.
>>
>> I don't know why its taken as spam/malware. It's a paid hosting I had from 
>> GODADDY.
>>
>> --
>> Trinath Somanchi - B39208
>> trinath.soman...@freescale.com | e

Re: [openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers

2014-08-11 Thread Doug Wiegley
Hi Susanne,

While there are a few operators involved with LBaaS that would have good
input, you might want to also ask this on the non-dev mailing list, for a
larger sample size.

Thanks,
doug

On 8/11/14, 3:05 AM, "Susanne Balle"  wrote:

>Gang,
>I was asked the following questions around our Neutron LBaaS use cases:
>1.  Will there be a scenario where the "VIP" port will be in a different
>Node, from all the Member "VMs" in a pool.
>
>
>2.  Also how likely is it for the LBaaS configured subnet to not have a
>"router" and just use the "extra_routes"
> option.
>3.  Is there a valid use case where customers will be using the
>"extra_routes" with subnets instead of the "routers".
> ( It would be great if you have some use case picture for this).
>Feel free to chime in here and I'll summarize the answers.
>Regards Susanne
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Gary Kotton


On 8/11/14, 6:06 PM, "Dan Smith"  wrote:

>> As the person who -2'd the review, I'm thankful you raised this issue on
>> the ML, Jay. Much appreciated.
>
>The "metadetails" term isn't being invented in this patch, of course. I
>originally complained about the difference when this was being added:
>
>https://review.openstack.org/#/c/109505/1/nova/api/openstack/compute/contr
>ib/server_groups.py,cm
>
>As best I can tell, the response in that patch set about why it's being
>translated is wrong (backwards). I expect that the API extension at the
>time called it "metadetails" and they decided to make the object the
>same and do the translation there.
>
>From what I can tell, the actual server_group API extension that made it
>into the tree never got the ability to set/change/etc the
>metadata/metadetails anyway, so there's no reason (AFAICT) to add it in
>wrongly.
>
>If we care to have this functionality, then I propose we change the
>attribute on the object (we can handle this with versioning) and reflect
>it as "metadata" in the API.
>
>However, I have to ask: do we really need another distinct metadata
>store attached to server_groups? If not, how about we just remove it
>from the database and the object, clean up the bit of residue that is
>still in the API extension and be done with it?

The initial version of the feature did not make use of this. The reason
was that we chose a very limited subset to be used, that is, affinity and
anti-affinity. Moving forward we would like to implement a number of
different policies with this. We can drop it at the moment since it is
not used.

I think that Yathi may be using this for the constraint scheduler. But I am
not 100% sure.

>
>--Dan
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] 6.0 blueprints cleanup

2014-08-11 Thread Mike Scherbakov
+2, yes please


On Mon, Aug 11, 2014 at 7:42 PM, Dmitry Borodaenko  wrote:

> All,
>
> Please refer to this email for a list of information that has to be
> populated in a blueprint before it can be assigned and scheduled:
> http://lists.openstack.org/pipermail/openstack-dev/2014-August/042042.html
>
> Please don't schedule a blueprint to 6.0 until it has these details
> and its assignees are confirmed.
>
> On Mon, Aug 11, 2014 at 8:30 AM, Dmitry Pyzhov 
> wrote:
> > We've moved all blueprints from 6.0 to 'next' milestone. It has been done
> > in order to get a better view of the features we really want to implement in 6.0.
> >
> > Feature freeze for 6.0 release is planned to 18th of September. If you
> are
> > going to merge your blueprint before that date, you can move it to 6.0
> > milestone and 6.0.x series. But blueprint must have fixed scope and must
> be
> > assigned to person who will lead this activity.
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-11 Thread David Kranz

On 08/06/2014 05:48 PM, John Griffith wrote:
I have to agree with Duncan here.  I also don't know if I fully 
understand the limit in options.  Stress test seems like it 
could/should be different (again overlap isn't a horrible thing) and I 
don't see it as siphoning off resources so not sure of the issue. 
 We've become quite wrapped up in projects, programs and the like 
lately and it seems to hinder forward progress more than anything else.


I'm also not convinced that Tempest is where all things belong, in 
fact I've been thinking more and more that a good bit of what Tempest 
does today should fall more on the responsibility of the projects 
themselves.  For example functional testing of features etc, ideally 
I'd love to have more of that fall on the projects and their 
respective teams.  That might even be something as simple to start as 
saying "if you contribute a new feature, you have to also provide a 
link to a contribution to the Tempest test-suite that checks it". 
 Sort of like we do for unit tests, cross-project tracking is 
difficult of course, but it's a start.  The other idea is maybe 
functional test harnesses live in their respective projects.


Honestly I think who better to write tests for a project than the 
folks building and contributing to the project.  At some point IMO the 
QA team isn't going to scale.  I wonder if maybe we should be thinking 
about proposals for delineating responsibility and goals in terms of 
functional testing?




All good points. Your last paragraph was discussed by the QA team 
leading up to and at the Atlanta summit. The conclusion was that the 
api/functional tests focused on a single project should be part of that 
project. As Sean said, we can envision there being half (or some other 
much smaller number) as many such tests in tempest going forward.


Details are under discussion, but the way this is likely to play out is 
that individual projects will start by creating their own functional 
tests outside of tempest. Swift already does this and neutron seems to 
be moving in that direction. There is a spec to break out parts of 
tempest 
(https://github.com/openstack/qa-specs/blob/master/specs/tempest-library.rst) 
into a library that might be used by projects implementing functional 
tests.


Once a project has "sufficient" functional testing, we can consider 
removing its api tests from tempest. This is a bit tricky because 
tempest needs to cover *all* cross-project interactions. In this 
respect, there is no clear line in tempest between scenario tests which 
have this goal explicitly, and api tests which may also involve 
interactions that might not be covered in a scenario. So we will need a 
principled way to make sure there is complete cross-project coverage in 
tempest with a smaller number of api tests.


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] devstack local.conf file

2014-08-11 Thread Nikesh Kumar Mahalka
Hi,
I have gone through the devstack links.
They are not as clear as the openstack.org documents.


For example:
when I am using the below local.conf file in devstack, "hp_msa_driver" is not
showing up in "enabled_backends" in cinder.conf after running stack.sh.

[[local|localrc]]
ADMIN_PASSWORD=vedams123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.170/29
HOST_IP=192.168.2.151
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
TEMPEST_VOLUME_DRIVER=hp_msa_fc
TEMPEST_VOLUME_VENDOR="Hewlett-Packard"
TEMPEST_STORAGE_PROTOCOL=FC


[[post-config|$CINDER_CONF]]
[hp_msa_driver]
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip = 192.168.2.192
san_login = manage
san_password =!manage
volume_backend_name=HPMSA_FC


[lvmdriver-1]
volume_group=stack-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI



*I am getting below cinder.conf file after running stack.sh script*

[keystone_authtoken]
auth_uri = http://192.168.2.151:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = vedams123
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.151:35357

[DEFAULT]
rabbit_password = vedams123
rabbit_hosts = 192.168.2.151
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
*default_volume_type = lvmdriver-1*
*enabled_backends = lvmdriver-1*
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://root:vedams123@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.151
verbose = True
debug = True
auth_strategy = keystone

[lvmdriver-1]
volume_group = stack-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[hp_msa_driver]
volume_backend_name = HPMSA_FC
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver



*Then I analyzed the source code of stack.sh and added this line in local.conf:*
*CINDER_ENABLED_BACKENDS=hp_msa:hp_msa_driver,lvm:lvmdriver-1*


Now I am getting hp_msa_fc in enabled_backends in cinder.conf



Regards
Nikesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Doug Wiegley
Hi all,

> Validations such as "timeout > delay" should be performed on the API
>level before it reaches the driver.
For a configuration tree (lb, listeners, pools, etc.), there should be one
provider.

You're right, but I think the point of Vijay's example was to highlight
the combo error problem with populating all of the driver objects at once
(in short, the driver interface isn't well suited to that model.)  That
his one example can be covered by API validators is irrelevant.  Consider
a backend that does not support APP_COOKIE's, or HTTPS_TERMINATED (but has
multiple listeners) instead.  Should the entire load balancer create fail,
or should it offer degraded service?  Do all drivers have to implement a
transaction rollback; wait, the interface makes that very hard.  That's
his point.  The driver is no longer just glue code between interfaces;
it's now a mini-object error handler.


> Having provider defined in multiple places does not make sense.

Channeling Brandon, who can yell if I get this wrong, the point is not to
have a potentially different provider on each object.  It's to allow a
provider to be assigned when the first object in the tree is created, so
that future related objects will always get routed to the same provider.
Not knowing which provider should get all the objects is why we have to
wait until we see a LoadBalancer object.


All of this sort of edge case nonsense is because we (the royal we, the
community), wanted all load balancer objects to be "root" objects, even
though only one of them is an actual root today, to support many-to-many
relationships among all of them, at some future date, without an interface
change.  If my bias is showing that I'm not a fan of adding this
complexity for that, I'm not surprised.

Thanks,
doug


On 8/11/14, 7:57 AM, "Samuel Bercovici"  wrote:

>Hi,
> 
>Validations such as "timeout > delay" should be performed on the API
>level before it reaches the driver.
> 
>For a configuration tree (lb, listeners, pools, etc.), there should be
>one provider.
>
>Having provider defined in multiple places does not make sense.
> 
> 
>-Sam.
> 
> 
>From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
>
>Sent: Monday, August 11, 2014 2:43 PM
>To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
>Subject: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver
>interface on every API request"
>
>
> 
>Hi:
> 
>Continuing from last week's LBaaS meeting…
> 
>Currently an entity cannot be sent to driver unless it is linked to
>loadbalancer because loadbalancer is the root object and driver
>information is only available with loadbalancer.
>
> 
>The request to the driver is delayed because of which error propagation
>becomes tricky.
> 
>Let's say a monitor was configured with timeout > delay there would be no
>error then.
>When a listener is configured there will be a monitor creation/deployment
>error like "timeout configured greater than delay".
> 
>Unless the error is very clearly crafted the user won't be able to
>understand the error.
> 
>I am half-heartedly OK with current approach.
>
> 
>But, I would prefer Brandon's Solution - make provider an attribute in
>each of the entities to get rid of this problem.
>
> 
>What do others think?
> 
>Thanks,
>Vijay V.
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling driver interface on every API request"

2014-08-11 Thread Brandon Logan
Yeah what I meant was the only solution I could come up with so that the
driver gets passed every call is to have every entity have a
provider.  I do believe this is a bit cumbersome for a user, and extra
validation would be needed to verify that two entities linked together
cannot have different providers, but that would be pretty easy.  In my
opinion, it'd be a bit weird to have them all have a provider.  However,
there are some pros to it such as:

1) Driver always gets the create, update, and delete calls.
2) If drivers support a varying range of values for certain entity
attributes, that validation can be caught immediately if thats something
people wanted.
3) Will remove the necessity of a DEFERRED status for some drivers
(This also brings up a CON, in that some drivers may use DEFERRED and
some may not, which leads to an inconsistent UX).
4) Status management in some drivers will become a bit easier.

Still I don't think it is something that should be done because having
the user give a provider for every entity is a bit cumbersome.  Though
if enough people want this then a larger discussion about it should
probably happen.
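
For illustration, the extra validation mentioned above could be roughly the following sketch (plain Python; names are made up):

def validate_same_provider(child, parent):
    # Reject linking two entities (e.g. a listener to a loadbalancer)
    # that were created with different providers.
    child_provider = child.get('provider')
    parent_provider = parent.get('provider')
    if child_provider and parent_provider and child_provider != parent_provider:
        raise ValueError("linked entities must use the same provider: "
                         "%s != %s" % (child_provider, parent_provider))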

Thanks,
Brandon

On Mon, 2014-08-11 at 13:57 +, Samuel Bercovici wrote:
> Hi,
> 
>  
> 
> Validations such as “timeout > delay” should be performed on the API
> level before it reaches the driver.
> 
>  
> 
> For a configuration tree (lb, listeners, pools, etc.), there should be
> one provider. 
> 
> Having provider defined in multiple places does not make sense.
> 
>  
> 
>  
> 
> -San.
> 
>  
> 
>  
> 
> From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
> Sent: Monday, August 11, 2014 2:43 PM
> To: OpenStack Development Mailing List
> (openstack-dev@lists.openstack.org)
> Subject: [openstack-dev] [Neutron][LBaaS] Continuing on "Calling
> driver interface on every API request"
> 
> 
>  
> 
> Hi:
> 
>  
> 
> Continuing from last week’s LBaaS meeting…
> 
>  
> 
> Currently an entity cannot be sent to driver unless it is linked to
> loadbalancer because loadbalancer is the root object and driver
> information is only available with loadbalancer. 
> 
>  
> 
> The request to the driver is delayed because of which error
> propagation becomes tricky.
> 
>  
> 
> Let’s say a monitor was configured with timeout > delay there would be
> no error then.
> 
> When a listener is configured there will be a monitor
> creation/deployment error like “timeout configured greater than
> delay”.
> 
>  
> 
> Unless the error is very clearly crafted the user won’t be able to
> understand the error.
> 
>  
> 
> I am half-heartedly OK with current approach. 
> 
>  
> 
> But, I would prefer Brandon’s Solution – make provider an attribute in
> each of the entities to get rid of this problem. 
> 
>  
> 
> What do others think?
> 
>  
> 
> Thanks,
> 
> Vijay V.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] 6.0 blueprints cleanup

2014-08-11 Thread Dmitry Borodaenko
All,

Please refer to this email for a list of information that has to be
populated in a blueprint before it can be assigned and scheduled:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/042042.html

Please don't schedule a blueprint to 6.0 until it has these details
and its assignees are confirmed.

On Mon, Aug 11, 2014 at 8:30 AM, Dmitry Pyzhov  wrote:
> We've moved all blueprints from 6.0 to 'next' milestone. It has been done in
> order to get a better view of the features we really want to implement in 6.0.
>
> Feature freeze for 6.0 release is planned to 18th of September. If you are
> going to merge your blueprint before that date, you can move it to 6.0
> milestone and 6.0.x series. But blueprint must have fixed scope and must be
> assigned to person who will lead this activity.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 6.0 blueprints cleanup

2014-08-11 Thread Dmitry Pyzhov
We've moved all blueprints from 6.0 to 'next' milestone. It has been done
in order to get a better view of the features we really want to implement in 6.0.

Feature freeze for 6.0 release is planned to 18th of September. If you are
going to merge your blueprint before that date, you can move it to 6.0
milestone and 6.0.x series. But blueprint must have fixed scope and must be
assigned to person who will lead this activity.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2014-08-11 Thread Swartzlander, Ben
I just saw the agenda for tomorrow's TC meeting and we're on it. I plan to be 
there.

https://wiki.openstack.org/wiki/Meetings#Technical_Committee_meeting

-Ben


From: Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com]
Sent: Monday, July 28, 2014 9:53 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Manila] Incubation request

Manila has come a long way since we proposed it for incubation last autumn. 
Below are the formal requests.

https://wiki.openstack.org/wiki/Manila/Incubation_Application
https://wiki.openstack.org/wiki/Manila/Program_Application

Anyone have anything to add before I forward these to the TC?

-Ben Swartzlander

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-11 Thread Robert Kukura


On 8/11/14, 4:52 AM, Thierry Carrez wrote:

gustavo panizzo (gfa) wrote:

only one thing i didn't like about it:

why all url,api, etc has to include the word 'preview'?
i imagine that i would be consuming the new feature using heat, puppet,
local scripts, custom horizon, whatever. Why do you make me to change
all them when the feature moves out of preview? it could be a lot of
rework (for consumers) without gain. I would totally support other big
fat warnings everywhere (logs, documentation, startup log of
neutron-server) but don't change the API if is not necessary

I see two issues with this proposal: the first one is what Gustavo just
said: the use of the "preview" package/configoption/API creates friction
when the feature needs to go mainstream (package renaming, change in
configuration for deployers, change in API calls for users...).

Hi Thierry,

I completely agree with you and with Gustavo that "mangling" the REST 
URIs to include "preview" may have more cost (i.e. friction when the API 
becomes stable) than benefit. I'm happy to drop that particular part of 
the proposal. The email was intended to kick off discussion of these 
sorts of details.


My understanding is that the goal is to make it easy for people to "try"
the preview feature, and keeping the experimental feature in-tree is
seen as simpler to experiment with. But the pain from this friction imho
outweighs the pain of deploying an out-of-tree plugin for deployers.
I agree out-of-tree is a better option for truly experimental features. 
This in-tree stabilization is intended for a beta release, as opposed to 
a prototype.


The second issue is that once the feature is in "preview" in tree, it
moves the responsibility/burden of making it official (or removed) to
the core developers (as Salvatore mentioned). I kind of like the
approach where experimental features are developed in faster iterations
out-of-tree and when they are celebrated by experimenters and are ready
to be stable-supported, they are moved in tree.
I don't think we are really disagreeing here. There are clearly 
situations where rapid iteration out-of-tree, without the burden of the 
core review process, is most appropriate. But this proposal is intended 
for features that are on the cusp of being declared stable, rather than 
for experimentation. The intent is absolutely to have all changes to the 
code go through the regular core review process during this 
stabilization phase. This enables the feature to be fully reviewed and 
integrated (also including CLIs, Horizon and Heat support, 
documentation, etc.) at the point when the decision is made that no 
further incompatible API changes will be needed. Once the feature is 
declared stable, from the end-user perspective, it's just a matter of 
removing the "preview" label. Moving the feature's code from the preview 
subtree to its normal locations in the tree will not effect most users 
or operators.


Note that the GBP team had implemented a proof-of-concept prior to the 
start of the Juno cycle out-of-tree. Our initial plan was to get this 
PoC code reviewed and merged at the beginning of Juno, and then 
iteratively improve it throughout the cycle. But we got a lot of 
resistance to the idea of merging a large body of code that had been 
developed outside the Neutron development and review process. We've 
instead had to break it into multiple pieces, and make sure each of 
those is production ready, to have any chance of getting through the 
review process during Juno.  It's not really clear that something 
significant developed externally can ever be "moved in tree", at least 
without major upheaval, including incompatible API changes, as it goes 
through the review/merge process.


Finally, consider that many interesting potential features for Neutron 
involve integrations with external back-ends, such as ODL or 
vendor-specific devices or controllers, along with a reference 
implementation that doesn't depend on external systems. To really 
validate that the API, the model, and the driver framework code for a 
new feature are all stable, it is necessary to implement and deploy 
several of these back-end integrations along with the reference 
implementation. But vendors may not be willing to invest substantially 
in this integration effort without the assurances about the quality and 
relative stability of the interfaces involved that comes from the core 
review process, and without the clear path to market that comes with 
in-tree development of an approved blueprint targeted at a specific 
Neutron release.


-Bob


Regards,




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-11 Thread Zane Bitter

On 11/08/14 10:46, Clint Byrum wrote:

Right now we're stuck with an update that just doesn't work. It isn't
just about update-failure-recovery, which is coming along nicely, but
it is also about the lack of signals to control rebuild, poor support
for addressing machines as groups, and unacceptable performance in
large stacks.


Are there blueprints/bugs filed for all of these issues?

-ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Jay Pipes

On 08/11/2014 11:06 AM, Dan Smith wrote:

As the person who -2'd the review, I'm thankful you raised this issue on
the ML, Jay. Much appreciated.


The "metadetails" term isn't being invented in this patch, of course. I
originally complained about the difference when this was being added:

https://review.openstack.org/#/c/109505/1/nova/api/openstack/compute/contrib/server_groups.py,cm

As best I can tell, the response in that patch set about why it's being
translated is wrong (backwards). I expect that the API extension at the
time called it "metadetails" and they decided to make the object the
same and do the translation there.

 From what I can tell, the actual server_group API extension that made it
into the tree never got the ability to set/change/etc the
metadata/metadetails anyway, so there's no reason (AFAICT) to add it in
wrongly.

If we care to have this functionality, then I propose we change the
attribute on the object (we can handle this with versioning) and reflect
it as "metadata" in the API.

However, I have to ask: do we really need another distinct metadata
store attached to server_groups?


No.

> If not, how about we just remove it

from the database and the object, clean up the bit of residue that is
still in the API extension and be done with it?


+1

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-11 Thread Dan Smith
> As the person who -2'd the review, I'm thankful you raised this issue on
> the ML, Jay. Much appreciated.

The "metadetails" term isn't being invented in this patch, of course. I
originally complained about the difference when this was being added:

https://review.openstack.org/#/c/109505/1/nova/api/openstack/compute/contrib/server_groups.py,cm

As best I can tell, the response in that patch set about why it's being
translated is wrong (backwards). I expect that the API extension at the
time called it "metadetails" and they decided to make the object the
same and do the translation there.

From what I can tell, the actual server_group API extension that made it
into the tree never got the ability to set/change/etc the
metadata/metadetails anyway, so there's no reason (AFAICT) to add it in
wrongly.

If we care to have this functionality, then I propose we change the
attribute on the object (we can handle this with versioning) and reflect
it as "metadata" in the API.
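
For what it's worth, the rename itself looks cheap. A rough plain-Python sketch of the idea (illustrative only, not the actual NovaObject machinery, and the version number is hypothetical):

class InstanceGroup(object):
    # Pretend 1.7 is the version that renames 'metadetails' to 'metadata'.
    VERSION = '1.7'

    def __init__(self, metadata=None):
        self.metadata = metadata or {}

    def obj_make_compatible(self, primitive, target_version):
        # Older callers still expect 'metadetails', so translate the field
        # back when downgrading the primitive for them.
        major, minor = (int(v) for v in target_version.split('.'))
        if (major, minor) < (1, 7) and 'metadata' in primitive:
            primitive['metadetails'] = primitive.pop('metadata')
        return primitive

The API extension would then only ever expose "metadata", and the translation would matter only for older RPC clients.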

However, I have to ask: do we really need another distinct metadata
store attached to server_groups? If not, how about we just remove it
from the database and the object, clean up the bit of residue that is
still in the API extension and be done with it?

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >