Re: [openstack-dev] [all] the trouble with names

2016-02-08 Thread michael mccune

On 02/05/2016 04:36 PM, Chris Dent wrote:

On Fri, 5 Feb 2016, Sean Dague wrote:

I'd ask for folks to try to stay on the original question:

What possible naming standards could we adopt, and what are the
preferences.


1. service type as described in option 2
2. managed by the api-wg with appeal to the TC
3. be limited to those services that (wish to/need to/should) be present
in the service catalog

As discussed elsewhere, 3 is a feature, not a bug. We should endeavor
to keep the service catalog clean, tidy, and limited, as a control that
keeps OpenStack itself clean, tidy, and limited.


I think this sounds very reasonable.

regards,
mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-08 Thread Monty Taylor

On 02/08/2016 09:47 AM, Sean M. Collins wrote:

Hi,

With the path_mtu issue - our default was to set path_mtu to zero, and
do no calculation against the physical segment MTU and the overhead for
the tunneling protocol that was selected for a tenant network, which
meant the network would break.

I am working on patches to change our behavior to set the MTU to 1500 by
default[1], so that at least our out of the box experience is more
sensible.

This brings me to the csum feature of recent linux kernel versions and
related network components.

Patches:

https://review.openstack.org/#/c/220744/
https://review.openstack.org/#/c/261409/

Bugs/RFEs:

https://bugs.launchpad.net/neutron/+bug/1515069
https://bugs.launchpad.net/neutron/+bug/1492111

Basically, we see that enabling the csum feature creates the conditions
where a 10gig link was able to be fully utilized[2] in one instance[3]. My
thinking is - yes, I too would like to fully utilize the links that I
paid good money for. Someone with more knowledge can correct me, but is
there any reason not to enable this feature? If your hardware supports
it, we should utilize it. If your hardware doesn't support it, then we
shouldn't.

tl;dr - why do we keep merging features that create more knobs that
deployers and deployment tools need to keep turning? The fact that we
merge features that are disabled by default means that they are not as
thoroughly tested as features that are enabled by default.

Neutron should have a lot of things enabled by default that improve
performance (l2pop? path_mtu? dvr?), and by itself, try and enable these
features. If for some reason the hardware doesn't support it, log that
it wasn't successful and then disable.


YES

There should not be an option labeled "go-fast" ... the only reason to 
have an option at all is if there is a valid reason for turning it off 
(like cards that have buggy checksums that you need to disable/ignore on 
the hardware side), and the only reason to leave such an option defaulting 
in the slow position is if the failure mode is one that can't be 
adequately tested at runtime and where failure could lead to 
corruption/data loss.



OK - that's it for me. Thanks for reading. I'll put on my asbestos
undies now.


[1]: https://review.openstack.org/#/c/276411/
[2]: http://openvswitch.org/pipermail/dev/2015-August/059335.html

[3]: Yes, it's only one data point




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-08 Thread Sean M. Collins
Hi,

With the path_mtu issue - our default was to set path_mtu to zero, and
do no calculation against the physical segment MTU and the overhead for
the tunneling protocol that was selected for a tenant network, which
meant the network would break.

I am working on patches to change our behavior to set the MTU to 1500 by
default[1], so that at least our out of the box experience is more
sensible.
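For illustration, here is a rough sketch (Python, not Neutron code) of the
calculation that a path_mtu of zero skips entirely; the overhead values are
approximate assumptions for an IPv4 underlay without VLAN tags:

TUNNEL_OVERHEAD = {
    'vxlan': 50,  # outer IPv4 20 + UDP 8 + VXLAN 8 + inner Ethernet 14
    'gre': 42,    # outer IPv4 20 + GRE with key 8 + inner Ethernet 14
    'vlan': 4,    # 802.1q tag
}

def tenant_mtu(physical_segment_mtu, network_type):
    # path_mtu == 0 meant "do no calculation at all", which is the broken
    # default described above.
    if not physical_segment_mtu:
        return 0
    return physical_segment_mtu - TUNNEL_OVERHEAD.get(network_type, 0)

# tenant_mtu(1500, 'vxlan') -> 1450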

This brings me to the csum feature of recent linux kernel versions and
related network components.

Patches:

https://review.openstack.org/#/c/220744/
https://review.openstack.org/#/c/261409/

Bugs/RFEs:

https://bugs.launchpad.net/neutron/+bug/1515069
https://bugs.launchpad.net/neutron/+bug/1492111

Basically, we see that enabling the csum feature creates the conditions
where a 10gig link was able to be fully utilized[2] in one instance[3]. My
thinking is - yes, I too would like to fully utilize the links that I
paid good money for. Someone with more knowledge can correct me, but is
there any reason not to enable this feature? If your hardware supports
it, we should utilize it. If your hardware doesn't support it, then we
shouldn't.
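For reference, the knob in question ultimately boils down to a tunnel option
on the OVS interface. A minimal sketch of setting it (the port name is made
up for the example, and this only flips the OVSDB setting; it does not tell
you whether the NIC can actually offload the checksum):

import subprocess

def enable_tunnel_csum(port='vxlan-0a000001'):
    # "options:csum=true" is the tunnel option the patches above are about.
    subprocess.check_call(
        ['ovs-vsctl', 'set', 'Interface', port, 'options:csum=true'])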

tl;dr - why do we keep merging features that create more knobs that
deployers and deployment tools need to keep turning? The fact that we
merge features that are disabled by default means that they are not as
thoroughly tested as features that are enabled by default.

Neutron should have a lot of things enabled by default that improve
performance (l2pop? path_mtu? dvr?), and by itself, try and enable these
features. If for some reason the hardware doesn't support it, log that
it wasn't successful and then disable.
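In sketch form (Python, purely illustrative; nothing like probe() exists as
a Neutron API today), the behavior being argued for looks like:

import logging

LOG = logging.getLogger(__name__)

def enable_if_supported(feature_name, probe, apply_feature):
    # Try to turn the feature on; if the probe says the kernel/NIC cannot
    # do it, log loudly and fall back instead of failing the agent.
    if probe():
        apply_feature()
        LOG.info("%s enabled", feature_name)
        return True
    LOG.warning("%s not supported on this host; running without it",
                feature_name)
    return False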

OK - that's it for me. Thanks for reading. I'll put on my asbestos
undies now.


[1]: https://review.openstack.org/#/c/276411/
[2]: http://openvswitch.org/pipermail/dev/2015-August/059335.html

[3]: Yes, it's only one data point

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Fwd: Need to integrate radius server with OpenStack

2016-02-08 Thread Steve Martinelli
Not sure if this helps, but at one point we had someone propose install
instructions for moonshot and apache:
https://review.openstack.org/#/c/163878/10/doc/source/extensions/abfab.rst

Steve



From:   "Van Leeuwen, Robert" 
To: pratik dave 
Cc: "openstack@lists.openstack.org" 
Date:   2016/02/08 04:01 AM
Subject:Re: [Openstack] Fwd: Need to integrate radius server with
OpenStack



>Now we are stuck at this point how to authenticate users via free radius.
>Any help or pointers on this would be grateful.



Hi Pratik,


You can write your own keystone middleware to authenticate with.

There is a nice doc about that here:
http://docs.openstack.org/developer/keystone/external-auth.html

Note that if you use external_auth as in the example it will only take over
the authentication:
The user will still need to exist in keystone and roles need to be assigned
in the keystone backend.

For a "fully integrated" solution you will have to look at LDAP afaik.
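For what it's worth, a minimal sketch of the "write your own middleware"
idea (purely illustrative: the header names and check_radius() are
placeholders, not a real API; keystone's external auth then maps
REMOTE_USER to a user that already exists in keystone, as the doc above
describes):

def check_radius(username, password):
    # Placeholder: validate the credentials against FreeRADIUS with
    # whatever RADIUS client library you prefer.
    raise NotImplementedError

class RadiusAuthMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        username = environ.get('HTTP_X_AUTH_USER')
        password = environ.get('HTTP_X_AUTH_PASSWORD')
        if username and check_radius(username, password):
            # Roles/assignments still have to exist in the keystone backend.
            environ['REMOTE_USER'] = username
        return self.app(environ, start_response)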

Cheers,
Robert van Leeuwen
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-08 Thread Igor Kalnitsky
Hey Fuelers,

When are we going to enable it? I think since HCF has passed for
stable/8.0, it's time to enable task-based deployment for the master
branch.

Opinion?

- Igor

On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya  wrote:
> On 02.02.2016 17:35, Alexey Shtokolov wrote:
>> Hi Fuelers!
>>
>> As you may be aware, since [0] Fuel has implemented a new orchestration
>> engine [1]
>> We switched the deployment paradigm from role-based (aka granular) to
>> task-based and now Fuel can deploy all nodes simultaneously using
>> cross-node dependencies between deployment tasks.
>
> That is great news! Please do not forget about docs updates as well.
> Those docs are always forgotten like poor orphans... I submitted a patch
> [0] to MOS docs, please review and add more details, if possible, for
> plugins impact as well.
>
> [0] https://review.fuel-infra.org/#/c/16509/
>
>>
>> This feature is experimental in Fuel 8.0 and will be enabled by default
>> for Fuel 9.0
>>
>> Allow me to show you the results. We made some benchmarks on our bare
>> metal lab [2]
>>
>> Case #1. 3 controllers + 7 computes w/ ceph.
>> Task-based deployment takes *~38* minutes vs *~1h15m* for granular (*~2*
>> times faster)
>> Here and below the deployment time is average time for 10 runs
>>
>> Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
>> Task-based deployment takes *~41* minutes vs *~1h32m* for granular
>> (*~2.24* times faster)
>>
>>
>>
>> Also we took measurements for Fuel CI test cases. Standard BVT (Master
>> node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs on one host)
>>
>> Fuel CI slaves with *4 *cores *~1.1* times faster
>> In case of 4 cores for 7 VMs they are fighting for CPU resources and it
>> marginalizes the gain of task-based deployment
>>
>> Fuel CI slaves with *6* cores *~1.6* times faster
>>
>> Fuel CI slaves with *12* cores *~1.7* times faster
>
> These are really outstanding results!
> (tl;dr)
> I believe the next step may be to leverage the "external install & svc
> management" feature (example [1]) of the Liberty release (7.0.0) of
> Puppet-Openstack (PO) modules. So we could use separate concurrent
> cross-depends based tasks *within a single node* as well, like:
> - task: install_all_packages - a singleton task for a node,
> - task: [configure_x, for each x] - concurrent for a node,
> - task: [manage_service_x, for each x] - some may be concurrent for a
> node, while another shall be serialized.
>
> So, one might use the "--tags" separator for concurrent puppet runs to
> make things go even faster, for example:
>
> # cat test.pp
> notify {"A": tag => "a" }
> notify {"B": tag => "b" }
>
> # puppet apply test.pp
> Notice: A
> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> Notice: B
> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
>
> # puppet apply test.pp --tags a
> Notice: A
> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
>
> # puppet apply test.pp --tags a & puppet apply test.pp --tags b
> Notice: B
> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
> Notice: A
> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
>
> Which is supposed to be faster, although not for this example.
>
> [1] https://review.openstack.org/#/c/216926/3/manifests/init.pp
>
>>
>> You can see additional information and charts in the presentation [3].
>>
>> [0]
>> - 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082093.html
>> [1]
>> - 
>> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/task-based-deployment-mvp.html
>> [2] -  3 x HP ProLiant DL360p Gen8 (XeonE5 6 cores/64GB/SSD)  + 7 x HP
>> ProLiant DL320p Gen8 (XeonE3 4 cores/8-16GB/HDD)
>> [3] -
>> https://docs.google.com/presentation/d/1jZCFZlXHs_VhjtVYS2VuWgdxge5Q6sOMLz4bRLuw7YE
>>
>> ---
>> WBR, Alexey Shtokolov
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-08 Thread Pavel Bondar
On 06.02.2016 00:03, Salvatore Orlando wrote:
>
>
> On 5 February 2016 at 17:58, Neil Jerram wrote:
>
> On 05/02/16 16:31, Pavel Bondar wrote:
> > On 05.02.2016 12:28, Salvatore Orlando wrote:
> >>
> >>
> >> On 5 February 2016 at 04:12, Armando M. wrote:
> >>
> >>
> >>
> >> On 4 February 2016 at 08:22, John Belamaric
> >> <jbelama...@infoblox.com> wrote:
> >>
> >>
> >> > On Feb 4, 2016, at 11:09 AM, Carl Baldwin wrote:
> >> >
> >> > On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar wrote:
> >> >> I am trying to bring more attention to [1] to make a final decision
> >> >> on the approach to use.
> >> >> There are a few points that are not 100% clear for me at this point.
> >> >>
> >> >> 1) Do we plan to switch all current clouds to the pluggable ipam
> >> >> implementation in Mitaka?
>
> I possibly shouldn't comment at all, as I don't know the history, and
> wasn't around when the fundamental design decisions here were
> being made.
>
> However, it seems a shame to me that this was done in a way that needs a
> DB migration at all.  (And I would have thought it possible for the
> default pluggable IPAM driver to use the same DB state as the
> non-pluggable IPAM backend, given that it is delivering the same
> semantics.)  Without that, I believe it should be a no-brainer to switch
> unconditionally to the pluggable IPAM backend.
>
>
> This was indeed the first implementation attempt that we made, but it
> failed spectacularly as we wrestled with different foreign key
> relationships in the original and new model.
> Pavel has all the details, but after careful considerations we decided
> to adopt a specular model with different db tables.
Yeah, we had this chicken-and-egg problem on subnet creation.

On the one hand, the ipam driver's create_subnet has to be called *before*
creating the neutron subnet, because for an AnySubnetRequest the ipam
driver is responsible for selecting the cidr for the subnet.

On the other hand, during the ipam driver's create_subnet call the
availability ranges have to be created, but they are linked to the neutron
subnet using a foreign key (with allocation pools in the middle). So
availability ranges could not be created before the neutron subnet due to
the FK constraint in the old tables.

To solve this chicken-and-egg problem it was decided to use tables for the
reference driver that have no FK to the neutron subnet. That allowed
calling the ipam driver's create_subnet (and creating availability ranges)
before creating the neutron subnet.
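A stripped-down sketch of that difference (SQLAlchemy, with table and column
names simplified; treat it as illustrative rather than the actual Neutron
schema):

from sqlalchemy import Column, ForeignKey, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

# Old model: availability data hangs off the neutron subnet via FKs, so it
# cannot be written before the subnet row exists.
class Subnet(Base):
    __tablename__ = 'subnets'
    id = Column(String(36), primary_key=True)

class AllocationPool(Base):
    __tablename__ = 'ipallocationpools'
    id = Column(String(36), primary_key=True)
    subnet_id = Column(String(36), ForeignKey('subnets.id'))

# Pluggable-IPAM model: its own subnet table with a plain column pointing
# at the neutron subnet (no FK), so the driver can write it first and the
# neutron subnet can be created afterwards.
class IpamSubnet(Base):
    __tablename__ = 'ipamsubnets'
    id = Column(String(36), primary_key=True)
    neutron_subnet_id = Column(String(36))  # intentionally not a ForeignKey

Base.metadata.create_all(create_engine('sqlite://'))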
>  
>
>
> Sorry if that's unhelpful...
>
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Fausto Marzi
On Mon, Feb 8, 2016 at 11:19 AM, Jay Pipes  wrote:

> On 02/08/2016 10:29 AM, Sean Dague wrote:
>
>> On 02/08/2016 10:07 AM, Thierry Carrez wrote:
>>
>>> Brian Curtin wrote:
>>>
 On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:

> I would love to see the OpenStack contributor community take back the
> design
> summit to its original format and purpose and decouple it from the
> OpenStack
> Summit's conference portion.
>
> I believe the design summits should be organized by the OpenStack
> contributor community, not the OpenStack Foundation and its marketing
> and
> event planning staff.
>

 As someone who spent years organizing PyCon as a volunteer from the
 Python community, with four of those years in a row taking about 8
 solid months of pre-conference effort, not to mention the on-site
 effort to run a volunteer conference of that size [0]...I would
 suggest even longer and harder thought before stretching a community
 like this even more thinly. Things should change, but probably not the
 "who's doing the work" aspect.

>>>
>>> Beyond stretching out the community, we would end up with the same
>>> problem we are trying to solve. Most of the cross-project folks that
>>> would end up organizing the event would be too busy organizing the event
>>> to be able to fully participate in it.
>>>
>>
>> Right, this is a super key point. Even just organizing and running local
>> user groups, I know how much time is spent making sure the whole thing
>> seems effortless to attendees, and they can just focus on content.
>>
>> Even look at the recently run Nova midcycle, with 40ish folks, it still
>> required some substantial logistics to pull off. The HPE team did a
>> great job with that. But it definitely required real time and effort.
>>
>
> Agreed.
>
> The Foundation has done an amazing job of making everyone think this is
>> easy (I know how much it is not). Without their efforts organizing these
>> events, eliminating the distractions of wandering in a strange city to
>> find lunch, having a network, projectors, access to facilities,
>> appropriate sized spaces, double checking all those things will really
>> actually be there, chasing after folks when they are not, handling the
>>> myriad of other unforeseen issues that you never have to see, we would
>> not be nearly as productive at the design summits.
>>
>
> I understand this. I ran the MySQL Users Conference and Expo for 2 years.
> I realize the amount of effort it takes to organize a 2500+ person event.
> It's essentially a full-time job.
>
> I suppose I should have used a different wording. What I really think
> should happen is that a *separate* team, not the main team that does the
> marketing event, should handle organizing the developer-focused working
> events. I recognize that it's a lot of work and that asking the "community"
> to just handle the working event organization will lead to undue burden on
> certain cross-project folks.
>
> However, here are a couple things that do *not* need to be done by a
> separate team that handles working event organization:
>
> 1) Vendor and sponsorship stuff
> 2) A call for speakers and reviewing thousands of submissions (this is
> self-organized by each project's contributor team for the working events)
> 3) Determining keynote slots and wrangling C-level speakers -- or any
> speaker wrangling at all
> 4) "Check-in" and registration stands
> 5) Dealing with schwag, giveaways, parties, and other superfluous stuff
>
> So, yes, while it's a lot of work, it's not the same kind of work as the
> marketing event staff.
>
> So while I agree it's worth considering whether the Mega Conference and
>> Design Summit should continue to be collocated and on the same time
>> table, I think the idea that the Design Summit, at even only 500
>> attendees, could/should be run without the Foundation is just folly
>> based on a lack of understanding for what it takes to do events at that
>> scale.
>>
>
> For the record, I *do* understand what it takes to do events at that scale.
>
> > And massively underestimates the effort and skill the Foundation
>
>> has at making our events run as smoothly as they do.
>>
>
> I wasn't saying anything about the effort and skill the Foundation expends
> on making the marketing events run smoothly.
>
> I am pushing for a return to *working* events for developers.
>
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Reworded like that, with no additional burden on the engineers, and
without taking away the fun part of it, it makes a lot of sense.

So are you proposing to do an engineering hardcore Design only summit, less

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Dmitry Tantsur

On 02/07/2016 09:07 PM, Jay Pipes wrote:

Hello all,

tl;dr
=

I have long thought that the OpenStack Summits have become too
commercial and provide little value to the software engineers
contributing to OpenStack.

I propose the following:

1) Separate the design summits from the conferences
2) Hold only a single OpenStack conference per year
3) Return the design summit to being a low-key, low-cost working event


It sounds like a great idea, but I have a couple of concerns - see below.



details
===

The design summits originally started out as working events. Developers
got together in smallish rooms, arranged chairs in a fishbowl, and got
to work planning and designing.

With the OpenStack Summit growing more and more marketing- and
sales-focused, the contributors attending the design summit are often
unfocused. The precious little time that developers have to actually
work on the next release planning is often interrupted or cut short by
the large numbers of "suits" and salespeople at the conference event,
many of whom are peddling a product or pushing a corporate agenda.

Many contributors submit talks to speak at the conference part of an
OpenStack Summit because their company says it's the only way they will
pay for them to attend the design summit. This is, IMHO, a terrible
thing. The design summit is a *working* event. Companies that contribute
to OpenStack projects should send their engineers to working events
because that is where work is done, not so that their engineer can go
give a talk about some vendor's agenda-item or newfangled product.


I'm afraid that if a company does not value employees' participation in 
the design summit alone, they will continue to send them to the 
conference event, ignoring the design part completely. I.e. we'll get 
even fewer people from these companies. (Of course, this is only me guessing.)


Also, it means that people who actually have to be present in both places 
will travel even more, so it is likely to increase the budget, not 
decrease it.




Part of the reason that companies only send engineers who are giving a
talk at the conference side is that the cost of attending the OpenStack
Summit has become ludicrously expensive. Why have the events become so
expensive? I can think of a few reasons:

a) They are held every six months. I know of no other community or open
source project that holds *conference-type* events every six months.

b) They are held in extremely expensive hotels and conference centers
because the number of attendees is so big.


On one hand, a big +1 for the "extremely expensive" part.

On the other hand, for participants arriving from another continent the 
airfare is roughly half of the whole expense. This probably can't be 
improved (and may actually become worse for some of us, if new events 
become more US-centric).




c) Because the conferences have become sales and marketing-focused
events, companies shell out hundreds of thousands of dollars for schwag,
for rented event people, for food and beverage sponsorships, for keynote
slots, for lavish and often ridiculous parties, and more. This cost
means less money to send engineers to the design summit to do actual work.

I would love to see the OpenStack contributor community take back the
design summit to its original format and purpose and decouple it from
the OpenStack Summit's conference portion.

I believe the design summits should be organized by the OpenStack
contributor community, not the OpenStack Foundation and its marketing
and event planning staff. This will allow lower-cost venues to be chosen
that meet the needs only of the small group of active contributors, not
of huge masses of conference attendees. This will allow contributor
companies to send *more* engineers to *more* design summits, which is
something that really needs to happen if we are to grow our active
contributor pool.

Once this decoupling occurs, I think that the OpenStack Summit should be
renamed to the OpenStack Conference and Expo to better fit its purpose
and focus. This Conference and Expo event really should be held once a
year, in my opinion, and continue to be run by the OpenStack Foundation.

I, for one, would welcome events that have no conference check-in area,
no evening parties with 2000 people, no keynote and
powerpoint-as-a-service sessions, and no getting pulled into sales
meetings.

OK, there, I said it.

Thoughts? Criticism? Support? Suggestions welcome.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-08 Thread Bulat Gaifullin
+1.

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 08 Feb 2016, at 19:05, Igor Kalnitsky  wrote:
> 
> Hey Fuelers,
> 
> When we are going to enable it? I think since HCF is passed for
> stable/8.0, it's time to enable task-based deployment for master
> branch.
> 
> Opinion?
> 
> - Igor
> 
> On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya  
> wrote:
>> On 02.02.2016 17:35, Alexey Shtokolov wrote:
>>> Hi Fuelers!
>>> 
>>> As you may be aware, since [0] Fuel has implemented a new orchestration
>>> engine [1]
>>> We switched the deployment paradigm from role-based (aka granular) to
>>> task-based and now Fuel can deploy all nodes simultaneously using
>>> cross-node dependencies between deployment tasks.
>> 
>> That is great news! Please do not forget about docs updates as well.
>> Those docs are always forgotten like poor orphans... I submitted a patch
>> [0] to MOS docs, please review and add more details, if possible, for
>> plugins impact as well.
>> 
>> [0] https://review.fuel-infra.org/#/c/16509/
>> 
>>> 
>>> This feature is experimental in Fuel 8.0 and will be enabled by default
>>> for Fuel 9.0
>>> 
>>> Allow me to show you the results. We made some benchmarks on our bare
>>> metal lab [2]
>>> 
>>> Case #1. 3 controllers + 7 computes w/ ceph.
>>> Task-based deployment takes *~38* minutes vs *~1h15m* for granular (*~2*
>>> times faster)
>>> Here and below the deployment time is average time for 10 runs
>>> 
>>> Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
>>> Task-based deployment takes *~41* minutes vs *~1h32m* for granular
>>> (*~2.24* times faster)
>>> 
>>> 
>>> 
>>> Also we took measurements for Fuel CI test cases. Standard BVT (Master
>>> node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs on one host)
>>> 
>>> Fuel CI slaves with *4 *cores *~1.1* times faster
>>> In case of 4 cores for 7 VMs they are fighting for CPU resources and it
>>> marginalizes the gain of task-based deployment
>>> 
>>> Fuel CI slaves with *6* cores *~1.6* times faster
>>> 
>>> Fuel CI slaves with *12* cores *~1.7* times faster
>> 
>> These are really outstanding results!
>> (tl;dr)
>> I believe the next step may be to leverage the "external install & svc
>> management" feature (example [1]) of the Liberty release (7.0.0) of
>> Puppet-Openstack (PO) modules. So we could use separate concurrent
>> cross-depends based tasks *within a single node* as well, like:
>> - task: install_all_packages - a singleton task for a node,
>> - task: [configure_x, for each x] - concurrent for a node,
>> - task: [manage_service_x, for each x] - some may be concurrent for a
>> node, while another shall be serialized.
>> 
>> So, one might use the "--tags" separator for concurrent puppet runs to
>> make things go even faster, for example:
>> 
>> # cat test.pp
>> notify {"A": tag => "a" }
>> notify {"B": tag => "b" }
>> 
>> # puppet apply test.pp
>> Notice: A
>> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
>> Notice: B
>> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
>> 
>> # puppet apply test.pp --tags a
>> Notice: A
>> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
>> 
>> # puppet apply test.pp --tags a & puppet apply test.pp --tags b
>> Notice: B
>> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
>> Notice: A
>> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
>> 
>> Which is supposed to be faster, although not for this example.
>> 
>> [1] https://review.openstack.org/#/c/216926/3/manifests/init.pp
>> 
>>> 
>>> You can see additional information and charts in the presentation [3].
>>> 
>>> [0]
>>> - 
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082093.html
>>> [1]
>>> - 
>>> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/task-based-deployment-mvp.html
>>> [2] -  3 x HP ProLiant DL360p Gen8 (XeonE5 6 cores/64GB/SSD)  + 7 x HP
>>> ProLiant DL320p Gen8 (XeonE3 4 cores/8-16GB/HDD)
>>> [3] -
>>> https://docs.google.com/presentation/d/1jZCFZlXHs_VhjtVYS2VuWgdxge5Q6sOMLz4bRLuw7YE
>>> 
>>> ---
>>> WBR, Alexey Shtokolov
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> 
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> 

Re: [Openstack] Howto remove "/horizon" from URL?

2016-02-08 Thread Erdősi Péter

Hi there!
On 2016-02-07 22:33, Mohammed Naser wrote:

python manage.py collectstatic
python manage.py compress

Thanks mate! :)
(maybe my question will be a bit off-topic for the thread, but I think they 
are related)


So, I have not run this yet, but I will... can you maybe tell me which 
directory I should be in when I run those 2 commands?
Google says the py file will be here: 
/usr/share/openstack-dashboard/manage.py

Is it okay if I just run it from /tmp, for example?

Thanks,
 Peter

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [nova][bugs] nova-bugs-team IRC meeting

2016-02-08 Thread Markus Zoeller
John Garbutt  wrote on 01/22/2016 11:55:52 AM:

> From: John Garbutt 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 01/22/2016 11:57 AM
> Subject: Re: [openstack-dev] [nova][bugs] nova-bugs-team IRC meeting
> 
> On 22 January 2016 at 10:08, Markus Zoeller  wrote:
> > The dates and times are final now [1]. They differ from the previously
> > dates in this thread! The first and next meetings will be:
> >
> > Tuesday   2016-02-09   18:00 UTC   #openstack-meeting-4
> > Tuesday   2016-02-23   18:00 UTC   #openstack-meeting-4
> > Tuesday   2016-03-01   10:00 UTC   #openstack-meeting-4
> >
> > I have to cancel the meeting of 2016-02-16 in advance, as I have a
> > medical appointment a day before which will knock me out for the week.
> >
> > The dates make it possible to attend either the "nova-bugs-team"
> > or the "nova-team" of the same week as they take turns in "early"
> > and "late" time of day.
> >
> > The agenda can be found in the wiki [2].
> >
> > See you there!
> >
> > References:
> > [1] http://eavesdrop.openstack.org/#Nova_Bugs_Team_Meeting
> > [2] https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam
> >
> > Regard, Markus Zoeller (markus_z)
> 
> A big thank you for pushing on this.
> 
> As we head into Mitaka-3, post non-priority feature freeze, its a
> great time to push on reviewing bug fixes, and fixing important bugs.
> 
> Thanks,
> johnthetubaguy
> 

This is a short reminder that the first meeting will take place tomorrow:

Tuesday 2016-02-09 18:00 UTC #openstack-meeting-4

Agenda: https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam

Regards, Markus Zoeller (markus_z)


> >
> > Markus Zoeller/Germany/IBM@IBMDE wrote on 01/20/2016 05:14:14 PM:
> >
> >> From: Markus Zoeller/Germany/IBM@IBMDE
> >> To: "OpenStack Development Mailing List \(not for usage questions\)"
> >> 
> >> Date: 01/20/2016 05:27 PM
> >> Subject: Re: [openstack-dev] [nova][bugs] nova-bugs-team IRC meeting
> >>
> >> Due to other meetings which merged since the announcement,
> >> the IRC meeting patch [1] I pushed proposes now:
> >> Tuesday 1000 UTC #openstack-meeting biweekly-odd
> >> Tuesday 1700 UTC #openstack-meeting biweekly-even
> >>
> >> February the 9th at 1000 UTC will be the first kickoff meeting.
> >> I'll have an agenda ready by then [2]. Feel free to ping me in IRC
> >> or here on the ML when you have questions.
> >>
> >> References:
> >> [1] https://review.openstack.org/#/c/270281/
> >> [2] https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam
> >>
> >> Regards, Markus Zoeller (markus_z)
> >>
> >>
> >> Markus Zoeller/Germany/IBM@IBMDE wrote on 01/13/2016 01:24:06 PM:
> >> > From: Markus Zoeller/Germany/IBM@IBMDE
> >> > To: "OpenStack Development Mailing List"
> >> 
> >> > Date: 01/13/2016 01:25 PM
> >> > Subject: [openstack-dev] [nova][bugs] nova-bugs-team IRC meeting
> >> >
> >> > Hey folks,
> >> >
> >> > I'd like to revive the nova-bugs-team IRC meeting. As I want to chair
> >> > those meetings in my "bug czar" role, the timeslots are bound to my
> >> > timezone (UTC+1). The two bi-weekly alternating slots I have in mind
> >> > are:
> >> > * Tuesdays, 10:00 UTC biweekly-odd  (to get folks to the east of me)
> >> > * Tuesdays, 16:00 UTC biweekly-even (to get folks to the west of me)
> >> > By choosing these slots, the concluded action items of this meeting can
> >> > be finished before the next nova meeting of the same week on Thursday.
> >> > The "early" and "late" timeslot is diametrical to the slots of the nova
> >> > meeting to allow you to attend one of those meetings for your timezone.
> >> >
> >> >                      Day        odd week    even week
> >> >                      ---------  ----------  ----------
> >> > nova meeting         Thursday   21:00 UTC   14:00 UTC
> >> > nova bugs meeting    Tuesday    10:00 UTC   16:00 UTC
> >> >
> >> > Let me know if you think these slots are not reasonable. My goal is
> >> > to have the first kick-off meeting on the 9th of February at 10:00 UTC.
> >> >
> >> > The scope of the team meeting:
> >> > * discuss and set the report-priority of bugs since the last meeting
> >> >   if not yet done.
> >> > * decide action items for bug reports which need further attention
> >> > * expire bugs which are hit by the expiration-policy unless someone
> >> >   disagrees.
> >> > * get one or more volunteers for the rotating bug-skimming-duty.
> >> >   Get feedback from the volunteers from the previous week if there
> >> >   are noteworthy items.
> >> > * check if new problem areas are emerging
> >> > * discuss process adjustments/changes if necessary
> >> > * spread knowledge and discuss open points
> >> >
> >> > Review [1] contains my (WIP) proposal of the process I have in mind.
> >> > This still needs consensus, but this is not the focus 

[openstack-dev] [puppet] weekly meeting #69

2016-02-08 Thread Emilien Macchi
Hey, we'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting4.

https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack

As usual, feel free to bring topics in this etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160209

We'll also have open discussion for bugs & reviews, so anyone is welcome
to join.

See you there,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] IRC Meeting tomorrow (2/8) - 0300 UTC

2016-02-08 Thread Gal Sagie
Hello All,

We will have an IRC meeting tomorrow (Tuesday, 2/8) at 0300 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Kuryr

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-02-01-15.00.html

It will also be useful to view the meeting we had last week in
#openstack-kuryr regarding
Kubernetes integration:

http://eavesdrop.openstack.org/irclogs/%23openstack-kuryr/%23openstack-kuryr.2016-02-03.log.html

Please update the agenda if you have any subject you would like to discuss.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread James Bottomley
On Mon, 2016-02-08 at 09:43 -0500, Jay Pipes wrote:
> On 02/08/2016 09:03 AM, Fausto Marzi wrote:
> > The OpenStack Summit is a great thing as it is now. It creates big
> > momentum, it's a strong motivator for the engineers (as enjoy our
> > time
> > there)
> 
> I disagree with you on this. The design summits are intended to be 
> working events, not conference parties.

Having chaired and helped organise the Linux Plumbers conference for
the last 8 years, I don't agree with this.  Agreeable social events are
actually part of the conference process.  Someone who didn't dare
contradict the expert in a high pressure lecture room environment may
feel more confident to have a quiet discussion of their ideas over a
beer/wine at a social event.

Part of the function of a conference in a remote community is to let
people meet each other and get to know the person on the other end of
the fairly impersonal web id.  It also helps defuse community squabbles
and hostility: it's easier to be nasty to someone you've never met and
who your only interaction with is via a ritual communication mechanism.

>  > and the Companies are happy too with the business related side. I
> > see it also as the most successful Team building activity,
> > Community and
> > Company wide.
> 
> This isn't the intent of design summits. It's not intended to be a 
> company team building event.

Hey, if that's how you have to sell it to your boss ...

>  > For Companies, the costs to send engineers to the Summit
> > or to a dedicated Design event are exactly the same.
> 
> This is absolutely not the case. Sending engineers to expensive 
> conference hotels for a full week or more is more expensive than 
> sending engineers to small hotels in smaller cities for shorter 
> amounts of focused time.

How real is this?  Vancouver was a really expensive place, but a lot of
people who were deeply concerned about cost managed to find cheaper
hotels even there.  You can always (or usually) find the option for the
cost conscious if you look.  One of the advantages of large hub cities
is cheaper airfare, which is usually a slightly more significant
component than accommodation.  Once you start looking at "smaller"
cities with only a couple of airlines serving them, you'll find the
travel costs skyrocket.

> > Besides, many Companies send US based employees only to the US Summit,
> > and EU based only to the other side. The OpenStack Summit is probably
> > the most advanced and successful OpenSource event, if you take out of
> > it the engineering side, it won't be the same.
> 
> I don't see the OpenStack Summit as being an advanced event. It has 
> become a vendor-driven suit-fest, IMHO.

Well, if we disdain its content and pull all the engineers away, that's
certainly a self-fulfilling prophecy.  Why not make it our mission to
try and give a more technical talk at the OpenStack summit itself?  I
have ... I think most of the audience actually enjoyed it even if there
were a few suit types who found themselves in the wrong session.  The
design summits are very strictly focussed.  It's actually harder to
give more general technical talks there than it is at the summit
because of the severity of focus.

> > I think, the issue here is that we need to have a better and more
> > productive way to work together. Probably the motivation behind a
> > separate design summit and also this discussion is focused to 
> > improve that, as we see that face to face is effective. Maybe this 
> > is the limitation we need to resolve, rather than changing an 
> > amazing event.
> 
> All I want is to be more productive. In my estimation, the Summits 
> have become vastly less productive than they used to be. Mid-cycles 
> are generally much more productive and much more cost-effective 
> because they don't have the distraction of the Summit party
> atmosphere.

"... because thou art virtuous, there should be no more cakes and ale?"
... you're implying that we all party and forget work because of a
"party atmosphere".  This doesn't accord with my experiences at all.  I
may be less usual than most, but Vancouver was a foodie town ... I
spent all the evenings out to dinner with people I don't normally meet
... I skipped every party including the super special VIP ones (which,
admittedly, I'd intended to go to).  Tokyo was about the same because I
had a lot of people to say "hello" to and it's fun going out for a
Japanese experience.  People who go to the summit to party probably
aren't going to make much of a contribution in a separated design
summit anyway and people who don't can do just as well in either
atmosphere.

> As someone who is responsible for recommending which Mirantis
> engineers go to which events, I strongly favor sending more engineers
> to more focused events at the expense of sending fewer engineers to
> the expensive and unfocused OpenStack Summits.

As long as they mostly go to the associated design summit they're going
to a focussed event.

James



Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-08 Thread Sean M. Collins
Salvatore Orlando wrote:
> Agreed. Operators love to automate things, but they generally don't like
> when components automatically do things they maybe do not expect to do (I
> don't think we should assume all operators fully read release notes). So
> the manual step is preferable, and not that painful after all. From an
> historical perspective, a manual switch was the same approach adopted for
> migration from OVS/LB plugins to ML2.

Honestly, the migration from OVS/LB was not very well done.

https://bugs.launchpad.net/neutron/+bug/1424378
https://bugs.launchpad.net/neutron/+bug/1378732
https://bugs.launchpad.net/neutron/+bug/1332564 (I hit this one
personally)

Please please please please please let's put a lot of effort into making
sure this works. I beg you.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Anita Kuno
On 02/08/2016 11:11 AM, Fausto Marzi wrote:
> On Mon, Feb 8, 2016 at 11:19 AM, Jay Pipes  wrote:
> 
>> On 02/08/2016 10:29 AM, Sean Dague wrote:
>>
>>> On 02/08/2016 10:07 AM, Thierry Carrez wrote:
>>>
 Brian Curtin wrote:

> On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:
>
>> I would love to see the OpenStack contributor community take back the
>> design
>> summit to its original format and purpose and decouple it from the
>> OpenStack
>> Summit's conference portion.
>>
>> I believe the design summits should be organized by the OpenStack
>> contributor community, not the OpenStack Foundation and its marketing
>> and
>> event planning staff.
>>
>
> As someone who spent years organizing PyCon as a volunteer from the
> Python community, with four of those years in a row taking about 8
> solid months of pre-conference effort, not to mention the on-site
> effort to run a volunteer conference of that size [0]...I would
> suggest even longer and harder thought before stretching a community
> like this even more thinly. Things should change, but probably not the
> "who's doing the work" aspect.
>

 Beyond stretching out the community, we would end up with the same
 problem we are trying to solve. Most of the cross-project folks that
 would end up organizing the event would be too busy organizing the event
 to be able to fully participate in it.

>>>
>>> Right, this is a super key point. Even just organizing and running local
>>> user groups, I know how much time is spent making sure the whole thing
>>> seems effortless to attendees, and they can just focus on content.
>>>
>>> Even look at the recently run Nova midcycle, with 40ish folks, it still
>>> required some substantial logistics to pull off. The HPE team did a
>>> great job with that. But it definitely required real time and effort.
>>>
>>
>> Agreed.
>>
>> The Foundation has done an amazing job of making everyone think this is
>>> easy (I know how much it is not). Without their efforts organizing these
>>> events, eliminating the distractions of wandering in a strange city to
>>> find lunch, having a network, projectors, access to facilities,
>>> appropriate sized spaces, double checking all those things will really
>>> actually be there, chasing after folks when they are not, handling the
>>> myriad of other unforeseen issues that you never have to see, we would
>>> not be nearly as productive at the design summits.
>>>
>>
>> I understand this. I ran the MySQL Users Conference and Expo for 2 years.
>> I realize the amount of effort it takes to organize a 2500+ person event.
>> It's essentially a full-time job.
>>
>> I suppose I should have used a different wording. What I really think
>> should happen is that a *separate* team, not the main team that does the
>> marketing event, should handle organizing the developer-focused working
>> events. I recognize that it's a lot of work and that asking the "community"
>> to just handle the working event organization will lead to undue burden on
>> certain cross-project folks.
>>
>> However, here are a couple things that do *not* need to be done by a
>> separate team that handles working event organization:
>>
>> 1) Vendor and sponsorship stuff
>> 2) A call for speakers and reviewing thousands of submissions (this is
>> self-organized by each project's contributor team for the working events)
>> 3) Determining keynote slots and wrangling C-level speakers -- or any
>> speaker wrangling at all
>> 4) "Check-in" and registration stands
>> 5) Dealing with schwag, giveaways, parties, and other superfluous stuff
>>
>> So, yes, while it's a lot of work, it's not the same kind of work as the
>> marketing event staff.
>>
>> So while I agree it's worth considering whether the Mega Conference and
>>> Design Summit should continue to be collocated and on the same time
>>> table, I think the idea that the Design Summit, at even only 500
>>> attendees, could/should be run without the Foundation is just folly
>>> based on a lack of understanding for what it takes to do events at that
>>> scale.
>>>
>>
>> For the record, I *do* understand what it takes to do events at that scale.
>>
>>> And massively underestimates the effort and skill the Foundation
>>
>>> has at making our events run as smoothly as they do.
>>>
>>
>> I wasn't saying anything about the effort and skill the Foundation expends
>> on making the marketing events run smoothly.
>>
>> I am pushing for a return to *working* events for developers.
>>
>> -jay
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> Reworded like that , with no additional burden on 

Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-08 Thread Assaf Muller
I'm generally sympathetic to what you're saying, and I agree that we
need to do something about disabled-by-default features, at the very
least on the testing front. Comments in-line.

On Mon, Feb 8, 2016 at 4:47 PM, Sean M. Collins  wrote:
> Hi,
>
> With the path_mtu issue - our default was to set path_mtu to zero, and
> do no calculation against the physical segment MTU and the overhead for
> the tunneling protocol that was selected for a tenant network, which
> meant the network would break.
>
> I am working on patches to change our behavior to set the MTU to 1500 by
> default[1], so that at least our out of the box experience is more
> sensible.
>
> This brings me to the csum feature of recent linux kernel versions and
> related network components.
>
> Patches:
>
> https://review.openstack.org/#/c/220744/
> https://review.openstack.org/#/c/261409/
>
> Bugs/RFEs:
>
> https://bugs.launchpad.net/neutron/+bug/1515069
> https://bugs.launchpad.net/neutron/+bug/1492111
>
> Basically, we see that enabling the csum feature creates the conditions
> where a 10gig link was able to be fully utilized[2] in one instance[3]. My
> thinking is - yes, I too would like to fully utilize the links that I
> paid good money for. Someone with more knowledge can correct me, but is
> there any reason not to enable this feature? If your hardware supports
> it, we should utilize it. If your hardware doesn't support it, then we
> shouldn't.
>
> tl;dr - why do we keep merging features that create more knobs that
> deployers and deployment tools need to keep turning? The fact that we
> merge features that are disabled by default means that they are not as
> thoroughly tested as features that are enabled by default.

That is partially a testing issue which fullstack is supposed to
solve. We can't afford to set up a job for every combination of
Neutron configuration values, not upstream and not in different
downstream CI environments. Fullstack can test different
configuration knobs quickly, and it's something that a developer can
do on his own without depending on infra changes. It's also easy to
run, and thus easy to iterate.

As for concrete actions, we do have a fullstack test that enables
l2pop and checks connectivity between two VMs on different nodes. It's
the only test that actually covers l2pop at the upstream gate!
It already caught a regression that Armando and I fixed a while ago.
As for DVR, I'm searching for someone to pick up the gauntlet and
contribute some L3 fullstack tests. I'd be more than happy to review
it! I even have an abandoned patch that gets the ball rolling (The
idea is to test L3 east/west, north/south with FIP and north/south
without FIP for all four router types: Legacy, HA, DVR and DVR HA. You
run the same test in four different configurations, fullstack is
basically purpose built for this).
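Not fullstack code, but to make the "same test, four configurations" shape
concrete, a pytest-style sketch (the fixtures deploy_router and boot_vm_pair
are invented for the example):

import pytest

ROUTER_FLAVORS = [
    {'distributed': False, 'ha': False},  # legacy
    {'distributed': False, 'ha': True},   # HA
    {'distributed': True, 'ha': False},   # DVR
    {'distributed': True, 'ha': True},    # DVR + HA
]

@pytest.mark.parametrize('flavor', ROUTER_FLAVORS)
def test_east_west_connectivity(flavor, deploy_router, boot_vm_pair):
    # deploy_router / boot_vm_pair stand in for whatever the framework
    # provides to wire up the topology under the given configuration.
    router = deploy_router(**flavor)
    vm_a, vm_b = boot_vm_pair(router)
    assert vm_a.can_ping(vm_b)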

>
> Neutron should have a lot of things enabled by default that improve
> performance (l2pop? path_mtu? dvr?), and by itself, try and enable these
> features. If for some reason the hardware doesn't support it, log that
> it wasn't successful and then disable.

I don't know if this is what you wanted to talk about (It feels more
like a side note to me, so I'm sorry if I'm about to hijack the
conversation!), but I think that if an admin sets a certain
configuration option, the software should respect it in a predictable
manner. If an agent tries to use a certain config knob and fails, it
should error out (Saying specifically what's wrong), and not disable
the option but keep on living, because that is surprising behavior,
and there's nothing telling the admin that the option he expects to be
on is actually off, until he notices it the hard way some time later.
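In sketch form, that "fail loudly" alternative (names invented, not Neutron
code):

class UnsupportedConfigError(Exception):
    pass

def apply_configured_feature(name, enabled, probe):
    # If the operator asked for it and the host cannot do it, refuse to
    # start rather than silently running with the option off.
    if not enabled:
        return False
    if not probe():
        raise UnsupportedConfigError(
            "%s is enabled in the config but is not supported on this host"
            % name)
    return True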

>
> OK - that's it for me. Thanks for reading. I'll put on my asbestos
> undies now.
>
>
> [1]: https://review.openstack.org/#/c/276411/
> [2]: http://openvswitch.org/pipermail/dev/2015-August/059335.html
>
> [3]: Yes, it's only one data point
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-08 Thread Vladimir Kozhukalov
+1 to enable it ASAP.

It will also affect our deployment tests (~1 hour vs. ~2.5 hours).

Vladimir Kozhukalov

On Mon, Feb 8, 2016 at 7:35 PM, Bulat Gaifullin 
wrote:

> +1.
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> > On 08 Feb 2016, at 19:05, Igor Kalnitsky 
> wrote:
> >
> > Hey Fuelers,
> >
> > When we are going to enable it? I think since HCF is passed for
> > stable/8.0, it's time to enable task-based deployment for master
> > branch.
> >
> > Opinion?
> >
> > - Igor
> >
> > On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya 
> wrote:
> >> On 02.02.2016 17:35, Alexey Shtokolov wrote:
> >>> Hi Fuelers!
> >>>
> >>> As you may be aware, since [0] Fuel has implemented a new orchestration
> >>> engine [1]
> >>> We switched the deployment paradigm from role-based (aka granular) to
> >>> task-based and now Fuel can deploy all nodes simultaneously using
> >>> cross-node dependencies between deployment tasks.
> >>
> >> That is great news! Please do not forget about docs updates as well.
> >> Those docs are always forgotten like poor orphans... I submitted a patch
> >> [0] to MOS docs, please review and add more details, if possible, for
> >> plugins impact as well.
> >>
> >> [0] https://review.fuel-infra.org/#/c/16509/
> >>
> >>>
> >>> This feature is experimental in Fuel 8.0 and will be enabled by default
> >>> for Fuel 9.0
> >>>
> >>> Allow me to show you the results. We made some benchmarks on our bare
> >>> metal lab [2]
> >>>
> >>> Case #1. 3 controllers + 7 computes w/ ceph.
> >>> Task-based deployment takes *~38* minutes vs *~1h15m* for granular
> (*~2*
> >>> times faster)
> >>> Here and below the deployment time is average time for 10 runs
> >>>
> >>> Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
> >>> Task-based deployment takes *~41* minutes vs *~1h32m* for granular
> >>> (*~2.24* times faster)
> >>>
> >>>
> >>>
> >>> Also we took measurements for Fuel CI test cases. Standard BVT (Master
> >>> node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs on one
> host)
> >>>
> >>> Fuel CI slaves with *4 *cores *~1.1* times faster
> >>> In case of 4 cores for 7 VMs they are fighting for CPU resources and it
> >>> marginalizes the gain of task-based deployment
> >>>
> >>> Fuel CI slaves with *6* cores *~1.6* times faster
> >>>
> >>> Fuel CI slaves with *12* cores *~1.7* times faster
> >>
> >> These are really outstanding results!
> >> (tl;dr)
> >> I believe the next step may be to leverage the "external install & svc
> >> management" feature (example [1]) of the Liberty release (7.0.0) of
> >> Puppet-Openstack (PO) modules. So we could use separate concurrent
> >> cross-depends based tasks *within a single node* as well, like:
> >> - task: install_all_packages - a singleton task for a node,
> >> - task: [configure_x, for each x] - concurrent for a node,
> >> - task: [manage_service_x, for each x] - some may be concurrent for a
> >> node, while another shall be serialized.
> >>
> >> So, one might use the "--tags" separator for concurrent puppet runs to
> >> make things go even faster, for example:
> >>
> >> # cat test.pp
> >> notify {"A": tag => "a" }
> >> notify {"B": tag => "b" }
> >>
> >> # puppet apply test.pp
> >> Notice: A
> >> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> >> Notice: B
> >> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
> >>
> >> # puppet apply test.pp --tags a
> >> Notice: A
> >> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> >>
> >> # puppet apply test.pp --tags a & puppet apply test.pp --tags b
> >> Notice: B
> >> Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
> >> Notice: A
> >> Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
> >>
> >> Which is supposed to be faster, although not for this example.
> >>
> >> [1] https://review.openstack.org/#/c/216926/3/manifests/init.pp
> >>
> >>>
> >>> You can see additional information and charts in the presentation [3].
> >>>
> >>> [0]
> >>> -
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082093.html
> >>> [1]
> >>> -
> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/task-based-deployment-mvp.html
> >>> [2] -  3 x HP ProLiant DL360p Gen8 (XeonE5 6 cores/64GB/SSD)  + 7 x HP
> >>> ProLiant DL320p Gen8 (XeonE3 4 cores/8-16GB/HDD)
> >>> [3] -
> >>>
> https://docs.google.com/presentation/d/1jZCFZlXHs_VhjtVYS2VuWgdxge5Q6sOMLz4bRLuw7YE
> >>>
> >>> ---
> >>> WBR, Alexey Shtokolov
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >> --
> >> Best regards,
> >> Bogdan Dobrelya,
> >> Irc #bogdando
> >>
> >>
> 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Jay Pipes

On 02/08/2016 10:29 AM, Sean Dague wrote:

On 02/08/2016 10:07 AM, Thierry Carrez wrote:

Brian Curtin wrote:

On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:

I would love to see the OpenStack contributor community take back the
design
summit to its original format and purpose and decouple it from the
OpenStack
Summit's conference portion.

I believe the design summits should be organized by the OpenStack
contributor community, not the OpenStack Foundation and its marketing
and
event planning staff.


As someone who spent years organizing PyCon as a volunteer from the
Python community, with four of those years in a row taking about 8
solid months of pre-conference effort, not to mention the on-site
effort to run a volunteer conference of that size [0]...I would
suggest even longer and harder thought before stretching a community
like this even more thinly. Things should change, but probably not the
"who's doing the work" aspect.


Beyond stretching out the community, we would end up with the same
problem we are trying to solve. Most of the cross-project folks that
would end up organizing the event would be too busy organizing the event
to be able to fully participate in it.


Right, this is a super key point. Even just organizing and running local
user groups, I know how much time is spent making sure the whole thing
seems effortless to attendees, and they can just focus on content.

Even look at the recently run Nova midcycle, with 40ish folks, it still
required some substantial logistics to pull off. The HPE team did a
great job with that. But it definitely required real time and effort.


Agreed.


The Foundation has done an amazing job of making everyone think this is
easy (I know how much it is not). Without their efforts organizing these
events, eliminating the distractions of wandering in a strange city to
find lunch, having a network, projectors, access to facilities,
appropriate sized spaces, double checking all those things will really
actually be there, chasing after folks when they are not, handling the
myriad of other unforeseen issues that you never have to see, we would
not be nearly as productive at the design summits.


I understand this. I ran the MySQL Users Conference and Expo for 2 
years. I realize the amount of effort it takes to organize a 2500+ 
person event. It's essentially a full-time job.


I suppose I should have used a different wording. What I really think 
should happen is that a *separate* team, distinct from the main team that 
does the marketing event, should handle organizing the developer-focused 
working events. I recognize that it's a lot of work and that asking the 
"community" to just handle the working event organization will lead to 
undue burden on certain cross-project folks.


However, here are a couple things that do *not* need to be done by a 
separate team that handles working event organization:


1) Vendor and sponsorship stuff
2) A call for speakers and reviewing thousands of submissions (this is 
self-organized by each project's contributor team for the working events)
3) Determining keynote slots and wrangling C-level speakers -- or any 
speaker wrangling at all

4) "Check-in" and registration stands
5) Dealing with schwag, giveaways, parties, and other superfluous stuff

So, yes, while it's a lot of work, it's not the same kind of work as the 
marketing event staff.



So while I agree it's worth considering whether the Mega Conference and
Design Summit should continue to be collocated and on the same time
table, I think the idea that the Design Summit, at even only 500
attendees, could/should be run without the Foundation is just folly
based on a lack of understanding for what it takes to do events at that
scale.


For the record, I *do* understand what it takes to do events at that scale.

> And massively underestimates the effort and skill the Foundation
> has at making our events run as smoothly as they do.


I wasn't saying anything about the effort and skill the Foundation 
expends on making the marketing events run smoothly.


I am pushing for a return to *working* events for developers.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal for having a service type registry and curated OpenStack REST API

2016-02-08 Thread Monty Taylor

On 02/08/2016 06:16 AM, Sean Dague wrote:

On 02/07/2016 08:13 PM, Monty Taylor wrote:

On 02/07/2016 07:30 AM, Jay Pipes wrote:

On 02/04/2016 06:38 AM, Sean Dague wrote:

What options do we have?



2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear collision
down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


The above is my choice. I'd also like to point out that I'm only talking
about the *service* projects here -- i.e. the things that expose a REST
API.


yes


I don't care about a naming registry for non-service projects because
they do not expose a public user-facing API that needs to be curated and
protected.


yes


I would further suggest using the openstack/governance repo's
projects.yaml file for this registry. This is already under the TC's
administration and the API WG could be asked to work closely with the TC
to make recommendations on naming for all type:service projects in the
file. We should add a service:$type tag to the projects.yaml file and
that would serve as the registry for REST API services.

We would need to institute this system by first tackling the current
areas of REST API functional overlap:

* Ceilometer and Monasca are both type:service projects that are both
performing telemetry functionality in their REST APIs. The API WG should
work with both communities to come up with a 6-12 month plan for
creating a *single* OpenStack Telemetry REST API that both communities
would be able to implement separately as they see fit.

* All APIs that the OpenStack Compute API currently proxies to other
service endpoints need to have a formal sunsetting plan. This includes:

   - servers/{server_id}/os-interface (port interfaces)
   - images/
   - images/{image_id}/metadata
   - os-assisted-volume-snapshots/
   - servers/{server_id}/os-bare-metal-nodes/ (BTW, why is this a
sub-resource of /servers again?)
   - os-fixed-ips/
   - os-floating-ip-dns/
   - os-floating-ip-pools/
   - os-floating-ips/
   - os-floating-ips-bulk/
   - os-networks/
   - os-security-groups/
   - os-security-group-rules/
   - os-security-group-default-rules/
   - os-tenant-networks/
   - os-volumes/
   - os-snapshots/

* All those services that have overlapping top-level resources must have
a plan to either:
   - align/consolidate the top-level resource if it makes sense
   - rename the top-level resource to be more specific if needed, or
   - place the top-level resource as a sub-resource on a top-level
resource that is unique in the full OpenStack REST API set of top-level
resources


Yes please god yes oh yes a million times yes. I've never agreed with
you as much as this since the JSON/XML glory of the Cactus summit.

I know shade is not the OpenStack SDK - but as a library that has a top
level "OpenStackCloud" object that has methods like "list_servers" and
"list_images" - things that overlap in conceptual name but do not
present the same semantics quickly become difficult. I believe Jay's
proposal above will help to make the situation much more saner.

Monty

/me sends jaypipes a fruit basket


Ok, but in Tokyo you specifically also stated no one should ever remove
an API because doing so destroys their users.

I'm trying to reconcile those points of view.


Getting to a new list of things where there are clear resource names 
that have one and only one set of semantics associated with them is the 
thing I want.


If there are other API endpoints lurking somewhere for backwards compat 
that we don't document and that I can safely ignore - that does not 
bother me.


We should NEVER actually delete a thing - that's only self serving. 
However, we can remove a thing from being a thing we talk about. So, if 
nova has an os-floating-ips for backwards compat, that's neat, but I 
can just always use the neutron floating-ips resource and ignore it.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade][keystone] Keystone multinode grenade

2016-02-08 Thread Morgan Fainberg
On Mon, Feb 8, 2016 at 5:20 AM, Grasza, Grzegorz 
wrote:

>
> > From: Sean Dague [mailto:s...@dague.net]
> >
> > On 02/05/2016 04:44 AM, Grasza, Grzegorz wrote:
> > >
> > >> From: Sean Dague [mailto:s...@dague.net]
> > >>
> > >> On 02/04/2016 10:25 AM, Grasza, Grzegorz wrote:
> > >>>
> > >>> Keystone is just one service, but we want to run a test, in which it
> > >>> is setup in HA – two services running at different versions, using
> > >>> the same
> > >> DB.
> > >>
> > >> Let me understand the scenario correctly.
> > >>
> > >> There would be Keystone Liberty and Keystone Mitaka, both talking to
> > >> a Liberty DB?
> > >>
> > >
> > > The DB would be upgraded to Mitaka. From Mitaka onwards, we are
> > making only additive schema changes, so that both versions can work
> > simultaneously.
> > >
> > > Here are the specifics:
> > > http://docs.openstack.org/developer/keystone/developing.html#online-migration
> >
> > Breaking this down, it seems like there is a simpler test setup here.
> >
> > Master keystone is already tested with master db, all over the place. In
> unit
> > tests all the dsvm jobs. So we can assume pretty hard that that works.
> >
> > Keystone doesn't cross talk to itself (as there are no workers), so I
> don't think
> > there is anything to test there.
> >
> > Keystone stable working with master db seems like an interesting bit, are
> > there already tests for that?
>
> Not yet. Right now there is only a unit test, checking obvious
> incompatibilities.
>
>
As an FYI, this test was reverted as we spent a significant amount of time
covering it at the midcycle (it was going to require us to significantly
rework in-flight code, and was developed / agreed upon before the new db
restrictions landed). We will be revisiting this with the now better
understanding of the scope and how to handle the "limited" downtime upgrade
first thing in Newton.


> >
> > Also, is there any time where you'd get data from Keystone new use it in
> a
> > server, and then send it back to Keystone old, and have a validation
> issue?
> > That seems easier to trigger edge cases at a lower level. Like an extra
> > attribute is in a payload in Keystone new, and Keystone old faceplants
> with it.
>
> In case of keystone, the data that can cause compatibility issues is in
> the DB.
> There can be issues when data stored or modified by the new keystone
> is read by the old service, or the other way around. The issues may happen
> only in certain scenarios, like:
>
> row created by old keystone ->
> row modified by new keystone ->
> failure reading by old keystone
>
> I think a CI test, in which we have more than one keystone version
> accessible
> at the same time is preferable to testing only one scenario. My proposed
> solution with HAProxy probably wouldn't trigger all of them, but it may
> catch
> some instances in which there is no full lower level test coverage. I
> think testing
> in HA would be helpful, especially at the beginning, when we are only
> starting to
> evaluate rolling upgrades and discovering new types of issues that we
> should
> test for.
>
>
This is something we need to work on. We came to the conclusion that it is
going to be very hard (tm) to run multiple versions of keystone on the same
DB; the volume of complexity added is fairly large. I also want to better
understand the proposed upgrade paths - we ran many scenarios and came up
with a ton of edge cases / issues.

Thus this is likely something we will need to target for newton, but this
shouldn't stop us from standing up the basic test scaffolding so we can
move more quickly next cycle.

When we have the gate job, I would like to see us run a battery of tests if
we're doing this against both keystones in isolation rather than HAProxy.
The HAProxy test is a different type of test to confirm random subsets of
read/write don't break (aren't wildly different) across the two different
code bases. Testing each API in isolation is also important.


> >
> > The reality is that standing up an HA Proxy Keystone multinode
> environment
> > is going to be pretty extensive amount of work. And when things fail,
> digging
> > out why, is kind of hard. However it feels like most of the interesting
> edges
> > can be tested well at a lower level. And is at least worth getting those
> sorted
> > before biting off the bigger thing.
>
> I only proposed multinode grenade, because I thought it is the most
> complete
> solution for what I want to achieve, but maybe there is a simpler way, like
> running two keystone instances on the same node?
>
>
It wouldn't be hard to run two instances of keystone on different ports.
However, it is likely to also be a chunk of work to make devstack able to
handle that (but less work than multinode I'm 90% sure).


> / Greg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Jim Meyer
On Feb 8, 2016, at 7:07 AM, Thierry Carrez  wrote:
> 
> Brian Curtin wrote:
>>> On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:
>>> I would love to see the OpenStack contributor community take back the design
>>> summit to its original format and purpose and decouple it from the OpenStack
>>> Summit's conference portion.
>>> 
>>> I believe the design summits should be organized by the OpenStack
>>> contributor community, not the OpenStack Foundation and its marketing and
>>> event planning staff.
>> 
>> As someone who spent years organizing PyCon as a volunteer from the
>> Python community, with four of those years in a row taking about 8
>> solid months of pre-conference effort, not to mention the on-site
>> effort to run a volunteer conference of that size [0]...I would
>> suggest even longer and harder thought before stretching a community
>> like this even more thinly. Things should change, but probably not the
>> "who's doing the work" aspect.
> 
> Beyond stretching out the community, we would end up with the same problem we 
> are trying to solve. Most of the cross-project folks that would end up 
> organizing the event would be too busy organizing the event to be able to 
> fully participate in it.

+1000.

--j
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [lbaas][octavia] Security/networking questions

2016-02-08 Thread Major Hayden

Hey there,

I've been doing some work to research how best to implement LBaaSv2 and Octavia 
within the OpenStack-Ansible project.  During that research, I've come up with 
a few questions.

1) Is it possible for octavia to operate without providing it with admin 
credentials?

2) If a user has amphora LB's deployed and a serious vulnerability is released 
for OpenSSL/haproxy, what should the user do to patch those load balancers?

3) Is a load balancer management network required?  Putting a LB onto an admin 
tenant network as well as a customer tenant network is challenging and bridging 
those networks could allow an attacker to gain access to other things on that 
admin tenant network.

Thanks in advance for your time.

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Draft Agenda for MAN Ops Meetup (Feb 15, 16)

2016-02-08 Thread Tom Fifield

Brilliant, thanks!

On 06/02/16 18:42, Mariano Cunietti wrote:

Hi Tom



On 04 Feb 2016, at 12:28, Tom Fifield wrote:

We still need moderators for the following:


* Keystone Federation - discussion session


I can help moderating this session



* OSOps - what is it, where is it going, what you can do


Also on this one

Ciao

M.



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron] documenting configuration option segregation between services and agents

2016-02-08 Thread Ihar Hrachyshka

Kevin Benton  wrote:


Propose it as a devref patch!


+1. Has it happened already?



On Wed, Jan 27, 2016 at 12:30 PM, Dustin Lundquist   
wrote:
We should expand services_and_agents devref to describe how and why  
configuration options should be segregated between services and agents. I  
stumbled into this recently while trying to remove a confusing duplicate  
configuration option [1][2][3]. The present separation appears to be  
'tribal knowledge', and not consistently enforced. So I'll take a shot at  
explaining the status quo as I understand it and hopefully some seasoned  
contributors can fill in the gaps.


=BEGIN PROPOSED DEVREF SECTION=
Configuration Options
---------------------

In addition to database access, configuration options are segregated  
between neutron-server and agents. Both services and agents may load the  
main neutron.conf since this file should contain the Oslo message  
configuration for internal Neutron RPCs and may contain host specific  
configuration such as file paths. In addition neutron.conf contains the  
database, keystone and nova credentials and endpoints strictly for use by  
neutron-server.


In addition, neutron-server may load a plugin specific configuration file,  
yet the agents should not. Since the plugin configuration consists primarily  
of site-wide options and the plugin provides the persistence layer for Neutron,  
agents should be instructed to act upon these values via RPC.


Each individual agent may have its own configuration file. This file  
should be loaded after the main neutron.conf file, so the agent  
configuration takes precedence. The agent specific configuration may  
contain options which vary between hosts in a Neutron deployment,  
such as the external_network_bridge for an L3 agent. If any agent requires  
access to additional external services beyond the Neutron RPC, those  
endpoints should be defined in the agent specific configuration file  
(e.g. nova metadata for metadata agent).



==END PROPOSED DEVREF SECTION==
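
To illustrate the precedence described above, here is a minimal oslo.config
sketch (the option registration and file paths are simplified examples, not
actual neutron code):

# Values from config files passed later on the command line override values
# for the same option from earlier files, so the agent specific file wins
# over neutron.conf for any option defined in both.
from oslo_config import cfg

cfg.CONF.register_opts([
    cfg.StrOpt('external_network_bridge', default='br-ex'),
])

cfg.CONF(['--config-file', '/etc/neutron/neutron.conf',
          '--config-file', '/etc/neutron/l3_agent.ini'])

print(cfg.CONF.external_network_bridge)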

Disclaimers: this description is informed by my own experiences reading  
existing documentation and examining example configurations including  
various devstack deployments. I've tried to use RFC style wording:  
should, may, etc.. I'm relatively confused on this subject, and my goal  
in writing this is to obtain some clarity myself and share it with others  
in the form of documentation.



[1] https://review.openstack.org/262621
[2] https://bugs.launchpad.net/neutron/+bug/1523614
[3] https://review.openstack.org/268153

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Fedor Zhadaev for the fuel-menu-core team

2016-02-08 Thread Tatyana Leontovich
+1



On Mon, Feb 8, 2016 at 11:54 AM, Igor Kalnitsky 
wrote:

> Hey Fuelers,
>
> I'd like to nominate Fedor Zhadaev for the fuel-menu-core team.
> Fedor's doing good reviews with detailed feedback [1], and has
> contributed over 20 patches during the Mitaka release cycle [2].
>
> Fuel Cores, please reply back with +1/-1.
>
> - igor
>
> [1] http://stackalytics.com/?module=fuel-menu=mitaka
> [2]
> http://stackalytics.com/?module=fuel-menu=mitaka_id=fzhadaev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Helion HLM on GitHub and what next?

2016-02-08 Thread Jesse Pretorius
Darragh - thanks for sharing your thoughts. We look forward to working with
you!

On 1 February 2016 at 14:20, Bailey, Darragh  wrote:

>
> My initial thoughts are that the first 2 places to focus for alignment
> (assuming people agree with the idea of life cycle phases) would be:
>
> a) abstract the different life cycle phases we have for HLM to be
> controlled by a role var. (I'll elaborate more below)
>

Agreed, this does seem to be a pattern which is forming within the Ansible
community. The debate, as always, is what is considered to be the
composable unit. In theory each role should do one thing and should be
simple, but roles have an overhead attached to them so it's becoming useful
to provide code paths within the role instead of separating each
function/life cycle phase (eg: check-prerequisites, install, configure,
upgrade, test) that can be activated as necessary. For me, this strikes a
nice balance between a sprawling set of roles and an over-complex set of
roles. It's easy to use and easy to understand.


> b) Move the current var access in the defaults.yml that has knowledge of
> config-processor output structure to be abstracted at the site level so
> you can use the same roles whether it's with data from the hlm config
> processor or another source (reusability is key). I guess one could use
> wrapper roles, but I think that's less desirable except for handling
> edge cases or transition.
>

Yes, this would be good. What also seems to be forming as a pattern within
the community is a concept of internal vars (vars only for use within the
role) and external vars (vars which may be overridden by group_vars, plays,
CLI, etc). The internal vars are not subject to deprecation, whereas the
external vars are as they effectively fall into something akin to an 'API
contract'.

The same pattern should perhaps also be applied to playbooks - some should
be part of the API contract, and some not.

I'd like us to get a discussion going around these patterns sooner rather
than later, with the hope of completing them and setting them as a policy
in place for the Newton cycle. Shall we arrange some sort of discussion at
the OSA mid-cycle [1] to start this work? We have arranged for the
possibility of remote participation if you can't make it to the UK for it.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/085810.html

Jesse
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Security hardening is now integrated!

2016-02-08 Thread Jesse Pretorius
On 27 January 2016 at 13:29, Major Hayden  wrote:

>
> After four months and 80+ gerrit reviews, the security hardening
> configurations provided by the openstack-ansible-security role are now
> integrated with OpenStack-Ansible!  The Jenkins gate jobs for
> OpenStack-Ansible are already applying these configurations by default.
>

Excellent work Major, thanks for leading this effort and thank you to all
the reviewers and contributors for making this happen!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Should the playbook stop on certain tasks?

2016-02-08 Thread Jesse Pretorius
On 13 January 2016 at 15:10, Major Hayden  wrote:

>
> For example, the STIG requires[1] that all system accounts other than root
> are locked.  This could be dangerous on a running production system as
> Ubuntu has non-root accounts that are not locked.  At the moment, the
> playbook does a hard stop (using the fail module) when this check
> fails[2].  Although that can be skipped with --skip-tag, it can be a little
> annoying if you have automation that depends on the playbook running
> without stopping.
>
> Is there a good alternative for this?  I've found a few options:
>
>   1) Leave it as-is and do a hard stop on these tasks
>   2) Print a warning to the console but let the playbook continue
>   3) Use an Ansible callback plugin to catch these and print them at the
> end of the playbook run
>
> Thanks in advance for any advice!
>
> [1]
> https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/2015-05-26/finding/V-38496
> [2]
> https://github.com/openstack/openstack-ansible-security/blob/master/tasks/auth.yml#L60-L87
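
For what it's worth, option 3 above would look roughly like the sketch below:
a callback plugin that collects findings and reports them once at the end of
the run (the plugin name and the 'compliance' tag convention are made up for
this example):

from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    """Collect failed compliance checks and print them after the play."""

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'compliance_summary'

    def __init__(self):
        super(CallbackModule, self).__init__()
        self._findings = []

    def v2_runner_on_failed(self, result, ignore_errors=False):
        # Only collect tasks tagged as compliance checks; such tasks would
        # run with ignore_errors so the play keeps going.
        if 'compliance' in getattr(result._task, 'tags', []):
            self._findings.append(result._task.get_name())

    def v2_playbook_on_stats(self, stats):
        if self._findings:
            self._display.warning('Compliance findings: %s'
                                  % ', '.join(self._findings))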


I think the best thing to do here is to take a stance on what the
project/role deems to be a good set of defaults for the environment its
catering for. Whatever that stance is should be rigorously enforced (ie the
playbook should hard stop if there is non-compliance).

For anyone using automation, if they wish to skip particular compliance
elements then they should build the skip into their automation (ie add
--skip-tags). Skipping compliance should be a conscious action implemented
deliberately by the consumer of the role.

Darren's reply is interesting and perhaps worth consideration. As far as I
recall the security role adopted the STIG primarily because it was the only
openly available set of standards that didn't require licensing. If there
are other options to explore and ways to consume them, then perhaps that
should be an initiative for the Newton cycle?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Tacker Installation support for Openstack Kilo

2016-02-08 Thread Eduardo Gonzalez
Hi,

Currently installation guide is under review.
https://review.openstack.org/#/c/255481/3/doc/source/devref/ubuntu_1404_manual_installation.rst
At the moment it is only for Ubuntu-based OpenStack, but it should work for
Red Hat-based OSes too; change your paths if necessary to fit your
environment requirements.

Regards

2016-02-08 13:06 GMT+01:00 Basavaraj B :

> Hi,
>
> We have installed Openstack Kilo version and would like to install Tacker
> on it.
> We see only the devstack version of the Tacker installation available and we
> want to install Tacker as a separate component.
>
> Can anyone provide some pointers?
>
> Regards,
> Basavaraj
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [all] Getting error after doing unstack.sh onstable/liberty

2016-02-08 Thread Markus Zoeller
Umar Yousaf  wrote on 02/08/2016 03:04:31 AM:

> From: Umar Yousaf 
> To: openstack-dev@lists.openstack.org
> Date: 02/08/2016 03:05 AM
> Subject: [openstack-dev] [all] Getting error after doing unstack.sh on
> stable/liberty
> 
> I just ran unstack.sh for some reason, and afterwards when I 
> run stack.sh I get the following errors:
>  i) Error: Service g-api is not running
> ii) Error: Service g-reg is not running
> iii) Error: Service h-api is not running
> 
> After a lot of struggle I found that when I reboot my 
> machine, I can successfully run stack.sh just like before. It's really 
> tedious to reboot, so does anyone have a better solution to this problem? I
> know previous versions of devstack did not have any problems 
> like that, but I don't know why it is happening...
> 
> Here is the log where error comes
> 2016-02-08 01:40:47.966 | + 
failures='/opt/stack/status/stack/g-api.failure
> 2016-02-08 01:40:47.967 | /opt/stack/status/stack/g-reg.failure
> 2016-02-08 01:40:47.967 | /opt/stack/status/stack/h-api.failure'
> 2016-02-08 01:40:47.967 | + for service in '$failures'
> 2016-02-08 01:40:47.967 | ++ basename 
/opt/stack/status/stack/g-api.failure
> 2016-02-08 01:40:47.969 | + service=g-api.failure
> 2016-02-08 01:40:47.969 | + service=g-api
> 2016-02-08 01:40:47.969 | + echo 'Error: Service g-api is not running'
> 2016-02-08 01:40:47.969 | Error: Service g-api is not running
> 2016-02-08 01:40:47.969 | + for service in '$failures'
> 2016-02-08 01:40:47.969 | ++ basename 
/opt/stack/status/stack/g-reg.failure
> 2016-02-08 01:40:47.970 | + service=g-reg.failure
> 2016-02-08 01:40:47.970 | + service=g-reg
> 2016-02-08 01:40:47.970 | + echo 'Error: Service g-reg is not running'
> 2016-02-08 01:40:47.970 | Error: Service g-reg is not running
> 2016-02-08 01:40:47.970 | + for service in '$failures'
> 2016-02-08 01:40:47.970 | ++ basename 
/opt/stack/status/stack/h-api.failure
> 2016-02-08 01:40:47.971 | + service=h-api.failure
> 2016-02-08 01:40:47.971 | + service=h-api
> 2016-02-08 01:40:47.971 | + echo 'Error: Service h-api is not running'
> 2016-02-08 01:40:47.971 | Error: Service h-api is not running
> 2016-02-08 01:40:47.971 | + '[' -n 
'/opt/stack/status/stack/g-api.failure
> 2016-02-08 01:40:47.971 | /opt/stack/status/stack/g-reg.failure
> 2016-02-08 01:40:47.971 | /opt/stack/status/stack/h-api.failure' ']'
> 2016-02-08 01:40:47.972 | + die 1537 'More details about the above 
> errors can be found with screen, with ./rejoin-stack.sh'
> 2016-02-08 01:40:47.972 | + local exitcode=0
> 2016-02-08 01:40:47.972 | [Call Trace]
> 2016-02-08 01:40:47.972 | ./stack.sh:1314:service_check
> 2016-02-08 01:40:47.972 | 
/home/airbourne/devstack/functions-common:1537:die
> 2016-02-08 01:40:47.974 | [ERROR] /home/airbourne/devstack/functions-
> common:1537 More details about the above errors can be found with 
> screen, with ./rejoin-stack.sh
> 2016-02-08 01:40:49.057 | Error on exit
> 
> regards,
> 
> Umar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I sometimes experience that "unstack.sh" doesn't clean up all the 
services. I then have to check with "ps aux | grep python" which of
the services are still running despite the "unstack.sh" call,
and have to kill those processes manually with "kill -9 <PID>".
After that, a new "stack.sh" usually works for me.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A strange transition in Ironic FSM

2016-02-08 Thread Vladyslav Drok
Hi all,

Looking at the state machine spec,
this might be a leftover from the older state machine.
When the node is in the deleting state, we only clean up the kernel/ramdisk
and clear DHCP options, so a rebuild may be completed successfully afterwards.
I agree that this transition does not make much sense, and removing it will
help fix the bug you've referenced, but I'm not sure how to handle backwards
compatibility in this case (and whether we need to do this).

Vlad

On Fri, Feb 5, 2016 at 4:00 PM, Yuriy Zveryanskyy  wrote:

> Hi.
>
> We have a followed transition in common/states.py:
>
> # An errored instance can be rebuilt
> # ironic/conductor/manager.py:do_node_deploy()
> machine.add_transition(ERROR, DEPLOYING, 'rebuild')
>
> At first glance it looks correct. But the ERROR state is
> used only for errors after deleting, see
> http://docs.openstack.org/developer/ironic/_images/states.svg
> So ERROR is a delete error, at least for now, and the transition
> error (delete error) -> deploying (on_rebuild)
> is possible.
> It looks strange if an operator wants to remove an instance completely and
> then does a rebuild after an error (the non-error target chain for deleting
> is cleaning -> available).
> I think this transition should be removed. Without this strange transition,
> bug https://bugs.launchpad.net/ironic/+bug/1522008 can be fixed in a simple
> way: the port's vif id can be removed via an Ironic virt driver request
> before waiting for CLEANING (it's no longer needed).
>
> Yuriy Zveryanskyy
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Fwd: Need to integrate radius server with OpenStack

2016-02-08 Thread Van Leeuwen, Robert
>Now we are stuck at this point: how to authenticate users via FreeRADIUS.
>Any help or pointers on this would be appreciated.


Hi Pratik,

You can write your own keystone middleware to authenticate with.

There is a nice doc about that here:
http://docs.openstack.org/developer/keystone/external-auth.html

Note that if you use external_auth as in the example it will only take over the 
authentication:
The user will still need to exist in keystone and roles need to be assigned in 
the keystone backend.

For a "fully integrated" solution you will have to look at LDAP afaik.
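
A very rough sketch of the middleware idea (the radius_authenticate() helper
is hypothetical; a real implementation would use something like pyrad and
proper error handling):

import base64

def radius_authenticate(username, password):
    # Placeholder: do a RADIUS Access-Request round trip and return True
    # on Access-Accept (e.g. via pyrad).
    return False

class RadiusAuthMiddleware(object):
    """WSGI filter in front of keystone that sets REMOTE_USER.

    Keystone's external auth method only trusts REMOTE_USER, so the user
    must still exist in keystone and have roles assigned there.
    """
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        header = environ.get('HTTP_AUTHORIZATION', '')
        if header.startswith('Basic '):
            decoded = base64.b64decode(header[6:]).decode('utf-8')
            username, _, password = decoded.partition(':')
            if radius_authenticate(username, password):
                environ['REMOTE_USER'] = username
        return self.app(environ, start_response)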

Cheers,
Robert van Leeuwen
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Fuel] Nominate Fedor Zhadaev for the fuel-menu-core team

2016-02-08 Thread Evgeniy L
+1

On Mon, Feb 8, 2016 at 12:54 PM, Igor Kalnitsky 
wrote:

> Hey Fuelers,
>
> I'd like to nominate Fedor Zhadaev for the fuel-menu-core team.
> Fedor's doing good reviews with detailed feedback [1], and has
> contributed over 20 patches during the Mitaka release cycle [2].
>
> Fuel Cores, please reply back with +1/-1.
>
> - igor
>
> [1] http://stackalytics.com/?module=fuel-menu=mitaka
> [2]
> http://stackalytics.com/?module=fuel-menu=mitaka_id=fzhadaev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Thierry Carrez

Daniel P. Berrange wrote:

[...]
I really agree with everything you say, except for the bit about the
community doing organization - I think its fine to let function event
staff continue with the burden of planning, as long as their goals are
directed by the community needs.


Exactly.


I might suggest that we could be a bit more radical with the developer
event and decouple the timing from the release cycle. The design summits
are portrayed as events where we plan the next 6 months of work, but the
release has already been open for a good 2-3 or more weeks before we meet
in the design summit. This always makes the first month of each development
cycle pretty inefficient as decisions are needlessly postponed until the
summit. The bulk of specs approval then doesn't happen until after the
summit, leaving even less time until feature freeze to get the work done.


I agree that the developer event happens too late in the cycle (3 weeks 
after final release, 5 weeks after RC1 where most people switch to next 
cycle, and 8 weeks after FF, where we start thinking about the next 
cycle). That said, I still think the dev event should be "coupled" with 
the cycles. It just needs to happen earlier.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal for having a service type registry and curated OpenStack REST API

2016-02-08 Thread Sean Dague
On 02/07/2016 08:30 AM, Jay Pipes wrote:
> On 02/04/2016 06:38 AM, Sean Dague wrote:
>> What options do we have?
> 
>> 2) Have a registry of "common" names.
>>
>> Upside, we can safely use common names everywhere and not fear collision
>> down the road.
>>
>> Downside, yet another contention point.
>>
>> A registry would clearly be under TC administration, though all the
>> heavy lifting might be handed over to the API working group. I still
>> imagine collision around some areas might be contentious.
> 
> The above is my choice. I'd also like to point out that I'm only talking
> about the *service* projects here -- i.e. the things that expose a REST
> API.
> 
> I don't care about a naming registry for non-service projects because
> they do not expose a public user-facing API that needs to be curated and
> protected.
> 
> I would further suggest using the openstack/governance repo's
> projects.yaml file for this registry. This is already under the TC's
> administration and the API WG could be asked to work closely with the TC
> to make recommendations on naming for all type:service projects in the
> file. We should add a service:$type tag to the projects.yaml file and
> that would serve as the registry for REST API services.
> 
> We would need to institute this system by first tackling the current
> areas of REST API functional overlap:
> 
> * Ceilometer and Monasca are both type:service projects that are both
> performing telemetry functionality in their REST APIs. The API WG should
> work with both communities to come up with a 6-12 month plan for
> creating a *single* OpenStack Telemetry REST API that both communities
> would be able to implement separately as they see fit.

1) how do you imagine this happening?

2) is there buy in from both communities?

3) 2 implementations of 1 API that is actually semantically the same is
super hard. Doing so in the IETF typically takes many years.

I feel like we spent a bunch of time a couple years ago giving projects
detailed improvement plans from the TC, and it really didn't go all that
well. The outside-in approach without community buy-in mostly just gets
combative and hostile.

> * All APIs that the OpenStack Compute API currently proxies to other
> service endpoints need to have a formal sunsetting plan. This includes:
> 
>  - servers/{server_id}/os-interface (port interfaces)
>  - images/
>  - images/{image_id}/metadata
>  - os-assisted-volume-snapshots/
>  - servers/{server_id}/os-bare-metal-nodes/ (BTW, why is this a
> sub-resource of /servers again?)
>  - os-fixed-ips/
>  - os-floating-ip-dns/
>  - os-floating-ip-pools/
>  - os-floating-ips/
>  - os-floating-ips-bulk/
>  - os-networks/
>  - os-security-groups/
>  - os-security-group-rules/
>  - os-security-group-default-rules/
>  - os-tenant-networks/
>  - os-volumes/
>  - os-snapshots/

It feels really early to run down a path here on trying to build a
registry for top level resources when we've yet to get service types down.

Also, I'm not hugely sure why:

GET /compute/flavors
GET /dataprocessing/flavors
GET /queues/flavors

Is the worst thing we could be doing. And while I get the idea that in a
perfect world there would be no overlap, the cost of getting there in
breaking working software seems... a bit of a bad tradeoff.

> * All those services that have overlapping top-level resources must have
> a plan to either:
>  - align/consolidate the top-level resource if it makes sense
>  - rename the top-level resource to be more specific if needed, or
>  - place the top-level resource as a sub-resource on a top-level
> resource that is unique in the full OpenStack REST API set of top-level
> resources

And what happens to all the software out there written to OpenStack? I
do get the concerns for coherency, at the same time randomly changing
API interfaces on people is a great way to kick all your users in the
knees and take their candy.

At the last summit basically *exactly* the opposite was agreed to. You
don't get to remove an API, ever. Because the moment it's out there, it
has users.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Fedor Zhadaev for the fuel-menu-core team

2016-02-08 Thread Bulat Gaifullin
+1

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 08 Feb 2016, at 12:57, Evgeniy L  wrote:
> 
> +1
> 
> On Mon, Feb 8, 2016 at 12:54 PM, Igor Kalnitsky wrote:
> Hey Fuelers,
> 
> I'd like to nominate Fedor Zhadaev for the fuel-menu-core team.
> Fedor's doing good reviews with detailed feedback [1], and has
> contributed over 20 patches during the Mitaka release cycle [2].
> 
> Fuel Cores, please reply back with +1/-1.
> 
> - igor
> 
> [1] http://stackalytics.com/?module=fuel-menu=mitaka 
> 
> [2] http://stackalytics.com/?module=fuel-menu=mitaka_id=fzhadaev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC 2616 was *so* 2010

2016-02-08 Thread Gyorgy Szombathelyi
Hi,

> -Original Message-
> From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
> Sent: Friday, 5 February 2016 21:21
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] RFC 2616 was *so* 2010
> 
> ... really more like 1999, but when OpenStack started back in '10 - RFC 2616
> was the boss.
> 
> Since then (circa '14) we've got 7230 et. al. - a helpful attempt to
> disambiguate things!  Hooray progress!
> 
> But when someone recently opened this bug I got confused:
> 
> https://bugs.launchpad.net/swift/+bug/1537811
> 
> 
> The wording in 7230 *is* in fact pretty clear - MUST NOT [send a content-
> length header, zero or otherwise, with a 204 response] - but I really can't 
> find
> nearly as strong a prescription in 2616.
> 
> Swift is burdened with a long-lived, stable API - which has led to wide
> adoption from a large ecosystem of clients that have for better or worse
> often adopted the practice of expecting the API to behave the way it does
> even when we might otherwise agree it has a wart here or there.
> 
> But I'm less worried about the client part - we've handled that plenty of
> times in the past - ultimately it's a value/risk trade off.  Can we fix it 
> without
> breaking anything - if we do break someone what's the risk of that fallout vs.
> the value of cleaning it up now (in this particular example RFC 7230 is 
> equally
> strongly prescriptive of clients, in that we should be able to say "content-
> length: booberries" in a 204 response and a well behaved client is expected
> to ignore the header and know that the 204 response is terminated with the
> first blank line following the headers).  Again, we've handled this before and
> I'm sure we'll make the right choice for our project and broad client base.
> 
> But I *am* worried about RFC 7230!?  Is it reasonable that a HTTP 1.1
> compliant server according to 2616 could possibly NOT be a HTTP 1.1
> compliant server after 7230?  Should the wording of this particular
> prescription be SHOULD NOT (is that even possible?!  I think I read
> somewhere that RFC's can have revisions; but I always just pretend they're
> like some sort of divine law which must be followed or face eternal scorn
> from your fellow engineers)  Maybe sending a "content-length: 0" header
> with a 204 response was *never* tolerable (despite being descriptive and
> innocuous), but you just couldn't tell you weren't conforming because of all
> the reasons 7230 got drafted in the first place!?  Does anyone know how to
> get ahold of Mark Nottingham so he can explain to me how all this works?
> 
As I mentioned in the bug report, the problem with disobeying RFC 7230 is that
some proxies will remove the Content-Length header from 204 answers, or even
worse: they will make the response invalid.

> -Clay
//György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a simple new tool: git-restack

2016-02-08 Thread Bailey, Darragh

I've got some git aliases that handle this, so I did a quick review to
see what might be useful additions to the tool.

  * Looks like passing of the branch should change the branch used for a
.gitreview file to parse
  o This looks to be unnecessary with newer git-review anyway, but
is probably a reasonable fallback for a while
  * Defaulting to looking for an explicit upstream using the
'@{u}' notation first looks like it would allow this tool to
work in any repo
  o newer git-review already sets this from 1.25


The aliases in case they help:

diverge-commit = !f() { git merge-base $(git show-upstream $@) ${1:-HEAD}; }; f
review-branch = !f() { git show ${1:-HEAD}:.gitreview | git config --get --file - gerrit.defaultbranch || echo master;}; f
rework = !git rebase -i $(git diverge-commit $@)
show-upstream = !f() { git rev-parse --symbolic-full-name --abbrev-ref ${1}@{u} 2>/dev/null || git rev-parse --symbolic-full-name --abbrev-ref $(git review-branch)@{u}; }; f

Regards,
Darragh Bailey
IRC: electrofelix
"Nothing is foolproof to a sufficiently talented fool" - Unknown

On 02/02/16 23:50, James E. Blair wrote:
> Paul Michali  writes:
>
>> Sounds interesting... the link
>> https://docs.openstack.org/infra/git-restack/ referenced
>> as the home page in PyPI is a broken link.
> I'm clearly getting ahead of things.  The correct link is:
>
>   http://docs.openstack.org/infra/git-restack/
>
> Thanks,
>
> Jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Nominate Fedor Zhadaev for the fuel-menu-core team

2016-02-08 Thread Igor Kalnitsky
Hey Fuelers,

I'd like to nominate Fedor Zhadaev for the fuel-menu-core team.
Fedor's doing good reviews with detailed feedback [1], and has
contributed over 20 patches during the Mitaka release cycle [2].

Fuel Cores, please reply back with +1/-1.

- igor

[1] http://stackalytics.com/?module=fuel-menu=mitaka
[2] http://stackalytics.com/?module=fuel-menu=mitaka_id=fzhadaev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][nova][neutron] Using specific endpoints

2016-02-08 Thread Brandon Logan
Not sure about the heatclient, but if you use the keystone session you
should be able to provide the endpoint_override kwarg to any instantiation
of a client that takes the session in.  At least I think that's the case.
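
A rough sketch of what that might look like for nova (the URLs and credentials
are placeholders, and whether a given client accepts endpoint_override this way
should be checked against its docs):

from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo', password='secret', project_name='demo',
                   user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)

# endpoint_override skips the catalog lookup and sends requests to the
# chosen controller's endpoint instead.
nova = nova_client.Client('2', session=sess,
                          endpoint_override='http://192.168.206.130:8774/v2.1')
print(nova.servers.list())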

Thanks,
Brandon

On Sat, 2016-02-06 at 22:51 +0530, pn kk wrote:
Can bypass_url in nova be used to point at a specific endpoint?
> 
> On Sat, Feb 6, 2016 at 4:49 PM, pn kk  wrote:
> Hi,
> 
> 
> We want to have a deployment in which we use a single keystone
> instance, but multiple controllers having other openstack
> services(glance/nova/neutron...) running on each of the
> controllers.
> 
> 
> All these services would register their endpoints with single
> keystone.
> 
> 
> Please suggest a way in which I can point openstack clients to
> specific endpoint and access its services (don't want to use
> regions).
> 
> 
> Is this supported?
> 
> 
> I saw that heat, neutron APIs can take endpoint urls. Can I
> use these APIs to solve my purpose?
> 
> 
> >>> from heatclient.client import Client
> >>> heat = Client('1', endpoint=heat_url, token=auth_token)
> >>> from neutronclient.v2_0 import client
> >>> neutron = 
> client.Client(endpoint_url='http://192.168.206.130:9696/',
> token='d3f9226f27774f338019aa262ef6')
> Could you please also share the APIs of nova/glance which can
> take endpoint_urls?
> 
> 
> -Thanks
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Use of restricted and multiverse in the gate

2016-02-08 Thread Jesse Pretorius
On 7 February 2016 at 14:11, Monty Taylor  wrote:

> We're working on getting per-region APT mirrors stood up for the nodepool
> nodes to use in the gate. As part of working on this, it struck me that we
> currently have restricted and multiverse enabled in our sources.list file.
>
> I ran a quick test of removing both of them on a devstack-gate change and
> nothing broke, so I believe that it would be safe to remove them, but I
> thought I'd check with everyone.
>
> Any objection to not including these in our apt mirrors?
>

I did a quick test run of our standard commit test (convergence and
functional test) and it has passed: https://review.openstack.org/277178

This covers tempest scenario testing including Nova, Glance, Swift, Cinder,
Keystone, Neutron.

For us (OpenStack-Ansible) it seems that we can do without the restricted
and multiverse components.

Thanks for the heads-up!

Jesse
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-08 Thread Julien Danjou
On Fri, Feb 05 2016, Jay Pipes wrote:

> However, even though it's not the Poppy team's fault, I think the fact that 
> the
> Poppy project user's only choice when using Poppy is to use a non-free backend
> disqualifies Poppy from being an OpenStack project. The fact that the Poppy
> team follows the four Opens and genuinely wants to align with the OpenStack
> development methodology and processes is admirable and we should certainly
> encourage that behaviour, including welcoming Poppy into our CI platform for 
> as
> much as we can (given the obvious limitations around functional testing of
> Poppy). However, at the end of the day, I agree with Sean that this non-free
> restriction inherent in Poppy means it should not be included in the
> openstack/governance projects.yaml file as an "official" OpenStack project.

This is the kind of situation that made Debian create a 'contrib'
section in its repository, a middle ground between 'main' (free software)
and 'non-free' (non-free software):

  "The contrib archive area contains supplemental packages intended to
  work with the Debian distribution, but which require software outside
  of the distribution to either build or function.

  Every package in contrib must comply with the DFSG."

People writing software that goes into 'contrib' did not write non-free
software, but their software depends on non-free software, which makes
it unusable for running a completely free system.

It seems OpenStack is finding itself in the same situation here. It
may be too soon – or even unwanted – to have an equivalent "contrib"
section, but the familiarity of the situation strikes me.

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Daniel P. Berrange
On Sun, Feb 07, 2016 at 03:07:20PM -0500, Jay Pipes wrote:
> I would love to see the OpenStack contributor community take back the design
> summit to its original format and purpose and decouple it from the OpenStack
> Summit's conference portion.
> 
> I believe the design summits should be organized by the OpenStack
> contributor community, not the OpenStack Foundation and its marketing and
> event planning staff. This will allow lower-cost venues to be chosen that
> meet the needs only of the small group of active contributors, not of huge
> masses of conference attendees. This will allow contributor companies to
> send *more* engineers to *more* design summits, which is something that
> really needs to happen if we are to grow our active contributor pool.
> 
> Once this decoupling occurs, I think that the OpenStack Summit should be
> renamed to the OpenStack Conference and Expo to better fit its purpose and
> focus. This Conference and Expo event really should be held once a year, in
> my opinion, and continue to be run by the OpenStack Foundation.
> 
> I, for one, would welcome events that have no conference check-in area, no
> evening parties with 2000 people, no keynote and powerpoint-as-a-service
> sessions, and no getting pulled into sales meetings.
> 
> OK, there, I said it.
> 
> Thoughts? Criticism? Support? Suggestions welcome.

I really agree with everything you say, except for the bit about the
community doing the organization - I think it's fine to let Foundation event
staff continue to carry the burden of planning, as long as their goals are
directed by the community's needs.

I might suggest that we could be a bit more radical with the developer
event and decouple the timing from the release cycle. The design summits
are portrayed as events where we plan the next 6 months of work, but the
release has already been open for a good 2-3 or more weeks before we meet
in the design summit. This always makes the first month of each development
cycle pretty inefficient as decisions are needlessly postponed until the
summit. The bulk of specs approval then doesn't happen until after the
summit, leaving even less time until feature freeze to get the work done.

In nova at least many of the major "priority themes" we decide upon are
tending to span across multiple development cycles, and we broadly seem
to have a good understanding of what the upcoming themes will be before
we get to the summit. The other problem with the design summit is that
since we often have not started the bulk of the dev work, we don't yet
know all the problems we're going to encounter. So we can talk forever
about theoretical stuff that never becomes an issue, while the actual
problems we uncover during implementation have to wait until the mid-cycle
for the real problem-solving work. IOW I'm not really convinced we actually
need to have the design summit as a forum for "planning the next release",
nor is it enormously useful for general problem solving, since it comes
too early in the dev process.

I think that our processes would become more efficient if we were to
decouple the design summit from the release cycle. We would be able to
focus on release planning right from the start of the dev cycle and not
pointlessly postpone decisions to a design summit, which would give us
more time to actually get the planned work written earlier in the cycle.

This would in turn let us make the developer summits into something which
strongly focuses on problem solving, where f2f collaboration is of maximum
benefit. IOW, it would be kind of like merging the design summit & midcycle
concepts into one - we'd have the benefits of the mid-cycle's focus on
explicit problem solving, combined with the ability to have cross-project
collaboration by being co-located with other projects. Instead of having
4 travel events a year, due to the need to fix them at 6-month intervals to align
with the release schedules, we could cut down to 2 or 3 developer events
a year, which are more productive overall.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal for having a service type registry and curated OpenStack REST API

2016-02-08 Thread Sean Dague
On 02/07/2016 08:13 PM, Monty Taylor wrote:
> On 02/07/2016 07:30 AM, Jay Pipes wrote:
>> On 02/04/2016 06:38 AM, Sean Dague wrote:
>>> What options do we have?
>> 
>>> 2) Have a registry of "common" names.
>>>
>>> Upside, we can safely use common names everywhere and not fear collision
>>> down the road.
>>>
>>> Downside, yet another contention point.
>>>
>>> A registry would clearly be under TC administration, though all the
>>> heavy lifting might be handed over to the API working group. I still
>>> imagine collision around some areas might be contentious.
>>
>> The above is my choice. I'd also like to point out that I'm only talking
>> about the *service* projects here -- i.e. the things that expose a REST
>> API.
> 
> yes
> 
>> I don't care about a naming registry for non-service projects because
>> they do not expose a public user-facing API that needs to be curated and
>> protected.
> 
> yes
> 
>> I would further suggest using the openstack/governance repo's
>> projects.yaml file for this registry. This is already under the TC's
>> administration and the API WG could be asked to work closely with the TC
>> to make recommendations on naming for all type:service projects in the
>> file. We should add a service:$type tag to the projects.yaml file and
>> that would serve as the registry for REST API services.
>>
>> We would need to institute this system by first tackling the current
>> areas of REST API functional overlap:
>>
>> * Ceilometer and Monasca are both type:service projects that are both
>> performing telemetry functionality in their REST APIs. The API WG should
>> work with both communities to come up with a 6-12 month plan for
>> creating a *single* OpenStack Telemetry REST API that both communities
>> would be able to implement separately as they see fit.
>>
>> * All APIs that the OpenStack Compute API currently proxies to other
>> service endpoints need to have a formal sunsetting plan. This includes:
>>
>>   - servers/{server_id}/os-interface (port interfaces)
>>   - images/
>>   - images/{image_id}/metadata
>>   - os-assisted-volume-snapshots/
>>   - servers/{server_id}/os-bare-metal-nodes/ (BTW, why is this a
>> sub-resource of /servers again?)
>>   - os-fixed-ips/
>>   - os-floating-ip-dns/
>>   - os-floating-ip-pools/
>>   - os-floating-ips/
>>   - os-floating-ips-bulk/
>>   - os-networks/
>>   - os-security-groups/
>>   - os-security-group-rules/
>>   - os-security-group-default-rules/
>>   - os-tenant-networks/
>>   - os-volumes/
>>   - os-snapshots/
>>
>> * All those services that have overlapping top-level resources must have
>> a plan to either:
>>   - align/consolidate the top-level resource if it makes sense
>>   - rename the top-level resource to be more specific if needed, or
>>   - place the top-level resource as a sub-resource on a top-level
>> resource that is unique in the full OpenStack REST API set of top-level
>> resources
> 
> Yes please god yes oh yes a million times yes. I've never agreed with
> you as much as this since the JSON/XML glory of the Cactus summit.
> 
> I know shade is not the OpenStack SDK - but as a library that has a top
> level "OpenStackCloud" object that has methods like "list_servers" and
> "list_images" - things that overlap in conceptual name but do not
> present the same semantics quickly become difficult. I believe Jay's
> proposal above will help to make the situation much saner.
> 
> Monty
> 
> /me sends jaypipes a fruit basket

Ok, but in Tokyo you specifically also stated no one should ever remove
an API because doing so destroys their users.

I'm trying to reconcile those points of view.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Tacker Installation support for Openstack Kilo

2016-02-08 Thread Basavaraj B
Hi,

We have installed the OpenStack Kilo release and would like to install Tacker
on it. We see only a devstack-based Tacker installation available, and we want
to install Tacker as a separate component.

Can anyone provide some pointers?

Regards,
Basavaraj
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [Openstack-dev] Glance

2016-02-08 Thread Pankaj Mishra
Hi,

I have been trying to execute a command in Glance but I am not able to
get it to work.


Could somebody please help me with it? Below given are the details of what
I am looking for.


1. location-add --url  [--metadata ] 

Here, what should I pass for the url in order to successfully add the location?


2.  md-resource-type-associate  


What should I pass here  and 


Can somebody kindly help me to execute the command.


Thanks & Regards,

Pankaj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][neutron-*] Notice! pylint breakage

2016-02-08 Thread Ihar Hrachyshka

Gareth  wrote:


Thanks for all!

However, I'm developing on stable/liberty and it failed when running
"tox -e pep8" because of this issue. I found that pylint==1.4.5 is used
on the master branch. Could we cherry-pick this test-requirements update
back to stable/liberty?


Try constrained versions of tox targets: pep8-constraints,  
py27-constraints, etc. They will hopefully work.
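
For example, instead of the plain targets one would run:

  tox -e pep8-constraints
  tox -e py27-constraints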


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Thierry Carrez

Jay Pipes wrote:

tl;dr
=

I have long thought that the OpenStack Summits have become too
commercial and provide little value to the software engineers
contributing to OpenStack.

I propose the following:

1) Separate the design summits from the conferences
2) Hold only a single OpenStack conference per year
3) Return the design summit to being a low-key, low-cost working event
[...]


I agree with most of the things that have been said so far. I think the 
upstream community can't really get its work done in the current 
setting, and that it's too costly for companies to send most of their 
developers to classy hotels in expensive cities. I therefore think it 
would be beneficial to separate the events.


I agree that a separated design summit should be in lower-cost venues 
and smaller cities. But I don't think that the "OpenStack contributor 
community" can or should directly organize them. I happen to have a foot 
on both sides, and I can tell you organizing those events is extremely 
time consuming. I know exactly who would end up with the burden of 
organizing those events in the end -- and those are the same overworked 
cross-project core of developers that fill all the gaps in OpenStack.


I don't want to risk even more burnout from that group by forcing them 
into the craziness of organizing such events every 6 months. I don't 
think the issue with the Design Summit is that the Foundation staff and 
FNTech organizes them. It's mostly my team working on it on the staff 
side -- and I think Mike Perez and myself qualify as "OpenStack 
contributor community". The issue is with the bundling of the two 
events, and that can be fixed while still letting a specialized event 
team do all the heavy lifting.


The timing of this thread is unfortunate, since after Tokyo I have 
actually been working on a solution for separation myself, and the 
Foundation is finalizing a strawman proposal that should soon be pushed 
for comments to the community. It involves changes to the main 
conference event as well.


So please stand by while we finalize that: I think you will like the end 
result.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-08 Thread Timofei Durakov
Hi,

For live-migration reporting I'd rather go with real-time stats queried
from the compute node, instead of reporting this data to the db first. While
the number of rpc requests/db updates per migration is relatively small, the
total number of such requests depends on the number of active migrations, and
real-time data from compute keeps it down. Not every migration will prompt an
operator to gather statistics, and each one that does will require only 2 rpc
calls per request instead of 2 rpc calls and a db write every 3/5/etc. seconds.

Timofey.

On Sun, Feb 7, 2016 at 10:31 PM, Jay Pipes  wrote:

> On 02/04/2016 11:02 PM, Bhandaru, Malini K wrote:
>
>> Another thought, for such ephemeral/changing data, such as progress,
>> why not save the information in the cache (and flush to database at a
>> lower rate), and retrieve for display to active listeners/UI from the
>> cache. Once complete or aborted, of course flush the cache.
>>
>> Also should we provide a "verbose flag", that is only capture
>> progress information when requested? That is when a human user might
>> be issuing the command from the cli or GUI tool.
>>
>
> I agree with you, Malini, on the above suggestion that there is some doubt
> as to the value of saving this temporal data to the database.
>
> Why not just have an on-demand model that simply routes the request for
> progress information directly to the compute node and sends the progress
> amount back directly to the nova-api service instead of going to the
> database at all?
>
> Another alternative would be to use a push model instead of a poll model,
> but that would require a pretty significant change to the code...
>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Fedor Zhadaev for the fuel-menu-core team

2016-02-08 Thread Aleksey Kasatkin
+1


Aleksey Kasatkin


On Mon, Feb 8, 2016 at 12:04 PM, Tatyana Leontovich <
tleontov...@mirantis.com> wrote:

> +1
>
>
>
> On Mon, Feb 8, 2016 at 11:54 AM, Igor Kalnitsky 
> wrote:
>
>> Hey Fuelers,
>>
>> I'd like to nominate Fedor Zhadaev for the fuel-menu-core team.
>> Fedor's doing good review with detailed feedback [1], and has
>> contributed over 20 patches during the Mitaka release cycle [2].
>>
>> Fuel Cores, please reply back with +1/-1.
>>
>> - igor
>>
>> [1] http://stackalytics.com/?module=fuel-menu=mitaka
>> [2]
>> http://stackalytics.com/?module=fuel-menu=mitaka_id=fzhadaev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Improving SSL/TLS in OpenStack-Ansible

2016-02-08 Thread Jesse Pretorius
On 15 January 2016 at 14:18, Major Hayden  wrote:
>
>
> I've attended some of the OpenStack Security Mid-Cycle meeting this week
> and Robert Clark was kind enough to give me a deep dive on the Anchor
> project[1].  We had a good discussion around my original email thread[2] on
> improving SSL/TLS certificates within OpenStack-Ansible (OSA) and we went
> over my proposed spec[3] on the topic.
>
> Jean-Philippe Evrard helped me assemble an etherpad[4] this morning where
> we brainstormed some problem statements, user stories, and potential
> solutions for improving the certificate experience in OSA.  It seems like
> an ephemeral PKI solution, like Anchor, might provide a better certificate
> experience for users while also making the revocation and issuance process
> easier.
>
> I'd really like to get some feedback from the OpenStack community on our
> current brainstorming efforts.  We've enumerated a few use cases and user
> stories already, but we've probably missed some other important ones.  Feel
> free to stop by #openstack-ansible or join us in the etherpad.
>
> Thanks!
>
> [1] https://wiki.openstack.org/wiki/Security/Projects/Anchor
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/077877.html
> [3] https://review.openstack.org/#/c/243332/
> [4] https://etherpad.openstack.org/p/openstack-ansible-tls-improvement


Major - fantastic initiative.

One thing I'd like to say is to remind you that OSA is a toolbox of roles
and plays that a deployer can mix and match to suit the needs of the
specific environment.

This effectively means that we could easily curate or consume Ansible roles
which do any of the options outlined as possible solutions. What we care
about mostly is:

1 - consistency in the deployment of certificates and configuration of
services (consistency improves ease of use)
2 - simplicity of deployment - for example making it easy to apply one set
of certs across the board
3 - modularity - retain the ability to apply a different cert to any one
service, and the ability to have TLS/SSL on for some and not for others
4 - testing - we need to be able to gate test services individually, and in
end-to-end scenarios. Each possible scenario costs time, effort and gate
resources - so ideally we want to reduce our first-class tested scenarios
but implement everything in such a way that we have a high level of
confidence that replacing one certificate generation method with another
will not change the result.

Thanks to both yourself and Jean-Philippe for taking this on!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Next vitrage meeting

2016-02-08 Thread Afek, Ifat (Nokia - IL)
Hi,

The next Vitrage weekly meeting will be tomorrow, Wednesday, at 9:00 UTC, on 
#openstack-meeting-3 channel.

Agenda:
* Current status
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Thanks, 
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Zuul memory leak

2016-02-08 Thread Michael Still
On Tue, Feb 9, 2016 at 4:59 AM, Joshua Hesketh 
wrote:

> On Thu, Feb 4, 2016 at 2:44 AM, James E. Blair 
> wrote:
>>
>> On the subject of clearing the cache more often, I think we may not want
>> to wipe out the cache more often than we do now -- in fact, I think we
>> may want to look into ways to keep from doing even that, because
>> whenever we reload now, Zuul slows down considerably as it has to query
>> Gerrit again for all of the data previously in its cache.
>>
>
> I can see a lot of 3rd parties or simpler CIs not needing to reload zuul
> very often, so this cache would never get cleared. Perhaps cached objects
> should have an expiry time (of a day or so) and can be cleaned up
> periodically? Additionally if clearing the cache on a reload is causing
> pain maybe we should move the cache into the scheduler and keep it between
> reloads?
>

Do you guys use oslo at all? I ask because the oslo memcache stuff does
exactly this, so it should be trivial to implement if you don't mind
depending on oslo.
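
For illustration, the expiry behaviour described above could look roughly
like the following plain-Python sketch (independent of oslo; all names here
are hypothetical):

import time


class ExpiringCache(object):
    """Cache whose entries expire after max_age seconds."""

    def __init__(self, max_age=86400):
        self.max_age = max_age
        self._data = {}  # key -> (value, timestamp)

    def set(self, key, value):
        self._data[key] = (value, time.time())

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, stamp = entry
        if time.time() - stamp > self.max_age:
            # entry is stale, drop it
            del self._data[key]
            return default
        return value

    def prune(self):
        """Drop all expired entries; call this periodically."""
        cutoff = time.time() - self.max_age
        for key in list(self._data):
            if self._data[key][1] < cutoff:
                del self._data[key]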

Hope this helps,
Michael

-- 
Rackspace Australia
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[Openstack] I can't get nova-novncproxy to listen on the specified address

2016-02-08 Thread Ludwig Tirazona
Hello Everyone,

   I just can't get my nova-novncproxy to listen only on my public IP
address. What am I doing wrong?


Here is the relevant part of my nova.conf:

[vnc]
enabled = true
vncserver_listen = 
vncserver_proxyclient_address = 
novncproxy_host = 
#novncproxy_base_url = http://my.url:6080/vnc_auto.html
novnproxy_port = 6080


Even after doing a "service nova-novncproxy restart", here is the output of
my netstat:

tcp0  0 0.0.0.0:60800.0.0.0:*   LISTEN
nova

Here is my ps aux:

21035  0.0  0.0 232644 49868 ?Ss   15:16   0:00 /usr/bin/python
/usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf
--log-file=/var/log/nova/nova-novncproxy.log


Any help or pointers are greatly appreciated. Thanks!
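
For comparison, the relevant options with their usual spelling look roughly
like this (the bind address is a placeholder, and depending on the release
these options may need to live in [DEFAULT] rather than [vnc]):

  [vnc]
  enabled = true
  novncproxy_host = 203.0.113.10
  # note the spelling: novncproxy_port, not novnproxy_port
  novncproxy_port = 6080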
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] RAID / stripe block storage volumes

2016-02-08 Thread Joe Topjian
Yep. Don't get me wrong -- I agree 100% with everything you've said
throughout this thread. Applications that have native replication are
awesome. Swift is crazy awesome. :)

I understand that some may see the use of mdadm, Cinder-assisted
replication, etc as supporting "pet" environments, and I agree to some
extent. But I do think there are applicable use-cases where those services
could be very helpful.

As one example, I know of large cloud-based environments which handle very
large data sets and are entirely stood up through configuration management
systems. However, due to the sheer size of data being handled, rebuilding
or resyncing a portion of the environment could take hours. Failing over to
a replicated volume is instant. In addition, being able to both stripe and
replicate goes a very long way in making the most out of commodity block
storage environments (for example, avoiding packing problems and such).

Should these types of applications be reading / writing directly to Swift,
HDFS, or handling replication themselves? Sure, in a perfect world. Does
Gluster fill all gaps I've mentioned? Kind of.

I guess I'm just trying to survey the options available for applications
and environments that would otherwise be very flexible and resilient if it
wasn't for their awkward use of storage. :)
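
For reference, the mdadm approach from inside an instance with two attached
volumes looks roughly like this (device names are hypothetical and vary by
hypervisor):

  # mirror two attached volumes into a single md device
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
  mkfs.ext4 /dev/md0
  mount /dev/md0 /mnt/data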

On Mon, Feb 8, 2016 at 6:18 PM, Robert Starmer  wrote:

> Besides, wouldn't it be better to actually do application layer backup
> restore, or application level distribution for replication?  That
> architecture at least lets the application determine and deal with corrupt
> data transmission, rather than the DRBD-like model where if you corrupt one
> data-set, you corrupt them all...
>
> Hence my comment about having some form of object storage (SWIFT is
> perhaps even a good example of this architecture, the proxy replicates,
> checks MD5, etc. to verify good data, rather than just replicating blocks
> of data).
>
>
>
> On Mon, Feb 8, 2016 at 7:15 PM, Robert Starmer  wrote:
>
>> I have not run into anyone replicating volumes or creating redundancy at
>> the VM level (beyond, as you point out, HDFS, etc.).
>>
>> R
>>
>> On Mon, Feb 8, 2016 at 6:54 PM, Joe Topjian  wrote:
>>
>>> This is a great conversation and I really appreciate everyone's input.
>>> Though, I agree, we wandered off the original question and that's my fault
>>> for mentioning various storage backends.
>>>
>>> For the sake of conversation, let's just say the user has no knowledge
>>> of the underlying storage technology. They're presented with a Block
>>> Storage service and the rest is up to them. What known, working options
>>> does the user have to build their own block storage resilience? (Ignoring
>>> "obvious" solutions where the application has native replication, such as
>>> Galera, elasticsearch, etc)
>>>
>>> I have seen references to Cinder supporting replication, but I'm not
>>> able to find a lot of information about it. The support matrix[1] lists
>>> very few drivers that actually implement replication -- is this true or is
>>> there a trove of replication docs that I just haven't been able to find?
>>>
>>> Amazon AWS publishes instructions on how to use mdadm with EBS[2]. One
>>> might interpret that to mean mdadm is a supported solution within EC2 based
>>> instances.
>>>
>>> There are also references to DRBD and EC2, though I could not find
>>> anything as "official" as mdadm and EC2.
>>>
>>> Does anyone have experience (or know users) doing either? (specifically
>>> with libvirt/KVM, but I'd be curious to know in general)
>>>
>>> Or is it more advisable to create multiple instances where data is
>>> replicated instance-to-instance rather than a single instance with multiple
>>> volumes and have data replicated volume-to-volume (by way of a single
>>> instance)? And if so, why? Is a lack of stable volume-to-volume replication
>>> a limitation of certain hypervisors?
>>>
>>> Or has this area just not been explored in depth within OpenStack
>>> environments yet?
>>>
>>> 1: https://wiki.openstack.org/wiki/CinderSupportMatrix
>>> 2: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
>>>
>>>
>>> On Mon, Feb 8, 2016 at 4:10 PM, Robert Starmer  wrote:
>>>
 I'm not against Ceph, but even 2 machines (and really 2 machines with
 enough storage to be meaningful, e.g. not the all blade environments I've
 built some o7k  systems on) may not be available for storage, so there are
 cases where that's not necessarily the solution. I built resiliency in one
 environment with a 2 node controller/Glance/db system with Gluster, which
 enabled enough middleware resiliency to meet the customers recovery
 expectations. Regardless, even with a cattle application model, the
 infrastructure middleware still needs to be able to provide some level of
 resiliency.

 But we've kind-of wandered off of the original question. I think 

Re: [openstack-dev] Dynamically adding Extra Specs

2016-02-08 Thread Joshua Harlow

Dhvanan Shah wrote:

Hey Jay!

I was looking at implementing a few scheduling algorithms of my own
natively into OpenStack, and for that I went through the nova-scheduler.
After going through the scheduler, I felt that it was not very easy to
extend it or add new scheduling algorithms to it. The only things I felt
I could change, or that seemed designed to be extended, were the filters
and weighers, and implementing new scheduling algorithms with just these 2
knobs was a little hard. I did change the code in the filter_scheduler to
get some basic algorithms running, like first fit and next fit, in addition
to the spreading and stacking which was already present. But going beyond
that to implement more complex algorithms was much harder, and I would have
had to change a lot of code in different places, which could break things
as a side effect and didn't seem clean. I might be wrong and might not have
understood things right, please correct me if so.

To give an example of what I mean by slightly more complex scheduling
algorithms: a subset-matching algorithm - one that schedules multiple
heterogeneous requests by picking out the subset of requests that best
fits a host (or hosts), which would improve utilization. The prerequisite
for this is having multiple heterogeneous requests lined up to be
scheduled. An algorithm of this kind isn't easy to implement in OpenStack.

So the workaround I'm working on for implementing different scheduling
algorithms is to build a scheduling wrapper outside of the OpenStack
architecture. The user interacts with this wrapper; inside it, I get the
host details from the database and, based on the algorithm I want, the
wrapper chooses the host for the request and gives out a VM : Host mapping
(the wrapper does the same sanity checks that the filters do, to check
whether the host can accommodate or handle the request). Along with the
request, I also want to pass this mapping so that the scheduler can assign
the request to the host given in the mapping. I've written a filter that
filters out all hosts except the one I sent, and this is how I make sure
the request gets placed on the host I passed. I have come up with a hack
to pass the host to the scheduler, but it is not quite elegant.


Why use the filter mechanism at all?

Just provide a whole new scheduler that replaces the pluggable point 
that already exists (a class with ``def select_destinations(self, 
context, spec_obj)``)?
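
As a rough sketch of that plug point (schematic only - the exact base class,
return format and configuration hook vary by release and are assumptions
here):

class FirstFitScheduler(object):
    # A real driver would subclass the scheduler driver base class shipped
    # with the targeted nova release and return whatever host/node structure
    # that release expects; this only shows the shape of the entry point.

    def __init__(self, host_manager):
        # the host manager is assumed to expose the candidate hosts
        self.host_manager = host_manager

    def select_destinations(self, context, spec_obj):
        """Pick the first host that can fit each requested instance."""
        hosts = list(self.host_manager.get_all_host_states(context))
        selections = []
        for _ in range(spec_obj.num_instances):
            for host in hosts:
                if self._fits(host, spec_obj):
                    selections.append(host)
                    break
        return selections

    def _fits(self, host, spec_obj):
        # placeholder capacity check
        return (host.free_ram_mb >= spec_obj.memory_mb and
                host.vcpus_total - host.vcpus_used >= spec_obj.vcpus)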


Although it seems you want to put a request into SCHEDULING_WAIT (or 
something like that) and then, when you have enough requests to batch up, 
move from SCHEDULING_WAIT -> SCHEDULING (something akin to delayed 
scheduling once you have reached your batch size). Is something like 
that correct?




Would be great to have your input on the same!

On Mon, Feb 8, 2016 at 12:51 AM, Jay Pipes > wrote:

Apologies for the delayed responses. Comments inline.

On 01/27/2016 02:29 AM, Dhvanan Shah wrote:

Hey Jay!

Thanks for the clarification. There was another thing that I
wanted to
know, is there any provision to pass extra arguments or some extra
specifications along with the VM request to nova. To give you some
context, I wanted to pass a host:vm mapping to the nova
scheduler for
its host selection process, and I'm providing this mapping from
outside
of the openstack architecture.


Why do you want to do this? The scheduler is the thing that sets the
host -> vm mapping -- that's what the process of scheduling does.

>  So I need to send this information along

with the request to the scheduler. One way of doing this was
creating
new flavors with their extra specification as different hosts,
but that
would lead to as you pointed out earlier a "flavor explosion"
problem.

So is there a way to pass some extra arguments or some additional
information to nova.


Depends what exactly you are trying to pass to Nova. Could you give
some more information about your use case?

Thanks!
-jay




--
Dhvanan Shah

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] documenting configuration option segregation between services and agents

2016-02-08 Thread Hirofumi Ichihara

On 2016/02/08 18:17, Ihar Hrachyshka wrote:

Kevin Benton  wrote:


Propose it as a devref patch!


+1. Has it happened already?

Here https://review.openstack.org/#/c/275381/






On Wed, Jan 27, 2016 at 12:30 PM, Dustin Lundquist 
 wrote:
We should expand services_and_agents devref to describe how and why 
configuration options should be segregated between services and 
agents. I stumbled into this recently while trying to remove a 
confusing duplicate configuration option [1][2][3]. The present 
separation appears to be 'tribal knowledge', and not consistently 
enforced. So I'll take a shot at explaining the status quo as I 
understand it and hopefully some seasoned contributors can fill in 
the gaps.


=BEGIN PROPOSED DEVREF SECTION=
Configuration Options
-

In addition to database access, configuration options are segregated 
between neutron-server and agents. Both services and agents may load 
the main neutron.conf, since this file should contain the Oslo messaging 
configuration for internal Neutron RPCs and may contain host-specific 
configuration such as file paths. In addition, neutron.conf contains 
the database, keystone and nova credentials and endpoints strictly 
for use by neutron-server.


In addition, neutron-server may load a plugin-specific configuration 
file, but the agents should not. As the plugin configuration consists 
primarily of site-wide options, and the plugin provides the persistence 
layer for Neutron, agents should be instructed to act upon these values 
via RPC.


Each individual agent may have its own configuration file. This file 
should be loaded after the main neutron.conf file, so the agent 
configuration takes precedence. The agent-specific configuration may 
contain options which vary between hosts in a Neutron deployment, such 
as the external_network_bridge for an L3 agent. If any agent requires 
access to additional external services beyond the Neutron RPC, those 
endpoints should be defined in the agent-specific configuration file 
(e.g. the nova metadata service for the metadata agent).



==END PROPOSED DEVREF SECTION==
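
To illustrate the loading order described in the proposed section, an agent
is typically started with both files on its command line, with options in the
later file taking precedence, e.g.:

  neutron-l3-agent --config-file /etc/neutron/neutron.conf \
                   --config-file /etc/neutron/l3_agent.ini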

Disclaimers: this description is informed my by own experiences 
reading existing documentation and examining example configurations 
including various devstack deployments. I've tried to use RFC style 
wording: should, may, etc.. I'm relatively confused on this subject, 
and my goal in writing this is to obtain some clarity myself and 
share it with others in the form of documentation.



[1] https://review.openstack.org/262621
[2] https://bugs.launchpad.net/neutron/+bug/1523614
[3] https://review.openstack.org/268153

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton
__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] pid=host

2016-02-08 Thread Steven Dake (stdake)
Michal,

You listed steps to reproduce but it wasn't clear if docker 1.10 kills vms
or keeps them alive from your description.  From our discussion today, it
sounded as if docker 1.10, docker 1.9, and docker 1.8.2 have different
behaviors on this front.  Could you expand?

Dan had requested that we keep the discussion in the bugzilla for tracking
purposes.  Would you mind creating a bugzilla account and adding your data
to the bug?

Regards
-steve


On 2/8/16, 12:15 PM, "Michał Jastrzębski"  wrote:

>Hey,
>
>So quick steps to reproduce this:
>
>0. install docker 1.10
>1. Deploy kolla
>2. Run VM
>3. On compute host - ps aux | grep qemu, should show your vm process
>4. docker rm -f nova_libvirt
>5. ps aux | grep qemu should still show running vm
>6. re-deploy nova_libvirt
>7. docker exec -it nova_libvirt virsh list - should show running vm
>
>Cheers,
>Michal
>
>On 8 February 2016 at 07:32, Steven Dake (stdake) 
>wrote:
>> Hey folks,
>>
>> I know we have been through some changes with how pid=host works.  I'd
>>like
>> to get to the bottom of this, so we can either add the features we need
>>to
>> docker, or say "all is good".
>>
>> Here is the last quote from this bugzilla where Red Hat in general is
>> interested in the same behavior as the Kolla team has.  They have many
>> people embedded in the Docker and Kubernetes communities, so it may make
>> sense to let them do the work there :)
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1302807
>>
>> Mrunal Patel 2016-02-08 06:10:15 EST
>>
>> docker tracks the pids in a container using cgroups and hence all
>>processes
>> are killed even though we use pid=host. I believe we had probably
>>prompted
>> them to add this behavior in the first place.
>>
>>
>> This statement appears at odds with what was tested on IRC a few days
>>back
>> with docker 1.10.  It is possible docker 1.10 had a regression here, in
>> which case if they fix it, we will be back to a dead VM during libvirt
>> upgrade which we don't want.
>>
>> Can folks that tested this weigh in on the testing that was done on that
>> bugzilla with distro type, docker version, docker-py version, and
>>results.
>> Unfortunately you will have to create a Red Hat bugzilla account, but
>>if you
>> don't wish to do that, please send the information on list after
>>reviewing
>> the bugzilla and I'll submit it on your behalf.
>>
>> The outcomes I would be happy with are:
>>
>> * docker will never change the semantics of host=pid mode for killing
>>child
>> processes
>>
>> * Or alternatively docker will add a feature such as host=pidnochildkill
>> which Red Hat can spearhead
>>
>> Thoughts and comments welcome.
>>
>> Regards
>>
>> -steve
>>
>>
>>
>>
>>
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposed Agenda for Kolla Midcycle

2016-02-08 Thread Steven Dake (stdake)
Hey folks,

The agenda generation happened approximately a month ago over a 3 week period, 
and then was voted on. I left the actual creation of the agenda until today in 
case any last minute pressing issues came in.  I took some suggestions from the 
copious notes in the Etherpad regarding pair programming for 80 minutes for 
upgrades and knocking out some reviews related only to upgrades.

The agenda is here:
https://etherpad.openstack.org/p/kolla-mitaka-midcycle

Please don't edit the agenda - we can discuss in the morning and see if it 
needs fine tuning to fit folks schedules and come to a common agreement as a 
group.

Folks on the west coast need to leave around 3:30-4:00PM (me included) on 
Wednesday to catch flights home which get them in at midnight ftl.  The 
midcycle can continue past this time, but please close up shop at 5pm and make 
copious notes for the folks that are on budget constraints.

Ryan, if you're up for facilitating Thursday from 3:30 onward and you will be 
here, I think that makes sense :)

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] problems on periodic-neutron

2016-02-08 Thread Andreas Jaeger

On 02/09/2016 02:22 AM, fumihiko kakuma wrote:

On Tue, 09 Feb 2016 09:11:22 +0900
fumihiko kakuma  wrote:


On Mon, 8 Feb 2016 14:20:45 -0500
Matthew Treinish  wrote:


There is nothing wrong with openstack-health it's behaving as currently
expected. There is a known limitation with the dashboard right now where
results aren't counted if the job failure occurs before devstack starts. If your
jobs are running but never even getting to devstack openstack-health (well
really the subunit2sql db) will not have any data on those runs. Once you fix
these jobs to actually start running devstack (or anything else which generates
a subunit stream in the expected place) it'll appear on the dashboard.

I wrote a brief blog post a little while ago on how openstack-health works and
some of the features it has:

http://blog.kortar.org/?p=279

The current limitations section explains this issue in a bit more detail.

-Matt Treinish


Thank you for the reply.

OK, that explains the issue for the periodic-neutron pipeline.
But what about the periodic pipeline? It also does not seem to work.

http://status.openstack.org/openstack-health/#/g/build_queue/periodic

Currently my jobs are required to use the periodic pipeline,
so I want to know whether it works on the periodic pipeline.

https://review.openstack.org/#/c/276317/

Thanks,


Sorry, I did not check the details of the jobs on the periodic pipeline.
They do not seem to run devstack.
So the openstack-health dashboard will not display a graph for the
periodic pipeline.

Is that correct?


Yes, the health dashboard currently covers only devstack-based jobs.

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] Announcing a simple new tool: git-restack

2016-02-08 Thread Thomas Goirand
On 02/03/2016 01:53 AM, James E. Blair wrote:
> Hi,
> 
> I'm pleased to announce a new and very simple tool to help with managing
> large patch series with our Gerrit workflow.
> 
> In our workflow we often find it necessary to create a series of
> dependent changes in order to make a larger change in manageable chunks,
> or because we have a series of related changes.  Because these are part
> of larger efforts, it often seems like they are even more likely to have
> to go through many revisions before they are finally merged.  Each step
> along the way reviewers look at the patches in Gerrit and leave
> comments.  As a reviewer, I rely heavily on looking at the difference
> between patchsets to see how the series evolves over time.
> 
> Occasionally we also find it necessary to re-order the patch series, or
> to include or exclude a particular patch from the series.  Of course the
> interactive git rebase command makes this easy -- but in order to use
> it, you need to supply a base upon which to "rebase".  A simple choice
> would be to rebase the series on master, however, that creates
> difficulties for reviewers if master has moved on since the series was
> begun.  It is very difficult to see any actual intended changes between
> different patch sets when they have different bases which include
> unrelated changes.
> 
> The best thing to do to make it easy for reviewers (and yourself as you
> try to follow your own changes) is to keep the same "base" for the
> entire patch series even as you "rebase" it.  If you know how long your
> patch series is, you can simply run "git rebase -i HEAD~N" where N is
> the patch series depth.  But if you're like me and have trouble with
> numbers other than 0 and 1, then you'll like this new command.
> 
> The git-restack command is very simple -- it looks for the most recent
> commit that is both in your current branch history and in the branch it
> was based on.  It uses that as the base for an interactive rebase
> command.  This means that any time you are editing a patch series, you
> can simply run:
> 
>   git restack
> 
> and you will be placed in an interactive rebase session with all of the
> commits in that patch series staged.  Git-restack is somewhat
> branch-aware as well -- it will read a .gitreview file to find the
> remote branch to compare against.  If your stack was based on a
> different branch, simply run:
> 
>   git restack 
> 
> and it will use that branch for comparison instead.
> 
> Git-restack is on pypi so you can install it with:
> 
>   pip install git-restack
> 
> The source code is based heavily on git-review and is in Gerrit under
> openstack-infra/git-restack.
> 
> https://pypi.python.org/pypi/git-restack/1.0.0
> https://git.openstack.org/cgit/openstack-infra/git-restack
> 
> I hope you find this useful,
> 
> Jim

Hi!

Thanks for that.

Should I upload it to Debian? How many of you want it there (and as a
consequence, in the next Ubuntu LTS if I upload it in time...)? Is it
mature enough?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Brian Curtin
On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:
> I would love to see the OpenStack contributor community take back the design
> summit to its original format and purpose and decouple it from the OpenStack
> Summit's conference portion.
>
> I believe the design summits should be organized by the OpenStack
> contributor community, not the OpenStack Foundation and its marketing and
> event planning staff.

As someone who spent years organizing PyCon as a volunteer from the
Python community, with four of those years in a row taking about 8
solid months of pre-conference effort, not to mention the on-site
effort to run a volunteer conference of that size [0]...I would
suggest even longer and harder thought before stretching a community
like this even more thinly. Things should change, but probably not the
"who's doing the work" aspect.

[0] PyCon is around 2500 (though with one paid staffer now), so maybe
larger than the target for the event you're thinking of, but it was
still a massive effort at the crowds of 2000, 1500, 1000, etc. and
being fully community driven.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] v2 image upload from url

2016-02-08 Thread Pankaj Mishra
Hi Kevin,

I am executing the commands from glance and am referring to these commands.

glance --os-image-api-version 2 location-add --url  [--metadata
] 


and

glance --os-image-api-version 2 md-resource-type-associate 
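
For reference, a fully spelled-out form of the first of those commands looks
roughly like this (the URL and <IMAGE_ID> below are purely hypothetical
placeholders for a reachable image file and the UUID of an existing image):

  glance --os-image-api-version 2 location-add \
      --url http://example.com/images/cirros-0.3.4-x86_64-disk.img \
      <IMAGE_ID>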


Thanks for supporting us.

Thanks & Regards,
Pankaj

On Mon, Feb 8, 2016 at 6:34 PM,  wrote:

>
>> Sorry, can you give an example of the exact command you are using, please?
>> On 5 Feb 2016 22:45, "Fox, Kevin M"  wrote:
>>
>> We've been using the upload image from http url for a long time and when
>>> we upgraded to liberty we noticed it broke because the client's
>>> defaulting
>>> to v2 now. How do you do image upload via http with v2? Is there a
>>> different command/method?
>>>
>>
> Is this really a Glance question? If so, the Glance v2 API doesn't
> currently provide
> a way to upload an image by specifying a URL (ie there's no equivalent to
> v1's copy-from).
>
>
>>> Thanks,
>>> Kevin
>>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] pid=host

2016-02-08 Thread Steven Dake (stdake)
Hey folks,

I know we have been through some changes with how pid=host works.  I'd like to 
get to the bottom of this, so we can either add the features we need to docker, 
or say "all is good".

Here is the last quote from this bugzilla where Red Hat in general is 
interested in the same behavior as the Kolla team has.  They have many people 
embedded in the Docker and Kubernetes communities, so it may make sense to let 
them do the work there :)

https://bugzilla.redhat.com/show_bug.cgi?id=1302807

Mrunal Patel 2016-02-08 06:10:15 EST

docker tracks the pids in a container using cgroups and hence all processes are 
killed even though we use pid=host. I believe we had probably prompted them to 
add this behavior in the first place.


This statement appears at odds with what was tested on IRC a few days back with 
docker 1.10.  It is possible docker 1.10 had a regression here, in which case 
if they fix it, we will be back to a dead VM during libvirt upgrade which we 
don't want.

Can folks that tested this weigh in on the testing that was done on that 
bugzilla with distro type, docker version, docker-py version, and results.  
Unfortunately you will have to create a Red Hat bugzilla account, but if you 
don't wish to do that, please send the information on list after reviewing the 
bugzilla and I'll submit it on your behalf.

The outcomes I would be happy with are:

* docker will never change the semantics of host=pid mode for killing child 
processes

* Or alternatively docker will add a feature such as host=pidnochildkill which 
Red Hat can spearhead

Thoughts and comments welcome.

Regards

-steve





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-08 Thread Flavio Percoco

On 06/02/16 12:12 +0800, Thomas Goirand wrote:

On 02/05/2016 06:57 PM, Thierry Carrez wrote:

Hi everyone,

Even before OpenStack had a name, our "Four Opens" principles were
created to define how we would operate as a community. The first open,
"Open Source", added the following precision: "We do not produce 'open
core' software". What does this mean in 2016 ?

Back in 2010 when OpenStack was started, this was a key difference with
the other open source cloud platform (Eucalyptus) which was following an
Open Core strategy with a crippled community edition and an "enterprise
version". OpenStack was then the property of a single entity
(Rackspace), so giving strong signals that we would never follow such a
strategy was essential to form a real community.

Fast-forward today, the open source project is driven by a non-profit
independent Foundation, which could not even do an "enterprise edition"
if it wanted to. However, member companies build "enterprise products"
on top of the Apache-licensed upstream project. And we have drivers that
expose functionality in proprietary components. So what does it mean to
"not do open core" in 2016 ? What is acceptable and what's not ? It is
time for us to refresh this.

My personal take on that is that we can draw a line in the sand for what
is acceptable as an official project in the upstream OpenStack open
source effort. It should have a fully-functional, production-grade open
source implementation. If you need proprietary software or a commercial
entity to fully use the functionality of a project or getting serious
about it, then it should not be accepted in OpenStack as an official
project. It can still live as a non-official project and even be hosted
under OpenStack infrastructure, but it should not be part of
"OpenStack". That is how I would interpret "no open core" in OpenStack
2016.

Of course, the devil is in the details, especially around what I mean by
"fully-functional" and "production-grade". Is it just an API/stability
thing, or does performance/scalability come into account ? There will
always be some subjectivity there, but I think it's a good place to start.

Comments ?


As I understand it, Poppy is a kind of middleware that does network access (a
"wrapper API"), right? This is comparable to, let's say, Pidgin, which
accesses proprietary services like Google talk, Yahoo messenger and
such. I have no problem with such a software, which I consider
completely free, even if they access a non-opened reverse engineered
network protocol.

The problem, to me, is different. It is more related to what kind of
value Poppy brings to OpenStack as a whole. And to me, that's where the
problem is. It's very low value, because its area is very far from what
we do: bring a fully open cloud. And Poppy only publishes to external
(commercial) service providers, it doesn't publish things within let's
say a multi-datacenter OpenStack deployment through a VM image it would use.


Providing a driver that sits on top of an open source solution (which apparently
doesn't exist in this case) doesn't mean everyone will deploy it on top of it.
People could choose non-open technologies and that doesn't - certainly,
shouldn't - change the way Poppy works. The same applies to every other
service that does *provisioning*, which is exactly what Poppy does.

I fail to see how Poppy doesn't provide a provisioning API to CDN
technologies/services that can be multi-tenant/multi-datacenter. If it doesn't,
then I believe the issue is in Poppy's API and not caused by the lack of open
source CDN solutions.


Moreover, its requirement of Cassandra DB is a no-go (this has already
been discussed in another thread: Cassandra doesn't work well on OpenJDK
at all, which makes it non-free as it requires a Java interpreter which
is non-free itself). If I had to upload Poppy to Debian, it would be
uploaded to contrib (which is the area where free software requiring
non-free software to run or be built are uploaded). Contrib isn't
officially part of Debian.



So, this, I believe, is certainly an issue but one we shouldn't be discussing in
this email thread.

[snip]

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Anita Kuno
On 02/07/2016 10:42 PM, Michael Still wrote:
> On Mon, Feb 8, 2016 at 1:51 AM, Monty Taylor  wrote:
> 
> [snip]
> 
> 
>> Fifth - if we do this, the real need for the mid-cycles we currently have
>> probably goes away since the summit week can be a legit wall-to-wall work
>> week.
>>
> 
> [snip]
> 
> Another reply to a specific point...
> 
> I disagree strongly here, at least in the Nova case. I feel Nova has been
> getting along much better and generally pulling in the same direction for
> the last few releases. I think one of the things we did to get there is the
> mid-cycles, which gave us more time to sync on the overall direction of
> Nova, as well as ensuring we start being honest at this point in the
> release cycle about what we're going to get done before we ship.

I agree with what Michael is saying here. My experience at the last two
Nova mid-cycles demonstrates the power of having time to listen. Being
able to disagree on the first day, think about the area of disagreement
on the second day and come to some form of resolution on the third day
is really important as a relationship dynamic.

Honesty comes when the environment is stable such that participants feel
supported enough to be vulnerable and admit when prior decisions or
positions did not result in the expected outcome. This is where real
interactions can grow.

I have found the Nova mid-cycles to be beneficial to helping the Nova
team grow together.

I'm not advocating for or against Jay's proposal, I am sharing my
observations on the point Michael is making.

Thank you,
Anita.

> 
> For Nova at least its really important to have an approximately milestone 2
> check point where we can decide what to defer and what to focus on.
> Otherwise we end up back in a place where we release a mish mash of half
> finished features.
> 
> Michael
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Anita Kuno
On 02/08/2016 04:09 AM, Thierry Carrez wrote:
> Jay Pipes wrote:
>> tl;dr
>> =
>>
>> I have long thought that the OpenStack Summits have become too
>> commercial and provide little value to the software engineers
>> contributing to OpenStack.
>>
>> I propose the following:
>>
>> 1) Separate the design summits from the conferences
>> 2) Hold only a single OpenStack conference per year
>> 3) Return the design summit to being a low-key, low-cost working event
>> [...]
> 
> I agree with most of the things that have been said so far. I think the
> upstream community can't really get its work done in the current
> setting,

Sounds like we have some agreement on this point.

> and that it's too costly for companies to send most of their
> developers to classy hotels in expensive cities. I therefore think it
> would be beneficial to separate the events.
> 
> I agree that a separated design summit should be in lower-cost venues
> and smaller cities. But I don't think that the "OpenStack contributor
> community" can or should directly organize them. I happen to have a foot
> on both sides, and I can tell you organizing those events is extremely
> time consuming. I know exactly who would end up with the burden of
> organizing those events in the end -- and those are the same overworked
> cross-project core of developers that fill all the gaps in OpenStack.
> 
> I don't want to risk even more burnout from that group by forcing them
> into the craziness of organizing such events every 6 months.

Thank you!

> I don't
> think the issue with the Design Summit is that the Foundation staff and
> FNTech organizes them.

Shout out to the Foundation staff and FNTech who do such an amazing job
of creating lovely events so consistently. Thank you, thank you. This
discussion, as Monty stated early in his post, in no way is any kind of
reflection of the quality of your work or your dedication to task, which
is evident in everything you do (including the fabulous feedback
sessions at the conclusion of the events). This is a reflection of our
fantastic success and growth. Thanks for your wonderful work!

> It's mostly my team working on it on the staff
> side -- and I think Mike Perez and myself qualify as "OpenStack
> contributor community".

I agree with that characterization, yes.


> The issue is with the bundling of the two
> events, and that can be fixed while still letting a specialized event
> team do all the heavy lifting.
> 
> The timing of this thread is unfortunate, since after Tokyo I have
> actually been working on a solution for separation myself, and the
> Foundation is finalizing a strawman proposal that should soon be pushed
> for comments to the community. It involves changes to the main
> conference event as well.
> 
> So please stand by while we finalize that: I think you will like the end
> result.
> 

Thank you Thierry and team. I look forward to the strawman and
ruminating on it once proposed.

Thank you,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] v2 image upload from url

2016-02-08 Thread Pankaj Mishra
Sorry, Looping to Kevin

Thanks,
Pankaj

On Mon, Feb 8, 2016 at 6:54 PM, Pankaj Mishra 
wrote:

> Hi Kevin,
>
> I am executing command from glance and referring this command.
>
> glance --os-image-api-version 2 location-add --url  [--metadata 
> ] 
>
>
> and
>
> glance --os-image-api-version 2 md-resource-type-associate  
> 
>
> Thanks for supporting us.
>
> Thanks & Regards,
> Pankaj
>
> On Mon, Feb 8, 2016 at 6:34 PM,  wrote:
>
>>
>>> Sorry, can you give an example of the exact command you are using,
>>> please?
>>> On 5 Feb 2016 22:45, "Fox, Kevin M"  wrote:
>>>
>>> We've been using the upload image from http url for a long time and when
 we upgraded to liberty we noticed it broke because the client's
 defaulting
 to v2 now. How do you do image upload via http with v2? Is there a
 different command/method?

>>>
>> Is this really a Glance question? If so, the Glance v2 API doesn't
>> currently provide
>> a way to upload an image by specifying a URL (ie there's no equivalent to
>> v1's copy-from).
>>
>>
 Thanks,
 Kevin

>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Should the playbook stop on certain tasks?

2016-02-08 Thread Major Hayden
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 02/08/2016 06:40 AM, Jesse Pretorius wrote:
> Darren's reply is interesting and perhaps worth consideration. As far as I 
> recall the security role adopted the STIG primarily because it was the only 
> openly available set of standards that didn't require licensing. If there are 
> other options to explore and ways to consume them, then perhaps that should 
> be an initiative for the Newton cycle?

That's right.  After direct conversations with CIS, we found that the licensing 
and restricted use of the security benchmarks wouldn't allow us to use them in 
OpenStack projects.  That could change in the future, but that's what exists at 
the moment.  The STIG was chosen since it's widely adopted and it is in the 
public domain.

It could be interesting to take an XCCDF/OVAL dump and try to implement it in 
an automated way with Ansible.  Creating the XCCDF XML isn't easy (nor fun), 
but that could be an option, too.
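
For illustration only (this is not something the security role does today,
and the file name and namespace below are assumptions), a first step could be
a small Python helper that walks an XCCDF benchmark and emits skeleton task
stubs that a human then turns into real Ansible tasks:

# Rough sketch: list the rules in an XCCDF benchmark as task stubs.
import xml.etree.ElementTree as ET

XCCDF_NS = "{http://checklists.nist.gov/xccdf/1.2}"   # adjust to the benchmark version

def rule_stubs(benchmark_path):
    tree = ET.parse(benchmark_path)
    for rule in tree.iter(XCCDF_NS + "Rule"):
        title = rule.findtext(XCCDF_NS + "title", default="(no title)")
        severity = rule.get("severity", "unknown")
        # Each stub is just a name/tag pair; the actual hardening task
        # still has to be written (and reviewed) by a human.
        yield {"name": "%s (%s)" % (title, rule.get("id")), "tags": [severity]}

if __name__ == "__main__":
    for stub in rule_stubs("U_RHEL_7_STIG_Benchmark-xccdf.xml"):
        print("- name: %s" % stub["name"])
        print("  tags: %s" % stub["tags"])
        print("  # TODO: implement the check/remediation")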

Darren's point about using vendor-provided hardening standards for Red Hat, 
Fedora, and Solaris is a good one.  This could be very useful if the multi-os 
support for OpenStack-Ansible comes together.  It's a shame that Ubuntu doesn't 
have a comprehensive XCCDF profile available as the other distributions do. :/

- --
Major Hayden
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJWuJmjAAoJEHNwUeDBAR+x7BYP/2Cv31QL7enVAXgEzHThc1Wb
ov3phFoEYCY8FFmcOoH6grSK3DsRPmPc33ma2I6bMMKWpz8j+RFGMfgPAaEEkGiq
d9Ak3bidFe+xYjlMlZkj+EQbIfv2JvZ5FA/eqyVuB1opRpALWnCzXxuSNoIPsbyZ
3u0QkMiNX9eo+Iz0Y3UHQbV61bZWmhz5xO08vo8vxeIhOgbv1Mq9fyRXcsay2tqY
K6nZMK2Tj+Y46hjQ1WR1KMY9HUPBujkhY+It/qtq9QIUPLduavVNzAV8dYRoPwu8
HPRLZA/abWW51VAvmdbr2ABqhDIkL/EKhPUgnKPn/IPWDQuEHa3SAJb4VHK3njz9
fcanJ2h59fY90cBwYz7g0BNbf2m8i1k4DZCdgMfqPzSQ7OdWze3aLd2Eh1AI5ihp
Zk+41Cj8yZPb6d0Ocsqt8voPYtbh0seXLvdiiVccESq8chGBBIvjasFsq1pFrIlH
VqEl13YHI/VlnoLcSHiYP7AYDdM1IXY722It7HDBwB7bKGWL/NaogH/putvlXTw8
J1NT3EnGg7G4p92X0qTiP4datB8AIfYSQhNgjVDJSwJwS2DMaMgrPJr5AWDZ5dfv
iJE4vUbZLI2etmghb4y9XXMMa2g6/zXxvcSQVCEE5v1FoVfLCtr4HuMFGFfhxBeB
KY8imLhpcXlLsJgodUSa
=0PLZ
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-08 Thread Sean Dague
On 02/08/2016 08:54 AM, Flavio Percoco wrote:

> Would our votes change if Poppy had support for OpenCDN (imagine it's being
> maintained) even if that solution is terrible?
> 
> I guess my question is: When do we start considering a project to be
> safe from
> an open source perspective? Because, having support for 1 opensource
> technology
> doesn't mean it provides enough (or good) open source ways to deploy the
> software. If the only supported open solution is *terrible* then
> deployers would
> be left with only commercial solutions to choose from.

There is a lot of difference between 1 and 0 options, even if 1 isn't a
great option. It also means the design has been informed by open
backends, and not just commercial products.

I think one could also consider Neutron originally started in such a
state. openvswitch was definitely not mature technology when this effort
started. You pretty much could only use commercial backends and have
anything work. The use in OpenStack exposed issues, people contributed
to proper upstream, things got much much better. We now have a ton of
open backends in Neutron. That would never have happened if the projects
started with 0.

The flip side is that CDN is a problem space where no consumers or ops
are interested in open backends. That's ok, however, if that's the case,
it doesn't feel OpenStack to me. Just being overlays for commercial
services seems a different thing than the rest of what's in OpenStack
today.

I think this is a place where there are lots of reasonable and different
points of view. And if it was clear cut there wouldn't be the need for
discussion.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal for having a service type registry and curated OpenStack REST API

2016-02-08 Thread gordon chung


On 08/02/2016 7:13 AM, Sean Dague wrote:
> On 02/07/2016 08:30 AM, Jay Pipes wrote:
>> On 02/04/2016 06:38 AM, Sean Dague wrote:
>>> What options do we have?
>> 
>>> 2) Have a registry of "common" names.
>>>
>>> Upside, we can safely use common names everywhere and not fear collision
>>> down the road.
>>>
>>> Downside, yet another contention point.
>>>
>>> A registry would clearly be under TC administration, though all the
>>> heavy lifting might be handed over to the API working group. I still
>>> imagine collision around some areas might be contentious.
>>
>> The above is my choice. I'd also like to point out that I'm only talking
>> about the *service* projects here -- i.e. the things that expose a REST
>> API.
>>
>> I don't care about a naming registry for non-service projects because
>> they do not expose a public user-facing API that needs to be curated and
>> protected.
>>
>> I would further suggest using the openstack/governance repo's
>> projects.yaml file for this registry. This is already under the TC's
>> administration and the API WG could be asked to work closely with the TC
>> to make recommendations on naming for all type:service projects in the
>> file. We should add a service:$type tag to the projects.yaml file and
>> that would serve as the registry for REST API services.
>>
>> We would need to institute this system by first tackling the current
>> areas of REST API functional overlap:
>>
>> * Ceilometer and Monasca are both type:service projects that are both
>> performing telemetry functionality in their REST APIs. The API WG should
>> work with both communities to come up with a 6-12 month plan for
>> creating a *single* OpenStack Telemetry REST API that both communities
>> would be able to implement separately as they see fit.
>
> 1) how do you imagine this happening?
>
> 2) is there buy in from both communities?

i'd be interested to see how much collaboration continues/exists after 
two overlapping projects are approved as 'openstack projects'. not sure 
how much collaboration happens since duplicating efforts partially 
implies "we don't want/need to collaborate".

>
> 3) 2 implementations of 1 API that is actually semantically the same is
> super hard. Doing so in the IETF typically takes many years.

++ and possibly leads to poor models for both implementations? or 
rewrites of backend(s).

>
> I feel like we spent a bunch of time a couple years ago putting projects
> on detailed improvement plans from the TC, and it really didn't go all that
> well. The outside-in approach without community buy-in mostly just gets
> combative and hostile.
>
>> * All APIs that the OpenStack Compute API currently proxies to other
>> service endpoints need to have a formal sunsetting plan. This includes:
>>
>>   - servers/{server_id}/os-interface (port interfaces)
>>   - images/
>>   - images/{image_id}/metadata
>>   - os-assisted-volume-snapshots/
>>   - servers/{server_id}/os-bare-metal-nodes/ (BTW, why is this a
>> sub-resource of /servers again?)
>>   - os-fixed-ips/
>>   - os-floating-ip-dns/
>>   - os-floating-ip-pools/
>>   - os-floating-ips/
>>   - os-floating-ips-bulk/
>>   - os-networks/
>>   - os-security-groups/
>>   - os-security-group-rules/
>>   - os-security-group-default-rules/
>>   - os-tenant-networks/
>>   - os-volumes/
>>   - os-snapshots/
>
> It feels really early to run down a path here on trying to build a
> registry for top level resources when we've yet to get service types down.
>
> Also, I'm not hugely sure why:
>
> GET /compute/flavors
> GET /dataprocessing/flavors
> GET /queues/flavors
>
> Is the worst thing we could be doing. And while I get the idea that in a
> perfect world there would be no overlap, the cost of getting there in
> breaking working software seems... a bit of a bad tradeoff.

agree, to clarify in the case of backups, is the idea that GET 
compute/../backups and blockstorage/../backups is bad? personally it 
seems to capture purpose pretty well, ie. <service>/../<purpose>. i think 
we ran into this years back when openstackclient was starting, there's 
only so much verbiage we can select from. ie. aggregation is used in 
multiple projects.

>
>> * All those services that have overlapping top-level resources must have
>> a plan to either:
>>   - align/consolidate the top-level resource if it makes sense
>>   - rename the top-level resource to be more specific if needed, or
>>   - place the top-level resource as a sub-resource on a top-level
>> resource that is unique in the full OpenStack REST API set of top-level
>> resources
>
> And what happens to all the software out there written to OpenStack? I
> do get the concerns for coherency, at the same time randomly changing
> API interfaces on people is a great way to kick all your users in the
> knees and take their candy.
>
> At the last summit basically *exactly* the opposite was agreed to. You
> don't get to remove an API, ever. Because the moment it's out there, it
> has users.
>
>   -Sean
>

-- 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Thierry Carrez

Brian Curtin wrote:

On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:

I would love to see the OpenStack contributor community take back the design
summit to its original format and purpose and decouple it from the OpenStack
Summit's conference portion.

I believe the design summits should be organized by the OpenStack
contributor community, not the OpenStack Foundation and its marketing and
event planning staff.


As someone who spent years organizing PyCon as a volunteer from the
Python community, with four of those years in a row taking about 8
solid months of pre-conference effort, not to mention the on-site
effort to run a volunteer conference of that size [0]...I would
suggest even longer and harder thought before stretching a community
like this even more thinly. Things should change, but probably not the
"who's doing the work" aspect.


Beyond stretching out the community, we would end up with the same 
problem we are trying to solve. Most of the cross-project folks that 
would end up organizing the event would be too busy organizing the event 
to be able to fully participate in it.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] v2 image upload from url

2016-02-08 Thread stuart . mclaren


Sorry, can you give an example of the exact command you are using, please?
On 5 Feb 2016 22:45, "Fox, Kevin M"  wrote:


We've been using the upload image from http url for a long time and when
we upgraded to liberty we noticed it broke because the client's defaulting
to v2 now. How do you do image upload via http with v2? Is there a
different command/method?


Is this really a Glance question? If so, the Glance v2 API doesn't currently 
provide
a way to upload an image by specifying a URL (ie there's no equivalent to v1's 
copy-from).
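
As a possible client-side workaround (just a sketch, not something the v2 CLI
provides; the endpoint, credentials and URL below are made up), you can stream
the bytes yourself and push them through the v2 API with python-glanceclient:

# Hypothetical workaround: fetch an image over HTTP and upload it via the v2 API.
import requests
from keystoneauth1 import session
from keystoneauth1.identity import v3
from glanceclient import Client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='demo', password='secret', project_name='demo',
                   user_domain_id='default', project_domain_id='default')
glance = Client('2', session=session.Session(auth=auth))

image = glance.images.create(name='cirros-from-url',
                             disk_format='qcow2', container_format='bare')

# Stream the bytes instead of buffering the whole image in memory.
resp = requests.get('http://download.example.com/cirros.qcow2', stream=True)
resp.raise_for_status()
glance.images.upload(image.id, resp.raw)

It isn't the server-side copy-from behaviour (the data goes through the
client), but it keeps scripts working against the v2 API.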



Thanks,
Kevin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal for having a service type registry and curated OpenStack REST API

2016-02-08 Thread Jay Pipes

On 02/08/2016 07:13 AM, Sean Dague wrote:

On 02/07/2016 08:30 AM, Jay Pipes wrote:

On 02/04/2016 06:38 AM, Sean Dague wrote:

What options do we have?



2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear collision
down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


The above is my choice. I'd also like to point out that I'm only talking
about the *service* projects here -- i.e. the things that expose a REST
API.

I don't care about a naming registry for non-service projects because
they do not expose a public user-facing API that needs to be curated and
protected.

I would further suggest using the openstack/governance repo's
projects.yaml file for this registry. This is already under the TC's
administration and the API WG could be asked to work closely with the TC
to make recommendations on naming for all type:service projects in the
file. We should add a service:$type tag to the projects.yaml file and
that would serve as the registry for REST API services.

We would need to institute this system by first tackling the current
areas of REST API functional overlap:

* Ceilometer and Monasca are both type:service projects that are both
performing telemetry functionality in their REST APIs. The API WG should
work with both communities to come up with a 6-12 month plan for
creating a *single* OpenStack Telemetry REST API that both communities
would be able to implement separately as they see fit.


1) how do you imagine this happening?


Hard work. The kind of collaboration needed here is hard, sometimes 
boring work. But it's absolutely necessary, IMHO. And we, as a 
community, have simply punted on making these hard decisions, and we're 
worse off for it.


The TC should have as a project acceptance criteria a clause about REST 
API overlap or replacement. I feel strongly about this and so have 
started a patch with a resolution to the governance repo around this.



2) is there buy in from both communities?


I don't really care if there is or there isn't. This is something that 
is vital to the long-term reputation of OpenStack.



3) 2 implementations of 1 API that is actually semantically the same is
super hard. Doing so in the IETF typically takes many years.


Sorry, I'm not quite following you. I'm not proposing that we focus on 
defining a single API. I'm not proposing that we help each project to 
implement that API.



I feel like we spent a bunch of time a couple years ago putting projects
on detailed improvement plans from the TC, and it really didn't go all that
well. The outside-in approach without community buy-in mostly just gets
combative and hostile.


* All APIs that the OpenStack Compute API currently proxies to other
service endpoints need to have a formal sunsetting plan. This includes:

  - servers/{server_id}/os-interface (port interfaces)
  - images/
  - images/{image_id}/metadata
  - os-assisted-volume-snapshots/
  - servers/{server_id}/os-bare-metal-nodes/ (BTW, why is this a
sub-resource of /servers again?)
  - os-fixed-ips/
  - os-floating-ip-dns/
  - os-floating-ip-pools/
  - os-floating-ips/
  - os-floating-ips-bulk/
  - os-networks/
  - os-security-groups/
  - os-security-group-rules/
  - os-security-group-default-rules/
  - os-tenant-networks/
  - os-volumes/
  - os-snapshots/


It feels really early to run down a path here on trying to build a
registry for top level resources when we've yet to get service types down.

Also, I'm not hugely sure why:

GET /compute/flavors
GET /dataprocessing/flavors
GET /queues/flavors

Is the worst thing we could be doing. And while I get the idea that in a
perfect world there would be no overlap, the cost of getting there in
breaking working software seems... a bit of a bad tradeoff.


Please read the point below. I specifically said that overlaps in 
top-level resources should be handled in one of three ways (alignment, 
renaming, or placement under a sub-resource). You have shown the third 
option above: placing the overlapping top-level resources as a sub-resource.



* All those services that have overlapping top-level resources must have
a plan to either:
  - align/consolidate the top-level resource if it makes sense
  - rename the top-level resource to be more specific if needed, or
  - place the top-level resource as a sub-resource on a top-level
resource that is unique in the full OpenStack REST API set of top-level
resources


And what happens to all the software out there written to OpenStack? I
do get the concerns for coherency, at the same time randomly changing
API interfaces on people is a great way to kick all your users in the
knees and take their candy.

At the last summit basically *exactly* the opposite was agreed to. You
don't get to remove an API, ever. Because the moment 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Jim Meyer
This thread is going many directions all at once, so I'll somewhat rudely 
top-reply and call out specific points rather than extend each sub-thread. 
Thierry, I suspect your strawman will address all of these points and more.

Decoupling: I'm very much in favor. For a long time, devs at summits have been 
making choices between customer meetings, conference talks, and design 
sessions; the three things conflated on each other make productive focus very 
challenging. On the monetary side, my OpenStack budget has grown too large, and 
my ability to send devs so they can get work done is conflated with marketing 
costs, etc. When the cost per dev rises, we reach a spot where each of us can 
send fewer of them. Not a good outcome.

Conference frequency: If we want semi-annual conferences, we might consider an 
Operators Conference and a Users Conference. It would allow companies to decide 
where to invest dollars and time according to who they're aiming to serve. I'd 
expect fair overlap between the two, with power users at the OpsConf, and 
forward-thinking operators at the UsersConf. We might consider a 3-4 month lag 
from release for these to allow vendors to pick up the latest release and do 
interesting things on top of it; I suspect that would do a lot to drive a 
virtuous cycle of release-create-showcase-adopt that would be good for the 
community overall.

Lack of dev participation in conferences: I think the conferences will still be 
the main vehicle for technical companies to showcase the technical work they're 
doing to technical customers who are interested in the very technical thing 
that is OpenStack. I don't believe you can succeed in that environment without 
sending smart developers to talk about it. There will be fewer, true, but I 
believe the interactions will be better and more focused.

Dev summit organization: another voice for centralized organization, at least 
for all the non-technical venue/food/recording logistics. I co-organized a 
successful small tech conference for a number of years. It's very hard work. 
That said, might be interesting to try a self-organizing unconference framework 
inside that space.

Dev summit cycles: If we're doing this, I think we should decide what the 
purpose of the dev summit is (to make sure we don't lose any of the purposes 
it's serving now as we separate it); back plan from the release dates we want; 
and put them on the calendar. I like the releases we have now (Apr/Oct). I'd be 
sad at one yearly megarelease. 

Midcycles: I think for many projects they'll be just as useful. For others, 
they'll turn out to be superfluous. My guess is that the core projects will 
still strongly need them in order to manage their higher complexity. And, as 
someone responsible for sponsorship of some of these, and the budget for 
sending folks to most or all of these, I'm still going to be in strong support 
of them.

Thanks for opening this thread, Jay. It's been brewing for a while.

--j


> On Feb 8, 2016, at 2:56 AM, Thierry Carrez  wrote:
> 
> Daniel P. Berrange wrote:
>> [...]
>> I really agree with everything you say, except for the bit about the
>> community doing organization - I think its fine to let function event
>> staff continue with the burden of planning, as long as their goals are
>> directed by the community needs.
> 
> Exactly.
> 
>> I might suggest that we could be a bit more radical with the developer
>> event and decouple the timing from the release cycle. The design summits
>> are portrayed as events where we plan the next 6 months of work, but the
>> release has already been open for a good 2-3 or more weeks before we meet
>> in the design summit. This always makes the first month of each development
>> cycle pretty inefficient as decisions are needlessly postponed until the
>> summit. The bulk of specs approval then doesn't happen until after the
>> summit, leaving even less time until feature freeze to get the work done.
> 
> I agree that the developer event happens too late in the cycle (3 weeks after 
> final release, 5 weeks after RC1 where most people switch to next cycle, and 
> 8 weeks after FF, where we start thinking about the next cycle). That said, I 
> still think the dev event should be "coupled" with the cycles. It just needs 
> to happen earlier.
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

2016-02-08 Thread Jeremy Stanley
On 2016-02-07 03:19:45 + (+), Wang, Shane wrote:
> You are over worried about the new contributors. We are going to
> limit the number of attendees, and from my experience and
> knowledge, most of developers who join bug smash are old and
> experienced OpenStack developers, and they already have ATC codes.
> On the other hand, if those events don't exist, you are not able
> to stop new contributors from submitting and merging individual
> patches or fixes. You might set a deadline of sending the ATC
> codes - say some Release Candidate.

As I said in my previous message, we do set a deadline: the main
feature freeze listed on the release schedule. I was merely pointing
out that the deadline for contributors to qualify for registration
discounts is prior to this Bug Smash event, and so any contributions
made during the event won't solely qualify anyone for a registration
code. Some of our conference coordinators had expressed concern that
new or latent contributors participating in the Bug Smash may be
expecting a discount code, so I volunteered to send a follow-up to
this thread on their behalf.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Fausto Marzi
On Mon, Feb 8, 2016 at 6:26 AM, Thierry Carrez 
wrote:

> Daniel P. Berrange wrote:
>
>> [...]
>> I really agree with everything you say, except for the bit about the
>> community doing organization - I think its fine to let function event
>> staff continue with the burden of planning, as long as their goals are
>> directed by the community needs.
>>
>
> Exactly.
>
> I might suggest that we could be a bit more radical with the developer
>> event and decouple the timing from the release cycle. The design summits
>> are portrayed as events where we plan the next 6 months of work, but the
>> release has already been open for a good 2-3 or more weeks before we meet
>> in the design summit. This always makes the first month of each
>> development
>> cycle pretty inefficient as decisions are needlessly postponed until the
>> summit. The bulk of specs approval then doesn't happen until after the
>> summit, leaving even less time until feature freeze to get the work done.
>>
>
> I agree that the developer event happens too late in the cycle (3 weeks
> after final release, 5 weeks after RC1 where most people switch to next
> cycle, and 8 weeks after FF, where we start thinking about the next cycle).
> That said, I still think the dev event should be "coupled" with the cycles.
> It just needs to happen earlier.
>
> --
> Thierry Carrez (ttx)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


The OpenStack Summit is a great thing as it is now. It creates big
momentum, it's a strong motivator for the engineers (as we enjoy our time
there), and the Companies are happy too with the business-related side. I
see it also as the most successful Team building activity, Community and
Company wide. For Companies, the costs to send engineers to the Summit or
to a dedicated Design event are exactly the same. Besides, many Companies
send US based employees only to the US Summit, and EU based only to the
other side. The OpenStack Summit is probably the most advanced and
successful OpenSource event, if you take out of it the engineering side, it
won't be the same.

I think the issue here is that we need to have a better and more
productive way to work together. Probably the motivation behind a separate
design summit, and behind this discussion, is to improve that, as we
see that face to face is effective. Maybe this is the limitation we need to
resolve, rather than changing an amazing event.

Thanks,
Fausto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Doug Hellmann
Excerpts from Brian Curtin's message of 2016-02-08 09:09:37 -0500:
> On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:
> > I would love to see the OpenStack contributor community take back the design
> > summit to its original format and purpose and decouple it from the OpenStack
> > Summit's conference portion.
> >
> > I believe the design summits should be organized by the OpenStack
> > contributor community, not the OpenStack Foundation and its marketing and
> > event planning staff.
> 
> As someone who spent years organizing PyCon as a volunteer from the
> Python community, with four of those years in a row taking about 8
> solid months of pre-conference effort, not to mention the on-site
> effort to run a volunteer conference of that size [0]...I would
> suggest even longer and harder thought before stretching a community
> like this even more thinly. Things should change, but probably not the
> "who's doing the work" aspect.

I wholeheartedly agree. Figuring out who is going to talk about
what is the _least_ difficult and time consuming part of organizing
a large event like we would need, even if it's just the contributor
community*.  If you think you can plan such an event, maintain your
day job and sanity, and then focus on participating while you're
at the event, you're wrong. If you want proof, I encourage you to
get involved with an existing community-run conference. Even one
of the small local events will give you a real education in just
how much work it is. It has certainly given me a healthy respect
for the amazing work the foundation event staff has been doing.

Doug

* Whatever the new event becomes, it should not have the word
  "developer" in the title. We have a lot of contributors who are
  not "developers".

> 
> [0] PyCon is around 2500 (though with one paid staffer now), so maybe
> larger than the target for the event you're thinking of, but it was
> still a massive effort at the crowds of 2000, 1500, 1000, etc. and
> being fully community driven.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Bareon][Fuel] Best practises on syncing patches between repositories

2016-02-08 Thread Evgeniy L
Hi,

Some time ago we started the Bareon project [1], and now we have some fixes
that landed in fuel-agent only. The question is: what are the best practices
for keeping two repos in sync, with the possibility to resolve conflicts
manually? Cherry-picking patches manually doesn't look like the most robust
solution. Are there any scripts written to make sure that the repos stay in
one-way sync (fuel-agent -> Bareon)?
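
I'm not aware of an existing script for this, but a naive one-way sync could
be scripted along these lines (a sketch only; it assumes the bareon clone has
the fuel-agent repo configured as a remote named "fuel-agent", and it stops at
the first conflict so it can be resolved manually):

# Sketch: cherry-pick fuel-agent commits that are not yet in the current branch.
import subprocess

def git(*args):
    return subprocess.check_output(('git',) + args, universal_newlines=True)

def unsynced_commits():
    # "git cherry" compares patch-ids; lines starting with '+' are commits on
    # fuel-agent/master that have no equivalent on the current branch.
    out = git('cherry', 'HEAD', 'fuel-agent/master')
    return [line[2:] for line in out.splitlines() if line.startswith('+')]

def sync():
    git('fetch', 'fuel-agent')
    for sha in unsynced_commits():
        try:
            # -x records the original SHA in the commit message for traceability.
            git('cherry-pick', '-x', sha)
        except subprocess.CalledProcessError:
            print('Conflict on %s - resolve manually, then re-run.' % sha)
            break

if __name__ == '__main__':
    sync()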

Thanks,

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082397.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Location of Heka Lua plugins

2016-02-08 Thread Steven Dake (stdake)


From: Eric LEMOINE
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, February 8, 2016 at 12:39 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [kolla] Location of Heka Lua plugins


On 6 Feb 2016 20:39, "Steven Dake (stdake)" wrote:
>
>
>
> On 2/5/16, 1:14 AM, "Eric LEMOINE" wrote:
>
> >On Thu, Feb 4, 2016 at 5:13 PM, Jeff Peeler wrote:
> >I totally agree with you Jeff.
> >
> >It is to be noted that we (my team at Mirantis) want to avoid
> >duplicating our Lua plugins, as we obviously don't want to maintain
> >two sets of identical plugins.  So there are mulitple reasons for
> >creating separate packages for these plugins: a) make it easy the
> >share the plugins across different projects, b) avoid maintaining
> >multiple sets of identical plugins, and c) avoid clobbering Kolla with
> >code not directly related to Kolla - for example, would you really
> >like to see Lua tests in Kolla and run Lua tests in the Kolla gates?
> >It would indeed be best to have these plugins in the OpenStack Git
> >namespace (as Steve Dake said), but we will have to see if that's
> >possible in practice.
> >
> >Thank you all for your responses.
>
> Eric,
>
> If I read that correctly, there is some implied resistance to placing
> these LUA plugins in the openstack git namespace.  Could you enumerate the
> issues now please?

We have no problem with placing these Lua plugins in the openstack git 
namespace.  At this point I just don't know if others from the OpenStack 
community would see this as appropriate.  That's all I'm saying.

Great, then that is settled.  It was just a communication problem.  Essentially 
we add a repository to this file:
https://github.com/openstack/governance/blob/master/reference/projects.yaml#L1928

And add the github repo to project-config.  I can handle the project-config 
and projects.yaml changes easily.  Let's do that after Mitaka - we have 
enough on our plate to deal with for Mitaka, but as to "will the community see 
this as appropriate", the answer is yes, this is absolutely the correct way to 
go about it.

The repository must be licensed with an ASL2.0 license and all contributors in 
the git repository must have an active signed OpenStack CLA.  If these things 
aren't the case, let's get started on making that happen.  Please contact me 
on-list if either of these cases is false.  That would mean this code couldn't 
go in the Kolla repository either until these problems are rectified.

Regards
-steve



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Jay Pipes

On 02/08/2016 09:03 AM, Fausto Marzi wrote:

The OpenStack Summit is a great thing as it is now. It creates big
momentum, it's a strong motivator for the engineers (as we enjoy our time
there)


I disagree with you on this. The design summits are intended to be 
working events, not conference parties.


> and the Companies are happy too with the business related side. I

see it also as the most successful Team building activity, Community and
Company wide.


This isn't the intent of design summits. It's not intended to be a 
company team building event.


> For Companies, the costs to send engineers to the Summit

or to a dedicated Design event are exactly the same.


This is absolutely not the case. Sending engineers to expensive 
conference hotels for a full week or more is more expensive than sending 
engineers to small hotels in smaller cities for shorter amounts of 
focused time.


> Besides, many

Companies send US based employees only to the US Summit, and EU based
only to the other side. The OpenStack Summit is probably the most
advanced and successful OpenSource event, if you take out of it the
engineering side, it won't be the same.


I don't see the OpenStack Summit as being an advanced event. It has 
become a vendor-driven suit-fest, IMHO.



I think, the issue here is that we need to have a better and more
productive way to work together. Probably the motivation behind a
separate design summit and also this discussion is focused to improve
that, as we see that face to face is effective. Maybe this is the
limitation we need to resolve, rather than changing an amazing event.


All I want is to be more productive. In my estimation, the Summits have 
become vastly less productive than they used to be. Mid-cycles are 
generally much more productive and much more cost-effective because they 
don't have the distraction of the Summit party atmosphere.


As someone who is responsible for recommending which Mirantis engineers 
go to which events, I strongly favor sending more engineers to more 
focused events, even if that means sending fewer engineers to the 
expensive and unfocused OpenStack Summits.


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Anita Kuno
On 02/08/2016 09:42 AM, Anita Kuno wrote:
> On 02/08/2016 04:09 AM, Thierry Carrez wrote:
>> Jay Pipes wrote:
>>> tl;dr
>>> =
>>>
>>> I have long thought that the OpenStack Summits have become too
>>> commercial and provide little value to the software engineers
>>> contributing to OpenStack.
>>>
>>> I propose the following:
>>>
>>> 1) Separate the design summits from the conferences
>>> 2) Hold only a single OpenStack conference per year
>>> 3) Return the design summit to being a low-key, low-cost working event
>>> [...]
>>
>> I agree with most of the things that have been said so far. I think the
>> upstream community can't really get its work done in the current
>> setting,
> 
> Sounds like we have some agreement on this point.
> 
>> and that it's too costly for companies to send most of their
>> developers to classy hotels in expensive cities. I therefore think it
>> would be beneficial to separate the events.
>>
>> I agree that a separated design summit should be in lower-cost venues
>> and smaller cities. But I don't think that the "OpenStack contributor
>> community" can or should directly organize them. I happen to have a foot
>> on both sides, and I can tell you organizing those events is extremely
>> time consuming. I know exactly who would end up with the burden of
>> organizing those events in the end -- and those are the same overworked
>> cross-project core of developers that fill all the gaps in OpenStack.
>>
>> I don't want to risk even more burnout from that group by forcing them
>> into the craziness of organizing such events every 6 months.
> 
> Thank you!
> 
>> I don't
>> think the issue with the Design Summit is that the Foundation staff and
>> FNTech organizes them.
> 
> Shout out to the Foundation staff and FNTech who do such an amazing job
> of creating lovely events so consistently. Thank you, thank you. This
> discussion, as Monty stated early in his post, in no way is any kind of
> reflection of the quality of your work or your dedication to task, which
> is evident in everything you do (including the fabulous feedback
> sessions at the conclusion of the events). This is a reflection of our
> fantastic success and growth. Thanks for your wonderful work!
> 
>> It's mostly my team working on it on the staff
>> side -- and I think Mike Perez and myself qualify as "OpenStack
>> contributor community".
> 
> I agree with that characterization, yes.
> 
> 
>> The issue is with the bundling of the two
>> events, and that can be fixed while still letting a specialized event
>> team do all the heavy lifting.
>>
>> The timing of this thread is unfortunate, since after Tokyo I have
>> actually been working on a solution for separation myself, and the
>> Foundation is finalizing a strawman proposal that should soon be pushed
>> for comments to the community. It involves changes to the main
>> conference event as well.
>>
>> So please stand by while we finalize that: I think you will like the end
>> result.
>>
> 
> Thank you Thierry and team. I look forward to the strawman and
> ruminating on it once proposed.
> 
> Thank you,
> Anita.
> 

I'll also add that for me one of the things I have lost is the space to
listen.

I miss listening. I value it.

I hope to have space to listen again in the future structure.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade][keystone] Keystone multinode grenade

2016-02-08 Thread Grasza, Grzegorz

> From: Sean Dague [mailto:s...@dague.net]
> 
> On 02/05/2016 04:44 AM, Grasza, Grzegorz wrote:
> >
> >> From: Sean Dague [mailto:s...@dague.net]
> >>
> >> On 02/04/2016 10:25 AM, Grasza, Grzegorz wrote:
> >>>
> >>> Keystone is just one service, but we want to run a test, in which it
> >>> is setup in HA – two services running at different versions, using
> >>> the same
> >> DB.
> >>
> >> Let me understand the scenario correctly.
> >>
> >> There would be Keystone Liberty and Keystone Mitaka, both talking to
> >> a Liberty DB?
> >>
> >
> > The DB would be upgraded to Mitaka. From Mitaka onwards, we are
> making only additive schema changes, so that both versions can work
> simultaneously.
> >
> > Here are the specifics:
> > http://docs.openstack.org/developer/keystone/developing.html#online-
> mi
> > gration
> 
> Breaking this down, it seems like there is a simpler test setup here.
> 
> Master keystone is already tested with master db, all over the place. In unit
> tests and all the dsvm jobs. So we can assume pretty hard that that works.
> 
> Keystone doesn't cross talk to itself (as there are no workers), so I don't 
> think
> there is anything to test there.
> 
> Keystone stable working with master db seems like an interesting bit, are
> there already tests for that?

Not yet. Right now there is only a unit test, checking obvious 
incompatibilities.

> 
> Also, is there any time where you'd get data from Keystone new, use it in a
> server, and then send it back to Keystone old, and have a validation issue?
> That seems easier to trigger edge cases at a lower level. Like an extra
> attribute is in a payload in Keystone new, and Keystone old faceplants with 
> it.

In the case of keystone, the data that can cause compatibility issues is in the DB.
There can be issues when data stored or modified by the new keystone
is read by the old service, or the other way around. The issues may happen
only in certain scenarios, like:

row created by old keystone ->
row modified by new keystone ->
failure reading by old keystone

I think a CI test, in which we have more than one keystone version accessible
at the same time is preferable to testing only one scenario. My proposed
solution with HAProxy probably wouldn't trigger all of them, but it may catch
some instances in which there is no full lower level test coverage. I think 
testing
in HA would be helpful, especially at the beginning, when we are only starting 
to
evaluate rolling upgrades and discovering new types of issues that we should
test for.
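
To make that concrete, a lower-level sketch of the scenario above (placeholders
everywhere; the two endpoints could simply be a stable and a master keystone
running on different ports against the same database):

# Sketch: exercise old-write / new-modify / old-read against a shared DB.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client

def make_client(auth_url):
    auth = v3.Password(auth_url=auth_url,
                       username='admin', password='secret', project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    return client.Client(session=session.Session(auth=auth))

old = make_client('http://localhost:5000/v3')   # stable keystone
new = make_client('http://localhost:5010/v3')   # master keystone, same DB

# row created by old keystone
project = old.projects.create(name='upgrade-test', domain='default')

# row modified by new keystone
new.projects.update(project, description='touched by the new release')

# read back by old keystone - this is where incompatibilities would show up
assert old.projects.get(project.id).description == 'touched by the new release'

That obviously only covers one resource type and one ordering, which is why I
still think having both versions accessible side by side in CI is worth it.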

> 
> The reality is that standing up an HA Proxy Keystone multinode environment
> is going to be pretty extensive amount of work. And when things fail, digging
> out why, is kind of hard. However it feels like most of the interesting edges
> can be tested well at a lower level. And is at least worth getting those 
> sorted
> before biting off the bigger thing.

I only proposed multinode grenade because I thought it was the most complete
solution for what I want to achieve, but maybe there is a simpler way, like
running two keystone instances on the same node?

/ Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

2016-02-08 Thread Victor Stinner

Hi,

The Swift port was blocked for 6 months by a complex issue related to 
PyEClib. Good news: this issue was fixed 2 months ago. After the Liberty 
Summit, a plan was defined to port Swift to Python 3 (end of October), 
and I understood that the whole Swift team agreed on it.


Some Python 3 changes were merged, but fewer than I expected. My 3 latest 
patches have been waiting for reviews since October 19, 2015:


https://review.openstack.org/#/c/237027/
https://review.openstack.org/#/c/236998/
https://review.openstack.org/#/c/237019/

I can write patches, but it takes time to rebase them and to regularly 
try various ways to get a review (like pinging on IRC).


Because of the general lack of interest from Swift developers in Python 
3, I think that I will simply abandon my patches and let others port Swift.


Victor


Le 30/10/2015 06:54, John Dickinson a écrit :

Thanks for the update. This seems like a reasonable way forward, but also one 
that will take a long time. Thank you for your work.

I think this will result in larger and larger chunks of work, and so it will 
eventually result in large patches to move different components to py3. So 
you'll be able to start small, but the work will get larger as you go.

You're right about needing the voting gate job. That should be the first 
priority for py3 work.

--John




On 30 Oct 2015, at 12:47, Victor Stinner wrote:


Hi,

We talked about Python 3 with Christian Schwede, Alistair Coles, Samuel Meritt, 
Jaivish Kothari and others (sorry, I don't recall all names :-/) during the Swift 
contributor meetup. It looks like we had an agreement on how to add Python 3 
support to Swift. The plan is:

1) Fix the gate-swift-python34 check job

2) Make the gate-swift-python34 check job voting

3) Port remaining code step by step (incremental development)

Python 3 issues had been fixed in the past in Swift, but came back. So it's 
important to not reintroduce such regressions by making the gate voting.

Christian said that he will explain the plan at the next Swift meeting 
(Wednesday). I don't think that I will be able to attend this meeting, I have 
another one at the same time with my team :-/

I can put this plan in a blueprint if you want. So we can refer to the 
blueprint in Python 3 changes. It's up to you.


Plan in detail.

(1) To fix the Python 3 job, the idea is to only run a subset of tests on 
Python 3. For example, if we fix the Python 3 issues with the dnspython (dnspython3) 
and PyEClib dependencies, we can run
"nosetests test/unit/common/test_exceptions.py" on Python 3 (that test passes on 
Python 3).

We need these two changes:

* "py3: Update pbr and dnspython requirements"
https://review.openstack.org/#/c/217423/

* "py3: Add py34 test environment to tox"
https://review.openstack.org/#/c/199034/


(2) When the gate-swift-python34 check job passes and we have waited long enough 
to consider it stable, we can make it voting. At that point, we cannot 
introduce Python 3 regressions in the code tested on Python 3. Then the idea 
is to run more and more tests on Python 3.


(3) Ok, now the interesting part. To port the remaining code, the following changes 
will enlarge the code coverage of the Python 3 tests by adding new tests to 
tox.ini. For example, port utils.py to Python 3 and add test_utils.py to 
tox.ini.


Misc questions.

Q: "Is it possible to port Swift to Python 3 in a single patch?"

A: Yes, it is technically possible. But it would be one unique giant patch 
simply impossible to review and that will conflict at each merged change. Some 
changes required by Python 3 need discussions and to make technical choices.  
It's more convenient to work on smaller patches.

Q: "How much changes do we need to port Swift to Python ?"

A: Sorry, I don't know. Since we cannot run all tests on Python 3 right now, we 
cannot see all issues. It's really hard to estimate the number of required 
changes. Anyway, the plan is to port the code step by step.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-08 Thread Flavio Percoco

On 05/02/16 21:41 -0500, Jay Pipes wrote:

On 02/05/2016 02:16 PM, Sean Dague wrote:

On 02/05/2016 01:17 PM, Doug Hellmann wrote:

So, is Poppy "open core"?


Whether or not it is, I'm not sure how it is part of a Ubiquitous Open
Source Cloud Platform. Because it only enables the use of commerical
services.

It's fine that it's open source software. I just don't think it's OpenStack.


So, I've read through this ML thread a couple times now. I see 
arguments on both sides of the coin here.


ditto

I'm no fan of open core. Never have been. So it irks me that Poppy 
can't work with any non-proprietary backend. But, as others have said, 
that isn't the Poppy team's fault.


However, even though it's not the Poppy team's fault, I think the fact 
that a Poppy user's only choice is to use a 
non-free backend disqualifies Poppy from being an OpenStack project. 
The fact that the Poppy team follows the four Opens and genuinely 
wants to align with the OpenStack development methodology and 
processes is admirable and we should certainly encourage that 
behaviour, including welcoming Poppy into our CI platform for as much 
as we can (given the obvious limitations around functional testing of 
Poppy). However, at the end of the day, I agree with Sean that this 
non-free restriction inherent in Poppy means it should not be included 
in the openstack/governance projects.yaml file as an "official" 
OpenStack project.


After having put enough (I hope) thought into this over the weekend, I think I
agree with the above. The way I put it is:

What would be my solution, as a cloud provider, if I'd like to have a cloud
that relies only on open source technologies?

If you will, we could also add: What would distributions of OpenStack recommend
as a default driver?

This being said, I'd like to throw another question in the mix (just for the
sake of discussion and because I like to contradict myself).

Would our votes change if Poppy had support for OpenCDN (imagine it's being
maintained) even if that solution is terrible?

I guess my question is: When do we start considering a project to be safe from
an open source perspective? Because having support for one open source technology
doesn't mean it provides enough (or good) open source ways to deploy the
software. If the only supported open solution is *terrible*, then deployers would
be left with only commercial solutions to choose from.

I'll comment back on the review but I wanted to get feedback from other folks in
this thread.

Cheers,
Flavio


I've left this comment on the review accordingly.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Sean Dague
On 02/08/2016 10:07 AM, Thierry Carrez wrote:
> Brian Curtin wrote:
>> On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:
>>> I would love to see the OpenStack contributor community take back the
>>> design
>>> summit to its original format and purpose and decouple it from the
>>> OpenStack
>>> Summit's conference portion.
>>>
>>> I believe the design summits should be organized by the OpenStack
>>> contributor community, not the OpenStack Foundation and its marketing
>>> and
>>> event planning staff.
>>
>> As someone who spent years organizing PyCon as a volunteer from the
>> Python community, with four of those years in a row taking about 8
>> solid months of pre-conference effort, not to mention the on-site
>> effort to run a volunteer conference of that size [0]...I would
>> suggest even longer and harder thought before stretching a community
>> like this even more thinly. Things should change, but probably not the
>> "who's doing the work" aspect.
> 
> Beyond stretching out the community, we would end up with the same
> problem we are trying to solve. Most of the cross-project folks that
> would end up organizing the event would be too busy organizing the event
> to be able to fully participate in it.

Right, this is a super key point. Even just organizing and running local
user groups, I know how much time is spent making sure the whole thing
seems effortless to attendees, so they can just focus on content.

Even look at the recently run Nova midcycle, with 40ish folks, it still
required some substantial logistics to pull off. The HPE team did a
great job with that. But it definitely required real time and effort.

The Foundation has done an amazing job of making everyone think this is
easy (I know how much it is not). Without their efforts organizing these
events - eliminating the distractions of wandering in a strange city to
find lunch, having a network, projectors, access to facilities,
appropriately sized spaces, double-checking all those things will really
actually be there, chasing after folks when they are not, handling the
myriad of other unforeseen issues that you never have to see - we would
not be nearly as productive at the design summits.

So while I agree it's worth considering whether the Mega Conference and
Design Summit should continue to be collocated and on the same timetable,
I think the idea that the Design Summit, at even only 500 attendees,
could/should be run without the Foundation is just folly based on a lack of
understanding of what it takes to do events at that scale. It also massively
underestimates the effort and skill the Foundation has at making our events
run as smoothly as they do.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] BadAltAuth / Test Isolation same tenant

2016-02-08 Thread Vincent Gatignol
Hi there, 

I know that it's not the default configuration for openstack or tempest, but I
need to make a script that tests user isolation _inside_ the same tenant.

Some of our users are in the same tenant, but they must not interfere with each
other.

We have modified the nova policy rules and we must test these policies (the
default one is: "rule:admin_or_user").
We are using tempest as a base tool with pre-provisioned credentials (we cannot
use the admin account for security reasons).

Our first thought was "easy": load tempest with pre-created users via an
account.yaml file, all in the same tenant, and launch
'tempest.api.compute.test_authorization', which contains almost everything we
need to test.
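
(For illustration only, a minimal sketch of what such a pre-provisioned accounts
file could look like; the user names, tenant name and passwords below are just
placeholders, not real values:

cat > accounts.yaml <<'EOF'
- username: 'isolation_user_1'
  tenant_name: 'shared_tenant'
  password: 'placeholder_password_1'
- username: 'isolation_user_2'
  tenant_name: 'shared_tenant'
  password: 'placeholder_password_2'
EOF

Tempest is then pointed at this file through its pre-provisioned credentials
option in tempest.conf.)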

But we ran into the "BadAltAuth" exception, and I don't know how to get rid of
it except by breaking tempest_lib (skipping/commenting out this exception).
This exception is thrown when the accounts used in tempest have the same auth
url.

I tried another approach, without alt authentication:
From one prompt, I launch a test that creates a test server and exports its
ID, then waits until the timeout value (default 500s).
From another prompt, I launch the real test that gets the server ID and tries
to delete it. But the same BadAltAuth thing happens...
(I'm using an account file with 2 different users in the same tenant; with
the locking mechanism, the logic uses both accounts for this group of
tests.)

So I'm asking here if someone has a clue to help us?

It could be some kind of rewrite of tempest_lib/auth regarding this BadAltAuth, 
throwing a warning instead of a critical exception. 

Thank you all for your time answering this, 

Regards, 

Vincent 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] gate issues

2016-02-08 Thread Hongbin Lu
Hi Team,

In order to resolve issue #3, it looks like we have to significantly reduce the 
memory consumption of the gate tests. Details can be found in this patch 
https://review.openstack.org/#/c/276958/ . For the core team, a fast review and 
approval of that patch would be greatly appreciated, since it is hard to work 
with a gate that takes several hours to complete. Thanks.

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-05-16 12:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] gate issues

So as we're all aware, the gate is a mess right now. I wanted to sum up some of 
the issues so we can figure out solutions.

1. The functional-api job sometimes fails because bays time out building after 1
hour. The logs look something like this:
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays
 [3733.626171s] ... FAILED
I can reproduce this hang on my devstack with etcdctl 2.0.10 as described in 
this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but apparently 
either my fix with using 2.2.5 (https://review.openstack.org/#/c/275994/) is 
incomplete or there is another intermittent problem because it happened again 
even with that fix: 
(http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html)

2. The k8s job has some sort of intermittent hang as well that causes a similar 
symptom as with swarm. https://bugs.launchpad.net/magnum/+bug/1541964

3. When the functional-api job runs, it frequently destroys the VM causing the 
jenkins slave agent to die. Example: 
http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
When this happens, zuul re-queues a new build from the start on a new VM. This 
can happen many times in a row before the job completes.
I chatted with openstack-infra about this and, after taking a look at one of the
VMs, it looks like memory over-consumption leading to thrashing was a possible
culprit. The sshd daemon was also dead, but the console showed things like
"INFO: task kswapd0:77 blocked for more than 120 seconds". A cursory glance and
following some of the jobs seem to indicate that this doesn't happen on RAX
VMs, which have swap devices, unlike the OVH VMs. (A sketch of a possible
mitigation follows after this list.)

4. In general, even when things work, the gate is really slow. The sequential 
master-then-node build process in combination with underpowered VMs makes bay 
builds take 25-30 minutes when they do succeed. Since we're already close to 
tipping over a VM, we run functional tests with concurrency=1, so 2 bay builds
consume almost the entire allotted devstack testing time (generally about 75
minutes of actual test time are available, it seems).
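
(A possible mitigation for the swap-less VMs mentioned in issue 3, purely a
sketch; the size and path are arbitrary guesses, not something that has been
tested on these providers:

sudo dd if=/dev/zero of=/swapfile bs=1M count=2048  # create a 2G swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Something along those lines could be run early in the job on providers whose
VMs come without a swap device.)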

Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] pid=host

2016-02-08 Thread Michał Jastrzębski
Hey,

So quick steps to reproduce this:

0. install docker 1.10
1. Deploy kolla
2. Run VM
3. On compute host - ps aux | grep qemu, should show your vm process
4. docker rm -f nova_libvirt
5. ps aux | grep qemu should still show running vm
6. re-deploy nova_libvirt
7. docker exec -it nova_libvirt virsh list - should show running vm
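
(If in doubt whether the container is really sharing the host PID namespace, a
quick sanity check, sketched from memory, would be:

docker inspect -f '{{ .HostConfig.PidMode }}' nova_libvirt  # should print "host"
)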

Cheers,
Michal

On 8 February 2016 at 07:32, Steven Dake (stdake)  wrote:
> Hey folks,
>
> I know we have been through some changes with how pid=host works.  I'd like
> to get to the bottom of this, so we can either add the features we need to
> docker, or say "all is good".
>
> Here is the last quote from this bugzilla where Red Hat in general is
> interested in the same behavior as the Kolla team has.  They have many
> people embedded in the Docker and Kubernetes communities, so it may make
> sense to let them do the work there :)
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1302807
>
> Mrunal Patel 2016-02-08 06:10:15 EST
>
> docker tracks the pids in a container using cgroups and hence all processes
> are killed even though we use pid=host. I believe we had probably prompted
> them to add this behavior in the first place.
>
>
> This statement appears at odds with what was tested on IRC a few days back
> with docker 1.10.  It is possible docker 1.10 had a regression here, in
> which case if they fix it, we will be back to a dead VM during libvirt
> upgrade which we don’t want.
>
> Can folks who tested this weigh in on the testing that was done on that
> bugzilla with distro type, docker version, docker-py version, and results?
> Unfortunately you will have to create a Red Hat bugzilla account, but if you
> don't wish to do that, please send the information on list after reviewing
> the bugzilla and I'll submit it on your behalf.
>
> The outcomes I would be happy with is:
>
> * docker will never change the semantics of pid=host mode for killing child
> processes
>
> * Or alternatively docker will add a feature such as host=pidnochildkill
> which Red Hat can spearhead
>
> Thoughts and comments welcome.
>
> Regards
>
> -steve
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] DHCP provider API change

2016-02-08 Thread Jim Rollenhagen
Hi all,

If you maintain an out-of-tree DHCP provider, this is a heads up on a
breaking change that just landed.

tl;dr: the update_dhcp_opts vifs parameter changed form, as seen here:
https://github.com/openstack/ironic/commit/e5c5ddbdc8b015221b2656270a2f3f21414a055f#diff-9d3fa5de7cff99fe2d9522ed98108b91L77

We try not to break this interface; however, it seemed necessary in this
case, so we apologize. Do note that this is not advertised as a
stable API and is subject to change; we'll continue to send emails when
this happens in the future.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-08 Thread Mike Perez
On 13:56 Feb 08, Flavio Percoco wrote:
> On 08/02/16 09:24 -0500, Sean Dague wrote:
> >On 02/08/2016 08:54 AM, Flavio Percoco wrote:
> >
> >>Would our votes change if Poppy had support for OpenCDN (imagine it's being
> >>maintained) even if that solution is terrible?
> >>
> >>I guess my question is: When do we start considering a project to be
> >>safe from
> >>an open source perspective? Because, having support for 1 opensource
> >>technology
> >>doesn't mean it provides enough (or good) open source ways to deploy the
> >>software. If the only supported open solution is *terrible* then
> >>deployers would
> >>be left with only commercial solutions to choose from.
> >
> >There is a lot of difference between 1 and 0 options, even if 1 isn't a
> >great option. It also means the design has been informed by open
> >backends, and not just commercial products.
> >
> 
> If I'm not misinterpreting the above, you're saying that design advised by
> open source backends gives better results. While I'm a huge fan of basing designs on
> open source solutions, I don't think the above is necessarily true. I don't
> think a solution that comes out of common features taken from commercial
> products is bad.
> 
> Just to be clear, I do prefer designs based on open solutions but I don't 
> think
> those, like Poppy, that provision commercial solutions are bad.
> 
> Sorry if I misunderstood you here.

Nobody said providing commercial solutions is bad. Only providing commercial
solutions and having infra provide something in gate that's *dependent* on
a commercial entity is bad.

> 
> >I think one could also consider Neutron originally started in such a
> >state. openvswitch was definitely not mature technology when this effort
> >started. You pretty much could only use commercial backends and have
> >anything work. The use in OpenStack exposed issues, people contributed
> >to proper upstream, things got much much better. We now have a ton of
> >open backends in Neutron. That would never have happened if the projects
> >started with 0.
> >
> 
> ++
> 
> This is exactly where I wanted to get to. So, arguably, the Poppy team could
> "simply" take OpenCDN (assuming the license allows for it), put it on GH, get a
> gate
> on it and come back to the TC requesting inclusion with the difference this 
> time
> it'll have support for 1, very old, open source, CDN software.
> 
> This wouldn't be seen as a "nice thing" from a community perspective but,
> technically, it'd satisfy all the requirements. Right?
> 
> I don't think *anyone* will actually contribute to OpenCDN after that happens
> and it'll still require the TC to say: "That solution is still not well
> maintained;
> we need to make sure it's production ready before it can be considered a valid
> open source backend for Poppy"

If Poppy were to pick OpenCDN as their reference implementation:

* Everything in the API should work with the reference implementation so it can
  be tested.

* If only a commercial solution can do something exposed in the API, that's
  bad. Your API is now dictated by some implementations.

You better believe there's going to be some investment in making OpenCDN work
if gate jobs depend on it passing. If you want to introduce some shiny
new checkbox feature from a CDN, there's also going to be investment in making
it work with the reference implementation.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Dmitry Tantsur

On 02/08/2016 06:37 PM, Kevin L. Mitchell wrote:

On Mon, 2016-02-08 at 10:49 -0500, Jay Pipes wrote:

5) Dealing with schwag, giveaways, parties, and other superfluous
stuff


As a confirmed introvert, I have to say that I rarely attend parties,
for a variety of reasons.  However, I don't think our hypothetical
design-only meeting should completely eliminate parties, though we can
back off from some of the more extravagant ones.  If we maintain at
least one party, I think that would satisfy the social needs of the
community without distracting too much from the main purpose of the
event.  Of course, I agree with eliminating the other distracting
elements, such as schwag and giveaways…



+1, I think we can just make a party somewhat less fancy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IRC Meeting tomorrow (2/9) - 0300 UTC

2016-02-08 Thread Gal Sagie
As Mike spotted, the date for the meeting is February 9!!
(not the 8th, as stated in the original email)

Thanks for the correction!


On Mon, Feb 8, 2016 at 5:44 PM, Gal Sagie  wrote:

> Hello All,
>
> We will have an IRC meeting tomorrow (Tuesday, 2/8) at 0300 UTC
> in #openstack-meeting-4
>
> Please review the expected meeting agenda here:
> https://wiki.openstack.org/wiki/Meetings/Kuryr
>
> You can view last meeting action items and logs here:
>
> http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-02-01-15.00.html
>
> It will also be useful to view the meeting we had last week in
> #openstack-kuryr regarding
> Kubernetes integration:
>
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-kuryr/%23openstack-kuryr.2016-02-03.log.html
>
> Please update the agenda if you have any subject you would like to discuss.
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Use of restricted and multiverse in the gate

2016-02-08 Thread Thomas Goirand
On 02/07/2016 10:11 PM, Monty Taylor wrote:
> Hey all,
> 
> We're working on getting per-region APT mirrors stood up for the
> nodepool nodes to use in the gate. As part of working on this, it struck
> me that we currently have restricted and multiverse enabled in our
> sources.list file.
> 
> I ran a quick test of removing both of them on a devstack-gate change
> and nothing broke, so I believe that it would be safe to remove them,
> but I thought I'd check with everyone.
> 
> Quick background for folks on them - Ubuntu has 4 different 'components'
> - main, universe, multiverse and restricted:
> 
> Main - Officially supported software.
> 
> Restricted - Supported software that is not available under a completely
> free license.
> 
> Universe - Community maintained software, i.e. not officially supported
> software.
> 
> Multiverse - Software that is not free.
> 
> Practically speaking there is nothing particularly useful to us in
> Restricted or Multiverse that would cause us to need to have the
> philosophical discussion about whether or not we _should_ use them -
> it's mostly software for desktop users.
> 
> I mostly want to not mirror them because it's all desktop software and
> that's a waste of space for us. I also think it's not terribly
> appropriate for us to use non-free dependencies, and so far we have not.
> 
> Any objection to not including these in our apt mirrors?
> 
> Thanks,
> Monty

While it is a good idea to enhance the current Ubuntu image, at the same
time, I'd like to draw your attention to the fact that we need reviews for
adding the Debian image too:
https://review.openstack.org/#/c/264726

Igor Belikov did an amazing job on it; let's please not let this get stuck
because no core reviewers are helping.

Thanks for your help.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Kevin L. Mitchell
On Mon, 2016-02-08 at 10:49 -0500, Jay Pipes wrote:
> 5) Dealing with schwag, giveaways, parties, and other superfluous
> stuff

As a confirmed introvert, I have to say that I rarely attend parties,
for a variety of reasons.  However, I don't think our hypothetical
design-only meeting should completely eliminate parties, though we can
back off from some of the more extravagant ones.  If we maintain at
least one party, I think that would satisfy the social needs of the
community without distracting too much from the main purpose of the
event.  Of course, I agree with eliminating the other distracting
elements, such as schwag and giveaways…
-- 
Kevin L. Mitchell 
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-08 Thread Flavio Percoco

On 08/02/16 09:24 -0500, Sean Dague wrote:

On 02/08/2016 08:54 AM, Flavio Percoco wrote:


Would our votes change if Poppy had support for OpenCDN (imagine it's being
maintained) even if that solution is terrible?

I guess my question is: When do we start considering a project to be
safe from
an open source perspective? Because, having support for 1 opensource
technology
doesn't mean it provides enough (or good) open source ways to deploy the
software. If the only supported open solution is *terrible* then
deployers would
be left with only commercial solutions to choose from.


There is a lot of difference between 1 and 0 options, even if 1 isn't a
great option. It also means the design has been informed by open
backends, and not just commercial products.



If I'm not misinterpreting the above, you're saying that design advised by open
source backends gives better results. While I'm a huge fan of basing designs on
open source solutions, I don't think the above is necessarily true. I don't
think a solution that comes out of common features taken from commercial
products is bad.

Just to be clear, I do prefer designs based on open solutions but I don't think
those, like Poppy, that provision commercial solutions are bad.

Sorry if I misunderstood you here.


I think one could also consider Neutron originally started in such a
state. openvswitch was definitely not mature technology when this effort
started. You pretty much could only use commercial backends and have
anything work. The use in OpenStack exposed issues, people contributed
to proper upstream, things got much much better. We now have a ton of
open backends in Neutron. That would never have happened if the projects
started with 0.



++

This is exactly where I wanted to get to. So, arguably, the Poppy team could
"simply" take OpenCDN (assuming the license allows for it), put it on GH, get a gate
on it, and come back to the TC requesting inclusion, with the difference that this
time it'll have support for one very old open source CDN software.

This wouldn't be seen as a "nice thing" from a community perspective but,
technically, it'd satisfy all the requirements. Right?

I don't think *anyone* will actually contribute to OpenCDN after that happens, and
it'll still require the TC to say: "That solution is still not well maintained;
we need to make sure it's production ready before it can be considered a valid
open source backend for Poppy."


The flip side is that CDN is a problem space where no consumers or ops
are interested in open backends. That's ok; however, if that's the case,
it doesn't feel OpenStack to me. Just being overlays for commercial
services seems a different thing than the rest of what's in OpenStack
today.


Agreed that there's not much interest in having open backends for CDNs, but there
*is* interest in CDNs, which are an important part of today's cloud
applications. I personally want my cloud to offer me something that *works*
and give me a seamless way to integrate with it, the same way I integrate with
the DNS solution, messaging solution, etc.

In other words, I want my cloud to provide this. To do so, I agree it doesn't
need to be an "official" project for clouds to deploy it, but I do think it's a
valid solution to have in the cloud tool belt.



I think this is a place where there are lots of reasonable and different
points of view. And if it was clear cut there wouldn't be the need for
discussion.


++

Before we make any call, I want to make sure we have enough arguments to
base our opinions on. In fact, this very email currently contains two
different opinions, in favor of and against including Poppy, although I believe
I've formed my opinion already.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] list description

2016-02-08 Thread Elizabeth K. Joseph
On Mon, Feb 8, 2016 at 11:54 AM, Jeremy Stanley  wrote:
> On 2016-02-08 11:11:39 -0800 (-0800), Elizabeth K. Joseph wrote:
> [...]
>> "Development and maintenance of the project infrastructure and tooling
>> used by contributors to develop OpenStack."
>
> I'm cool going with that as a new list description, though I'm not
> entirely convinced that newcomers skimming quickly looking for a
> place to get help with general OpenStack problems won't still
> confuse "the project infrastructure" with "the bits you're trying to
> install to run your OpenStack-based cloud infrastructure" since the
> word "infrastructure" is thrown around in so many contexts now it's
> essentially a meaningless industry fluff term. I guess what I was
> looking for was a non-circular definition for our particular use of
> "infrastructure" but that might just be expecting too much.

Short of changing the name of the team itself (nooo), I'm not sure we will
ever get away from people misunderstanding it at a glance :)

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [Openstack-operators] RAID / stripe block storage volumes

2016-02-08 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
In our environments, we offer two types of storage. Tenants can either use 
Ceph/RBD and trade speed/latency for reliability and protection against 
physical disk failures, or they can launch instances that are realized as LVs 
on an LVM VG that we create on top of a RAID 0 spanning all but the OS disk on 
the hypervisor. This lets the users elect to go all-in on speed and sacrifice 
reliability for applications where replication/HA is handled at the app level, 
if the data on the instance is sourced from elsewhere, or if they just don't 
care much about the data.

There are some further changes to our approach that we would like to make down 
the road, but in general our users seem to like the current system and being 
able to forgo reliability or speed as their circumstances demand.
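
(For readers unfamiliar with that layout, a rough sketch of how such a
hypervisor-local volume group might be assembled; the device names, disk count
and VG name are illustrative only:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo pvcreate /dev/md0
sudo vgcreate instances-vg /dev/md0
# the LVM-backed instance/volume drivers are then pointed at instances-vg

The RAID 0 provides the speed, and LVM carves per-instance logical volumes out
of it.)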

From: j...@topjian.net 
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

Hi Robert,

Can you elaborate on "multiple underlying storage services"?

The reason I asked the initial question is that historically we've made our
block storage service resilient to failure. We also made our compute
environment resilient to failure, but over time we've seen users become better
educated about coping with compute failure. As a result, we've been able to
become more lenient with regard to building resilient compute environments.

We've been discussing how possible it would be to translate that same idea to 
block storage. Rather than have a large HA storage cluster (whether Ceph, 
Gluster, NetApp, etc), is it possible to offer simple single LVM volume servers 
and push the failure handling on to the user? 

Of course, this doesn't work for all types of use cases and environments. We
still have projects which require the cloud to own more of the responsibility
for failure than the users do.

But for environments where we offer general purpose / best effort compute and
storage, what methods are available to help the user be resilient to block
storage failures?

Joe

On Mon, Feb 8, 2016 at 12:09 PM, Robert Starmer  wrote:

I've always recommended providing multiple underlying storage services for
this, rather than adding the overhead to the VM. So: not in any of my systems,
nor in any I've worked with.

R


On Fri, Feb 5, 2016 at 5:56 PM, Joe Topjian  wrote:

Hello,

Does anyone have users RAID'ing or striping multiple block storage volumes from 
within an instance?

If so, what was the experience? Good, bad, possible but with caveats?

Thanks,
Joe 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 ___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tempest] BadAltAuth / Test Isolation same tenant

2016-02-08 Thread Matthew Treinish
On Mon, Feb 08, 2016 at 06:04:15PM +0100, Vincent Gatignol wrote:
> Hi there, 
> 
> I know that it's not the default configuration for openstack nor tempest but 
> I need to make a script that test user isolation _inside_ the same tenant. 
> 
> Some of our users are in the same tenant but they must not interfere with 
> each others. 
> 
> We have modified the nova policy rules and we must test these policies (the 
> default one is : "rule:admin_or_user"). 

As I explained on IRC a couple of weeks ago, this is a really bad idea. It breaks
all users' expectations of using your cloud. The OpenStack APIs scope most
resources to the tenant/project; changing that changes fundamental behavior of
your cloud. Just because you can hand-configure this doesn't mean you should.

> We are using tempest as a base tool with pre-provisioned credentials (cannot 
> use admin account for security reasons) 
> 
> First thought was "easy" : load tempest with pre-created users via 
> account.yaml file, all in the same tenant, and launch 
> 'tempest.api.compute.test_authorization' that contains almost what we need to 
> test. 
> 
> But we ran into the "BadAltAuth" exception and I don't know how to get rid of 
> it except breaking the tempest_lib (skipping/commenting this exception) 
> This exception is thrown when the accounts used in tempest have the same auth 
> url. 
> 
> I tried another approach, without alt_authentication : 
> From a prompt, I'm launching a test that creates a test_server and export its 
> ID, then wait until the timeout value (default to 500s) 
> From another prompt, I launch the real test that get the server ID and try to 
> delete it. But the same BadAltAuth thing happen... 
> (I'm using an account file with 2 different users in the same tenant and with 
> the locking mechanism, the logic is using both accounts for this group of 
> tests) 
> 
> So I'm asking here if someone have a clue to help us ? 

Also, as I explained previously, tempest is not designed to do this. The use case
for dynamic credentials and pre-provisioned credentials is to provide
credential sets with separate projects/tenants and users. This is because the
auth model for OpenStack has most resources scoped to the tenant/project, so it's
providing isolation for each of the test classes. Tempest is for testing
OpenStack clouds, and I'd argue the modifications you've made to your
deployment's policy file go far enough that it's no longer really that.

If you're still set on doing this, the only method available to you is to have an
admin user create the additional users for your new test.
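
(For example, with an admin-scoped environment loaded, something along these
lines; the user, project, password and role names below are placeholders, and
the member role name depends on your cloud:

openstack user create --project shared_tenant --password not_a_real_password second_test_user
openstack role add --project shared_tenant --user second_test_user _member_
)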

-Matt Treinish

> 
> It could be some kind of rewrite of tempest_lib/auth regarding this 
> BadAltAuth, throwing a warning instead of a critical exception. 
> 
> Thank you all for your time answering this, 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

