Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-14 Thread reedip banerjee
E1 and O3 work for me as well.
Can someone please submit the patch for the timing change? At least the voting
can then be done directly on the patch rather than over email, which would
give better representation :)
(Or I can take the initiative if everyone passes on it :} )

On Fri, Apr 15, 2016 at 7:59 AM, Sheel Rana Insaan 
wrote:

> Dear Tang,
>
> Yes, I am ok with it.
>
> Best Regards,
> Sheel Rana
> On Apr 15, 2016 6:59 AM, "Tang Chen"  wrote:
>
>> Hi all,
>>
>> In yesterday's meeting, Dean, Richard and I have discussed the meeting
>> time issue.
>> The following two options work for us.
>>
>> E.1 Every two weeks (on even weeks) on Thursday at 1300 UTC in
>>
>> O.3 Every two weeks (on odd weeks) on Thursday at 1900 UTC in
>>
>>
>> Sheel, are you OK with these two options ?
>>
>> Thanks. :)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-14 Thread reedip banerjee
Congratulations Hirofumi :)

On Fri, Apr 15, 2016 at 8:52 AM, Takashi Yamamoto 
wrote:

> welcome, hirofumi!
>
> On Fri, Apr 15, 2016 at 11:40 AM, Akihiro Motoki 
> wrote:
> > It's been over a week.
> > I'd like to welcome Hirofumi to the neutron core reviewer team!
> >
> > Akihiro
> >
> > 2016-04-08 13:34 GMT+09:00 Akihiro Motoki :
> >> Hi Neutrinos,
> >>
> >> As the API Lieutenant of the Neutron team,
> >> I would like to propose Hirofumi Ichihara (IRC: hichihara) as a member of
> >> the Neutron core reviewer team, mainly focusing on the API/DB area.
> >>
> >> Hirofumi has been contributing to Neutron actively and constantly over
> >> the recent two releases.
> >> He was involved in key API/DB features in Mitaka, such as
> >> tagging support and network availability zones.
> >> I believe his knowledge and involvement will be a great addition to our
> >> team.
> >> He has been reviewing constantly [1], and I expect him to continue this
> >> work in Newton and later.
> >>
> >> Existing API/DB core reviewers (and other Neutron core reviewers),
> >> please vote +1/-1 on the addition of Hirofumi to the team.
> >>
> >> Thanks!
> >> Akihiro
> >>
> >>
> >> [1] http://stackalytics.com/report/contribution/neutron/90
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-14 Thread IWAMOTO Toshihiro
At Mon, 11 Apr 2016 14:42:59 +0200,
Miguel Angel Ajo Pelayo wrote:
> 
> On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
>  wrote:
> > At Fri, 8 Apr 2016 12:21:21 +0200,
> > Miguel Angel Ajo Pelayo wrote:
> >>
> >> Hi, good that you're looking at this,
> >>
> >>
> >> You could create a lot of ports with this method [1] and a bit of extra
> >> bash, without the extra expense of instance RAM.
> >>
> >>
> >> [1]
> >> http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
> >>
> >>
> >> This effort is going to be still more relevant in the context of
> >> openvswitch firewall. We still need to make sure it's tested with the
> >> native interface, and eventually we will need flow bundling (like in
> >> ovs-ofctl --bundle add-flows) where the whole addition/removal/modification
> >> is sent to be executed atomically by the switch.
> >
> > Bad news is that ovs-firewall isn't currently using the native
> > of_interface much.  I can add install_xxx methods to
> > OpenFlowSwitchMixin classes so that ovs-firewall can use the native
> > interface.
> > Do you have a plan for implementing flow bundling or using conjunction?
> >
> 
> Adding Jakub to the thread,
> 
> IMO, if the native interface is going to provide us with greater speed
> for rule manipulation, we should look into it.
> 
> We don't use bundling or conjunctions yet, but it's part of the plan.
> Bundling will allow atomicity of operations with rules (switching
> firewall rules, etc, as we have with iptables-save /
> iptables-restore), and conjunctions will reduce the number of entries.
> (No expansion of IP addresses for remote groups, no expansion of
> security group rules per port, when several ports are on the same
> security group on the same compute host).
> 
> Do we have any metric of bare rule manipulation time (ms/rule, for example)?

No bare numbers, but from a graph in the other mail I sent last week,
bind_devices for 160 ports (IIRC, that amounts to 800 flows) takes
4.5 sec with of_interface=native and 8 sec with of_interface=ovs-ofctl,
which means a native add-flow is about 4 ms faster than an ovs-ofctl one.
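
That per-flow figure can be sanity-checked from the numbers above (160 ports,
roughly 800 flows, 4.5 s vs 8 s); a quick back-of-the-envelope script:

```python
# Sanity-check the per-flow timing claim: ~800 flows for 160 ports,
# 4.5 s total with of_interface=native vs 8 s with of_interface=ovs-ofctl.
flows = 800
native_total_s = 4.5
ofctl_total_s = 8.0

# Per-flow cost difference, in milliseconds.
delta_ms = (ofctl_total_s - native_total_s) / flows * 1000.0
print(round(delta_ms, 2))  # ~4.4 ms saved per flow with the native interface
```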

As the OVS firewall uses DeferredOVSBridge and has fewer exec
overheads, I have no idea how much gain the native of_interface
brings.

> As a note, we're around 80 rules/port with IPv6 + IPv4 on the default
> sec group plus a couple of rules.

I booted 120 VMs on one network and the default security group
generated 62k flows.  It seems using conjunction is the #1 item for
performance.
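
To illustrate why conjunction is the #1 item: without conjunctive matches, a
remote-group rule is expanded into the cross product of member addresses and
per-port rules, while conjunction matches the two dimensions independently.
The 120-VM figure above is measured; the flow-count model below is a
simplification I am assuming for illustration, not measured data:

```python
# Rough model of security-group flow expansion, assuming each port's
# remote-group rules expand into (member addresses x rules) flows.
ports = 120          # VMs on the network (from the measurement above)
members = 120        # addresses in the remote security group
rules = 4            # assumed per-port rules referencing that remote group

# Without conjunction: full cross product per port.
expanded = ports * members * rules

# With conjunction: one flow per member address, one per rule, plus one
# conj_id flow, per port (dimensions matched independently, then joined).
conjunctive = ports * (members + rules + 1)

print(expanded, conjunctive)  # 57600 vs 15000 in this model
```

The modeled 57.6k is in the same ballpark as the observed 62k flows, and the
conjunctive variant grows linearly rather than quadratically with group size.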



> 
> >> On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro 
> >> wrote:
> >>
> >> > At Thu, 07 Apr 2016 16:33:02 +0900,
> >> > IWAMOTO Toshihiro wrote:
> >> > >
> >> > > At Mon, 18 Jan 2016 12:12:28 +0900,
> >> > > IWAMOTO Toshihiro wrote:
> >> > > >
> >> > > > I'm sending out this mail to share the finding and discuss how to
> >> > > > improve with those interested in neutron ovs performance.
> >> > > >
> >> > > > TL;DR: The native of_interface code, which has been merged recently
> >> > > > and isn't default, seems to consume less CPU time but gives a mixed
> >> > > > result.  I'm looking into this for improvement.
> >> > >
> >> > > I went on to look at implementation details of eventlet etc, but it
> >> > > turned out to be fairly simple.  The OVS agent in the
> >> > > of_interface=native mode waits for a openflow connection from
> >> > > ovs-vswitchd, which can take up to 5 seconds.
> >> > >
> >> > > Please look at the attached graph.
> >> > > The x-axis is time from agent restarts, the y-axis is numbers of ports
> >> > > processed (in treat_devices and bind_devices).  Each port is counted
> >> > > twice; the first slope is treat_devices and the second is
> >> > > bind_devices.  The native of_interface needs some more time on
> >> > > start-up, but bind_devices is about 2x faster.
> >> > >
> >> > > The data was collected with 160 VMs with the devstack default settings.
> >> >
> >> > And if you wonder how other services are doing meanwhile, here is a
> >> > bonus chart.
> >> >
> >> > The ovs agent was restarted 3 times with of_interface=native, then 3
> >> > times with of_interface=ovs-ofctl.
> >> >
> >> > As the test machine has 16 CPUs, 6.25% CPU usage can mean a single
> >> > threaded process is CPU bound.
> >> >
> >> > Frankly, the OVS agent has less room for improvement than
> >> > other services.  Also, it might be fun to draw similar charts for
> >> > other types of workloads.
> >> >
> >> >
> >> > __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe: 
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 

Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-14 Thread Takashi Yamamoto
welcome, hirofumi!

On Fri, Apr 15, 2016 at 11:40 AM, Akihiro Motoki  wrote:
> It's been over a week.
> I'd like to welcome Hirofumi to the neutron core reviewer team!
>
> Akihiro
>
> 2016-04-08 13:34 GMT+09:00 Akihiro Motoki :
>> Hi Neutrinos,
>>
>> As the API Lieutenant of the Neutron team,
>> I would like to propose Hirofumi Ichihara (IRC: hichihara) as a member of
>> the Neutron core reviewer team, mainly focusing on the API/DB area.
>>
>> Hirofumi has been contributing to Neutron actively and constantly over
>> the recent two releases.
>> He was involved in key API/DB features in Mitaka, such as
>> tagging support and network availability zones.
>> I believe his knowledge and involvement will be a great addition to our team.
>> He has been reviewing constantly [1], and I expect him to continue this
>> work in Newton and later.
>>
>> Existing API/DB core reviewers (and other Neutron core reviewers),
>> please vote +1/-1 on the addition of Hirofumi to the team.
>>
>> Thanks!
>> Akihiro
>>
>>
>> [1] http://stackalytics.com/report/contribution/neutron/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-14 Thread Hirofumi Ichihara

Thank you all!

I'm happy to be part of the Neutron core team.
I will do my best to help the Neutron project.

Thanks,
Hirofumi

On 2016/04/15 11:40, Akihiro Motoki wrote:

It's been over a week.
I'd like to welcome Hirofumi to the neutron core reviewer team!

Akihiro

2016-04-08 13:34 GMT+09:00 Akihiro Motoki :

Hi Neutrinos,

As the API Lieutenant of the Neutron team,
I would like to propose Hirofumi Ichihara (IRC: hichihara) as a member of the
Neutron core reviewer team, mainly focusing on the API/DB area.

Hirofumi has been contributing to Neutron actively and constantly over the
recent two releases.
He was involved in key API/DB features in Mitaka, such as
tagging support and network availability zones.
I believe his knowledge and involvement will be a great addition to our team.
He has been reviewing constantly [1], and I expect him to continue this work
in Newton and later.

Existing API/DB core reviewers (and other Neutron core reviewers),
please vote +1/-1 on the addition of Hirofumi to the team.

Thanks!
Akihiro


[1] http://stackalytics.com/report/contribution/neutron/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Newton Design Summit Session Schedule

2016-04-14 Thread Joshua Harlow

Howdy (oslo and other) folks,

I put up the timings/titles [1] and etherpads [2] for the Oslo summit
sessions (workrooms and fishbowls), which should be in a good state
(but may be edited a little). Feel free to suggest better titles or
better descriptions, or even fill out an etherpad or two with some of the
missing information :)


May the odds be ever in our favor (in austin),

-Josh

[1] 
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Oslo%3A

[2] https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Oslo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-04-14 Thread Joshua Harlow

Victor Stinner wrote:

Le 13/04/2016 22:54, Julien Danjou a écrit :

There's a bunch of projects that have no intention of using
oslo.context, so depending and referring to it by default is something
I'd love to fade away.


It looks like Oslo has an identity crisis :-)


Well, not entirely, IMHO. I think Oslo should obviously be easy to use
inside OpenStack as well as outside, with preference to inside
OpenStack (especially for libraries that start with 'oslo.*'). When we
can make it more usable outside OpenStack (without hurting the
mission of OpenStack being the primary target for 'oslo.*' libraries),
then obviously we should try to do our best to do so.

Will all the oslo.* libraries as they exist be forever perfect at a
given point in time? No. We can always do better, which is where the
community comes in :)




Basically the question looks like: should we make Oslo easier to use
outside "OpenStack"? If I summarized correctly the question, my answer
is YES!

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-14 Thread Akihiro Motoki
It's been over a week.
I'd like to welcome Hirofumi to the neutron core reviewer team!

Akihiro

2016-04-08 13:34 GMT+09:00 Akihiro Motoki :
> Hi Neutrinos,
>
> As the API Lieutenant of the Neutron team,
> I would like to propose Hirofumi Ichihara (IRC: hichihara) as a member of
> the Neutron core reviewer team, mainly focusing on the API/DB area.
>
> Hirofumi has been contributing to Neutron actively and constantly over the
> recent two releases.
> He was involved in key API/DB features in Mitaka, such as
> tagging support and network availability zones.
> I believe his knowledge and involvement will be a great addition to our team.
> He has been reviewing constantly [1], and I expect him to continue this
> work in Newton and later.
>
> Existing API/DB core reviewers (and other Neutron core reviewers),
> please vote +1/-1 on the addition of Hirofumi to the team.
>
> Thanks!
> Akihiro
>
>
> [1] http://stackalytics.com/report/contribution/neutron/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-14 Thread Sheel Rana Insaan
Dear Tang,

Yes, I am ok with it.

Best Regards,
Sheel Rana
On Apr 15, 2016 6:59 AM, "Tang Chen"  wrote:

> Hi all,
>
> In yesterday's meeting, Dean, Richard and I have discussed the meeting
> time issue.
> The following two options work for us.
>
> E.1 Every two weeks (on even weeks) on Thursday at 1300 UTC in
>
> O.3 Every two weeks (on odd weeks) on Thursday at 1900 UTC in
>
>
> Sheel, are you OK with these two options ?
>
> Thanks. :)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] - About Openstack upgrade

2016-04-14 Thread Dolph Mathews
On Thu, Apr 14, 2016 at 8:40 PM, Kenny Ji-work  wrote:

> Hi all,
>
> We have deployed OpenStack Liberty in our online environment using
> DevStack. We want to upgrade to the newest version, Mitaka,
> so are there any tools or facilities to accomplish this? Thank you for
> answering!
>

Grenade [1] is designed to exercise DevStack upgrades across releases.

However, by "online environment", I hope you do not mean "production
environment." DevStack is a development environment [2], not a production
cloud intended for end users:

> It is used interactively as a development environment and as the basis
for much of the OpenStack project’s functional testing.

If you need to "upgrade" your development environment and do not intend to
test upgrades, I'd instead suggest nuking your environment and creating a
new one.

[1] https://github.com/openstack-dev/grenade
[2] http://devstack.org/


>
> Sincerely,
> Kenny Ji
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-14 Thread Shinobu Kinjo
How did you test it out?
Would you elaborate on this more?

Cheers,
Shinobu

On Fri, Apr 15, 2016 at 11:10 AM, Kenny Ji-work  wrote:
> Hi all,
>
> In an OpenStack Kilo environment, I tested the bandwidth in a scenario
> where VxLAN is used. The results show that VxLAN can only support up
> to 1 Gbit/s of bandwidth. Is this a bug or some other issue, or is there a
> hotfix to solve it? Thank you for answering!
>
> Sincerely,
> Kenny Ji
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-14 Thread Kenny Ji-work
Hi all,


In an OpenStack Kilo environment, I tested the bandwidth in a scenario where
VxLAN is used. The results show that VxLAN can only support up to 1 Gbit/s of
bandwidth. Is this a bug or some other issue, or is there a hotfix to solve
it? Thank you for answering!
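
For reference on methodology: the usual tool here is an iperf run between two
VMs on different compute hosts. A minimal socket-based probe in the same
spirit (pure Python, over loopback, so it demonstrates the measurement method
only, not VxLAN throughput) might look like:

```python
import socket
import threading
import time

def run_server(sock, received):
    # Accept one connection and count bytes until the peer closes.
    conn, _ = sock.accept()
    total = 0
    while True:
        data = conn.recv(1 << 16)
        if not data:
            break
        total += len(data)
    conn.close()
    received.append(total)

def measure(duration=1.0):
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    received = []
    t = threading.Thread(target=run_server, args=(server, received))
    t.start()

    client = socket.socket()
    client.connect(server.getsockname())
    chunk = b"x" * (1 << 16)
    deadline = time.time() + duration
    while time.time() < deadline:
        client.sendall(chunk)
    client.close()
    t.join()
    server.close()
    # Bytes received over the wall-clock window, converted to Gbit/s.
    return received[0] * 8 / duration / 1e9

if __name__ == "__main__":
    print("%.2f Gbit/s" % measure())
```

If a real iperf run between VMs on different hosts tops out at 1 Gbit/s, the
physical NIC or the VxLAN encapsulation path (e.g. missing hardware offload)
is the usual suspect, so stating the test setup would help narrow it down.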


Sincerely,
Kenny Ji


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread reedip banerjee
Speaking on behalf of the Tap-as-a-Service members, we would also be very
much interested in this initiative :)

On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka 
wrote:

> Cathy Zhang  wrote:
>
>
>> I think there is no formal spec or anything, just some emails around
>> there.
>>
>> That said, I don’t follow why it’s a requirement for SFC to switch to l2
>> agent extension mechanism. Even today, with SFC maintaining its own agent,
>> there are no clear guarantees for flow priorities that would avoid all
>> possible conflicts.
>>
> >> Cathy> There is no requirement for SFC to switch. My understanding is
> >> that the current L2 agent extension does not solve the conflicting-entry
> >> issue if two features inject table entries at the same priority. I think
> >> this new L2 agent effort is trying to come up with a mechanism to resolve
> >> this issue. Of course, if each feature (SFC or QoS) uses its own agent,
> >> then there is no coordination and no way to avoid conflicts.
>>
>
> Sorry, I probably used misleading wording. I meant, why do we consider the
> semantic flow management support in l2 agent extension framework a
> *prerequisite* for SFC to switch to l2 agent extensions? The existing
> framework should already allow SFC to achieve what you have in the
> subproject tree implemented as a separate agent (essentially a fork of OVS
> agent). It will also set SFC to use standard extension mechanisms instead
> of hacky inheritance from OVS agent classes. So even without the strict
> semantic flow management, there is benefit for the subproject.
>
> With that in mind, I would split this job into 3 pieces:
> * first, adopt l2 agent extension mechanism for SFC functionality
> (dropping custom agent);
> * then, work on semantic flow management support in OVS agent API class
> [1];
> * once the feature emerges, switch SFC l2 agent extension to the new
> framework to manage SFC flows.
>
> I would at least prioritize the first point and target it to Newton-1.
> Other bullet points may take significant time to bake.
>
> [1]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py
>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] - About Openstack upgrade

2016-04-14 Thread Kenny Ji-work
Hi all,


We have deployed OpenStack Liberty in our online environment using DevStack.
We want to upgrade to the newest version, Mitaka. Are there any tools or
facilities to accomplish this? Thank you for answering!


Sincerely,
Kenny Ji


Re: [openstack-dev] [osc] Meeting time preferences for OSC team

2016-04-14 Thread Tang Chen

Hi all,

In yesterday's meeting, Dean, Richard, and I discussed the meeting
time issue.

The following two options work for us.

E.1 Every two weeks (on even weeks) on Thursday at 1300 UTC in

O.3 Every two weeks (on odd weeks) on Thursday at 1900 UTC in


Sheel, are you OK with these two options ?

Thanks. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-14 Thread Anita Kuno
On 04/14/2016 08:52 PM, Bhandaru, Malini K wrote:
> Hello Michael!
> 
> Quite recently David Lyle hosted the Horizon midcycle at Intel's Portland
> offices – they enjoyed getting out and dining in the cafeteria.
> Internet accounts were set up ahead of schedule, but IRC was still an issue.
> We shall figure out proxy settings to tackle this.
> We are also looking for rooms that have easy access to facilities – the
> previous Nova/Ironic midcycle Intel hosted was unfortunately next to a
> construction zone.
> 
> And we note preference for Portland over San Antonio in the summer! ☺
> 
> Regards
> Malini

Hi Malini:

I'll just confirm that Intel is aware that gerrit is now on a new server
with new ips, as of April 11th, so this past Monday.

Full details are in this email:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088985.html

Thank you to you and Intel,
Anita.

> 
> From: Michael Still [mailto:mi...@stillhq.com]
> Sent: Wednesday, April 13, 2016 10:36 PM
> To: OpenStack Development Mailing List 
> Cc: Ding, Jian-feng ; Bhargava, Ruchi 
> ; Fuller, Michael ; 
> Apostol, Michael J 
> Subject: Re: [openstack-dev] [nova] Newton midcycle planning
> 
> 
> We had issues with physical security and unfiltered internet access last time 
> we were in Hillsboro. Do we know if those issues are now resolved?
> 
> Michael
> On 13 Apr 2016 9:08 AM, "Bhandaru, Malini K" 
> > wrote:
> Hi Everyone!
> 
> Intel would be pleased to host the Nova midcycle meetup either at San 
> Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11 (July 
> 18-22) as preferred by the Nova community.
> 
> Regards
> Malini
> 
>  Forwarded Message 
> Subject:Re: [openstack-dev] [nova] Newton midcycle planning
> Date:   Tue, 12 Apr 2016 08:54:17 +1000
> From:   Michael Still >
> Reply-To:   OpenStack Development Mailing List (not for usage questions)
> >
> To: OpenStack Development Mailing List (not for usage questions)
> >
> 
> 
> 
> On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann 
>  
> >> wrote:
> 
> A few people have been asking about planning for the nova midcycle
> for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
> R-11 work the best. R-14 is close to the US July 4th holiday, R-13
> is during the week of the US July 4th holiday, and R-12 is the week
> of the n-2 milestone.
> 
> R-16 is too close to the summit IMO, and R-10 is pushing it out too
> far in the release. I'd be open to R-14 though but don't know what
> other people's plans are.
> 
> As far as a venue is concerned, I haven't heard any offers from
> companies to host yet. If no one brings it up by the summit, I'll
> see if hosting in Rochester, MN at the IBM site is a possibility.
> 
> 
> Intel at Hillsboro had expressed an interest in hosting the N midcycle last
> release, so they might still be an option? I don't recall any other possible
> hosts in the queue, but it's possible I've missed someone.
> 
> Michael
> 
> --
> Rackspace Australia
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Authorization mechanisms for each user

2016-04-14 Thread Yuki Nisiwaki
Hi OpenStackers working on Congress,

I want to implement an authorization mechanism for each user, not
role-based.
For example, user A can change security groups, but user B can't change
security groups, like the IAM feature of AWS.

To achieve it,
I'm considering whether I can use Congress features.
I think I can achieve it with the following steps:
1. Create a policy for each user with Datalog in Congress.
2. Prepare a WSGI filter for each project that confirms each user's
authorization with Congress.

I think this use case is very common, and there must be someone who has
thought the same thing.
But there is no information about it on any website (blogs, summit
presentations).
So, is there anyone who has achieved it?
Or does this approach have any weak points?
If you are interested in this approach or have thought the same thing, I
would like to hear from you.
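
A minimal sketch of step 2: a WSGI filter that consults a per-user policy
before letting a request through. The in-memory POLICY table and the
is_allowed() lookup are stand-ins for a real Congress Datalog query, and the
header name and paths are assumptions for illustration only:

```python
# Hypothetical per-user authorization filter; POLICY and is_allowed()
# stand in for a real Congress Datalog policy query.
POLICY = {
    ("userA", "PUT", "/v2.0/security-groups"): True,
    ("userB", "PUT", "/v2.0/security-groups"): False,
}

def is_allowed(user, method, path):
    # In the real design this would ask Congress about the user's policy.
    return POLICY.get((user, method, path), False)

class PerUserAuthFilter(object):
    """WSGI middleware that rejects requests the per-user policy forbids."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        user = environ.get("HTTP_X_USER_NAME", "")
        if not is_allowed(user, environ["REQUEST_METHOD"],
                          environ["PATH_INFO"]):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"policy denies this action"]
        return self.app(environ, start_response)

# Tiny demo app plus a fake request to exercise the filter.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

def call(filt, user):
    status = []
    environ = {"REQUEST_METHOD": "PUT",
               "PATH_INFO": "/v2.0/security-groups",
               "HTTP_X_USER_NAME": user}
    body = filt(environ, lambda s, h: status.append(s))
    return status[0], b"".join(body)

filtered = PerUserAuthFilter(app)
print(call(filtered, "userA"))  # ('200 OK', b'ok')
print(call(filtered, "userB"))  # ('403 Forbidden', b'policy denies this action')
```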


Best regards

Yuki Nishiwaki
NTT Communications
Technology development
Cloud Core Technology Unit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-14 Thread Bhandaru, Malini K
Hello Michael!

Quite recently David Lyle hosted the Horizon midcycle at Intel's Portland
offices – they enjoyed getting out and dining in the cafeteria.
Internet accounts were set up ahead of schedule, but IRC was still an issue.
We shall figure out proxy settings to tackle this.
We are also looking for rooms that have easy access to facilities – the
previous Nova/Ironic midcycle Intel hosted was unfortunately next to a
construction zone.

And we note preference for Portland over San Antonio in the summer! ☺

Regards
Malini

From: Michael Still [mailto:mi...@stillhq.com]
Sent: Wednesday, April 13, 2016 10:36 PM
To: OpenStack Development Mailing List 
Cc: Ding, Jian-feng ; Bhargava, Ruchi 
; Fuller, Michael ; 
Apostol, Michael J 
Subject: Re: [openstack-dev] [nova] Newton midcycle planning


We had issues with physical security and unfiltered internet access last time 
we were in Hillsboro. Do we know if those issues are now resolved?

Michael
On 13 Apr 2016 9:08 AM, "Bhandaru, Malini K" 
> wrote:
Hi Everyone!

Intel would be pleased to host the Nova midcycle meetup either at San 
Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11 (July 
18-22) as preferred by the Nova community.

Regards
Malini

 Forwarded Message 
Subject:Re: [openstack-dev] [nova] Newton midcycle planning
Date:   Tue, 12 Apr 2016 08:54:17 +1000
From:   Michael Still >
Reply-To:   OpenStack Development Mailing List (not for usage questions)
>
To: OpenStack Development Mailing List (not for usage questions)
>



On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann 
 
>> wrote:

A few people have been asking about planning for the nova midcycle
for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
R-11 work the best. R-14 is close to the US July 4th holiday, R-13
is during the week of the US July 4th holiday, and R-12 is the week
of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too
far in the release. I'd be open to R-14 though but don't know what
other people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll
see if hosting in Rochester, MN at the IBM site is a possibility.


Intel at Hillsboro had expressed an interest in hosting the N midcycle last
release, so they might still be an option? I don't recall any other possible
hosts in the queue, but it's possible I've missed someone.

Michael

--
Rackspace Australia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Ihar Hrachyshka

Cathy Zhang  wrote:



I think there is no formal spec or anything, just some emails around there.

That said, I don’t follow why it’s a requirement for SFC to switch to l2  
agent extension mechanism. Even today, with SFC maintaining its own  
agent, there are no clear guarantees for flow priorities that would avoid  
all possible conflicts.


Cathy> There is no requirement for SFC to switch. My understanding is  
that current L2 agent extension does not solve the conflicting entry  
issue if two features inject the same priority table entry. I think this  
new L2 agent effort is trying to come up with a mechanism to resolve this  
issue. Of course if each feature (SFC or QoS) uses its own agent, then  
there is no coordination and no way to avoid conflicts.


Sorry, I probably used misleading wording. I meant, why do we consider the  
semantic flow management support in l2 agent extension framework a  
*prerequisite* for SFC to switch to l2 agent extensions? The existing  
framework should already allow SFC to achieve what you have in the  
subproject tree implemented as a separate agent (essentially a fork of OVS  
agent). It will also set SFC to use standard extension mechanisms instead  
of hacky inheritance from OVS agent classes. So even without the strict  
semantic flow management, there is benefit for the subproject.


With that in mind, I would split this job into 3 pieces:
* first, adopt l2 agent extension mechanism for SFC functionality (dropping  
custom agent);

* then, work on semantic flow management support in OVS agent API class [1];
* once the feature emerges, switch SFC l2 agent extension to the new  
framework to manage SFC flows.


I would at least prioritize the first point and target it to Newton-1.  
Other bullet points may take significant time to bake.


[1]  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py
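To illustrate what the first bullet buys structurally, here is an abstract sketch of the extension pattern: the feature plugs into the agent through a narrow API object rather than inheriting from the OVS agent classes. All class and method names below are invented for illustration; they are not neutron's actual interfaces.

```python
# Illustrative sketch only: a minimal stand-in for the l2 agent extension
# shape discussed above. Names here are hypothetical, not neutron's API.
import abc


class AgentExtension(abc.ABC):
    """Hypothetical base class: one extension per feature (QoS, SFC, ...)."""

    @abc.abstractmethod
    def initialize(self, agent_api):
        """Receive an agent-provided API object instead of subclassing."""

    @abc.abstractmethod
    def handle_port(self, context, port):
        """React to a port the agent has wired up."""


class SfcAgentExtension(AgentExtension):
    """SFC as an extension: no inheritance from the OVS agent classes."""

    def initialize(self, agent_api):
        self.agent_api = agent_api
        self.flows = []

    def handle_port(self, context, port):
        # Install SFC flows through the agent-provided API rather than
        # manipulating the agent's bridges directly.
        self.flows.append(('sfc-flow-for', port['id']))


class FakeAgentApi:
    """Stand-in for the API object the OVS agent would hand to extensions."""


ext = SfcAgentExtension()
ext.initialize(FakeAgentApi())
ext.handle_port({}, {'id': 'port-1'})
print(ext.flows)  # [('sfc-flow-for', 'port-1')]
```

The point of the shape is that the agent, not the feature, owns the bridges, which is what makes the later "semantic flow management" step possible.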


Ihar



Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-14 Thread Augustina Ragwitz
Do ittt!



-- 
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy



Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-14 Thread melanie witt

On Thu, 14 Apr 2016 13:17:48 -0500, Dean Troyer wrote:

My only real concern is you've implied a structure that will potentially
have many combinations of configurations and those will bitrot.  How
different are x86 and s390 arch in local.conf? (I've never seen an s390
local.conf!)  I do know there are few, if any, differences between most
ubuntu and fedora configs, we abstract most of that out in the scripts.

I wonder if the grouping of configs might be better suited along
use-case lines?  nova-net vs neutron, single- vs multi-node, etc.


+1. This already exists to some extent in the devstack documentation but 
it's a bit scattered [1][2] and I rummage around to find it when I need 
it. For multi-node I have also gone to find a recent multi-node tempest 
job run to copy some devstack local.conf settings before.


So, I agree it would be helpful to have some use-case based samples in 
an easy to navigate place.


-melanie

[1] 
http://docs.openstack.org/developer/devstack/configuration.html#multi-node-setup

[2] http://docs.openstack.org/developer/devstack/configuration.html#neutron
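For what it's worth, a use-case-grouped sample for the multi-node case might look something like the following controller-node local.conf. The values are illustrative placeholders, not a tested configuration; a compute node would additionally point SERVICE_HOST at the controller's address and enable only the compute-side services.

```ini
[[local|localrc]]
# Controller node of a two-node setup -- illustrative values only
HOST_IP=192.168.42.11
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
MULTI_HOST=1
```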




Re: [openstack-dev] [Infra] Generic solution for bare metal testing

2016-04-14 Thread Ben Nemec
On 04/12/2016 09:17 AM, Jim Rollenhagen wrote:
> On Thu, Apr 07, 2016 at 02:42:09AM +, Jeremy Stanley wrote:
>> On 2016-04-06 18:33:06 +0300 (+0300), Igor Belikov wrote:
>> [...]
>>> I suppose there are security issues when we talk about running
>>> custom code on bare metal slaves, but I'm not sure I understand
>>> the difference from running custom code on a virtual machine if
>>> bare metal nodes are isolated, don't contain any sensitive data
>>> and follow a regular redeployment procedure.
>> [...]
>>
>> With a virtual machine, you can delete it and create a new one.
>> Nothing remains behind.
>>
>> With a physical machine, arbitrary code running in the scope of a
>> test with root access can do _nasty_ things like backdoor your
>> server firmware with shims that even masquerade as the firmware
>> updater and persist through redeployments that include firmware
>> refreshes.
>>
>> Physical servers persist, and are therefore vulnerable in this
>> scenario in ways which virtual servers are not.
> 
> Right, it's a huge effort to run a secure bare metal cloud running
> arbitrary code. Homogeneous hardware and vendor cooperation are a must,
> and that's only part of it.
> 
> I don't foresee the infra team having the resources to take on such a
> task any time soon (but of course, I'm not well-informed on the infra
> team's workload).
> 
> Another option for baremetal in the gate is baremetal flavors in other
> public clouds - Rackspace has one (OnMetal) but doesn't yet support
> custom images, and others have launched or are working on one. Once
> there's two clouds that support baremetal with custom images, we could
> put those resources in the upstream CI pool.

Depending on exactly what you need baremetal for, we're getting very
close to OVB[1] being usable in an unmodified cloud, especially for
one-time-use CI environments.  I just merged [2] from Steve Baker which
enables pxe booting without Nova hacks, and I've done some successful
tests locally using the Neutron port-security extension to allow PXE
deployment of instances.  The port-security stuff isn't in the git repo
yet because we need to make it compatible with Kilo-based clouds, but
Steve tells me he has a way to make that work.

This obviously doesn't help with the nested virt problem, if that's what
you need baremetal for, but for testing baremetal-style deployments it
works quite well in my experience.  We've started work to make use of it
for TripleO CI[3], and it's already being used for some of our
downstream testing.

I don't know that we're quite ready to just run in regular infra yet
because we do need the ability to upload our custom ipxe-boot image and
we need a cloud at least new enough for the port-security to work (and I
don't know exactly how new is new enough, other than it worked in a
Neutron build from a couple of weeks ago).  It also deploys the VMs with
Heat, so we need that in addition to all the other usual suspects.

For the moment, our plan in TripleO is to re-deploy our rack with an
OVB-friendly cloud and stay separate, but I believe eventually we'd like
to run in a regular infra environment and throw that hardware into the
infra pool (don't quote me on this, I don't have any direct control over
it, but this is my understanding of the plan).  We're way closer to
being able to do that than I had thought a month ago, so I wanted to
bring it up as part of this discussion.

1: https://github.com/cybertron/openstack-virtual-baremetal
2:
https://github.com/cybertron/openstack-virtual-baremetal/commit/915269adc73475c1ee6ac722534386ef5dc0250c
3: https://review.openstack.org/#/c/295243

-Ben



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Cathy Zhang
Thanks for everyone's reply! 

Here is the summary based on the replies I received: 

1.  We should have a meet-up for these two topics. The "to" list is the people 
who have expressed interest in these topics. 
I am thinking around lunch time on Tuesday or Wednesday since some of 
us will fly back on Friday morning/noon. 
If this time is OK with everyone, I will find a place and let you know 
where and what time to meet. 

2.  There is a bug opened for the QoS Flow Classifier 
https://bugs.launchpad.net/neutron/+bug/1527671
We can either change the bug title and modify the bug details or start with a 
new one for the common FC which provides info on all requirements needed by all 
relevant use cases. There is a bug opened for OVS agent extension 
https://bugs.launchpad.net/neutron/+bug/1517903

3.  There is some very rough and preliminary work ("ugly", as Sean put it :-)) 
on a common FC at https://github.com/openstack/neutron-classifier which we can 
see how to leverage. There is also an SFC API spec which covers the FC API for SFC usage 
https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst,
the following is the CLI version of the Flow Classifier for your reference:

neutron flow-classifier-create [-h]
[--description ]
[--protocol ]
[--ethertype ]
[--source-port :]
[--destination-port :]
[--source-ip-prefix ]
[--destination-ip-prefix ]
[--logical-source-port ]
[--logical-destination-port ]
[--l7-parameters ] FLOW-CLASSIFIER-NAME

The corresponding code is here 
https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions

4.  We should come up with a formal Neutron spec for FC and another one for OVS 
Agent extension and get everyone's review and approval. Here is the etherpad 
capturing our previous requirements discussion on the OVS agent (thanks, David, 
for the link! I remember we had this discussion before):
https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion


More inline. 

Thanks,
Cathy


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
Sent: Thursday, April 14, 2016 3:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Cathy Zhang  wrote:

> Hi everyone,
> Per Armando’s request, Louis and I are looking into the following 
> features for Newton cycle.
> · Neutron Common FC used for SFC, QoS, Tap as a service etc.,
> · OVS Agent extension
> Some of you might know that we already developed a FC in 
> networking-sfc project and QoS also has a FC. It makes sense that we 
> have one common FC in Neutron that could be shared by SFC, QoS, Tap as a 
> service etc.
> features in Neutron.

I don’t actually know of any classifier in QoS. It’s only planned to emerge, 
but there are no specs or anything specific to the feature.

Anyway, I agree that classifier API belongs to core neutron and should be 
reused by all interested subprojects from there.

> Different features may extend OVS agent and add different new OVS flow 
> tables to support their new functionality. A mechanism is needed to 
> ensure consistent OVS flow table modification when multiple features 
> co-exist. AFAIK, there is some preliminary work on this, but it is not 
> a complete solution yet.

I think there is no formal spec or anything, just some emails around there.

That said, I don’t follow why it’s a requirement for SFC to switch to l2 agent 
extension mechanism. Even today, with SFC maintaining its own agent, there are 
no clear guarantees for flow priorities that would avoid all possible conflicts.

Cathy> There is no requirement for SFC to switch. My understanding is that 
current L2 agent extension does not solve the conflicting entry issue if two 
features inject the same priority table entry. I think this new L2 agent effort 
is try to come up with a mechanism to resolve this issue. Of course if each 
feature( SFC or Qos) uses its own agent, then there is no coordination and no 
way to avoid conflicts. 

> We will like to start these effort by collecting requirements and then 
> posting specifications for review. If any of you would like to join 
> this effort, please chime in. We can set up a meet-up session in the 
> Summit to discuss this face-in-face.

Great. Let’s have a meetup for this topic.

Ihar


Re: [openstack-dev] Nova quota statistics counting issue

2016-04-14 Thread Andrew Laski
 
 
 
On Wed, Apr 13, 2016, at 12:27 PM, Dmitry Stepanenko wrote:
> Hi Team,
> I worked on the nova quota statistics issue
> (https://bugs.launchpad.net/nova/+bug/1284424) happening when nova-*
> processes are restarted while removing instances, and was able to
> reproduce it. For the repro I used devstack and started nova-api and
> nova-compute in separate screen windows, killing them with Ctrl+C. I
> found this issue happens if nova-* processes are killed after the
> instance was deleted but right before the quota commit procedure
> finishes. We discussed these results with Markus Zoeller and decided
> that even though killing nova-* processes is a somewhat exotic event,
> this should still be fixed because quota counting affects billing and
> is very important for us.
 
+1. This is very important to get right. And while killing Nova
processes is exotic during normal operation it could happen for upgrades
and that should not cause quota issues.
 
> So, we need to introduce some mechanism that will prevent us from
> reaching inconsistent states in terms of quotas. In other words, this
> mechanism should ensure that the instance create/remove operation and
> the quota usage recount operation either both happen or both do not
> happen.
 
There's been some discussion around this, and there are other ML threads
somewhat discussing it in the context of moving quota enforcement into a
centralized service/library. There are a couple of approaches that could
be taken for tackling quotas, but a larger issue is that we have no good
way of knowing if some change helps the situation. What we need before
making any changes  is a functional test that reproduces the issue.
 
Once that is in place I would love to see the removal of the
quota_usages table and reservations and have quota be based on actual
usage represented in the instances table. But there are a lot of other
viewpoints and I think work in this area is going to have to start
making small incremental improvements.
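The "base quota on actual usage" idea can be illustrated with a toy schema (an in-memory sqlite stand-in; the table and column names are invented for the sketch, not nova's real schema):

```python
import sqlite3

# Toy model of "quota == count of actual resource records".
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances '
             '(id INTEGER PRIMARY KEY, project TEXT, deleted INTEGER DEFAULT 0)')
conn.executemany('INSERT INTO instances (project) VALUES (?)', [('p1',)] * 3)
conn.commit()

QUOTA = 3

def usage(db, project):
    # Usage is always derived from the instance records themselves, so
    # there is no separate usages table that can drift out of sync.
    return db.execute('SELECT COUNT(*) FROM instances '
                      'WHERE project = ? AND deleted = 0',
                      (project,)).fetchone()[0]

def create_instance(project):
    with conn:  # check + insert commit (or roll back) as one unit
        if usage(conn, project) >= QUOTA:
            raise Exception('quota exceeded')
        conn.execute('INSERT INTO instances (project) VALUES (?)', (project,))

def delete_instance(instance_id):
    # If the process dies before commit, the row simply stays intact:
    # there is no separate "quota commit" step that can be lost.
    with conn:
        conn.execute('UPDATE instances SET deleted = 1 WHERE id = ?',
                     (instance_id,))

delete_instance(1)
print(usage(conn, 'p1'))  # 2
create_instance('p1')
print(usage(conn, 'p1'))  # 3
```

With this shape, a crash between "delete instance" and "recount quota" cannot happen, because there is nothing to recount.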
 
 
> Any ideas how to do that properly?
> Kind regards,
> Dmitry
> -
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-14 Thread Blazej Kwasniak

Hi,

To be more precise, libvirt does that when nova-compute tells libvirt:

"Please create a VM with such-and-such an XML config".

Inside this XML you can see the ports and the bridges where they should be created.

Regards

On 04/12/16 18:40, Sławek Kapłoński wrote:

Hello,

It's nova-compute service which is configuring it. This service is running on
compute node: http://docs.openstack.org/developer/nova/architecture.html





Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-14 Thread Praveen Yalagandula
Zane,
Thanks for the reply; this is the information I was looking for.
Cheers,
Praveen

On Thu, Apr 14, 2016 at 10:51 AM Zane Bitter  wrote:

> On 11/04/16 14:06, Praveen Yalagandula wrote:
> > Hi,
> >
> > We are developing a custom heat resource plug-in and wondering about how
> > to handle plug-in upgrades. As our product's object model changes with
> > new releases, we will need to release updated resource plug-in code too.
>
> So, in the first instance, I would recommend trying very hard not to do
> this. If you can, try to keep a stable interface even if the product
> changes underneath. (You can still add properties, as long as they are
> not required, but don't remove, rename, or otherwise make
> backward-incompatible changes to properties in the resource schema.)
> That said, I realise this is not always possible because of reasons.
>
> > However, the "properties" stored in the heat DB for the existing
> > resources, whose definitions have been upgraded, need to be updated too.
> > Was there any discussion on this?
>
> I believe this is what you need:
>
>
> http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/translation.py
>
> Documentation is unfortunately light on the ground, but you should be
> able to find a few examples in the core resources. Here is the spec:
>
>
> http://specs.openstack.org/openstack/heat-specs/specs/liberty/deprecating-improvements.html
>
> cheers,
> Zane.
>


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-14 Thread Dean Troyer
On Thu, Apr 14, 2016 at 11:58 AM, Markus Zoeller wrote:

> Let me know what you think: https://review.openstack.org/#/c/305967/


My only real concern is you've implied a structure that will potentially
have many combinations of configurations and those will bitrot.  How
different are x86 and s390 arch in local.conf? (I've never seen an s390
local.conf!)  I do know there are few, if any, differences between most
ubuntu and fedora configs, we abstract most of that out in the scripts.

I wonder if the grouping of configs might be better suited along use-case
lines?  nova-net vs neutron, single- vs multi-node, etc.

Thanks for getting this started!
dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [neutron][taas] Problem receiving mirrored ingress traffic and a solution suggestion

2016-04-14 Thread Anil Rao
Hi Simhon,

We are aware of this problem. The main issue is that packets entering br-int 
from br-ex aren’t tagged with a VLAN id (unlike packets entering br-int from 
br-tun). Since our overall design is meant to support multi-node production 
environments we have to consider the packets coming in from br-tun. Your 
suggested fix might suffice for a single-node DevStack environment but I don’t 
think it is generic enough to support the multi-node situation.

We are looking into this and hope to come up with a fix that works for both 
cases. We’ll keep you updated.

Thanks,
Anil

From: Simhon Doctori שמחון דוקטורי [mailto:simh...@gmail.com]
Sent: Wednesday, April 13, 2016 12:56 AM
To: openstack-dev@lists.openstack.org
Cc: yossi barshishat יוסי ברששת
Subject: [openstack-dev] [neutron][taas] Problem receiving mirrored ingress 
traffic and a solution suggestion

Anil and all Hi,
Continuing the discussion from IRC about the problem of traffic incoming to 
a VM not being mirrored. Indeed, it does look like the bug mentioned at 
https://bugs.launchpad.net/tap-as-a-service/+bug/1544176.
I am using Liberty, ovs 2.0.2, Devstack, Single node.

As I mentioned, the problem is due to a rule whose match includes the VLAN tag. 
Since the VM port receives data after OVS has stripped the virtual network's 
VLAN, there is no reason to match on a VLAN; this rule does not get any hits:

cookie=0x0, duration=59625.138s, table=0, n_packets=0, n_bytes=0, 
idle_age=59625, priority=20,dl_vlan=3,dl_dst=fa:16:3e:d3:60:16 
actions=NORMAL,mod_vlan_vid:3901,output:11
IMHO, the solution should be a rule where there is no vlan in match AND an 
action where output port is the destination port. Since you already have a 
match of a destination mac, why not output it to the destination vm interface, 
together with the patch-int-tap interface? This rule works for me:

cookie=0x0, duration=20.422s, table=0, n_packets=42, n_bytes=3460, idle_age=1, 
priority=20,dl_dst=fa:16:3e:d3:60:16 
actions=output:14,mod_vlan_vid:3901,output:11
As you can see, there is no vlan in match, and two output ports - 14 for the vm 
interface, and 11 for the patch interface together with the vlan.
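As a concrete sketch of the suggested rule (the MAC, port numbers and VLAN id below are the ones from the example above and are hypothetical for any other deployment; a real fix would derive them from the agent's port data):

```shell
# Values from the example above (hypothetical elsewhere).
DST_MAC="fa:16:3e:d3:60:16"
VM_PORT=14          # the destination VM's interface on br-int
TAP_PATCH_PORT=11   # the patch-int-tap interface
TAAS_VLAN=3901      # the tap service's VLAN

# No dl_vlan in the match; two outputs: the VM port (untagged) and the
# tap patch port (tagged with the tap service VLAN).
FLOW="table=0,priority=20,dl_dst=${DST_MAC},actions=output:${VM_PORT},mod_vlan_vid:${TAAS_VLAN},output:${TAP_PATCH_PORT}"
echo "${FLOW}"

# On a live node this would be installed with (requires OVS and privileges):
# sudo ovs-ofctl add-flow br-int "${FLOW}"
```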

Simhon Doctori
imVision Technologies.


Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-14 Thread Zane Bitter

On 11/04/16 14:06, Praveen Yalagandula wrote:

Hi,

We are developing a custom heat resource plug-in and wondering about how
to handle plug-in upgrades. As our product's object model changes with
new releases, we will need to release updated resource plug-in code too.


So, in the first instance, I would recommend trying very hard not to do 
this. If you can, try to keep a stable interface even if the product 
changes underneath. (You can still add properties, as long as they are 
not required, but don't remove, rename, or otherwise make 
backward-incompatible changes to properties in the resource schema.) 
That said, I realise this is not always possible because of reasons.



However, the "properties" stored in the heat DB for the existing
resources, whose definitions have been upgraded, need to be updated too.
Was there any discussion on this?


I believe this is what you need:

http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/translation.py

Documentation is unfortunately light on the ground, but you should be 
able to find a few examples in the core resources. Here is the spec:


http://specs.openstack.org/openstack/heat-specs/specs/liberty/deprecating-improvements.html

cheers,
Zane.



Re: [openstack-dev] [heat] convergence cancel messages

2016-04-14 Thread Zane Bitter

On 11/04/16 04:51, Anant Patil wrote:

On 14-Mar-16 14:40, Anant Patil wrote:

On 24-Feb-16 22:48, Clint Byrum wrote:

Excerpts from Anant Patil's message of 2016-02-23 23:08:31 -0800:

Hi,

I would like the discuss various approaches towards fixing bug
https://launchpad.net/bugs/1533176

When convergence is on, and if the stack is stuck, there is no way to
cancel the existing request. This feature was not implemented in
convergence, as the user can again issue an update on an in-progress
stack. But if a resource worker is stuck, the new update will wait
for-ever on it and the update will not be effective.

The solution is to implement cancel request. Since the work for a stack
is distributed among heat engines, the cancel request will not work as
it does in legacy way. Many or all of the heat engines might be running
worker threads to provision a stack.

I could think of two options which I would like to discuss:

(a) When a user triggered cancel request is received, set the stack
current traversal to None or something else other than current
traversal. With this the new check-resources/workers will never be
triggered. This is okay as long as the worker(s) is not stuck. The
existing workers will finish running, and no new check-resource
(workers) will be triggered, and it will be a graceful cancel.  But the
workers that are stuck will be stuck for-ever till stack times-out.  To
take care of such cases, we will have to implement logic of "polling"
the DB at regular intervals (may be at each step() of scheduler task)
and bail out if the current traversal is updated. Basically, each worker
will "poll" the DB to see if the current traversal is still valid and if
not, stop itself. The drawback of this approach is that all the workers
will be hitting the DB and incur a significant overhead.  Besides, all
the stack workers irrespective of whether they will be cancelled or not,
will keep on hitting DB. The advantage is that it probably is easier to
implement. Also, if the worker is stuck in particular "step", then this
approach will not work.

(b) Another approach is to send cancel message to all the heat engines
when one receives a stack cancel request. The idea is to use the thread
group manager in each engine to keep track of threads running for a
stack, and stop the thread group when a cancel message is received. The
advantage is that the messages to cancel stack workers is sent only when
required and there is no other over-head. The draw-back is that the
cancel message is 'broadcasted' to all heat engines, even if they are
not running any workers for the given stack, though, in such cases, it
will be a just no-op for the heat-engine (the message will be gracefully
discarded).

Oh hah, I just sent (b) as an option to avoid (a) without really
thinking about (b) again.

I don't think the cancel broadcasts are all that much of a drawback. I
do think you need to rate limit cancels though, or you give users the
chance to DDoS the system.

There is no easier way to restrict the cancels, so I am choosing the
option of having a "monitoring task" which runs in a separate thread. This
task periodically polls DB to check if the current traversal is updated.
When a cancel message is received, the current traversal is updated to
new id and monitoring task will stop the thread group running worker
threads for previous traversal (traversal uniquely identifies a stack
operation).

Also, this will help with checking timeout. Currently each worker checks
for timeout.  I can move this to the monitoring thread which will stop
the thread group when stack times out.
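A toy sketch of this monitoring-task pattern, using plain threading stand-ins (the real implementation uses eventlet thread groups and the heat DB; the `db` dict below just plays the role of the stack's current-traversal record):

```python
import threading
import time

# Stand-in for the DB row holding the stack's current traversal id.
db = {'current_traversal': 'T1'}
stop = threading.Event()
steps_done = []

def worker(traversal):
    # A convergence worker: does small steps and checks the stop flag
    # between them, so the monitor can cancel it.
    while not stop.is_set():
        steps_done.append(traversal)
        time.sleep(0.01)

def monitor(traversal, interval=0.02):
    # Only this one thread polls the DB, instead of every worker.
    while db['current_traversal'] == traversal:
        time.sleep(interval)
    stop.set()  # traversal changed: cancel this stack's workers

w = threading.Thread(target=worker, args=('T1',))
m = threading.Thread(target=monitor, args=('T1',))
w.start(); m.start()

time.sleep(0.05)
db['current_traversal'] = 'T2'   # a cancel request updates the traversal
w.join(timeout=5); m.join(timeout=5)
print(stop.is_set())  # True: workers for T1 were stopped
```

The same monitor loop can also compare elapsed time against the stack timeout and set the stop flag, which is the timeout-checking consolidation mentioned above.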

It is better to restrict the actions within the heat engine than to load
the AMQP; that can lead to potentially complicated issues.

-- Anant

I almost forgot to update this thread.

After lot of ping-pong in my head, I have taken a different approach to
implement stack-update-cancel when convergence is on. Polling for
traversal update in each heat engine worker is not efficient method and
so is the broadcasting method.

In the new implementation, when a stack-cancel-update request is
received, the heat engine worker will immediately cancel eventlets
running locally for the stack. Then it sends cancel messages to only
those heat engines who are working on the stack, one request per engine.


I'm concerned that this is forgetting the reason we didn't implement 
this in convergence in the first place. The purpose of 
stack-cancel-update is to roll the stack back to its pre-update state, 
not to unwedge blocked resources.


The problem with just killing a thread is that the resource gets left in 
an unknown state. (It's slightly less dangerous if you do it only during 
sleeps, but still the state is indeterminate.) As a result, we mark all 
such resources UPDATE_FAILED, and anything (apart from nested stacks) in 
a FAILED state is liable to be replaced on the next update (straight 
away in the case of a rollback). That's why in convergence we just let 
resources run 

Re: [openstack-dev] [cross-project] [all] Quotas and the need for reservation

2016-04-14 Thread Salvatore Orlando
On 12 April 2016 at 15:48, Andrew Laski  wrote:

>
>
> On Tue, Apr 5, 2016, at 09:57 AM, Ryan McNair wrote:
> > >It is believed that reservations help to reserve a set of resources
> > >beforehand, eventually preventing any other upcoming request
> > >(serial or parallel) from exceeding quota if, because of the original
> > >request, the project might have reached its quota limits.
> > >
> > >Questions :-
> > >1. Does reservation in its current state as used by Nova, Cinder,
> Neutron
> > >help to solve the above problem ?
> >
> > In Cinder the reservations are useful for grouping quota
> > for a single request, and if the request ends up failing
> > the reservation gets rolled back. The reservations also
> > rollback automatically if not committed within a certain
> > time. We also use reservations with Cinder nested quotas
> > to group a usage request that may propagate up to a parent
> > project in order to manage commit/rollback of the request
> > as a single unit.
>

Neutron recently introduced reservations.
Without reservations it was theoretically possible for a tenant to achieve
n times the amount of resources granted by the quota, where n is the number
of workers or distinct server instances.
More information is available in [1] and [2]


> >
> > >
> > >2. Is it consistent, reliable ?  Even with reservation can we run into
> > >in-consistent behaviour ?
>
>
> > Others can probably answer this better, but I have not
> > seen the reservations be a major issue. In general with
> > quotas we're not doing the check and set atomically which
> > can get us in an inconsistent state with quota-update,
> > but that's unrelated to the reservations.
>

I do not have any news of bugs, nor do I know of any issue that might
affect the consistency of the reservation system.
One known weakness has to do with Galera clusters, as the reservation
system uses an update lock, which is pointless in that case.
Neutron handles the resulting write-set certification failure by retrying
the operation, which is quite expensive.
There were already proposals in the nova space to implement a lock-free CAS
algorithm for reservations, but since then I've lost track of developments
in the area.
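The lock-free CAS approach referenced here can be sketched with an in-memory stand-in for the quota-usage row (in a real implementation the guarded update would be a SQL `UPDATE ... WHERE version = :seen`, which is what makes it safe on Galera without row locks; everything below is illustrative, not any project's actual code):

```python
# In-memory stand-in for a quota usage row with a version counter.
row = {'in_use': 0, 'version': 0}
LIMIT = 10

def guarded_update(seen_version, new_in_use):
    # Succeeds only if nobody changed the row since we read it
    # (the compare-and-swap step).
    if row['version'] != seen_version:
        return False
    row['in_use'] = new_in_use
    row['version'] += 1
    return True

def reserve(amount, max_retries=5):
    for _ in range(max_retries):
        seen = dict(row)          # read current usage + version
        if seen['in_use'] + amount > LIMIT:
            raise Exception('over quota')
        if guarded_update(seen['version'], seen['in_use'] + amount):
            return True           # CAS succeeded; no lock was ever taken
        # CAS failed: someone raced us, so re-read and retry.
    raise Exception('too much contention')

reserve(4)
reserve(4)
print(row['in_use'])  # 8
try:
    reserve(4)
except Exception as e:
    print(e)          # over quota
```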



> >
> > >
> > >3. Do we really need it ?
> > >
> >
> > Seems like we need *some* way of keeping track of usage
> > reserved during a particular request and a way to easily
> > roll that back at a later time. I'm open to alternatives
> > to reservations, just wondering what the big downside of
> > the current reservation system is.
>

Like most things, one either proactively ensures a desired condition is met,
or reacts when that condition is no longer met.
This means that without reservation - eg: optimistic enforcement -
corrective steps must be taken after committing the transaction
that sent the resource over quota. This is completely ok in my opinion. For
instance if taking corrective steps has a cost of 5 and
creating/committing a reservation has a cost of 2, the reactive approach is
convenient if less than 1 request over 3 sends a resource
over quota (note: I've made the numbers up, I just wanted to make a point
that reacting rather than being proactive can be convenient).

However, for Neutron the reactive approach simply won't work because
Neutron leaves a certain degree of freedom to plugins, and several plugins
operate on the backend before committing the DB transaction (I know it's
probably not ok, but if we give them freedom to do so then we cannot
complain I guess). In that case the rollback will be very expensive and it
cannot be a simple DB operation as it has to involve the backend as well.


>
> Jay goes into it a little bit in his response to another quota thread
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/090560.html
> and I share his thoughts here.
>
> With a reservation system you're introducing eventual consistency into
> the system rather than being strict because reservations are not tied to
> a concrete thing. You can't do a point in time check of whether the
> reserved resources are going to eventually be used if something happens
> like a service restart and a request is lost. You have to have no
> activity for the duration of the expiration time to let things settle
> before getting a real view of quota usages.
>

That is true. This is a problem in Neutron that I would like to address too.


>
> Instead if you tie quota usage to the resource records then you can
> always get a view of what's actually in use.
>

Yup, but a reservation and current usage are two different things, aren't
they?


>
> One thing that should probably be clarified in all of these discussion
> is what exactly is the quota on. I see two answers: the quota is against
> the actual resource usage, or the quota is against the records tracking
> usage. Since we currently track quotas with a reservation system I think
it's fair to say that we're not actually tracking against resources like
disk/RAM/CPU

Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-14 Thread Markus Zoeller
> From: Matt Riedemann 
> To: openstack-dev@lists.openstack.org
> Date: 04/14/2016 04:53 PM
> Subject: Re: [openstack-dev] [all] [devstack] Adding example 
> "local.conf" files for testing?
> 
> 
> 
> On 4/14/2016 6:09 AM, Sean Dague wrote:
> > On 04/14/2016 05:19 AM, Markus Zoeller wrote:
> >>> From: Neil Jerram 
> >>> To: "OpenStack Development Mailing List (not for usage questions)"
> >>> 
> >>> Date: 04/14/2016 10:50 AM
> >>> Subject: Re: [openstack-dev] [all] [devstack] Adding example
> >>> "local.conf" files for testing?
> >>>
> >>> On 14/04/16 08:35, Markus Zoeller wrote:
>  Sometimes (especially when I try to reproduce bugs) I have the need
>  to set up a local environment with devstack. Every time I have to look
>  at my notes to check which options in the "local.conf" have to be set
>  for my needs. I'd like to add a folder in devstack's tree which hosts
>  multiple example local.conf files for different, often used setups.
>  Something like this:
> 
>    example-confs
>    --- newton
>    --- --- x86-ubuntu-1404
>    --- --- --- minimum-setup
>    --- --- --- --- README.rst
>    --- --- --- --- local.conf
>    --- --- --- serial-console-setup
>    --- --- --- --- README.rst
>    --- --- --- --- local.conf
>    --- --- --- live-migration-setup
>    --- --- --- --- README.rst
>    --- --- --- --- local.conf.controller
>    --- --- --- --- local.conf.compute1
>    --- --- --- --- local.conf.compute2
>    --- --- --- minimal-neutron-setup
>    --- --- --- --- README.rst
>    --- --- --- --- local.conf
>    --- --- s390x-1.1.1-vulcan
>    --- --- --- minimum-setup
>    --- --- --- --- README.rst
>    --- --- --- --- local.conf
>    --- --- --- live-migration-setup
>    --- --- --- --- README.rst
>    --- --- --- --- local.conf.controller
>    --- --- --- --- local.conf.compute1
>    --- --- --- --- local.conf.compute2
>    --- mitaka
>    --- --- # same structure as master branch. omitted for brevity
>    --- liberty
>    --- --- # same structure as master branch. omitted for brevity
> 
>  Thoughts?
> >>>
> >>> Yes, this looks useful to me.  Only thing is that you shouldn't need the
> >>> per-release subtrees, though; the DevStack repository already has
> >>> per-release stable/ branches, which you need to check out in
> >>> order to do a DevStack setup of a past release.  So I would expect the
> >>> local.confs for each past release to live in the corresponding branch.
> >>>
> >>> Regards,
> >>> Neil
> >>
> >> My intention was to avoid having a folder "current" or "trunk"
> >> or similar, which doesn't get updated. That's the issue Steve talked
> >> about.
> >>
> >> The workflow could be, at every new cycle:
> >>  * create a new "release folder" (Newton, Ocata, ...)
> >>  * copy the "setup folders" (minimum-setup, ...) to the new folder
> >>  * clean up the "local.conf" file(s) of deprecated options
> >>  * delete a "release folder" if the release is EOL
> >>
> >> I also assume that this would make potential backports easier.
> >
> > I think this would be useful, and accepted easily.
> >
> > I *don't* think we want per release directories. Because it confuses the
> > issue on whether or not devstack master can install liberty (which it
> > can't).
> >
> > Every local.conf should include a documentation page as well that
> > describes the scenario, which means these would be easy to snag off the
> > web docs.
> >
> >-Sean
> >
> 
> +1 to add example scenarios (I have a copy of a basic neutron + ovs that
> I got from a co-worker) and -1 on release-specific directories, we don't
> need them as pointed out already, that's what the branches are for in
> the git repo. The trunk local.confs should be updated naturally as
> people try to use them and hit issues.
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 


Let me know what you think: https://review.openstack.org/#/c/305967/
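As a sketch of what one of the proposed example files could contain, here is a minimal local.conf of the kind being discussed (option values are illustrative only; see the devstack configuration docs for the authoritative variable list):

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Keep logs around, which helps when reproducing bugs
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
```

A README.rst next to each such file, as proposed in the tree layout above, would describe which scenario the file sets up.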


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Encrypted Ephemeral Storage

2016-04-14 Thread Coffman, Joel M.
> Upon reading the source, I don't see "cryptsetup luksFormat" being called 
> anywhere (nova/libvirt/storage/*).
Check out imagebackend.py:Lvm.create_image and dmcrypt.py:create_volume.

> How is this feature envisioned to work?
The LVM volume with the '-dmcrypt' suffix is the unencrypted device that is 
passed to the VM. From a DevStack machine with an encrypted instance:

$ sudo cryptsetup status /dev/mapper/065859b2-50d6-46d6-927a-2dfd07db3306_disk-dmcrypt
/dev/mapper/065859b2-50d6-46d6-927a-2dfd07db3306_disk-dmcrypt is active and is in use.
  type:    PLAIN
  cipher:  aes-xts-plain64
  keysize: 256 bits
  device:  /dev/mapper/stack--volumes--default-065859b2--50d6--46d6--927a--2dfd07db3306_disk
  offset:  0 sectors
  size:    2097152 sectors
  mode:    read/write

$ sudo fuser -vam /dev/mapper/065859b2-50d6-46d6-927a-2dfd07db3306_disk-dmcrypt
                     USER         PID ACCESS COMMAND
/dev/dm-1:           libvirt-qemu 8429 F qemu-system-x86

While information in the '*-dmcrypt' device is visible to a root user on the 
compute host, the underlying device (stack--volumes--default-* in the example 
above) is encrypted, and everything written to the underlying disk is also 
encrypted. Try searching for the text in the underlying device – you shouldn't 
be able to find it.

Joel
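For reference, the two nova.conf sections mentioned in the original question look roughly like this (option names recalled from the Mitaka-era tree, so treat them as an assumption and check the nova configuration reference; the fixed-key manager shown is suitable for testing only):

```ini
[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
key_size = 256

[keymgr]
# The fixed-key ConfKeyManager is for testing only; a real deployment
# would point api_class at a Barbican-backed key manager instead.
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager
fixed_key = <64 hex characters>
```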


From: Chris Buccella
Reply-To: "openstack-dev@lists.openstack.org"
Date: Monday, April 11, 2016 at 1:06 PM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [nova] Encrypted Ephemeral Storage

I've been looking into using encrypted ephemeral storage with LVM. With the
[ephemeral_storage_encryption] and [keymgr] sections added to nova.conf, I get
an LVM volume with "-dmcrypt" appended to the volume name, but otherwise see no
difference; I can still grep for text inside the volume.

Upon reading the source, I don't see "cryptsetup luksFormat" being called 
anywhere (nova/libvirt/storage/*).

I was expecting a new encrypted LVM volume when a new instance was created. Are 
my expectations misplaced? How is this feature envisioned to work?


Thanks,

-Chris


Re: [openstack-dev] Nova quota statistics counting issue

2016-04-14 Thread Salvatore Orlando
For what it's worth, Neutron employs "resource trackers" which conceptually do
something similar to nova quota usage statistics.
Before starting any transaction that can potentially change usage for a
given resource, the quota enforcement mechanism checks for a "dirty" marker
on the resource tracker.
If that marker is present, usage data for that resource are calculated from
the DB table for the resource. If not, current usage is employed for quota
enforcement and the "dirty" flag is set.

This means that if the process dies in the middle of a transaction, the
next transaction will rebuild the correct usage count from the DB.

Salvatore
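A minimal sketch of the dirty-marker mechanism described above (illustrative only, not Neutron's actual implementation — class and method names are made up):

```python
class OverQuota(Exception):
    """Raised when a requested change would exceed the quota limit."""


class TrackedResource:
    """Toy dirty-marker resource tracker.

    Usage is cached in memory; the "dirty" flag forces a recount from the
    authoritative DB table whenever the cached value may be stale.
    """

    def __init__(self, name, count_from_db):
        self.name = name
        self._count_from_db = count_from_db  # callable: authoritative count
        self._usage = None
        self._dirty = True  # unknown state at startup forces a recount

    def current_usage(self):
        if self._dirty or self._usage is None:
            # Rebuild the usage counter from the resource's own table.
            self._usage = self._count_from_db()
            self._dirty = False
        return self._usage

    def begin_change(self, delta, limit):
        if self.current_usage() + delta > limit:
            raise OverQuota(self.name)
        # Mark dirty *before* the transaction commits: if the process dies
        # mid-way, the next check recounts from the DB instead of trusting
        # a possibly stale in-memory counter.
        self._dirty = True


# Example: 3 ports exist in the DB, quota limit is 5.
ports = TrackedResource("port", count_from_db=lambda: 3)
ports.begin_change(1, limit=5)  # allowed: 3 + 1 <= 5; tracker is now dirty
```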


On 14 April 2016 at 14:08, Timofei Durakov  wrote:

> Hi,
>
> I think it would be ok to store quota details persistently on the compute
> side, as was discussed during the mitaka mid-cycle[1] for migrations[2]. So
> if the compute service fails we could restore state and update quota after
> compute restart.
>
> Timofey
>
> [1] - https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
> [2] - https://review.openstack.org/#/c/291161/5/nova/compute/background.py
>
>
>
>
> On Wed, Apr 13, 2016 at 7:27 PM, Dmitry Stepanenko <
> dstepane...@mirantis.com> wrote:
>
>> Hi Team,
>>
>> I worked on nova quota statistics issue (
>> https://bugs.launchpad.net/nova/+bug/1284424) happenning when nova-*
>> processes are restarted during removing instances and was able to reproduce
>> it. For repro I used devstack and started nova-api and nova-compute in
>> separate screen windows. For killing them I used ctrl+c. As I found, this
>> issue happens if nova-* processes are killed after the instance was deleted
>> but right before the quota commit procedure finishes.
>>
>> We discussed these results with Markus Zoeller and decided that even
>> though killing nova processes is a bit exotic event, this still should be
>> fixed because quotas counting affects billing and very important for us.
>>
>> So, we need to introduce some mechanism that will prevent us from
>> reaching inconsistent states in terms of quotas. In other words, this
>> mechanism should work in such a way that the instance create/remove
>> operation and the quota usage recount operation either both happen or
>> neither happens.
>>
>> Any ideas how to do that properly?
>>
>> Kind regards,
>> Dmitry
>>
>>
>>
>
>
>


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Jeremy Stanley
On 2016-04-14 12:57:38 +0300 (+0300), Oleg Gelbukh wrote:
> The thread I'm referring to in the prev message is:
> http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html

At this point it's probably no longer a concern. We don't (and
haven't for some time) really support pip versions as old as the
ones which predate prerelease identification in their version
parsing, so we could probably just start running the same sdist
publication to PyPI for prereleases as we do for full release
version tags.
-- 
Jeremy Stanley
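For context, the prerelease identification referred to here is what PEP 440 standardized: a version like 9.0.0b1 parses as a prerelease and sorts before 9.0.0, so pip skips it unless --pre is given. A quick check using the packaging library (assumed installed; pip vendors the same logic):

```python
from packaging.version import Version

# PEP 440 prerelease versions sort before the corresponding final
# release and are flagged as prereleases, so a plain "pip install"
# ignores them unless --pre is passed.
beta = Version("9.0.0b1")
final = Version("9.0.0")

assert beta < final
assert beta.is_prerelease
assert not final.is_prerelease
```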



Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-14 Thread Douglas Mendizábal

Answers inline.

On 4/14/16 10:05 AM, Hongbin Lu wrote:
> Castellan is another alternative under consideration [1].
> 
> May I ask some clarifying questions: * I saw Castellan is using
> the release:independent model. What is the rationale for choosing this
> release model over release:cycle-with-intermediary?

release:independent is the default release model for new libs.  AFAIK
nobody has previously asked about having a different release model, so
we haven't had the need to consider other models.

> * If Magnum depends on this library and contributes a Castellan
> backend, who will maintain the backend? Who can review and approve
> bug fixes? Who will release a new package? How fast the whole
> process will be (patch critical fixes -> review -> approve ->
> release)?

If you want to add a new backend that is included as part of Castellan,
we would appreciate contributions from your team to help fix bugs,
etc., especially since you would be the main user of the new backend.
That said, the barbican-core team is committed to maintaining Castellan
and we are responsible for reviews and releases.  Since Castellan is
released independently, the release turnaround is much faster than
waiting for patch reviews from the release management team.

> * I wonder why this library is not managed by Oslo (currently
> managed by barbican-core).

When the Castellan project was conceived we asked the Oslo team to
maintain it.  The oslo team PTL at that time recommended that the
barbican-core team keep ownership of the new repo based on concerns
about domain knowledge. [1]

[1] https://review.openstack.org/#/c/138875/

> 
> In general, I see the advantages of leveraging Castellan. My major
> concern is the development speed: contributing to an external repo
> might slow down the development process. Personally, I lean toward
> starting everything in our own repo, and pushing it to a common library
> later.
> 

There is no technical requirement that mandates that implementations
of Castellan be included as part of the Castellan library.  You could
develop an implementation of the Castellan interface in your own
source tree or its own separate repo or wherever you'd like as long as
your implementation conforms to the Castellan interface.  The only
drawback is that your implementation would not be usable outside of
your project.

Just to be clear, the reason I would like for you to use Castellan is
so that deployers can easily use the Barbican implementation when our
service is available in the deployer's cloud.  It would also be
helpful for small-cloud deployments where Castellan could interface
directly with a Hardware Security Module, etc.

I don't particularly care for the Shared DEK+DB model I described
earlier.  My argument was that using that model would be no less
secure than pre-encrypting things to be stored in Keystone, but it
does not provide the security assurances that Barbican provides.  It
doesn't provide the level of auditability that Barbican provides
either.  And implementing a Castellan backend that does all those
things securely definitely sounds like you'd be re-writing Barbican.
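For illustration, the Shared DEK+DB model described above amounts to envelope encryption: the conductor holds a DEK, and only ciphertext ever reaches the shared store, which is why the choice of store adds no security. A sketch using Fernet from the cryptography package (an assumption for the example; this is not Magnum's actual code):

```python
from cryptography.fernet import Fernet

# The conductor-held data encryption key (DEK). How it is stored and
# replicated to other conductors is out of scope, as noted above.
dek = Fernet.generate_key()
f = Fernet(dek)

# Pre-encrypt a (fake) private key before handing it to any external
# store -- Keystone credentials, a DB table, etc. The store only ever
# sees ciphertext.
plaintext = b"-----BEGIN RSA PRIVATE KEY-----\nfake key material\n-----END RSA PRIVATE KEY-----"
ciphertext = f.encrypt(plaintext)
assert plaintext not in ciphertext

# Any conductor holding the DEK can recover the key material later.
assert f.decrypt(ciphertext) == plaintext
```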

At the end of the day, it will be up to the deployer to consider their
threat models and decide how much risk they're willing to accept.  So
if implementing a low-security key management backend is what your
early adopters want, then please do so in a manner that lets deployers
with high security requirements easily use Barbican or other Hardware
solutions.

- - Douglas Mendizábal

> [1] https://etherpad.openstack.org/p/magnum-barbican-alternative
> 
> Best regards, Hongbin
> 
>> -Original Message- From: Nathan Reller
>> [mailto:nathan.s.rel...@gmail.com] Sent: April-14-16 9:22 AM To:
>> OpenStack Development Mailing List (not for usage questions) 
>> Subject: Re: [openstack-dev] [magnum][keystone][all] Using
>> Keystone /v3/credentials to store TLS certificates
>> 
>> I agree with Doug's comments. Castellan is a generic key manager 
>> library that allows symmetric keys, private keys, public keys, 
>> certificates, passphrases, and opaque secret data to be stored in
>> a key manager. There is a Barbican implementation that is
>> complete, and a KMIP (Key Management Interoperability Protocol
>> (an OASIS standard)) implementation is under development.
>> 
>> The precursor to castellan was the KeyManager interface
>> integrated into Nova and Cinder. We are in the process of making
>> the easy switch from that to Castellan. Glance and Sahara have
>> both already integrated with Castellan. Swift is currently
>> integrating with Castellan and will swap between Barbican and
>> KMIP.
>> 
>> -Nate
>> 
>> 
>> 
>> On Wed, Apr 13, 2016 at 3:04 PM, Douglas Mendizábal 
>>  wrote:
> Hi Hongbin,
> 
> I have to admit that it's a bit disappointing that the Magnum team 
> chose to decouple from Barbican, although I do understand that our 
> team needs to do a better job of 

Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-14 Thread Sean Dague
On 04/14/2016 11:31 AM, Flavio Percoco wrote:
> On 14/04/16 16:40 +0200, Thierry Carrez wrote:
>> Colette Alexander wrote:
>>> Hi everyone!
>>>
>>> Quick summary of where we're at with leadership training: dates are
>>> confirmed as available with ZingTrain, and we're finalizing trainers
>>> with them right now. *June 28/29th in Ann Arbor, Michigan.*
>>>
>>> https://etherpad.openstack.org/p/Leadershiptraining
>>
>> You mention that there was only minimal interest in adding a third
>> day. To make the oversea trip more worthwhile for me, I'll definitely
>> be there on Thursday, so we could also have (at least in the morning)
>> a discussion on how useful the exercise was, and if any lesson is
>> immediately applicable. It doesn't have to be Zing-moderated or
>> in-training, could be more of a small post-event open brainstorming
>> for those who would still be around.
> 
> This sounds reasonable to me! Would be happy to join such session.

I already have family vacation planned starting the weekend after this,
so am unlikely to stick around all day on Thursday.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][all][ptl] release process changes for official projects

2016-04-14 Thread Doug Hellmann
The tagging ACL changes have merged: https://review.openstack.org/#/c/298866/

Doug
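For readers unfamiliar with Gerrit project ACLs, the kind of change involved looks roughly like this (the section syntax is Gerrit's project.config format; the group name here is illustrative — see the linked review for the real settings):

```ini
[access "refs/tags/*"]
# Only the central release team may push signed release tags
pushSignedTag = group library-release

[access "refs/heads/stable/*"]
# Stable branch creation likewise restricted to the release team
create = group library-release
```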

> On Mar 29, 2016, at 4:59 PM, Davanum Srinivas  wrote:
> 
> Kirill,
> 
> This is prep for Newton. So definitely not rocking the boat when we
> have a week left.
> 
> -- Dims
> 
> On Tue, Mar 29, 2016 at 4:08 PM, Kirill Zaitsev  wrote:
>> My immediate question is — when would this be merged? Is it a good idea to
>> alter this during the final RC week and before mitaka release, rather than
>> implement this change early in the Newton cycle and let people release their
>> final release the old way?
>> 
>> --
>> Kirill Zaitsev
>> Murano Team
>> Software Engineer
>> Mirantis, Inc
>> 
>> On 29 March 2016 at 19:46:08, Doug Hellmann (d...@doughellmann.com) wrote:
>> 
>> During the Mitaka cycle, the release team worked on automation for
>> tagging and documenting releases [1]. For the first phase, we focused
>> on official teams with the release:managed tag for their deliverables,
>> to keep the number of projects manageable as we built out the tools
>> and processes we needed. That created a bit of confusion as official
>> projects still had to submit openstack/releases changes in order
>> to appear on the releases.openstack.org website.
>> 
>> For the second phase during the Newton cycle, we are prepared to
>> expand the use of automation to all deliverables for all official
>> projects. As part of this shift, we will be updating the Gerrit
>> ACLs for projects to ensure that the release team can handle the
>> releases and branching. These updates will remove tagging and
>> branching rights for anyone not on the central release management
>> team. Instead of tagging releases and then recording them in the
>> releases repository after the tag is applied, all official teams
>> can now use the releases repo to request new releases. We will be
>> reviewing version numbers in all tag requests to ensure SemVer is
>> followed, and we won't release libraries late in the week, but we
>> will process releases regularly so there is no reason this change
>> should have a significant impact on your ability to release frequently.
>> 
>> If you're not familiar with the current release process, please
>> review the README.rst file in the openstack/releases repository.
>> Follow up here on the mailing list or in #openstack-release if you
>> have questions.
>> 
>> The project-config change to update ACLs and correct issues with
>> the build job definitions for official projects is
>> https://review.openstack.org/298866
>> 
>> Doug
>> 
>> [1]
>> http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
>> 
>> 
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims




Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-14 Thread Flavio Percoco

On 14/04/16 16:40 +0200, Thierry Carrez wrote:

Colette Alexander wrote:

Hi everyone!

Quick summary of where we're at with leadership training: dates are
confirmed as available with ZingTrain, and we're finalizing trainers
with them right now. *June 28/29th in Ann Arbor, Michigan.*

https://etherpad.openstack.org/p/Leadershiptraining


You mention that there was only minimal interest in adding a third 
day. To make the oversea trip more worthwhile for me, I'll definitely 
be there on Thursday, so we could also have (at least in the morning) 
a discussion on how useful the exercise was, and if any lesson is 
immediately applicable. It doesn't have to be Zing-moderated or 
in-training, could be more of a small post-event open brainstorming 
for those who would still be around.


This sounds reasonable to me! Would be happy to join such session.

Flavio


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Eichberger, German
Just to echo others: FWaaS would be interested in this as well, so please keep
us in the loop.

Thanks,
German




On 4/14/16, 7:12 AM, "Sean M. Collins"  wrote:

>Vikram Choudhary wrote:
>> Hi Cathy,
>> 
>> A project called "neutron-classifier [1]" is also there addressing the same
>> use case. Let's sync up and avoid duplication of work.
>> 
>> [1] https://github.com/openstack/neutron-classifier
>
>Agree with Vikram - we have a small git repo that we're using to futz
>around with ideas around how to store classifiers in a way that is
>re-usable by other projects, and create a decent object model.
>
>It's very very rough, and the API is ... kind of ugly right now. That's
>what you get when I steal like 4 Red Bulls and do an all-night coding
>session in Tokyo.
>
>So, It'd be great to get other people involved, get an API hashed out
>that doesn't expose all the nitty gritty DB details (like it currently
>is) and move forward.
>
>-- 
>Sean M. Collins
>


Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Denis Egorenko
>
> Does Murano use the same local_settings.py file as Horizon? If yes,
> we might stop using puppet-murano to manage this file.


Yes, it uses same file.

And maybe find a mechanism in puppet-horizon with a provider, so we
> can have a plugin architecture like:
> horizon::plugins::murano
> horizon::plugins::foobar
> that would use this provider to configure a common local_settings.py
> and notify service on change, like we do for .conf files.


That's an idea worth researching; sounds good. We can implement something like
the *_config providers we use for conf files.

Also, another question. If we move all UI stuff to puppet-horizon, do we need
to add a dependency on the horizon module for the changed modules?
Right now, modules with UI configuration don't have a dependency on horizon.
Maybe we need to add it?

Depends on horizon version I think. Mitaka gained a local_settings.d magic
> dir that plugins can drop things into.


That's really good. We can then use a separate file for each plugin and
pass it to the provider.
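To sketch the plugin architecture being discussed (purely illustrative: neither this class nor a local_settings_config provider exists yet, and the setting name is just an example), a per-plugin wrapper might look like:

```puppet
# Hypothetical plugin class; 'local_settings_config' is a made-up
# provider name, modeled on the *_config providers used for .conf files.
class horizon::plugins::murano (
  $metadata_cache_dir = '/var/cache/murano-dashboard',
) {
  # Each setting becomes one resource against the shared
  # local_settings.py, notifying the web server on change.
  local_settings_config { 'METADATA_CACHE_DIR':
    value  => $metadata_cache_dir,
    notify => Service['httpd'],
  }
}
```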

2016-04-14 17:59 GMT+03:00 Fox, Kevin M :

> Depends on horizon version I think. Mitaka gained a local_settings.d magic
> dir that plugins can drop things into.
>
> Thanks,
> Kevin
>
> --
> *From:* Marcos Fermin Lobo
> *Sent:* Thursday, April 14, 2016 5:52:31 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [puppet][horizon] - Add extra plugins config
> to puppet-horizon
>
> Hi all,
>
> I have a question about puppet-horizon module and UI plugins for Horizon.
>
> Some UI plugins, like murano-dashboard, need to add extra parameters (see
> https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example)
> to the local_settings file (which comes from Horizon).
>
> My question is: should the puppet-horizon module provide those extra
> parameters coming from each official UI plugin, or should this kind of
> thing come from a specific puppet-{ui-plugin}?
>
> Thanks.
>
> Cheers,
> Marcos
>
>
>


-- 
Best Regards,
Egorenko Denis,
Senior Deployment Engineer
Mirantis


Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Emilien Macchi
On Thu, Apr 14, 2016 at 10:59 AM, Fox, Kevin M  wrote:
> Depends on horizon version I think. Mitaka gained a local_settings.d magic
> dir that plugins can drop things into.

That's very good news, indeed. I think we target Newton and beyond.

So it seems like we can either continue to manage templates and put
them in local_settings.d directory or write a ruby provider that will
take care of one single local_settings.py.

> Thanks,
> Kevin
>
> 
> From: Marcos Fermin Lobo
> Sent: Thursday, April 14, 2016 5:52:31 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [puppet][horizon] - Add extra plugins config to
> puppet-horizon
>
> Hi all,
>
> I have a question about puppet-horizon module and UI plugins for Horizon.
>
> Some UI plugins, like murano-dashboard, need to add extra parameters (see
> https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example)
> to the local_settings file (which comes from Horizon).
>
> My question is: should the puppet-horizon module provide those extra
> parameters coming from each official UI plugin, or should this kind of
> thing come from a specific puppet-{ui-plugin}?
>
> Thanks.
>
> Cheers,
> Marcos
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-14 Thread Hongbin Lu
Castellan is another alternative under consideration [1].

May I ask some clarifying questions:
* I saw Castellan is using the release:independent model. What is the rationale
for choosing this release model over release:cycle-with-intermediary?
* If Magnum depends on this library and contributes a Castellan backend, who
will maintain the backend? Who can review and approve bug fixes? Who will
release a new package? How fast will the whole process be (patch critical fixes
-> review -> approve -> release)?
* I wonder why this library is not managed by Oslo (it is currently managed by
barbican-core).

In general, I see the advantages of leveraging Castellan. My major concern is
the development speed: contributing to an external repo might slow down the
development process. Personally, I lean toward starting everything in our own
repo, and pushing it to a common library later.

[1] https://etherpad.openstack.org/p/magnum-barbican-alternative

Best regards,
Hongbin

> -Original Message-
> From: Nathan Reller [mailto:nathan.s.rel...@gmail.com]
> Sent: April-14-16 9:22 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone
> /v3/credentials to store TLS certificates
> 
> I agree with Doug's comments. Castellan is a generic key manager
> library that allows symmetric keys, private keys, public keys,
> certificates, passphrases, and opaque secret data to be stored in a key
> manager. There is a Barbican implementation that is complete, and a
> KMIP (Key Management Interoperability Protocol (an OASIS standard))
> implementation is under development.
> 
> The precursor to castellan was the KeyManager interface integrated into
> Nova and Cinder. We are in the process of making the easy switch from
> that to Castellan. Glance and Sahara have both already integrated with
> Castellan. Swift is currently integrating with Castellan and will swap
> between Barbican and KMIP.
> 
> -Nate
> 
> 
> 
> On Wed, Apr 13, 2016 at 3:04 PM, Douglas Mendizábal
>  wrote:
> >
> > Hi Hongbin,
> >
> > I have to admit that it's a bit disappointing that the Magnum team
> > chose to decouple from Barbican, although I do understand that our
> > team needs to do a better job of documenting detailed how-tos for
> > deploying Barbican.
> >
> > I'm not sure that I understand the Threat Model you're trying to
> > protect against, and I have not spent a whole lot of time researching
> > Magnum architecture so please forgive me if my assumptions are wrong.
> >
> > So that we're all on the same page, I'm going to summarize the TLS
> > use-case as I understand it:
> >
> > The magnum-conductor is a single process that may be scalable at some
> > point in the future. [1]
> >
> > When the magnum-conductor is asked to provision a new bay the
> > following things happen:
> > 1. A new self-signed root CA is created.  This results in a Root CA
> > Certificate and its associated key 2. N number of nodes are created
> to
> > be part of the new bay.  For each node, a new x509 certificate is
> > provisioned and signed by the Root CA created in 1.  This results in
> a
> > certificate and key pair for each node.
> > 3. The conductor then needs to store all generated keys in a secure
> > location.
> > 4. The conductor would also like to store all generated Certificates
> > in a secure location, although this is not strictly necessary since
> > Certificates contain no secret information as pointed out by Adam
> > Young elsewhere in this thread.
> >
> > Currently the conductor is using python-barbicanclient to store the
> > Root CA and Key in Barbican and associates those secrets via a
> > Certificate Container and then stores the container URI in the
> > conductor database.
> >
> > Since most users of Magnum are unwilling or unable to deploy Barbican
> > the Magnum team would like an alternative mechanism for storing all
> > keys as well as the Certificates.
> >
> > Additionally, since magnum-conductor may be more than one process in
> > the future, the alternative storage must be available to many
> > magnum-conductors.
> >
> > Now, in the proposed Keystone alternative the magnum-conductor will
> > have a (symmetric?) encryption key.  Let's call this key the DEK
> > (short for data-encryption-key).  How the DEK is stored and
> replicated
> > to other magnum-conductors is outside of the scope of the proposed
> > alternative solution.
> > The magnum-conductor will use the DEK to encrypt all Certificates and
> > Keys and then store the resulting ciphertexts using the Keystone
> > credentials endpoint.
> >
> > This begs the question: If you're pre-encrypting all this data with
> > the DEK, why do you need to store it in an external system?  I see no
> > security benefit of using Keystone credentials over just storing
> these
> > ciphertexts in a table in the database that all magnum-conductors
> 
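> To make the pattern concrete, here is a small runnable sketch of the
> envelope-encryption idea summarized above: the conductor holds a DEK and
> pre-encrypts each bay's key material before handing the ciphertext to
> whatever external store is chosen. Everything here is illustrative -- the
> function names are invented, and the XOR "cipher" is a deliberately
> insecure toy stand-in; a real implementation would use an authenticated
> cipher such as AES-GCM (e.g. via the cryptography library's Fernet).
>
> ```python
> import secrets
>
> def toy_encrypt(dek: bytes, plaintext: bytes) -> bytes:
>     # Toy XOR "cipher" for illustration only -- NOT secure.
>     # Repeat the DEK to plaintext length and XOR byte-by-byte.
>     keystream = (dek * (len(plaintext) // len(dek) + 1))[:len(plaintext)]
>     return bytes(p ^ k for p, k in zip(plaintext, keystream))
>
> def toy_decrypt(dek: bytes, ciphertext: bytes) -> bytes:
>     # XOR is its own inverse, so decryption is the same operation.
>     return toy_encrypt(dek, ciphertext)
>
> # The conductor's data-encryption key.  How it is stored and replicated
> # across conductors is out of scope, as noted above.
> dek = secrets.token_bytes(32)
>
> # Pre-encrypt a node's (fake) private key before storing the ciphertext
> # externally (Keystone credentials, a DB table, etc.).
> node_key_pem = b"-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
> ciphertext = toy_encrypt(dek, node_key_pem)
> assert toy_decrypt(dek, ciphertext) == node_key_pem
> ```
>
> Either way, the security question raised above stands: once the data is
> pre-encrypted under the DEK, the choice of external store adds little.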

Re: [openstack-dev] [keystone] Newton midcycle planning

2016-04-14 Thread Lance Bragstad
++ Nice to see this planning happening early!

R-14 would probably be a no-go for me. R-12 and R-11 fit my schedule.

On Thu, Apr 14, 2016 at 9:11 AM, Henry Nash  wrote:

> Hi Morgan,
>
> Great to be planning this ahead of time!!!
>
> For me either of the July dates are fine - I would have a problem with the
> June date.
>
> Henry
>
> On 14 Apr 2016, at 14:57, Dolph Mathews  wrote:
>
> On Wed, Apr 13, 2016 at 9:07 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> It is that time again, the time to plan the Keystone midcycle! Looking at
>> the schedule [1] for Newton, the weeks that make the most sense look to be
>> (not in preferential order):
>>
>> R-14 June 27-01
>> R-12 July 11-15
>> R-11 July 18-22
>>
>
> They all work equally well for me at this point, but I'd be interested to
> try one of the earlier options.
>
>
>>
>> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
>> on previous attendance we can expect ~30 people to attend. Based upon all
>> the information (other midcycles, other events, the US July4th holiday), I
>> am thinking that week R-12 (the week of the newton-2 milestone) would be
>> the best offering. Weeks before or after these three tend to push too close
>> to the summit or too far into the development cycle.
>>
>> I am trying to arrange for a venue in the Bay Area (most likely will be
>> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
>> have done east coast and central over the last few midcycles.
>>
>> Please let me know your thoughts / preferences. In summary:
>>
>> * Venue will be Bay Area (more info to come soon)
>>
>> * Options of weeks (in general subjective order of preference): R-12,
>> R-11, R-14
>>
>> Cheers,
>> --Morgan
>>
>> [1] http://releases.openstack.org/newton/schedule.html
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>


Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Fox, Kevin M
Depends on horizon version I think. Mitaka gained a local_settings.d magic dir 
that plugins can drop things into.

Thanks,
Kevin
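For illustration, the Mitaka mechanism mentioned above lets a plugin ship a 
small Django settings snippet that Horizon executes after local_settings.py. 
A hypothetical sketch -- the file path, file name, and setting below are all 
made up for illustration, not taken from murano-dashboard:

```python
# Hypothetical file a plugin might install, e.g.
# openstack_dashboard/local/local_settings.d/_50_myplugin.py
# Snippets dropped into local_settings.d are applied after
# local_settings.py, so a plugin can extend Horizon's configuration
# without editing the main settings file.

# Made-up plugin-specific setting:
MYPLUGIN_API_URL = "http://controller:8082"
```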


From: Marcos Fermin Lobo
Sent: Thursday, April 14, 2016 5:52:31 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [puppet][horizon] - Add extra plugins config to 
puppet-horizon

Hi all,

I have a question about puppet-horizon module and UI plugins for Horizon.

Some of UI plugins, like murano-dashboard, needs to add extra parameters 
https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example
 to local_settings file (which comes from Horizon).

My question is: Should puppet-horizon module provide those extra parameters 
coming from each official UI plugins? or this kind of things should come from 
specific a puppet-{ui-plugin}?

Thanks.

Cheers,
Marcos


Re: [openstack-dev] Nova hook

2016-04-14 Thread Fox, Kevin M
Instance users

The spec has been around for a long time.

People have been using hooks or vendor data to work around the lack of them,
and both are being deprecated.

Please attend the summit session and let's address the issue.

Thanks,
Kevin


From: Daniel P. Berrange
Sent: Thursday, April 14, 2016 1:30:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Nova hook

On Thu, Apr 14, 2016 at 03:15:42PM +0800, Kenny Ji-work wrote:
> Hi all,
> The nova hooks facility will be removed in the future. What is
> the recommended method to add custom code into nova's internal
> APIs? Thank you for answering!

The point of removing it is that we do *not* want people to add custom
code into nova's internal APIs, so there is explicitly no replacement
for this functionality.

If you have a use case that Nova does not currently address, that is
broadly useful then you can propose a blueprint/spec to explicitly
support this in Nova, rather than doing it out of tree via a hook

Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-14 Thread Alexey Shtokolov
Hi, +1 from my side.

---
WBR, Alexey Shtokolov

2016-04-14 16:47 GMT+03:00 Evgeniy L :

> Hi, no problem from my side.
>
> On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> I'd like to request workrooms sessions swap.
>>
>> We have a session about Fuel/Ironic integration and I'd like
>> this session not to overlap with Ironic sessions, so Ironic
>> team could attend Fuel sessions. At the same time, we have
>> a session about orchestration engine and it would be great to
>> invite there people from Mistral and Heat.
>>
>> My suggestion is as follows:
>>
>> Wed:
>> 9:50 Astute -> Mistral/Heat/???
>> Thu:
>> 9.00 Fuel/Ironic/Ironic-inspector
>>
>> If there are any objections, please let me know asap.
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> Looks like we have final version sessions layout [1]
>>> for Austin design summit. We have 3 fishbows,
>>> 11 workrooms, full day meetup.
>>>
>>> Here you can find some useful information about design
>>> summit [2]. All session leads must read this page,
>>> be prepared for their sessions (agenda, slides if needed,
>>> etherpads for collaborative work, etc.) and follow
>>> the recommendations given in "At the Design Summit" section.
>>>
>>> Here is Fuel session planning etherpad [3]. Almost all suggested
>>> topics have been put there. Please put links to slide decks
>>> and etherpads next to respective sessions. Here is the
>>> page [4] where other teams publish their planning pads.
>>>
>>> If session leads want for some reason to swap their slots it must
>>> be requested in this ML thread. If for some reason session lead
>>> can not lead his/her session, it must be announced in this ML thread.
>>>
>>> Fuel sessions are:
>>> ===
>>> Fishbowls:
>>> ===
>>> Wed:
>>> 15:30-16:10
>>> 16:30:17:10
>>> 17:20-18:00
>>>
>>> ===
>>> Workrooms:
>>> ===
>>> Wed:
>>> 9:00-9:40
>>> 9:50-10:30
>>> 11:00-11:40
>>> 11:50-12:30
>>> 13:50-14:30
>>> 14:40-15:20
>>> Thu:
>>> 9:00-9:40
>>> 9:50-10:30
>>> 11:00-11:40
>>> 11:50-12:30
>>> 13:30-14:10
>>>
>>> ===
>>> Meetup:
>>> ===
>>> Fri:
>>> 9:00-12:30
>>> 14:00-17:30
>>>
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
>>> [2] https://wiki.openstack.org/wiki/Design_Summit
>>> [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
>>> [4] https://wiki.openstack.org/wiki/Design_Summit/Planning
>>>
>>> Thanks.
>>>
>>> Vladimir Kozhukalov
>>>
>>
>>


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-14 Thread Matt Riedemann



On 4/14/2016 6:09 AM, Sean Dague wrote:

On 04/14/2016 05:19 AM, Markus Zoeller wrote:

From: Neil Jerram 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 04/14/2016 10:50 AM
Subject: Re: [openstack-dev] [all] [devstack] Adding example
"local.conf" files for testing?

On 14/04/16 08:35, Markus Zoeller wrote:

Sometimes (especially when I try to reproduce bugs) I need to
set up a local environment with devstack. Every time, I have to look
at my notes to check which options in "local.conf" have to be set
for my needs. I'd like to add a folder in devstack's tree which hosts
multiple example local.conf files for different, often-used setups.
Something like this:

  example-confs
  --- newton
  --- --- x86-ubuntu-1404
  --- --- --- minimum-setup
  --- --- --- --- README.rst
  --- --- --- --- local.conf
  --- --- --- serial-console-setup
  --- --- --- --- README.rst
  --- --- --- --- local.conf
  --- --- --- live-migration-setup
  --- --- --- --- README.rst
  --- --- --- --- local.conf.controller
  --- --- --- --- local.conf.compute1
  --- --- --- --- local.conf.compute2
  --- --- --- minimal-neutron-setup
  --- --- --- --- README.rst
  --- --- --- --- local.conf
  --- --- s390x-1.1.1-vulcan
  --- --- --- minimum-setup
  --- --- --- --- README.rst
  --- --- --- --- local.conf
  --- --- --- live-migration-setup
  --- --- --- --- README.rst
  --- --- --- --- local.conf.controller
  --- --- --- --- local.conf.compute1
  --- --- --- --- local.conf.compute2
  --- mitaka
  --- --- # same structure as master branch. omitted for brevity
  --- liberty
  --- --- # same structure as master branch. omitted for brevity

Thoughts?


Yes, this looks useful to me.  Only thing is that you shouldn't need the
per-release subtrees, though; the DevStack repository already has
per-release stable/ branches, which you need to check out in
order to do a DevStack setup of a past release.  So I would expect the
local.confs for each past release to live in the corresponding branch.

Regards,
Neil


My intention was to avoid that there is a folder "current" or "trunk"
or similar, which doesn't get updated. That's the issue Steve talked
about.

The workflow could be, at every new cycle:
 * create a new "release folder" (Newton, Ocata, ...)
 * copy the "setup folders" (minimum-setup, ...) to the new folder
 * clean up the "local.conf" file(s) of deprecated options
 * delete a "release folder" if the release is EOL

I also assume that this would make potential backports easier.


I think this would be useful, and accepted easily.

I *don't* think we want per-release directories, because it confuses the
issue of whether or not devstack master can install liberty (which it
can't).

Every local.conf should include a documentation page as well that
describes the scenario, which means these would be easy to snag off the
web docs.

-Sean



+1 to add example scenarios (I have a copy of a basic neutron + ovs that 
I got from a co-worker) and -1 on release-specific directories, we don't 
need them as pointed out already, that's what the branches are for in 
the git repo. The trunk local.confs should be updated naturally as 
people try to use them and hit issues.


--

Thanks,

Matt Riedemann
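For reference, a minimal single-node local.conf of the kind being proposed 
could look roughly like this (the passwords, host IP, and log path are 
placeholder values, not recommendations):

```ini
[[local|localrc]]
# Placeholder credentials -- change for any real environment.
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Host address of this all-in-one node (placeholder).
HOST_IP=192.168.0.10

# Keep full logs around, which helps when reproducing bugs.
LOGFILE=$DEST/logs/stack.sh.log
LOG_COLOR=False
```

Each such file would then sit next to a README.rst describing the scenario, 
per the layout sketched earlier in the thread.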




Re: [openstack-dev] [telemetry][vitrage] Joint design sesssion in Austin

2016-04-14 Thread Julien Danjou
On Thu, Apr 14 2016, Weyl, Alexey (Nokia - IL) wrote:

> As far as Vitrage team is concerned, 16:10-16:50 works best for us,
> but we can attend either session if needed.

Perfect, I've assigned that slot for this session.

See you there!

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-14 Thread Thierry Carrez

Colette Alexander wrote:

Hi everyone!

Quick summary of where we're at with leadership training: dates are
confirmed as available with ZingTrain, and we're finalizing trainers
with them right now. *June 28/29th in Ann Arbor, Michigan.*

https://etherpad.openstack.org/p/Leadershiptraining


You mention that there was only minimal interest in adding a third day. 
To make the oversea trip more worthwhile for me, I'll definitely be 
there on Thursday, so we could also have (at least in the morning) a 
discussion on how useful the exercise was, and if any lesson is 
immediately applicable. It doesn't have to be Zing-moderated or 
in-training, could be more of a small post-event open brainstorming for 
those who would still be around.


--
Thierry Carrez (ttx)



Re: [openstack-dev] [freezer] Proposing Saad Zaher for freezer core

2016-04-14 Thread Dieterly, Deklan
+1 on Saad Zaher (szaher) being a core member

-- 
Deklan Dieterly

Senior Systems Software Engineer
HPE




On 4/14/16, 8:23 AM, "Ramirez Garcia, Guillermo"
 wrote:

>+1 on Saad Zaher (szaher) being a core member
>
>Regards 
>GRG
>
>From: Mathieu, Pierre-Arthur
>Sent: Thursday, April 14, 2016 2:20 PM
>To: openstack-dev@lists.openstack.org
>Subject: [openstack-dev]  [freezer] Proposing Saad Zaher for freezer core
>
>Hello,
>
>I would like to propose that we make Saad Zaher (szaher) core on freezer.
>He has been a highly valuable developer for the past few months, mainly
>working on integrating oslo components into freezer.
>He has also been helping a lot with feature testing.
>
>His work can be found here: [1]
>
>Unless there is a disagreement I plan to make Saad core by the end of the
>week.
>
>Thanks
>- Pierre
>
>[1] https://review.openstack.org/#/q/owner:%22Saad+Zaher%22
>
>




Re: [openstack-dev] [freezer] Proposing Saad Zaher for freezer core

2016-04-14 Thread Fausto Marzi
+1. Saad Zaher is a Top Gear engineer : )

On Thu, Apr 14, 2016 at 4:20 PM, Mathieu, Pierre-Arthur <
pierre-arthur.math...@hpe.com> wrote:

> Hello,
>
> I would like to propose that we make Saad Zaher (szaher) core on freezer.
> He has been a highly valuable developer for the past few months, mainly
> working on integrating oslo components into freezer.
> He has also been helping a lot with feature testing.
>
> His work can be found here: [1]
>
> Unless there is a disagreement I plan to make Saad core by the end of the
> week.
>
> Thanks
> - Pierre
>
> [1] https://review.openstack.org/#/q/owner:%22Saad+Zaher%22
>
>


Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Matt Fischer
On Thu, Apr 14, 2016 at 7:45 AM, Grasza, Grzegorz  wrote:

> > From: Gyorgy Szombathelyi
> >
> > Unknown column 'user.name' in 'field list'
> >
> > in some operation when the DB is already upgraded to Mitaka, but some
> > keystone instances in a HA setup are still Liberty.
>
> Currently we don't support rolling upgrades in keystone. To do an upgrade,
> you need to upgrade all keystone service instances at once, instead of
> going one-by-one, which means you have to plan for downtime of the keystone
> API.
>
>

Doing them all at once is dangerous if there's an issue during the DB
migration or between the other services and the new code. Better to
shut down all but one node, and stop mysql as well on the other nodes. Then
upgrade one, run tests, then do the others serially. That way if the first
node has issues, you can quarantine it, restore mysql on the other nodes,
and then destroy and rebuild the first node back on old code. We've had
enough issues with db migrations before (not keystone that I recall
however) that you'd be nuts to trust that it's just going to work.


Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Jason Guiditta

On 14/04/16 10:16 -0400, Emilien Macchi wrote:

On Thu, Apr 14, 2016 at 9:21 AM, Denis Egorenko  wrote:

Some of UI plugins, like murano-dashboard, needs to add extra parameters
https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example
to local_settings file (which comes from Horizon).
My question is: Should puppet-horizon module provide those extra
parameters coming from each official UI plugins? or this kind of things
should come from specific a puppet-{ui-plugin}?



Well, not exactly puppet-{ui-plugin}. For example, we already have murano
module and it has manifests for UI plugin installation.

On a one side, in such way we are keeping all module related configuration
in one place.
On another side, all UI configuration probably should be placed in horizon
module. But in this case, we need to support in horizon module full
configuration for each UI plugin.

So, i think we can keep UI configuration in-place (in separate module) if we
have this module at all. For cases, when we need only support some UI
settings/plugins - we can keep it in puppet-horizon.

Thoughts?


Does Murano use the same local_settings.py file as Horizon? If so,
we might stop using puppet-murano to manage this file.
And maybe find a mechanism in puppet-horizon with a provider, so we
can have a plugin architecture like:
horizon::plugins::murano
horizon::plugins::foobar
that would use this provider to configure a common local_settings.py
and notify service on change, like we do for .conf files.

What do you think?


I like the sound of this, keeps the file managements central like all
the _config providers, while allowing each module to specify the
parts that only it knows or cares about.

-j



Re: [openstack-dev] [freezer] Proposing Saad Zaher for freezer core

2016-04-14 Thread Ramirez Garcia, Guillermo
+1 on Saad Zaher (szaher) being a core member

Regards 
GRG

From: Mathieu, Pierre-Arthur
Sent: Thursday, April 14, 2016 2:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev]  [freezer] Proposing Saad Zaher for freezer core

Hello,

I would like to propose that we make Saad Zaher (szaher) core on freezer.
He has been a highly valuable developer for the past few months, mainly working 
on integrating oslo components into freezer.
He has also been helping a lot with feature testing.

His work can be found here: [1]

Unless there is a disagreement I plan to make Saad core by the end of the week.

Thanks
- Pierre

[1] https://review.openstack.org/#/q/owner:%22Saad+Zaher%22




[openstack-dev] [freezer] Proposing Saad Zaher for freezer core

2016-04-14 Thread Mathieu, Pierre-Arthur
Hello, 

I would like to propose that we make Saad Zaher (szaher) core on freezer.
He has been a highly valuable developer for the past few months, mainly working 
on integrating oslo components into freezer.
He has also been helping a lot with feature testing.

His work can be found here: [1]

Unless there is a disagreement I plan to make Saad core by the end of the week.

Thanks
- Pierre

[1] https://review.openstack.org/#/q/owner:%22Saad+Zaher%22




Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Emilien Macchi
On Thu, Apr 14, 2016 at 9:21 AM, Denis Egorenko  wrote:
>> Some of UI plugins, like murano-dashboard, needs to add extra parameters
>> https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example
>> to local_settings file (which comes from Horizon).
>> My question is: Should puppet-horizon module provide those extra
>> parameters coming from each official UI plugins? or this kind of things
>> should come from specific a puppet-{ui-plugin}?
>
>
> Well, not exactly puppet-{ui-plugin}. For example, we already have murano
> module and it has manifests for UI plugin installation.
>
> On a one side, in such way we are keeping all module related configuration
> in one place.
> On another side, all UI configuration probably should be placed in horizon
> module. But in this case, we need to support in horizon module full
> configuration for each UI plugin.
>
> So, i think we can keep UI configuration in-place (in separate module) if we
> have this module at all. For cases, when we need only support some UI
> settings/plugins - we can keep it in puppet-horizon.
>
> Thoughts?

Does Murano use the same local_settings.py file as Horizon? If so,
we might stop using puppet-murano to manage this file.
And maybe find a mechanism in puppet-horizon with a provider, so we
can have a plugin architecture like:
horizon::plugins::murano
horizon::plugins::foobar
that would use this provider to configure a common local_settings.py
and notify service on change, like we do for .conf files.

What do you think?

> 2016-04-14 16:00 GMT+03:00 Emilien Macchi :
>>
>> On Thu, Apr 14, 2016 at 8:52 AM, Marcos Fermin Lobo
>>  wrote:
>> > Hi all,
>> >
>> > I have a question about puppet-horizon module and UI plugins for
>> > Horizon.
>> >
>> > Some of UI plugins, like murano-dashboard, needs to add extra parameters
>> >
>> > https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example
>> > to local_settings file (which comes from Horizon).
>> >
>> > My question is: Should puppet-horizon module provide those extra
>> > parameters
>> > coming from each official UI plugins? or this kind of things should come
>> > from specific a puppet-{ui-plugin}?
>> >
>> I don't think having a separated Puppet module for each plugin will
>> help us, maintaining a module is a lot of work.
>> One thing we could do is to have classes:
>> horizon::plugin::murano
>> horizon::plugin::foobar etc
>>
>> What do you think?
>> --
>> Emilien Macchi
>>
>
>
>
>
> --
> Best Regards,
> Egorenko Denis,
> Senior Deployment Engineer
> Mirantis
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Sean M. Collins
Vikram Choudhary wrote:
> Hi Cathy,
> 
> A project called "neutron-classifier" [1] already exists addressing the same
> use case. Let's sync up and avoid duplicating work.
> 
> [1] https://github.com/openstack/neutron-classifier

Agree with Vikram - we have a small git repo that we're using to futz
around with ideas around how to store classifiers in a way that is
re-usable by other projects, and create a decent object model.

It's very very rough, and the API is ... kind of ugly right now. That's
what you get when I steal like 4 Red Bulls and do an all-night coding
session in Tokyo.

So, It'd be great to get other people involved, get an API hashed out
that doesn't expose all the nitty gritty DB details (like it currently
is) and move forward.

-- 
Sean M. Collins



Re: [openstack-dev] [keystone] Newton midcycle planning

2016-04-14 Thread Henry Nash
Hi Morgan,

Great to be planning this ahead of time!!!

For me either of the July dates are fine - I would have a problem with the June 
date.

Henry
> On 14 Apr 2016, at 14:57, Dolph Mathews  wrote:
> 
> On Wed, Apr 13, 2016 at 9:07 PM, Morgan Fainberg  > wrote:
> It is that time again, the time to plan the Keystone midcycle! Looking at the 
> schedule [1] for Newton, the weeks that make the most sense look to be (not 
> in preferential order):
> 
> R-14 June 27-01
> R-12 July 11-15
> R-11 July 18-22
> 
> They all work equally well for me at this point, but I'd be interested to try 
> one of the earlier options.
>  
> 
> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based on 
> previous attendance we can expect ~30 people to attend. Based upon all the 
> information (other midcycles, other events, the US July4th holiday), I am 
> thinking that week R-12 (the week of the newton-2 milestone) would be the 
> best offering. Weeks before or after these three tend to push too close to 
> the summit or too far into the development cycle.
> 
> I am trying to arrange for a venue in the Bay Area (most likely will be South 
> Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we have 
> done east coast and central over the last few midcycles.
> 
> Please let me know your thoughts / preferences. In summary:
> 
> * Venue will be Bay Area (more info to come soon)
> 
> * Options of weeks (in general subjective order of preference): R-12, R-11, 
> R-14
> 
> Cheers,
> --Morgan
> 
> [1] http://releases.openstack.org/newton/schedule.html 
> 


[openstack-dev] [Cinder] API features discoverability

2016-04-14 Thread Michał Dulko
Hi,

When looking at bug [1] I've thought that we could simply use
/v2//extensions to signal features available in the
deployment - in this case backups, as these are implemented as API
extension too. Cloud admin can disable an extension if his cloud doesn't
support a particular feature and this is easily discoverable using
aforementioned call. It looks like that solution wasn't proposed when the
bug was initially raised.

Now the problem is that we're actually planning to move all API
extensions into the core API. Do we plan to keep this API for feature
discovery? How should we approach API compatibility in this case if we want
to change it? Do we have a plan for that?

We could keep this extensions API controlled from cinder.conf,
regardless of the fact that we've moved everything into the core, but that
doesn't seem right (the API will still be functional even if the
administrator disables it in the configuration, am I right?)

Anyone have thoughts on that?

Thanks,
Michal

[1] https://bugs.launchpad.net/cinder/+bug/1334856
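A client-side sketch of the discovery idea described above, assuming the usual extension-list payload shape (an "extensions" array with name/alias entries, as in the Nova/Cinder extension API style). The sample aliases below are illustrative; the exact aliases exposed by a given deployment may differ:

```python
import json

# Sample payload in the shape returned by the extensions call
# (field names assumed from the Nova/Cinder extension API style;
# the actual aliases in a real deployment may differ).
SAMPLE_RESPONSE = json.dumps({
    "extensions": [
        {"name": "Backups", "alias": "backups",
         "description": "Backups support."},
        {"name": "SchedulerHints", "alias": "OS-SCH-HNT",
         "description": "Pass hints to the scheduler."},
    ]
})

def supported_aliases(response_body):
    """Return the set of extension aliases advertised by the API."""
    return {ext["alias"] for ext in json.loads(response_body)["extensions"]}

def has_feature(response_body, alias):
    """A feature is considered available iff its extension is listed,
    i.e. the cloud admin has not disabled it."""
    return alias in supported_aliases(response_body)

print(has_feature(SAMPLE_RESPONSE, "backups"))     # extension listed
print(has_feature(SAMPLE_RESPONSE, "encryption"))  # extension absent
```

The open question from the mail remains: once extensions move into the core API, the list would no longer reflect what the deployment actually supports unless it stays configuration-driven.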

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Gyorgy Szombathelyi
Hi Matt!

I didn't try to validate Liberty tokens in Mitaka, so I'll just try to minimize
the upgrade window.

Br,
György

> -Original Message-
> From: Matt Fischer [mailto:m...@mattfischer.com]
> Sent: 2016 április 14, csütörtök 15:55
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it
> possible without downtime?
> 
> Unfortunately Keystone does not handle database upgrades like nova. and
> they do tend to be disruptive. I have not tried Liberty to mitaka  myself, but
> have you tried to validate a token granted on a mitaka node against the
> liberty one?  If you are lucky the other nodes will still be able to validate
> tokens during the upgrade. Even if other API calls fail this is slightly less
> disruptive. What I would do is shut down your entire cluster except for one
> node an upgrade that node first. If you find that other nodes can still 
> validate
> tokens, leave two up, so that the upgrade restart doesn't cause a blip. Then
> upgrade the second node as quickly as possible. I'd also strongly recommend
> a db backup before you start.
> 
> We did this last week from an early liberty commit to stable and had
> incompatible db changes and a token format change and only had a brief
> keystone outage.
> 
> On Apr 14, 2016 7:39 AM, "Gyorgy Szombathelyi"
>   > wrote:
> 
> 
>   Hi!
> 
>   I just experimenting with upgrading Liberty to Mitaka, and hit an
> issue:
>   In Mitaka, the user table doesn't have 'name' field, so running mixed
> versions of Keystone could result in:
> 
>   Unknown column 'user.name' in 'field list'
> 
>   in some operation when the DB is already upgraded to Mitaka, but
> some keystone instances in a HA setup are still Liberty.
> 
>   Is this change intentional? Should I ignore the problem and just
> upgrade all instances as fast as possible? Or I just overlooked something?
> 
>   Br,
>   György
> 
> 


Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Gyorgy Szombathelyi
Hi!

> >
> > Unknown column 'user.name' in 'field list'
> >
> > in some operation when the DB is already upgraded to Mitaka, but some
> > keystone instances in a HA setup are still Liberty.
> 
> Currently we don't support rolling upgrades in keystone. To do an upgrade,
> you need to upgrade all keystone service instances at once, instead of going
> one-by-one, which means you have to plan for downtime of the keystone
> API.
> 

Thanks, then I'll try to keep the upgrade window as small as possible for
keystone.

> >
> > Is this change intentional? Should I ignore the problem and just
> > upgrade all instances as fast as possible? Or I just overlooked something?
> >
> 
> You are right that there will be an error if you try running Liberty+Mitaka,
> since the database schema is not compatible. We have an ongoing effort to
> support online schema migrations, but it didn't make it into Mitaka. [1] [2]
> 
> We will have a presentation about Online DB Migrations at the summit (in the
> upstream development track), so if you are interested, you can attend or
> watch the recorded session afterwards [3]. There will also be a discussion
> about this in keystone meetings at the design summit. [4]

Great, hope the next version(s) will handle rolling upgrades.

Br,
György

> 
> [1] https://specs.openstack.org/openstack/keystone-
> specs/specs/mitaka/online-schema-migration.html
> [2] https://review.openstack.org/#/c/274079/
> [3] https://www.openstack.org/summit/austin-2016/summit-
> schedule/events/7639
> [4] https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm
> 
> 


Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-14 Thread Evgeniy L
Hi, no problem from my side.

On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I'd like to request workrooms sessions swap.
>
> We have a session about Fuel/Ironic integration and I'd like
> this session not to overlap with Ironic sessions, so Ironic
> team could attend Fuel sessions. At the same time, we have
> a session about orchestration engine and it would be great to
> invite there people from Mistral and Heat.
>
> My suggestion is as follows:
>
> Wed:
> 9:50 Astute -> Mistral/Heat/???
> Thu:
> 9.00 Fuel/Ironic/Ironic-inspector
>
> If there are any objections, please let me know asap.
>
>
>
> Vladimir Kozhukalov
>
> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Looks like we have the final version of the sessions layout [1]
>> for the Austin design summit. We have 3 fishbowls,
>> 11 workrooms, and a full-day meetup.
>>
>> Here you can find some useful information about design
>> summit [2]. All session leads must read this page,
>> be prepared for their sessions (agenda, slides if needed,
>> etherpads for collaborative work, etc.) and follow
>> the recommendations given in "At the Design Summit" section.
>>
>> Here is Fuel session planning etherpad [3]. Almost all suggested
>> topics have been put there. Please put links to slide decks
>> and etherpads next to respective sessions. Here is the
>> page [4] where other teams publish their planning pads.
>>
>> If session leads want for some reason to swap their slots it must
>> be requested in this ML thread. If for some reason session lead
>> can not lead his/her session, it must be announced in this ML thread.
>>
>> Fuel sessions are:
>> ===
>> Fishbowls:
>> ===
>> Wed:
>> 15:30-16:10
>> 16:30:17:10
>> 17:20-18:00
>>
>> ===
>> Workrooms:
>> ===
>> Wed:
>> 9:00-9:40
>> 9:50-10:30
>> 11:00-11:40
>> 11:50-12:30
>> 13:50-14:30
>> 14:40-15:20
>> Thu:
>> 9:00-9:40
>> 9:50-10:30
>> 11:00-11:40
>> 11:50-12:30
>> 13:30-14:10
>>
>> ===
>> Meetup:
>> ===
>> Fri:
>> 9:00-12:30
>> 14:00-17:30
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
>> [2] https://wiki.openstack.org/wiki/Design_Summit
>> [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
>> [4] https://wiki.openstack.org/wiki/Design_Summit/Planning
>>
>> Thanks.
>>
>> Vladimir Kozhukalov
>>
>
>


Re: [openstack-dev] [Nova] RPC Communication Errors Might Lead to a Bad State

2016-04-14 Thread Dan Smith
>> I have wanted to make a change for a while that involves a TTL on
>> messages, along with a deadline record so that we can know when to retry
>> or revert things that were in flight. This requires a lot of machinery
>> to accomplish, and is probably interwoven with the task concept we've
>> had on the back burner for a while. The complexity of moving nova to
>> this sort of scheme means that nobody has picked it up as of yet, but
>> it's certainly in the minds of many of us as something we need to do
>> before too long.
> 
> Are you still thinking of this kind of mechanism deployment?
> We need any kind of RPC handling mechanism at the end of the day.

I'm not sure what you're saying exactly. The above would be something we
integrate with our RPC calls to signal to us when they may have been
dropped or failed. It wouldn't replace the mechanism or need for RPC at
a fundamental level.

--Dan



Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Matt Fischer
Unfortunately, Keystone does not handle database upgrades like Nova, and
they do tend to be disruptive. I have not tried Liberty to Mitaka myself,
but have you tried to validate a token granted on a Mitaka node against the
Liberty one? If you are lucky, the other nodes will still be able to
validate tokens during the upgrade. Even if other API calls fail, this is
slightly less disruptive. What I would do is shut down your entire cluster
except for one node and upgrade that node first. If you find that other
nodes can still validate tokens, leave two up, so that the upgrade restart
doesn't cause a blip. Then upgrade the second node as quickly as possible.
I'd also strongly recommend a db backup before you start.

We did this last week from an early liberty commit to stable and had
incompatible db changes and a token format change and only had a brief
keystone outage.
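The check described above - issue a token on the upgraded node and see whether the not-yet-upgraded nodes still accept it - can be sketched with the Keystone v3 validation call (GET /v3/auth/tokens, with the token under test in the X-Subject-Token header). The endpoint URLs and tokens below are placeholders, not real deployment values:

```python
import urllib.error
import urllib.request

def build_validation_request(endpoint, auth_token, subject_token):
    """Build the Keystone v3 token-validation request:
    GET /v3/auth/tokens, with the token being checked passed
    in the X-Subject-Token header."""
    req = urllib.request.Request(endpoint.rstrip("/") + "/v3/auth/tokens")
    req.add_header("X-Auth-Token", auth_token)
    req.add_header("X-Subject-Token", subject_token)
    return req

def node_validates_token(endpoint, auth_token, subject_token):
    """Return True if the given keystone node accepts the token."""
    try:
        req = build_validation_request(endpoint, auth_token, subject_token)
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Placeholder node list: a token issued by the upgraded node would be
# checked against every node still running the old release before
# deciding whether the rolling approach is safe.
remaining_nodes = ["http://keystone-2.example.com:5000",
                   "http://keystone-3.example.com:5000"]
```

If every node in `remaining_nodes` still returns 200 for the Mitaka-issued token, the less disruptive one-node-at-a-time path may be viable; otherwise plan for the short full outage.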
On Apr 14, 2016 7:39 AM, "Gyorgy Szombathelyi" <
gyorgy.szombathe...@doclerholding.com> wrote:

> Hi!
>
> I just experimenting with upgrading Liberty to Mitaka, and hit an issue:
> In Mitaka, the user table doesn't have 'name' field, so running mixed
> versions of Keystone could result in:
>
> Unknown column 'user.name' in 'field list'
>
> in some operation when the DB is already upgraded to Mitaka, but some
> keystone instances in a HA setup are still Liberty.
>
> Is this change intentional? Should I ignore the problem and just
> upgrade all instances as fast as possible? Or I just overlooked something?
>
> Br,
> György
>
>


Re: [openstack-dev] [Tacker] Invalid command sfc-create

2016-04-14 Thread Victor Mehmeri
I am getting this now:

2016-04-15 01:33:02.412 | Waiting for Opendaylight to start via 
restconf/operational/network-topology:network-topology/topology/netvirt:1 ...
2016-04-15 01:43:02.430 | [Call Trace]
2016-04-15 01:43:02.432 | ./stack.sh:1158:run_phase
2016-04-15 01:43:02.432 | /home/stack/devstack/functions-common:1878:run_plugins
2016-04-15 01:43:02.443 | /home/stack/devstack/functions-common:1845:source
2016-04-15 01:43:02.443 | 
/opt/stack/networking-odl/devstack/plugin.sh:43:start_opendaylight
2016-04-15 01:43:02.443 | 
/opt/stack/networking-odl/devstack/entry_points:188:test_with_retry
2016-04-15 01:43:02.443 | /home/stack/devstack/functions-common:2293:die
2016-04-15 01:43:02.476 | [ERROR] /home/stack/devstack/functions-common:2293 
Opendaylight did not start after 600
2016-04-15 01:43:03.516 | Error on exit

Should I install and start OpenDaylight myself, or is it something Tacker does
in the background? If so, any ideas why I am getting this error?

Thanks,

Victor

-Original Message-
From: Victor Mehmeri 
Sent: 13. april 2016 16:04
To: 'Tim Rozet'; OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [Tacker] Invalid command sfc-create

Thanks, Tim!

Victor

-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: 13. april 2016 16:01
To: OpenStack Development Mailing List (not for usage questions); Victor Mehmeri
Subject: Re: [openstack-dev] [Tacker] Invalid command sfc-create

Hi Victor,
You can use the local.conf thats in the sfc-random repo.  The sfc functionality 
is not in upstream Tacker yet.  It is here:
https://github.com/trozet/sfc-random/blob/master/local.conf#L2


Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Victor Mehmeri" 
To: openstack-dev@lists.openstack.org
Sent: Wednesday, April 13, 2016 4:38:10 PM
Subject: [openstack-dev] [Tacker] Invalid command sfc-create



Hi all,

I am trying to follow this walkthrough here:
https://github.com/trozet/sfc-random/blob/master/tacker_sfc_walkthrough.txt

But when I get to this point: tacker sfc-create --name mychain --chain
testVNF1, I get the error:

Invalid command u'sfc-create --name'

'tacker help' doesn't even list any command related to sfc. My devstack
local.conf file has this line:

enable_plugin tacker https://git.openstack.org/openstack/tacker stable/liberty

Is the reason I don't have the sfc-related commands that I am pointing to the
Liberty version? Should I point to master and rerun stack.sh?

Thanks in advance,

Victor





Re: [openstack-dev] [keystone] Newton midcycle planning

2016-04-14 Thread Dolph Mathews
On Wed, Apr 13, 2016 at 9:07 PM, Morgan Fainberg 
wrote:

> It is that time again, the time to plan the Keystone midcycle! Looking at
> the schedule [1] for Newton, the weeks that make the most sense look to be
> (not in preferential order):
>
> R-14 June 27-01
> R-12 July 11-15
> R-11 July 18-22
>

They all work equally well for me at this point, but I'd be interested to
try one of the earlier options.


>
> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
> on previous attendance we can expect ~30 people to attend. Based upon all
> the information (other midcycles, other events, the US July4th holiday), I
> am thinking that week R-12 (the week of the newton-2 milestone) would be
> the best offering. Weeks before or after these three tend to push too close
> to the summit or too far into the development cycle.
>
> I am trying to arrange for a venue in the Bay Area (most likely will be
> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
> have done east coast and central over the last few midcycles.
>
> Please let me know your thoughts / preferences. In summary:
>
> * Venue will be Bay Area (more info to come soon)
>
> * Options of weeks (in general subjective order of preference): R-12,
> R-11, R-14
>
> Cheers,
> --Morgan
>
> [1] http://releases.openstack.org/newton/schedule.html
>


Re: [openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Grasza, Grzegorz
> From: Gyorgy Szombathelyi
> 
> Unknown column 'user.name' in 'field list'
> 
> in some operation when the DB is already upgraded to Mitaka, but some
> keystone instances in a HA setup are still Liberty.

Currently we don't support rolling upgrades in keystone. To do an upgrade, you 
need to upgrade all keystone service instances at once, instead of going 
one-by-one, which means you have to plan for downtime of the keystone API.

> 
> Is this change intentional? Should I ignore the problem and just upgrade 
> all
> instances as fast as possible? Or I just overlooked something?
> 

You are right that there will be an error if you try running Liberty+Mitaka, 
since the database schema is not compatible. We have an ongoing effort to 
support online schema migrations, but it didn't make it into Mitaka. [1] [2]

We will have a presentation about Online DB Migrations at the summit (in the 
upstream development track), so if you are interested, you can attend or watch 
the recorded session afterwards [3]. There will also be a discussion about this 
in keystone meetings at the design summit. [4]

[1] 
https://specs.openstack.org/openstack/keystone-specs/specs/mitaka/online-schema-migration.html
[2] https://review.openstack.org/#/c/274079/
[3] https://www.openstack.org/summit/austin-2016/summit-schedule/events/7639
[4] https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm




[openstack-dev] [all] Design Summit Cross Project Session Etherpads

2016-04-14 Thread Sean Dague
The Cross Project agenda with a list to all the etherpads has been added
to the wiki -
https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Cross-Project_workshops


If you already started an etherpad for a topic in question, feel free to
update the link. If you did not, please ensure that you have updated
your etherpad before the summit gets started.

Thanks much,

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-14 Thread Nathan Reller
I agree with Doug's comments. Castellan is a generic key manager
library that allows symmetric keys, private keys, public keys,
certificates, passphrases, and opaque secret data to be stored in a
key manager. There is a Barbican implementation that is complete, and
a KMIP (Key Management Interoperability Protocol, an OASIS standard)
implementation is under development.

The precursor to castellan was the KeyManager interface integrated
into Nova and Cinder. We are in the process of making the easy switch
from that to Castellan. Glance and Sahara have both already integrated
with Castellan. Swift is currently integrating with Castellan and will
swap between Barbican and KMIP.

-Nate



On Wed, Apr 13, 2016 at 3:04 PM, Douglas Mendizábal
 wrote:
> Hi Hongbin,
>
> I have to admit that it's a bit disappointing that the Magnum team
> chose to decouple from Barbican, although I do understand that our
> team needs to do a better job of documenting detailed how-tos for
> deploying Barbican.
>
> I'm not sure that I understand the Threat Model you're trying to
> protect against, and I have not spent a whole lot of time researching
> Magnum architecture so please forgive me if my assumptions are wrong.
>
> So that we're all on the same page, I'm going to summarize the TLS
> use-case as I understand it:
>
> The magnum-conductor is a single process that may be scalable at some
> point in the future. [1]
>
> When the magnum-conductor is asked to provision a new bay the
> following things happen:
> 1. A new self-signed root CA is created.  This results in a Root CA
> Certificate and its associated key
> 2. N number of nodes are created to be part of the new bay.  For each
> node, a new x509 certificate is provisioned and signed by the Root CA
> created in 1.  This results in a certificate and key pair for each node.
> 3. The conductor then needs to store all generated keys in a secure
> location.
> 4. The conductor would also like to store all generated Certificates
> in a secure location, although this is not strictly necessary since
> Certificates contain no secret information as pointed out by Adam
> Young elsewhere in this thread.
>
> Currently the conductor is using python-barbicanclient to store the
> Root CA and Key in Barbican and associates those secrets via a
> Certificate Container and then stores the container URI in the
> conductor database.
>
> Since most users of Magnum are unwilling or unable to deploy Barbican
> the Magnum team would like an alternative mechanism for storing all
> keys as well as the Certificates.
>
> Additionally, since magnum-conductor may be more than one process in
> the future, the alternative storage must be available to many
> magnum-conductors.
>
> Now, in the proposed Keystone alternative the magnum-conductor will
> have a (symmetric?) encryption key.  Let's call this key the DEK
> (short for data-encryption-key).  How the DEK is stored and replicated
> to other magnum-conductors is outside of the scope of the proposed
> alternative solution.
> The magnum-conductor will use the DEK to encrypt all Certificates and
> Keys and then store the resulting ciphertexts using the Keystone
> credentials endpoint.
>
> This begs the question: If you're pre-encrypting all this data with
> the DEK, why do you need to store it in an external system?  I see no
> security benefit of using Keystone credentials over just storing these
> ciphertexts in a table in the database that all magnum-conductors will
> already have access to.
>
> I think a better alternative would be to integrate with Castellan and
> develop a new Castellan implementation where the DEK is specified in a
> config file, and the ciphertexts are stored in a database.  Let's call
> this new implementation LocalDEKAndDBKeyManager.
>
> With this approach the deployer could specify the
> LocalDEKAndDBKeyManager class as the implementation of Castellan to be
> used for their deployment, and then the DEK and db connection string
> could be specified in the config as well.
>
> By introducing the Castellan abstraction you would lose the ability to
> group secrets into containers, so you'd have to store separate
> references for each cert and key instead of just one barbican
> reference for both.  Also, you would probably have to write the
> Castellan integration in a way that always uses a context that is
> generated from the config file which will result in all keys being
> owned by the Magnum service tenant instead of the user's tenant when
> using Barbican as a backend.
>
> The upshot is that a deployer could choose the existing Barbican
> implementation instead, and other projects may be able to make use of
> the LocalDEKAndDBKeyManager.
>
> - - Douglas Mendizábal
>
> [1] http://docs.openstack.org/developer/magnum/#architecture
>
> On 4/13/16 10:14 AM, Hongbin Lu wrote:
>> I think there are two questions here:
>>
>> 1.   Should Magnum 
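The LocalDEKAndDBKeyManager idea proposed above could look roughly like the sketch below. Everything here is illustrative: the class and method names are hypothetical (this is not Castellan or Magnum code), a dict stands in for the shared database table, and the SHA-256-derived keystream is only a placeholder for real authenticated encryption (e.g. AES-GCM from a proper crypto library):

```python
import hashlib
import os
import uuid

def _keystream(dek, nonce, length):
    """Derive a keystream from the DEK and a per-secret nonce.
    Placeholder only -- a real implementation would use AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            dek + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def _xor(data, stream):
    return bytes(a ^ b for a, b in zip(data, stream))

class LocalDEKKeyManager:
    """Secrets are encrypted with a deployer-provided DEK (taken from
    the config file) and the ciphertexts live in storage shared by all
    conductors; a dict stands in for that database table here."""

    def __init__(self, dek):
        self._dek = dek
        self._table = {}  # secret_id -> (nonce, ciphertext)

    def store(self, plaintext):
        secret_id = str(uuid.uuid4())
        nonce = os.urandom(16)
        ct = _xor(plaintext, _keystream(self._dek, nonce, len(plaintext)))
        self._table[secret_id] = (nonce, ct)
        return secret_id  # this reference is what the conductor DB keeps

    def get(self, secret_id):
        nonce, ct = self._table[secret_id]
        return _xor(ct, _keystream(self._dek, nonce, len(ct)))

km = LocalDEKKeyManager(dek=b"dek-from-config-file")
ref = km.store(b"fake PEM key material")
assert km.get(ref) == b"fake PEM key material"
```

As the mail argues, the ciphertexts themselves carry no extra protection from living in Keystone credentials rather than a local table, which is why the Castellan-style pluggable backend is the more interesting part of the proposal.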

Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Denis Egorenko
>
> Some of UI plugins, like murano-dashboard, needs to add extra parameters
> https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example
> to local_settings file (which comes from Horizon).
> My question is: Should puppet-horizon module provide those extra
> parameters coming from each official UI plugins? or this kind of things
> should come from specific a puppet-{ui-plugin}?


Well, not exactly puppet-{ui-plugin}. For example, we already have murano
module and it has manifests for UI plugin installation.

On one side, this way we keep all module-related configuration
in one place.
On the other side, all UI configuration should probably be placed in the
horizon module. But in that case, we need the horizon module to support the
full configuration for each UI plugin.

So, I think we can keep the UI configuration in place (in a separate module)
if we have such a module at all. For cases where we only need to support some
UI settings/plugins, we can keep it in puppet-horizon.

Thoughts?

2016-04-14 16:00 GMT+03:00 Emilien Macchi :

> On Thu, Apr 14, 2016 at 8:52 AM, Marcos Fermin Lobo
>  wrote:
> > Hi all,
> >
> > I have a question about puppet-horizon module and UI plugins for Horizon.
> >
> > Some of UI plugins, like murano-dashboard, needs to add extra parameters
> >
> https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example
> > to local_settings file (which comes from Horizon).
> >
> > My question is: Should puppet-horizon module provide those extra
> parameters
> > coming from each official UI plugins? or this kind of things should come
> > from specific a puppet-{ui-plugin}?
> >
> I don't think having a separated Puppet module for each plugin will
> help us, maintaining a module is a lot of work.
> One thing we could do is to have classes:
> horizon::plugin::murano
> horizon::plugin::foobar etc
>
> What do you think?
> --
> Emilien Macchi
>



-- 
Best Regards,
Egorenko Denis,
Senior Deployment Engineer
Mirantis


[openstack-dev] [keystone]Liberty->Mitaka upgrade: is it possible without downtime?

2016-04-14 Thread Gyorgy Szombathelyi
Hi!

I am just experimenting with upgrading Liberty to Mitaka, and hit an issue:
In Mitaka, the user table doesn't have a 'name' field, so running mixed versions
of Keystone could result in:

Unknown column 'user.name' in 'field list'

in some operations when the DB is already upgraded to Mitaka, but some keystone
instances in a HA setup are still Liberty.

Is this change intentional? Should I ignore the problem and just upgrade all
instances as fast as possible? Or did I just overlook something?

Br,
György




Re: [openstack-dev] [all] [devstack] command tox with error in mitaka

2016-04-14 Thread zhaolihuisky
--
> From: Ihar Hrachyshka
> Sent: 2016 Apr 14 (Thu) 18:11
> To: zhaolihuisky; OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [devstack] command tox with error in mitaka


> zhaolihuisky wrote:

>> Hi everyone
>>
>> Install devstack by http://docs.openstack.org/developer/devstack/
>> Install tox by  
>> docs.openstack.org/project-team-guide/project-setup/python.html
>> command 'tox', the error log:
>>
>> pip-missing-reqs runtests: PYTHONHASHSEED='952003835'
>> pip-missing-reqs runtests: commands[0] | pip-missing-reqs -d  
>> --ignore-file=nova/tests/* --ignore-file=nova/test.py nova
>> Traceback (most recent call last):
>>   File ".tox/pip-missing-reqs/bin/pip-missing-reqs", line 7, in 
>> from pip_missing_reqs.find_missing_reqs import main
>>   File 
>> "/opt/stack/nova/.tox/pip-missing-reqs/local/lib/python2.7/site-packages/pip_missing_reqs/find_missing_reqs.py",
>>  line 14, in 
>> from pip.utils import get_installed_distributions, normalize_name
>> ImportError: cannot import name normalize_name
>> ERROR: InvocationError:  
>> '/opt/stack/nova/.tox/pip-missing-reqs/bin/pip-missing-reqs -d  
>> --ignore-file=nova/tests/* --ignore-file=nova/test.py nova
>> '  
>> summary  
>> 
>> ERROR:   py34: could not install deps  
>> [-r/opt/stack/nova/test-requirements.txt]; v =  
>> InvocationError('/opt/stack/nova/.tox/py34/bin/pip install -
>> chttps://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
>>   
>> -r/opt/stack/nova/test-requirements.txt (see  
>> /opt/stack/nova/.tox/py34/log/py34-1.log)', 1)ERROR:   py27: commands  
>> failed
>>   functional: commands succeeded
>>   pep8: commands succeeded
>> ERROR:   pip-missing-reqs: commands failed
>>
>> Is there any suggestion?

>I think your pip is too old. Please upgrade it, f.e. by doing pip install  
>--user -U pip. [When using --user, make sure your .local/bin is in PATH].

>Ihar

thanks.

zhaolh@develop:~$ python --version
Python 2.7.6

zhaolh@develop:~$ pip --version
pip 8.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)

zhaolh@develop:/opt/stack/nova$ pip install --user -U pip
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:315:
 SNIMissingWarning: An HTTPS request has been made, but the SNI (Su
bject Name Indication) extension to TLS is not available on this platform. This 
may cause the server to present an incorrect TLS certificate, which can cause 
validation failures. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.  
SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120:
 InsecurePlatformWarning: A true SSLContext object is not available
. This prevents urllib3 from configuring SSL appropriately and may cause 
certain SSL connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
Requirement already up-to-date: pip in /usr/local/lib/python2.7/dist-packages

The error still exists.



Re: [openstack-dev] [vitrage] Cinder Datasource

2016-04-14 Thread Weyl, Alexey (Nokia - IL)
Vitrage is an OpenStack project which supports Root Cause Analysis functionality 
for OpenStack, with support for raising additional deduced alarms and states. 
For this purpose Vitrage gathers information from multiple OpenStack projects 
about the state of the system, and how the different entities are related to 
one another.

This update was about the first steps of adding Cinder to the Vitrage model - 
Vitrage can query Cinder to get the list of volumes and which instance they are 
attached to, and store this information in the Vitrage data model.

Based on your question, it is important to emphasize that Vitrage does not 
itself make changes to the system - it is concerned only with reflecting and 
analyzing what is happening and raising alarms / changing states as a result.

For more information on Vitrage, please see here:
https://wiki.openstack.org/wiki/Vitrage
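To make the description above concrete, the kind of topology update Vitrage performs can be sketched as follows. This is an illustrative toy only — the event keys and class names are my assumptions, not Vitrage's actual data model or notification schema:

```python
# Toy sketch: keep a tiny topology graph up to date from volume
# attach/detach events (stand-ins for oslo.messaging payloads).
class TopologyGraph:
    def __init__(self):
        # "attached" edges as (volume_id, instance_id) pairs
        self.edges = set()

    def on_volume_notification(self, event):
        pair = (event["volume_id"], event["instance_id"])
        if event["event_type"] == "volume.attach.end":
            self.edges.add(pair)
        elif event["event_type"] == "volume.detach.end":
            self.edges.discard(pair)

    def volumes_of(self, instance_id):
        return sorted(v for v, i in self.edges if i == instance_id)


graph = TopologyGraph()
graph.on_volume_notification(
    {"event_type": "volume.attach.end", "volume_id": "vol-1", "instance_id": "vm-1"})
graph.on_volume_notification(
    {"event_type": "volume.attach.end", "volume_id": "vol-2", "instance_id": "vm-1"})
graph.on_volume_notification(
    {"event_type": "volume.detach.end", "volume_id": "vol-1", "instance_id": "vm-1"})
print(graph.volumes_of("vm-1"))  # -> ['vol-2']
```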

Best Regards,
Alexey Weyl

> From: Erlon Cruz [mailto:sombra...@gmail.com] 
> Sent: Wednesday, April 13, 2016 3:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [vitrage] Cinder Datasource
>
> Can you give a bit more of context? Where is the design you mentioned? By 
> datasource you mean the Cinder service?
> There is already some work[1] to allow Cider to attach volumes in baremetal 
> servers.
>
>
> [1] https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova
>
> On Tue, Apr 12, 2016 at 4:02 AM, Weyl, Alexey (Nokia - IL) 
>  wrote:
> Hi,
>
> Here is the design of the Cinder datasource of Vitrage.
>
> Currently Cinder datasource is handling only Volumes.
> This datasource listens to cinder volumes notifications on the oslo bus, and 
> updates the topology accordingly.
> Currently Cinder Volume can be attached only to instance (Cinder design).
>
> Future Steps:
> We want to perform research on what other data we can bring from Cinder.
> For example:
> 1. To what zone we can connect the volume
> 2. To what image we can connect the volume
>
> Alexey

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Emilien Macchi
On Thu, Apr 14, 2016 at 8:52 AM, Marcos Fermin Lobo
 wrote:
> Hi all,
>
> I have a question about puppet-horizon module and UI plugins for Horizon.
>
> Some UI plugins, like murano-dashboard, need to add extra parameters
> (https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example)
> to the local_settings file (which comes from Horizon).
>
> My question is: should the puppet-horizon module provide those extra parameters
> for each official UI plugin, or should this kind of thing come from a
> plugin-specific puppet-{ui-plugin} module?
>
I don't think having a separate Puppet module for each plugin will
help us; maintaining a module is a lot of work.
One thing we could do is to have classes:
horizon::plugin::murano
horizon::plugin::foobar etc

What do you think?
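For illustration only, a rough sketch of what such a per-plugin class could look like. Everything below — the parameter, the concat target path, and the template name — is an assumption of mine, not puppet-horizon's actual API; it only shows the shape of the idea:

```puppet
class horizon::plugin::murano (
  $metadata_cache_dir = '/var/cache/murano-dashboard',
) {
  # Append a plugin-specific fragment to Horizon's local_settings file,
  # assuming the base module manages it via a puppetlabs-concat resource.
  concat::fragment { 'horizon_local_settings_murano':
    target  => '/etc/openstack-dashboard/local_settings',
    content => template('horizon/plugin/murano.py.erb'),
    order   => '90',
  }
}
```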
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet][horizon] - Add extra plugins config to puppet-horizon

2016-04-14 Thread Marcos Fermin Lobo
Hi all,

I have a question about puppet-horizon module and UI plugins for Horizon.

Some UI plugins, like murano-dashboard, need to add extra parameters 
(https://github.com/openstack/murano-dashboard/blob/master/muranodashboard/local/local_settings.py.example)
to the local_settings file (which comes from Horizon).

My question is: should the puppet-horizon module provide those extra parameters 
for each official UI plugin, or should this kind of thing come from a 
plugin-specific puppet-{ui-plugin} module?

Thanks.

Cheers,
Marcos
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][vitrage] Joint design session in Austin

2016-04-14 Thread Weyl, Alexey (Nokia - IL)
As far as the Vitrage team is concerned, 16:10-16:50 works best for us, but we can 
attend either session if needed.

Alexey Weyl

-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: Thursday, April 14, 2016 12:07 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [telemetry][vitrage] Joint design session in Austin

Hi folks,

Vitrage doesn't have any track/session at the summit, and we Telemetry have a 
bunch of spare ones, so I figured we should use one to meet and chat a bit 
about how our projects can help each other. There should be some interesting 
evolution for Aodh going forward with the usage Vitrage is making of it.

We got 2 slots available on Thursday 28th April: 16:10-16:50 or 17:00-17:40. 
Would either of those fit the schedule of everyone interested?

Cheers,
--
Julien Danjou
# Free Software hacker
# https://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-14 Thread Alex Xu
+1, thanks Andrey for helping a lot with the microversions implementation in
python-novaclient!

2016-04-14 1:53 GMT+08:00 Matt Riedemann :

> I'd like to propose that we make Andrey Kurilin core on python-novaclient.
>
> He's been doing a lot of the maintenance the last several months and a lot
> of times is the first to jump on any major issue, does a lot of the
> microversion work, and is also working on cleaning up docs and helping me
> with planning releases.
>
> His work is here [1].
>
> Review stats for the last 4 months (although he's been involved in the
> project longer than that) [2].
>
> Unless there is disagreement I plan to make Andrey core by the end of the
> week.
>
> [1]
> https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient
> [2] http://stackalytics.com/report/contribution/python-novaclient/120
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova quota statistics counting issue

2016-04-14 Thread Timofei Durakov
Hi,

I think it would be ok to store quota details persistently on the compute side,
as was discussed during the mitaka mid-cycle[1] for migrations[2]. Then, if the
compute service fails, we could restore its state and update quotas after the
restart.

Timofey

[1] - https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
[2] - https://review.openstack.org/#/c/291161/5/nova/compute/background.py




On Wed, Apr 13, 2016 at 7:27 PM, Dmitry Stepanenko  wrote:

> Hi Team,
>
> I worked on nova quota statistics issue (
> https://bugs.launchpad.net/nova/+bug/1284424) happening when nova-*
> processes are restarted during removing instances and was able to reproduce
> it. For repro I used devstack and started nova-api and nova-compute in
> separate screen windows. For killing them I used ctrl+c. As I found, this
> issue happens if the nova-* processes are killed after the instance was deleted
> but right before the quota commit procedure finishes.
>
> We discussed these results with Markus Zoeller and decided that even
> though killing nova processes is a somewhat exotic event, this still should be
> fixed, because quota counting affects billing and is very important for us.
>
> So, we need to introduce some mechanism that will prevent us from reaching
> inconsistent states in terms of quotas. In other words, this mechanism
> should guarantee that the instance create/remove operation and the
> quota usage recount either happen together or not at all.
>
> Any ideas how to do that properly?
>
> Kind regards,
> Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
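One standard answer to the atomicity question quoted above is a reserve/commit/rollback pattern, where usage only changes once both steps succeed. The sketch below is a generic toy, not Nova's actual quota code:

```python
class QuotaTracker:
    """Toy reserve/commit/rollback quota accounting."""

    def __init__(self, in_use):
        self.in_use = in_use
        self.reserved = {}          # reservation_id -> usage delta

    def reserve(self, rid, delta):
        self.reserved[rid] = delta  # record intent before acting

    def commit(self, rid):
        self.in_use += self.reserved.pop(rid)

    def rollback(self, rid):
        self.reserved.pop(rid, None)


def delete_instance(quotas, destroy):
    quotas.reserve("r1", -1)        # intent recorded first
    try:
        destroy()                   # the actual instance removal
    except Exception:
        quotas.rollback("r1")       # failure: usage stays untouched
        raise
    quotas.commit("r1")             # both happened -> usage decremented


q = QuotaTracker(in_use=3)
delete_instance(q, lambda: None)
print(q.in_use)  # -> 2
```

Note this still leaves a dangling reservation if the process is killed between destroy() and commit() — which is exactly why the thread discusses persisting state and reconciling expired reservations after a restart.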
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][vitrage] Joint design session in Austin

2016-04-14 Thread gordon chung
cool! either works for me. prefer earlier slot if it's just the one.

On 14/04/2016 5:07 AM, Julien Danjou wrote:
> Hi folks,
>
> Vitrage doesn't have any track/session at the summit, and we Telemetry
> have a bunch of spare ones, so I figured we should use one to meet and
> chat a bit about how our projects can help each others. There should be
> some interesting evolution for Aodh going forward with the usage Vitrage
> is making of it.
>
> We got 2 slots available on Thursday 28th April: 16:10-16:50 or
> 17:00-17:40. Would either of those fit the schedule of everyone interested?
>
> Cheers,
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-14 Thread gordon chung


On 14/04/2016 5:28 AM, Nadya Shakhat wrote:
> Hi Gordon,
>
> I'd like to add some clarifications and comments.
>
> this is not entirely accurate pre-polling change, the polling agents
> publish one message per sample. not the polling agents publish one
> message per interval (multiple samples).
>
> Looks like there is some misunderstanding here. In the code, there is
> "batch_polled_samples" option. You can switch it off and get the result
> you described, but it's True by default.  See
> https://github.com/openstack/ceilometer/blob/master/ceilometer/agent/manager.py#L205-L211

right... the polling agents are by default to publish one message per 
interval as i said (if you s/not/now/) where as before it was publishing 
1 message per sample. i don't see why that's a bad thing?

> .
>
> You wrote:
>
> the polling change is not related to coordination work in notification.
> the coordination work was to handle HA / multiple notification agents.
> regardless polling change, this must exist.
>
> and
>
> transformers are already optional. they can be removed from
> pipeline.yaml if not required (and thus coordination can be disabled).
>
>
> So, coordination is needed only to support transformations. Polling
> change does relate to this because it has brought additional
> transformations on notification agent side. I suggest to pay attention
> to the existing use cases. In real life, people use transformers for
> polling-based metrics only. The most important use case for
> transformation is Heat autoscaling. It is usually based on cpu_util. Before
> Liberty, we were able to avoid coordination for the notification agent and
> still support the autoscaling use case. In Liberty we cannot support it without
> Redis. Now "transformers are already optional", that's true. But I think
> it's better to add some restrictions like "we don't support
> transformations for notifications" and keep transformers optional on the
> polling agent only, instead of introducing such comprehensive
> coordination.

i'm not sure if it's safe to say its only use is for cpu_util. that said, 
cpu_util ideally shouldn't be a transform anyways. see the work Avi was 
doing[1].
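For context, the cpu_util derivation the thread keeps coming back to is a rate-of-change computation over cumulative CPU-time samples. A simplified sketch — not the real transformer code, and the sample shape here is my own simplification:

```python
def cpu_util(prev, cur):
    """Derive cpu_util (%) from two cumulative 'cpu' samples.

    Each sample is (timestamp_seconds, cpu_time_nanoseconds, vcpus).
    Simplified sketch of what a rate-of-change transformer does.
    """
    (t0, cpu0, _), (t1, cpu1, vcpus) = prev, cur
    elapsed_ns = (t1 - t0) * 1e9
    if elapsed_ns <= 0:
        raise ValueError("samples must be time-ordered")
    # fraction of available CPU time consumed, as a percentage
    return 100.0 * (cpu1 - cpu0) / (elapsed_ns * vcpus)


# 30e9 ns of CPU time over 60 s on 1 vCPU -> 50% utilization
print(cpu_util((0, 0, 1), (60, 30_000_000_000, 1)))  # -> 50.0
```

Doing this in the polling agent (or at the source, per the review below) avoids needing cross-agent coordination just to pair consecutive samples.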


>
> IPC is one of the
> standard use cases for message queues. the concept of using queues to
> pass around and distribute work is essentially what it's designed for.
> if rabbit or any message queue service can't provide this function, it
> does worry me.
>
>
> I see your point here, but Ceilometer aims to take care of OpenStack and
> monitor its state. Now it is known as a "Rabbit killer". We
> cannot ignore that if we want anybody to use Ceilometer.

what is the message load we're seeing here? how is your MQ configured? 
do you have batching? how many agents/queues do you have? i think this 
needs to be reviewed first, to be honest, as there really isn't much to go on.


[1] https://review.openstack.org/#/c/182057/


-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-14 Thread Sean Dague
On 04/13/2016 01:53 PM, Matt Riedemann wrote:
> I'd like to propose that we make Andrey Kurilin core on python-novaclient.
> 
> He's been doing a lot of the maintenance the last several months and a
> lot of times is the first to jump on any major issue, does a lot of the
> microversion work, and is also working on cleaning up docs and helping
> me with planning releases.
> 
> His work is here [1].
> 
> Review stats for the last 4 months (although he's been involved in the
> project longer than that) [2].
> 
> Unless there is disagreement I plan to make Andrey core by the end of
> the week.
> 
> [1]
> https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient
> 
> [2] http://stackalytics.com/report/contribution/python-novaclient/120
> 

+1

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API doc import, and next steps

2016-04-14 Thread Sean Dague
On 04/14/2016 05:03 AM, Markus Zoeller wrote:
>> From: Sean Dague 
>> To: openstack-dev@lists.openstack.org
>> Date: 04/13/2016 05:08 PM
>> Subject: [openstack-dev] [nova] Nova API doc import, and next steps
>>
>> I think we've gotten the automatic converters for the wadl files to
>> about as good as we're going to get. The results right now are here -
>> https://review.openstack.org/#/c/302500/
>>
>> There remain many issues in the content (there are many issues in the
>> source content, and a few crept in during imperfect translation),
>> however at some point we need to just call the automatic translation
>> effort "good enough", commit it, and start fixing the docs in chunks. I
>> think we are at that stage.
>>
>> Once we get those bits committed, it's time to start fixing what
>> remains. I started an etherpad for the rough guide here -
>> https://etherpad.openstack.org/p/nova-api-docs-in-rst there are a few
>> global level things, but a bunch of this is a set of verifications and
>> fixes that will have to happen for every *.inc file.
>>
>> for every file in api-ref/sources/*.inc
>>
>> 1. Verify methods
>>  1. Do all methods of the resource currently exist?
>>  2. Rearange methods in order (sorted by url)
>>   1. GET
>>   2. POST
>>   3. PUT
>>   4. DELETE
>>   5. i.e. for servers.inc GET /servers, POST /servers, GET
>>  /servers/details, GET /servers/{id}, PUT /servers/{id},
>>  DELETE /servers/{id}
>> 2. Verify all parameters
>>  1. Are all parameters that exist in the resource are listed
>>  2. Are all parameters referencing the right lookup value in
>> parameters.yaml
>>   1. name, id are common issues, will need $foo_name and $foo_id
>>  created
>>  3. Add microversion parameters at the end of the table in order of
>> introduction
>>   1. min_ver: 2.10 is a valid parameter key
>> 3. Examples
>>  1. Is there an example response for all request / response that
>> have
>> a body
>>  2. Is there an english description of the change in question
>> explaining the action that it would have
>> 4. Body Text
>>  1. Is formatting of the introduction text for each section well
>> formatted (lists and headers were stripped in the processing)
>>
>> My feeling is that we should probably create a fleet of bugs which is 1
>> per source file and phase, with a set of api-ref tags. This will give us
>> easy artifacts to hand off to people, and know which ones are getting
>> done and which ones remain. A lot of this work is pretty easy, just
>> takes some time.
>>
>> I'd like to get the base patches landed in the next day or so so that we
>> can start chugging through these fixes pre summit, and do a virtual doc
>> sprint post summit to push through to completion.
>>
>>-Sean
>>
>> -- 
>> Sean Dague
>> http://dague.net
> 
> The rendered output looks pretty neat. I like that all is on one page: 
> http://docs-draft.openstack.org/00/302500/9/check/gate-nova-api-ref/81d644c/api-ref/build/html/
> 
> 
> One bug report per source-file sounds reasonable. Adding the 
> "low-hanging-fruit" tag will maybe get you some volunteers.

Yes, that is the intent. Auggy is working up a tool that will let us
bulk create this fleet of bugs with detailed instructions so they will
be easy to work through by new folks. This may also be extremely useful
for other low hanging fruit tracking mass efforts in the future.
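A sketch of what the payload-building half of such a bulk bug-creation tool might look like — one bug per api-ref source file. The titles, tags, and checklist text are my assumptions, not the actual tool; the Launchpad API call itself is deliberately omitted:

```python
def make_api_ref_bugs(inc_files, phase="verify-parameters"):
    """Build one Launchpad-style bug payload per api-ref source file."""
    checklist = (
        "For api-ref/source/{f}:\n"
        "1. Verify all methods exist and are ordered by URL/verb\n"
        "2. Verify all parameters reference parameters.yaml correctly\n"
        "3. Check examples for each request/response body\n"
        "4. Fix body text formatting (lists/headers)\n"
    )
    return [
        {
            "title": "api-ref: %s needs %s pass" % (f, phase),
            "description": checklist.format(f=f),
            "tags": ["api-ref", "low-hanging-fruit", phase],
        }
        for f in inc_files
    ]


bugs = make_api_ref_bugs(["servers.inc", "flavors.inc"])
print(bugs[0]["title"])  # -> api-ref: servers.inc needs verify-parameters pass
```

Each payload dict could then be fed to a bug tracker client in a separate, rate-limited loop.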

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-14 Thread Sean Dague
On 04/14/2016 05:19 AM, Markus Zoeller wrote:
>> From: Neil Jerram 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: 04/14/2016 10:50 AM
>> Subject: Re: [openstack-dev] [all] [devstack] Adding example 
>> "local.conf" files for testing?
>>
>> On 14/04/16 08:35, Markus Zoeller wrote:
>>> Sometimes (especially when I try to reproduce bugs) I have the need
>>> to set up a local environment with devstack. Everytime I have to look
>>> at my notes to check which option in the "local.conf" have to be set
>>> for my needs. I'd like to add a folder in devstacks tree which hosts
>>> multiple example local.conf files for different, often used setups.
>>> Something like this:
>>>
>>>  example-confs
>>>  --- newton
>>>  --- --- x86-ubuntu-1404
>>>  --- --- --- minimum-setup
>>>  --- --- --- --- README.rst
>>>  --- --- --- --- local.conf
>>>  --- --- --- serial-console-setup
>>>  --- --- --- --- README.rst
>>>  --- --- --- --- local.conf
>>>  --- --- --- live-migration-setup
>>>  --- --- --- --- README.rst
>>>  --- --- --- --- local.conf.controller
>>>  --- --- --- --- local.conf.compute1
>>>  --- --- --- --- local.conf.compute2
>>>  --- --- --- minimal-neutron-setup
>>>  --- --- --- --- README.rst
>>>  --- --- --- --- local.conf
>>>  --- --- s390x-1.1.1-vulcan
>>>  --- --- --- minimum-setup
>>>  --- --- --- --- README.rst
>>>  --- --- --- --- local.conf
>>>  --- --- --- live-migration-setup
>>>  --- --- --- --- README.rst
>>>  --- --- --- --- local.conf.controller
>>>  --- --- --- --- local.conf.compute1
>>>  --- --- --- --- local.conf.compute2
>>>  --- mitaka
>>>  --- --- # same structure as master branch. omitted for brevity
>>>  --- liberty
>>>  --- --- # same structure as master branch. omitted for brevity
>>>
>>> Thoughts?
>>
>> Yes, this looks useful to me.  Only thing is that you shouldn't need the 
> 
>> per-release subtrees, though; the DevStack repository already has 
>> per-release stable/ branches, which you need to check out in 
>> order to do a DevStack setup of a past release.  So I would expect the 
>> local.confs for each past release to live in the corresponding branch.
>>
>> Regards,
>>Neil
> 
> My intention was to avoid having a folder like "current" or "trunk",
> which doesn't get updated. That's the issue Steve talked
> about.
> 
> The workflow could be, at every new cycle:
> * create a new "release folder" (Newton, Ocata, ...)
> * copy the "setup folders" (minimum-setup, ...) to the new folder
> * clean up the "local.conf" file(s) of deprecated options
> * delete a "release folder" if the release is EOL
> 
> I also assume that this would make potential backports easier.

I think this would be useful, and accepted easily.

I *don't* think we want per-release directories, because that confuses the
issue of whether or not devstack master can install liberty (which it
can't).

Every local.conf should include a documentation page as well that
describes the scenario, which means these would be easy to snag off the
web docs.
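For illustration, the kind of annotated example such a directory could hold — a minimal single-node sketch where every value is a placeholder to be replaced:

```ini
[[local|localrc]]
# Minimal single-node setup; replace the secrets and IP before use.
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.0.10

# Keep logs around for a couple of days to debug reproduced bugs.
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
```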

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-04-14 Thread Victor Stinner

Le 13/04/2016 22:54, Julien Danjou a écrit :

There's a bunch of projects that have no intention of using
oslo.context, so depending and referring to it by default is something
I'd love to fade away.


It looks like Oslo has an identity crisis :-)

Basically the question looks like: should we make Oslo easier to use 
outside "OpenStack"? If I summarized the question correctly, my answer 
is YES!


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][vitrage] Joint design session in Austin

2016-04-14 Thread Ildikó Váncsa
Hi Julien,

First of all big +1. :)

The 16:10-16:50 slot looks better for me.

Thanks,
/Ildikó

> -Original Message-
> From: Julien Danjou [mailto:jul...@danjou.info]
> Sent: April 14, 2016 11:07
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [telemetry][vitrage] Joint design session in Austin
> 
> Hi folks,
> 
> Vitrage doesn't have any track/session at the summit, and we Telemetry have a 
> bunch of spare ones, so I figured we should use one
> to meet and chat a bit about how our projects can help each other. There 
> should be some interesting evolution for Aodh going
> forward with the usage Vitrage is making of it.
> 
> We got 2 slots available on Thursday 28th April: 16:10-16:50 or 17:00-17:40. 
> Would either of those fit the schedule of everyone interested?
> 
> Cheers,
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Shaughnessy, David
Hi Cathy.
I’d be interested in contributing.
I think a meet-up at the summit would be a good idea, as the people I’ve engaged 
with from the other projects on this topic have expressed interest.
There was an etherpad for the l2-agent-extensions-api that was merged in Mitaka 
that listed projects that needed access to the flow table[1].
Hope you find it helpful.
Regards.
David.

[1] https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion

From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
Sent: Thursday, April 14, 2016 10:05 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Hi cathy,
at net-bgpvpn, we're very interested in this effort. Please, keep us in the 
loop.
Mathieu

On Thu, Apr 14, 2016 at 8:59 AM, Haim Daniel 
> wrote:
Hi,

I'd +1 Vikram's comment on neutron-classifier; RFE [1] contains the original 
thread about that topic.


[1] https://bugs.launchpad.net/neutron/+bug/1527671

On Thu, Apr 14, 2016 at 5:33 AM, Vikram Choudhary 
> wrote:

Hi Cathy,

A project called "neutron-classifier" [1] already exists addressing the same use 
case. Let's sync up and avoid duplicating work.

[1] https://github.com/openstack/neutron-classifier

Thanks
Vikram
On Apr 14, 2016 6:40 AM, "Cathy Zhang" 
> wrote:
Hi everyone,
Per Armando’s request, Louis and I are looking into the following features for 
Newton cycle.

• Neutron Common FC used for SFC, QoS, Tap as a service etc.,
• OVS Agent extension
Some of you might know that we already developed an FC in the networking-sfc 
project, and QoS also has an FC. It makes sense to have one common FC in Neutron 
that could be shared by the SFC, QoS, Tap-as-a-service, etc. features in Neutron.
Different features may extend OVS agent and add different new OVS flow tables 
to support their new functionality. A mechanism is needed to ensure consistent 
OVS flow table modification when multiple features co-exist. AFAIK, there is 
some preliminary work on this, but it is not a complete solution yet.
We will like to start these effort by collecting requirements and then posting 
specifications for review. If any of you would like to join this effort, please 
chime in. We can set up a meet-up session in the Summit to discuss this 
face-in-face.
Thanks,
Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-14 Thread Kwasniewska, Alicja
+1 to the approach suggested by sdake.

Furthermore, I think it would be good if -1/0/+1 reflected only the logical 
meaning of the reviewed docs, while we still provide suggestions for improving 
spelling and grammar in comments even when we +1 a given patch.

Alicja

From: Martin André [mailto:martin.an...@gmail.com]
Sent: Wednesday, April 13, 2016 12:03 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes


On Tue, Apr 12, 2016 at 10:05 PM, Steve Gordon 
> wrote:
- Original Message -
> From: "Jeff Peeler" >
> To: "OpenStack Development Mailing List (not for usage questions)" 
> >
>
> On Mon, Apr 11, 2016 at 3:37 AM, Steven Dake (stdake) 
> >
> wrote:
> > Hey folks,
> >
> > The reviewers in Kolla tend to nit-pick the quickstart guide to death
> > during
> > reviews.  I'd like to keep that high bar in place for the QSG, because it
> > is
> > our most important piece of documentation at present.  However, when new
> > contributors see the nitpicking going on in reviews, I think they may get
> > discouraged about writing documentation for other parts of Kolla.
> >
> > I'd prefer if the core reviewers held a lower bar for docs not related to
> > the philosophy or quiickstart guide document.  We can always iterate on
> > these new documents (like the operator guide) to improve them and raise the
> > bar on their quality over time, as we have done with the quickstart guide.
> > That way contributors don't feel nitpicked to death and avoid improving the
> > documentation.
> >
> > If you are a core reveiwer and agree with this approach please +1, if not
> > please –1.
>
> I'm fine with relaxing the reviews on documentation. However, there's
> a difference between having a missed comma versus the whole patch
> being littered with misspellings. In general in the former scenario I
> try to comment and leave the code review set at 0, hoping the
> contributor fixes it. The danger is that people sometimes miss a 0 vote,
> but it doesn't block progress.
My typical experience with (very) occasional drive-by commits to operational 
project docs (albeit not Kolla) is that the type of nit that comes up is more 
typically "-1: thanks for adding X, can you also add Y and Z". Before you know it, 
a simple drive-by commit to flesh out one area has become an expectation to 
write an entire chapter.

That's because you're a native speaker and you write proper English to begin 
with :)

We should be asking ourselves a simple question when reviewing a documentation 
patch: "does it make the documentation better?". Often the answer is yes; that's 
why I try to ask for additional improvements in follow-up patches.
Regarding spelling or grammatical mistakes, why not fix them now, while the 
patch is still hot, when we spot one in new documentation being written? It's 
more time-consuming to fix them later. If needed, a native speaker can take over 
the patch and correct the English.
Martin

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Ihar Hrachyshka

Cathy Zhang  wrote:


Hi everyone,
Per Armando’s request, Louis and I are looking into the following  
features for Newton cycle.

· Neutron Common FC used for SFC, QoS, Tap as a service etc.,
· OVS Agent extension
Some of you might know that we already developed a FC in networking-sfc  
project and QoS also has a FC. It makes sense that we have one common FC  
in Neutron that could be shared by SFC, QoS, Tap as a service etc.  
features in Neutron.


I don’t actually know of any classifier in QoS. It’s only planned to  
emerge, but there are no specs or anything specific to the feature.


Anyway, I agree that classifier API belongs to core neutron and should be  
reused by all interested subprojects from there.
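To make the "common FC" idea concrete, the shared resource the thread is about might look roughly like this. The field names and match semantics below are purely illustrative — this is not a proposed Neutron API:

```python
from dataclasses import dataclass
import ipaddress
from typing import Optional, Tuple


@dataclass
class FlowClassifier:
    """Toy traffic classifier shared by SFC/QoS/Tap-style consumers."""
    protocol: Optional[str] = None                  # 'tcp', 'udp', ...
    source_prefix: Optional[str] = None             # e.g. '10.0.0.0/24'
    dest_port_range: Optional[Tuple[int, int]] = None

    def matches(self, pkt):
        # pkt: dict with 'protocol', 'source_ip', 'dest_port' keys;
        # an unset field means "match anything" for that dimension.
        if self.protocol and pkt["protocol"] != self.protocol:
            return False
        if self.source_prefix and (
                ipaddress.ip_address(pkt["source_ip"])
                not in ipaddress.ip_network(self.source_prefix)):
            return False
        if self.dest_port_range:
            lo, hi = self.dest_port_range
            if not lo <= pkt["dest_port"] <= hi:
                return False
        return True


fc = FlowClassifier(protocol="tcp", source_prefix="10.0.0.0/24",
                    dest_port_range=(80, 81))
print(fc.matches({"protocol": "tcp", "source_ip": "10.0.0.5",
                  "dest_port": 80}))  # -> True
```

A single model like this, with per-consumer extensions layered on top, is what would let SFC, QoS, and Tap-as-a-service share one classification API instead of three.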


Different features may extend OVS agent and add different new OVS flow  
tables to support their new functionality. A mechanism is needed to  
ensure consistent OVS flow table modification when multiple features  
co-exist. AFAIK, there is some preliminary work on this, but it is not a  
complete solution yet.


I think there is no formal spec or anything, just some emails around there.

That said, I don’t follow why it’s a requirement for SFC to switch to l2  
agent extension mechanism. Even today, with SFC maintaining its own agent,  
there are no clear guarantees for flow priorities that would avoid all  
possible conflicts.


We would like to start this effort by collecting requirements and then  
posting specifications for review. If any of you would like to join this  
effort, please chime in. We can set up a meet-up session at the Summit to  
discuss this face-to-face.


Great. Let’s have a meetup for this topic.

Ihar



Re: [openstack-dev] [all] [devstack] command tox with error in mitaka

2016-04-14 Thread Ihar Hrachyshka

zhaolihuisky  wrote:


Hi everyone

Install devstack by http://docs.openstack.org/developer/devstack/
Install tox by  
docs.openstack.org/project-team-guide/project-setup/python.html

On running 'tox', the error log is:

pip-missing-reqs runtests: PYTHONHASHSEED='952003835'
pip-missing-reqs runtests: commands[0] | pip-missing-reqs -d  
--ignore-file=nova/tests/* --ignore-file=nova/test.py nova

Traceback (most recent call last):
  File ".tox/pip-missing-reqs/bin/pip-missing-reqs", line 7, in <module>
    from pip_missing_reqs.find_missing_reqs import main
  File "/opt/stack/nova/.tox/pip-missing-reqs/local/lib/python2.7/site-packages/pip_missing_reqs/find_missing_reqs.py", line 14, in <module>
    from pip.utils import get_installed_distributions, normalize_name
ImportError: cannot import name normalize_name
ERROR: InvocationError: '/opt/stack/nova/.tox/pip-missing-reqs/bin/pip-missing-reqs -d --ignore-file=nova/tests/* --ignore-file=nova/test.py nova'
summary
ERROR:   py34: could not install deps [-r/opt/stack/nova/test-requirements.txt]; v = InvocationError('/opt/stack/nova/.tox/py34/bin/pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r/opt/stack/nova/test-requirements.txt (see /opt/stack/nova/.tox/py34/log/py34-1.log)', 1)
ERROR:   py27: commands failed

  functional: commands succeeded
  pep8: commands succeeded
ERROR:   pip-missing-reqs: commands failed

Is there any suggestion?


I think your pip is too old. Please upgrade it, e.g. by doing pip install  
--user -U pip. (When using --user, make sure your .local/bin is in PATH.)


Ihar



Re: [openstack-dev] [horizon] Set CSS style for each cell in Datatable according to the value

2016-04-14 Thread 严超
Can you be more explicit? How should we override the Cell class of
Horizon?
Is there a best practice, or an example at hand?
It doesn't seem easy to me; is it the only choice for this case?

Thank you very much for your reply!

Best Regards!

Chao Yan
--
About me: http://about.me/chao_yan
My twitter: @yanchao727
My Weibo: http://weibo.com/herewearenow

2016-04-14 17:44 GMT+08:00 Itxaka Serrano Garcia :

> If I'm reading the code correctly, you can provide a cell_class on your
> Table class in the Meta.
>
> So you could create your custom Cell class that inherits from the normal
> Cell class (from horizon.tables.base import Cell) and, based on the data,
> change the CSS before rendering?
>
>
>
>
> On 04/14/2016 10:44 AM, 严超 wrote:
>
>> Hi, Everyone:
>>      Is there a possible way to set CSS style for *each cell* in
>> *DataTable* according to the value of the cell? For example, if the
>> cell value is *'available'* then the CSS should display a *green icon*,
>> else if the cell value is *'error'* then the CSS should display
>> a *red icon*.
>>      What I found is the horizon.tables.Column option:
>>
>> classes
>> <http://docs.openstack.org/developer/horizon/ref/tables.html#horizon.tables.Column.classes>
>>
>>      An iterable of CSS classes which should be added to this
>> column. Example: classes=('foo', 'bar').
>>
>>      But this sets the style for the whole column.
>>      Is there a possible way to set CSS style for *each cell* respectively?
>>      I'm very grateful for answering.
>>
>> Best Regards!
>> Chao Yan
>> About me: http://about.me/chao_yan
>> My twitter: @yanchao727
>> My Weibo: http://weibo.com/herewearenow
>>
>>


[openstack-dev] [release][oslo] pbr 1.9.1 release

2016-04-14 Thread no-reply
We are thrilled to announce the release of:

pbr 1.9.1: Python Build Reasonableness

With source available at:

http://git.openstack.org/cgit/openstack-dev/pbr

Please report issues through launchpad:

http://bugs.launchpad.net/pbr

For more details, please see below.

Changes in pbr 1.9.0..1.9.1
---

a27f512 Handle IndexError during version string parsing

Diffstat (except docs and test files)
-

pbr/version.py  | 8 
2 files changed, 11 insertions(+), 1 deletion(-)






Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Oleg Gelbukh
The thread I'm referring to in the prev message is:
http://lists.openstack.org/pipermail/openstack-infra/2014-January/000624.html

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Thu, Apr 14, 2016 at 12:56 PM, Oleg Gelbukh 
wrote:

> Hi,
>
> I'm sorry for replying to this old thread, but I would really like to see
> this moving.
>
> There's a 'pre-release' pipeline in Zuul which serves exactly that
> purpose: handle pre-release tags (beta-versions). However, per this thread,
> it is not recommended due to possible issues with pip unable to
> differentiate pre-release versions from main releases.
>
> Another option here is to publish minor versions of the package, i.e.
> start with 9.0.0 early, and then increase to 9.0.1 etc once the development
> progresses.
>
> --
> Best regards,
> Oleg Gelbukh
> Mirantis Inc.
>
> On Thu, Jan 21, 2016 at 11:52 AM, Yuriy Taraday 
> wrote:
>
>> By the way, it would be very helpful for testing external tools if we had
>> 7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
>> with a "stable/7.0.1" branch instead of "7.0.1" tag.
>>
>> On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko 
>> wrote:
>>
>>> Releasing a beta version sounds like a good plan but does OpenStack
>>> Infra actually support this?
>>>
>>> > 20 січ. 2016 р. о 12:05 Oleg Gelbukh 
>>> написав(ла):
>>> >
>>> > Hi,
>>> >
>>> > Currently we're experiencing issues with Python dependencies of our
>>> package (fuel-octane), specifically between fuelclient's dependencies and
>>> keystoneclient dependencies.
>>> >
>>> > New keystoneclient is required to work with the new version of Nailgun
>>> due to introduction of SSL in the latter. On the other hand, fuelclient is
>>> released along with the main release of Fuel, and the latest version
>>> available from PyPI is 7.0.0, and it has very old dependencies (based on
>>> packages available in centos6/python26).
>>> >
>>> > The solution I'd like to propose is to release beta version of
>>> fuelclient (8.0.0b1) with updated requirements ASAP. With --pre flag to
>>> pip/tox, this will allow to run unittests against the proper set of
>>> requirements. On the other hand, it will not break the users consuming the
>>> latest stable (7.0.0) version with old requirements from PyPI.
>>> >
>>> > Please, share your thoughts and considerations. If no objections, I
>>> will create a corresponding bug/blueprint against fuelclient to be fixed in
>>> the current release cycle.
>>> >
>>> > --
>>> > Best regards,
>>> > Oleg Gelbukh
>>> > Mirantis
>>> >


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-04-14 Thread Oleg Gelbukh
Hi,

I'm sorry for replying to this old thread, but I would really like to see
this moving.

There's a 'pre-release' pipeline in Zuul which serves exactly that purpose:
handle pre-release tags (beta-versions). However, per this thread, it is
not recommended due to possible issues with pip unable to differentiate
pre-release versions from main releases.

Another option here is to publish minor versions of the package, i.e. start
with 9.0.0 early, and then increase to 9.0.1 etc once the development
progresses.
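For reference, the intended semantics here come from PEP 440: 8.0.0b1 orders before 8.0.0, and pip ignores pre-releases unless --pre is passed, so a beta on PyPI should not affect users installing the latest stable. A deliberately simplified illustration of that distinction (a toy check, not pip's real version parser):

```python
import re


def is_prerelease(version):
    # Simplified PEP 440-style check: a trailing aN/bN/rcN segment marks a
    # pre-release. Real tooling should rely on pip's own version parsing.
    return bool(re.search(r'\d(a|b|rc)\d+$', version))


# pip skips pre-releases by default, so after publishing 8.0.0b1:
#   pip install python-fuelclient        -> still resolves to stable 7.0.0
#   pip install --pre python-fuelclient  -> picks up 8.0.0b1
assert is_prerelease('8.0.0b1') and not is_prerelease('7.0.0')
```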

--
Best regards,
Oleg Gelbukh
Mirantis Inc.

On Thu, Jan 21, 2016 at 11:52 AM, Yuriy Taraday  wrote:

> By the way, it would be very helpful for testing external tools if we had
> 7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
> with a "stable/7.0.1" branch instead of "7.0.1" tag.
>
> On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko  wrote:
>
>> Releasing a beta version sounds like a good plan but does OpenStack Infra
>> actually support this?
>>
>> > 20 січ. 2016 р. о 12:05 Oleg Gelbukh 
>> написав(ла):
>> >
>> > Hi,
>> >
>> > Currently we're experiencing issues with Python dependencies of our
>> package (fuel-octane), specifically between fuelclient's dependencies and
>> keystoneclient dependencies.
>> >
>> > New keystoneclient is required to work with the new version of Nailgun
>> due to introduction of SSL in the latter. On the other hand, fuelclient is
>> released along with the main release of Fuel, and the latest version
>> available from PyPI is 7.0.0, and it has very old dependencies (based on
>> packages available in centos6/python26).
>> >
>> > The solution I'd like to propose is to release beta version of
>> fuelclient (8.0.0b1) with updated requirements ASAP. With --pre flag to
>> pip/tox, this will allow to run unittests against the proper set of
>> requirements. On the other hand, it will not break the users consuming the
>> latest stable (7.0.0) version with old requirements from PyPI.
>> >
>> > Please, share your thoughts and considerations. If no objections, I
>> will create a corresponding bug/blueprint against fuelclient to be fixed in
>> the current release cycle.
>> >
>> > --
>> > Best regards,
>> > Oleg Gelbukh
>> > Mirantis
>> >
>> >


[openstack-dev] [all] [devstack] command tox with error in mitaka

2016-04-14 Thread zhaolihuisky
Hi everyone
Install devstack by http://docs.openstack.org/developer/devstack/
Install tox by docs.openstack.org/project-team-guide/project-setup/python.html
On running 'tox', the error log is:
pip-missing-reqs runtests: PYTHONHASHSEED='952003835'
pip-missing-reqs runtests: commands[0] | pip-missing-reqs -d 
--ignore-file=nova/tests/* --ignore-file=nova/test.py nova
Traceback (most recent call last):
  File ".tox/pip-missing-reqs/bin/pip-missing-reqs", line 7, in <module>
    from pip_missing_reqs.find_missing_reqs import main
  File "/opt/stack/nova/.tox/pip-missing-reqs/local/lib/python2.7/site-packages/pip_missing_reqs/find_missing_reqs.py", line 14, in <module>
    from pip.utils import get_installed_distributions, normalize_name
ImportError: cannot import name normalize_name
ERROR: InvocationError: '/opt/stack/nova/.tox/pip-missing-reqs/bin/pip-missing-reqs -d --ignore-file=nova/tests/* --ignore-file=nova/test.py nova'
summary
ERROR:   py34: could not install deps [-r/opt/stack/nova/test-requirements.txt]; v = InvocationError('/opt/stack/nova/.tox/py34/bin/pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r/opt/stack/nova/test-requirements.txt (see /opt/stack/nova/.tox/py34/log/py34-1.log)', 1)
ERROR:   py27: commands failed
  functional: commands succeeded
  pep8: commands succeeded
ERROR:   pip-missing-reqs: commands failed

Is there any suggestion?
Best Regards,
zhaolihui


Re: [openstack-dev] [horizon] Set CSS style for each cell in Datatable according to the value

2016-04-14 Thread Itxaka Serrano Garcia
If I'm reading the code correctly, you can provide a cell_class on your 
Table class in the Meta.


So you could create your custom Cell class that inherits from the normal 
Cell class (from horizon.tables.base import Cell) and, based on the 
data, change the CSS before rendering?
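Roughly, that idea could look like the sketch below. The value-to-class mapping and the subclass hook are assumptions for illustration only, not Horizon's actual API — check horizon.tables.base.Cell for the real extension points:

```python
# Illustrative sketch: map a cell's raw value to a CSS class before rendering.
STATUS_CSS = {'available': 'status-green', 'error': 'status-red'}


def css_class_for(value, default='status-neutral'):
    """Pick a CSS class based on the cell's value."""
    return STATUS_CSS.get(value, default)


# In Horizon one would then subclass Cell along these lines (pseudo-code,
# hook name is hypothetical):
#
#     from horizon.tables.base import Cell
#
#     class StatusAwareCell(Cell):
#         def get_classes(self):
#             return super(StatusAwareCell, self).get_classes() + \
#                 ' ' + css_class_for(self.data)
#
# and wire it up with ``cell_class = StatusAwareCell`` in the table's Meta.
```

The point is that the decision runs per cell at render time, so a green icon for 'available' and a red one for 'error' fall out of a single mapping instead of per-column classes.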





On 04/14/2016 10:44 AM, 严超 wrote:

Hi, Everyone:
     Is there a possible way to set CSS style for *each cell* in
*DataTable* according to the value of the cell? For example, if the
cell value is *'available'* then the CSS should display a *green icon*,
else if the cell value is *'error'* then the CSS should display a *red icon*.
     What I found is the horizon.tables.Column option:

classes

     An iterable of CSS classes which should be added to this
column. Example: classes=('foo', 'bar').

     But this sets the style for the whole column.
     Is there a possible way to set CSS style for *each cell* respectively?
     I'm very grateful for answering.

Best Regards!
Chao Yan
About me: http://about.me/chao_yan
My twitter: @yanchao727
My Weibo: http://weibo.com/herewearenow




Re: [openstack-dev] [Tricircle] Error runnig py27

2016-04-14 Thread Khayam Gondal
Hi joehuang,
by removing self it is showing following error.

  File "/home/khayam/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1206, in _importer
    thing = __import__(import_path)
ImportError: No module named app

Ran 135 tests in 2.994s (+ 0.940s)
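For what it's worth, mock.patch() resolves its target string by importing everything up to the last dot, so a target like 'self.app.post_json' (or a bare 'app.…') only works if that prefix is an importable module path. Below is a self-contained sketch — App and TestHolder are invented stand-ins, not Tricircle code — showing how mock.patch.object on the instance attribute sidesteps the import entirely. (Separately, note that self.assertRaise in the original snippet should be self.assertRaises.)

```python
from unittest import mock  # on Python 2, use the external 'mock' package


class App(object):
    """Stand-in for the webtest app used in the tests."""
    def post_json(self, url, body):
        return 'real response'


class TestHolder(object):
    def __init__(self):
        self.app = App()


test = TestHolder()

# mock.patch('self.app.post_json') raises "ImportError: No module named self"
# because patch() tries to import the 'self.app' dotted path. Patching the
# attribute directly on the object avoids any import:
with mock.patch.object(test.app, 'post_json', return_value='mocked') as m:
    result = test.app.post_json('/v1.0/pods', dict(pod=None))

m.assert_called_once_with('/v1.0/pods', dict(pod=None))
print(result)  # -> mocked
```

The other working option is to pass mock.patch() the full importable path of whatever module actually defines the attribute; inside a test method, mock.patch.object(self.app, 'post_json') is usually the simpler of the two.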


On Thu, Apr 14, 2016 at 5:53 AM, joehuang  wrote:

> Hi, Khayam,
>
>
>
> @mock.patch('self.app.post_json')
>
>
>
> No “self.” needed.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> From: Khayam Gondal [mailto:khayam.gon...@gmail.com]
> Sent: Wednesday, April 13, 2016 2:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: joehuang; Zhiyuan Cai
> Subject: [Tricircle] Error runnig py27
>
>
>
> Hi, I am writing a test for an exception. Following is my testing function:
>
> @mock.patch('self.app.post_json')
> def test_post_exp(self, mock_get, mock_http_error_handler):
>     mock_response = mock.Mock()
>     mock_response.raise_for_status.side_effect = db_exc.DBDuplicateEntry
>     mock_get.return_value = mock_response
>     mock_http_error_handler.side_effect = db_exc.DBDuplicateEntry
>     with self.assertRaise(db_exc.DBDuplicateEntry):
>         self.app.post_json(
>             '/v1.0/pods',
>             dict(pod=None),
>             expect_errors=True)
>
> But when I run tox -epy27 it shows:
>
>   File "/home/khayam/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1206, in _importer
>     thing = __import__(import_path)
> ImportError: No module named self
>
> Can someone guide me what's wrong here? I already have the latest versions of mock and python-dev installed.
>
>
>
>
>


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-14 Thread Bogdan Dobrelya
> On 04/11/2016 09:43 AM, Allison Randal wrote:
>>> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas  
>>> wrote:
 Reading unofficial notes [1], i found one topic very interesting:
 One Platform – How do we truly support containers and bare metal under
 a common API with VMs? (Ironic, Nova, adjacent communities e.g.
 Kubernetes, Apache Mesos etc)

 Anyone present at the meeting, please expand on those few notes on
 etherpad? And how if any this feedback is getting back to the
 projects?
>>
>> It was really two separate conversations that got conflated in the
>> summary. One conversation was just being supportive of bare metal, VMs,
>> and containers within the OpenStack umbrella. The other conversation
>> started with Monty talking about his work on shade, and how it wouldn't
>> exist if more APIs were focused on the way users consume the APIs, and
>> less an expression of the implementation details of each project.
>> OpenStackClient was mentioned as a unified CLI for OpenStack focused
>> more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
>> but falls in the same general category of work.)
>>
>> i.e. There wasn't anything new in the conversation, it was more a matter
>> of the developers/TC members on the board sharing information about work
>> that's already happening.
> 
> I agree with that - but would like to clarify the 'bare metal, VMs and 
> containers' part a bit (and in fact, I was concerned in the meeting that 
> the messaging around this would be confusing, because 'supporting bare 
> metal' and 'supporting containers' mean two different things but we use 
> one phrase to talk about both).
> 
> It's abundantly clear at the strategic level that having OpenStack be 
> able to provide both VMs and Bare Metal as two different sorts of 
> resources (ostensibly but not prescriptively via nova) is one of our 
> advantages. We wanted to underscore how important it is to be able to do 
> that, and wanted to underscore that so that it's really clear how 
> important it is any time the "but cloud should just be VMs" sentiment 
> arises.
> 
> The way we discussed "supporting containers" was quite different and was 
> not about nova providing containers. Rather, it was about reaching out 
> to our friends in other communities and working with them on making 
> OpenStack the best place to run things like kubernetes or docker swarm. 
> Those are systems that ultimately need to run, and it seems that good 
> integration (like kuryr with libnetwork) can provide a really strong 
> story. I think pretty much everyone agrees that there is not much value 
> to us or the world for us to compete with kubernetes or docker.

Let me quote exactly here and summarize the proposals mentioned in this
thread (as I understood them):

1. TOSCA YAML service templates [0], or [1], or suchlike to define
unified workloads (BM/VM/lightweight) and placement strategies as well.
Those templates are generated either by users directly or by projects like
Solum and Trove shipping Apps-as-a-Service, or by Kolla, TripleO, Fuel and
others - to deploy OpenStack services as well.

[0]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html
[1] https://review.openstack.org/#/c/210549/15/specs/super-scheduler.rst

2. Heat-translator [2] (or a New Project?) to present the templates as
Heat Orchestration Templates (HOT)

[2] https://github.com/openstack/heat-translator

3. Heat (or TOSCA translator, or...) to translate the HOTs (into API
calls?) and orchestrate the workloads placement to the reworked cloud
workloads schedulers of Nova [3], Magnum, Ironic, Neutron/Kuryr for SDN,
Cinder/Swift/Ceph for volume mounts and images, then down the road to
their BM/VM/lightweight-container drivers nova.virt.ironic,
nova-docker/hypernova, kubernetes/mesos/swarm and the like.

[3] https://review.openstack.org/#/c/183837/4

4. At this point, here they are - unified workloads running shiny on top
of OpenStack.

So the question is, do we really need a unified API, or rather unified
(TOSCA YAML) templates and a translator to *reworked* local APIs?

By the way, this flow clearly illustrates why there is no collisions
between the cp spec [1] and related Nova API reworking spec [3]. Those
are just different parts of the whole picture.

> 
> So, we do want to be supportive of bare metal and containers - but the 
> specific _WAY_ we want to be supportive of those things is different for 
> each one.
> 
> Monty


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-14 Thread Nadya Shakhat
Hi Gordon,

I'd like to add some clarifications and comments.

this is not entirely accurate. Pre-polling change, the polling agents
> published one message per sample; now the polling agents publish one
> message per interval (multiple samples).

Looks like there is some misunderstanding here. In the code, there is a
"batch_polled_samples" option. You can switch it off and get the result you
described, but it is True by default. See
https://github.com/openstack/ceilometer/blob/master/ceilometer/agent/manager.py#L205-L211
.

You wrote:

> the polling change is not related to coordination work in notification.
> the coordination work was to handle HA / multiple notification agents.
> regardless polling change, this must exist.

and

> transformers are already optional. they can be removed from
> pipeline.yaml if not required (and thus coordination can be disabled).


So, coordination is needed only to support transformations. The polling
change does relate to this because it has brought additional transformations
onto the notification agent side. I suggest paying attention to the existing
use cases. In real life, people use transformers for polling-based metrics
only. The most important use case for transformation is Heat autoscaling,
which is usually based on cpu_util. Before Liberty, we were able to support
the autoscaling use case without using coordination for the notification
agent; in Liberty we cannot support it without Redis. Now "transformers are
already optional", that's true. But I think it's better to add a restriction
like "we don't support transformations for notifications" and have
transformers optional on the polling agent only, instead of introducing such
comprehensive coordination.
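For readers less familiar with the autoscaling path: cpu_util is produced by a rate-of-change transformer over the cumulative cpu meter (nanoseconds of CPU time). That transform needs the previous sample as state, which is exactly why multiple notification agents need coordination to see consecutive samples of the same resource. A simplified sketch of the math (illustrative field names, not Ceilometer's actual code):

```python
def cpu_util(prev, curr, vcpus=1):
    """Simplified rate-of-change transform: cumulative CPU ns -> % util."""
    cpu_delta_ns = curr['cpu_ns'] - prev['cpu_ns']
    elapsed_s = curr['timestamp'] - prev['timestamp']
    return 100.0 * cpu_delta_ns / (elapsed_s * 1e9 * vcpus)


# A VM that burned 5s of CPU over a 10s window on one vCPU is at 50%:
print(cpu_util({'cpu_ns': 0, 'timestamp': 0},
               {'cpu_ns': int(5e9), 'timestamp': 10}))  # -> 50.0
```

Because the function is stateful across samples, splitting a resource's sample stream across uncoordinated agents would silently break the rate computation — hence the Redis requirement once transformers moved to the notification side.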

> IPC is one of the
> standard use cases for message queues. the concept of using queues to
> pass around and distribute work is essentially what it's designed for.
> if rabbit or any message queue service can't provide this function, it
> does worry me.


I see your point here, but Ceilometer aims to take care of OpenStack and
monitor its state. Right now it is known as a "Rabbit killer". We cannot
ignore that if we want anybody to use Ceilometer.


Also, I'd like to copy-paste Chris's ideas from the previous message:

> Are the options the following?
> * Do what you suggest and pull transformers back into the pollsters.
>   Basically revert the change. I think this is the wrong long term
>   solution but might be the best option if there's nobody to do the
>   other options.
> * Implement a pollster.yaml for use by the pollsters and consider
>   pipeline.yaml as the canonical file for the notification agents as
>   that's where the actual _pipelines_ are. Somewhere in there kill
>   interval as a concept on pipeline side.
>   This of course doesn't address the messaging complexity. I admit
>   that I don't understand all the issues there but it often feels
>   like we are doing that aspect of things completely wrong, so I
>   would hope that before we change things there we consider all the
>   options.

I think that the two types of agents should have two different pipeline
descriptions, but I still think that a "pipeline" should be described and
fully applied on both types of agents: on polling agents it should stay the
same as it is now; on notification agents, remove interval and drop
transformations altogether. Chris, I see your point about "long term", but
I'm afraid that "long term" may never happen...


> What else?
> One probably crazy idea: What about figuring out the desired end-meters
> of common transformations and making them into dedicated pollsters?
> Encapsulating that transformation not at the level of the polling
> manager but at the individual pollster.


Your "crazy idea" may work, at least for restoring the autoscaling
functionality.

Thanks,
Nadya

On Wed, Apr 13, 2016 at 9:25 PM, gordon chung  wrote:

> hi Nadya,
>
> copy/pasting full original message with comments inline to clarify some
> comments.
>
> i think a lot of the confusion is because we use pipeline.yaml across
> both polling and notification agents when really it only applies to the
> latter. just an fyi, we've had an open work item to create a
> polling.yaml file... just the issue of 'resources'.
>
> > Hello colleagues,
> >
> > I'd like to discuss one question with you. Perhaps, you remember that
> > in Liberty we decided to get rid of transformers on polling agents [1].
> > I'd like to describe several issues we are facing now because of this
> > decision.
> > 1. pipeline.yaml inconsistency.
> > Ceilometer pipeline consists of two basic things: source and
> > sink. In source, we describe how to get data, in sink - how to deal with
> > the data. After the refactoring described in [1], on polling agents we
> > apply only the "source" definition, on notification agents we apply only
> > the "sink" one. It causes the problems described in the mailing thread
> > [2]: the "pipe" concept is actually broken. To make it work more or less
> > correctly, the
