Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Vikas Choudhary
Thanks Banix. Actually, after noticing this patch I pushed my changes on top
of it, linked with a dependency, and that worked.



On Fri, Jul 1, 2016 at 11:14 PM, Mohammad Banikazemi  wrote:

> Did this resolve the problem? https://review.openstack.org/#/c/336549/
>
>
> From: Vikas Choudhary 
> To: Antoni Segura Puimedon 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, Mohammad Banikazemi/Watson/IBM@IBMUS,
> Gal Sagie , Irena Berezovsky 
> Date: 07/01/2016 11:29 AM
> Subject: Re: [kuryr] kuryr-libnetwork split
> --
>
>
>
> Hi Toni,
>
> There seems to be some problem. I cloned kuryr-libnetwork:
>
>    git clone http://github.com/openstack/kuryr-libnetwork.git
>
>
>
> But when I pushed a patch, gerrit is showing the project as "openstack/kuryr".
>
> PTAL, https://review.openstack.org/#/c/336617/
>
>
>
> -Vikas
>
> On Fri, Jul 1, 2016 at 6:40 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>Hi fellow kuryrs!
>
>    In order to proceed with the split of kuryr into a main lib and its
>kuryr libnetwork component, we've cloned the contents of openstack/kuryr
>over to openstack/kuryr-libnetwork.
>
>The idea is that after this, the patches that will go to
>openstack/kuryr will be to trim out the kuryr/kuryr-libnetwork specific
>parts and make a release of the common parts so that
>openstack/kuryr-libnetwork can start using it.
>
>I propose that we use python namespaces and the current common code in
>kuryr is moved to:
>kuryr/lib/
>
>
>which openstack/kuryr-libnetwork would import like so:
>
>from kuryr.lib import binding
>
>So, right now, patches in review that are for the Docker ipam or
>remote driver, should be moved to openstack/kuryr-libnetwork and soon we
>should make openstack/kuryr-libnetwork add kuryr-lib to the requirements.
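>
>    As a rough sketch, a setuptools namespace package for that layout could
>    be declared as below (an illustration only, not taken from the actual
>    patches; note the namespace-package caveats raised later in this digest):
>
>        # kuryr-lib setup.py (sketch)
>        import setuptools
>
>        setuptools.setup(
>            name='kuryr-lib',
>            version='0.1.0',
>            namespace_packages=['kuryr'],
>            packages=['kuryr', 'kuryr.lib'],
>        )
>
>        # kuryr/__init__.py must then contain only:
>        # __import__('pkg_resources').declare_namespace(__name__)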
>
>Regards,
>
>Toni
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Build the docker images in a graceful way

2016-07-01 Thread Jeffrey Zhang
Hi all,

the spec is here[0]

[0] https://review.openstack.org/336757

On Wed, Jun 29, 2016 at 9:44 PM, Jeffrey Zhang 
wrote:

>
> On Wed, Jun 29, 2016 at 9:26 PM, Gerard Braad  wrote:
>
>> Although I saw the Ansible Container repo, I wasn't that excited about
>> it at the moment. It still feels complicated when compared to how Chef
>> describes a container. However, it is still promising.
>>
>
> Ansible Container is more complicated. I do not mean to use it
> directly, but rather to borrow the concept: replace the raw scripts in
> the Dockerfile with an ansible-playbook run. The Dockerfile then becomes
> very simple, and the main logic is enclosed in the Ansible playbooks.
>
> Just like what your openstack client container does.
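>
> A minimal sketch of that pattern (the base image and playbook names here
> are illustrative, not from an actual Kolla patch):
>
>     FROM ubuntu:14.04
>     COPY playbooks/ /tmp/playbooks/
>     # a single RUN statement keeps this to one layer; the real logic
>     # lives in the playbook rather than in raw shell scripts
>     RUN apt-get update \
>         && apt-get install -y ansible \
>         && ansible-playbook -i localhost, -c local /tmp/playbooks/install.yml \
>         && apt-get purge -y ansible \
>         && rm -rf /tmp/playbooks /var/lib/apt/lists/*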
>
>
>>
>> One of the reasons I did not choose to use it either is that automated
>> builds only handle Dockerfiles. I'd rather run a script in a single
>> statement to ensure it is one layer, and inside this script use Ansible
>> to orchestrate when necessary.
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][stable] liberty periodic bitrot jobs have been failing more than a week

2016-07-01 Thread Ravi, Goutham
Thanks Matt. 

https://review.openstack.org/#/c/334220 adds the upper constraints. 

--
Goutham


On 7/1/16, 5:08 PM, "Matt Riedemann"  wrote:

The manila periodic stable/liberty jobs have been failing for at least a 
week.

It looks like manila isn't using upper-constraints when running unit 
tests, not even on stable/mitaka or master. So in liberty it's pulling 
in uncapped oslo.utils even though the upper constraint for oslo.utils 
in liberty is 3.2.

Who from the manila team is going to be working on fixing this, either 
via getting upper-constraints in place in the tox.ini for manila (on all 
supported branches) or performing some kind of workaround in the code?

-- 

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-dev] The QoS feature minimum guaranteed bandwidth of OpenvSwitch not work

2016-07-01 Thread Ben Pfaff
On Fri, Jul 01, 2016 at 03:40:30AM +, Xiao Ma (xima2) wrote:
> I want to use the QoS feature of OpenvSwitch to control the bandwidth based
> on the vlan id (Scene 1) or port id (Scene 2).
> So I deployed it as shown below, configured the QoS rules and the flows, and
> used the iperf tool to test it.
> But the result is disappointing.

Are you confident that this is actually an OVS problem and not a problem
with the Linux kernel HTB code?  The FAQ says:

### Q: I configured QoS, correctly, but my measurements show that it isn't
   working as well as I expect.

A: With the Linux kernel, the Open vSwitch implementation of QoS has
   two aspects:

   - Open vSwitch configures a subset of Linux kernel QoS
 features, according to what is in OVSDB.  It is possible that
 this code has bugs.  If you believe that this is so, then you
 can configure the Linux traffic control (QoS) stack directly
 with the "tc" program.  If you get better results that way,
 you can send a detailed bug report to b...@openvswitch.org.

 It is certain that Open vSwitch cannot configure every Linux
 kernel QoS feature.  If you need some feature that OVS cannot
 configure, then you can also use "tc" directly (or add that
 feature to OVS).

   - The Open vSwitch implementation of OpenFlow allows flows to
 be directed to particular queues.  This is pretty simple and
 unlikely to have serious bugs at this point.

   However, most problems with QoS on Linux are not bugs in Open
   vSwitch at all.  They tend to be either configuration errors
   (please see the earlier questions in this section) or issues with
   the traffic control (QoS) stack in Linux.  The Open vSwitch
   developers are not experts on Linux traffic control.  We suggest
   that, if you believe you are encountering a problem with Linux
   traffic control, that you consult the tc manpages (e.g. tc(8),
   tc-htb(8), tc-hfsc(8)), web resources (e.g. http://lartc.org/), or
   mailing lists (e.g. http://vger.kernel.org/vger-lists.html#netdev).
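
For reference, a minimal HTB setup applied directly with "tc", as the FAQ
suggests trying (the interface name and rates below are placeholders, not
taken from the reported configuration):

    # root HTB qdisc; unclassified traffic falls into class 1:10
    tc qdisc add dev eth0 root handle 1: htb default 10
    # one guaranteed-rate class and one capped class
    tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit ceil 1000mbit
    tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit ceil 50mbit
    # watch per-class counters while iperf runs
    tc -s class show dev eth0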

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Python35 Jobs coming

2016-07-01 Thread Doug Hellmann
Excerpts from Clark Boylan's message of 2016-07-01 12:51:52 -0700:
> The infra team is working on taking advantage of the new Ubuntu Xenial
> release including running unittests on python35. The current plan is to
> get https://review.openstack.org/#/c/336272/ merged next Tuesday (July
> 5, 2016). This will add non voting python35 tests restricted to >=
> master/Newton on all projects that had python34 testing.
> 
> The expectation is that in many cases python35 tests will just work if
> python34 testing was also working. If this is the case for your project
> you can propose a change to openstack-infra/project-config to make these
> jobs voting against your project. You should only need to edit
> jenkins/jobs/projects.yaml and zuul/layout.yaml and remove the '-nv'
> portion of the python35 jobs to do this.
> 
> We do however expect that there will be a large group of failed tests
> too. If your project has a specific tox.ini py34 target to restrict
> python3 testing to a specific list of tests you will need to add a tox
> target for py35 that does the same thing as the py34 target. We have
> also seen bug reports against some projects whose tests rely on stable
> error messages from Python itself which isn't always the case across
> version changes so these tests will need to be updated as well.
> 
> Note this change will not add python35 jobs for cases where projects
> have special tox targets. This is restricted just to the default py35
> unittesting.
> 
> As always, let us know if you have questions,
> Clark
> 

This is good news. Python 3.4 is only supported for security updates at
this point [1], so we'll want to make it a priority to get 3.5 working
soon.

Doug

[1] https://mail.python.org/pipermail/python-announce-list/2016-June/011249.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-07-01 Thread Armando M.
On 30 June 2016 at 10:55, HU, BIN  wrote:

> I see, and thank you very much Dan. Also thank you Markus for unreleased
> release notes.
>
> Now I understand that it is not a plugin and unstable interface. And there
> is a new "use_neutron" option for configuring Nova to use Neutron as its
> network backend.
>
> When we use Neutron, there are ML2 and ML3 plugins so that we can choose
> to use different backend providers to actually perform those network
> functions. For example, integration with ODL.
>

There's no such thing as ML3, not yet anyway, and nothing in the same shape
as ML2.


>
> Shall we foresee a situation where users can choose another network
> backend directly, e.g. ODL or ONOS? Under those circumstances, a stable
> plugin interface seems needed, which could provide end users with more
> options and flexibility in deployment.
>

The networking landscape is dramatically different from the one Nova
experiences and even though I personally share the same ideals and desire
to strive for interoperability across OpenStack clouds, the Neutron team is
generally more open to providing extensibility and integration points. One
of these integration points we currently have is the ML2 interface, which
is considered stable and to be used by third parties.

Bear in mind that we are trying to strike a better balance between the wild
west and tight control, so my suggestion would be to stay plugged in with
the Neutron community to get a sense of how things evolve over time. That
should help avoid surprises where you end up realizing that something you
relied on was taken away from you.


> What do you think?
>



>
> Thanks
> Bin
>
> -Original Message-
> From: Dan Smith [mailto:d...@danplanet.com]
> Sent: Thursday, June 30, 2016 10:30 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] Deprecated Configuration Option in
> Nova Mitaka Release
>
> > Just curious - what is the motivation of removing the plug-ability
> > entirely? Because of significant maintenance effort?
>
> It's not a plugin interface and has never been stable. We've had a
> long-running goal of removing all of these plug points where we don't
> actually expect people to write stable plugins.
>
> If you want to write against an unstable internal-only API and chase every
> little change we make to it, then just patch the code locally.
> Using these plug points is effectively the same thing.
>
> --Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Openstack Mitaka Neutron LBaaS Question

2016-07-01 Thread zhihao wang
Dear OpenStack Dev members:

May I ask you some questions about Neutron LBaaS?

How do I install Neutron LBaaS with Octavia in Mitaka? I followed these two
guides, but which one should I use? (My OpenStack is Mitaka: 1 controller, 2
compute nodes.)

https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun
  -- Ubuntu Packages Setup
http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html
  -- Configuring LBaaS v2 with Octavia

Here is what I did: pip install octavia, and then edited
/etc/neutron/neutron.conf:

service_plugins =
router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

[service_providers]
service_provider =
LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default

and /etc/openstack-dashboard/local_settings.py:

OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True
}

And then I restarted all the Neutron services and the Apache server:

service neutron-server restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart

But then when I ran the command "neutron agent-list", it returned this. I am
wondering what is wrong? How can I install Neutron LBaaS?

root@controller:~# neutron agent-list
Unable to establish connection to http://controller:9696/v2.0/agents.json
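
A generic first check for this symptom, since port 9696 is neutron-server
itself (the log path below assumes Ubuntu packaging), would be:

    service neutron-server status
    # look for an import or config error from the new service_plugins entry
    tail -n 50 /var/log/neutron/neutron-server.log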

Please help.
Thanks so much.

Thanks,
Wally


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L2 Gateway] No meeting on July 4th

2016-07-01 Thread Sukhdev Kapur
Folks,

Due to the US holiday on the 4th of July, there will be no meeting for L2 Gateway.

The channel will be available for anybody to use, should you need to
discuss anything during the meeting's time slot.

Happy 4th..

-Sukhdev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Neutron] No Ironic-Neutron Integration meeting on July 4th

2016-07-01 Thread Sukhdev Kapur
Folks,

Due to the US holiday on the 4th of July, there will be no meeting on
Ironic-Neutron Integration.

The channel will be available for anybody to use, should you need to
discuss anything during the meeting's time slot.

Happy 4th..

-Sukhdev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] OpenStack Trove Ocata Virtual Midcycle,

2016-07-01 Thread Mariam John

Hello Amrith,

  Just wanted to confirm that I can attend the virtual midcycle during the
proposed dates: July 26th-28th.

Thank you.

Regards,
Mariam.




From:   Amrith Kumar 
To: "OpenStack Development Mailing List (not for usage questions)"
,
"openstack-operat...@lists.openstack.org"

Date:   06/25/2016 04:50 AM
Subject:[openstack-dev] [trove] OpenStack Trove Ocata Virtual Midcycle,




After we discussed and announced this mid-cycle, there has been some
feedback that (a) it would be better to hold the mid-cycle earlier, and (b)
NYC was not the most convenient location for all attendees.

Thanks for the feedback. Given that we are coming up on a holiday week (in
the US), and the N2 deadline in the week of July 18th, I propose that we
conduct the Trove Ocata midcycle as a virtual midcycle in the week of July
25th.

In the interest of time, I'd like all those who are able and interested in
attending to reply to this email so we can confirm this at the Trove
meeting on Wednesday.

 Trove Ocata Virtual Midcycle
 Date and Time: 4 hours each day, July 26, 27 and 28; 1300-1700 EDT
 (1200-1600 CDT, 1000-1400 PDT, 1700-2100 UTC)
 Location: Virtual midcycle, [likely] Google Hangouts with
 audio dial-in (telephone)

Thanks,

-amrith



> -Original Message-
> From: Amrith Kumar
> Sent: Wednesday, June 22, 2016 9:54 AM
> To: openstack-dev 
> Subject: OpenStack Trove Ocata Midcycle, NYC, August 25 and 26.
>
> The Trove midcycle will be held in midtown NYC, thanks to IBM for hosting
> the event, on August 25th and 26th.
>
> If you are interested in attending, please join the Trove meeting today,
> 2pm Eastern Time (#openstack-meeting-alt) and register at
> http://www.eventbrite.com/e/openstack-trove-ocata-midcycle-tickets-
> 26197358003.
>
> An etherpad for proposing sessions is at
> https://etherpad.openstack.org/p/ocata-trove-midcycle
>
> This will be a two-day event (not three days as we have done in the past)
> so we will start early on 25th and go as late as we can on 26th
> recognizing that people who have to travel out of NYC may want to get
late
> flights (9pm, 10pm) on Friday.
>
> Thanks,
>
> -amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] Mid Cycle Hangout

2016-07-01 Thread Tripp, Travis S
At our last IRC meeting, we discussed that there were a few topics which could 
use some high bandwidth conversation, in particular the pipeline architecture 
[0] as well as reviewing progress for the release.  Since we are not having a 
face to face meetup which would exclude people unable to travel, we decided 
that we would try to have a video conference hangout sometime in the next few 
weeks.  We agreed to use doodle to find a time that works for people.  Below is 
the doodle poll:

- http://doodle.com/poll/stxrx9wxq38itnk3

Please expand the accordion to see all the date / time options.

[0] Pipeline architecture https://review.openstack.org/308824

Thank you,
Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila][stable] liberty periodic bitrot jobs have been failing more than a week

2016-07-01 Thread Matt Riedemann
The manila periodic stable/liberty jobs have been failing for at least a 
week.


It looks like manila isn't using upper-constraints when running unit 
tests, not even on stable/mitaka or master. So in liberty it's pulling 
in uncapped oslo.utils even though the upper constraint for oslo.utils 
in liberty is 3.2.


Who from the manila team is going to be working on fixing this, either 
via getting upper-constraints in place in the tox.ini for manila (on all 
supported branches) or performing some kind of workaround in the code?
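
For reference, the usual way projects wire upper-constraints into tox is
through the install command; the following is only a sketch, not necessarily
what the manila fix will look like:

    [testenv]
    install_command =
        pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}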


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Non-priority feature freeze and FFEs

2016-07-01 Thread Matt Riedemann
We're now past non-priority feature freeze. I've started going through 
some blueprints and -2ing them if they still have outstanding changes. I 
haven't gone through the full list yet (we started with 100).


I'm also building a list of potential FFE candidates based on:

1. How far along the change is (how ready is it?), e.g. does it require 
a lot of change yet? Does it require a Tempest test and is that passing 
already? How much of the series has already merged and what's left?


2. How much core reviewer attention has it already gotten?

3. What kind of priority does it have, i.e. if we don't get it done in 
Newton do we miss something in Ocata? Think things that start 
deprecation/removal timers.


The plan is for the nova core team to have an informal meeting in the 
#openstack-nova IRC channel early next week, either Tuesday or 
Wednesday, and go through the list of potential FFE candidates.


Blueprints that get exceptions will be checked against the above 
criteria and who on the core team is actually going to push the changes 
through.


I'm looking to get any exceptions completed within a week, so targeting 
Wednesday 7/13. That leaves a few days for preparing for the meetup.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Python35 Jobs coming

2016-07-01 Thread Clark Boylan
The infra team is working on taking advantage of the new Ubuntu Xenial
release including running unittests on python35. The current plan is to
get https://review.openstack.org/#/c/336272/ merged next Tuesday (July
5, 2016). This will add non voting python35 tests restricted to >=
master/Newton on all projects that had python34 testing.

The expectation is that in many cases python35 tests will just work if
python34 testing was also working. If this is the case for your project
you can propose a change to openstack-infra/project-config to make these
jobs voting against your project. You should only need to edit
jenkins/jobs/projects.yaml and zuul/layout.yaml and remove the '-nv'
portion of the python35 jobs to do this.

We do however expect that there will be a large group of failed tests
too. If your project has a specific tox.ini py34 target to restrict
python3 testing to a specific list of tests you will need to add a tox
target for py35 that does the same thing as the py34 target. We have
also seen bug reports against some projects whose tests rely on stable
error messages from Python itself which isn't always the case across
version changes so these tests will need to be updated as well.
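
If your project's py34 target does restrict the tests, a minimal way to
mirror it for py35 is tox's section substitution; a sketch only, to be
adapted to each project's tox.ini:

    [testenv:py35]
    basepython = python3.5
    commands = {[testenv:py34]commands}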

Note this change will not add python35 jobs for cases where projects
have special tox targets. This is restricted just to the default py35
unittesting.

As always, let us know if you have questions,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-01 Thread Ian Cordasco
-Original Message-
From: Jesse Pretorius 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: July 1, 2016 at 04:45:17
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] [openstack-ansible] When to purge the DB,
and when not to purge the DB?

> Hi everyone,
>
> In a recent conversation on the Operators list [1] there was a discussion 
> about purging
> archived data in the database. It would seem to me an important step in 
> maintaining an
> environment which should be done from time to time and perhaps at the very 
> least prior
> to a major upgrade.
>
> What’re the thoughts on how often this should be done? Should we include it 
> as an opt-in
> step, or an opt-out step?
>
> [1] 
> http://lists.openstack.org/pipermail/openstack-operators/2016-June/010813.html

Is OpenStack-Ansible now going to get into the minutae of operating
the entire cloud? I was under the impression that it left the easy
things to the operators (e.g., deciding when and how to purge the
database) while taking care of the things that are less obvious
(setting up OpenStack, and interacting with the database directly to
only set up things for the services).
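
For context, an example of the kind of maintenance step under discussion,
using nova's archiver purely as an illustration:

    # move soft-deleted rows into the shadow tables, one batch at a time
    nova-manage db archive_deleted_rows --max_rows 10000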

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [horizon] Security implications of exposing a keystone token to a JS client

2016-07-01 Thread Fox, Kevin M
Hi David,

How do you feel about the approach here:
https://review.openstack.org/#/c/311189/

It lets the existing AngularJS module:
horizon.app.core.openstack-service-api.keystone

access the current token via getCurrentUserSession().token

Thanks,
Kevin

From: David Stanek [dsta...@dstanek.com]
Sent: Friday, July 01, 2016 11:17 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [security] [horizon] Security implications of 
exposing a keystone token to a JS client

On 06/29 at 21:10, Timur Sufiev wrote:
> Hello, vigilant folks of OpenStack Security team!
>
> The commit(s) I'd like you to take a look at introduces a new Horizon
> feature, Create (Glance) Image using CORS (AKA Cross-Origin Resource
> Sharing) [1].
>
> The main idea is to bypass Horizon web-server when uploading large local
> image and to send it directly to Glance server, thus saving network
> bandwidth and disk space on the controller node where Horizon web-server is
> deployed. However there is one possible security trade-off I had to make so
> that Glance service would allow me to upload an image - I'm passing the
> Keystone token to the Horizon JS runtime [2], and then pass it to Glance
> service [3] or [4] (different links here correspond to different versions
> of new Create Image - Django and Angular). This trade-off made Horizon
> community somewhat hesitant if we should push these changes forward, but
> nobody yet voiced a viable alternative, so here I'm writing this letter to
> you.
>
> The usual Horizon workflow for working with Keystone tokens is the
> following: retrieve scoped token and put it into web-server session, which
> is itself not exposed to browser (unless SESSION_STORAGE signed_cookies
> backend was chosen, but even in that case session contents are encrypted in
> some way), but is kept on web-server and referenced using the session key
> which is kept in browser cookies - so one may say that in existing setup
> keystone token never leaks to browser.
>
> On the other hand, in some not so far (I hope) future, when more logic is
> moved to client-side UI (i.e. browser), the issue of browser authenticating
> to some OpenStack services directly would become more widespread, it just
> happened that this work on Create Image in Horizon is pioneering this area
> (AFAIK). So, what do you think of possible security implications of this
> setup?
>
> Just for the reference, three patches mentioned in [1-3] implement most of
> the logic of new Create Image feature.
>
> [1]
> https://blueprints.launchpad.net/horizon/+spec/horizon-glance-large-image-upload
> [2]
> https://review.openstack.org/#/c/317365/15/openstack_dashboard/api/glance.py@215
> [3]
> https://review.openstack.org/#/c/230434/37/horizon/static/horizon/js/horizon.modals.js@212
> [4]
> https://review.openstack.org/#/c/317456/16/openstack_dashboard/static/app/core/openstack-service-api/glance.service.js@151

Since tokens are bearer tokens any leak could possibly lead to a
security issue. I don't see allowing the JS application to have access
to the token as being a terrible thing.

We just need to make sure we do it as safely as we can in order to
prevent the token from lingering around after the web session has
completed. For example, putting the token in redirect URLs may cause
it to end up in browser history, putting it in the source of page
that could be cached may write it to disk, etc, etc.

--
David Stanek
web: http://dstanek.com
blog: http://traceback.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-01 Thread Matt Riedemann

On 7/1/2016 1:32 PM, Sean Dague wrote:

On 06/30/2016 08:31 AM, Andrew Laski wrote:



On Wed, Jun 29, 2016, at 11:11 PM, Matt Riedemann wrote:

On 6/29/2016 10:10 PM, Matt Riedemann wrote:

On 6/29/2016 6:40 AM, Andrew Laski wrote:




On Tue, Jun 28, 2016, at 09:27 PM, Zhenyu Zheng wrote:

How about I sync updated_at and created_at in my patch, and leave the
finish to the other BP? This way, I can use updated_at for the
timestamp filter I added, and it won't need to change again once the
finish BP is complete.


Sounds good to me.



It's been a long day so my memory might be fried, but the options we
talked about in the API meeting were:

1. Setting updated_at = created_at when the instance action record is
created. Laski likes this, I'm not crazy about it, especially since we
don't do that for anything else.


I would actually like for us to do this generally. I have the same
thinking as Ed does elsewhere in this thread, the creation of a record
is an update of that record. So take my comments as applying to Nova
overall and not just this issue.


Agree. Also it just simplifies a number of things. We should just start
doing this going forward, and probably put some online data migrations
in place next cycle to update all the old records. Once updated_at can't
be null, we can handle things like this a bit better.


2. Update the instance action's updated_at when instance action events
are created. I like this since the instance action is like a parent
resource and the event is the child, so when we create/modify an event
we can consider it an update to the parent. Laski thought this might be
weird UX given we don't expose instance action events in the REST API
unless you're an admin. This is also probably not something we'd do for
other related resources like server groups and server group members (but
we don't page on those either right now).


Right. My concern is just that the ordering of actions can change based
on events happening which are not visible to the user. However thinking
about it further we don't really allow multiple actions at once, except
for a few special cases like delete, so this may not end up affecting
any ordering as actions are mostly serial. I think this is a fine
solution for the issue at hand. I just think #1 is a more general
solution.



3. Order the results by updated_at,created_at so that if updated_at
isn't set for older records, created_at will be used. I think we all
agreed in the meeting to do this regardless of #1 or #2 above.


I kind of hate that as the order, because then the marker is going to
have to be really funny double timestamp, right?


Good point.



I guess that's the one thing I don't see in this patch is a functional
test that actually loads up instance actions and iterates through
demonstrating the pagination.

-Sean



If, by just setting updated_at = created_at for instance actions, we don't
need to order by updated_at,created_at or update the instance action when a
member event is created, then this is a lot simpler and I agree we should
just do that.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-01 Thread Doug Hellmann
Excerpts from Denis Makogon's message of 2016-06-26 15:56:14 +0300:
> Hello stackers.
> 
> 
> I know that some work is in progress to bring Python 3.4 compatibility to
> backend services, and it is a hard question to answer, but I'd like to
> know if there are any plans to support an asynchronous HTTP API client in
> the near future using aiohttp [1] (PEP-3156)?
> 
> If yes, could someone describe current state?

I'm not aware of anything like this -- that doesn't mean it isn't out
there, but I don't think any of our official project teams are looking
at it. I'd be interested in seeing a project like that started, though,
especially if it is done with some coordination with the existing SDK
team. Have you talked to them directly?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Jeremy Stanley
On 2016-07-01 14:18:13 -0400 (-0400), Doug Hellmann wrote:
[...]
> The "release:managed" tag used to convey information about how much
> the release team did for the project team in a way that was (we
> hoped) useful to consumers of the project. That included things we
> no longer do at all for anyone, like update bug milestones and
> upload artifacts to launchpad, as well as things that are now encoded
> in the other release tags like "perform the tagging of the release".
> 
> At the start of this cycle we updated the gerrit ACLs so that all
> projects using a cycle-with* release model *must* have the release
> team process their releases (if we have any such projects who we
> missed, or who were added later and not updated, we need to fix
> that).
[...]

I agree, that sounds like release:managed is now unnecessary, and the
bit of it which might still have been useful to track is now implicit
in the release:cycle-with* tags. Thanks for clarifying!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2016-07-01 18:42:47 +:
> 
> On 7/1/16, 11:18 AM, "Doug Hellmann"  wrote:
> 
> >Excerpts from Jeremy Stanley's message of 2016-07-01 13:50:30 +:
> >> On 2016-07-01 11:24:27 +0200 (+0200), Thierry Carrez wrote:
> >> > Short answer is: release:managed doesn't mean that much anymore (all
> >> > official projects are "managed"), so we'll likely retire it ASAP.
> >> [...]
> >> 
> >> If the meaning has been reduced to "this project is allowed to
> >> request tagging by the Release Management team" then I agree it's no
> >> longer necessary since any official project _can_ do that. If the
> >> meaning is "this project is _only_ allowed to be tagged by the
> >> Release Management team" then I can still see some use for it, since
> >> there are plenty of official projects that currently follow their
> >> own independent release process and push their own tags instead.
> >
> >I've been telling folks throughout this cycle that we weren't going to
> >add the "managed" tag to any new projects because we were considering
> >redefining the tag and we would want to do that first. While discussing
> >how to redefine it, we realized its meaning is now covered by other
> >tags, so we've proposed to drop it instead [1].
> >
> >The "release:managed" tag used to convey information about how much
> >the release team did for the project team in a way that was (we
> >hoped) useful to consumers of the project. That included things we
> >no longer do at all for anyone, like update bug milestones and
> >upload artifacts to launchpad, as well as things that are now encoded
> >in the other release tags like "perform the tagging of the release".
> >
> >At the start of this cycle we updated the gerrit ACLs so that all
> >projects using a cycle-with* release model *must* have the release
> >team process their releases (if we have any such projects who we
> >missed, or who were added later and not updated, we need to fix
> >that). Projects using the independent release model may process
> >their own releases or may ask the release team to do it. Either
> >way, since those projects are by definition not part of the cycle
> >releases we don't consider it "interesting" to their consumers to
> >say who is actually doing the releases (feedback on that assumption
> >is of course welcome).
> >
> >Doug
> >
> >[1] https://review.openstack.org/#/c/335440/
> 
> Doug,
> 
> Thanks for the response.  I understand where you're coming from; I had also
> thought the release:managed tag was meaningless at this point, but hey -
> it's a tag so I had planned to apply for it since we meet the criteria
> (which as far as I can tell means following the freeze model - everything
> else is enforced by the ACL change that happened at the end of Mitaka (or
> whenever that was)).
> 
> Since it is being removed, we won't apply.
> 
> Thanks for helping get to the bottom of this :)

Sure, and sorry about the delay -- it just happened that all of the
release team was out of touch this week for different reasons.

Doug

> 
> Regards
> -steve
> 
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Steven Dake (stdake)


On 7/1/16, 11:18 AM, "Doug Hellmann"  wrote:

>Excerpts from Jeremy Stanley's message of 2016-07-01 13:50:30 +:
>> On 2016-07-01 11:24:27 +0200 (+0200), Thierry Carrez wrote:
>> > Short answer is: release:managed doesn't mean that much anymore (all
>> > official projects are "managed"), so we'll likely retire it ASAP.
>> [...]
>> 
>> If the meaning has been reduced to "this project is allowed to
>> request tagging by the Release Management team" then I agree it's no
>> longer necessary since any official project _can_ do that. If the
>> meaning is "this project is _only_ allowed to be tagged by the
>> Release Management team" then I can still see some use for it, since
>> there are plenty of official projects that currently follow their
>> own independent release process and push their own tags instead.
>
>I've been telling folks throughout this cycle that we weren't going to
>add the "managed" tag to any new projects because we were considering
>redefining the tag and we would want to do that first. While discussing
>how to redefine it, we realized its meaning is now covered by other
>tags, so we've proposed to drop it instead [1].
>
>The "release:managed" tag used to convey information about how much
>the release team did for the project team in a way that was (we
>hoped) useful to consumers of the project. That included things we
>no longer do at all for anyone, like update bug milestones and
>upload artifacts to launchpad, as well as things that are now encoded
>in the other release tags like "perform the tagging of the release".
>
>At the start of this cycle we updated the gerrit ACLs so that all
>projects using a cycle-with* release model *must* have the release
>team process their releases (if we have any such projects who we
>missed, or who were added later and not updated, we need to fix
>that). Projects using the independent release model may process
>their own releases or may ask the release team to do it. Either
>way, since those projects are by definition not part of the cycle
>releases we don't consider it "interesting" to their consumers to
>say who is actually doing the releases (feedback on that assumption
>is of course welcome).
>
>Doug
>
>[1] https://review.openstack.org/#/c/335440/

Doug,

Thanks for the response.  I understand where you're coming from; I had also
thought the release:managed tag was meaningless at this point, but hey -
it's a tag so I had planned to apply for it since we meet the criteria
(which as far as I can tell means following the freeze model - everything
else is enforced by the ACL change that happened at the end of Mitaka (or
whenever that was)).

Since it is being removed, we won't apply.

Thanks for helping get to the bottom of this :)

Regards
-steve

>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-07-01 Thread Stephen Hindle
Maybe I missed it - but is there a way to provide site-specific
configurations?  Things we will run into in the wild include:
 - Configuring multiple non-OpenStack NICs
 - IPMI configuration
 - Password integration with corporate LDAP, etc.
 - Integration with existing SANs
 - Integration with existing corporate IPAM
 - Corporate security policy (firewall rules, sudo groups,
   hosts.allow, ssh configs, etc.)

That's just off the top of my head - I'm sure we'll run into others.  I
tend to think the best way to approach this is to allow some sort of
'bootstrap' role that could be populated by the operators.  This should
initially be empty (Kolla-specific 'bootstrap' actions should be in
another role) to prevent confusion.

We also have to be careful that kolla doesn't stomp on any non-kolla
configuration...


On Thu, Jun 30, 2016 at 12:43 PM, Mooney, Sean K
 wrote:
>
>
>> -Original Message-
>> From: Steven Dake (stdake) [mailto:std...@cisco.com]
>> Sent: Monday, June 27, 2016 9:21 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla +
>> BiFrost integration
>>
>>
>>
>> On 6/27/16, 11:19 AM, "Devananda van der Veen"
>> 
>> wrote:
>>
>> >At a quick glance, this sequence diagram matches what I
>> >envisioned/expected.
>> >
>> >I'd like to suggest a few additional steps be called out, however I'm
>> >not sure how to edit this so I'll write them here.
>> >
>> >
>> >As part of the installation of Ironic, and assuming this is done
>> >through Bifrost, the Actor should configure Bifrost for their
>> >particular network environment. For instance: what eth device is
>> >connected to the IPMI network; what IP ranges can Bifrost assign to
>> >physical servers; and so on.
>> >
>> >There are a lot of other options during the install that can be
>> >changed, but the network config is the most important. Full defaults
>> >for this roles' config options are here:
>> >
>> >https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifro
>> s
>> >t-i
>> >ronic-install/defaults/main.yml
>> >
>> >and documentation is here:
>> >
>> >https://github.com/openstack/bifrost/tree/master/playbooks/roles/bifro
>> s
>> >t-i
>> >ronic-install
>> >
>> >
>> >
>> >Immediately before "Ironic PXE boots..." step, the Actor must perform
>> >an action to "enroll" hardware (the "deployment targets") in Ironic.
>> >This could be done in several ways: passing a YAML file to Bifrost;
>> >using the Ironic CLI; or something else.
>> >
>> >
>> >"Ironic reports success to the bootstrap operation" is ambiguous.
>> >Ironic does not currently support notifications, so, to learn the
>> >status of the deployments, you will need to poll the Ironic API (eg,
>> >"ironic node-list").
>> >
>>
>> Great,
>>
>> Thanks for the feedback.  I'll integrate your changes into the sequence
>> diagram when I have a free hour or so - whenever that is :)
>>
>> Regards
>> -steve
> [Mooney, Sean K] I agree with most of Devananda's points and had come to
> similar conclusions.
>
> At a highlevel I think the workflow from 0 to cloud would be as follow.
> Assuming you have one linux system.
> - clone http://github.com/openstack/kolla && cd kolla
> - tools/kolla-host build-host-deploy
>   This will install ansible if not installed, then invoke a playbook to
>   install all build dependencies, generate kolla-build.conf,
>   passwords.yml and global.yml, and install the kolla python package.
> - configure kolla-build.conf as required
> - tools/build.py or kolla-build to build image
> - configure global.yml and/or a bifrost-specific file
>   This would involve specifying a file that can be used with the bifrost
>   dynamic inventory,
>   configuring the network interface for bifrost to use,
>   enabling ssh-key generation (or supplying a key to use when connecting
>   to the servers post deploy), and
>   configuring diskimage-builder options or supplying a path to a file on
>   the system to use as your OS image.
> - tools/kolla-host deploy-bifrost
>   Deploys bifrost container.
>   Copies images/keys
>   Bootstraps bifrost and start services.
> - tools/kolla-host deploy-servers
>   Invokes bifrost enroll-dynamic and deploy-dynamic, then polls until all
>   servers are provisioned or a server fails.
> - tools/kolla-hosts bootstrap-servers
>   Installs all kolla deploy dependencies (Docker etc.). This will also
>   optionally do things such as configuring hugepages, cpu isolation,
>   firewall settings, or any other platform-level config, for example
>   applying labels to ceph disks.
>   This role will reboot the remote server at the end of the role if
>   required, e.g. after installing the wily kernel on Ubuntu 14.04.
> - configure global.yml as normal
> - tools/kolla-ansible prechecks (this should now pass)
> - tools/kolla-ansible deploy
> - profit
>
> I think this largely agrees with the diagram you 

Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-01 Thread Sean Dague
On 06/30/2016 08:31 AM, Andrew Laski wrote:
> 
> 
> On Wed, Jun 29, 2016, at 11:11 PM, Matt Riedemann wrote:
>> On 6/29/2016 10:10 PM, Matt Riedemann wrote:
>>> On 6/29/2016 6:40 AM, Andrew Laski wrote:



 On Tue, Jun 28, 2016, at 09:27 PM, Zhenyu Zheng wrote:
> How about I sync updated_at and created_at in my patch, and leave the
> finish to the other BP? This way, I can use updated_at for the
> timestamp filter I added, and it won't need to change again once the
> finish BP is complete.

 Sounds good to me.

>>>
>>> It's been a long day so my memory might be fried, but the options we
>>> talked about in the API meeting were:
>>>
>>> 1. Setting updated_at = created_at when the instance action record is
>>> created. Laski likes this, I'm not crazy about it, especially since we
>>> don't do that for anything else.
> 
> I would actually like for us to do this generally. I have the same
> thinking as Ed does elsewhere in this thread, the creation of a record
> is an update of that record. So take my comments as applying to Nova
> overall and not just this issue.

Agree. Also it just simplifies a number of things. We should just start
doing this going forward, and probably put some online data migrations
in place next cycle to update all the old records. Once updated_at can't
be null, we can handle things like this a bit better.

>>> 2. Update the instance action's updated_at when instance action events
>>> are created. I like this since the instance action is like a parent
>>> resource and the event is the child, so when we create/modify an event
>>> we can consider it an update to the parent. Laski thought this might be
>>> weird UX given we don't expose instance action events in the REST API
>>> unless you're an admin. This is also probably not something we'd do for
>>> other related resources like server groups and server group members (but
>>> we don't page on those either right now).
> 
> Right. My concern is just that the ordering of actions can change based
> on events happening which are not visible to the user. However thinking
> about it further we don't really allow multiple actions at once, except
> for a few special cases like delete, so this may not end up affecting
> any ordering as actions are mostly serial. I think this is a fine
> solution for the issue at hand. I just think #1 is a more general
> solution.
> 
>>>
>>> 3. Order the results by updated_at,created_at so that if updated_at
>>> isn't set for older records, created_at will be used. I think we all
>>> agreed in the meeting to do this regardless of #1 or #2 above.

I kind of hate that as the order, because then the marker is going to
have to be really funny double timestamp, right?

I guess that's the one thing I don't see in this patch is a functional
test that actually loads up instance actions and iterates through
demonstrating the pagination.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2016-07-01 13:50:30 +:
> On 2016-07-01 11:24:27 +0200 (+0200), Thierry Carrez wrote:
> > Short answer is: release:managed doesn't mean that much anymore (all
> > official projects are "managed"), so we'll likely retire it ASAP.
> [...]
> 
> If the meaning has been reduced to "this project is allowed to
> request tagging by the Release Management team" then I agree it's no
> longer necessary since any official project _can_ do that. If the
> meaning is "this project is _only_ allowed to be tagged by the
> Release Management team" then I can still see some use for it, since
> there are plenty of official projects that currently follow their
> own independent release process and push their own tags instead.

I've been telling folks throughout this cycle that we weren't going to
add the "managed" tag to any new projects because we were considering
redefining the tag and we would want to do that first. While discussing
how to redefine it, we realized its meaning is now covered by other
tags, so we've proposed to drop it instead [1].

The "release:managed" tag used to convey information about how much
the release team did for the project team in a way that was (we
hoped) useful to consumers of the project. That included things we
no longer do at all for anyone, like update bug milestones and
upload artifacts to launchpad, as well as things that are now encoded
in the other release tags like "perform the tagging of the release".

At the start of this cycle we updated the gerrit ACLs so that all
projects using a cycle-with* release model *must* have the release
team process their releases (if we have any such projects who we
missed, or who were added later and not updated, we need to fix
that). Projects using the independent release model may process
their own releases or may ask the release team to do it. Either
way, since those projects are by definition not part of the cycle
releases we don't consider it "interesting" to their consumers to
say who is actually doing the releases (feedback on that assumption
is of course welcome).

Doug

[1] https://review.openstack.org/#/c/335440/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [horizon] Security implications of exposing a keystone token to a JS client

2016-07-01 Thread David Stanek
On 06/29 at 21:10, Timur Sufiev wrote:
> Hello, vigilant folks of OpenStack Security team!
> 
> The commit(s) I'd like you to take a look at introduces a new Horizon
> feature, Create (Glance) Image using CORS (AKA Cross-Origin Resource
> Sharing) [1].
> 
> The main idea is to bypass Horizon web-server when uploading large local
> image and to send it directly to Glance server, thus saving network
> bandwidth and disk space on the controller node where Horizon web-server is
> deployed. However there is one possible security trade-off I had to make so
> that Glance service would allow me to upload an image - I'm passing the
> Keystone token to the Horizon JS runtime [2], and then pass it to Glance
> service [3] or [4] (different links here correspond to different versions
> of new Create Image - Django and Angular). This trade-off made Horizon
> community somewhat hesitant if we should push these changes forward, but
> nobody yet voiced a viable alternative, so here I'm writing this letter to
> you.
> 
> The usual Horizon workflow for working with Keystone tokens is the
> following: retrieve scoped token and put it into web-server session, which
> is itself not exposed to browser (unless SESSION_STORAGE signed_cookies
> backend was chosen, but even in that case session contents are encrypted in
> some way), but is kept on web-server and referenced using the session key
> which is kept in browser cookies - so one may say that in existing setup
> keystone token never leaks to browser.
> 
> On the other hand, in some not so far (I hope) future, when more logic is
> moved to client-side UI (i.e. browser), the issue of browser authenticating
> to some OpenStack services directly would become more widespread, it just
> happened that this work on Create Image in Horizon is pioneering this area
> (AFAIK). So, what do you think of possible security implications of this
> setup?
> 
> Just for the reference, three patches mentioned in [1-3] implement most of
> the logic of new Create Image feature.
> 
> [1]
> https://blueprints.launchpad.net/horizon/+spec/horizon-glance-large-image-upload
> [2]
> https://review.openstack.org/#/c/317365/15/openstack_dashboard/api/glance.py@215
> [3]
> https://review.openstack.org/#/c/230434/37/horizon/static/horizon/js/horizon.modals.js@212
> [4]
> https://review.openstack.org/#/c/317456/16/openstack_dashboard/static/app/core/openstack-service-api/glance.service.js@151

Since tokens are bearer tokens any leak could possibly lead to a
security issue. I don't see allowing the JS application to have access
to the token as being a terrible thing.

We just need to make sure we do it as safely as we can in order to
prevent the token from lingering around after the web session has
completed. For example, putting the token in redirect URLs may cause
it to end up in browser history, putting it in the source of page
that could be cached may write it to disk, etc, etc.

-- 
David Stanek
web: http://dstanek.com
blog: http://traceback.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-01 Thread Sean Dague
On 07/01/2016 10:37 AM, Matt Riedemann wrote:
> On 6/30/2016 11:10 AM, Chris Friesen wrote:
>>
>> For what it's worth, this is how the timestamps work for POSIX
>> filesystems. When you create a file it sets the access/modify/change
>> timestamps to the file creation time.
>>
>> Chris
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> That's a good point.

I would be +2 on setting updated == created on initial create across the
board in the system. I think people actually expect this because they
assume it's like unix time stamps, then get confused when they get None
back.
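
A minimal sketch of that pattern (generic SQLAlchemy, not Nova's actual
model code):

    from datetime import datetime

    from sqlalchemy import Column, DateTime, Integer
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class InstanceAction(Base):
        __tablename__ = 'instance_actions'
        id = Column(Integer, primary_key=True)
        created_at = Column(DateTime)
        updated_at = Column(DateTime, onupdate=datetime.utcnow)

        def __init__(self, **kwargs):
            super(InstanceAction, self).__init__(**kwargs)
            # the create *is* the first update: both stamps get the
            # same value, so updated_at is never None
            self.created_at = self.updated_at = datetime.utcnow()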

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2016-07-01 15:05:30 +:
> On 2016-07-01 08:26:13 -0500 (-0500), Monty Taylor wrote:
> [...]
> > Check with Doug Hellman about namespaces. We used to use them in some
> > oslo things and had to step away from them because of some pretty weird
> > and horrible breakage issues.
> [...]
> 
> Or read the associated Oslo spec from when that was done:
> 
>  https://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html
> 

Yes, please don't use python namespaces. It's a cool feature, as you
say, but the setuptools implementation available for Python 2 has some
buggy edge cases that we hit on a regular basis before moving back to
regular packages. It might be something we could look into again when
we're running only on Python 3, since at that point the feature is built
into the language.
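
For illustration, the difference is roughly this (a sketch; the layout
mirrors the kuryr proposal above):

    # Python 3.3+ native namespace package (PEP 420): just omit
    # __init__.py at the namespace level.
    #   kuryr/                <- no __init__.py, "kuryr" is a namespace
    #   kuryr/lib/__init__.py
    #
    # The setuptools-era equivalent that bit us on Python 2 needed an
    # explicit declaration in kuryr/__init__.py:
    __import__('pkg_resources').declare_namespace(__name__)
    # ...plus namespace_packages=['kuryr'] in each package's setup.py.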

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Mohammad Banikazemi

Did this resolve the problem? https://review.openstack.org/#/c/336549/



From:   Vikas Choudhary 
To: Antoni Segura Puimedon 
Cc: "OpenStack Development Mailing List (not for usage questions)"
, Mohammad
Banikazemi/Watson/IBM@IBMUS, Gal Sagie ,
Irena Berezovsky 
Date:   07/01/2016 11:29 AM
Subject:Re: [kuryr] kuryr-libnetwork split



Hi Toni,

There seems to be some problem. I cloned kuryr-libnetwork:
  git clone http://github.com/openstack/kuryr-libnetwork.git

But when I pushed a patch, gerrit is showing the project as "openstack/kuryr"

PTAL, https://review.openstack.org/#/c/336617/



-Vikas

On Fri, Jul 1, 2016 at 6:40 PM, Antoni Segura Puimedon  wrote:
  Hi fellow kuryrs!

  In order to proceed with the split of kuryr into a main lib and its
  kuryr libnetwork component, we've cloned the contents of openstack/kuryr
  over to openstack/kuryr-libnetwork.

  The idea is that after this, the patches that will go to openstack/kuryr
  will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
  release of the common parts so that openstack/kuryr-libnetwork can start
  using it.

  I propose that we use python namespaces and the current common code in
  kuryr is moved to:
  kuryr/lib/


  which openstack/kuryr-libnetwork would import like so:

      from kuryr.lib import binding

  So, right now, patches in review that are for the Docker ipam or remote
  driver, should be moved to openstack/kuryr-libnetwork and soon we should
  make openstack/kuryr-libnetwork add kuryr-lib to the requirements.

  Regards,

  Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate][charms] Synchronising domains with new bind9 server

2016-07-01 Thread Hayes, Graham
On 01/07/2016 18:04, Liam Young wrote:
> Hi,
>
> I'm trying to add a new bind9 pool_target to an existing pool. The
> problem is that the new bind server has no knowledge of the existing
> zones as it missed the addzone commands when the domains were created.
> It seems to me I have 3 options:
>
> 1) To sync the zone + nzf files from an existing bind9 pool_target

This is what we recommend for scaling bind.

We have a periodic task that will clean up zones that have been updated
in a certain time, so as long as the time it takes to sync the files
is less than that window (set in the config as `periodic_sync_seconds`
in the [service:pool_manager] section) there should be no issue - the
new server will just not have all the zones + records up to date.


> 2) Write a script to extract a list of domains for all tenants from
> designate and convert those into  "rndc addzone" commands targeted at
> the new unit
> 3) Some builtin designate method I've yet to discover?

`periodic_sync_seconds` is set to `None` by default, which means it will
check all zones, but this should be set lower for busy installs.
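
For anyone following along, that option lives in designate.conf and looks
roughly like this (the value below is illustrative, not a recommendation):

    [service:pool_manager]
    # Window of recent updates the periodic sync considers; None (the
    # default) means every pass checks all zones.
    periodic_sync_seconds = 21600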

> What would you recommend?
>
> I'm writing a designate and designate bind Juju charm and testing
> scaleout which is what caused me to trip over this issue. So for option
> 1 I'll need to synchronise a directory between Juju units of an
> application. Does anyone have a neat way of doing this?
> Thanks
> Liam


Thanks,

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [horizon] Security implications of exposing a keystone token to a JS client

2016-07-01 Thread Thai Q Tran
I am not sure if this is a valid concern. If I am using a CLI and someone gets access to my computer, they can do whatever they well please. If I am using Horizon and someone gets access, it's going to be the same story: they can still do damage even without knowing the token (at least until the web session or token expires). This is just the nature of using token-based authentication: if someone steals it, they will get access for a brief time to do whatever they want. So the notion that hiding the token from the front-end is somehow going to make it safer does not make sense to me.
 
 
- Original message -
From: Timur Sufiev
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:
Subject: [openstack-dev] [security] [horizon] Security implications of exposing a keystone token to a JS client
Date: Wed, Jun 29, 2016 2:11 PM
Hello, vigilant folks of OpenStack Security team!
 
The commit(s) I'd like you to take a look at introduces a new Horizon feature, Create (Glance) Image using CORS (AKA Cross-Origin Resource Sharing) [1]. 
 
The main idea is to bypass the Horizon web server when uploading a large local image and to send it directly to the Glance server, thus saving network bandwidth and disk space on the controller node where the Horizon web server is deployed. However, there is one possible security trade-off I had to make so that the Glance service would allow me to upload an image: I'm passing the Keystone token to the Horizon JS runtime [2], and then passing it to the Glance service [3] or [4] (the different links here correspond to different versions of the new Create Image - Django and Angular). This trade-off made the Horizon community somewhat hesitant about whether we should push these changes forward, but nobody has yet voiced a viable alternative, so here I'm writing this letter to you.
 
The usual Horizon workflow for working with Keystone tokens is the following: retrieve scoped token and put it into web-server session, which is itself not exposed to browser (unless SESSION_STORAGE signed_cookies backend was chosen, but even in that case session contents are encrypted in some way), but is kept on web-server and referenced using the session key which is kept in browser cookies - so one may say that in existing setup keystone token never leaks to browser.
 
On the other hand, in some not so far (I hope) future, when more logic is moved to client-side UI (i.e. browser), the issue of browser authenticating to some OpenStack services directly would become more widespread, it just happened that this work on Create Image in Horizon is pioneering this area (AFAIK). So, what do you think of possible security implications of this setup?
 
Just for the reference, three patches mentioned in [1-3] implement most of the logic of new Create Image feature.
 
[1] https://blueprints.launchpad.net/horizon/+spec/horizon-glance-large-image-upload
[2] https://review.openstack.org/#/c/317365/15/openstack_dashboard/api/glance.py@215
[3] https://review.openstack.org/#/c/230434/37/horizon/static/horizon/js/horizon.modals.js@212
[4] https://review.openstack.org/#/c/317456/16/openstack_dashboard/static/app/core/openstack-service-api/glance.service.js@151
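
For reference, the direct-to-Glance step being discussed is essentially the
Glance v2 image-data upload, which the browser would issue via XHR with the
token attached; in curl terms it is roughly (a sketch, placeholders
throughout):

    # $TOKEN, $GLANCE_URL and $IMAGE_ID are placeholders; the image
    # record is assumed to have been created already.
    curl -X PUT \
         -H "X-Auth-Token: $TOKEN" \
         -H "Content-Type: application/octet-stream" \
         --data-binary @image.qcow2 \
         "$GLANCE_URL/v2/images/$IMAGE_ID/file"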
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Monty Taylor
On 07/01/2016 12:05 PM, Jeremy Stanley wrote:
> On 2016-07-01 20:59:06 +0530 (+0530), Vikas Choudhary wrote:
>> There seems to be some problem. I cloned kuryr-libnetwork:
>>
>>> git clone http://github.com/openstack/kuryr-libnetwork.git
>>
>>
>> But when I pushed a patch, gerrit is showing the project as "openstack/kuryr"
>>
>> PTAL, https://review.openstack.org/#/c/336617/
> 
> There's a simple solution to that: correct the .gitreview file to
> reference the correct project name.
> 
> 
> http://git.openstack.org/cgit/openstack/kuryr-libnetwork/tree/.gitreview#n4
> 

Done and merged:

https://review.openstack.org/#/c/336549/

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Jeremy Stanley
On 2016-07-01 20:59:06 +0530 (+0530), Vikas Choudhary wrote:
> There seems to be some problem. I cloned kuryr-libnetwork:
> 
> > git clone http://github.com/openstack/kuryr-libnetwork.git
> 
> 
> But when I pushed a patch, gerrit is showing the project as "openstack/kuryr"
> 
> PTAL, https://review.openstack.org/#/c/336617/

There's a simple solution to that: correct the .gitreview file to
reference the correct project name.

http://git.openstack.org/cgit/openstack/kuryr-libnetwork/tree/.gitreview#n4
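
For anyone hitting the same thing, the corrected file would look roughly
like this (a sketch of the usual .gitreview layout):

    [gerrit]
    host=review.openstack.org
    port=29418
    project=openstack/kuryr-libnetwork.git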

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate][charms] Synchronising domains with new bind9 server

2016-07-01 Thread Liam Young
Hi,

I'm trying to add a new bind9 pool_target to an existing pool. The problem
is that the new bind server has no knowledge of the existing zones as it
missed the addzone commands when the domains were created. It seems to me
I have 3 options:

1) To sync the zone + nzf files from an existing bind9 pool_target
2) Write a script to extract a list of domains for all tenants from
designate and convert those into  "rndc addzone" commands targeted at the
new unit
3) Some builtin designate method I've yet to discover?

What would you recommend?

I'm writing a designate and designate bind Juju charm and testing scaleout
which is what caused me to trip over this issue. So for option 1 I'll need
to synchronise a directory between Juju units of an application. Does anyone
have a neat way of doing this?
Thanks
Liam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Daniel P. Berrange
On Fri, Jul 01, 2016 at 02:13:46PM +, Jeremy Stanley wrote:
> Have you considered just writing a throwaway devstack-gate change
> which overrides the gate_hook to run that one suspect Tempest test,
> say, a thousand times in a loop? Would be far more efficient if you
> don't need to be concerned with all the environment setup/teardown
> overhead.

Mmm, that's a possibility for initial reproducibility. We've now seen
that this looks like some kind of kernel / iscsi problem, so in this
particular case I think we really do need to setup/teardown fresh
machines to ensure a "sane" initial kernel state.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Daniel P. Berrange
On Fri, Jul 01, 2016 at 02:35:34PM +, Jeremy Stanley wrote:
> On 2016-07-01 15:39:10 +0200 (+0200), Kashyap Chamarthy wrote:
> > [Snip description of some nice debugging.]
> > 
> > > I'd really love it if there was
> > > 
> > >  1. the ability to request checking of just specific jobs eg
> > > 
> > >   "recheck gate-tempest-dsvm-multinode-full"
> > 
> > Yes, this would really be desirable.  I recall once asking this exact
> > question on #openstack-infra, but can't find Infra team's response to
> > that.
> 
> The challenge here is that you want to make sure it can't be used to
> recheck individual jobs until you have them all passing (like
> picking a pin-and-tumbler lock). The temptation to recheck-spam
> nondeterministically failing changes is already present, but this
> would make it considerably easier still for people to introduce new
> nondeterministic failures in projects. Maybe if it were tied to a
> special pipeline type, and then we set it only for the experimental
> pipeline or something?

If we don't want it to interfere with "normal" testing, then
perhaps just don't hook it under 'recheck'. Have a completely
separate command ('run-job blah') to trigger that has no influence
on the normal check status applied to a changeset, and reports it
separately too.

> > >  2. the ability to request this recheck to run multiple
> > > times in parallel. eg if i just repeat the 'recheck'
> > > command many times on the same patchset # without
> > > waiting for results
> > 
> > Yes, this too, would be _very_ useful for all the reasons you described.
> [...]
> 
> In the past we've discussed the option of having an "idle pipeline"
> which repeatedly runs specified jobs only when there are unused
> resources available, so that it doesn't significantly cut into our
> resource pool when we're under high demand but still allows us to
> automatically collect a large amount of statistical data.

Yep, that could work as long as the 'idle pipeline' did have some
kind of minimal throughput. Debugging some of these things can
be time-critical, so we don't necessarily want to wait for a
fully idle time period. IOW a 'mostly-idle pipeline' which would
run jobs at any time, but rate limit them to prevent it swamping
out the normal jobs.

> Anyway, hopefully James Blair can weigh in on this, since Zuul is
> basically in a feature freeze for a while to limit the number of
> significant changes we'll need to forward-port into the v3 branch.
> We'd want to discuss these new features in the context of Zuul v3
> instead.

Sure, that's no problem - I got lucky and reproduced the problem
this time around after a few rechecks. I just wanted to raise this
as a general request, since we've hit this scenario several times
in the past, so it'd be useful to have a more general solution in
the future, whenever that's practical.


Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime on 1 July 2016 at 20:00 UTC

2016-07-01 Thread Elizabeth K. Joseph
On Tue, Jun 21, 2016 at 1:11 PM, Elizabeth K. Joseph
 wrote:
> Hi everybody,
>
> On 1 July 2016 at 20:00 UTC Gerrit will be unavailable for
> approximately 120 minutes (2 hours) while we upgrade
> zuul.openstack.org and static.openstack.org to Ubuntu Trusty.
>
> During this time, running jobs will be stopped as both new servers for
> zuul.openstack.org and static.openstack.org will be brought online.
>
> If you have any questions about the maintenance, please reply here or
> contact us in #openstack-infra on freenode.

Just a quick reminder, the downtime to upgrade the Zuul server and
static.openstack.org is coming up in just under 4 hours, at 20:00 UTC.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] No meeting this Monday, July 4

2016-07-01 Thread Ed Leafe
Due to the US Independence Day holiday, we will be skipping the Nova Scheduler 
subteam meeting on this upcoming Monday, July 4. The next meeting will be July 
11 at 1400 UTC [0].

[0] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160711T14


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Live-Migration] Cross l2 agent migration and solving Nova-Neutron live migration bugs

2016-07-01 Thread Carl Baldwin
On Fri, Jul 1, 2016 at 8:34 AM, Murray, Paul (HP Cloud)  wrote:
>> -Original Message-
>> From: Carl Baldwin [mailto:c...@ecbaldwin.net]
>> Sent: 29 June 2016 22:20
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [Nova][Neutron][Live-Migration] Cross l2 agent
>> migration and solving Nova-Neutron live migration bugs
>>
>> On Tue, Jun 28, 2016 at 9:42 AM, Andreas Scheuring
>>  wrote:
>> > I'm currently working on solving Nova-Neutron issues during Live
>> > Migration. This mail is intended to raise awareness cross project and
>> > get things kicked off.
>>
>> Thanks for sending this.
>>
>> > The issues
>> > ==
>> >
>> > #1 When portbinding fails, instance is migrated but stuck in error
>> > state
>> > #2 Macvtap agent live Migration when source and target use different
>> > physical_interface_mapping [3]. Either the migration fails (good case)
>> > or it would place the instance on a wrong network (worst case)
>> > #3 (more a feature):  Migration cross l2 agent not possible (e.g.
>> > migrate from lb host to ovs host, or from ovs-hybrid to new
>> > ovsfirewall
>> > host)
>>
>> Since all of these issues are experienced when using live migration, a Nova
>> feature, I was interested in finding out how Nova prioritizes getting a fix 
>> and
>> then trying to align Neutron's priority to that priority.  It would seem 
>> artificial
>> to me to drive priority from Neutron.  That's why I mentioned it.  Since Nova
>> is hitting a freeze for non-priority work tomorrow, I don't think anything 
>> can
>> be done for Newton.  However, there is a lot of time to tee up this
>> conversation for Ocata if we get on it.
>>
>
> I read the comments from the neutron meeting here: 
> http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-06-30-22.00.log.html#l-320
>  and thought a little commentary on our priorities over the last couple of 
> cycles might avoid any misconception.
>
> Live migration was a priority in Mitaka because it simply was not up to 
> scratch for production use. The main objective was to make it work and make 
> it manageable. That translated to:
>
> 1. Expand CI coverage
> 2. Manage migrations: monitor progress, cancel migrations, force spinning 
> migrations to complete
> 3. Extend use cases: allow mix of volumes, shared storage and local disks to 
> be migrated
> 4. Some other things: simplify config and APIs, scheduling support, separate 
> migration traffic from management network
>
> These were mostly covered including some supporting work on qemu and libvirt 
> as well.
>
> We next wanted to do some security work (refactoring image backend and 
> removing ssh-based copy - aka storage pools) but that could not be done in 
> Mitaka and was deferred. The priority for Newton was specifically this 
> security work and continuing efforts on CI which is now making progress (ref: 
> the sterling work by Daniel Berrange in the last couple of days: 
> http://lists.openstack.org/pipermail/openstack-dev/2016-June/098540.html )

Paul, thanks for your reply, it is helpful.  I look forward to
discussing it with the Nova team at the mid-cycle.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Bilean][CloudKitty][Telemetry] Open discussion around Bilean and existing OpenStack components

2016-07-01 Thread Stéphane Albert
Hi,

I would like to continue the discussion that started in the [review][1]
for the Big-Tent integration of the project Bilean.

In the [review][1] the Bilean team stated that a new project was needed
to overcome limitations of existing components.
In this thread, I would like to have an open discussion about what
features are lacking in the available components and what needs to be
done to integrate the Bilean use case with the current components.

I'm not opposed to changes and new features in CloudKitty, and I'm
pretty sure that trigger-based billing can be integrated in CloudKitty's
codebase.

>From my perspective, CloudKitty team is a small team and having two
teams working on rating/billing is just scattering contributions and is
detrimental to both projects. It brings confusion to users minds about
what components should be used.

We can add this topic to our meeting agenda, so we can have a talk on
IRC.

I'm hoping we'll find a solution that benefits existing projects and
enables you to implement your trigger-based billing solution.

Cheers,
Stéphane

[1]: https://review.openstack.org/#/c/334350/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][calico] New networking-calico IRC meeting

2016-07-01 Thread Carl Baldwin
Sorry I missed it.  I will plan to listen in on the 12th.  It will be
nice to have the ICS available by then.  You should probably check
with the #openstack-infra channel to see what the issue is.

Carl

On Fri, Jul 1, 2016 at 5:31 AM, Neil Jerram  wrote:
> The first networking-calico IRC meeting was planned for 28th June, but I'm
> afraid I was unwell - so it will now be 12th July.  I've updated the agenda
> [2] accordingly.
>
> Neil
>
>
> On Mon, Jun 20, 2016 at 3:42 PM Neil Jerram  wrote:
>>
>> Calling everyone interested in networking-calico ...!  networking-calico
>> has been around in the Neutron stadium for a while now, and it's way past
>> time that we had a proper forum for discussing and evolving it - so I'm
>> happy to be finally proposing a regular IRC meeting slot for it: [1].  A
>> strawman initial agenda is up at [2].
>>
>> [1] https://review.openstack.org/#/c/331689/
>> [2] https://wiki.openstack.org/wiki/Meetings/NetworkingCalico
>>
>> Please do take a look and:
>>
>>  - let me know your views on the timing, either here or on the review
>>  - feel free to add items to the agenda.
>>
>>
>> Many thanks,
>>Neil
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Nominate Thomas Herve for Zaqar core

2016-07-01 Thread Victoria Martínez de la Cruz
Thomas's contributions have been very useful to Zaqar. Big +1!

2016-07-01 1:17 GMT-03:00 王玺源 :

> +1, Thomas did a great job on Zaqar. As we can see, many important features
> were done by him.
>
> Welcome, thomas.
>
> 2016-07-01 6:33 GMT+08:00 Emilien Macchi :
>
>> I'm not core but Thomas helped Puppet OpenStack group many times to
>> get Zaqar working in gate and we highly appreciate his help.
>> Way to go!
>>
>> On Thu, Jun 30, 2016 at 3:18 PM, Fei Long Wang 
>> wrote:
>> > Hi team,
>> >
>> > I would like to propose adding Thomas Herve(therve) for the Zaqar core
>> team.
>> > TBH, I had drafted this mail about 6 months ago; the reason you see this
>> > mail only now is because I wasn't sure if Thomas could dedicate his time
>> > to Zaqar (he is a very busy man). But as you see, I was wrong. He
>> > continually contributes a lot of high quality patches for Zaqar[1] and a
>> > lot of inspiring comments for this project and team. I'm sure he would
>> > make an excellent addition to the team. If no one objects, I'll proceed
>> > and add him in a week
>> >
>> > [1]
>> >
>> http://stackalytics.com/?module=zaqar-group=commits=all_id=therve
>> >
>> > --
>> > Cheers & Best regards,
>> > Fei Long Wang (王飞龙)
>> >
>> --
>> > Senior Cloud Software Engineer
>> > Tel: +64-48032246
>> > Email: flw...@catalyst.net.nz
>> > Catalyst IT Limited
>> > Level 6, Catalyst House, 150 Willis Street, Wellington
>> >
>> --
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Timofei Durakov
This option is already available for the live-migration job, as its hooks
are under the nova project.

On Fri, Jul 1, 2016 at 5:13 PM, Jeremy Stanley  wrote:

> Have you considered just writing a throwaway devstack-gate change
> which overrides the gate_hook to run that one suspect Tempest test,
> say, a thousand times in a loop? Would be far more efficient if you
> don't need to be concerned with all the environment setup/teardown
> overhead.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Vikas Choudhary
Hi Toni,

There seems to be some problem. I cloned kuryr-libnetwork:

> git clone http://github.com/openstack/kuryr-libnetwork.git


But when I pushed a patch, gerrit is showing the project as "openstack/kuryr"

PTAL, https://review.openstack.org/#/c/336617/



-Vikas

On Fri, Jul 1, 2016 at 6:40 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

> Hi fellow kuryrs!
>
> In order to proceed with the split of kuryr into a main lib and its kuryr
> libnetwork component, we've cloned the contents of openstack/kuryr over to
> openstack/kuryr-libnetwork.
>
> The idea is that after this, the patches that will go to openstack/kuryr
> will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> release of the common parts so that openstack/kuryr-libnetwork can start
> using it.
>
> I propose that we use python namespaces and the current common code in
> kuryr is moved to:
> kuryr/lib/
>
>
> which openstack/kuryr-libnetwork would import like so:
>
> from kuryr.lib import binding
>
> So, right now, patches in review that are for the Docker ipam or remote
> driver, should be moved to openstack/kuryr-libnetwork and soon we should
> make openstack/kuryr-libnetwork add kuryr-lib to the requirements.
>
> Regards,
>
> Toni
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Jeremy Stanley
On 2016-07-01 08:26:13 -0500 (-0500), Monty Taylor wrote:
[...]
> Check with Doug Hellman about namespaces. We used to use them in some
> oslo things and had to step away from them because of some pretty weird
> and horrible breakage issues.
[...]

Or read the associated Oslo spec from when that was done:

https://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-07-01 Thread Matt Riedemann

On 6/28/2016 4:56 PM, Sean Dague wrote:

> On 06/28/2016 01:46 AM, Angus Lees wrote:
>> Ok, thanks for the in-depth explanation.
>>
>> My take away is that we need to file any rootwrap updates as exceptions
>> for now (so releasenotes and grenade scripts).
>
> That is definitely the fall back if there is no better idea. However, we
> should try really hard to figure out if there is a non manual way
> through this. Even if that means some compat code that we keep for a
> release to just bridge the gap.
>
> -Sean



Walter had this for os-brick:

https://review.openstack.org/#/c/329586/

That would fall back to rootwrap if privsep doesn't work or is not
available. That could be a workaround for upgrading with os-brick for
Newton, with a big fat warning logged if we use it; we would then drop it
in Ocata and require privsep.


I'm not sure about os-vif; we weren't using that in Mitaka so it doesn't
suffer from the same mitaka->newton upgrade issue, but will we get into 
any problems with newton->ocata? I know there was a change to devstack 
to configure nova to use privsep for os-vif:


https://review.openstack.org/#/c/327199/

And the os-vif integration change in nova has a rootwrap change for 
using privsep + os-vif:


https://review.openstack.org/#/c/269672/25/etc/nova/rootwrap.d/compute.filters
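
For context, a privsep entry in a rootwrap filter file is a one-line
CommandFilter; roughly like this (illustrative, not the exact contents of
that patch):

    [Filters]
    # Allow rootwrap to start the privsep helper as root.
    privsep-helper: CommandFilter, privsep-helper, root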

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-01 Thread Matt Riedemann

On 6/30/2016 11:10 AM, Chris Friesen wrote:


> For what it's worth, this is how the timestamps work for POSIX
> filesystems. When you create a file it sets the access/modify/change
> timestamps to the file creation time.
>
> Chris

That's a good point.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Jeremy Stanley
On 2016-07-01 15:39:10 +0200 (+0200), Kashyap Chamarthy wrote:
> [Snip description of some nice debugging.]
> 
> > I'd really love it if there was
> > 
> >  1. the ability to request checking of just specific jobs eg
> > 
> >   "recheck gate-tempest-dsvm-multinode-full"
> 
> Yes, this would really be desirable.  I recall once asking this exact
> question on #openstack-infra, but can't find Infra team's response to
> that.

The challenge here is that you want to make sure it can't be used to
recheck individual jobs until you have them all passing (like
picking a pin-and-tumbler lock). The temptation to recheck-spam
nondeterministically failing changes is already present, but this
would make it considerably easier still for people to introduce new
nondeterministic failures in projects. Maybe if it were tied to a
special pipeline type, and then we set it only for the experimental
pipeline or something?

> >  2. the ability to request this recheck to run multiple
> > times in parallel. eg if i just repeat the 'recheck'
> > command many times on the same patchset # without
> > waiting for results
> 
> Yes, this too, would be _very_ useful for all the reasons you described.
[...]

In the past we've discussed the option of having an "idle pipeline"
which repeatedly runs specified jobs only when there are unused
resources available, so that it doesn't significantly cut into our
resource pool when we're under high demand but still allows us to
automatically collect a large amount of statistical data.

Anyway, hopefully James Blair can weigh in on this, since Zuul is
basically in a feature freeze for a while to limit the number of
significant changes we'll need to forward-port into the v3 branch.
We'd want to discuss these new features in the context of Zuul v3
instead.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Live-Migration] Cross l2 agent migration and solving Nova-Neutron live migration bugs

2016-07-01 Thread Murray, Paul (HP Cloud)


> -Original Message-
> From: Carl Baldwin [mailto:c...@ecbaldwin.net]
> Sent: 29 June 2016 22:20
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Nova][Neutron][Live-Migration] Cross l2 agent
> migration and solving Nova-Neutron live migration bugs
> 
> On Tue, Jun 28, 2016 at 9:42 AM, Andreas Scheuring
>  wrote:
> > I'm currently working on solving Nova-Neutron issues during Live
> > Migration. This mail is intended to raise awareness cross project and
> > get things kicked off.
> 
> Thanks for sending this.
> 
> > The issues
> > ==
> >
> > #1 When portbinding fails, instance is migrated but stuck in error
> > state
> > #2 Macvtap agent live Migration when source and target use different
> > physical_interface_mapping [3]. Either the migration fails (good case)
> > or it would place the instance on a wrong network (worst case)
> > #3 (more a feature):  Migration cross l2 agent not possible (e.g.
> > migrate from lb host to ovs host, or from ovs-hybrid to new
> > ovsfirewall
> > host)
> 
> Since all of these issues are experienced when using live migration, a Nova
> feature, I was interested in finding out how Nova prioritizes getting a fix 
> and
> then trying to align Neutron's priority to that priority.  It would seem 
> artificial
> to me to drive priority from Neutron.  That's why I mentioned it.  Since Nova
> is hitting a freeze for non-priority work tomorrow, I don't think anything can
> be done for Newton.  However, there is a lot of time to tee up this
> conversation for Ocata if we get on it.
> 

I read the comments from the neutron meeting here: 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-06-30-22.00.log.html#l-320
 and thought a little commentary on our priorities over the last couple of 
cycles might avoid any misconception.

Live migration was a priority in Mitaka because it simply was not up to scratch 
for production use. The main objective was to make it work and make it 
manageable. That translated to:

1. Expand CI coverage
2. Manage migrations: monitor progress, cancel migrations, force spinning 
migrations to complete
3. Extend use cases: allow mix of volumes, shared storage and local disks to be 
migrated
4. Some other things: simplify config and APIs, scheduling support, separate 
migration traffic from management network

These were mostly covered including some supporting work on qemu and libvirt as 
well.

We next wanted to do some security work (refactoring image backend and removing 
ssh-based copy - aka storage pools) but that could not be done in Mitaka and 
was deferred. The priority for Newton was specifically this security work and 
continuing efforts on CI which is now making progress (ref: the sterling work 
by Daniel Berrange in the last couple of days: 
http://lists.openstack.org/pipermail/openstack-dev/2016-June/098540.html )

 
> > The proposal
> > 
> > All those problems could be solved with the same approach . The
> > proposal is, to bind a port to the source AND to the target port
> > during migration.
> >

Good - it is frankly ridiculous that we irreversibly commit a migration to the 
destination host before we know the networks can be built out. This is 
basically what we do with cinder - volumes are attached at both source and 
destination during the migration.


> > * Neutron would need to allow multiple bindings for a compute port and
> > externalize that via API.
> >   - Neutron Spec [1]
> >   - Bug [4]  is a prereq to the spec.
> >
> > * Nova would need to use those new APIs to check in
> > pre_live_migration, if the binding for target host is valid and to
> > modify the instance definition (e.g. domain.xml) during migration.
> >   - Nova Spec [2]
> >
> > This would solve the issues in the following way:
> > #1 would abort the migration before it started, so instance is still
> > usable
> > #2 Migration is possible with all configurations
> > #3 would allow such a migration
> >
> > Coordination
> > 
> > Some coordination between Nova & Neutron is required. Along todays
> > Nova Live Migration Meeting [5] this will happen on the Nova midcycle.
> > I put an item on the agenda [6].
> 
> I'll be there.

Yes, Good. 


> 
> > Would be great that anybody that is interested in this bugfix/feature
> > could comment on the specs [1] or [2] to get as much feedback as
> > possible before the nova midcycle in July!
> >
> > Thank you!
> 

I don't think this is actually a large amount of work on the Nova side. We need 
to get the API right - will leave comments.


> >
> > [1] Neutron spec: https://review.openstack.org/#/c/309416
> > [2] Nova spec: https://review.openstack.org/301090
> > [3] macvtap bug: https://bugs.launchpad.net/neutron/+bug/1550400
> > [4] https://bugs.launchpad.net/neutron/+bug/1367391
> > [5]
> >
> 

Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Antoni Segura Puimedon
On Fri, Jul 1, 2016 at 3:26 PM, Monty Taylor  wrote:

> On 07/01/2016 08:10 AM, Antoni Segura Puimedon wrote:
> > Hi fellow kuryrs!
> >
> > In order to proceed with the split of kuryr into a main lib and its
> kuryr
> > libnetwork component, we've cloned the contents of openstack/kuryr over
> to
> > openstack/kuryr-libnetwork.
> >
> > The idea is that after this, the patches that will go to openstack/kuryr
> > will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> > release of the common parts so that openstack/kuryr-libnetwork can start
> > using it.
> >
> > I propose that we use python namespaces and the current common code in
> > kuryr is moved to:
> > kuryr/lib/
>
> Check with Doug Hellman about namespaces. We used to use them in some
> oslo things and had to step away from them because of some pretty weird
> and horrible breakage issues.
>

Thanks for the warning. It's a very cool-looking feature that is underused,
so probably there is a nasty reason for that. I'll ask.



>
> >
> > which openstack/kuryr-libnetwork would import like so:
> >
> > from kuryr.lib import binding
> >
> > So, right now, patches in review that are for the Docker ipam or remote
> > driver, should be moved to openstack/kuryr-libnetwork and soon we should
> > make openstack/kuryr-libnetwork add kuryr-lib to the requirements.
> >
> > Regards,
> >
> > Toni
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Jeremy Stanley
Have you considered just writing a throwaway devstack-gate change
which overrides the gate_hook to run that one suspect Tempest test,
say, a thousand times in a loop? Would be far more efficient if you
don't need to be concerned with all the environment setup/teardown
overhead.
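
Such a throwaway gate_hook could be as small as something like this (a
sketch; the test regex is a placeholder):

    # Hypothetical gate_hook override: run one suspect test in a loop
    # instead of the full suite.
    cd /opt/stack/new/tempest
    for i in $(seq 1 1000); do
        tox -e all -- tempest.api.compute.test_suspect || break
    done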
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][nailgun-agent] Announcement of massive changes in fuel-nailgun-agent

2016-07-01 Thread Ivan Suzdal
Greetings!

Today we did a big change which might be invasive. We replaced ohai with facter 
in fuel-nailgun-agent [1]. There were a couple of reasons why we did that:

1. The Fuel project uses facter very intensively. Ohai and facter have a 
similar purpose and functionality, but ohai lacked some useful features that 
we needed.
2. Ohai requires a lot of dependencies. Every dependency is a separate package 
which requires some effort to maintain.

All necessary functionality was added to nailgun-agent, so we have feature 
parity at the moment. Since the change is big, it might be risky, as it 
interacts with the hardware it runs on. Please don't hesitate to contact me in 
case of any issues with nailgun-agent. Your feedback is appreciated.

[1] https://review.openstack.org/#/c/314642/ 



Best regards!
Ivan Suzdal
isuz...@mirantis.com





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Jeremy Stanley
On 2016-07-01 11:24:27 +0200 (+0200), Thierry Carrez wrote:
> Short answer is: release:managed doesn't mean that much anymore (all
> official projects are "managed"), so we'll likely retire it ASAP.
[...]

If the meaning has been reduced to "this project is allowed to
request tagging by the Release Management team" then I agree it's no
longer necessary since any official project _can_ do that. If the
meaning is "this project is _only_ allowed to be tagged by the
Release Management team" then I can still see some use for it, since
there are plenty of official projects that currently follow their
own independent release process and push their own tags instead.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Kashyap Chamarthy
On Thu, Jun 30, 2016 at 02:44:36PM +0100, Daniel P. Berrange wrote:

[Snip description of some nice debugging.]

> I'd really love it if there was
> 
>  1. the ability to request checking of just specific jobs eg
> 
>   "recheck gate-tempest-dsvm-multinode-full"

Yes, this would really be desirable.  I recall once asking this exact
question on #openstack-infra, but can't find Infra team's response to
that.

>  2. the ability to request this recheck to run multiple
> times in parallel. eg if i just repeat the 'recheck'
> command many times on the same patchset # without
> waiting for results

Yes, this too, would be _very_ useful for all the reasons you described.

Thanks for starting this thread!

> Anyone got any other tips for debugging highly non-deterministic
> bugs like this which only hit perhaps 1 time in 100, without wasting
> huge amounts of CI resource as I'm doing right now ?
> 
> No one has ever been able to reproduce these failures outside of
> the gate CI infra, indeed certain CI hosting providers seem worse
> affected by the bug than others, so running tempest locally is not
> an option.


[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nfs-ganesha export modification issue

2016-07-01 Thread John Spray
On Thu, Jun 30, 2016 at 1:37 PM, Alexey Ovchinnikov
 wrote:
> Hello everyone,
>
> here I will briefly summarize an export update problem one will encounter
> when using nfs-ganesha.
>
> While working on a driver that relies on nfs-ganesha I have discovered that
> it
> is apparently impossible to provide interruption-free export updates. As of
> version
> 2.3 which I am working with it is possible to add an export or to remove an
> export without restarting the daemon, but it is not possible to modify an
> existing
> export. So in other words if you create an export you should define all
> clients
> before you actually export and use it, otherwise it will be impossible to
> change
> rules on the fly. One can come up with at least two ways to work around
> this issue: either by removing, updating and re-adding an export, or by
> creating multiple
> exports (one per client) for an exported resource. Both ways have associated
> problems: the first one interrupts clients already working with an export,
> which might be a big problem if a client is doing heavy I/O, the second one
> creates multiple exports associated with a single resource, which can easily
> lead
> to confusion. The second approach is used in current manila's ganesha
> helper[1].
> This issue seems to be raised now and then with the nfs-ganesha team,
> most recently in [2], but apparently it will not be addressed in the
> near future.

This is certainly an important limitation for people to be aware of.
My reading of [2] wasn't that anyone was saying it would necessarily
not be addressed; it just needs someone to do it. Frank's mail on that
thread pretty much laid out the steps needed.

John

> With kind regards,
> Alexey.
>
> [1]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/ganesha/__init__.py
> [2]: https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
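
For reference, the per-client workaround described above amounts to one
ganesha export block per client for the same backing path, roughly like
this (a sketch, not the exact output of the manila helper):

    EXPORT {
        Export_Id = 101;
        Path = "/shares/share-01";
        Pseudo = "/share-01-client-a";
        FSAL { Name = VFS; }
        CLIENT { Clients = 192.0.2.10; Access_Type = RW; }
    }
    # ...and another EXPORT block with a new Export_Id and Pseudo for
    # the next client of the same share.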

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Monty Taylor
On 07/01/2016 08:10 AM, Antoni Segura Puimedon wrote:
> Hi fellow kuryrs!
> 
> In order to proceed with the split of kuryr into a main lib and its kuryr
> libnetwork component, we've cloned the contents of openstack/kuryr over to
> openstack/kuryr-libnetwork.
> 
> The idea is that after this, the patches that will go to openstack/kuryr
> will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> release of the common parts so that openstack/kuryr-libnetwork can start
> using it.
> 
> I propose that we use python namespaces and the current common code in
> kuryr is moved to:
> kuryr/lib/

Check with Doug Hellman about namespaces. We used to use them in some
oslo things and had to step away from them because of some pretty weird
and horrible breakage issues.

> 
> which openstack/kuryr-libnetwork would import like so:
> 
> from kuryr.lib import binding
> 
> So, right now, patches in review that are for the Docker ipam or remote
> driver, should be moved to openstack/kuryr-libnetwork and soon we should
> make openstack/kuryr-libnetwork add kuryr-lib to the requirements.
> 
> Regards,
> 
> Toni
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Antoni Segura Puimedon
On Fri, Jul 1, 2016 at 3:10 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

> Hi fellow kuryrs!
>
> In order to proceed with the split of kuryr into a main lib and its kuryr
> libnetwork component, we've cloned the contents of openstack/kuryr over to
> openstack/kuryr-libnetwork.
>
> The idea is that after this, the patches that will go to openstack/kuryr
> will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> release of the common parts so that openstack/kuryr-libnetwork can start
> using it.
>
> I propose that we use python namespaces and the current common code in
> kuryr is moved to:
> kuryr/lib/
>
>
> which openstack/kuryr-libnetwork would import like so:
>
> from kuryr.lib import binding
>
> So, right now, patches in review that are for the Docker ipam or remote
> driver, should be moved to openstack/kuryr-libnetwork and soon we should
> make openstack/kuryr-libnetwork add kuryr-lib to the requirements.
>

We should be moving the gates too.


>
> Regards,
>
> Toni
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

In order to proceed with the split of kuryr into a main lib and its kuryr
libnetwork component, we've cloned the contents of openstack/kuryr over to
openstack/kuryr-libnetwork.

The idea is that after this, the patches that will go to openstack/kuryr
will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
release of the common parts so that openstack/kuryr-libnetwork can start
using it.

I propose that we use python namespaces and the current common code in
kuryr is moved to:
kuryr/lib/


which openstack/kuryr-libnetwork would import like so:

from kuryr.lib import binding

So, right now, patches in review that are for the Docker ipam or remote
driver, should be moved to openstack/kuryr-libnetwork and soon we should
make openstack/kuryr-libnetwork add kuryr-lib to the requirements.

Regards,

Toni
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL]Network verification failed.

2016-07-01 Thread Aleksey Zvyagintsev
Hi
I guess this doc [1] will be more useful, since Samer uses Fuel 8.0.

[1] https://community.mellanox.com/docs/DOC-2435

On Thu, Jun 30, 2016 at 6:18 PM, Yuki Nishiwaki 
wrote:

> Hello Samer
>
> Did you read this reference (https://community.mellanox.com/docs/DOC-2036)?
> If you didn’t read it yet, this may be helpful to you.
>
> By the way, "Figure 1", which is attached, is a very cool chart.
> What tool did you use to draw this figure?
>
> Yuki Nishiwaki
>
> 2016/06/30 21:01、Samer Machara  のメール:
>
> Hello!
>   I'm having problems configuring the Network settings (See Figure 3) in
> Fuel 8.0. In Figure 1, I resume my network topology.
>
>   When I am checking the network, I get the following error: "Verification
> failed. Expected VLAN (not received)." See Figure 2.
>
>   All these node interfaces belong to the "storage network"; they are
> InfiniBand interfaces connected to an unmanaged "Mellanox IS5025" switch,
> which means it is plug and play. So I cannot configure PKeys (the
> InfiniBand equivalent of VLANs).
>
>
>   Beyond that, I do not know how to determine what is happening because I do
> not see more details of the error, other than that shown in Fig 2.
>
> [Figure 1: network topology]
>
> [Figure 2: network verification error]
>
> [Figure 3: Fuel network settings]
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
---
Best regards,
   Aleksey Zvyagintsev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Deprecated Configuration Option in Nova Mitaka Release

2016-07-01 Thread Sean Dague
On 06/30/2016 01:55 PM, HU, BIN wrote:
> I see, and thank you very much Dan. Also thank you Markus for unreleased 
> release notes.
> 
> Now I understand that it is not a plugin and unstable interface. And there is 
> a new "use_neutron" option for configuring Nova to use Neutron as its network 
> backend.
> 
> When we use Neutron, there are ML2 and L3 plugins so that we can choose to
> use different backend providers to actually perform those network functions. 
> For example, integration with ODL.
> 
> Shall we foresee a situation where a user can choose another network backend
> directly, e.g. ODL or ONOS? Under this circumstance, a stable plugin interface
> seems needed which can provide end users with more options and flexibility in
> deployment.
> 
> What do you think?

Neutron is the network API that we've agreed to in OpenStack, and have
worked towards for years. Network backends should exist behind the
OpenStack Network API (Neutron).

If there are challenges in that software stack, then there is an
upstream community to engage with to get what you need out of that
stack. I'd highly recommend that you do so sooner rather than
later on that front.
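
For reference, the option mentioned above is a plain nova.conf flag; an
illustrative snippet:

    [DEFAULT]
    # Use Neutron as the network backend instead of nova-network.
    use_neutron = True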

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Timofei Durakov
Hi,

talking about the live-migration job. If you are OK with re-running tempest
tests multiple times on a single environment, you could wrap this line:
https://github.com/openstack/nova/blob/master/nova/tests/live_migration/hooks/run_tests.sh#L30
in a for-cycle.
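
A minimal sketch of that wrapping (the invocation name below is a
placeholder, not the real contents of run_tests.sh):

    # Hypothetical: repeat the live-migration tempest run N times on
    # the same environment.
    for i in $(seq 1 100); do
        run_tempest_suite || exit 1   # placeholder for the line above
    done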

Timofey.

On Fri, Jul 1, 2016 at 10:31 AM, Spencer Krum  wrote:

>
> This is some neat work. I like it.
>
> We have a graph of nodepool nodes in use in grafana[1]. Looking at a 30
> day window, it's pretty easy to tell that the weekends are low activity.
> Running big tests like this on a Saturday would be great. Of course,
> there won't be much infra support on that day.
>
> What we can do for running just the multinode migration test is create a
> new pipeline (maybe call it heavyload or something) and attach 100 copies
> of the test to that pipeline. Then you only need one gerrit change and
> you issue a 'check heavyload' comment on the review and it will run all
> 100 copies of the test. Gate-tempest-dsvm-multinode-live-migration-1,
> gate-tempest-dsvm-multinode-live-migration-2 ... and so on. I think this
> is better than reusing the experimental pipeline since people will try
> to check experimental on nova from time to time and probably don't
> expect to use 200 virtual machines. It's not a very clean suggestion,
> but it would work. Someone else might have a better idea.
>
>
>
> [1]
> http://grafana.openstack.org/dashboard/db/nodepool?panelId=4=1464741261723=1467333261724
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>


[openstack-dev] [osc][openstackclient][all] time to make the check-osc-plugins job voting?

2016-07-01 Thread Steve Martinelli
There's a whole bunch of OpenStackClient plugins now (17 to be exact).

With [1] now merged, we run a non-voting job (check-osc-plugins) on each
plugin to check for commands that may conflict with each other. It tests
the master branch of each plugin, so we'll catch errors before the release.

The job has caught legitimate errors in the past (entry point not defined,
conflicting command, etc), I think it's time to make it voting. I've
proposed [2] just for that.
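
For anyone curious what the conflict check boils down to, here is a rough
sketch (my own illustration, not the actual check-osc-plugins job; the
"openstack." entry point group prefix is an assumption) of how duplicate
command names across installed plugins could be detected:

    import collections

    import pkg_resources

    # Map (entry point group, command name) -> packages providing it.
    seen = collections.defaultdict(set)
    for dist in pkg_resources.working_set:
        for group, entry_points in dist.get_entry_map().items():
            # OSC plugins register commands in groups such as
            # "openstack.network.v2".
            if not group.startswith('openstack.'):
                continue
            for name in entry_points:
                seen[(group, name)].add(dist.project_name)

    for (group, name), owners in sorted(seen.items()):
        if len(owners) > 1:
            print('conflict: %s/%s provided by %s'
                  % (group, name, ', '.join(sorted(owners))))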

If you're the PTL of a project that has an OSC plugin, can you please
comment on [2], as it'll create a voting job in your project.

[1] https://review.openstack.org/#/c/336061/
[2] https://review.openstack.org/#/c/336344/

Thanks,
stevemar


[openstack-dev] [tricircle]tricircle wiki update

2016-07-01 Thread joehuang
Hello, team,

The Tricircle wiki page has been updated in
 the architecture and value sections; other
 sections also received minor updates.

Please review and comment, and update
 the page directly for any obvious errors.

If you have any questions or topics to discuss, let's also discuss them on the 
m-l. Thanks

Sent from HUAWEI AnyOffice


[openstack-dev] [tripleo][infra] RH2 is up and running

2016-07-01 Thread Derek Higgins
Hi All,
Yesterday the final patch merged to run CI jobs on RH2, and last
night we merged the patch to tripleo-ci to support RH2 jobs. So we now
have a new job (gate-tripleo-ci-centos-7-ovb-ha) running on all
tripleo patch reviews. This job is running pacemaker HA with a 3-node
controller and a single compute node. It's basically the same as our
current HA job without net-iso.

Looking at the pass rates this morning:
1. The jobs are failing on stable branches[1]
  o I've submitted a patch to the mitaka and liberty branches to fix
this (see the bug)
2. The pass rate does seem to be a little lower than that of the RH1 HA job
  o I'll look into this today, but overall the pass rate should be good
enough for when RH1 is taken offline

The main difference between jobs running on rh2 when compared to rh1
is that the CI slave IS the undercloud (we've eliminated the need for
an extra undercloud node), which saves resources. We also no longer build an
instack qcow2 image, which saves us a little time.

To make this work, early in the CI process we make a call out to a
geard broker and pass it the instance ID of the undercloud. This
broker creates a heat stack (using OVB heat templates) with a number of
nodes on a provisioning network. It then attaches an interface on this
provisioning network to the undercloud[2]. Ironic can then talk (with
ipmi) to a bmc node to power the nodes on and PXE boot them. At the end of
the job the stack is deleted.
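
Purely as an illustration, the client side of that geard call might look
roughly like the sketch below (my own sketch, not the actual tripleo-ci
code; the broker host and the 'create_ovb_env' function name are invented):

    import json

    import gear

    # Hand the undercloud's instance ID to the geard broker; a worker
    # behind the broker (not shown here) creates the OVB heat stack.
    client = gear.Client()
    client.addServer('geard.example.org')
    client.waitForServer()

    job = gear.Job('create_ovb_env',
                   json.dumps({'instance_id': 'UNDERCLOUD_INSTANCE_ID'}))
    client.submitJob(job)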

What's next?
o Next Tuesday evening, rh1 will be taken offline, so I'll be
submitting a patch to remove all of the RH1 jobs; until we bring it
back up we will only have a single tripleo-ci job
o The RH1 rack will be back available to us on Thursday; we then have a choice:
 1. Bring rh1 back up as is and return everything back to the status quo
 2. Redeploy rh1 with OVB and move away from the legacy system permanently
 If the OVB based jobs prove to be reliable etc., I think option 2 is
worth thinking about. It wasn't the original plan, but it would allow
us to move away from a legacy system that is getting harder to support as
time goes on.
o RH2 was loaned to us to allow this to happen, so once we pick
either option above and complete the deployment of RH1 we'll have to
give it back

The OVB based cloud opens up a couple of interesting options to us
that we can explore if we were to stick with using OVB:
1. Periodic scale test
  o With OVB it's possible to select the number of nodes we place on
the provisioning network. For example, while testing rh2 I was able to
deploy an overcloud with 80 compute nodes (we could do up to 120 on rh2,
and even more on rh1); doing this nightly when CI load is low would be an
extremely valuable test to run and gather data on.
2. Dev quota to reproduce CI
  o On OVB it's now a lot easier to give somebody some quota to
reproduce exactly what CI is using in order to reproduce problems
etc. This was possible on rh1, but it required a cloud admin to manually
take testenvs away from CI (it was manual and messy, so we didn't do it
much)

The move doesn't come without its costs

1. tripleo-quickstart
  o Part of the tripleo-quickstart install is to first download a
prebuilt undercloud image that we were building in our periodic job.
Because the undercloud is now the CI slave, we no longer build an
instack.qcow2 image. For the near future we can host the most recent
one on RH2 (the IP will change, so this needs to change in tripleo
quickstart; better still, a DNS entry could be used so that the switch-over
would be smoother in future), but if we make the move to jobs of this
type permanent we'll no longer be generating this image for
quickstart. So we'll have to see if we can come up with an alternative. We
could generate one in the periodic job, but I'm not sure how we could
test it easily.

2. moving the current-tripleo pin
  o I haven't yet put in place anything needed for our periodic job to
move the current-tripleo pin, so until we get this done (and decide
what to do about 1. above) we're stuck on whatever pin we happen to
be on on Tuesday when rh1 is taken offline. The pin moved last night
to a repository from 2016-06-29, so we are at least reasonably up to
date. If it looks like the rh1 deployment is going to take an
excessive amount of time we'll need to make this a priority.

3. The ability to telnet to CI slaves to get the console for running
CI jobs doesn't work on RH2 jobs; this is because it is using the
same port number (8088) we use in tripleo for ironic to serve its iPXE
images over http. So I've had to kill the console-serving process
until we solve this. If we want to fix this we'll have to explore
changing the port number in either tripleo or infra.

I was putting together a screencast of how rh2 was deployed (with RDO
mitaka), but after several hours of editing the screen casts into
something usable, the software I was using (openshot) refused to
generate what I had put together (in fact it crashed a lot), so if
anybody has any good suggestions for software I could use I'll try
again.

If I've missed 

Re: [openstack-dev] [neutron][calico] New networking-calico IRC meeting

2016-07-01 Thread Neil Jerram
The first networking-calico IRC meeting was planned for 28th June, but I'm
afraid I was unwell - so it will now be 12th July.  I've updated the agenda
[2] accordingly.

Neil

On Mon, Jun 20, 2016 at 3:42 PM Neil Jerram  wrote:

> Calling everyone interested in networking-calico ...!  networking-calico
> has been around in the Neutron stadium for a while now, and it's way past
> time that we had a proper forum for discussing and evolving it - so I'm
> happy to be finally proposing a regular IRC meeting slot for it: [1].  A
> strawman initial agenda is up at [2].
>
> [1] https://review.openstack.org/#/c/331689/
> [2] https://wiki.openstack.org/wiki/Meetings/NetworkingCalico
>
> Please do take a look and
>
>- let me know your views on the timing, either here or on the review
>- feel free to add items to the agenda.
>
>
> Many thanks,
>Neil
>
>


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-07-01 Thread Jesse Pretorius
Now that OpenStack-Ansible has the final Swift kilo-eol tag implemented we’ve 
requested a final tag [1]. Once that merges we are ready to have our kilo-eol 
tag implemented and the ‘kilo’ branch removed.

Whoops – I forgot to add the link to the final Kilo tag request.

[1] https://review.openstack.org/336505





Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-07-01 Thread Jesse Pretorius
Hi all,

Now that OpenStack-Ansible has the final Swift kilo-eol tag implemented we’ve 
requested a final tag [1]. Once that merges we are ready to have our kilo-eol 
tag implemented and the ‘kilo’ branch removed.

Just to make life interesting, we still have leftover ‘juno’ and ‘icehouse’ 
branches and would like to implement eol tags for them too. I think we have the 
appropriate skips in place for the juno branch so there should be no funky 
post-tag jobs kicking off for them, but the icehouse branch may end up with 
some unknown jobs kicking off. If you can help identify the changes we need to 
get implemented into project-config then we can be rid of the old cruft.

Thanks,

Jesse




Re: [openstack-dev] [neutron][calico] New networking-calico IRC meeting

2016-07-01 Thread Neil Jerram
Thanks, Carl.  I had the same observation, so would also be interested in
the answer.

Neil


On Thu, Jun 30, 2016 at 9:15 PM Carl Baldwin  wrote:

> On Mon, Jun 20, 2016 at 8:42 AM, Neil Jerram  wrote:
> > Calling everyone interested in networking-calico ...!  networking-calico
> has
> > been around in the Neutron stadium for a while now, and it's way past
> time
> > that we had a proper forum for discussing and evolving it - so I'm happy
> to
> > be finally proposing a regular IRC meeting slot for it: [1].  A strawman
> > initial agenda is up at [2].
>
> I see that the meeting yaml has merged and I've tried to check
> eavesdrop.o.o [3].  I figured that since this is a biweekly meeting,
> it would help me avoid mistakes to download the ICS file and import it
> in to my calendar.  But, I can't find this meeting on that page!  Has
> this page stopped updating?
>
> Carl
>
> > [1] https://review.openstack.org/#/c/331689/
> > [2] https://wiki.openstack.org/wiki/Meetings/NetworkingCalico
>
> [3] http://eavesdrop.openstack.org/
>


Re: [openstack-dev] [kolla][vote] Apply for release:managed tag

2016-07-01 Thread Steven Dake (stdake)


On 6/30/16, 8:26 PM, "Tony Breeds"  wrote:

>On Fri, Jul 01, 2016 at 12:56:09AM +, Steven Dake (stdake) wrote:
>> Hey folks,
>> 
>> I'd like the release management team to take on releases of Kolla.  This
>> means applying for the release:managed[1] tag.  Please vote +1 if you
>>wish to
>> proceed, or -1 if you don't wish to proceed.  The most complex part of
>>this
>> change will be that when feature freeze happens, we must actually
>>freeze all
>> feature development.  We as a team haven't been super good at this in
>>the
>> past, but I am confident we could hold to that set of rules if the core
>>team
>> is in agreement on this vote.
>
>I'm far from the center of this but release:managed is on the way out (
>https://review.openstack.org/#/c/335440/ ) So I think you're good as you
>are.
>I'm sure Doug will provide help in understanding what the release process
>looks
>like without release:managed.  Especially WRT feature-freeze / RC periods.
>
>Yours Tony.

Tony,

Cool, thanks for the information.  I have been attempting to get a feel for
what's going to happen with this tag all week, but have yet to see Doug's
review.

Thanks again

Folks, I think we can just ignore this vote since the tag is being removed.

Regards
-steve


[openstack-dev] [Fuel][QA] fuel-devops bridge and interface names

2016-07-01 Thread Anton Studenov
Hi all,

This message is related to upcoming changes in the fuel-devops project.

A race condition was found in the scenario where multiple environments
are being created at the same time: networks or node interfaces receive
the same name in different environments, and this leads to errors.
This happens because, in the current implementation of fuel-devops,
the names are assigned manually in code. We'd like to change this
and let libvirt choose the available names, since libvirt has this feature
and it is easier to use it than to search for available names ourselves.

After the proposed changes [1][2], fuel-devops will use the following prefixes:
  virbr - for network bridge names
  vnet - for node interface names
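
For illustration, here is a minimal sketch (my own example, not fuel-devops
code) of letting libvirt pick a bridge name by simply omitting the <bridge>
element from the network XML:

    import libvirt

    # No <bridge name='...'/> element: libvirt allocates the next free
    # "virbrN" bridge by itself.
    NETWORK_XML = """
    <network>
      <name>devops-example</name>
      <forward mode='nat'/>
      <ip address='10.109.0.1' netmask='255.255.255.0'/>
    </network>
    """

    conn = libvirt.open('qemu:///system')
    net = conn.networkDefineXML(NETWORK_XML)
    net.create()
    print(net.bridgeName())  # e.g. 'virbr2', chosen by libvirt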

Concerns/Proposals are welcome.


[1] https://review.openstack.org/#/c/336057/ for release/2.9 branch
[2] https://review.openstack.org/#/c/336064/ for master branch

Thanks,
Anton



Re: [openstack-dev] [ansible] Fail to install Openstack all in one

2016-07-01 Thread Jesse Pretorius
On Thu, Jun 23, 2016 at 8:47 AM, Alioune 
> wrote:

I'm trying to install OpenStack all-in-one using ansible but I got the error 
below.
Does anyone know how to solve this error?

Regards,

ERROR:
+ openstack-ansible --forks 1 openstack-hosts-setup.yml
Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e 
@/etc/openstack_deploy/user_variables.yml "
ERROR: Inventory script (inventory/dynamic_inventory.py) had an execution 
error: No user config loadaed
No openstack_user_config files are available in either the base location or the 
conf.d directory

For the all-in-one it looks like you’ve skipped the execution of the 
‘bootstrap-aio.sh’ script which puts all the AIO configurations into place. 
This is covered in the AIO documentation [1].

As Jimmy has said, please feel free to drop by in #openstack-ansible on 
freenode for more active assistance.

[1] 
http://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html




[openstack-dev] [openstack-ansible] When to purge the DB, and when not to purge the DB?

2016-07-01 Thread Jesse Pretorius
Hi everyone,

In a recent conversation on the Operators list [1] there was a discussion about 
purging archived data in the database. It seems to me an important step in 
maintaining an environment, one which should be done from time to time, and perhaps 
at the very least prior to a major upgrade.

What are the thoughts on how often this should be done? Should we include it as 
an opt-in step, or an opt-out step?

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2016-June/010813.html

—
Jesse Pretorius
IRC: odyssey4me





Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Thierry Carrez

Steven Dake (stdake) wrote:

If it does have some special meaning or requirements beyond the "we will
freeze on the freeze deadline" could someone enumerate those?


Short answer is: release:managed doesn't mean that much anymore (all 
official projects are "managed"), so we'll likely retire it ASAP.


I'll let Doug do the long answer when he finishes processing his ML backlog.

--
Thierry Carrez (ttx)



[openstack-dev] [neutron][taas] Taas can not capture the packet, if the two VM on the same host. Is it a Bug?

2016-07-01 Thread 张广明
Hi,
I found a limitation when using TaaS. My test case is described as
follows:
VM1 and VM2 are running on the same host and they belong to the same VLAN.
The monitor VM is on the same host or on another host. I want to monitor
only the ingress (IN) traffic to VM1.
So I configured the tap-flow like this: "neutron tap-flow-create --port
2a5a4382-a600-4fb1-8955-00d0fc9f648f --tap-service
c510e5db-4ba8-48e3-bfc8-1f0b61f8f41b --direction IN".
When pinging from VM2 to VM1, I cannot see the traffic in the monitor VM.
The reason is that the flow from VM2 to VM1 in br-int carries no VLAN
information; the VLAN tag is only added when OVS outputs the packet.
So the following code in ovs_taas.py does not work in this case:

    if direction == 'IN' or direction == 'BOTH':
        port_mac = tap_flow['port_mac']
        self.int_br.add_flow(table=0,
                             priority=20,
                             dl_vlan=port_vlan_id,
                             dl_dst=port_mac,
                             actions="normal,mod_vlan_vid:%s,output:%s" %
                             (str(taas_id), str(patch_int_tap_id)))




Is this a bug, or is it by design?



Thanks.


Re: [openstack-dev] [release] the meaning of the release:managed tag now that everything is released:managed

2016-07-01 Thread Ihar Hrachyshka

> On 30 Jun 2016, at 16:07, Steven Dake (stdake)  wrote:
> 
> Hey folks,
> 
> I am keen on tagging Kolla in the governance repository with every tag that 
> is applicable to our project.  One of these is release:managed.  I've been 
> working as PTL the last 3 cycles to get Kolla processes to the point we could 
> apply for release:managed.  Looks like Doug and release team in general has 
> beat me to the punch :)
> 
> The requirements of the tag are met by force because of how the release 
> process is now executed.  I'm wondering if this tag has any meaning any 
> longer given the fact that the release team has nearly automated themselves 
> out of a job :)

I think you're under a mistaken impression about the level of automation here. 
That's OK, I was too, before. :)

While the openstack/releases repo indeed allows posting release patches for all 
official projects, there is an assumption that e.g. the git tag for the release 
is signed and pushed by the project's respective release liaison. In that case, 
the openstack/releases patch is just for documentation purposes; it does not 
trigger the actual release (because that already happened when the git tag was 
pushed).

Note that while there is that expectation, the release team is usually very helpful 
and will check the tag, and if there is no tag yet, will push it for you. But 
that's not supposed to be the default behaviour.

> 
> If it does have some special meaning or requirements beyond the "we will 
> freeze on the freeze deadline" could someone enumerate those?
> 
> FWIW I feel a lot more comfortable with the current release process.  The 
> release team has done a fantastic job.  I always felt nervous pushing a 
> signed tag and I've been doing this for ~5 years :)
> 
> Regards
> -steve
> 


Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Spencer Krum

This is some neat work. I like it.

We have a graph of nodepool nodes in use in grafana[1]. Looking at a 30
day window, it's pretty easy to tell that the weekends are low activity.
Running big tests like this on a Saturday would be great. Of course,
there won't be much infra support on that day.

What we can do for running just the multinode migration test is create a
new pipeline (maybe call it heavyload or something) and attach 100 copies
of the test to that pipeline. Then you only need one gerrit change and
you issue a 'check heavyload' comment on the review and it will run all
100 copies of the test. Gate-tempest-dsvm-multinode-live-migration-1,
gate-tempest-dsvm-multinode-live-migration-2 ... and so on. I think this
is better than reusing the experimental pipeline since people will try
to check experimental on nova from time to time and probably don't
expect to use 200 virtual machines. It's not a very clean suggestion,
but it would work. Someone else might have a better idea.



[1] http://grafana.openstack.org/dashboard/db/nodepool?panelId=4=1464741261723=1467333261724

-- 
  Spencer Krum
  n...@spencerkrum.com



Re: [openstack-dev] [swift] Support for unicode filename in tempurl middleware

2016-07-01 Thread Antonio Calanducci
Hi,

In the end I figured the problem out. It was the very same bug as in
python-swiftclient: missing utf-8 encoding.
I encoded the hmac_body with utf-8 (but no URI encoding) and the tempURL is
now valid!
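
For reference, here is a minimal sketch of the generation code that works for
me now (the account, container and key are invented; the point is encoding the
HMAC body, including the unicode path, as utf-8):

    # -*- coding: utf-8 -*-
    import hmac
    from hashlib import sha1
    from time import time

    key = 'MY_SECRET_TEMPURL_KEY'
    method = 'GET'
    expires = int(time() + 3600)
    path = u'/v1/AUTH_account/container/Monè.txt'

    # Encode the HMAC body as utf-8; do NOT percent-encode the path here.
    hmac_body = u'%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                   sha1).hexdigest()

    print(u'%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires))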

Thank you for the support!
Best
Antonio


On Wed, Jun 29, 2016 at 9:40 PM, Antonio Calanducci <
antonio.calandu...@ct.infn.it> wrote:

> Hi Tim,
>
> thank you. I have modified my local python swift client according the diff
> on your patch and now I can generate tempURL on python2.7 too.
>
> I have noticed a bizarre thing now. Trying to use curl to download the
> generated tempURL with a utf8 filename ("Monè.txt"), the download works
> fine if I copy and paste the filename from the "swift list" output, but if I
> type the filename with my keyboard, it doesn't work anymore. Same "è"
> character, not sure why. Maybe the character I type on the keyboard has a
> different encoding than the one from the "swift list" output (e.g. a
> precomposed "è" versus "e" plus a combining accent)?
>
> I also have a second question, related to filenames that contain spaces. I
> should not need to URL encode (with %20) the path to generate the tempURL,
> but I should encode when doing the actual upload, correct?
>
> Actually the code that generates the tempURL is a Node.js script written
> in JavaScript. Probably I am not doing the encodings correctly (as far as I
> know I should not encode anything in Node.js because of the native
> support). Do you have any JS sample code for tempURL generation or any
> hints I should follow to properly generate the tempURL?
>
> thank you in advance
> Best
> Antonio
>
>
>
>
>
>
>
>
> On Wed, Jun 29, 2016 at 8:27 PM, Tim Burke  wrote:
>
>> Hi Antonio,
>>
>> That sounds like a bug! Looks like we decode the tempurl key coming in
>> from the terminal, but don't properly encode it when getting the HMAC. This
>> generates the UnicodeEncodeError on python 2.7, and I've submitted
>> https://review.openstack.org/#/c/335615/ to address this.
>>
>> I'm not sure about what's going on with the tempurl generated on python
>> 3, though. FWIW, I'll often forget whether it's supposed to be
>> X-Container-Meta-Temp-Url-Key or X-Container-Meta-Tempurl-Key. (For
>> reference, use the first one.) Maybe that's the issue?
>>
>> Tim
>>
>> > On Jun 29, 2016, at 10:06 AM, Dr Antonio Calanducci <
>> antonio.calandu...@ct.infn.it> wrote:
>> >
>> > Hi,
>> >
>> > I hope this is the right mailing list to ask this question. In case
>> it's not, please direct me to the right one and apologise.
>> >
>> > I have some files stored in a container the use german character (and
>> in general non ascii character).
>> > Using the swift REST APIs and python-swiftclient (3.0.0)
>> download/upload command I have no problem to handle them.
>> >
>> > But when I try to generate tempurl for some of them with
>> python-swiftclient, I got errors if I use python 2.7. By the way, switching
>> to python3 I can generate properly the tempurl, but the URL is actually not
>> working generating a 401 error.
>> >
>> > Looking at the github repo it seems there is utf8 support for the
>> tempurl key, but I am not sure if you support unicode for path too. I am
>> currently using swift 2.2.0 on the server.
>> >
>> > I read in the docs that I should not encode the path while generating
>> the tempurl, but only to actually retrieve from the generate url.
>> >
>> > Could you confirm or not that filename with unicode chars are supported
>> and if so in which version?
>> >
>> > thanks a lot for the attention
>> > All the best
>> > Antonio
>> >
>> > --
>> >
>> > Dr. Antonio Calanducci, PhD
>> > INFN Catania
>> >
>> > Office:+39 095 3785519
>> > Mobile:  +39 349 6762534
>> > Skype:   tcaland
>> >
>> > antonio.calandu...@ct.infn.it
>> > calandu...@unict.it
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>
>
>
> --
> Dr. *Antonio Calanducci*, PhD
> INFN Catania
> Dipartimento di Fisica e Astronomia - Università di Catania
> *Via Santa Sofia, 64*
> *95128 Catania (IT)*
>
> O : +39 095 378 5519
> M:  +39 349 6762534
> Skype: tcaland
>
>


-- 
Dr. *Antonio Calanducci*, PhD
INFN Catania
Dipartimento di Fisica e Astronomia - Università di 

Re: [openstack-dev] [keystone][openid][mistral] Enabling OpenID Connect authentication w/o federation

2016-07-01 Thread Renat Akhmerov
Jamie,

Thanks a lot for your detailed answer. We'll think about all of that again, 
taking your comments into account.

Renat Akhmerov
@Nokia

> On 01 Jul 2016, at 13:10, Jamie Lennox  wrote:
> 
> 
> 
> On 23 June 2016 at 21:30, Renat Akhmerov  > wrote:
> Hi,
> 
> I’m looking for some hints on how to enable authentication via OpenID Connect 
> protocol, particularly in Mistral. Actually, specific protocol is not so 
> important. I’m mostly interested in conceptional vision here and I’d like to 
> ask the community if what we would like to do makes sense.
> 
> Problem statement
> 
> Whereas there are people using Mistral as an OpenStack service with proper 
> Keystone authentication etc. some people want to be able to use it w/o 
> OpenStack at all or in some scenarios where OpenStack is just one thing that 
> Mistral workflows should interact with.
> 
> In one of our cases we want to use Mistral w/o OpenStack but we want to make 
> Mistral perform authentication via OIDC. I’ve done some research on what 
> Keystone already has that could help us do that and I found a group of 
> plugins for OIDC authentication flows under [1]. The problem I see with these 
> plugins for my particular case is that I still have to properly install 
> Keystone and configure it for Federation since the plugins use Federation. 
> Feels like a redundant time consuming step for me. A normal flow for these 
> plugins is to first get so-called unscoped token via OIDC and then request a 
> scoped token from Keystone via its Federation API. I think I understand why it 
> works this way; it's well documented in Keystone docs. Briefly, it's required 
> to get user info, list of available resources etc, whatever OIDC server does 
> not provide, it only works as an identity provider.
> 
> What ideally I'd like to do is to avoid installing and configuring Keystone 
> at all. 
> 
> So, with the exception of token_endpoint (which is basically for debugging), yes: 
> all the plugins in keystoneauth are designed to work with keystone. Keystone 
> provides a whole bunch of things here like user, role and project management 
> - basically the Authorization that goes with OIDC's authentication. 
> 
> It also provides the auth_token middleware which reads those tokens and 
> provides a series of well known headers that you can use to know what project 
> you're in, do policy enforcement and basically all permission management. For 
> most projects this is what you care about. If you write your own version of 
> auth_token middleware for your identity provider you can use whatever 
> authentication you like.
> 
> You'll basically need a way of mapping the information you get from your OIDC 
> provider into the projects, roles and user info that makes sense for your 
> service. And when it gets sufficiently complex that you have to allow 
> different deployers to configure this in different ways and for any number of 
> protocols, you'll have re-created keystone's federation implementation.
>  
> Possible solution
> 
> What I’m thinking about is: would it be OK to just create a set of new 
> authentication plugins under keystoneauth project that would do the same as 
> existing ones but w/o getting a Keystone scoped token? That way we could 
> still take advantage of existing keystone auth plugins framework but w/o 
> having to install and configure Keystone service. I realize that we’ll lose 
> some capabilities that Keystone provides but for many cases it would be 
> enough just to authenticate on a client and then validate token from HTTP 
> headers via OIDC server on server side. Just one more necessary thing to do 
> here is to fill tenant/project but that could be extracted from a token.
> 
> So you can use keystoneauth to implement plugins that do not hit keystone. A 
> plugin basically has to implement this[1] interface which has no direct ties 
> to keystone. There is then a standard subclass of this that handles most of 
> the work for interacting with keystone that the existing plugins all use. 
> It's fairly well documented but if you have additional questions let us know. 
> 
> I'm pretty sure from here you can use the new version of openstackclient and 
> anything else that uses keystoneauth. 
> 
> These plugins would probably not live in the keystoneauth repository unless 
> there were a lot more people interested in using them - however keystoneauth, 
> OSC, shade etc all specify the plugin to use via a name which is a setuptools 
> entrypoint, so as long as the plugin is installed on the system you can use it 
> even though it wasn't in the repo. 
> 
> 
> [1] 
> https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/plugin.py 
> 
>  
>  
> 
> Questions
> 
> Would this new plugin have a right to be part of the keystoneauth project even 
> though the Keystone service is not involved at all? The 

Re: [openstack-dev] [keystone][openid][mistral] Enabling OpenID Connect authentication w/o federation

2016-07-01 Thread Jamie Lennox
On 23 June 2016 at 21:30, Renat Akhmerov  wrote:

> Hi,
>
> I’m looking for some hints on how to enable authentication via OpenID
> Connect protocol, particularly in Mistral. Actually, specific protocol is
> not so important. I’m mostly interested in conceptional vision here and I’d
> like to ask the community if what we would like to do makes sense.
>
> *Problem statement*
>
> Whereas there are people using Mistral as an OpenStack service with proper
> Keystone authentication etc. some people want to be able to use it w/o
> OpenStack at all or in some scenarios where OpenStack is just one thing
> that Mistral workflows should interact with.
>
> In one of our cases we want to use Mistral w/o OpenStack but we want to
> make Mistral perform authentication via OIDC. I’ve done some research on
> what Keystone already has that could help us do that and I found a group of
> plugins for OIDC authentication flows under [1]. The problem I see with
> these plugins for my particular case is that I still have to properly
> install Keystone and configure it for Federation since the plugins use
> Federation. Feels like a redundant time consuming step for me. A normal
> flow for these plugins is to first get so-called unscoped token via OIDC
> and then request a scoped token from Keystone via its Federation API. I
> think understand why it works this way, it’s well documented in Keystone
> docs. Briefly, it’s required to get user info, list of available resources
> etc, whatever OIDC server does not provide, it only works as an identity
> provider.
>
> What ideally I'd like to do is to avoid installing and configuring
> Keystone at all.
>

So, with the exception of token_endpoint (which is basically for debugging),
yes: all the plugins in keystoneauth are designed to work with keystone.
Keystone provides a whole bunch of things here like user, role and project
management - basically the Authorization that goes with OIDC's
authentication.

It also provides the auth_token middleware which reads those tokens and
provides a series of well known headers that you can use to know what
project you're in, do policy enforcement and basically all permission
management. For most projects this is what you care about. If you write
your own version of auth_token middleware for your identity provider you
can use whatever authentication you like.
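
To make that concrete, a toy WSGI middleware along those lines might look like
the sketch below (entirely my own sketch, not keystonemiddleware code; the
introspection endpoint, the claim names and the header mapping are all
assumptions):

    import requests


    class OIDCAuthMiddleware(object):
        """Toy stand-in for keystonemiddleware.auth_token that validates a
        bearer token against an (assumed) OIDC token introspection endpoint."""

        def __init__(self, app, introspection_url):
            self.app = app
            self.introspection_url = introspection_url

        def __call__(self, environ, start_response):
            token = environ.get('HTTP_X_AUTH_TOKEN')
            claims = self._introspect(token) if token else None
            if not claims or not claims.get('active'):
                start_response('401 Unauthorized',
                               [('Content-Type', 'text/plain')])
                return [b'Authentication required']
            # Mimic the kind of identity headers auth_token sets for the app.
            environ['HTTP_X_USER_ID'] = claims.get('sub', '')
            environ['HTTP_X_PROJECT_ID'] = claims.get('project', '')
            return self.app(environ, start_response)

        def _introspect(self, token):
            resp = requests.post(self.introspection_url,
                                 data={'token': token})
            return resp.json() if resp.ok else None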

You'll basically need a way of mapping the information you get from your
OIDC provider into the projects, roles and user info that makes sense for
your service. And when it gets sufficiently complex that you have to allow
different deployers to configure this in different ways and for any number
of protocols, you'll have re-created keystone's federation implementation.


> *Possible solution*
>
> What I’m thinking about is: would it be OK to just create a set of new
> authentication plugins under keystoneauth project that would do the same as
> existing ones but w/o getting a Keystone scoped token? That way we could
> still take advantage of existing keystone auth plugins framework but w/o
> having to install and configure Keystone service. I realize that we’ll lose
> some capabilities that Keystone provides but for many cases it would be
> enough just to authenticate on a client and then validate token from HTTP
> headers via OIDC server on server side. Just one more necessary thing to do
> here is to fill tenant/project but that could be extracted from a token.
>

So you can use keystoneauth to implement plugins that do not hit keystone.
A plugin basically has to implement this[1] interface which has no direct
ties to keystone. There is then a standard subclass of this that handles
most of the work for interacting with keystone that the existing plugins
all use. It's fairly well documented but if you have additional questions
let us know.
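
As a rough illustration (my own sketch, not an official plugin), a minimal
keystone-less plugin built on that interface could look like the following;
registering a loader for it under the 'keystoneauth1.plugin' entry point
namespace is what then lets clients reference it by name:

    from keystoneauth1 import plugin


    class StaticOIDCToken(plugin.BaseAuthPlugin):
        """Sketch of a plugin that never talks to keystone: it just
        replays a pre-obtained OIDC access token and a fixed endpoint."""

        def __init__(self, access_token, endpoint):
            super(StaticOIDCToken, self).__init__()
            self._token = access_token
            self._endpoint = endpoint

        def get_token(self, session, **kwargs):
            # Sent as the X-Auth-Token header by the default get_headers().
            return self._token

        def get_endpoint(self, session, **kwargs):
            return self._endpoint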

I'm pretty sure from here you can use the new version of openstackclient
and anything else that uses keystoneauth.

These plugins would probably not live in the keystoneauth repository unless
there were a lot more people interested in using them - however
keystoneauth, OSC, shade etc all specify the plugin to use via a name which
is a setuptools entrypoint, so as long as the plugin is installed on the
system you can use it even though it wasn't in the repo.


[1]
https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/plugin.py


>
> *Questions*
>
>
>1. Would this new plugin have a right to be part of the keystoneauth
>project even though the Keystone service is not involved at all? The alternative is
>just to teach Mistral to do authentication w/o using the keystone client at
>all. But IMO the advantage of having such a plugin (a group of plugins,
>actually) is that someone else could reuse it.
>
Not initially, but as I mentioned above, as long as it's installed on the
machine you want to use it from, that doesn't matter.

>
>2. Is there any existing code that we could reuse to solve this
>problem? Maybe what I’m describing is