Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread Na Zhu
Hi John,

Thanks for your effort. So the next plan is that you submit the WIP patches, then I 
submit test scripts for your code changes. Do you think that is OK?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" 
, Srilatha Tangirala 

Date:   2016/06/09 00:48
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Thanks – I added the code and everything builds; I just need to debug 
end-to-end now.  I think your approach is the best so far: all the IDL code 
for accessing ovs/ovn is in networking-ovn. The OVN driver in 
networking-sfc calls the IDL code to access ovs/ovn. There is minimal 
linkage between networking-sfc and networking-ovn, just one import:

from networking_ovn.ovsdb import impl_idl_ovn

I think this is what Ryan was asking for.

I have updated all repos so we can think about creating WIP patches.

Regards

John
From: Na Zhu 
Date: Wednesday, June 8, 2016 at 12:44 AM
To: John McDowall 
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, Srilatha Tangirala <
srila...@us.ibm.com>
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I think you can create an ovsdb IDL client in networking-sfc to connect to 
the OVN_Northbound DB, then call the APIs you add to networking-ovn to 
configure SFC.
Right now OVN is an ML2 mechanism driver (OVNMechanismDriver), not a core plugin; 
the OVN L3 plugin (OVNL3RouterPlugin) is a neutron service plugin like vpn, sfc, 
etc.

You can refer to the methods OVNMechanismDriver._ovn and 
OVNL3RouterPlugin._ovn; they both create an ovsdb IDL client object, so in 
your OVN driver you can do it the same way. Here is a code sample:

class OVNSfcDriver(driver_base.SfcDriverBase,
                   ovs_sfc_db.OVSSfcDriverDB):
    # ...

    @property
    def _ovn(self):
        if self._ovn_property is None:
            LOG.info(_LI("Getting OvsdbOvnIdl"))
            self._ovn_property = impl_idl_ovn.OvsdbOvnIdl(self)
        return self._ovn_property

    # ...

    @log_helpers.log_method_call
    def create_port_chain(self, context):
        port_chain = context.current
        for flow_classifier in port_chain:
            # first get the flow classifier contents, then:
            self._ovn.create_lflow_classifier()
        for port_pair_group in port_chain:
            # get the port_pair_group contents, then:
            self._ovn.create_lport_pair_group()
            for port_pair in port_pair_group:
                # first get the port_pair contents, then:
                self._ovn.create_lport_pair()






Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN, Srilatha Tangirala 
Cc:"disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date:2016/06/08 12:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno, Srilatha,

I need some help – I have fixed most of the obvious typos in the three 
repos and merged them with mainline. There is still a problem with the 
build, I think in mech_driver.py, but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function 
that creates a deep copy of the port-chain dict, 
create_port_chain(self, context, port_chain). 

Looking at networking-ovn I think it should use mech_driver.py so we can 
call the OVS-IDL to send the parameters to ovn. However I am not sure of 
the best way to do it. Could you make some suggestions or send me some 
sample code showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also, Louis from the 
networking-sfc team has posted a draft blueprint.

Regards

John

From: Na Zhu 
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall , Ryan Moats <
rmo...@us.ibm.com>
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, Srilatha Tangirala <

Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread Vikram Hosakote (vhosakot)
+1

Regards,
Vikram Hosakote
IRC: vhosakot

From: "Swapnil Kulkarni (coolsvap)" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, June 8, 2016 at 8:54 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] Request for changing the meeting time to 1600 
UTC for all meetings

Dear Kollagues,

Some time ago we discussed the requirement of alternating meeting
times for the Kolla weekly meeting because major contributors from
kolla-mesos were not able to attend the weekly meeting at 1600 UTC, and we
implemented alternate US/APAC meeting times.

With kolla-mesos not active anymore, and looking at the current active
contributors, I wish to reinstate the 1600 UTC time for all Kolla
weekly meetings.

Please let me know your views.

--
Best Regards,
Swapnil Kulkarni
irc : coolsvap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-08 Thread Chris Friesen

On 06/03/2016 12:03 PM, Paul Michali wrote:

Thanks for the link Tim!

Right now, I have two things I'm unsure about...

One is that I had 1945 huge pages left (of size 2048k) and tried to create a VM
with a small flavor (2GB), which should need 1024 pages, but Nova indicated that
it wasn't able to find a host (and QEMU reported an allocation issue).

The other is that VMs are not being evenly distributed on my two NUMA nodes, and
instead, are all getting created on one NUMA node. I'm not sure if that is expected
(or whether setting mem_page_size to 2048 is the proper way to address it).
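
For reference, if mem_page_size is the knob in question, it is normally set as a
flavor extra spec, for example (the flavor name here is just an example):

    nova flavor-key m1.small set hw:mem_page_size=2048

hw:mem_page_size also accepts "small", "large" or "any" instead of an explicit size.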



Just in case you haven't figured out the problem...

Have you checked the per-host-numa-node 2MB huge page availability on your host? 
 If it's uneven then that might explain what you're seeing.
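
One way to check, assuming the usual Linux sysfs layout for 2MB pages:

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

If the free counts differ a lot between nodes, a guest that needs 1024 pages on a
single node may not fit on the fuller node.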


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread MD. Nadeem
+1

From: Mauricio Lima [mailto:mauricioli...@gmail.com]
Sent: Wednesday, June 8, 2016 7:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Request for changing the meeting time to 
1600 UTC for all meetings

+1

2016-06-08 10:29 GMT-03:00 Jeffrey Zhang 
>:
It will be midnight (0:00) in my local time, but I think I am OK with it,
so +1 for this.

On Wed, Jun 8, 2016 at 9:12 PM, Paul Bourke 
> wrote:
+1

On 08/06/16 13:54, Swapnil Kulkarni (coolsvap) wrote:
Dear Kollagues,

Some time ago we discussed the requirement of alternating meeting
times for the Kolla weekly meeting because major contributors from
kolla-mesos were not able to attend the weekly meeting at 1600 UTC, and we
implemented alternate US/APAC meeting times.

With kolla-mesos not active anymore, and looking at the current active
contributors, I wish to reinstate the 1600 UTC time for all Kolla
weekly meetings.

Please let me know your views.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] OSIC cluster accepted

2016-06-08 Thread MD. Nadeem
If kolla-upgrade is on the list, I would like to help test it.

From: Vikram Hosakote (vhosakot) [mailto:vhosa...@cisco.com]
Sent: Wednesday, June 8, 2016 11:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] OSIC cluster accepted

I'd like to help with kolla scaling on the OSIC cluster too.

Regards,
Vikram Hosakote
IRC: vhosakot

From: Jeffrey Zhang >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, June 8, 2016 at 9:42 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla] OSIC cluster accepted

Cool.
Do we have a test list now?
And how can I help with this? I am very interested in this
test.

On Tue, Jun 7, 2016 at 4:52 PM, Paul Bourke 
> wrote:
Michal,

I'd be interested in helping with this. Keep us updated!

-Paul


On 03/06/16 17:58, Michał Jastrzębski wrote:
Hello Kollagues,

Some of you might know that I submitted a request for 130 nodes from the
OSIC cluster for testing Kolla. We just got accepted. The time window will
be 3 weeks between 7/22 and 8/14, so we need to make the most of it. I'd
like some volunteers to help me with tests, setup and such. We need to
prepare test scenarios, streamline bare-metal deployment and prepare the
architectures we want to run through. I would also like to make use of our
global distribution to keep the nodes utilized 24 hours a day.

The nodes we're talking about are pretty powerful: 256 GB of RAM each, 12
SSD disks in each, and 10 Gb networking all the way. We will get IPMI
access to them, so bare-metal provisioning will have to be there too
(good time to test out bifrost, right? :))

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-08 Thread Chris Friesen

On 06/07/2016 04:26 PM, Ben Meyer wrote:

On 06/07/2016 06:09 PM, Samuel Merritt wrote:

On 6/7/16 12:00 PM, Monty Taylor wrote:

[snip]

I'd rather see us focus energy on Python3, asyncio and its pluggable
event loops. The work in:

http://magic.io/blog/uvloop-blazing-fast-python-networking/

is a great indication in an actual apples-to-apples comparison of what
can be accomplished in python doing IO-bound activities by using modern
Python techniques. I think that comparing python2+eventlet to a fresh
rewrite in Go isn't 100% of the story. A TON of work has gone into
Python that we're not taking advantage of because we're still supporting
Python2. So what I'd love to see in the realm of comparative
experimentation is to see if the existing Python we already have can be
leveraged as we adopt newer and more modern things.


Asyncio, eventlet, and other similar libraries are all very good for
performing asynchronous IO on sockets and pipes. However, none of them
help for filesystem IO. That's why Swift needs a golang object server:
the go runtime will keep some goroutines running even though some
other goroutines are performing filesystem IO, whereas filesystem IO
in Python blocks the entire process, asyncio or no asyncio.


That can be modified. gevent has a tool
(http://www.gevent.org/gevent.fileobject.html) that enables file IO
to be async as well by putting the file into non-blocking mode. I've
used it, and it works and scales well.
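
A minimal sketch of that pattern, using gevent's threadpool-backed FileObjectThread
wrapper (the file path and greenlet count are arbitrary):

    import gevent
    from gevent.fileobject import FileObjectThread

    def read_file(path):
        raw = open(path, 'rb')        # ordinary blocking file handle
        f = FileObjectThread(raw)     # hands the blocking IO off to a threadpool
        try:
            return f.read()           # other greenlets keep running meanwhile
        finally:
            f.close()

    jobs = [gevent.spawn(read_file, '/etc/hostname') for _ in range(10)]
    gevent.joinall(jobs)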


Arguably non-blocking isn't really async when it comes to reads.  I suspect what 
we really want is full-async where you issue a request and then get notified 
when it's done.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar][zaqar-ui] Nomating Shu Muto for Zaqar-UI core

2016-06-08 Thread Shuu Mutou
Hi team,

Thank you for nominating me for Zaqar-UI core and voting for me.
I'm looking forward to working with you guys more.

Yours,
Shu


> -Original Message-
> From: Fei Long Wang [mailto:feil...@catalyst.net.nz]
> Sent: Wednesday, June 08, 2016 7:15 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [zaqar][zaqar-ui] Nomating Shu Muto for
> Zaqar-UI core
> 
> +1 and thanks for all the great work.
> 
> 
> On 08/06/16 08:27, Thai Q Tran wrote:
> 
> 
>   Hello all,
> 
>   I am pleased to nominate Shu Muto to the Zaqar-UI core team. Shu's
> reviews are extremely thorough and his work exemplary. His expertise in
> AngularJS, translation, and project infrastructure proved to be invaluable.
> His support and reviews have helped the project progress. Combined with
> his strong understanding of the project, I believe he will help guide us
> in the right direction and allow us to keep our current pace.
> 
>   Please vote +1 or -1 to the nomination.
> 
>   Thanks,
>   Thai (tqtran)
> 
> 
> 
> 
> 
>   __
> 
>   OpenStack Development Mailing List (not for usage questions)
>   Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
>   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
> dev
> 
> 
> --
> Cheers & Best regards,
> Fei Long Wang (王飞龙)
> --
> 
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz 
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-08 Thread Matt Riedemann

On 6/8/2016 7:19 PM, Bias, Randy wrote:

I just want to point out that this appears to imply that open source
storage backends for OpenStack would be prioritized over closed-source
ones and I think that runs counter to the general inclusivity in the
community.  I assume it's just a turn of phrase, but I suspect it could be
easily misinterpreted to mean that open source storage projects (external
to OpenStack) could be prioritized over open source ones, creating a very
uneven playing field, which would potentially be very bad from a
perception point of view.

Thanks,


--Randy

VP, Technology, EMC Corporation
Top 10 OpenStack & Cloud Pioneer
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
EXEC ADMIN: inna.k...@emc.com, +1 (415) 601-1168




On 5/10/16, 9:40 AM, "Matt Riedemann"  wrote:


A closed-source vendor-specific ephemeral backend for a single virt
driver in Nova isn't a review priority for the release.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The implication has more to do with the ability to test and develop on 
closed-source vendor-specific backends. So there is a smaller group of 
people that can work on these things, and we can't/don't test them in the 
community CI system, which is what the third-party CI requirements are for.


As a data point, it took me about 4 releases to get DB2 support into 
Nova (landed in Liberty) and we ripped that out a couple of weeks ago. 
It wasn't maintained (no CI), and very few people knew how DB2 worked or 
had access to an environment to test it out.


So, no, the ScaleIO backend spec is not being intentionally blocked 
because it's a closed-source vendor solution. I didn't mean it that way. 
I was just trying to point out the matter of priorities for Nova in this 
release. That blueprint is high priority for a single vendor but low 
priority when compared to the very large backlog of items that Nova has 
for the release as a whole.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-08 Thread Matt Riedemann

On 6/8/2016 5:51 PM, Michael Still wrote:

This seems like the sort of thing we should document in the devref. I
agree we shouldn't be doing any more of the old thing and should provide
a worked example of the new thing.

Michael

--
Rackspace Australia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Agreed, but it's the worked example part that we don't have yet, 
chicken/egg. So we can drop the hammer on all new things until someone 
does it, which sucks, or hope that someone volunteers to work the first 
example.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] [qa] Graphs of how long jobs take?

2016-06-08 Thread Jay Faulkner

Thanks a bunch Mikhail, this was very helpful!

I've started a dashboard to track Ironic tempest job speeds as well as 
IPA-src job speeds; it's here: 
http://graphite.openstack.org/dashboard/#ironic-job-duration


Please feel free to improve upon it or add additional useful metrics. 
Given duration of execution is a frequent complaint about our CI, it 
seems like a good thing to graph!



Thanks again,
Jay

On 6/8/16 5:20 PM, Mikhail Medvedev wrote:

Hi Jay,

On Wed, Jun 8, 2016 at 5:56 PM, Jay Faulkner > wrote:


Hey all,

As you may recall, recently Ironic was changed to use iPXE and
TinyIPA in the jobs, as part of an attempt to get the jobs to use
less ram and perhaps even run more quickly in the short run.
However, when I tried to make a graph at graphite.openstack.org
 showing the duration of the jobs,
it doesn't look like that metric was available
(stats.zuul.pipeline.check.job.check-tempest-dsvm-ironic-pxe_ssh.*
appears to only track the job result).


I did find two metrics that seem to be what you are looking for:

stats.timers.nodepool.job.gate-tempest-dsvm-ironic-pxe_ssh.master.ubuntu-trusty.runtime.mean
stats.timers.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.*.mean 




Is there a common or documented way or tool to graph duration of
jobs so I can see the real impact of this change?


Thanks a bunch,

Jay Faulkner

OSIC


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Mikhail Medvedev
IBM


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread John McDowall
Juno,

Ok, I added the code and I have port-chains creating up to the IDL 
(networking-sfc -> networking-ovn); now I have an issue with my IDL that I need to fix.

So we have a model for creating port-chains all the way through, and I have updated the 
repos to reflect the changes.

Thanks for your help.

Regards

John

From: Na Zhu >
Date: Wednesday, June 8, 2016 at 12:44 AM
To: John McDowall 
>
Cc: "disc...@openvswitch.org" 
>, "OpenStack 
Development Mailing List (not for usage questions)" 
>, 
Srilatha Tangirala >
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I think you can create an ovsdb IDL client in networking-sfc to connect to 
the OVN_Northbound DB, then call the APIs you add to networking-ovn to configure 
SFC.
Right now OVN is an ML2 mechanism driver (OVNMechanismDriver), not a core plugin; the 
OVN L3 plugin (OVNL3RouterPlugin) is a neutron service plugin like vpn, sfc, etc.

You can refer to the methods OVNMechanismDriver._ovn and OVNL3RouterPlugin._ovn; 
they both create an ovsdb IDL client object, so in your OVN driver you can do it 
the same way. Here is a code sample:

class OVNSfcDriver(driver_base.SfcDriverBase,
                   ovs_sfc_db.OVSSfcDriverDB):
    # ...

    @property
    def _ovn(self):
        if self._ovn_property is None:
            LOG.info(_LI("Getting OvsdbOvnIdl"))
            self._ovn_property = impl_idl_ovn.OvsdbOvnIdl(self)
        return self._ovn_property

    # ...

    @log_helpers.log_method_call
    def create_port_chain(self, context):
        port_chain = context.current
        for flow_classifier in port_chain:
            # first get the flow classifier contents, then:
            self._ovn.create_lflow_classifier()
        for port_pair_group in port_chain:
            # get the port_pair_group contents, then:
            self._ovn.create_lport_pair_group()
            for port_pair in port_pair_group:
                # first get the port_pair contents, then:
                self._ovn.create_lport_pair()







Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:John McDowall 
>
To:Na Zhu/China/IBM@IBMCN, Srilatha Tangirala 
>
Cc:"disc...@openvswitch.org" 
>, "OpenStack 
Development Mailing List (not for usage questions)" 
>
Date:2016/06/08 12:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Juno, Srilatha,

I need some help – I have fixed most of the obvious typos in the three repos 
and merged them with mainline. There is still a problem with the build, I think 
in mech_driver.py, but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function that 
creates a deep copy of the port-chain dict, 
create_port_chain(self, context, port_chain).

Looking at networking-ovn I think it should use mech_driver.py so we can call 
the OVS-IDL to send the parameters to ovn. However I am not sure of the best 
way to do it. Could you make some suggestions or send me some sample code 
showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also, Louis from the networking-sfc team 
has posted a draft blueprint.

Regards

John

From: Na Zhu >
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall 
>, Ryan 
Moats >
Cc: "disc...@openvswitch.org" 
>, "OpenStack 
Development Mailing List (not for usage questions)" 
>, 
Srilatha Tangirala >
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I do not know a better approach; I think it is good to write all the 
parameters in the creation of a port chain, as this avoids saving a lot of data in 
the northbound DB that is not used. We can do it that way currently, if the 
community has 

Re: [openstack-dev] [neutron][upgrades] Bi-weekly upgrades work status. 6/2/2016

2016-06-08 Thread Carl Baldwin
On Thu, Jun 2, 2016 at 2:29 PM, Korzeniewski, Artur
 wrote:
> I would like to remind that agreed approach at Design Summit in Austin was,
> that every new resource added to neutron should have OVO implemented. Please
> comply, and core reviewers please take care of this requirements in patches
> you review.

How about the networksegments table?  It was already a part of the ML2
model but was moved out of ML2 to make it available for the OVN
plugin.  Just days after the summit, it was made into a first-class
resource [2] with its own CRUD operations.  Is this part of the model
on your radar?  What needs to be done?

Since then, a relationship has been added between segment and subnet
[3].  Also, a mapping to hosts has been added [4].  What needs to be
done for OVO for these?  I'm sorry if these are slipping through the
cracks but we're still learning.  There are a couple of other model
tweaks in play on this topic too [5][6].  I'd like to begin doing
these the correct way.

Carl

[1] https://review.openstack.org/#/c/242393/
[2] https://review.openstack.org/#/c/296603/
[3] https://review.openstack.org/#/c/288774/
[4] https://review.openstack.org/#/c/285548/
[5] https://review.openstack.org/#/c/326261/
[6] https://review.openstack.org/#/c/293305/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] [qa] Graphs of how long jobs take?

2016-06-08 Thread Mikhail Medvedev
Hi Jay,

On Wed, Jun 8, 2016 at 5:56 PM, Jay Faulkner  wrote:

> Hey all,
>
> As you may recall, recently Ironic was changed to use iPXE and TinyIPA in
> the jobs, as part of an attempt to get the jobs to use less ram and perhaps
> even run more quickly in the short run. However, when I tried to make a
> graph at graphite.openstack.org showing the duration of the jobs, it
> doesn't look like that metric was available
> (stats.zuul.pipeline.check.job.check-tempest-dsvm-ironic-pxe_ssh.* appears
> to only track the job result).
>

I did find two metrics that seem to be what you are looking for:

stats.timers.nodepool.job.gate-tempest-dsvm-ironic-pxe_ssh.master.ubuntu-trusty.runtime.mean
stats.timers.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.*.mean
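
Those targets can be plotted straight from Graphite's render endpoint, for example
(the time range and output format below are arbitrary):

    http://graphite.openstack.org/render/?target=stats.timers.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.*.mean&from=-30days&format=png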



> Is there a common or documented way or tool to graph duration of jobs so I
> can see the real impact of this change?
>
>
> Thanks a bunch,
>
> Jay Faulkner
>
> OSIC
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Mikhail Medvedev
IBM
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-08 Thread Bias, Randy
I just want to point out that this appears to imply that open source
storage backends for OpenStack would be prioritized over closed-source
ones and I think that runs counter to the general inclusivity in the
community.  I assume it's just a turn of phrase, but I suspect it could be
easily misinterpreted to mean that open source storage projects (external
to OpenStack) could be prioritized over open source ones, creating a very
uneven playing field, which would potentially be very bad from a
perception point of view.

Thanks,


--Randy

VP, Technology, EMC Corporation
Top 10 OpenStack & Cloud Pioneer
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
EXEC ADMIN: inna.k...@emc.com, +1 (415) 601-1168




On 5/10/16, 9:40 AM, "Matt Riedemann"  wrote:

>A closed-source vendor-specific ephemeral backend for a single virt
>driver in Nova isn't a review priority for the release. 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Revert "Migrate tripleo to centos-7"

2016-06-08 Thread Paul Belanger
On Mon, Jun 06, 2016 at 08:08:40PM -0400, Dan Prince wrote:
> Sending it again to [TripleO].
> 
> On Mon, 2016-06-06 at 20:06 -0400, Dan Prince wrote:
> > Hi all,
> > 
> > Having a bit of a CI outage today due to (I think) the switch to
> > Centos
> > Jenkins slaves. I'd like to suggest that we revert that quickly to
> > keep
> > things moving in TripleO:
> > 
> > https://review.openstack.org/326182 Revert "Migrate tripleo to
> > centos-
> > 7"
> > 
> > And then perhaps we can follow up with a bit more Centos 7 testing
> > before we switch over completely.
> > 
> > Dan
> 
I spent the last 2 days looking into this. Currently the experimental tripleo jobs
for centos-7 are green. We actually didn't need to fix anything specific to the
centos-7 migration; the issues were simply exposed at the time of migration.

Both issues revolved around the external gearman server and an HDD space issue on
the underclouds.

As a result, I've proposed the revert[1] of the revert.

[1] https://review.openstack.org/#/c/327425/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Stepping down from Trove Core

2016-06-08 Thread Peter Stachowski
Hi Victoria,

Thanks for all the help and good luck in your future work!

Peter

From: Victoria Martínez de la Cruz [mailto:victo...@vmartinezdelacruz.com]
Sent: June-07-16 2:34 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Stepping down from Trove Core


After one year and a half contributing to the Trove project,

I have decided to change my focus and start gaining more experience

on other storage and data-management related projects.



Because of this decision, I'd like to ask to be removed from the Trove core 
team.



I want to thank the Trove community for all the good work and shared experiences.

Working with you all has been a very fulfilling experience.



All the best,



Victoria
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting Thursday June 9th

2016-06-08 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for June 9th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to it if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Tomorrow in addition to status updates, we plan to talk more about the
Application Development improvement effort being led by Igor Marnat.
Please join us if you have any thoughts or opinions on that front (and
read these two previous messages for the background:
http://lists.openstack.org/pipermail/user-committee/2016-May/000854.html
and http://lists.openstack.org/pipermail/openstack-dev/2016-May/095917.html).

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [infra] [qa] Graphs of how long jobs take?

2016-06-08 Thread Jay Faulkner

Hey all,

As you may recall, recently Ironic was changed to use iPXE and TinyIPA 
in the jobs, as part of an attempt to get the jobs to use less ram and 
perhaps even run more quickly in the short run. However, when I tried to 
make a graph at graphite.openstack.org showing the duration of the jobs, 
it doesn't look like that metric was available 
(stats.zuul.pipeline.check.job.check-tempest-dsvm-ironic-pxe_ssh.* 
appears to only track the job result).


Is there a common or documented way or tool to graph duration of jobs so 
I can see the real impact of this change?



Thanks a bunch,

Jay Faulkner

OSIC


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-08 Thread Michael Still
+Angus

On Thu, Jun 9, 2016 at 7:10 AM, Matt Riedemann 
wrote:

> While sitting in Angus' cross-project session on oslo.privsep at the
> Austin summit I believe I had a conversation with myself in my head that
> Nova should stop adding new rootwrap filters and anything new should use
> oslo.privsep.
>
> For example:
>
> https://review.openstack.org/#/c/182257/
>
> However, we don't have anything in Nova using oslo.privsep directly. We
> have os-brick and soon we'll have os-vif using oslo.privsep, but those are
> indirect.
>
> Looking at the change in Neutron for using privsep [1] it's pretty
> complicated. So I'm struggling with requiring new changes to Nova that
> require new rootwrap filters to use privsep when we don't have an example
> in tree of how to do this.
>
> Is anyone working on something like that yet that I haven't seen? If not,
> has anyone thought about doing something or is interested in doing it?
> Because I don't think it's really fair to prevent new things until that
> happens - although the flip side to that is there isn't an example until
> someone is forced to do it.
>
> Other thoughts? Is anyone willing to help here? I'm assuming there will
> need to be hand-holding from Angus at least initially.
>
> [1] https://review.openstack.org/#/c/155631/


This seems like the sort of thing we should document in the devref. I agree
we shouldn't be doing any more of the old thing and should provide a worked
example of the new thing.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-06-08 Thread Nikhil Komawar

Please note: due to last-minute additions to the RSVP list, we have
changed the tool to be used. Updated info can now be found here:
https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync .

Please try to join 5-10 minutes before the meeting as you may have to
install a plugin and give us time to fix any issues you may face.


On 6/7/16 11:51 PM, Nikhil Komawar wrote:
> Hi all,
>
>
> Thanks a ton for the feedback on the time and thanks to Kris for adding
> items to the agenda [1].
>
>
> Just wanted to announce a few things here:
>
>
> The final decision on the time has been made after a lot of discussions.
>
> This event will be on *Thursday June 9th at 1130 UTC* 
>
> Here's [2] how it looks at/near your timezone.
>
>
> It somewhat manages to accommodate people from different (and extremely
> diverse) timezones but if it's too early or too late for you for this
> full *2 hour* sync, please add your interest topics and name against it
> so that we can schedule your items either later or earlier during the
> event. The schedule will be tentative unless significant/enough
> information is provided on time to help set the schedule in advance.
>
>
> I had kept open agenda from the developers' side so that we can
> collaborate better on the pain points of the operators. You are very
> welcome to add items to the etherpad [1].
>
>
> The event has been updated to the Virtual Sprints wiki [3] and the
> details have been added to the etherpad [1] as well. Please feel free to
> reach out to me for any questions.
>
>
> Thanks for the RSVP and see you soon virtually.
>
>
> [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
> [2]
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=11=30=0=881=196=47=22=157=87=24=78=283=1800
> [3]
> https://wiki.openstack.org/wiki/VirtualSprints#Glance_and_Operators_mid-cycle_sync_for_Newton
>
>
> Cheers
>
>
> On 5/31/16 5:13 PM, Nikhil Komawar wrote:
>> Hey,
>>
>>
>> Thanks for your interest.
>>
>> Sorry about the confusion. Please consider the same time for Thursday
>> June 9th.
>>
>>
>> Thur June 9th proposed time:
>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=11=0=0=881=196=47=22=157=87=24=78=283
>>
>>
>> Alternate time proposal:
>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=23=0=0=881=196=47=22=157=87=24=78=283
>>
>>
>> Overall time planner:
>> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160609=881=196=47=22=157=87=24=78=283
>>
>>
>>
>> It will really depend on who is strongly interested in the discussions.
>> Scheduling with EMEA, Pacific time (US), Australian (esp. Eastern) is
>> quite difficult. If there's strong interest from San Jose, we may have
>> to settle for a rather awkward choice below:
>>
>>
>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=4=0=0=881=196=47=22=157=87=24=78=283
>>
>>
>>
>> A vote of +1, 0, -1 on these times would help long way.
>>
>>
>> On 5/31/16 4:35 PM, Belmiro Moreira wrote:
>>> Hi Nikhil,
>>> I'm interested in this discussion.
>>>
>>> Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
>>> Are you suggesting to change also the date? Because in the new
>>> timeanddate suggestions is 6/7 of June.
>>>
>>> Belmiro
>>>
>>> On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar >> > wrote:
>>>
>>> Hey,
>>>
>>>
>>>
>>>
>>>
>>> Thanks for the feedback. 0800UTC is 4am EDT for some of the US
>>> Glancers :-)
>>>
>>>
>>>
>>>
>>>
>>> I request this time which may help the folks in Eastern and Central US
>>>
>>> time.
>>>
>>> 
>>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=7=11=0=0=881=196=47=22=157=87=24=78
>>>
>>>
>>>
>>>
>>>
>>> If it still does not work, I may have to poll the folks in EMEA on how
>>>
>>> strong their intentions are for joining this call.  Because
>>> another time
>>>
>>> slot that works for folks in Australia & US might be too inconvenient
>>>
>>> for those in EMEA:
>>>
>>> 
>>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=6=23=0=0=881=196=47=22=157=87=24=78
>>>
>>>
>>>
>>>
>>>
>>> Here's the map of cities that may be involved:
>>>
>>> 
>>> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160607=881=196=47=22=157=87=24=78
>>>
>>>
>>>
>>>
>>>
>>> Please let me know which ones are possible and we can try to work
>>> around
>>>
>>> the times.
>>>
>>>
>>>
>>>
>>>
>>> On 5/31/16 2:54 AM, Blair Bethwaite wrote:
>>>
>>> > Hi Nikhil,
>>>
>>> >
>>>
>>> > 2000UTC might catch a few kiwis, but it's 6am everywhere on the east
>>>
>>> > coast of Australia, and even earlier out west. 0800UTC, on the other
>>>
>>> > hand, would be more sociable.
>>>
>>> >
>>>
>>> > On 26 May 2016 at 15:30, Nikhil Komawar >> > wrote:
>>>

Re: [openstack-dev] [nova] Policy check for network:attach_external_network

2016-06-08 Thread Matt Riedemann

On 6/8/2016 4:27 PM, Ryan Rossiter wrote:

Taking a look at [1], I got curious as to why all of the old network policies 
were deleted except for network:attach_external_network. With the help of 
mriedem, it turns out that policy is checked indirectly on the compute node, in 
allocate_for_instance(). mriedem pointed out that this policy doesn’t work very 
well from an end-user perspective, because if you have an existing instance and 
want to now attach it to an external network, it’ll reschedule it, and if you 
don’t have permission to attach to an external network, it’ll bounce around the 
scheduler until the user receives the infamous “No Valid Host”.

My main question is: how do we want to handle this? I’m thinking because 
Neutron has all of the info as to whether or not the network we’re creating a 
port on is external, we could just let Neutron handle all of the policy work. 
That way eventually the policy can just leave nova’s policy.json. But that’ll 
take a while.

A temporary alternative is we move that policy check to the API. That way we 
can accurately deny the user instead of plumbing things down into the compute 
for them to be denied there. I did a scavenger hunt and found that the policy 
check was added because of [2], which, in the end, is just a permissions thing. 
So that could get added to the API checks when 1) creating an instance and 2) 
attaching an existing instance to another network. Are there any other places 
this API check would be needed?

[1]: https://review.openstack.org/#/c/320751/
[2]: https://bugs.launchpad.net/nova/+bug/1352102

-
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



When I looked at this (briefly) the check is done in 
allocate_for_instance which is called when creating a server and when 
attaching interfaces to an existing server. The create is a cast and 
gets into the NoValidHost failure. The attach is a call and the user 
would at least get a 400 back.


If we moved that check to the validate_networks() method it would get 
validated in the API when creating the server, which is good for 
avoiding the NoValidHost case. However, attach_interfaces doesn't call 
validate_networks and I'm not really sure why, it seems we'd want the 
same network/port/quota checking in that case. Although 
validate_networks also returns the number of instances you can create 
for the multi-create scenario - which is really just creating servers, 
not attaching interfaces.


So we'd either have to call validate_networks when attaching interfaces, 
or do the policy check in the attach_interfaces flow - which would mean 
getting the available networks up front, which also sucks.


I guess we could just do that check in both validate_networks (server 
create API) and allocate_for_instance (attach interfaces API). The gross 
thing about leaving it in allocate_for_instance is you have a policy 
check in the compute node still.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][ironic] ironic-python-agent 1.0.3 release (liberty)

2016-06-08 Thread no-reply
We are eager to announce the release of:

ironic-python-agent 1.0.3: Ironic Python Agent Ramdisk

This release is part of the liberty stable release series.

For more details, please see below.

Changes in ironic-python-agent 1.0.2..1.0.3
---

f47876a Updated from global requirements
9be8a3b Install qemu-image from backports repo
8c5968a Correct link to enabling agent drivers
40d70ea Fix full_trusty_build once and for all
6fa4498 Append BRANCH_PATH to filenames of build output
774bbdf Updated from global requirements

Diffstat (except docs and test files)
-

Dockerfile |  7 +++
imagebuild/coreos/full_trusty_build.sh | 24 ++--
requirements.txt   |  2 +-
test-requirements.txt  |  2 +-
5 files changed, 28 insertions(+), 9 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1873b3a..caf94bd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@ pbr>=1.6
-Babel>=1.3
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index 994947e..12fbd37 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -17 +17 @@ oslosphinx>=2.5.0 # Apache-2.0
-reno>=0.1.1  # Apache2
+reno>=0.1.1 # Apache2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Policy check for network:attach_external_network

2016-06-08 Thread Ryan Rossiter
Taking a look at [1], I got curious as to why all of the old network policies 
were deleted except for network:attach_external_network. With the help of 
mriedem, it turns out that policy is checked indirectly on the compute node, in 
allocate_for_instance(). mriedem pointed out that this policy doesn’t work very 
well from an end-user perspective, because if you have an existing instance and 
want to now attach it to an external network, it’ll reschedule it, and if you 
don’t have permission to attach to an external network, it’ll bounce around the 
scheduler until the user receives the infamous “No Valid Host”.

My main question is: how do we want to handle this? I’m thinking because 
Neutron has all of the info as to whether or not the network we’re creating a 
port on is external, we could just let Neutron handle all of the policy work. 
That way eventually the policy can just leave nova’s policy.json. But that’ll 
take a while.

A temporary alternative is we move that policy check to the API. That way we 
can accurately deny the user instead of plumbing things down into the compute 
for them to be denied there. I did a scavenger hunt and found that the policy 
check was added because of [2], which, in the end, is just a permissions thing. 
So that could get added to the API checks when 1) creating an instance and 2) 
attaching an existing instance to another network. Are there any other places 
this API check would be needed?
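
For reference, the knob being discussed is the entry in nova's policy.json that
looks roughly like this (the default rule shown is only illustrative; deployments
can override it):

    "network:attach_external_network": "rule:admin_api",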

[1]: https://review.openstack.org/#/c/320751/
[2]: https://bugs.launchpad.net/nova/+bug/1352102

-
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [VPNaaS] Support for Stronger hashes and combined mode ciphers

2016-06-08 Thread Mark Fenwick

Hi,

I was wondering if there are any plans to extend support for IPsec and 
IKE algorithms. Looks like only AES-CBC mode and SHA1 are supported.


It would be nice to see:

SHA256, SHA384, SHA512

As well as the combined mode ciphers:

AES-CCM and AES-GCM

StrongSWAN already supports all of these ciphers and hashes.
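
For comparison, a strongSwan proposal using those algorithms looks roughly like
this in ipsec.conf (the connection name and DH group are picked arbitrarily):

    conn example-tunnel
        keyexchange=ikev2
        ike=aes256gcm16-prfsha384-modp2048!
        esp=aes256gcm16-modp2048!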

Thanks

Mark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-08 Thread Matt Riedemann
While sitting in Angus' cross-project session on oslo.privsep at the 
Austin summit I believe I had a conversation with myself in my head that 
Nova should stop adding new rootwrap filters and anything new should use 
oslo.privsep.


For example:

https://review.openstack.org/#/c/182257/

However, we don't have anything in Nova using oslo.privsep directly. We 
have os-brick and soon we'll have os-vif using oslo.privsep, but those 
are indirect.


Looking at the change in Neutron for using privsep [1] it's pretty 
complicated. So I'm struggling with requiring new changes to Nova that 
require new rootwrap filters to use privsep when we don't have an 
example in tree of how to do this.
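
For context, the skeleton of a privsep conversion is roughly the following; the
module layout, context name and capability list here are purely illustrative, not
an agreed Nova design:

    # e.g. nova/privsep.py
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_privileged',
        pkg_root='nova',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    # and wherever the privileged helper lives:
    @sys_admin_pctxt.entrypoint
    def set_link_mtu(ifname, mtu):
        # runs in the privileged daemon process, not the caller's process
        with open('/sys/class/net/%s/mtu' % ifname, 'w') as f:
            f.write(str(mtu))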


Is anyone working on something like that yet that I haven't seen? If 
not, has anyone thought about doing something or is interested in doing 
it? Because I don't think it's really fair to prevent new things until 
that happens - although the flip side to that is there isn't an example 
until someone is forced to do it.


Other thoughts? Is anyone willing to help here? I'm assuming there will 
need to be hand-holding from Angus at least initially.


[1] https://review.openstack.org/#/c/155631/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] vision on new modules

2016-06-08 Thread Emilien Macchi
Hi folks,

Over the last months we've been creating more and more modules [1] [2]
and I would like to take the opportunity to continue some discussion
we had during the last Summits about the quality of our modules.

[1] octavia, vitrage, ec2api, tacker, watcher, congress, magnum,
mistral, zaqar, etc.
[2] by the end of Newton, we'll have ~ 33 Puppet modules !

Announce your work
As a reminder, we have defined a process when adding new modules:
http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
This process is really helpful to scale our project and easily add modules.
If you're about to start a new module, I suggest you start this
process and avoid starting it on your personal GitHub, because you'll
lose the valuable community review on your work.

Iterate
I've noticed some folks pushing 3000 LOC in Gerrit when adding the
bits for new Puppet modules (after the first cookiecutter init).
That's IMHO bad, because it makes reviews harder and slower and exposes
the risk of missing something during the review process. Please write
modules bit by bit.
Example: start with init.pp for common bits, then api.pp, etc.
For each bit, add its unit tests & functional tests (beaker). It will
allow us to write modules with good design, good tests and good code
in general.

Write tests
A good Puppet module is one that we can use to successfully deploy an
OpenStack service. For that, please add beaker tests when you're
initiating a module. Not at the end of your work, but for every new
class or feature.
It helps us easily detect issues that we'll hit when running the Puppet
catalog and quickly fix them. It also helps the community report feedback
on packaging and Tempest, or detect issues in our libraries.
If you're not familiar with beaker, you'll see in existing modules
that there is nothing complicated: we basically write a manifest that
will deploy the service.
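
A typical beaker acceptance manifest is just a handful of resource declarations,
along these lines (the class names below are placeholders for whatever module you
are writing, not a real module):

    class { '::myservice::db::mysql':
      password => 'a_big_secret',
    }
    class { '::myservice::keystone::auth':
      password => 'a_big_secret',
    }
    class { '::myservice::api': }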


If you're new in this process, please join our IRC channel on freenode
#puppet-openstack and don't hesitate to poke us.

Any feedback / comment is highly welcome,
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] neutron-lib and dependencies in neutron reference implementation

2016-06-08 Thread Henry Gessau
One of the goals of neutron-lib is to reduce the chances of a code change in
neutron core breaking other repos. We want to get to a point where no repo
imports anything from neutron core.

So if there is some value shared between neutron core and one or more other
repos, then the value should go in neutron-lib.

Your question seems to be around "neutron reference implementation", but I
don't think that is relevant to what goes into neutron-lib.

You could argue that the values in neutron_lib/constants.py are a big mix of
unrelated items, and that we may want to divide them up into separate
_constants.py modules. But then I could argue that would proliferate the
number of imports required for many repos.

Gal Sagie  wrote:
> For example, references to the various different agents, which are
> implementation details to me.
> 
> On Wed, Jun 8, 2016 at 8:51 PM, Henry Gessau  > wrote:
> 
> Gal Sagie > wrote:
> > Hello all,
> >
> > I have recently came across some missing constants in neutron-lib and 
> sent
> > a patch but i wanted to try and understand the scope of the lib.
> >
> > I see that the Neutron lib consist of many definitions which are 
> actually
> > part of the reference implementation and are not really "generic" 
> Neutron
> > parts.
> 
> Can you give specific examples of 'not really generic' constants?
> 
> > I am wondering if this is the right approach, especially since i think 
> an
> > end goal is to split between the two (some day..)
> >
> > My suggestion would be to at least split these two in the neutron-lib, 
> but maybe
> > i miss understood the scope of the lib


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Update on resource providers work

2016-06-08 Thread Jay Pipes

On 06/08/2016 03:52 PM, Sean Dague wrote:

On 06/08/2016 03:31 PM, Matt Riedemann wrote:

On 6/6/2016 7:26 AM, Jay Pipes wrote:

Once the InventoryList and AllocationList objects are merged, then we
will focus on reviews of the placement REST API patches [3]. Again, we
are planning on having the nova-compute resource tracker call these REST
API calls directly (while continuing to use the Nova ComputeNode object
for saving legacy inventory information). Clearly, before the resource
tracker can call this placement REST API, we need the placement REST API
service to be created and a client for it added to OSC. Once this client
exists, we can add code to the resource tracker which uses it.


Wait, we're going to require python-openstackclient in Nova to call the
placement REST API? That seems bad given the dependencies that OSC pulls
in. Why not just create the REST API wrapper that we need within Nova
and then split that out later to whichever client it's going to live in?


Yes, that ^^^

Just use keystoneauth1 and hand rolled json. We shouldn't be talking
about a ton of code.

Pulling python-openstackclient back into Nova as a dependency is really
a hard NACK for a bunch of reasons, including the way dependencies work.


Ack.

Actually, it looks like we're going to be able to do much of the initial 
pass using objects (that *only* communicate with the API database, not 
the cell DB) and transition to REST API calls after some time.


In code, what we're planning is to temporarily have the resource tracker 
instantiate a nova.objects.ResourceProvider object and call that 
object's set_inventory() method which communicates *only* with the API 
database and writes records to the inventories table. The resource 
tracker would continue to write inventory fields to the legacy locations 
(e.g. compute_nodes.memory_mb) for some period of time.


Later on, we'll change the resource tracker to call the placement API 
directly (via a keystoneauth + JSON over HTTP approach) and then long 
term figure out a small-dependency library import that would wrap the 
keystoneauth + JSON over HTTP calls.
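
A rough sketch of that hand-rolled approach, for illustration only (the auth
details, placement endpoint URL and payload shape below are made up, not the
agreed API):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://keystone.example.com/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    payload = {'resource_class': 'DISK_GB', 'total': 1024}
    resp = sess.put('http://placement.example.com/resource_providers/'
                    'RP_UUID/inventories/DISK_GB',
                    json=payload)
    resp.raise_for_status()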


The set_inventory() patch, for the record, is up for review here:

https://review.openstack.org/#/c/326440/

Reviews welcome ;)
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about service subnets spec

2016-06-08 Thread Carl Baldwin
Thanks, John for your comments.  I've added a few comments inline.

In summary, I'm inclined to move forward with this as an admin-only
operation to begin with.  I'll give another day or two for someone new
to take notice.

Carl

On Tue, Jun 7, 2016 at 7:56 AM, John Davidge  wrote:
> Resurrecting this thread from last week.
>
> On 5/31/16, 10:11 PM, "Brian Haley"  wrote:
>
>>> At this point the enumeration values map simply to device owners.  For
>>>example:
>>>
>>>router_ports -> "network:router_gateway"
>>>dvr_fip_ports -> "network:floatingip_agent_gateway"
>>>
>>> It was at this point that I questioned the need for the abstraction at
>>> all.  Hence the proposal to use the device owners directly.
>>
>>I would agree, I think having another name to refer to a device_owner makes
>>it
>>more confusing.  Using it directly lets us be flexible for deployers,
>>and
>>allows for using additional owners values if/when they are added.
>
> I agree that a further abstraction is probably not desirable here. If this
> is only going to be exposed to admins then using the existing device_owner
> values shouldn't cause confusion for users.

Given the lack of opposing opinions, I'm inclined to move forward
unless someone speaks up soon.  We are getting to the point where we
need to converge on this and put the implementation up for review in
order to make Newton.

>>> Armando expressed some concern about using the device owner as a
>>> security issue.  We have the following policy on device_owner:
>>>
>>>"not rule:network_device or rule:context_is_advsvc or
>>> rule:admin_or_network_owner"
>>>
>>> At the moment, I don't see this as much of an issue.  Do you?
>>
>>I don't, since only admins should be able to set device_owner to these
>>values
>>(that's the policy we're talking about here, right?).
>>
>>To be honest, I think Armando's other comment - "Do we want to expose
>>device_owner via the API or leave it an implementation detail?" is
>>important as well.  Even though I think an admin should know this level of
>>neutron detail, will they really?  It's hard to answer that question being
>>so close to the code
>
> Seeing as device_owner is already exposed by the port API I don't think
> this is an issue. And if we agree that a further abstraction isn't a good
> idea then I don't see how we would get around exposing it in this context.

This is how I thought about it.

> https://review.openstack.org/#/c/300207

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][reno] FAQ: Why is reno showing notes for released versions on the "unreleased.html" page?

2016-06-08 Thread Doug Hellmann
I've had several people notice recently that release notes on the
"unreleased.html" page are actually including notes from the mitaka
release. This is because of the instructions we're giving reno.

First, the "unreleased.rst" page is poorly named (my fault). It is
actually the "current" branch, as the title inside the rst file
indicates. The instructions given to reno via the directives in
that file tell it to scan the "current" branch, whatever that happens
to be.

We need it to do that to test the release notes build with a patch
under review, because while the patch is still under review it does
not live on any of the named branches (it's not on master until
it's merged into master, etc.). So, in order to ensure that new
release notes do not break the HTML build, we test them on this
special page.

As reno scans a branch, it looks for all versions visible down the
linear history of that branch. It happens that now our master
branches include multiple versions in their history with release
notes, and so reno reports all of them.

This may also occur on future stable branches, since the history
for stable/newton will include the version tagged to create the
branch for stable/mitaka.  If you would prefer that "newton.html"
page not include older versions, you can add the "earliest-version"
parameter to the release-notes directive to specify a version where
reno should stop. [1]

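For example, a hypothetical "newton.html" source page might use something like
the following (the title, branch name and version number are placeholders, not
a recommendation for any particular project):

.. release-notes:: Newton Series Release Notes
   :branch: origin/stable/newton
   :earliest-version: 14.0.0
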
Setting the "earliest-version" on the page that scans the current
branch may result in notes being left out of the test build, so
please do not do that.

Doug

[1] 
http://docs.openstack.org/developer/reno/sphinxext.html#directive-release-notes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread Amitabha Biswas
I didn’t see that idea earlier, +2 to it :)

Regards
Amitabha

> On Jun 8, 2016, at 9:52 AM, John McDowall  
> wrote:
> 
> Amitabha,
> 
> Thanks for looking at it. I took the suggestion from Juno and implemented 
> it. I think it is a good solution as it minimizes impact on both 
> networking-ovn and networking-sfc. I have updated my repos; if you have 
> suggestions for improvements, let me know.
> 
> I agree that there needs to be some refactoring of the networking-sfc driver 
> code. I think the team did a good job with it as it was easy for me to create 
> the OVN driver (copy and paste). As more drivers are created I think the 
> model will get polished and refactored.
> 
> Regards
> 
> John
> 
> From: Amitabha Biswas >
> Date: Tuesday, June 7, 2016 at 11:36 PM
> To: John McDowall  >
> Cc: Na Zhu >, Srilatha Tangirala 
> >, "OpenStack Development 
> Mailing List (not for usage questions)"  >, discuss  >
> Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
> [networking-sfc] SFC andOVN
> 
> Hi John,
> 
> Looking at the code with Srilatha, it seems like the 
> https://github.com/doonhammer/networking-ovn repo has gone down the path of 
> having a sfc_ovn.py file in the networking-ovn/ovsdb directory. This file 
> deals with the SFC-specific OVSDB transactions in OVN. So to answer your 
> question of invoking OVS-IDL, we can import the sfc_ovn.py file from 
> networking_sfc/services/src/drivers/ovn/driver.py and invoke calls into IDL.
> 
> Another aspect from a networking-sfc point of view is the duplication of code 
> between networking_sfc/services/src/drivers/ovn/driver.py and 
> networking_sfc/services/src/drivers/ovs/driver.py in the 
> https://github.com/doonhammer/networking-sfc repo. There should be a 
> mechanism to coalesce the common code and invoke the OVS and OVN specific 
> parts separately.
> 
> Regards
> Amitabha
> 
>> On Jun 7, 2016, at 9:54 PM, John McDowall > > wrote:
>> 
>> Juno, Srilatha,
>> 
>> I need some help – I have fixed most of the obvious typos in the three 
>> repos and merged them with mainline. There is still a problem with the build 
>> I think in mech_driver.py but I will fix it asap in the am.
>> 
>> However I am not sure of the best way to interface between sfc and ovn.
>> 
>> In networking_sfc/services/src/drivers/ovn/driver.py there is a function 
>> that creates a deep copy of the port-chain dict, 
>> create_port_chain(self,contact,port_chain). 
>> 
>> Looking at networking-ovn I think it should use mech_driver.py so we can 
>> call the OVS-IDL to send the parameters to ovn. However I am not sure of the 
>> best way to do it. Could you make some suggestions or send me some sample 
>> code showing the best approach?
>> 
>> I will get the ovs/ovn cleaned up and ready. Also Louis from the 
>> networking-sfc has posted a draft blueprint.
>> 
>> Regards
>> 
>> John
>> 
>> From: Na Zhu >
>> Date: Monday, June 6, 2016 at 7:54 PM
>> To: John McDowall > >, Ryan Moats > >
>> Cc: "disc...@openvswitch.org " 
>> >, "OpenStack 
>> Development Mailing List (not for usage questions)" 
>> > >, Srilatha Tangirala 
>> >
>> Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
>> [networking-sfc] SFC andOVN
>> 
>> Hi John,
>> 
>> I do not know of a better approach. I think it is good to write all the 
>> parameters in the creation of a port chain; this avoids saving a lot of 
>> unused data in the northbound DB. We can do it that way for now, and if 
>> the community has different ideas, we can change it. What do you think?
>> 
>> Hi Ryan,
>> 
>> Do you agree with that?
>> 
>> 
>> 
>> 

[openstack-dev] [puppet] newton virtual midcycle?

2016-06-08 Thread Emilien Macchi
Hi Puppeteers,

We would like to poll our community and find out who would be interested
in a virtual midcycle during Newton.
I created: https://etherpad.openstack.org/p/newton-puppet-midcycle-meetup
Please add your name and propose topics that you're willing to work on.

During the next meeting, we'll review the output and decide whether
or not to hold a midcycle.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Update on resource providers work

2016-06-08 Thread Sean Dague
On 06/08/2016 03:31 PM, Matt Riedemann wrote:
> On 6/6/2016 7:26 AM, Jay Pipes wrote:
>> Once the InventoryList and AllocationList objects are merged, then we
>> will focus on reviews of the placement REST API patches [3]. Again, we
>> are planning on having the nova-compute resource tracker call these REST
>> API calls directly (while continuing to use the Nova ComputeNode object
>> for saving legacy inventory information). Clearly, before the resource
>> tracker can call this placement REST API, we need the placement REST API
>> service to be created and a client for it added to OSC. Once this client
>> exists, we can add code to the resource tracker which uses it.
> 
> Wait, we're going to require python-openstackclient in Nova to call the
> placement REST API? That seems bad given the dependencies that OSC pulls
> in. Why not just create the REST API wrapper that we need within Nova
> and then split that out later to whichever client it's going to live in?

Yes, that ^^^

Just use keystoneauth1 and hand rolled json. We shouldn't be talking
about a ton of code.

Pulling python-openstackclient back into Nova as a dependency is really
a hard NACK for a bunch of reasons, including the way dependencies work.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][reno][infra] merging tags between branches is confusing our release notes

2016-06-08 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-06-08 11:30:03 -0700:
> 
> On 8 Jun 2016, at 11:13, Doug Hellmann wrote:
> 
> > tl;dr: The switch from pre-versioning to post-versioning means that
> > sometimes master appears to be older than stable/$previous, so we
> > merge "final" tags from stable/$previous into master to make up for
> > it. This introduces versions into the history of master that aren't
> > *really* there, but git sees them and so does reno. That, in turn,
> > means that the release notes generated from master may place some
> > notes in the wrong version, suggesting that something happened
> > sooner than it did. I propose that we stop merging tags, and instead
> > introduce a new tag on master after we create a branch to ensure
> > that the version number there is always higher than stable/$previous.
> >
> >
> > Background
> > --
> >
> > Over the last year or so we've switched from pre-versioning (declaring
> > versions in setup.cfg) to post-versioning (relying solely on git
> > tags for versions). This made the release process simpler, because
> > we didn't need to worry about synchronizing the change of version
> > strings within setup.cfg as we created our branches. A side-effect,
> > though, is that the version from which we tag appears on both
> > branches. That means that stable/$previous and master both have the
> > same version for some period of time, and then stable/$previous
> > receives a final tag and has a version newer than master. To
> > compensate, we merge that final tag from stable/$previous into
> > master (taking only the tag, without any of the code changes), so
> > that master again has the same version.
> >
> >
> > The Problem
> > ---
> >
> > The tag may be merged into master after other changes have landed
> > in master but not stable/$previous, and if those changes include
> > release notes then reno will associate them with the newly merged
> > tag, rather than the correct version number.
> >
> > Here's an example I have been using to test reno. In it, 3 separate
> > reno notes are created on two branches. Note 1 is on master when
> > it is tagged 1.0.0. Then master is branched and note 2 is added to
> > the branch and tagged 1.1.0. Then the tag is merged into master and
> > note 3 is added.
> >
> >   * af93946 (HEAD -> master, tag: 2.0.0) add slug-0003.yaml
> >   * f78d1a2 add ignore-2.txt
> >   *   4502dbd merge 1.1.0 tag into master
> >   |\
> >   | * bf50a97 (tag: 1.1.0, test_merge_tags) add slug-0002.yaml
> >   * | 1e4d846 add ignore-1.txt
> >   |/
> >   * 9f481a9 (tag: 1.0.0) add slug-0001.yaml
> >
> > Before the tag is applied to note 3, it appears to be part of 1.1.0,
> > even though it is not from the branch where that version was created
> > and the version 1.1.0 is included in the release notes for master,
> > even though that version should not really be a part of that series.
> >
> > Technically reno is doing the right thing, because even "git describe"
> > honors the merged tag and treats commit f78d1a2 as 1.1.0-4-gaf93946.
> > So because we've merged the version number into a different series
> > branch, that version becomes part of that series.
> >
> >
> > The Proposal
> > 
> >
> > We should stop merging tags between branches, at all. Then our git
> > branches will be nice and linear, without merges, and reno will
> > associate the correct version number with each note.
> >
> > To compensate for the fact that master will have a lower version
> > number after the branch, we can introduce a new alpha tag on master
> > to raise its version. So, after creating stable/$series from version
> > X.0.0.0rc1, we would tag the next commit on master with X+1.0.0.0a1.
> > All subsequent commits on master would then be considered to be
> > part of the X+1.0.0 series.
> 
> This seems to go back to the essence of pre-versioning. Instead of updating a 
> string in a file, you've updated it as a tag. You've still got the 
> coordination issues at release to deal with (when and what to tag) and the 
> issue of knowing what the next release is before you've landed any patches 
> that will be in that release.

This only affects milestone-based projects, and all of them are
currently raising their major version number each cycle to indicate the
cycle boundary. So it's easy to know what the next version will be.

It's not quite as burdensome as the pre-versioning thing we were
doing because it's not necessary to commit something *before*
creating the branch or starting the release candidate phase.

> 
> Isn't the reason that the branch is merged back in because otherwise pbr 
> can't generate a valid version number?

I don't think it's related to versions being "valid," but to making
things feel less confusing to human consumers at the expense of
(what I think is) giving a misleading picture of the version history.

We started merging the tags between branches because the version
that ends up in the 

Re: [openstack-dev] [neutron][SFC]

2016-06-08 Thread Alioune
I've switched from devstack to a normal deployment of openstack/mitaka and
neutron-l2 agent is working fine with sfc. I can boot instances, create
ports.
However, I cannot create either a flow-classifier or a port-pair ...

neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix
22.1.20.1/32 --destination-ip-prefix 172.4.5.6/32 --protocol tcp
--source-port 23:23 --destination-port 100:100 FC1

returns: neutron flow-classifier-create: error: argument
--logical-source-port is required
Try 'neutron help flow-classifier-create' for more information.

 neutron port-pair-create --ingress=p1 --egress=p2 PP1
404 Not Found

The resource could not be found.

Neutron server returns request_ids:
['req-1bfd0983-4a61-4b32-90b3-252004d90e65']

neutron --version
4.1.1

p1, p2, p3 and p4 have already been created; I can ping instances attached to
these ports.
Since I've not installed networking-sfc, is there additional config to set
in the neutron config files?
Or is it due to the neutron-client version?

Regards

On 8 June 2016 at 20:31, Mohan Kumar  wrote:

> The neutron agent is not able to fetch details from ovsdb. Could you check the
> options below:
> 1. Restart ovsdb-server and execute ovs-vsctl list-br.
> 2. Execute ovs-vsctl list-br manually and check for errors.
>
> 3. It could be an ovs cleanup issue; please check the output of "sudo service
> openvswitch restart" and "/etc/init.d/openvswitch* restart" - both should be
> the same.
>
> Thanks.,
> Mohankumar.N
> On Jun 7, 2016 6:04 PM, "Alioune"  wrote:
>
>> Hi Mohan/Cathy
>>  I've now installed ovs 2.4.0 and followed
>> https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but
>> I got this error:
>> Regards,
>>
>> + neutron-ovs-cleanup
>> 2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging
>> enabled!
>> 2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-]
>> /usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl [-]
>> Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline',
>> '--format=json', '--', 'list-br'].
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> Traceback (most recent call last):
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl   File
>> "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in
>> run_vsctl
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> log_fail_as_error=False).rstrip()
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl   File
>> "/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> raise RuntimeError(m)
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> RuntimeError:
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> Command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline',
>> '--format=json', '--', 'list-br']
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Exit
>> code: 1
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> 2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
>> 2016-06-07 11:25:36.512 22147 CRITICAL neutron [-] RuntimeError:
>> Command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline',
>> '--format=json', '--', 'list-br']
>> Exit code: 1
>>
>> 2016-06-07 11:25:36.512 22147 ERROR neutron Traceback (most recent call
>> last):
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/usr/local/bin/neutron-ovs-cleanup", line 10, in 
>> 2016-06-07 11:25:36.512 22147 ERROR neutron sys.exit(main())
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/opt/stack/neutron/neutron/cmd/ovs_cleanup.py", line 89, in main
>> 2016-06-07 11:25:36.512 22147 ERROR neutron ovs_bridges =
>> set(ovs.get_bridges())
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/opt/stack/neutron/neutron/agent/common/ovs_lib.py", line 132, in
>> get_bridges
>> 2016-06-07 11:25:36.512 22147 ERROR neutron return
>> self.ovsdb.list_br().execute(check_error=True)
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 83, in execute
>> 2016-06-07 11:25:36.512 22147 ERROR neutron txn.add(self)
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/opt/stack/neutron/neutron/agent/ovsdb/api.py", line 70, in __exit__
>> 2016-06-07 11:25:36.512 22147 ERROR neutron self.result =
>> self.commit()
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 50, in commit
>> 2016-06-07 11:25:36.512 22147 ERROR neutron res = self.run_vsctl(args)
>> 2016-06-07 11:25:36.512 22147 ERROR neutron   File
>> "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 70, in
>> run_vsctl
>> 2016-06-07 11:25:36.512 22147 ERROR neutron ctxt.reraise = False
>> 2016-06-07 11:25:36.512 

Re: [openstack-dev] [nova] Update on resource providers work

2016-06-08 Thread Matt Riedemann

On 6/6/2016 7:26 AM, Jay Pipes wrote:

Once the InventoryList and AllocationList objects are merged, then we
will focus on reviews of the placement REST API patches [3]. Again, we
are planning on having the nova-compute resource tracker call these REST
API calls directly (while continuing to use the Nova ComputeNode object
for saving legacy inventory information). Clearly, before the resource
tracker can call this placement REST API, we need the placement REST API
service to be created and a client for it added to OSC. Once this client
exists, we can add code to the resource tracker which uses it.


Wait, we're going to require python-openstackclient in Nova to call the 
placement REST API? That seems bad given the dependencies that OSC pulls 
in. Why not just create the REST API wrapper that we need within Nova 
and then split that out later to whichever client it's going to live in?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-08 Thread Matt Riedemann

On 6/8/2016 12:05 PM, Alexandre Levine wrote:

Hi Matt,

According to the state of this review:
https://review.openstack.org/#/c/317689/ the work isn't going to be
done in this cycle.

Do you think it'd be possible for our driver to cut in now?

Feodor participated in reviewing and helped as much as possible with
current efforts and if needed we can spare even more resources to help
with the refactoring in the next cycle.

Best regards,

  Alex Levine




Alex,

Unfortunately the spec for the scaleio image backend wasn't approved 
before the non-priority spec approval freeze so it's going to have to 
wait for Ocata.


I realize this is frustrating. We do already have 85 approved blueprints 
for Newton though including the libvirt imagebackend refactor which the 
scaleio change is going to be dependent on, so I still urge helping out 
with that in any way possible to move it along and make sure that 
dependency gets completed in Newton.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Stepping down from Trove Core

2016-06-08 Thread Amrith Kumar
Victoria,

Thank you very much for your contribution to Trove. All the very best to you 
and the projects you will be working on.

-amrith


From: Victoria Martínez de la Cruz [mailto:victo...@vmartinezdelacruz.com]
Sent: Tuesday, June 07, 2016 2:34 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [Trove] Stepping down from Trove Core


After one year and a half contributing to the Trove project,

I have decided to change my focus and start gaining more experience

on other storage and data-management related projects.



Because of this decision, I'd like to ask to be removed from the Trove core 
team.



I want to thank Trove community for all the good work and shared experiences.

Working with you all has been a very fulfilling experience.



All the best,



Victoria
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][ironic] ironic-python-agent 1.2.2 release (mitaka)

2016-06-08 Thread no-reply
We are pumped to announce the release of:

ironic-python-agent 1.2.2: Ironic Python Agent Ramdisk

This release is part of the mitaka stable release series.

For more details, please see below.

1.2.2
^^^^^


New Features
************

* The driver_internal_info internal setting
  "agent_continue_if_ata_erase_failed" allows operators to enable disk
  cleaning operations to fall back from a failed ata_erase operation to
  disk shredding operations.


Bug Fixes
*********

* On start up wait up to 30 seconds for the first disk device
  suitable for deployment to appear. This is to fix both inspection
  and deployment on hardware that takes long to initialize (e.g. some
  RAID devices).

* IPA will now attempt to unlock a security-locked drive with a
  'NULL' password if it is found to be enabled; however, this will only
  work if the password was previously set to a 'NULL' value, such as
  after a failure during a previous ata_erase sequence.

* Potential command failures in the secure erase process are now
  captured and raised as BlockDeviceEraseError exceptions.

Changes in ironic-python-agent 1.2.1..1.2.2
-------------------------------------------

b6bdaf0 Provide fallback from ATA erase to shredding
2ec82a4 Wait for at least one suitable disk to appear on start up
c65d30f TinyIPA: Shave off some file size from tinyipa ramdisk
8ece8ba TinyIPA: Precompile python code for faster load
d053f9d Use TinyCore Linux 7.x for TinyIPA
cc8cab2 Optimise tinyipa boot time
29ba706 Enable branch tagging during tinyipa build
30cf976 Remove "Experimental" warning from tinyipa README

Diffstat (except docs and test files)
-------------------------------------

imagebuild/tinyipa/Makefile|   4 +-
imagebuild/tinyipa/README.rst  |   3 -
imagebuild/tinyipa/build-tinyipa.sh|  16 +-
imagebuild/tinyipa/build_files/bootlocal.sh|   4 +
imagebuild/tinyipa/build_files/buildreqs.lst   |   4 +-
imagebuild/tinyipa/build_files/fakeuname   |   2 +-
imagebuild/tinyipa/build_files/finalreqs.lst   |   5 +-
imagebuild/tinyipa/finalise-tinyipa.sh |  40 +++-
ironic_python_agent/api/app.py |   1 +
ironic_python_agent/hardware.py|  79 ++-
releasenotes/notes/disk-wait-2e0e85e0947f80e9.yaml |   5 +
.../enable-cleaning-fallback-57e8c9aa2f24e63d.yaml |  14 ++
14 files changed, 378 insertions(+), 40 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Trove weekly meeting minutes

2016-06-08 Thread Amrith Kumar
The summary of the Trove weekly meeting held just now.

Action Items:
- prepare for next week's meeting; things that we can get into Newton

Full transcript at 
http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-06-08-18.00.html

Thanks,

-amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [tacker] Request to create puppet-tacker

2016-06-08 Thread Sridhar Ramaswamy
This is exciting. Thanks Dan for your contribution!

I hope to see a similar contribution for Tacker deployment using
openstack-ansible :)

- Sridhar

On Wed, Jun 8, 2016 at 8:18 AM, Dan Radez  wrote:

> FYI, tacker community:
> https://review.openstack.org/#/c/327173/
> https://review.openstack.org/#/c/327178/
>
> Radez
>
> On 06/08/2016 10:54 AM, Dan Radez wrote:
> > sure will, thx Emilien
> > Dan
> >
> > On 06/08/2016 09:08 AM, Emilien Macchi wrote:
> >> Yeah, super good news!
> >> Please do the same as I did in https://review.openstack.org/326720 and
> >> https://review.openstack.org/326721
> >>
> >> Add me as reviewer because I need to sign-off the 2 patches (I'm
> current PTL).
> >> Once it's done & merged, you'll be able to deprecate the old
> >> repository on your github with a nice README giving the link of the
> >> new module.
> >>
> >> I haven't looked at the code yet but we'll probably have to adjust
> >> some bits, add some testing (beaker [1], etc). Please make sure that
> >> we have some packaging available in RDO (I checked on Ubuntu and they
> >> don't provide it) so we can download it during our beaker tests.
> >>
> >> Also a last thing, in order to help us to make the module compliant &
> >> consistent, please read how we wrote the recent modules. For example
> >> you can look puppet-gnocchi or puppet-aodh that are clean modules.
> >> We recently had a lot of new modules: vitrage, watcher, tacker,
> >> congress, (I'm working now on octavia) - which means reviews might
> >> take more time than usual because our team will review the new modules
> >> carefuly to make sure the code is clean & consistent from beginning
> >> (and avoid the puppet-monasca story). Please be patient and help us by
> >> reading how we did other modules.
> >>
> >> Thanks a ton for your collaboration and we're looking forward to this
> >> new challenge,
> >>
> >> [1] https://github.com/puppetlabs/beaker
> >>
> >> On Wed, Jun 8, 2016 at 8:11 AM, Iury Gregory 
> wrote:
> >>> Awesome!
> >>>
> >>> You just need to follow the same process that Emilien pointed out for
> >>> puppet-congress. If you need any help please let us know.
> >>>
> >>> 1- Move  https://github.com/radez/puppet-tacker to OpenStack
> >>> 2- Add it to our governance
> >>> 3- Follow
> >>>
> http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
> >>>
> >>>
> >>>
> >>> 2016-06-08 8:56 GMT-03:00 Dan Radez :
> 
>  I also have a puppet-tacker module that existed before the project was
>  part of the big tent.
> 
>  It was based on cookie cutter originally but will probably need some
>  adjustments to adhere to standards.
> 
>  I'd like to get the project established so that the code can be run
>  through the proper review process.
> 
>  existing repo is here: https://github.com/radez/puppet-tacker
> 
>  Dan Radez
>  freenode: radez
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>>
> >>> ~
> >>> Att[]'s
> >>> Iury Gregory Melo Ferreira
> >>> Master student in Computer Science at UFCG
> >>> E-mail:  iurygreg...@gmail.com
> >>> ~
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Stepping down from Trove Core

2016-06-08 Thread Mariam John

Thank you Victoria for all your hard work and dedication to the Trove
project. It's been a pleasure knowing you and working with you.

Wish you all the best and good luck.

Regards,
Mariam.




From:   Victoria Martínez de la Cruz 
To: OpenStack Development Mailing List

Date:   06/07/2016 01:37 PM
Subject:[openstack-dev] [Trove] Stepping down from Trove Core



After one year and a half contributing to the Trove project,
I have decided to change my focus and start gaining more experience
on other storage and data-management related projects.

Because of this decision, I'd like to ask to be removed from the Trove core
team.

I want to thank Trove community for all the good work and shared
experiences.
Working with you all has been a very fulfilling experience.

All the best,

Victoria
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][reno][infra] merging tags between branches is confusing our release notes

2016-06-08 Thread John Dickinson


On 8 Jun 2016, at 11:13, Doug Hellmann wrote:

> tl;dr: The switch from pre-versioning to post-versioning means that
> sometimes master appears to be older than stable/$previous, so we
> merge "final" tags from stable/$previous into master to make up for
> it. This introduces versions into the history of master that aren't
> *really* there, but git sees them and so does reno. That, in turn,
> means that the release notes generated from master may place some
> notes in the wrong version, suggesting that something happened
> sooner than it did. I propose that we stop merging tags, and instead
> introduce a new tag on master after we create a branch to ensure
> that the version number there is always higher than stable/$previous.
>
>
> Background
> --
>
> Over the last year or so we've switched from pre-versioning (declaring
> versions in setup.cfg) to post-versioning (relying solely on git
> tags for versions). This made the release process simpler, because
> we didn't need to worry about synchronizing the change of version
> strings within setup.cfg as we created our branches. A side-effect,
> though, is that the version from which we tag appears on both
> branches. That means that stable/$previous and master both have the
> same version for some period of time, and then stable/$previous
> receives a final tag and has a version newer than master. To
> compensate, we merge that final tag from stable/$previous into
> master (taking only the tag, without any of the code changes), so
> that master again has the same version.
>
>
> The Problem
> ---
>
> The tag may be merged into master after other changes have landed
> in master but not stable/$previous, and if those changes include
> release notes then reno will associate them with the newly merged
> tag, rather than the correct version number.
>
> Here's an example I have been using to test reno. In it, 3 separate
> reno notes are created on two branches. Note 1 is on master when
> it is tagged 1.0.0. Then master is branched and note 2 is added to
> the branch and tagged 1.1.0. Then the tag is merged into master and
> note 3 is added.
>
>   * af93946 (HEAD -> master, tag: 2.0.0) add slug-0003.yaml
>   * f78d1a2 add ignore-2.txt
>   *   4502dbd merge 1.1.0 tag into master
>   |\
>   | * bf50a97 (tag: 1.1.0, test_merge_tags) add slug-0002.yaml
>   * | 1e4d846 add ignore-1.txt
>   |/
>   * 9f481a9 (tag: 1.0.0) add slug-0001.yaml
>
> Before the tag is applied to note 3, it appears to be part of 1.1.0,
> even though it is not from the branch where that version was created
> and the version 1.1.0 is included in the release notes for master,
> even though that version should not really be a part of that series.
>
> Technically reno is doing the right thing, because even "git describe"
> honors the merged tag and treats commit f78d1a2 as 1.1.0-4-gaf93946.
> So because we've merged the version number into a different series
> branch, that version becomes part of that series.
>
>
> The Proposal
> 
>
> We should stop merging tags between branches, at all. Then our git
> branches will be nice and linear, without merges, and reno will
> associate the correct version number with each note.
>
> To compensate for the fact that master will have a lower version
> number after the branch, we can introduce a new alpha tag on master
> to raise its version. So, after creating stable/$series from version
> X.0.0.0rc1, we would tag the next commit on master with X+1.0.0.0a1.
> All subsequent commits on master would then be considered to be
> part of the X+1.0.0 series.

This seems to go back to the essence of pre-versioning. Instead of updating a 
string in a file, you've updated it as a tag. You've still got the coordination 
issues at release to deal with (when and what to tag) and the issue of knowing 
what the next release is before you've landed any patches that will be in that 
release.

Isn't the reason that the branch is merged back in because otherwise pbr can't 
generate a valid version number? You've "solved" that by hiding the release 
version number behind the new *a1 tag. Therefore, for any commit on the master 
branch, the only ancestor commits that are tagged will have the *a1 tags, and 
the actual release tags will never be reachable by walking up parent commits 
(assuming there is at least one commit on a release branch, which seems normal).



>
> Libraries and other projects that follow the cycle-with-intermediary
> release model won't need this treatment, because we're not using
> alpha or beta versions for them and they are tagged more often than
> the projects following the cycle-with-milestones model.
>
>
> Possible Issues
> ---
>
> We will be moving back to a situation where we have to orchestrate
> multiple operations when we create branches. Adding an extra tag
> isn't a lot of work, though, and it doesn't really matter what the
> commit is that gets the tag, so if there's 

Re: [openstack-dev] [Neutron] neutron-lib and dependencies in neutron reference implementation

2016-06-08 Thread Gal Sagie
For example, references to the various different agents, which are
implementation details to me.

On Wed, Jun 8, 2016 at 8:51 PM, Henry Gessau  wrote:

> Gal Sagie  wrote:
> > Hello all,
> >
> > I have recently come across some missing constants in neutron-lib and
> > sent a patch, but I wanted to try and understand the scope of the lib.
> >
> > I see that neutron-lib consists of many definitions which are actually
> > part of the reference implementation and are not really "generic" Neutron
> > parts.
>
> Can you give specific examples of 'not really generic' constants?
>
> > I am wondering if this is the right approach, especially since I think an
> > end goal is to split between the two (some day..)
> >
> > My suggestion would be to at least split these two in neutron-lib, but
> > maybe I misunderstood the scope of the lib.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [meghdwar] Base code to use

2016-06-08 Thread prakash RAMCHANDRAN


Hi all,

Appreciate the initiative taken by individuals to join and discuss the
possibility of using code from different Cloudlet projects. We know the
requirements cross the boundaries of Nova and Neutron and hence the challenge
is to ensure we have a better specification. Reviewing code will give us the
state of the art of Cloudlet, but we need to pursue a use case that is simple
and commercially useful for providers to use the meghdwar cloudlet platform as
a service. Edge Cloud Services differ from the central cloud even under MEC,
OEC and OpenFog, to name a few. IoT architectures require an aggregator and
host gateway at the edge to offer IoT services. Thus all roads lead to Edge
Application gateway services. Let's first study the few we have at hand and
plan for code init and migrations accordingly. Refer to
https://wiki.openstack.org/wiki/Meghdwar

Here are the details; let's debate them until the next meeting.

Next meeting: Meghdwar on IRC, June 15th 7am-8am PDT, #openstack-meghdwar
Meetings: Wednesdays from 7-8am PDT (Wed 14:00-15:00 UTC), IRC channel:
#openstack-meghdwar

Meghdwar IRC meeting summary, June 8th 7am-8am PDT, #openstack-meghdwar

Topic: What is meghdwar
link https://launchpad.net/meghdwar
This is a follow-up project to create a project for Edge Cloud Services in
OpenStack. An earlier effort for a Micro Service API as Cloudlet failed:
https://launchpad.net/cloudlet
A Cloudlet as defined by CMU/OEC is a VM at the edge supporting AR/VR
applications. In a three-tier setup the client is a UE/mobile
equipment/smartphone running the client and connecting to the edge Cloudlet.
The edge cloudlet serves the AR/VR application from within the VM. The central
cloud is where you may register and download applications or apps.

link http://beyondtheclouds.github.io/
Review the video of the Discovery initiative, mainly supported through the
Inria Project Labs program and the I/O labs, a joint lab between Inria and
Orange Labs. The aim of the project is to try to build sustainable (power
reduction) network PoPs distributed as IaaS. The 8 minute video shows how
OpenStack is used to deliver the Discovery service with Nova and Redis
in-memory data structure stores.

link https://hal.inria.fr/hal-01320235
"Revising OpenStack Internals to Operate Massively Distributed Clouds" is on
this link.

Action on satyak - Support to resolve the Devstack ticket for Mitaka.
Action on prakash - Ask Kiryong Ha (CMU) to provide bitbucket access to the
Cloudlet code for the provisioning and OpenStack projects in CMU elijah or OEC.
Action on prakash/ad_rien - Review beyondtheclouds.github.io and work with
ad_rien to see if we can use it along with the CMU/OEC Cloudlet or
independently of it.

Next meeting same time, Wednesday 7AM-8AM PDT.

Attendees: ad_rien: adrien.le...@inria.fr pramchan: pramc...@yahoo.com
narinder: narinder.gu...@canonical.com satyak: satyavit...@gmail.com

Thanks
Prakash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Stepping down from Trove Core

2016-06-08 Thread Craig Vyvial
Victoria,

Thanks for your contributions to Trove, and I wish you the best. It's been great
working with you in the community.

-Craig Vyvial

On Tue, Jun 7, 2016 at 1:34 PM Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:

> After one year and a half contributing to the Trove project,
> I have decided to change my focus and start gaining more experience
> on other storage and data-management related projects.
>
> Because of this decision, I'd like to ask to be removed from the Trove core 
> team.
>
> I want to thank Trove community for all the good work and shared experiences.
> Working with you all has been a very fulfilling experience.
>
> All the best,
>
> Victoria
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][reno][infra] merging tags between branches is confusing our release notes

2016-06-08 Thread Doug Hellmann
tl;dr: The switch from pre-versioning to post-versioning means that
sometimes master appears to be older than stable/$previous, so we
merge "final" tags from stable/$previous into master to make up for
it. This introduces versions into the history of master that aren't
*really* there, but git sees them and so does reno. That, in turn,
means that the release notes generated from master may place some
notes in the wrong version, suggesting that something happened
sooner than it did. I propose that we stop merging tags, and instead
introduce a new tag on master after we create a branch to ensure
that the version number there is always higher than stable/$previous.


Background
--

Over the last year or so we've switched from pre-versioning (declaring
versions in setup.cfg) to post-versioning (relying solely on git
tags for versions). This made the release process simpler, because
we didn't need to worry about synchronizing the change of version
strings within setup.cfg as we created our branches. A side-effect,
though, is that the version from which we tag appears on both
branches. That means that stable/$previous and master both have the
same version for some period of time, and then stable/$previous
receives a final tag and has a version newer than master. To
compensate, we merge that final tag from stable/$previous into
master (taking only the tag, without any of the code changes), so
that master again has the same version.


The Problem
---

The tag may be merged into master after other changes have landed
in master but not stable/$previous, and if those changes include
release notes then reno will associate them with the newly merged
tag, rather than the correct version number.

Here's an example I have been using to test reno. In it, 3 separate
reno notes are created on two branches. Note 1 is on master when
it is tagged 1.0.0. Then master is branched and note 2 is added to
the branch and tagged 1.1.0. Then the tag is merged into master and
note 3 is added.

  * af93946 (HEAD -> master, tag: 2.0.0) add slug-0003.yaml
  * f78d1a2 add ignore-2.txt
  *   4502dbd merge 1.1.0 tag into master
  |\
  | * bf50a97 (tag: 1.1.0, test_merge_tags) add slug-0002.yaml
  * | 1e4d846 add ignore-1.txt
  |/
  * 9f481a9 (tag: 1.0.0) add slug-0001.yaml

Before the tag is applied to note 3, it appears to be part of 1.1.0,
even though it is not from the branch where that version was created
and the version 1.1.0 is included in the release notes for master,
even though that version should not really be a part of that series.

Technically reno is doing the right thing, because even "git describe"
honors the merged tag and treats commit f78d1a2 as 1.1.0-4-gaf93946.
So because we've merged the version number into a different series
branch, that version becomes part of that series.


The Proposal


We should stop merging tags between branches, at all. Then our git
branches will be nice and linear, without merges, and reno will
associate the correct version number with each note.

To compensate for the fact that master will have a lower version
number after the branch, we can introduce a new alpha tag on master
to raise its version. So, after creating stable/$series from version
X.0.0.0rc1, we would tag the next commit on master with X+1.0.0.0a1.
All subsequent commits on master would then be considered to be
part of the X+1.0.0 series.

Libraries and other projects that follow the cycle-with-intermediary
release model won't need this treatment, because we're not using
alpha or beta versions for them and they are tagged more often than
the projects following the cycle-with-milestones model.


Possible Issues
---

We will be moving back to a situation where we have to orchestrate
multiple operations when we create branches. Adding an extra tag
isn't a lot of work, though, and it doesn't really matter what the
commit is that gets the tag, so if there's nothing on master beyond
the point where a branch needs to be created we can add a minor
change somewhere just to have something to tag.

fungi pointed out that if we backport anything by using a fast-forward
merge from master to stable/$previous, we will pull the newer version
number back into the older series. We already have that issue,
today, though, for anything backported after the first milestone.
So while we're shrinking the safe window period, this is not a new
problem.

Some cycle-with-milestone projects may not merge commits into master
very soon after their branches are created. We can address that by
waiting, or by introducing a small commit just for the purpose of
having something to tag.


Alternatives


Do nothing and live with some of our early release notes in a version
being "not quite right." I don't consider this acceptable.

Switch back to pre-versioning. We dropped this for a good reason,
it makes synchronizing all of the actions needed to create a release
branch a real 

Re: [openstack-dev] [ironic] Virtual midcycle date poll

2016-06-08 Thread Jim Rollenhagen
And, as a note, I've added an RSVP and details on communication channels
and such on the etherpad.

Details are also on the wiki now:
https://wiki.openstack.org/wiki/VirtualSprints#Ironic_Virtual_Newton_Midcycle
https://wiki.openstack.org/wiki/Sprints#Future_sprints_for_Newton

// jim

On Mon, Jun 06, 2016 at 11:10:19AM -0400, Jim Rollenhagen wrote:
> By the way, I created an etherpad for the midcycle to start bringing in
> ideas. You know what to do. :)
> 
> https://etherpad.openstack.org/p/ironic-newton-midcycle
> 
> // jim
> 
> On Wed, Jun 01, 2016 at 02:45:33PM -0400, Jim Rollenhagen wrote:
> > On Thu, May 19, 2016 at 09:25:18AM -0400, Jim Rollenhagen wrote:
> > > Hi Ironickers,
> > > 
> > > We decided in our last meeting that the midcycle for Newton will again
> > > be virtual. Now, we need to choose a date. Please indicate which options
> > > work for you (more than one may be selected):
> > > 
> > > http://doodle.com/poll/gpug7ynd9fn4rdfe
> > > 
> > > I'll close this poll two Mondays from now, May 30.
> > > 
> > > Note that this will be similar to the last midcycle; likely split up
> > > into two sessions. Last time was 1500-2000 UTC and -0400 UTC. If
> > > that worked for folks, we'll do the same times again.
> > 
> > June 20-22 won, with the votes being 18 to 14.
> > 
> > The actual dates UTC will be something like:
> > 
> > June 20 1500-2000
> > June 21 -0400
> > June 21 1500-2000
> > June 22 -0400
> > June 22 1500-2000
> > June 23 -0400
> > 
> > I'll send out communication channels and such before the end of next
> > week.
> > 
> > See you all there!
> > 
> > // jim
> > 
> > > 
> > > Thanks!
> > > 
> > > // jim
> > > 
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Nominate Ilya Kutukov for the fuel-web-core team

2016-06-08 Thread Bulat Gaifullin
Hey Fuelers,

I'd like to nominate Ilya Kutukov for the fuel-web-core team.
Ilya is doing good reviews with detailed feedback [1],
and has implemented the custom graph execution engine for Fuel.
Ilya has also implemented new database models for storing deployment tasks in 
Fuel.


Fuel Cores, please reply back with +1/-1.

[1] http://stackalytics.com/report/contribution/fuel-web/90 



Regards,
Bulat Gaifullin
Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] neutron-lib and dependencies in neutron reference implementation

2016-06-08 Thread Henry Gessau
Gal Sagie  wrote:
> Hello all,
> 
> I have recently come across some missing constants in neutron-lib and sent
> a patch, but I wanted to try and understand the scope of the lib.
> 
> I see that neutron-lib consists of many definitions which are actually
> part of the reference implementation and are not really "generic" Neutron
> parts.

Can you give specific examples of 'not really generic' constants?

> I am wondering if this is the right approach, especially since I think an
> end goal is to split between the two (some day..)
> 
> My suggestion would be to at least split these two in neutron-lib, but maybe
> I misunderstood the scope of the lib.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] OSIC cluster accepteed

2016-06-08 Thread Vikram Hosakote (vhosakot)
I'd like to help with kolla scaling on the OSIC cluster too.

Regards,
Vikram Hosakote
IRC: vhosakot

From: Jeffrey Zhang >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, June 8, 2016 at 9:42 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla] OSIC cluster accepteed

Cool.
Do we have any test list now?
And how can I help with this? I am very interested in this
test.

On Tue, Jun 7, 2016 at 4:52 PM, Paul Bourke 
> wrote:
Michal,

I'd be interested in helping with this. Keep us updated!

-Paul


On 03/06/16 17:58, Michał Jastrzębski wrote:
Hello Kollagues,

Some of you might know that I submitted a request for 130 nodes out of the
OSIC cluster for testing Kolla. We just got accepted. The time window will
be 3 weeks between 7/22 and 8/14, so we need to make the most of it. I'd
like some volunteers to help me with tests, setup and such. We need to
prepare test scenarios, streamline bare metal deployment and prepare the
architectures we want to run through. I would also like to make use of our
global distribution to keep the nodes utilized 24 hours a day.

The nodes we're talking about are pretty powerful: 256 GB of RAM each, 12
SSD disks in each and 10Gig networking all the way. We will get IPMI
access to them, so bare metal provisioning will have to be there too
(good time to test out bifrost, right? :))

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-08 Thread Alexandre Levine

Hi Matt,

According to the state of this review: 
https://review.openstack.org/#/c/317689/ the work isn't going to be 
done in this cycle.


Do you think it'd be possible for our driver to cut in now?

Feodor participated in reviewing and helped as much as possible with 
current efforts and if needed we can spare even more resources to help 
with the refactoring in the next cycle.


Best regards,

  Alex Levine


On 5/10/16 7:40 PM, Matt Riedemann wrote:

On 5/10/2016 11:24 AM, Alexandre Levine wrote:

Hi Matt,

Sorry I couldn't reply earlier - was away.
I'm worrying about ScaleIO ephemeral storage backend
(https://blueprints.launchpad.net/nova/+spec/scaleio-ephemeral-storage-backend) 


which is not in this list but various clients are very interested in
having it working along with or instead of Ceph. Especially I'm worrying
in view of the global libvirt storage pools refactoring which looks like
a quite global effort to me judging by a number of preliminary reviews.
It seems to me that we wouldn't be able to squeeze ScaleIO additions
after this refactoring.
What can be done about it?
We could've contribute our initial changes to current code (which would
potentially allow easy backporting to previous versions as a benefit
afterwards) and promise to update our parts along with the refactoring
reviews or something like this.

Best regards,
  Alex Levine


On 5/6/16 3:34 AM, Matt Riedemann wrote:

There are still a few design summit sessions from the summit that I'll
recap but I wanted to get the priorities session recap out as early as
possible. We held that session in the last slot on Thursday. The full
etherpad is here [1].

The first part of the session was mostly going over schedule 
milestones.


We already started Newton with a freeze on spec approvals for new
things since we already have a sizable backlog [2]. Now that we're
past the summit we can approve specs for new things again.

The full Newton release schedule for Nova is in this wiki [3].

These are the major dates from here on out:

* June 2: newton-1, non-priority spec approval freeze
* June 30: non-priority feature freeze
* July 15: newton-2
* July 19-21: Nova Midcycle
* Aug 4: priority spec approval freeze
* Sept 2: newton-3, final python-novaclient release, FeatureFreeze,
Soft StringFreeze
* Sept 16: RC1 and Hard StringFreeze
* Oct 7, 2016: Newton Release

The important thing for most people right now is we have exactly four
weeks until the non-priority spec approval freeze. We then have about
one month after that to land all non-priority blueprints.

Keep in mind that we've already got 52 approved blueprints and most of
those were re-approved from Mitaka, so have been approved for several
weeks already.

The non-priority blueprint cycle is intentionally restricted in Newton
because of all of the backlog work we've had spilling over into this
release. We really need to focus on getting as much of that done as
possible before taking on more new work.

For the rest of the priorities session we talked about what our actual
review priorities are for Newton. The list with details and owners is
already available here [4].

In no particular order, these are the review priorities:

* Cells v2
* Scheduler
* API Improvements
* os-vif integration
* libvirt storage pools (for live migration)
* Get Me a Network
* Glance v2 Integration

We *should* be able to knock out glance v2, get-me-a-network and
os-vif relatively soon (I'm thinking sometime in June).

Not listed in [4] but something we talked about was volume
multi-attach with Cinder. We said this was going to be a 'stretch
goal' contingent on making decent progress on that item by
non-priority feature freeze *and* we get the above three smaller
priority items completed.

Another thing we talked about but isn't going to be a priority is
NFV-related work. We talked about cleaning up technical debt and
additional testing for NFV, but no one in the session signed up to
own that work or had concrete proposals on how to make improvements
in that area. Since we can't assign review priorities to something
that nebulous it was left out. Having said that, Moshe Levi has
volunteered to restart and lead the SR-IOV/PCI bi-weekly meeting [5]
(thanks again, Moshe!). So if you (or your employer, or your vendor)
are interested in working on NFV in Nova please attend that meeting
and get involved in helping out that subteam.

[1] https://etherpad.openstack.org/p/newton-nova-summit-priorities
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090370.html 


[3] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
[4]
https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html 



[5]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/093541.html 







Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-08 Thread Gregory Haynes
On Wed, Jun 8, 2016, at 03:46 AM, Thierry Carrez wrote:
> Another option (raised by dims) is to find a way to allow usage of 
> golang (or another language) in a more granular way: selectively allow 
> projects which really need another tool to use it. The benefit is that 
> it lets project teams make a case and use the best tool for the job, 
> while limiting the cross-project impact and without opening the 
> distraction floodgates of useless rewrites. The drawback is that 
> depending on how it's done, it may place the TC in the role of granting 
> "you're tall enough to use Go" badges, creating new endless discussions 
> and more "you're special" exceptions. That said, I'm still interested in 
> exploring that option, for one reason. I think that whenever a project 
> team considers adding a component or a rewrite in another language, they 
> are running into an interesting problem with Python, on which they 
> really could use advice from the rest of the OpenStack community. I'd 
> really like to see a cross-project team of Python performance experts to 
> look at the problem this specific team has that makes them want to use 
> another language. That seems like a great way to drive more practice 
> sharing and reduce fragmentation in "OpenStack" in general. We might 
> just need to put the bar pretty high so that we are not flooded by silly 
> rewrite requests.
> 

++.  There's a lot of value in these issues getting bubbled up to the
cross-project level: If we have identified a serious hurdle then this
knowledge really shouldn't live inside of a single project. Otherwise,
if we haven't identified such an issue, then we (the greater OpenStack
community) can offer some alternative solutions, which is also a huge
win.

I completely understand the fear that we might be creating an endless
review stream for the TC by making them the review squad for getting
approval to use a new language, and I agree that we need to make sure
that doesn't happen. OTOH, I strongly believe that in almost all of the
cases which would be proposed some alternative solutions could be found.
I worry that if we just tell these folks 'the solution you thought of
isn't allowed' rather than offer an outlet for seriously investigating
the issue, we're likely to see teams try and find ways around that
restriction when really we want to identify another solution to the
problem. A perf team sounds like a great way to help both the tribal
knowledge problem and support the type of problem solving we are asking
for. Sign me up :).

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-08 Thread Salvatore Orlando
Neutron already has the ability to send an event as a REST call to
notify a third party that a port became active [1].
This is used by Nova to hold off booting instances until the network has been
wired.
Perhaps Kuryr could leverage this without having to tap into the AMQP bus,
as that would be implementation-specific - since there would be an
assumption about having a plugin that communicates with the reference impl
l2 agent.

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/notifiers/nova.py
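
For illustration only, a rough sketch (not Neutron or Kuryr code) of the kind
of event payload that notifier sends to Nova's os-server-external-events API
when a port goes ACTIVE; the UUIDs below are hypothetical placeholders:

    # One entry in the "events" list POSTed to Nova's
    # /os-server-external-events endpoint by neutron/notifiers/nova.py.
    vif_plugged_event = {
        'name': 'network-vif-plugged',
        'server_uuid': 'INSTANCE-UUID',   # placeholder
        'tag': 'NEUTRON-PORT-UUID',       # placeholder
        'status': 'completed',
    }

A Kuryr-style consumer would need an equivalent callback endpoint if it wanted
to reuse this mechanism instead of polling or listening on the message queue.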



On 8 June 2016 at 17:23, Mohammad Banikazemi  wrote:

> For the Kuryr project, in order to support blocking until vifs are plugged
> in (that is, adding config options similar to the following options defined
> in Nova: vif_plugging_is_fatal and vif_plugging_timeout), we need to detect
> that the Neutron plugin being used is done with plugging a given vif.
>
> Here are a few options:
>
> 1- The simplest approach seems to be polling for the status of the Neutron
> port to become Active. (This may lead to scalability issues but short of
> having a specific goal for scalability, it is not clear that will be the
> case.)
> 2- Alternatively, we could subscribe to the message queue and wait for
> such a port update event.
> 3- It was also suggested that we could use l2 agent extension to detect
> such an event but that seems to limit us to certain Neutron plugins and
> therefore not acceptable.
>
> I was wondering if there are other and better options.
>
> Best,
>
> Mohammad
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread John McDowall
Amitabha,

Thanks for looking at it. I took the suggestion from Juno and implemented it. 
I think it is a good solution as it minimizes impact on both networking-ovn and 
networking-sfc. I have updated my repos, if you have suggestions for 
improvements let me know.

I agree that there needs to be some refactoring of the networking-sfc driver 
code. I think the team did a good job with it as it was easy for me to create 
the OVN driver (copy and paste). As more drivers are created I think the model 
will get polished and refactored.

Regards

John

From: Amitabha Biswas
Date: Tuesday, June 7, 2016 at 11:36 PM
To: John McDowall
Cc: Na Zhu, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

Looking at the code with Srilatha, it seems like the 
https://github.com/doonhammer/networking-ovn
 repo has gone down the path of having a sfc_ovn.py file in the 
networking-ovn/ovsdb directory. This file deals with the SFC-specific OVSDB 
transactions in OVN. So to answer your question about invoking the OVS IDL, we can 
import the sfc_ovn.py file from 
networking_sfc/services/src/drivers/ovn/driver.py and invoke calls into the IDL.

Another aspect from a networking-sfc point of view is the duplication of code 
between networking_sfc/services/src/drivers/ovn/driver.py and 
networking_sfc/services/src/drivers/ovs/driver.py in the 
https://github.com/doonhammer/networking-sfc
 repo. There should be a mechanism to coalesce the common code and invoke the 
OVS and OVN specific parts separately.
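
Purely as an illustration of that idea (the class and method names below are
hypothetical, not existing networking-sfc code), the common port-chain handling
could live in a shared base class with only the backend-specific step differing:

    class CommonSfcDriver(object):
        def create_port_chain(self, context):
            # shared translation of the neutron port-chain dict
            chain = dict(context.current)
            self._apply_chain(chain)

        def _apply_chain(self, chain):
            raise NotImplementedError

    class OvsSfcDriver(CommonSfcDriver):
        def _apply_chain(self, chain):
            pass  # program OVS flows for the chain here

    class OvnSfcDriver(CommonSfcDriver):
        def _apply_chain(self, chain):
            pass  # write OVN northbound rows via the IDL here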

Regards
Amitabha

On Jun 7, 2016, at 9:54 PM, John McDowall wrote:

Juno, Srilatha,

I need some help – I have fixed most of the obvious typos in the three repos 
and merged them with mainline. There is still a problem with the build I think 
in mech_driver.py but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function that 
creates a deep copy of the port-chain dict, 
create_port_chain(self,contact,port_chain).

Looking at networking-ovn I think it should use mech_driver.py so we can call 
the OVS-IDL to send the parameters to ovn. However I am not sure of the best 
way to do it. Could you make some suggestions or send me some sample code 
showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also Louis from the networking-sfc 
team has posted a draft blueprint.

Regards

John

From: Na Zhu
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall, Ryan Moats
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I do not know of a better approach. I think it is good to pass all the 
parameters in the creation of a port chain; this avoids saving data in the 
northbound DB that is never used. We can do it that way for now, and if the 
community has other ideas, we can change. What do you think?

Hi Ryan,

Do you agree with that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org"

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread John McDowall
Juno,

Thanks – added the code and everything builds, just need to debug end-to-end 
now.  I think your approach is the best so far: all the IDL code for accessing 
ovs/ovn is in networking-ovn. The OVN driver in networking-sfc calls the IDL 
code to access ovs/ovn. There is minimal linkage between networking-sfc and 
networking-ovn, just one import:

from networking_ovn.ovsdb import impl_idl_ovn

I think this is what Ryan was asking for.

I have updated all repos so we can think about creating WIP patches.

Regards

John
From: Na Zhu
Date: Wednesday, June 8, 2016 at 12:44 AM
To: John McDowall
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I think you can create ovsdb idl client in networking-sfc to connect to 
OVN_Northbound DB, then call the APIs you add to networking-ovn to configure 
SFC.
Now OVN is a ML2 mechanism driver (OVNMechanismDriver), not core plugin, the 
OVN L3 (OVNL3RouterPlugin) is a neutron service plugin like vpn, sfc and ect.

You can refer to method OVNMechanismDriver._ovn and OVNL3RouterPlugin._ovn, 
they both create ovsdb idl client object, so in your ovn driver, you can do it 
in the same way. Here is the code sample:

class OVNSfcDriver(driver_base.SfcDriverBase,
                   ovs_sfc_db.OVSSfcDriverDB):
    ..
    @property
    def _ovn(self):
        if self._ovn_property is None:
            LOG.info(_LI("Getting OvsdbOvnIdl"))
            self._ovn_property = impl_idl_ovn.OvsdbOvnIdl(self)
        return self._ovn_property

    ..
    @log_helpers.log_method_call
    def create_port_chain(self, context):
        port_chain = context.current
        for flow_classifier in port_chain:
            # first get the flow classifier contents,
            # then call self._ovn.create_lflow_classifier()
        for port_pair_group in port_chain:
            # get the port_pair_group contents,
            # then call self._ovn.create_lport_pair_group()
            for port_pair in port_pair_group:
                # first get the port_pair contents,
                # then call self._ovn.create_lport_pair()







Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN, Srilatha Tangirala
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)"
Date: 2016/06/08 12:55
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Juno, Srilatha,

I need some help – I have fixed most of the obvious typos in the three repos 
and merged them with mainline. There is still a problem with the build I think 
in mech_driver.py but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function that 
creates a deep copy of the port-chain dict, 
create_port_chain(self,contact,port_chain).

Looking at networking-ovn I think it should use mech_driver.py so we can call 
the OVS-IDL to send the parameters to ovn. However I am not sure of the best 
way to do it. Could you make some suggestions or send me some sample code 
showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also Louis from the networking-sfc 
has posted a draft blueprint.

Regards

John

From: Na Zhu
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall, Ryan Moats
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I do not know 

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread John McDowall
Juno,

Sorry, I jumped the gun; it is not published yet :-(. I will send you the link when it is 
published.

Apologies

John

From: Na Zhu
Date: Wednesday, June 8, 2016 at 1:41 AM
To: John McDowall
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

John,

Is the blueprint Louis posted this one?
https://blueprints.launchpad.net/networking-sfc/+spec/networking-sfc-ovn-driver

If not, can you send me the link?


Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN, Srilatha Tangirala
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)"
Date: 2016/06/08 12:55
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Juno, Srilatha,

I need some help – I have fixed most of the obvious typos in the three repos 
and merged them with mainline. There is still a problem with the build I think 
in mech_driver.py but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function that 
creates a deep copy of the port-chain dict, 
create_port_chain(self,contact,port_chain).

Looking at networking-ovn I think it should use mech_driver.py so we can call 
the OVS-IDL to send the parameters to ovn. However I am not sure of the best 
way to do it. Could you make some suggestions or send me some sample code 
showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also Louis from the networking-sfc 
has posted a draft blueprint.

Regards

John

From: Na Zhu
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall, Ryan Moats
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I do not know of a better approach. I think it is good to pass all the 
parameters in the creation of a port chain; this avoids saving data in the 
northbound DB that is never used. We can do it that way for now, and if the 
community has other ideas, we can change. What do you think?

Hi Ryan,

Do you agree with that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org", Ryan Moats, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Date: 2016/06/06 23:36
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Juno,

Let me check – my intention was that the networking-sfc OVNB driver would 
configure all aspects 

[openstack-dev] [kolla] cinder implementation + lvm

2016-06-08 Thread Carlos Cesario
Hi Dev-team, 

Please, if possible, could someone confirm some details about the Cinder 
implementation in Kolla (current master branch)? 
I have been facing some problems with a Kolla deploy (AIO method) and Cinder with LVM. 
The current code on the master branch does not deploy Cinder without enabling iSCSI 
(enable_iscsi); this was already reported on the list: 
https://www.mail-archive.com/openstack-dev%40lists.openstack.org/msg85315.html 

Another point is that the iSCSI driver appears to be mandatory for LVM to work. According to the Cinder 
documentation 
http://docs.openstack.org/mitaka/install-guide-rdo/cinder-storage-install.html 
the iSCSI driver is not needed for LVM to work. Is it a Kolla dependency? If yes, 
is there a spec doc for it? 


Best regards, 


Carlos 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn] Integration with OVN NAT gateway (Proposal)

2016-06-08 Thread Amitabha Biswas
Here is the proposal in etherpad to make it more readable:

https://etherpad.openstack.org/p/Integration_with_OVN_L3_Gateway 


Thanks
Amitabha

> On Jun 7, 2016, at 5:12 PM, Amitabha Biswas  wrote:
> 
> Sorry that was a typo, it should read:
> 
>> Note that the MAC addresses of gtrp and dtrp will be the same on each OVN 
>> Join Network, but because they are in different branches of the network 
>> topology it doesn’t matter.
> Amitabha
> 
>> On Jun 7, 2016, at 4:39 PM, Bhalachandra Banavalikar wrote:
>> 
>> Can you please provide more details on lgrp and lip ports (last bullet in 
>> section 1)?
>> 
>> Thanks,
>> Bhal
>> 
>> Amitabha Biswas ---06/07/2016 01:56:23 PM---This proposal 
>> outlines the modifications needed in networking-ovn (addresses 
>> https://bugs.launchpad .
>> 
>> From:  Amitabha Biswas
>> To:  "OpenStack Development Mailing List (not for usage questions)"
>> Cc:  Chandra Sekhar Vejendla/San Jose/IBM@IBMUS
>> Date:  06/07/2016 01:56 PM
>> Subject:  [openstack-dev] [neutron][networking-ovn] Integration with OVN NAT 
>> gateway (Proposal)
>> 
>> 
>> 
>> 
>> This proposal outlines the modifications needed in networking-ovn (addresses 
>> https://bugs.launchpad.net/networking-ovn/+bug/1551717) to provide 
>> Floating IP (FIP) and SNAT using the L3 gateway router patches.
>> 
>> http://patchwork.ozlabs.org/patch/624312/
>> http://patchwork.ozlabs.org/patch/624313/
>> http://patchwork.ozlabs.org/patch/624314/
>> http://patchwork.ozlabs.org/patch/624315/
>> http://patchwork.ozlabs.org/patch/629607/
>> 
>> Diagram:
>> 
>> [Diagram: NET 1 and NET 2 attach via router ports RP1 and RP2 to a
>> distributed router (DR). DR's transit port DTRP (169.254.128.2) connects to
>> the Transit Network (169.254.128.0/30); the Gateway Router (GW) connects to
>> the Transit Network via its port GTRP (169.254.128.1) and also attaches to
>> the Provider Network.]
>> 
>> New Entities:
>> OVN Join/Transit Networks
>> One per Neutron Router - /30 address space with only 2 ports for e.g. 
>> 169.254.128.0/30
>> Created when an external gateway is added to a router.
>> One extra datapath per router with an External Gateway.
>> (Alternate option - One Transit Network in a deployment, IPAM becomes a 
>> headache - Not discussed here).
>> Prevent Neutron from using that /30 address space. Specify in networking-ovn 
>> conf file.
>> Create 1 new “Join” neutron network (to represent all Join OVN Networks) in 
>> the networking-ovn.
>> Note that it may be possible to replace the Join/Transit network using 
>> Router Peering in later versions (not discussed here).
>> Allocate 2 ports in the Join network in the networking-ovn plugin.
>> Logical Gateway Transit Router Port (gtrp), 169.254.128.1
>> Logical Distributed Transit Router Port (dtrp), 169.254.128.2
>> Note that Neutron only sees 1 Join network with 2 ports; OVN sees a replica 
>> of this Join network as a new Logical Switch for each Gateway Router. The 
>> mapping of OVN Logical Switch(es) Join(s) to Gateway Router is discussed in 
>> OVN (Default) Gateway Routers below.
>> Note that the MAC addresses of gtrp and dtrp will be the same on each OVN 
>> Join Network, but because they are in different branches of the network 
>> topology it doesn’t matter.
>> OVN (Default) Gateway Routers:
>> One per Neutron Router.
>> 2 ports
>> Logical Gateway Transit Router Port (gtrp), 169.254.128.1 (same for each OVN 
>> Join network).
>> External/Provider Router Port (legwrp), this is allocated by neutron.
>> Scheduling - The current OVN gateway proposal relies on the CMS/nbctl to 
>> decide on which hypervisor (HV) to schedule a particular gateway router.
>> A setting on the chassis (new external_id key or a new column) that allows 
>> the hypervisor admin to specify that a chassis can or cannot be used to host 
>> a gateway router (similar to a network node in OpenStack). Default - Allow 
>> (for compatibility purposes).
>> The networking-ovn plugin picks up the list of “candidate” chassis from the 
>> Southbound DB and uses an existing scheduling algorithm
>> Use a simple random.choice i.e. ChanceScheduler (Version 1)
>> Tap into the neutron’s LeastRouterScheduler - but 
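
As a minimal sketch of the "version 1" ChanceScheduler idea mentioned in the
scheduling bullets above (the function and the chassis list are hypothetical,
not code from the proposal or from networking-ovn):

    import random

    def schedule_gateway_chassis(candidate_chassis):
        # Pick one chassis at random from those whose admins allow hosting
        # a gateway router (ChanceScheduler-style, proposal "version 1").
        if not candidate_chassis:
            raise RuntimeError('no chassis is allowed to host a gateway router')
        return random.choice(candidate_chassis)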

[openstack-dev] [new][ironic] bifrost 1.0.2 release (mitaka)

2016-06-08 Thread no-reply
We are satisfied to announce the release of:

bifrost 1.0.2: Deployment of physical machines using OpenStack Ironic
and Ansible

This release is part of the mitaka stable release series.

For more details, please see below.

Changes in bifrost 1.0.1..1.0.2
---

9db927b Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index a703a54..cdd2511 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ pbr>=1.6 # Apache-2.0
-Babel>=1.3 # BSD
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Re-licensing OpenStack charms under Apache 2.0

2016-06-08 Thread Gauvain Pocentek

Hi James,

Le 2016-06-08 12:20, James Page a écrit :

Hi

We're currently blocked on becoming an OpenStack project under the
big-tent by the licensing of the 26 OpenStack charms under GPL v3.

I'm proposing that we re-license the following code repositories as 
Apache 2.0:


  charm-ceilometer
  charm-ceilometer-agent
  charm-ceph
  charm-ceph-mon
  charm-ceph-osd
  charm-ceph-radosgw
  charm-cinder
  charm-cinder-backup
  charm-cinder-ceph
  charm-glance
  charm-hacluster
  charm-heat
  charm-keystone
  charm-lxd
  charm-neutron-api
  charm-neutron-api-odl
  charm-neutron-gateway
  charm-neutron-openvswitch
  charm-nova-cloud-controller
  charm-nova-compute
  charm-odl-controller
  charm-openstack-dashboard
  charm-openvswitch-odl
  charm-percona-cluster
  charm-rabbitmq-server
  charm-swift-proxy
  charm-swift-storage

The majority of contributors are from Canonical (from whom I have
permission to make this switch) with a further 18 contributors from
outside of Canonical who I will be directly contacting for approval in
gerrit as reviews are raised for each repository.


No problem on my side with changing the license to Apache 2.0.

Thanks for checking with us.

Gauvain




Any new charms and supporting repositories will be licensed under
Apache 2.0 from the outset.

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Gauvain Pocentek

Objectif Libre - Infrastructure et Formations Linux
http://www.objectif-libre.com
phone: +33 972 54 98 01 / +33 611 60 84 25

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] Virtual Mid Cycle Coordinates - July 19/20

2016-06-08 Thread Fabio Giannetti (fgiannet)
Monasca Mid Cycle Day 1
July 19 2016
7am to noon PDT

Webex

Join WebEx meeting:
https://cisco.webex.com/ciscosales/j.php?MTID=mb490140119f1f6f518160d85b1080a13


Meeting number: 200 700 937
Meeting password: mXdvExYq
  


Join by phone  
+1-408-525-6800 Call-in toll number (US/Canada)
+1-866-432-9903 Call-in toll-free number (US/Canada)
Access code: 200 700 937
Numeric meeting password: 17093037



Monasca Mid Cycle Day 2
July 20 2016
7am to noon PDT

Webex


Join WebEx meeting:
https://cisco.webex.com/ciscosales/j.php?MTID=m84f9f81d7c1c171be6365716522de15e

Meeting number: 205 141 519
Meeting password: 8VyzUiyr

Join by phone  
+1-408-525-6800 Call-in toll number (US/Canada)
+1-866-432-9903 Call-in toll-free number (US/Canada)
Access code: 205 141 519
Numeric meeting password: 01558880


See you there.
Fabio


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-08 Thread Mohammad Banikazemi


For the Kuryr project, in order to support blocking until vifs are plugged
in (that is, adding config options similar to the following options defined
in Nova: vif_plugging_is_fatal and vif_plugging_timeout), we need to detect
that the Neutron plugin being used is done with plugging a given vif.

Here are a few options:

1- The simplest approach seems to be polling for the status of the Neutron
port to become Active. (This may lead to scalability issues but short of
having a specific goal for scalability, it is not clear that will be the
case.)
2- Alternatively, we could subscribe to the message queue and wait for such
a port update event.
3- It was also suggested that we could use l2 agent extension to detect
such an event but that seems to limit us to certain Neutron plugins and
therefore not acceptable.

I was wondering if there are other and better options.

Best,

Mohammad
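
For what it's worth, a minimal sketch of option 1 using python-neutronclient
(the helper name, timeout values and client construction are assumptions for
illustration, not Kuryr code):

    import time

    from neutronclient.v2_0 import client as neutron_client

    def wait_for_port_active(neutron, port_id, timeout=60, interval=2):
        # Poll the Neutron port until its status becomes ACTIVE or we give up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            port = neutron.show_port(port_id)['port']
            if port['status'] == 'ACTIVE':
                return port
            time.sleep(interval)
        raise RuntimeError('port %s did not become ACTIVE in time' % port_id)

    # neutron = neutron_client.Client(...)  # built with the caller's credentials
    # wait_for_port_active(neutron, port_id)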
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [tacker] Request to create puppet-tacker

2016-06-08 Thread Dan Radez
FYI, tacker community:
https://review.openstack.org/#/c/327173/
https://review.openstack.org/#/c/327178/

Radez

On 06/08/2016 10:54 AM, Dan Radez wrote:
> sure will, thx Emilien
> Dan
> 
> On 06/08/2016 09:08 AM, Emilien Macchi wrote:
>> Yeah, super good news!
>> Please do the same as I did in https://review.openstack.org/326720 and
>> https://review.openstack.org/326721
>>
>> Add me as reviewer because I need to sign-off the 2 patches (I'm current 
>> PTL).
>> Once it's done & merged, you'll be able to deprecate the old
>> repository on your github with a nice README giving the link of the
>> new module.
>>
>> I haven't looked at the code yet but we'll probably have to adjust
>> some bits, add some testing (beaker [1], etc). Please make sure that
>> we have some packaging available in RDO (I checked on Ubuntu and they
>> don't provide it) so we can download it during our beaker tests.
>>
>> Also a last thing, in order to help us to make the module compliant &
>> consistent, please read how we wrote the recent modules. For example
>> you can look puppet-gnocchi or puppet-aodh that are clean modules.
>> We recently had a lot of new modules: vitrage, watcher, tacker,
>> congress, (I'm working now on octavia) - which means reviews might
>> take more time than usual because our team will review the new modules
>> carefuly to make sure the code is clean & consistent from beginning
>> (and avoid the puppet-monasca story). Please be patient and help us by
>> reading how we did other modules.
>>
>> Thanks a ton for your collaboration and we're looking forward for this
>> new challenge,
>>
>> [1] https://github.com/puppetlabs/beaker
>>
>> On Wed, Jun 8, 2016 at 8:11 AM, Iury Gregory  wrote:
>>> Awesome!
>>>
>>> You just need to follow the same process that Emilien pointed for
>>> puppet-congress. If you need any help please let us know.
>>>
>>> 1- Move  https://github.com/radez/puppet-tacker to OpenStack
>>> 2- Add it to our governance
>>> 3- Follow
>>> http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
>>>
>>>
>>>
>>> 2016-06-08 8:56 GMT-03:00 Dan Radez :

 I also have puppet-tacker module that has existed before the project was
 part of big tent.

 It was based on cookie cutter originally but will probably need some
 adjustments to adhere to standards.

 I'd like to get the project established so that the code can be run
 through the proper review process.

 existing repo is here: https://github.com/radez/puppet-tacker

 Dan Radez
 freenode: radez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> --
>>>
>>> ~
>>> Att[]'s
>>> Iury Gregory Melo Ferreira
>>> Master student in Computer Science at UFCG
>>> E-mail:  iurygreg...@gmail.com
>>> ~
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [tacker] Request to create puppet-tacker

2016-06-08 Thread Dan Radez
sure will, thx Emilien
Dan

On 06/08/2016 09:08 AM, Emilien Macchi wrote:
> Yeah, super good news!
> Please do the same as I did in https://review.openstack.org/326720 and
> https://review.openstack.org/326721
> 
> Add me as reviewer because I need to sign-off the 2 patches (I'm current PTL).
> Once it's done & merged, you'll be able to deprecate the old
> repository on your github with a nice README giving the link of the
> new module.
> 
> I haven't looked at the code yet but we'll probably have to adjust
> some bits, add some testing (beaker [1], etc). Please make sure that
> we have some packaging available in RDO (I checked on Ubuntu and they
> don't provide it) so we can download it during our beaker tests.
> 
> Also a last thing, in order to help us to make the module compliant &
> consistent, please read how we wrote the recent modules. For example
> you can look puppet-gnocchi or puppet-aodh that are clean modules.
> We recently had a lot of new modules: vitrage, watcher, tacker,
> congress, (I'm working now on octavia) - which means reviews might
> take more time than usual because our team will review the new modules
> carefuly to make sure the code is clean & consistent from beginning
> (and avoid the puppet-monasca story). Please be patient and help us by
> reading how we did other modules.
> 
> Thanks a ton for your collaboration and we're looking forward for this
> new challenge,
> 
> [1] https://github.com/puppetlabs/beaker
> 
> On Wed, Jun 8, 2016 at 8:11 AM, Iury Gregory  wrote:
>> Awesome!
>>
>> You just need to follow the same process that Emilien pointed for
>> puppet-congress. If you need any help please let us know.
>>
>> 1- Move  https://github.com/radez/puppet-tacker to OpenStack
>> 2- Add it to our governance
>> 3- Follow
>> http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
>>
>>
>>
>> 2016-06-08 8:56 GMT-03:00 Dan Radez :
>>>
>>> I also have puppet-tacker module that has existed before the project was
>>> part of big tent.
>>>
>>> It was based on cookie cutter originally but will probably need some
>>> adjustments to adhere to standards.
>>>
>>> I'd like to get the project establish so that the code can be run
>>> through the proper review process.
>>>
>>> exiting repo is here: https://github.com/radez/puppet-tacker
>>>
>>> Dan Radez
>>> freenode: radez
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>>
>> ~
>> Att[]'s
>> Iury Gregory Melo Ferreira
>> Master student in Computer Science at UFCG
>> E-mail:  iurygreg...@gmail.com
>> ~
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Spyros Trigazis
Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu  wrote:

> Ricardo,
>
> Thanks for the offer. Would I know where is the exact location?
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> > Sent: June-08-16 5:43 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
> >
> > Hi Hongbin.
> >
> > Not sure how this fits everyone, but we would be happy to host it at
> > CERN. How do people feel about it? We can add a nice tour of the place
> > as a bonus :)
> >
> > Let us know.
> >
> > Ricardo
> >
> >
> >
> > On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> > wrote:
> > > Hi all,
> > >
> > >
> > >
> > > Please find the Doodle pool below for selecting the Magnum midcycle
> > date.
> > > Presumably, it will be a 2 days event. The location is undecided for
> > now.
> > > The previous midcycles were hosted in bay area so I guess we will
> > stay
> > > there at this time.
> > >
> > >
> > >
> > > http://doodle.com/poll/5tbcyc37yb7ckiec
> > >
> > >
> > >
> > > In addition, the Magnum team is finding a host for the midcycle.
> > > Please let us know if you interest to host us.
> > >
> > >
> > >
> > > Best regards,
> > >
> > > Hongbin
> > >
> > >
> > >
> > __
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] bug in handling of ISOLATE thread policy

2016-06-08 Thread Chris Friesen

On 06/07/2016 11:36 AM, Chris Friesen wrote:

Hi,

The full details are available at https://bugs.launchpad.net/nova/+bug/1590091
but the short version is this:

1) I'm running stable/mitaka in devstack.  I've got a small system with 2 pCPUs,
both marked as available for pinning.  They're two cores of a single processor,
no threads.

2) I tried to boot an instance with two dedicated CPUs and a thread policy of
ISOLATE, but the NUMATopology filter fails my host.


In case anyone comes across this later, it turns out that Stephen already had a 
bug for this issue:


https://bugs.launchpad.net/bugs/1550317


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Hongbin Lu
Ricardo,

Thanks for the offer. May I know where the exact location is?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
> 
> Hi Hongbin.
> 
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
> 
> Let us know.
> 
> Ricardo
> 
> 
> 
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you are interested in hosting us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Proposal of a virtual mid-cycle instead of the co-located

2016-06-08 Thread Nikhil Komawar
This event is confirmed, more details will be out soon.


On 5/29/16 11:15 PM, Nikhil Komawar wrote:
> Hello,
>
>
> I would like to propose a two-day virtual Glance mid-cycle, with 4-hour
> sessions each day, on Wednesday June 15th & Thursday June 16th, 1400 UTC onward. This
> is a replacement for the Glance mid-cycle meetup that we've cancelled.
> Some people have already expressed some items to discuss then, and I
> would like us to spend a couple of hours discussing the
> glance-specs so that we can apply the spec soft freeze [1] in a better capacity.
>
>
> We can try to accommodate topics according to the TZ, for example topics
> proposed by folks in EMEA earlier in the day vs. for those in the PDT TZ
> in the later part of the event.
>
>
> Please vote with +1, 0, -1. If the time/date doesn't work, please
> propose 2-3 additional slots.
>
>
> We can use either hangouts, bluejeans or an IBM conferencing tool as
> required, which is to be finalized closer to the event.
>
>
> I will setup an agenda etherpad once we decide on the date/time.
>
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/096175.html
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] OSIC cluster accepteed

2016-06-08 Thread Jeffrey Zhang
Cool.
Do we have any test list now?
And how can I help with this? I am very interested in these
tests.

On Tue, Jun 7, 2016 at 4:52 PM, Paul Bourke  wrote:

> Michal,
>
> I'd be interested in helping with this. Keep us updated!
>
> -Paul
>
>
> On 03/06/16 17:58, Michał Jastrzębski wrote:
>
>> Hello Kollagues,
>>
>> Some of you might know that I submitted request for 130 nodes out of
>> osic cluster for testing Kolla. We just got accepted. Time window will
>> be 3 weeks between 7/22 and 8/14, so we need to make most of it. I'd
>> like some volunteers to help me with tests, setup and such. We need to
>> prepare test scenarios, streamline bare metal deployment and prepare
>> architectures we want to run through. I would also make use of our
>> global distribution to keep nodes being utilized 24h.
>>
>> Nodes we're talking about are pretty powerful 256gigs of ram each, 12
>> ssd disks in each and 10Gig networking all the way. We will get IPMI
>> access to it so bare metal provisioning will have to be there too
>> (good time to test out bifrost right?:))
>>
>> Cheers,
>> Michal
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread Mauricio Lima
+1

2016-06-08 10:29 GMT-03:00 Jeffrey Zhang :

> it will be mid-night (0:00) in my local time. But i think i am OK with it.
> so +1 for this.
>
> On Wed, Jun 8, 2016 at 9:12 PM, Paul Bourke 
> wrote:
>
>> +1
>>
>> On 08/06/16 13:54, Swapnil Kulkarni (coolsvap) wrote:
>>
>>> Dear Kollagues,
>>>
>>> Some time ago we discussed the requirement of alternating meeting
>>> times for Kolla weekly meeting due to major contributors from
>>> kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
>>> implemented alternate US/APAC meeting times.
>>>
>>> With kolla-mesos not active anymore and looking at the current active
>>> contributors, I wish to reinstate the UTC 1600 time for all Kolla
>>> Weekly meetings.
>>>
>>> Please let me know your views.
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread Jeffrey Zhang
It will be midnight (0:00) in my local time, but I think I am OK with it,
so +1 for this.

On Wed, Jun 8, 2016 at 9:12 PM, Paul Bourke  wrote:

> +1
>
> On 08/06/16 13:54, Swapnil Kulkarni (coolsvap) wrote:
>
>> Dear Kollagues,
>>
>> Some time ago we discussed the requirement of alternating meeting
>> times for Kolla weekly meeting due to major contributors from
>> kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
>> implemented alternate US/APAC meeting times.
>>
>> With kolla-mesos not active anymore and looking at the current active
>> contributors, I wish to reinstate the UTC 1600 time for all Kolla
>> Weekly meetings.
>>
>> Please let me know your views.
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread Paul Bourke

+1

On 08/06/16 13:54, Swapnil Kulkarni (coolsvap) wrote:

Dear Kollagues,

Some time ago we discussed the requirement of alternating meeting
times for Kolla weekly meeting due to major contributors from
kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
implemented alternate US/APAC meeting times.

With kolla-mesos not active anymore and looking at the current active
contributors, I wish to reinstate the UTC 1600 time for all Kolla
Weekly meetings.

Please let me know your views.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread Michał Jastrzębski
+1 from me. But I'd love to hear from Jeffrey if he'd be ok with that too.

On 8 June 2016 at 07:54, Swapnil Kulkarni (coolsvap)  wrote:
> Dear Kollagues,
>
> Some time ago we discussed the requirement of alternating meeting
> times for Kolla weekly meeting due to major contributors from
> kolla-mesos were not able to attend weekly meeting at UTC 1600 and we
> implemented alternate US/APAC meeting times.
>
> With kolla-mesos not active anymore and looking at the current active
> contributors, I wish to reinstate the UTC 1600 time for all Kolla
> Weekly meetings.
>
> Please let me know your views.
>
> --
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Request to create puppet-tacker

2016-06-08 Thread Emilien Macchi
Yeah, super good news!
Please do the same as I did in https://review.openstack.org/326720 and
https://review.openstack.org/326721

Add me as reviewer because I need to sign-off the 2 patches (I'm current PTL).
Once it's done & merged, you'll be able to deprecate the old
repository on your github with a nice README giving the link of the
new module.

I haven't looked at the code yet but we'll probably have to adjust
some bits, add some testing (beaker [1], etc). Please make sure that
we have some packaging available in RDO (I checked on Ubuntu and they
don't provide it) so we can download it during our beaker tests.

Also a last thing, in order to help us to make the module compliant &
consistent, please read how we wrote the recent modules. For example
you can look puppet-gnocchi or puppet-aodh that are clean modules.
We recently had a lot of new modules: vitrage, watcher, tacker,
congress, (I'm working now on octavia) - which means reviews might
take more time than usual because our team will review the new modules
carefully to make sure the code is clean & consistent from the beginning
(and avoid the puppet-monasca story). Please be patient and help us by
reading how we did other modules.

Thanks a ton for your collaboration; we're looking forward to this
new challenge,

[1] https://github.com/puppetlabs/beaker

On Wed, Jun 8, 2016 at 8:11 AM, Iury Gregory  wrote:
> Awesome!
>
> You just need to follow the same process that Emilien pointed for
> puppet-congress. If you need any help please let us know.
>
> 1- Move  https://github.com/radez/puppet-tacker to OpenStack
> 2- Add it to our governance
> 3- Follow
> http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
>
>
>
> 2016-06-08 8:56 GMT-03:00 Dan Radez :
>>
>> I also have puppet-tacker module that has existed before the project was
>> part of big tent.
>>
>> It was based on cookie cutter originally but will probably need some
>> adjustments to adhere to standards.
>>
>> I'd like to get the project establish so that the code can be run
>> through the proper review process.
>>
>> exiting repo is here: https://github.com/radez/puppet-tacker
>>
>> Dan Radez
>> freenode: radez
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
>
> ~
> Att[]'s
> Iury Gregory Melo Ferreira
> Master student in Computer Science at UFCG
> E-mail:  iurygreg...@gmail.com
> ~
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Got Failure:"fixtures._fixtures.timeout.TimeoutException"

2016-06-08 Thread Jim Rollenhagen
On Wed, Jun 08, 2016 at 01:29:49AM -0700, Clark Boylan wrote:
> On Tue, Jun 7, 2016, at 10:40 PM, zhangshuai wrote:
> > Hi all
> > 
> > I have a question with fixtures._fixtures.timeout.TimeoutException. like
> > following:
> > 
> > 
> > 
> > 
> > Traceback (most recent call last):
> > 
> >   File "smaug/tests/fullstack/test_checkpoints.py", line 73, in
> >   test_checkpoint_create
> > 
> > volume.id)
> > 
> >   File "smaug/tests/fullstack/test_checkpoints.py", line 51, in
> >   create_checkpoint
> > 
> > sleep(640)
> > 
> >   File
> >   
> > "/home/lexus/workspace/smaug/.tox/fullstack/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
> >   line 52, in signal_handler
> > 
> > raise TimeoutException()
> > 
> > fixtures._fixtures.timeout.TimeoutException
> > 
> > Ran 1 tests in 61.986s (-0.215s)
> > 
> > FAILED (id=213, failures=1)
> > 
> > 
> > 
> > error: testr failed (1)
> 
> By default the base test classes for many OpenStack projects implement a
> 60 second unittest timeout. If the unittest takes longer than 60 seconds
> an exception is raised and the test fails. I am guessing that smaug has
> inherited this behavior which leads to the failure when you attempt to
> sleep for 640 seconds.
> 
> You can address this by either changing the timeout or making your test
> run quicker.

There's no need for sleeping in tests. Assuming this tests that
something happens after 640 seconds, instead of actually sleeping for
640 seconds, you could instead mock the method that you get the time
from, for example:

@mock.patch.object(time_module, 'time')
def my_test(self, mock_time):
    mock_time.side_effect = [0, 640]
    test_a_thing()

Note that (at least in the past) mocking time.time directly confuses
testr and friends, so you may need some sort of wrapper around it like

def _time():
    return time.time()

Though I think this has been fixed recently, so try it without the
wrapper first. :)

// jim
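
If the test genuinely needs more wall-clock time rather than a mocked clock,
the other option Clark mentioned is raising the per-test timeout. A minimal
sketch, assuming the fullstack test class derives from testtools and that a
15-minute limit is acceptable (the class name and value here are made up):

    import fixtures
    import testtools

    class CheckpointFullstackTest(testtools.TestCase):
        def setUp(self):
            super(CheckpointFullstackTest, self).setUp()
            # Override the default per-test timeout; gentle=True raises
            # TimeoutException instead of killing the test process.
            self.useFixture(fixtures.Timeout(900, gentle=True))

Many OpenStack base test classes also read an OS_TEST_TIMEOUT environment
variable for the same purpose, so adjusting that in tox.ini may be enough.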

> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Request for changing the meeting time to 1600 UTC for all meetings

2016-06-08 Thread Swapnil Kulkarni (coolsvap)
Dear Kollagues,

Some time ago we discussed the need for alternating meeting
times for the Kolla weekly meeting, because major contributors from
kolla-mesos were not able to attend the weekly meeting at UTC 1600, and we
implemented alternate US/APAC meeting times.

With kolla-mesos no longer active and looking at the current active
contributors, I wish to reinstate the UTC 1600 time for all Kolla
weekly meetings.

Please let me know your views.

-- 
Best Regards,
Swapnil Kulkarni
irc : coolsvap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-08 Thread Jim Rollenhagen
On Wed, Jun 08, 2016 at 10:39:20AM +, Sam Betts (sambetts) wrote:
> 
> 
> On 07/06/2016 23:59, "Kris G. Lindgren" wrote:
> 
> Replying to a digest so sorry for the copy and pastes
> 
> 
> >> There's also been discussion of ways we could do ad-hoc changes in RAID 
> >> level,
> >> based on flavor metadata, during the provisioning process (rather than 
> >> ahead of
> >> time) but no code has been done for this yet, AFAIK.
> >
> > I'm still pretty interested in it, because I agree with anything said
> > above about building RAID ahead-of-time not being convenient. I don't
> > quite understand how such a feature would look like, we might add it as
> > a topic for midcycle.
> 
> This sounds like an interesting/acceptable way to handle this problem to me.  
> Update the node to set the desired raid state from the flavor.
> 
> >> - Inspection is geared towards using a different network and dnsmasq
> 
> >> infrastructure than what is in use for ironic/neutron.  Which also means 
> >> that in
> >> order to not conflict with dhcp requests for servers in ironic I need to 
> >> use
> >> different networks.  Which also means I now need to handle swinging server 
> >> ports
> >> between different networks.
> >> Inspector is designed to respond only to requests for nodes in the 
> >> inspection
> > phase, so that it *doesn't* conflict with provisioning of nodes by Ironic. 
> > I've
> > been using the same network for inspection and provisioning without issue 
> > -- so
> > I'm not sure what problem you're encountering here.
> 
> So I was mainly thinking about the use case of using inspector to onboard 
> unknown hosts into ironic (though I see I didn't mention that).  So in a 
> datacenter we are always onboarding servers.  Right now we boot a Linux 
> agent that "inventories" the box and adds it to our management system as a 
> node that can be consumed by a build request.  My understanding is that 
> inspector supports this as of Mitaka.  However, the install guide for 
> inspection states that you need to install its own dnsmasq instance for 
> inspection.  To me this implies that this is supposed to be a separate 
> network, since if I have 2 dhcp servers running on the same L2 network I am 
> going to get races between the 2 dhcp servers for normal provisioning 
> activities, especially if one dhcp server is configured to respond to 
> everything (so that it can onboard unknown hardware) and the other only to 
> specific hosts (ironic/neutron).  The only way that wouldn't be an issue is 
> if both inspector and ironic/neutron are using the same dhcp servers.  Or 
> am I missing something?
> 
> Ironic inspector handles this by managing Iptables as a firewall service to 
> white/black list nodes, only allowing nodes that should be talking to the 
> Inspector's dnsmasq instance to be served by it. This avoids any DHCP races 
> between Ironic and Inspector.

To add to this, the primary reason this is done is because Neutron
doesn't allow wildcard DHCP (there's a spec up for this, but it isn't
moving quickly).

// jim

> 
> 
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-08 Thread Matt Riedemann

On 6/8/2016 6:50 AM, Sean Dague wrote:

On 06/07/2016 04:55 PM, Matt Riedemann wrote:

I tested the glance v2 stack (glance v1 disabled) using a devstack
change here:

https://review.openstack.org/#/c/325322/

Now that the changes are merged up through the base nova image proxy and
the libvirt driver, and we just have hyper-v/xen driver changes for that
series, we should look at gating on this configuration.

I was originally thinking about adding a new job for this, but it's
probably better if we just change one of the existing integrated gate
jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.

Does anyone have an issue with that? Glance v1 is deprecated and the
configuration option added to nova (use_glance_v1) defaults to True for
> compat but is deprecated, and the Nova team plans to drop its v1 proxy
code in Ocata. So it seems like changing config to use v2 in the gate
jobs should be a non-issue. We'd want to keep at least one integrated
gate job using glance v1 to make sure we don't regress anything there in
Newton.


Honestly, I think we should take the Nova defaults (which will flip to
v2 shortly) and move forward. v1 usage in Nova will be deprecated in a
week. It will default to v2 for people in Newton and they will have to
manually change it to go back. And because we did the copy / paste
approach instead of common dynamic code, the chances for a v1 regression
that is not caught by our unit tests are very, very small. It's basically
frozen code.

And we're going to delete the v1 code paths entirely in September. By
the time anyone deploys the Newton code the master v1 code will be
deleted. And our answer is going to be move to v2 for everyone.

It doesn't make sense to me to drive up the complexity by testing both
paths. We'll have a new tested opinionated default that we lead with.

-Sean



I'm OK with making it the default stack in devstack, but it's also 
trivial to just run at least one job (the weirdo postgres job?) with v1 
enabled so we're at least covered in the gate before we remove it in 
Ocata. I don't see harm in that.


nova-network is deprecated and we haven't gone through and dropped that 
from all of our gate jobs.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Request to create puppet-tacker

2016-06-08 Thread Iury Gregory
Awesome!

You just need to follow the same process that Emilien pointed out for
puppet-congress. If you need any help, please let us know.

1- Move  https://github.com/radez/puppet-tacker to OpenStack
2- Add it to our governance
3- Follow
http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html



2016-06-08 8:56 GMT-03:00 Dan Radez :

> I also have a puppet-tacker module that existed before the project was
> part of the big tent.
>
> It was based on cookiecutter originally but will probably need some
> adjustments to adhere to standards.
>
> I'd like to get the project established so that the code can be run
> through the proper review process.
>
> Existing repo is here: https://github.com/radez/puppet-tacker
>
> Dan Radez
> freenode: radez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

~


Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
E-mail: iurygreg...@gmail.com
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-08 Thread Jiri Tomasek
On Wed, Jun 8, 2016 at 11:23 AM, Steven Hardy  wrote:

> On Tue, Jun 07, 2016 at 04:53:12PM -0400, Zane Bitter wrote:
> > On 07/06/16 15:57, Jay Dobies wrote:
> > > >
> > > > 1. Now that we support passing un-merged environment files to heat,
> > > > it'd be
> > > > good to support an optional description key for environments,
> > >
> > > I've never understood why the environment file doesn't have a
> > > description field itself. Templates have descriptions, and IMO it makes
> > > sense for an environment to describe what its particular additions to
> > > the parameters/registry do.
> >
> > Just use a comment?
>
> This doesn't work for any of the TripleO use-cases because you can't parse
> a comment.
>
> The requirements are twofold:
>
> 1. Prior to creating the stack, we need a way to present choices to the
> user about which environment files to enable.  This is made much easier if
> you can include a human-readable description about what the environment
> actually does.
>
> 2. After creating the stack, we need a way to easily introspect the stack
> and see what environments were enabled.  Same as above, it'd be
> super-awesome if we could just then strip out the description of what they
> do, so we don't have to maintain hacks like this:
>
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml
>
> The description is one potential easy win here, it just makes far more
> sense to keep the description of a thing inside the same file (just like we
> do already with HOT templates).
>
> The next step beyond that is the need to express dependencies between
> things, which is what I was trying to address via the
> https://review.openstack.org/#/c/196656/ spec - that kinda stalled when it
> took 7 months to land so we'll probably need that capabilities_map for that
> unless we can revive that effort.
>
> > > I'd be happy to write that patch, but I wanted to first double check
> > > that there wasn't a big philosophical reason why it shouldn't have a
> > > description.
> >
> > There's not much point unless you're also adding an API to retrieve
> > environment files like Steve mentioned. Comments get stripped when the
> yaml
> > is parsed, but that's fairly academic if you don't have a way to get it
> out
> > again.
>
> Yup, I'm absolutely proposing we add an interface to retrieve the
> environment files (or, in fact, the entire stack files map, and a list of
> environment_files).
>
> Steve
>


Hi, thanks for bringing this topic up. The capabilities map provides several
pieces of information about environments. We definitely need to get rid of it
in favor of having Heat provide this from the environment file metadata. How
much additional work would it be to enable environments to provide more
metadata than just a description?

From the GUI point of view, an information structure such as the following
would be much appreciated:

environments/environments/net-bond-with-vlans.yaml:

meta:
  label: Net Bond with Vlans
  description: >
Configure each role to use a pair of bonded nics (nic2 and
nic3) and configures an IP address on each relevant isolated network
for each role. This option assumes use of Network Isolation.
  requires:
- environments/network-isolation.yaml
- overcloud-resource-registry-puppet.yaml
  alternatives:
- environments/net-single-nic-with-vlans.yaml
  group:
- network-configuration

Grouping of environments is a bit problematic. We could introduce something
like 'group' which could categorize the environments. The problem is that
each group would eventually require its own entity to cover the group label
and description.
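For illustration, a minimal sketch (assuming the 'meta' structure proposed
above; the file names are purely illustrative) of how a UI or CLI could read
this metadata straight from the environment files instead of from
capabilities-map.yaml:

import yaml


def load_environment_meta(path):
    # Read the optional 'meta' section of an environment file; the keys
    # mirror the structure proposed above and default to empty values.
    with open(path) as f:
        env = yaml.safe_load(f) or {}
    meta = env.get('meta', {})
    return {
        'label': meta.get('label', path),
        'description': meta.get('description', ''),
        'requires': meta.get('requires', []),
        'group': meta.get('group', []),
    }


info = load_environment_meta('environments/net-bond-with-vlans.yaml')
print(info['label'])
for dep in info['requires']:
    print('  requires: %s' % dep)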


-- Jirka


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Request to create puppet-tacker

2016-06-08 Thread Dan Radez
I also have a puppet-tacker module that existed before the project was
part of the big tent.

It was based on cookiecutter originally but will probably need some
adjustments to adhere to standards.

I'd like to get the project established so that the code can be run
through the proper review process.

Existing repo is here: https://github.com/radez/puppet-tacker

Dan Radez
freenode: radez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-08 Thread Sean Dague
On 06/07/2016 04:55 PM, Matt Riedemann wrote:
> I tested the glance v2 stack (glance v1 disabled) using a devstack
> change here:
> 
> https://review.openstack.org/#/c/325322/
> 
> Now that the changes are merged up through the base nova image proxy and
> the libvirt driver, and we just have hyper-v/xen driver changes for that
> series, we should look at gating on this configuration.
> 
> I was originally thinking about adding a new job for this, but it's
> probably better if we just change one of the existing integrated gate
> jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.
> 
> Does anyone have an issue with that? Glance v1 is deprecated and the
> configuration option added to nova (use_glance_v1) defaults to True for
> compat but is deprecated, and the Nova team plans to drop its v1 proxy
> code in Ocata. So it seems like changing config to use v2 in the gate
> jobs should be a non-issue. We'd want to keep at least one integrated
> gate job using glance v1 to make sure we don't regress anything there in
> Newton.

Honestly, I think we should take the Nova defaults (which will flip to
v2 shortly) and move forward. v1 usage in Nova will be deprecated in a
week. It will default to v2 for people in Newton and they will have to
manually change it to go back. And because we did the copy / paste
approach instead of common dynamic code, the chances for a v1 regression
that is not caught by our unit tests are very, very small. It's basically
frozen code.

And we're going to delete the v1 code paths entirely in September. By
the time anyone deploys the Newton code the master v1 code will be
deleted. And our answer is going to be move to v2 for everyone.

It doesn't make sense to me to drive up the complexity by testing both
paths. We'll have a new tested opinionated default that we lead with.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-06-08 Thread Oleksii Chuprykov
One more example of how you may do it using yaql:

oleksii@oleksii:~$ cat example.yaml
heat_template_version: 2013-05-23

parameters:
  az_list:
    type: string
  count:
    type: number

resources:
  rg:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: count}
      resource_def:
        type: server.yaml
        properties:
          index: "%index%"
          availability_zones: {get_param: az_list}

oleksii@oleksii:~$ cat server.yaml
heat_template_version: 2013-05-23
parameters:
  availability_zones:
    type: comma_delimited_list
  index:
    type: string
resources:
  instance:
    type: OS::Nova::Server
    properties:
      availability_zone:
        yaql:
          expression: >
            $.data.availability_zones[int($.data.index) mod
            $.data.availability_zones.len()]
          data:
            availability_zones: {get_param: availability_zones}
            index: {get_param: index}
      flavor: m1.tiny
      image: cirros

For example, if count == 4 and az_list=[az1, az2] you will have instance1
in az1, instance2 in az2 and instance3 in az1, instance4 in az2.
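The round-robin indexing can be sanity-checked outside of Heat with a few
lines of plain Python that mirror the yaql expression above:

az_list = ['az1', 'az2']
count = 4
for index in range(count):
    # index mod len(az_list) is exactly what the yaql expression computes
    print('instance%d -> %s' % (index + 1, az_list[index % len(az_list)]))
# instance1 -> az1, instance2 -> az2, instance3 -> az1, instance4 -> az2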



On Wed, Jun 8, 2016 at 12:53 AM, Hongbin Lu  wrote:

> Hi Heat team,
>
> A question inline.
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Steven Hardy [mailto:sha...@redhat.com]
> > Sent: March-03-16 3:57 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
> > different availability zones
> >
> > On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:
> > > On 02/03/16 05:50, Mathieu Velten wrote:
> > > >Hi all,
> > > >
> > > >I am looking at a way to spawn nodes in different specified
> > > >availability zones when deploying a cluster with Magnum.
> > > >
> > > >Currently Magnum directly uses predefined Heat templates with Heat
> > > >parameters to handle configuration.
> > > >I tried to reach my goal by sticking to this model, however I
> > > >couldn't find a suitable Heat construct that would allow that.
> > > >
> > > >Here are the details of my investigation :
> > > >- OS::Heat::ResourceGroup doesn't allow to specify a list as a
> > > >variable that would be iterated over, so we would need one
> > > >ResourceGroup by AZ
> > > >- OS::Nova::ServerGroup only allows restriction at the hypervisor
> > > >level
> > > >- OS::Heat::InstanceGroup has an AZs parameter but it is marked
> > > >unimplemented, and is CFN-specific.
> > > >- OS::Nova::HostAggregate only seems to allow adding some metadatas
> > > >to a group of hosts in a defined availability zone
> > > >- repeat function only works inside the properties section of a
> > > >resource and can't be used at the resource level itself, hence
> > > >something like that is not allowed :
> > > >
> > > >resources:
> > > >   repeat:
> > > > for_each:
> > > >   <%az%>: { get_param: availability_zones }
> > > > template:
> > > >   rg-<%az%>:
> > > > type: OS::Heat::ResourceGroup
> > > > properties:
> > > >   count: 2
> > > >   resource_def:
> > > > type: hot_single_server.yaml
> > > > properties:
> > > >   availability_zone: <%az%>
> > > >
> > > >
> > > >The only possibility that I see is generating a ResourceGroup by AZ,
> > > >but it would induce some big changes in Magnum to handle
> > > >modification/generation of templates.
> > > >
> > > >Any ideas ?
> > >
> > > This is a long-standing missing feature in Heat. There are two
> > > blueprints for this (I'm not sure why):
> > >
> > > https://blueprints.launchpad.net/heat/+spec/autoscaling-availabilityzones-impl
> > > https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones
> > >
> > > The latter had a spec with quite a lot of discussion:
> > >
> > > https://review.openstack.org/#/c/105907
> > >
> > > And even an attempted implementation:
> > >
> > > https://review.openstack.org/#/c/116139/
> > >
> > > which was making some progress but is long out of date and would need
> > > serious work to rebase. The good news is that some of the changes I
> > > made in Liberty like https://review.openstack.org/#/c/213555/ should
> > > hopefully make it simpler.
> > >
> > > All of which is to say, if you want to help then I think it would be
> > > totally do-able to land support for this relatively early in Newton :)
> > >
> > >
> > > Failing that, the only think I can think to try is something I am
> > > pretty sure won't work: a ResourceGroup with something like:
> > >
> > >   availability_zone: {get_param: [AZ_map, "%i"]}
> > >
> > > where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
> > > "az-1", ...} and you're using the member index to pick out the AZ to
> > > use from the parameter. I don't think that works (if "%i" is resolved
> > > after get_param then it won't, and I suspect that's the case) but it's
> > > worth a try if you 

Re: [openstack-dev] [charms] Renaming openvswitch-odl -> openvswitch

2016-06-08 Thread Ryan Beisner
Project renames are limited to certain maintenance windows @ infra, so
that's something to be aware of re: timing and coordination.

Also for clarity, can we discuss and highlight the differing use cases and
eventual plans a bit re: the neutron-openvswitch subordinate charm vs. the
openvswitch subordinate charm?

Cheers,

Ryan

On Wed, Jun 8, 2016 at 4:37 AM, James Page  wrote:

> Hi All
>
> The current openvswitch-odl charm is designed for use with
> nova-compute/neutron-gateway and the odl-controller charm, but it declares
> and configures OVS in such a way that I think it has broader applicability
> than just OpenDayLight; specifically it looks like it would also work with
> ONOS which configures the OVS manager in exactly the same way as for ODL.
>
> I'd like to propose renaming the openvswitch-odl charm to just
> openvswitch to support this generalisation of its use.
>
> Thoughts?
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-08 Thread Sam Betts (sambetts)


On 07/06/2016 23:59, "Kris G. Lindgren" wrote:

Replying to a digest so sorry for the copy and pastes


>> There's also been discussion of ways we could do ad-hoc changes in RAID 
>> level,
>> based on flavor metadata, during the provisioning process (rather than ahead 
>> of
>> time) but no code has been done for this yet, AFAIK.
>
> I'm still pretty interested in it, because I agree with anything said
> above about building RAID ahead-of-time not being convenient. I don't
> quite understand how such a feature would look like, we might add it as
> a topic for midcycle.

This sounds like an interesting/acceptable way to handle this problem to me.  
Update the node to set the desired raid state from the flavor.

>> - Inspection is geared towards using a different network and dnsmasq

>> infrastructure than what is in use for ironic/neutron.  Which also means 
>> that in
>> order to not conflict with dhcp requests for servers in ironic I need to use
>> different networks.  Which also means I now need to handle swinging server 
>> ports
>> between different networks.
>> Inspector is designed to respond only to requests for nodes in the inspection
> phase, so that it *doesn't* conflict with provisioning of nodes by Ironic. 
> I've
> been using the same network for inspection and provisioning without issue -- 
> so
> I'm not sure what problem you're encountering here.

So I was mainly thinking about the use case of using inspector to onboard 
unknown hosts into ironic (though I see I didn't mention that).  So in a 
datacenter we are always onboarding servers.  Right now we boot a Linux agent 
that "inventories" the box and adds it to our management system as a node that 
can be consumed by a build request.  My understanding is that inspector 
supports this as of Mitaka.  However, the install guide for inspection states 
that you need to install its own dnsmasq instance for inspection.  To me this 
implies that this is supposed to be a separate network, since if I have 2 dhcp 
servers running on the same L2 network I am going to get races between the 2 
dhcp servers for normal provisioning activities, especially if one dhcp server 
is configured to respond to everything (so that it can onboard unknown 
hardware) and the other only to specific hosts (ironic/neutron).  The only way 
that wouldn't be an issue is if both inspector and ironic/neutron are using the 
same dhcp servers.  Or am I missing something?

Ironic inspector handles this by managing Iptables as a firewall service to 
white/black list nodes, only allowing nodes that should be talking to the 
Inspector's dnsmasq instance to be served by it. This avoids any DHCP races 
between Ironic and Inspector.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-06-08 Thread Clark Boylan
On Tue, Apr 5, 2016, at 08:32 PM, IWAMOTO Toshihiro wrote:
> At Tue, 5 Apr 2016 12:57:33 -0400,
> Assaf Muller wrote:
> > 
> > On Tue, Apr 5, 2016 at 12:35 PM, Sean M. Collins  wrote:
> > > Russell Bryant wrote:
> > >> because they are related to two different command line utilities
> > >> (ovs-vsctl vs ovs-ofctl) that speak two different protocols (OVSDB vs
> > >> OpenFlow) that talk to two different daemons on the system (ovsdb-server 
> > >> vs
> > >> ovs-vswitchd) ?
> > >
> > > True, they influence two different daemons - but it's really two options
> > > that both have two settings:
> > >
> > > * "talk to it via the CLI tool"
> > > * "talk to it via a native interface"
> > >
> > > How likely is it to have one talking via native interface and the other
> > > via CLI?
> > 
> > The ovsdb native interface is a couple of cycles more mature than the
> > openflow one, I see how some users would use one but not the other.
> 
> The native of_interface has been tested by periodic jobs and seems
> pretty stable.
> 
> http://graphite.openstack.org/dashboard/#neutron-ovs-native
> 
> > > Also, if the native interface is faster, I think we should consider
> > > making it the default.
> > 
> > Definitely. I'd prefer to deprecate and delete the cli interfaces and
> > keep only the native interfaces in the long run.
> > 
> > >
> > > --
> > > Sean M. Collins
> 
> The native of_interface is definitely faster than the CLI alternative,
> but (un?)fortunately that's not a performance bottleneck.
> 
> The transition would be a gain, but it comes with uncovering a few
> unidentified bugs etc.
> 
> Anyway, I'll post an updated version of performance comparison shortly.

Going to resurrect this thread to see where we have gotten. Did an
updated comparison ever get posted? If so I missed it. Looks like Neutron
does have functional tests now that use the native interfaces for ovsdb
and openflow. The change to update the default interface for ovsdb is
still in review though.

Any chance that we document these choices yet?

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] Re-licensing OpenStack charms under Apache 2.0

2016-06-08 Thread James Page
Hi

We're currently blocked on becoming an OpenStack project under the big-tent
by the licensing of the 26 OpenStack charms under GPL v3.

I'm proposing that we re-license the following code repositories as Apache
2.0:

  charm-ceilometer
  charm-ceilometer-agent
  charm-ceph
  charm-ceph-mon
  charm-ceph-osd
  charm-ceph-radosgw
  charm-cinder
  charm-cinder-backup
  charm-cinder-ceph
  charm-glance
  charm-hacluster
  charm-heat
  charm-keystone
  charm-lxd
  charm-neutron-api
  charm-neutron-api-odl
  charm-neutron-gateway
  charm-neutron-openvswitch
  charm-nova-cloud-controller
  charm-nova-compute
  charm-odl-controller
  charm-openstack-dashboard
  charm-openvswitch-odl
  charm-percona-cluster
  charm-rabbitmq-server
  charm-swift-proxy
  charm-swift-storage

The majority of contributors are from Canonical (from whom I have
permission to make this switch) with a further 18 contributors from outside
of Canonical who I will be directly contacting for approval in gerrit as
reviews are raised for each repository.

Any new charms and supporting repositories will be licensed under Apache
2.0 from the outset.

Cheers

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Ricardo Rocha
Hi Hongbin.

Not sure how this fits everyone, but we would be happy to host it at
CERN. How do people feel about it? We can add a nice tour of the place
as a bonus :)

Let us know.

Ricardo



On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu  wrote:
> Hi all,
>
>
>
> Please find the Doodle pool below for selecting the Magnum midcycle date.
> Presumably, it will be a 2 days event. The location is undecided for now.
> The previous midcycles were hosted in bay area so I guess we will stay there
> at this time.
>
>
>
> http://doodle.com/poll/5tbcyc37yb7ckiec
>
>
>
> In addition, the Magnum team is looking for a host for the midcycle. Please
> let us know if you are interested in hosting us.
>
>
>
> Best regards,
>
> Hongbin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] Renaming openvswitch-odl -> openvswitch

2016-06-08 Thread James Page
Hi All

The current openvswitch-odl charm is designed for use with
nova-compute/neutron-gateway and the odl-controller charm, but it declares
and configures OVS in such a way that I think it has broader applicability
than just OpenDayLight; specifically it looks like it would also work with
ONOS which configures the OVS manager in exactly the same way as for ODL.

I'd like to propose renaming the openvswitch-odl charm to just openvswitch
to support this generalisation of its use.

Thoughts?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] bug in handling of ISOLATE thread policy

2016-06-08 Thread Finucane, Stephen
On 08 Jun 10:11, Finucane, Stephen wrote:
> On 07 Jun 11:36, Chris Friesen wrote:
> > Hi,
> > 
> > The full details are available at
> > https://bugs.launchpad.net/nova/+bug/1590091 but the short version
> > is this:
> > 
> > 1) I'm running stable/mitaka in devstack.  I've got a small system
> > with 2 pCPUs, both marked as available for pinning.  They're two
> > cores of a single processor, no threads.
> > 
> > 2) I tried to boot an instance with two dedicated CPUs and a thread
> > policy of ISOLATE, but the NUMATopology filter fails my host.
> 
> This sounds like bug #1550317 [1], which has been fixed on master but
> clearly needs to be backported to stable/mitaka. I'll do this as soon
> as I figure out how to :)
> 
> Stephen
> 
> PS: Marked #1590091 as a duplicate, per above.
> 
> [1] https://bugs.launchpad.net/nova/+bug/1550317

Done (thanks to bauzas). Review is available here [1]. I'll also point
out the other bug fix that's available for the CPU thread policy
feature [2]. This fixes less obvious issues with the 'require' policy.

Stephen

[1] https://review.openstack.org/326944
[2] https://review.openstack.org/285232/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-08 Thread Steven Hardy
On Tue, Jun 07, 2016 at 04:53:12PM -0400, Zane Bitter wrote:
> On 07/06/16 15:57, Jay Dobies wrote:
> > > 
> > > 1. Now that we support passing un-merged environment files to heat,
> > > it'd be
> > > good to support an optional description key for environments,
> > 
> > I've never understood why the environment file doesn't have a
> > description field itself. Templates have descriptions, and IMO it makes
> > sense for an environment to describe what its particular additions to
> > the parameters/registry do.
> 
> Just use a comment?

This doesn't work for any of the TripleO use-cases because you can't parse
a comment.

The requirements are twofold:

1. Prior to creating the stack, we need a way to present choices to the
user about which environment files to enable.  This is made much easier if
you can include a human-readable description about what the environment
actually does.

2. After creating the stack, we need a way to easily introspect the stack
and see what environments were enabled.  Same as above, it'd be
super-awesome if we could just then strip out the description of what they
do, so we don't have to maintain hacks like this:

https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml

The description is one potential easy win here, it just makes far more
sense to keep the description of a thing inside the same file (just like we
do already with HOT templates).

The next step beyond that is the need to express dependencies between
things, which is what I was trying to address via the
https://review.openstack.org/#/c/196656/ spec - that kinda stalled when it
took 7 months to land so we'll probably need that capabilities_map for that
unless we can revive that effort.

> > I'd be happy to write that patch, but I wanted to first double check
> > that there wasn't a big philosophical reason why it shouldn't have a
> > description.
> 
> There's not much point unless you're also adding an API to retrieve
> environment files like Steve mentioned. Comments get stripped when the yaml
> is parsed, but that's fairly academic if you don't have a way to get it out
> again.

Yup, I'm absolutely proposing we add an interface to retrieve the
environment files (or, in fact, the entire stack files map, and a list of
environment_files).

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting of Jun.8

2016-06-08 Thread joehuang
Hi,

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on every 
Wednesday starting from UTC 13:00.

Today's agenda:

# spec review and status of 'cross pod L2 networking' and 'dynamic pod binding'
# policy enforcement for API access
# tempest test integration

If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-08 Thread Steven Hardy
On Tue, Jun 07, 2016 at 03:57:31PM -0400, Jay Dobies wrote:
> > All,
> > 
> > We've got some requirements around adding some interfaces to the heat
> > environment file format, for example:
> > 
> > 1. Now that we support passing un-merged environment files to heat, it'd be
> > good to support an optional description key for environments,
> 
> I've never understood why the environment file doesn't have a description
> field itself. Templates have descriptions, and IMO it makes sense for an
> environment to describe what its particular additions to the
> parameters/registry do.
> 
> I'd be happy to write that patch, but I wanted to first double check that
> there wasn't a big philosophical reason why it shouldn't have a description.

AFAIK there are two reasons:

1. Until your recent work landed, any description would be destroyed by the
client when it merged the environments

2. We've got no way to retrieve the environment descriptions from heat (as
Zane mentioned in his reply).

I'm suggesting we fix (2) as a followup step to your work to add an API
that returns the merged environment, e.g add an API that returns the files
map associated with a stack, and one that can list the environments in use
(not just the resolved/merged environment).

> > such that we
> > could add an API (in addition to the one added by jdob to retrieve the
> > merged environment for a running stack) that can retrieve
> > all-the-environments and we can easily tell which one does what (e.g to
> > display in a UI perhaps)
> 
> I'm not sure I follow. Are you saying the API would return the list of
> descriptions, or the actual contents of each environment file that was
> passed in?

The actual contents, either by passing a list of environment filenames, and
providing another API that can return the files map containing the files,
or by having one API call that can return a map of filenames to content for
all environments passed via environment_files.

Basically, I think we should expose all data as passed to create_stack in
its original form, and (as you already added) in its post-processed form,
e.g. the merged environment.

> Currently, the environment is merged before we do anything with it. We'd
> have to change that to store... I'm not entirely sure. Multiple environments
> in the DB per stack? Is there a raw_environment in the DB that we would
> leverage?

We just need to store the environment_files list - we already store the
environment files in the files map

https://review.openstack.org/#/c/241662/17/heat/engine/service.py

So, we need to store environment_files as well as the output of
_merge_environments, then add some sort of API to expose both that list and
the files map.
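To make the intent concrete, consuming this could look roughly like the
sketch below. The stacks.environment() call assumes client support for the
merged-environment API referenced above; environment_files() and files() are
purely hypothetical names for the interfaces being proposed in this thread.

from heatclient import client as heat_client


def show_stack_environments(session, stack_id):
    heat = heat_client.Client('1', session=session)

    # Merged environment of a running stack (the interface jdob added).
    merged = heat.stacks.environment(stack_id)
    print('merged environment sections: %s' % list(merged))

    # Proposed (hypothetical) interfaces: the list of environment file
    # names passed at create time, plus the stack's files map.
    # env_names = heat.stacks.environment_files(stack_id)
    # files_map = heat.stacks.files(stack_id)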

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] bug in handling of ISOLATE thread policy

2016-06-08 Thread Finucane, Stephen
On 07 Jun 11:36, Chris Friesen wrote:
> Hi,
> 
> The full details are available at
> https://bugs.launchpad.net/nova/+bug/1590091 but the short version
> is this:
> 
> 1) I'm running stable/mitaka in devstack.  I've got a small system
> with 2 pCPUs, both marked as available for pinning.  They're two
> cores of a single processor, no threads.
> 
> 2) I tried to boot an instance with two dedicated CPUs and a thread
> policy of ISOLATE, but the NUMATopology filter fails my host.

This sounds like bug #1550317 [1], which has been fixed on master but
clearly needs to be backported to stable/mitaka. I'll do this as soon
as I figure out how to :)

Stephen

PS: Marked #1590091 as a duplicate, per above.

[1] https://bugs.launchpad.net/nova/+bug/1550317

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread Na Zhu
Hi John,

I think you can create ovsdb idl client in networking-sfc to connect to 
OVN_Northbound DB, then call the APIs you add to networking-ovn to 
configure SFC.
Now OVN is a ML2 mechanism driver (OVNMechanismDriver), not core plugin, 
the OVN L3 (OVNL3RouterPlugin) is a neutron service plugin like vpn, sfc 
and ect.

You can refer to method OVNMechanismDriver._ovn and 
OVNL3RouterPlugin._ovn, they both create ovsdb idl client object, so in 
your ovn driver, you can do it in the same way. Here is the code sample:

class OVNSfcDriver(driver_base.SfcDriverBase,
   ovs_sfc_db.OVSSfcDriverDB)
..
@property
def _ovn(self):
if self._ovn_property is None:
LOG.info(_LI("Getting OvsdbOvnIdl"))
self._ovn_property = impl_idl_ovn.OvsdbOvnIdl(self)
return self._ovn_property

..
@log_helpers.log_method_call
def create_port_chain(self, context): 
port_chain = context.current
for flow_classifier in port_chain:
first get the flow classifier contents
then call self._ovn.create_lflow_classifier()
for port_pair_groups in port_chain:
get the port_pair_group contents
then call self._ovn.create_lport_pair_group()
for port_pair in port_pair_group
first get the port_pair contents
then call self._ovn.create_lport_pair()
 






Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN, Srilatha Tangirala 
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" 

Date:   2016/06/08 12:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno, Srilatha,

I need some help �C I have fixed most of the obvious typo’s in the three 
repos and merged them with mainline. There is still a problem with the 
build I think in mech_driver.py but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function 
that creates a deep copy of the port-chain dict, 
create_port_chain(self,contact,port_chain). 

Looking at networking-ovn I think it should use mech_driver.py so we can 
call the OVS-IDL to send the parameters to ovn. However I am not sure of 
the best way to do it. Could you make some suggestions or send me some 
sample code showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also Louis from the 
networking-sfc has posted a draft blueprint.

Regards

John

From: Na Zhu 
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall , Ryan Moats <
rmo...@us.ibm.com>
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, Srilatha Tangirala <
srila...@us.ibm.com>
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I do not know of a better approach. I think it is good to write all the 
parameters in the creation of a port chain, since this avoids saving a lot 
of unused data in the northbound db. We can do it that way for now, and if 
the community has opposing ideas, we can change it. What do you think?

Hi Ryan,

Do you agree with that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" , Ryan Moats 
, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date:2016/06/06 23:36
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Let me check – my intention was that the networking-sfc OVNB driver would 
configure all aspects of the port-chain and add the parameters to the 
networking-sfc db. Once all the parameters were in, the creation of a 
port-chain would call networking-ovn (passing a deep copy of the 
port-chain dict). Here I see networking-ovn acting only as a bridge into 
ovs/ovn (I did not add anything in the ovn plugin – not sure if that is 
the right approach). Networking-ovn calls into ovs/ovn and inserts the 
entire port-chain.

Thoughts?

j

From: Na Zhu 
Date: Monday, June 6, 2016 at 5:49 AM
To: John McDowall 

Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-08 Thread Thierry Carrez

Samuel Merritt wrote:

On 6/7/16 12:00 PM, Monty Taylor wrote:

[snip]

 >

I'd rather see us focus energy on Python3, asyncio and its pluggable
event loops. The work in:

http://magic.io/blog/uvloop-blazing-fast-python-networking/

is a great indication in an actual apples-to-apples comparison of what
can be accomplished in python doing IO-bound activities by using modern
Python techniques. I think that comparing python2+eventlet to a fresh
rewrite in Go isn't 100% of the story. A TON of work has gone in to
Python that we're not taking advantage of because we're still supporting
Python2. So what I'd love to see in the realm of comparative
experimentation is to see if the existing Python we already have can be
leveraged as we adopt newer and more modern things.


Asyncio, eventlet, and other similar libraries are all very good for
performing asynchronous IO on sockets and pipes. However, none of them
help for filesystem IO. That's why Swift needs a golang object server:
the go runtime will keep some goroutines running even though some other
goroutines are performing filesystem IO, whereas filesystem IO in Python
blocks the entire process, asyncio or no asyncio.


As you probably know by now, a majority of TC members were in favor of 
denying the addition of Golang as another generally available tool in our 
toolbelt. The resolution (which was made to follow the guidelines the TC 
itself set as the process to follow in that specific case) was therefore 
rejected.


What made this decision pretty hard for everyone involved is that the 
Swift team built a compelling argument (see just above) on why they 
can't really solve a real user problem they have (scale and perform 
better) using the tools they are allowed to use in "OpenStack". There 
was probably also a majority of TC members agreeing that Swift needs to 
use another language to solve the specific filesystem IO problem they 
are running into. So we have been looking at various trade-off solutions 
to get the projects the tools they really need without making 
cross-project work significantly more complex, and without triggering 
the gigantic distraction of superfluous rewrites across OpenStack projects.


None of those middle-ground solutions is optimal. But I've been asked 
to list them for clarity.


One option is to develop generally-useful libraries or services outside 
of OpenStack that solve the specific issue you're running into. 
OpenStack relies on a lot of dependencies in the wider open source 
community, we don't have to develop *everything* as official "openstack 
projects" under TC governance. We even have a number of them that are 
using OpenStack infrastructure (git, gerrit, gate...). The benefit of 
this approach is that it lets you pick whatever language or framework is 
most appropriate, and forces clean contracts between the components, 
which makes replacing the external piece easier in the future. The 
drawback of this approach is that it creates local fragmentation at the 
project team level, as they have to artificially split the project into 
"OpenStack" and non-"OpenStack" bits. And it is also not optimal for 
Swift due to the way it's currently designed, you can't really slice it 
in this way.


Another option (raised by dims) is to find a way to allow usage of 
golang (or another language) in a more granular way: selectively allow 
projects which really need another tool to use it. The benefit is that 
it lets project teams make a case and use the best tool for the job, 
while limiting the cross-project impact and without opening the 
distraction floodgates of useless rewrites. The drawback is that 
depending on how it's done, it may place the TC in the role of granting 
"you're tall enough to use Go" badges, creating new endless discussions 
and more "you're special" exceptions. That said, I'm still interested in 
exploring that option, for one reason. I think that whenever a project 
team considers adding a component or a rewrite in another language, they 
are running into an interesting problem with Python, on which they 
really could use advice from the rest of the OpenStack community. I'd 
really like to see a cross-project team of Python performance experts to 
look at the problem this specific team has that makes them want to use 
another language. That seems like a great way to drive more practice 
sharing and reduce fragmentation in "OpenStack" in general. We might 
just need to put the bar pretty high so that we are not flooded by silly 
rewrite requests.


For completeness I'll also list the nuclear option: to reject the 
Technical Committee oversight altogether and decide that your project is 
better off as an autonomous project in the OpenStack ecosystem.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-08 Thread Na Zhu
John,

Is the blueprint Louis posted this one?
https://blueprints.launchpad.net/networking-sfc/+spec/networking-sfc-ovn-driver

If not, can you send me the link?


Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN, Srilatha Tangirala 
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" 

Date:   2016/06/08 12:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno, Srilatha,

I need some help �C I have fixed most of the obvious typo’s in the three 
repos and merged them with mainline. There is still a problem with the 
build I think in mech_driver.py but I will fix it asap in the am.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function 
that creates a deep copy of the port-chain dict, 
create_port_chain(self,contact,port_chain). 

Looking at networking-ovn I think it should use mech_driver.py so we can 
call the OVS-IDL to send the parameters to ovn. However I am not sure of 
the best way to do it. Could you make some suggestions or send me some 
sample code showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also Louis from the 
networking-sfc has posted a draft blueprint.

Regards

John

From: Na Zhu 
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall , Ryan Moats <
rmo...@us.ibm.com>
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, Srilatha Tangirala <
srila...@us.ibm.com>
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I do not know of a better approach. I think it is good to write all the 
parameters in the creation of a port chain, since this avoids saving a lot 
of unused data in the northbound db. We can do it that way for now, and if 
the community has opposing ideas, we can change it. What do you think?

Hi Ryan,

Do you agree with that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" , Ryan Moats 
, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date:2016/06/06 23:36
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Let me check – my intention was that the networking-sfc OVNB driver would 
configure all aspects of the port-chain and add the parameters to the 
networking-sfc db. Once all the parameters were in, the creation of a 
port-chain would call networking-ovn (passing a deep copy of the 
port-chain dict). Here I see networking-ovn acting only as a bridge into 
ovs/ovn (I did not add anything in the ovn plugin – not sure if that is 
the right approach). Networking-ovn calls into ovs/ovn and inserts the 
entire port-chain.

Thoughts?

j

From: Na Zhu 
Date: Monday, June 6, 2016 at 5:49 AM
To: John McDowall 
Cc: "disc...@openvswitch.org" , Ryan Moats <
rmo...@us.ibm.com>, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

One question to confirm with you: I think the ovn flow classifier driver 
and ovn port chain driver should call the APIs which you add to 
networking-ovn to configure the northbound db sfc tables, right? I see 
that your networking-sfc ovn drivers do not call the APIs you add to 
networking-ovn; did you miss that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM@IBMCN
To:John McDowall 
Cc:Srilatha Tangirala , OpenStack Development 
Mailing List , Ryan Moats <
rmo...@us.ibm.com>, "disc...@openvswitch.org" 
Date:2016/06/06 14:28
Subject:Re: 

Re: [openstack-dev] Got Failure:"fixtures._fixtures.timeout.TimeoutException"

2016-06-08 Thread Clark Boylan
On Tue, Jun 7, 2016, at 10:40 PM, zhangshuai wrote:
> Hi all
> 
> I have a question with fixtures._fixtures.timeout.TimeoutException, like the
> following:
> 
> 
> 
> 
> Traceback (most recent call last):
> 
>   File "smaug/tests/fullstack/test_checkpoints.py", line 73, in
>   test_checkpoint_create
> 
> volume.id)
> 
>   File "smaug/tests/fullstack/test_checkpoints.py", line 51, in
>   create_checkpoint
> 
> sleep(640)
> 
>   File
>   
> "/home/lexus/workspace/smaug/.tox/fullstack/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
>   line 52, in signal_handler
> 
> raise TimeoutException()
> 
> fixtures._fixtures.timeout.TimeoutException
> 
> Ran 1 tests in 61.986s (-0.215s)
> 
> FAILED (id=213, failures=1)
> 
> 
> 
> error: testr failed (1)

By default the base test classes for many OpenStack projects implement a
60 second unittest timeout. If the unittest takes longer than 60 seconds
an exception is raised and the test fails. I am guessing that smaug has
inherited this behavior which leads to the failure when you attempt to
sleep for 640 seconds.

You can address this by either changing the timeout or making your test
run quicker.
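If raising the timeout is the route you take, a minimal sketch of doing it
per test class with the fixtures library (the class name and the 700-second
value are purely illustrative; the OS_TEST_TIMEOUT environment variable can
usually be raised for a whole run instead) might look like:

import fixtures
import testtools


class CheckpointFullstackTest(testtools.TestCase):

    def setUp(self):
        super(CheckpointFullstackTest, self).setUp()
        # Allow this test up to 700 seconds instead of the default 60.
        self.useFixture(fixtures.Timeout(700, gentle=True))

    def test_checkpoint_create(self):
        pass  # the long-running checkpoint assertions would go here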

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]trusts with federated users

2016-06-08 Thread Gyorgy Szombathelyi
> > As an OIDC user, tried to play with Heat and Murano recently. They usually
> fail with a trust creation error, noticing that keystone cannot find the
> _member_ role while creating the trust.
> Hmmm...that should not be the case.  The user in question should have a
> role on the project, but getting it via a group is OK.
> 
> I suspect the problem is the Ephemeral nature of Federated users. With the
> Shadow user construct (under construction) there would be something to
> use.
> 
> Please file a bug on this and assign it to me (or notify me if you can't 
> assign).
> 

I had filed a bug against murano before; maybe it can be reassigned to keystone(?):

https://bugs.launchpad.net/murano/+bug/1589993



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

