[openstack-dev] Got Failure:"fixtures._fixtures.timeout.TimeoutException"

2016-06-07 Thread zhangshuai
Hi all

I have a question about fixtures._fixtures.timeout.TimeoutException, which I am hitting as follows:




Traceback (most recent call last):
  File "smaug/tests/fullstack/test_checkpoints.py", line 73, in test_checkpoint_create
    volume.id)
  File "smaug/tests/fullstack/test_checkpoints.py", line 51, in create_checkpoint
    sleep(640)
  File "/home/lexus/workspace/smaug/.tox/fullstack/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
    raise TimeoutException()
fixtures._fixtures.timeout.TimeoutException

Ran 1 tests in 61.986s (-0.215s)

FAILED (id=213, failures=1)



error: testr failed (1)
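For context, this exception is raised by a fixtures.Timeout armed in the test base class (oslotest-style bases usually take the budget from OS_TEST_TIMEOUT, commonly 60 seconds in .testr.conf, which matches the ~62 s run above), so a sleep(640) inside the test simply outlives the budget. A minimal sketch reproducing the behaviour, assuming the standard fixtures/testtools APIs:

```python
import time

import fixtures
import testtools


class TimeoutDemo(testtools.TestCase):
    def test_sleep_longer_than_budget(self):
        # Arm a 5-second budget; gentle=True makes the SIGALRM handler
        # raise TimeoutException, exactly like signal_handler above.
        self.useFixture(fixtures.Timeout(5, gentle=True))
        time.sleep(10)   # outlives the budget -> TimeoutException
```

So the options are to raise the timeout for that test (or OS_TEST_TIMEOUT for the fullstack job), or better, to poll for the checkpoint state instead of sleeping a fixed 640 seconds.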
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [odl-networking] Devstack with ODL using Mitaka?

2016-06-07 Thread Isaku Yamahata
Hi. (Added neutron-...@lists.opendaylight.org)

It seems like a networking-odl bug.
Can you please file a bug?

Just in case, can you please check that you don't
"enable_service odl-server" anywhere manually?

thanks,

On Tue, Jun 07, 2016 at 01:42:39PM +0200,
Wojciech Dec  wrote:

> Hi Openstack dev Folks,
> 
> I'd appreciate your help with setting up the necessary local.conf for using
> an ODL as the controller, along with Mitaka.
> 
> It would be great if anyone could share a working, Mitaka-updated local.conf.
> 
> A more specific problem I've been experiencing is that the devstack script
> insists on pulling and starting ODL no matter what ODL_MODE setting is
> used. According to:
> https://github.com/openstack/networking-odl/blob/master/devstack/settings
> I've been using ODL_MODE=externalodl as well as =manual. In both cases
> devstack starts the ODL it pulls.
> 
> Thanks,
> Wojtek.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
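For reference, a minimal local.conf sketch for the external-ODL case (untested; ODL_MODE and the odl-server service name come from this thread and the linked settings file, the enable_plugin line is standard devstack syntax, and the branch is a placeholder):

```
[[local|localrc]]
# Pull the networking-odl devstack plugin (branch chosen to match the deployment).
enable_plugin networking-odl https://git.openstack.org/openstack/networking-odl stable/mitaka

# Use an ODL controller managed outside devstack; per the settings file above,
# this should prevent devstack from pulling and starting its own ODL.
ODL_MODE=externalodl

# Make sure nothing re-enables the local ODL service.
disable_service odl-server
```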


-- 
Isaku Yamahata 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-07 Thread John McDowall
Juno, Srilatha,

I need some help – I have fixed most of the obvious typos in the three repos
and merged them with mainline. There is still a problem with the build, I think
in mech_driver.py, but I will fix it ASAP in the morning.

However, I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function that 
creates a deep copy of the port-chain dict, 
create_port_chain(self,contact,port_chain).

Looking at networking-ovn, I think it should use mech_driver.py so we can call
the OVS-IDL to send the parameters to OVN. However, I am not sure of the best
way to do it. Could you make some suggestions or send me some sample code
showing the best approach?
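For what it's worth, a minimal sketch of the hand-off being described (class and attribute names are hypothetical; it only illustrates passing a deep copy of the port-chain dict from the networking-sfc driver into whatever entry point networking-ovn exposes):

```python
import copy


class OVNSfcDriver(object):
    """Illustrative sketch only -- not the actual networking-sfc/OVN code."""

    def __init__(self, ovn_nb_api):
        # Hypothetical handle into networking-ovn / the OVS-IDL layer.
        self._nb = ovn_nb_api

    def create_port_chain(self, context, port_chain):
        # Hand OVN its own copy so later mutations on the networking-sfc
        # side cannot leak into what gets written to the northbound DB.
        self._nb.create_port_chain(copy.deepcopy(port_chain))
```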

I will get the ovs/ovn cleaned up and ready. Also, Louis from the networking-sfc
team has posted a draft blueprint.

Regards

John

From: Na Zhu
Date: Monday, June 6, 2016 at 7:54 PM
To: John McDowall, Ryan Moats
Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

Hi John,

I do not know of a better approach. I think it is good to write all the
parameters in the creation of a port chain; this avoids saving data in the
northbound DB that is never used. We can do it that way for now, and if the
community has other ideas, we can change it. What do you think?

Hi Ryan,

Do you agree with that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org", Ryan Moats, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Date: 2016/06/06 23:36
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN




Juno,

Let me check – my intention was that the networking-sfc OVNB driver would
configure all aspects of the port-chain and add the parameters to the
networking-sfc db. Once all the parameters were in place, the creation of a
port-chain would call networking-ovn (passing a deep copy of the port-chain
dict). Here I see networking-ovn acting only as a bridge into ovs/ovn (I did
not add anything in the ovn plugin – not sure if that is the right approach).
Networking-ovn calls into ovs/ovn and inserts the entire port-chain.

Thoughts?

j

From: Na Zhu
Date: Monday, June 6, 2016 at 5:49 AM
To: John McDowall
Cc: "disc...@openvswitch.org", Ryan Moats, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

Hi John,

One question I need to confirm with you: I think the OVN flow classifier driver and
the OVN port chain driver should call the APIs which you added to networking-ovn to
configure the northbound DB SFC tables, right? I see that your networking-sfc OVN
drivers do not call the APIs you added to networking-ovn; did you miss that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: Na Zhu/China/IBM@IBMCN
To: John McDowall
Cc: Srilatha Tangirala, OpenStack Development Mailing List, Ryan Moats,

Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-06-07 Thread Adam Heczko
Hi,
I'd like to ask what's the current state of Shotgun and what are the plans
for the future?
Has any alternative been chosen for the Fuel diagnostic snapshot functionality,
and is it being worked on?

On Mon, Apr 18, 2016 at 3:39 PM, Igor Kalnitsky 
wrote:

> Evgeniy L. wrote:
> > I think this kind of tool should use as little of the existing
> > infrastructure as possible, because if something goes wrong, you should
> > be able to easily get diagnostic information, even with broken RabbitMQ,
> > Astute and MCollective.
>
> It's a good point indeed! Moreover, troubleshooting scenarios may vary
> from case to case, so it should be easily extendable and changeable.
> So users can use various (probably, downloaded) scenarios to gather
> diagnostic info.
>
> That's why I think Ansible could really be helpful here. Such
> scenarios may be distributed as Ansible playbooks.
>
> On Mon, Apr 18, 2016 at 4:25 PM, Evgeniy L  wrote:
> >>> Btw, one of the ideas was to use Fuel task capabilities to gather
> >>> diagnostic snapshot.
> >
> > I think such kind of tools should use as less as possible existing
> > infrastructure, because in case if something went wrong, you should be
> able
> > to easily get diagnostic information, even with broken RabbitMQ, Astute
> and
> > MCollective.
> >
> > Thanks,
> >
> >
> > On Mon, Apr 18, 2016 at 2:26 PM, Vladimir Kozhukalov
> >  wrote:
> >>
> >> Colleagues,
> >>
> >> Whether we are going to continue using Shotgun or
> >> substitute it with something else, we still need to
> >> decouple it from Fuel because Shotgun is a generic
> >> tool. Please review these [1], [2].
> >>
> >> [1] https://review.openstack.org/#/c/298603
> >> [2] https://review.openstack.org/#/c/298615
> >>
> >>
> >> Btw, one of the ideas was to use Fuel task capabilities
> >> to gather diagnostic snapshot.
> >>
> >> Vladimir Kozhukalov
> >>
> >> On Thu, Mar 31, 2016 at 1:32 PM, Evgeniy L  wrote:
> >>>
> >>> Hi,
> >>>
> >>> Problems which I see with current Shotgun are:
> >>> 1. Lack of parallelism, so it's not going to fetch data fast enough from
> >>> medium/big clouds.
> >>> 2. There should be an easy way to run it manually (it's possible, but
> >>> there is no ready-to-use config), it would be really helpful in case if
> >>> Nailgun/Astute/MCollective are down.
> >>>
> >>> As far as I know the 1st is partly covered by Ansible, but the problem is it
> >>> executes a single task in parallel, so there is a probability that a lagging
> >>> node will slow down fetching from the entire environment.
> >>> Also we will have to build a tool around Ansible to generate playbooks.
> >>>
> >>> Thanks,
> >>>
> >>> On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala
> >>>  wrote:
> 
>  Hi,
> 
>  Do we have any requirements for the new tool? Do we know what we don’t
>  like about current implementation, what should be avoided, etc.?
>  Before that we can only speculate.
>  From my ops experience, shotgun-like tools will not work conveniently on
>  medium to big environments. Even on a medium env the amount of logs is just too
>  huge to handle with such a simple tool. In such environments a better pattern is
>  to use a dedicated log collection / analysis tool, just like StackLight.
>  On the other hand I’m not sure if ansible is the right tool for that. It
>  has some features (like the ‘fetch’ command) but in general it’s a configuration
>  management tool, and I’m not sure how it would act under such heavy load.
> 
>  Regards,
> 
>  > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov
>  >  wrote:
>  >
>  > Igor,
>  >
>  > I can not agree more. Wherever possible we should
>  > use existent mature solutions. Ansible is really
>  > convenient and well known solution, let's try to
>  > use it.
>  >
>  > Yet another thing should be taken into account.
>  > One of Shotgun features is diagnostic report
>  > that could then be attached to bugs to identify
>  > the content of env. This report could also be
>  > used to reproduce env and then fight a bug.
>  > I'd like we to have this kind of report.
>  > Is it possible to implement such a feature
>  > using Ansible? If yes, then let's switch to Ansible
>  > as soon as possible.
>  >
>  >
>  >
>  > Vladimir Kozhukalov
>  >
>  > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky
>  >  wrote:
>  > Neil Jerram wrote:
>  > > But isn't Ansible also over-complicated for just running commands
>  > > over SSH?
>  >
>  > It may be not so "simple" to ignore that. Ansible has a lot of
> modules
>  > which might be very helpful. For instance, Shotgun makes a database
>  > dump and there're Ansible modules with the same functionality [1].
>  >
>  > Don't 
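To make the playbook idea discussed in this thread concrete, a minimal sketch (the host group, log path, and destination are assumptions, not Fuel specifics):

```yaml
# Illustrative diagnostic-snapshot playbook; not an actual Fuel/Shotgun artifact.
- hosts: fuel_nodes
  gather_facts: false          # stay usable even on half-broken nodes
  tasks:
    - name: Pull a log file back from every node
      fetch:
        src: /var/log/messages
        dest: snapshot/{{ inventory_hostname }}/
```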

[openstack-dev] [Neutron][LBaaS] LBaaSv2 with HAproxy Agent Deployment Issue

2016-06-07 Thread Daneyon Hansen (danehans)
All,

I am trying to add Neutron LBaaSv2 to a working OpenStack Liberty deployment. I 
am running into an issue where the lbaas agent does not appear in the output of 
neutron agent-list. However, the lbaas extension appears in the output of 
neutron ext-list. After investigating further, the lbaas-agent sends a message 
on the queue and times out waiting for a reply:

2016-06-06 21:09:15.958 22 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 10.32.20.52:5672
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready devices
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py", line 152, in sync_state
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     ready_instances = set(self.plugin_rpc.get_ready_devices())
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py", line 36, in get_ready_devices
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     return cctxt.call(self.context, 'get_ready_devices', host=self.host)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=self.retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     timeout=timeout, retry=retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     retry=retry)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 420, in _send
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     result = self._waiter.wait(msg_id, timeout)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 318, in wait
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     message = self.waiters.get(msg_id, timeout=timeout)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 223, in get
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager     'to message ID %s' % msg_id)
2016-06-06 21:10:15.972 22 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed out waiting for a reply to message ID eae3cc1bc8614aa8ae499d92ca4ec731
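The timeout above is the client half of an oslo.messaging call() that never gets an answer. Roughly, the agent is doing the equivalent of the sketch below (topic name taken from the queue listing further down; the snippet is illustrative, not the actual agent code), so the usual culprit is that nothing on the neutron-server side is consuming that topic, i.e. the LBaaSv2 service plugin/provider is not actually loaded there:

```python
# Sketch of the RPC round trip that is timing out (illustrative only).
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_transport(cfg.CONF)       # rabbit:// from config
target = oslo_messaging.Target(topic='n-lbaasv2-plugin')
client = oslo_messaging.RPCClient(transport, target, timeout=60)

# call() blocks until a server consuming the topic replies; if no service
# plugin is listening, the reply never arrives and MessagingTimeout is raised.
ready = client.call({}, 'get_ready_devices', host='lbaas-agent-host')
```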

I verified that the lbaas queues reside within the Rabbit cluster:

bash-4.2$ rabbitmqctl list_queues

n-lbaas_agent   0
n-lbaas_agent.control-server-1.novalocal0
n-lbaas_agent.control-server-2.novalocal0
n-lbaas_agent.control-server-3.novalocal0
n-lbaas_agent_fanout_18a3b28c969148f3a008df8f3e5f5363   0
n-lbaas_agent_fanout_a7d48e8a1b27443d82ee4944bec44cf8   0
n-lbaas_agent_fanout_b5360edb19c240e79c71d60806977f66   0
n-lbaasv2-plugin0
n-lbaasv2-plugin.control-server-1.novalocal 0
n-lbaasv2-plugin.control-server-2.novalocal 0
n-lbaasv2-plugin.control-server-3.novalocal 0
n-lbaasv2-plugin_fanout_5cbb6dd4fafc4c4784add8a20e0a28a5   0
n-lbaasv2-plugin_fanout_756ee4e4eee547528d0f6e3dde71b150   0
n-lbaasv2-plugin_fanout_7629f7bb85ce493d83c334dfcc2cd4aa   0
notifications.info  8


And the lbaas queues are being mirrored:

# rabbitmq server logs
=INFO REPORT 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-3': <3038.25481.1>

=INFO REPORT 6-Jun-2016::19:01:23 ===
Mirrored queue 'n-lbaasv2-plugin_fanout_659b460849ef43ee834ce6d88d294b46' in vhost '/': Adding mirror on node 'rabbit@mercury-control-server-2': <3037.25635.1>

=INFO REPORT 6-Jun-2016::19:01:23 ===
Mirrored queue

Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-06-07 Thread Nikhil Komawar
Hi all,


Thanks a ton for the feedback on the time and thanks to Kris for adding
items to the agenda [1].


Just wanted to announce a few things here:


The final decision on the time has been made after a lot of discussions.

This event will be on *Thursday June 9th at 1130 UTC* 

Here's [2] how it looks at/near your timezone.


It somewhat manages to accommodate people from different (and extremely
diverse) timezones but if it's too early or too late for you for this
full *2 hour* sync, please add your interest topics and name against it
so that we can schedule your items either later or earlier during the
event. The schedule will be tentative unless significant/enough
information is provided on time to help set the schedule in advance.


I had kept open agenda from the developers' side so that we can
collaborate better on the pain points of the operators. You are very
welcome to add items to the etherpad [1].


The event has been updated to the Virtual Sprints wiki [3] and the
details have been added to the etherpad [1] as well. Please feel free to
reach out to me for any questions.


Thanks for the RSVP and see you soon virtually.


[1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
[2]
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=11=30=0=881=196=47=22=157=87=24=78=283=1800
[3]
https://wiki.openstack.org/wiki/VirtualSprints#Glance_and_Operators_mid-cycle_sync_for_Newton


Cheers


On 5/31/16 5:13 PM, Nikhil Komawar wrote:
> Hey,
>
>
> Thanks for your interest.
>
> Sorry about the confusion. Please consider the same time for Thursday
> June 9th.
>
>
> Thur June 9th proposed time:
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=11=0=0=881=196=47=22=157=87=24=78=283
>
>
> Alternate time proposal:
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=23=0=0=881=196=47=22=157=87=24=78=283
>
>
> Overall time planner:
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160609=881=196=47=22=157=87=24=78=283
>
>
>
> It will really depend on who is strongly interested in the discussions.
> Scheduling with EMEA, Pacific time (US), Australian (esp. Eastern) is
> quite difficult. If there's strong interest from San Jose, we may have
> to settle for a rather awkward choice below:
>
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=4=0=0=881=196=47=22=157=87=24=78=283
>
>
>
> A vote of +1, 0, -1 on these times would help long way.
>
>
> On 5/31/16 4:35 PM, Belmiro Moreira wrote:
>> Hi Nikhil,
>> I'm interested in this discussion.
>>
>> Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
>> Are you suggesting to change also the date? Because in the new
>> timeanddate suggestions is 6/7 of June.
>>
>> Belmiro
>>
>> On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar <nik.koma...@gmail.com
>> <mailto:nik.koma...@gmail.com>> wrote:
>>
>> Hey,
>>
>>
>>
>>
>>
>> Thanks for the feedback. 0800UTC is 4am EDT for some of the US
>> Glancers :-)
>>
>>
>>
>>
>>
>> I request this time which may help the folks in Eastern and Central US
>>
>> time.
>>
>> 
>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=7=11=0=0=881=196=47=22=157=87=24=78
>>
>>
>>
>>
>>
>> If it still does not work, I may have to poll the folks in EMEA on how
>>
>> strong their intentions are for joining this call.  Because
>> another time
>>
>>     slot that works for folks in Australia & US might be too inconvenient
>>
>> for those in EMEA:
>>
>> 
>> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=6=23=0=0=881=196=47=22=157=87=24=78
>>
>>
>>
>>
>>
>> Here's the map of cities that may be involved:
>>
>> 
>> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160607=881=196=47=22=157=87=24=78
>>
>>
>>
>>
>>
>> Please let me know which ones are possible and we can try to work
>> around
>>
>> the times.
>>
>>
>>
>>
>>
>> On 5/31/16 2:54 AM, Blair Bethwaite wrote:
>>
>> > Hi Nikhil,
>>
>> >
>>
>> > 2000UTC might catch a few kiwis, but it's 6am everywhere on the east
>>
>> > coast of Australia, and even earlier out west. 0800UTC, on the other
>>
>> > hand, would be more sociable.
>>
>> >
>>
>> > On 26 May 2016 at 15:30, Nikhil Komawar <

Re: [openstack-dev] [Octavia] Unable to plug VIP

2016-06-07 Thread Michael Johnson
I have not seen this.  Can you please open a bug in launchpad and
include your o-cw.log and /var/log/upstart/amphora-agent.log from the
affected amphora?

Thank you,
Michael


On Tue, Jun 7, 2016 at 5:09 AM, Babu Shanmugam  wrote:
> Hi,
> I am using octavia deployed using devstack. I am *never* able to
> successfully create a loadbalancer. Following is my investigation,
>
> 1. When a loadbalancer is created, octavia controller sends plug_vip request
> to the amphora VM.
> 2. It waits for some time till the connection to the amphora is established.
> 3. After it successfully connects, octavia controller gets response
> {"details": "No suitable network interface found"} to plug_vip request.
> 4. I tried to get the list of interfaces attached to the amphora VM using
> subprocess.check_cmd("sudo virsh domiflist") before returning from plug_vip
> (https://github.com/openstack/octavia/blob/master/octavia/amphorae/drivers/haproxy/rest_api_driver.py#L326)
> and found that there is indeed a veth device attached to the VM with that
> MAC sent in the request.
> 5. From the amphora server code, I could understand that the possible place
> for this exception is
> https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/plug.py#L53.
> But, when I ssh to the amphora VM, I was able to see the 'amphora-haproxy'
> netns created and also the interface configuration file for eth1 device,
> which should have been executed after #L53.
>
> I am not sure why this problem happens. Everything seems to be fine, but I
> am still facing this problem. Have you seen this problem before?
>
> Thank you,
> Babu
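For context, the check in plug.py referenced in point 5 essentially walks the amphora's interfaces looking for the MAC from the request; a rough sketch of that kind of lookup (illustrative only, not the actual amphora-agent code):

```python
import os


def find_interface_by_mac(mac):
    # Compare the requested MAC against every interface the VM can see.
    for dev in os.listdir('/sys/class/net'):
        try:
            with open(os.path.join('/sys/class/net', dev, 'address')) as f:
                if f.read().strip().lower() == mac.lower():
                    return dev
        except IOError:
            continue
    return None   # caller then reports "No suitable network interface found"
```

If the veth really is attached with that MAC, one thing worth checking is whether the lookup runs before the hot-plugged NIC becomes visible inside the guest, which would point at a timing issue rather than a missing interface.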
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Enabling/Disabling specific API extensions

2016-06-07 Thread Kevin Benton
I think we want to be careful about allowing every extension to be
configurable by the operator because plugins might expect certain
extensions to be loaded since they are defined in code. I suppose we could
have a way that it is just disabled at the API level but all of the code is
still loaded, but I would want to make sure we have a good use case before
we add more knobs to change API behavior.

For the L3 example, you can just disable the L3 service plugin and the l3
extensions will be gone since they are loaded by that plugin.
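To illustrate the coupling, a minimal sketch of how a service plugin advertises its extensions (not real neutron code; neutron's L3 plugin advertises 'router' this way, and the corresponding neutron.conf knob is the service_plugins list):

```python
# Sketch only: a service plugin declares the extension aliases it implements,
# so removing the plugin from neutron.conf's service_plugins list is what
# really removes those extensions from the API.
class MyL3LikeServicePlugin(object):
    # Aliases here are examples; the real L3 plugin advertises 'router'
    # among others.
    supported_extension_aliases = ['router', 'ext-gw-mode']
```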

On Tue, Jun 7, 2016 at 12:49 PM, Brandon Logan 
wrote:

> On Tue, 2016-06-07 at 19:17 +, Sean M. Collins wrote:
> > The patch that switches DevStack over to using the Neutron API to
> > discover what features are available has landed.
> >
> > https://review.openstack.org/#/c/318145/7
> >
> > The quick summary is that things like Q_L3_ENABLED[1] and if certain
> > services are running/enabled has been replaced with checks for if an API
> > extension is available. The point being, the Networking API should be
> > discoverable and features should be determined based on what extensions
> > are available, instead of some DevStack-y bits.
> >
> > Neutron controls what API extensions are loaded via the
> > `api_extensions_path`[2]
> >
> >
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L46
> >
> > So by default Neutron loads up every extension that is included in tree.
> >
> > But what if a deployment doesn't want to support an API extension?
> >
> > With third party CI, prior to https://review.openstack.org/#/c/318145 -
> > systems could get away with it by not enabling services - like q-l3 - and
> > that would stop subnets and routers being created. After that patch,
> > well that's not the case.
> >
> > So is there a way to configure what API extensions are available, so that
> > if a CI system doesn't want to provide the ability to create Neutron
> > routers, they can disable the router API extension in some manner more
> > graceful than rm'ing the extension file?
> >
> > I know at least in one deployment I was involved with, we didn't deploy
> > the L3 agent, but I don't believe we disabled or deleted the router API
> > extension, so users would try and create routers and other resources
> > then wonder why nothing would ever work.
> >
> > From a discoverability standpoint - do we provide fine-grained a way for
> > deployers to enable/disable specific API extensions?
>
> As far as I know, you can disable an extension by moving it out of the
> api_extensions_path, renaming it with an _ in front of it, or making sure the
> core plugin and any loaded service plugins do not list it in their
> supported_extension_aliases variable.  I don't know of any easier way to
> do that.  Hopefully I'm just not aware of one that exists.
>
> >
> >
> > Further reading:
> >
> > http://lists.openstack.org/pipermail/openstack-dev/2016-May/095323.html
> > http://lists.openstack.org/pipermail/openstack-dev/2016-May/095349.html
> > http://lists.openstack.org/pipermail/openstack-dev/2016-May/095361.html
> >
> >
> > [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095095.html
> > [2]:
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L46
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]cherry pick patches to stable/mitaka release

2016-06-07 Thread joehuang
Hello,

As the job "publish to PyPI" has been added to the infra pipeline, I suggest to 
cherry pick the following patches(which has been merged to the master branch) 
to stable/mitaka branch, then we can make an initial release in stable/mitaka 
branch, and also to test the release publish procedure work or not.

In the early stage of a project, velocity is quite important to get features on 
board and move fast to grow, the first release is not for production purpose, 
but for preview.

[1] https://review.openstack.org/#/c/305648/
[2] https://review.openstack.org/#/c/307599/
[3] https://review.openstack.org/#/c/310335/
[4] https://review.openstack.org/#/c/310415/
[5] https://review.openstack.org/#/c/310975/
[6] https://review.openstack.org/#/c/311069/

Your comments are welcome.

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] Is the vmware driver going to be OK when we drop glance v1 support?

2016-06-07 Thread Sabari Murugesan
Since most of the image attribute translation is handled by
image/glance.py, I don't see any problem with dropping glance v1 support.
However, there is just one utility (io_util.py) in the driver that references
the glance v1 image state 'killed' and needs to be updated. There is a patch,
https://review.openstack.org/#/c/281134, that refactors the image upload and
removes this file altogether. It would be good to merge it now.

Thanks
Sabari




On Tue, Jun 7, 2016 at 1:35 PM, Matt Riedemann 
wrote:

> Most of the glance v2 integration series in Nova is merged [1]. The
> libvirt support is done and tested, the hyper-v driver change is +2'ed and
> the xenplugin code is being worked on.
>
> The question now is does anyone know if the vmware driver is going to be
> OK with just glance v2, i.e. when nova.conf has use_glance_v1=False?
>
> Is anyone from the vmware subteam going to test that out? You need to
> essentially test like this [2].
>
> Be aware that we're looking to drop the glance v1 support from Nova early
> in Ocata, so we need to make sure this is working in the drivers before
> that happens.
>
> [1] https://review.openstack.org/#/q/topic:bp/use-glance-v2-api
> [2] https://review.openstack.org/#/c/325322/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-07 Thread Claudiu Belu
Hello,

Sounds good.

We'll be testing glance v2 in the Hyper-V CI as well, but at first glance
there don't seem to be any issues with this. We'll switch to glance v2 as
soon as we're sure nothing will blow up. :)

Best regards,

Claudiu Belu


From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
Sent: Tuesday, June 07, 2016 11:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

I tested the glance v2 stack (glance v1 disabled) using a devstack
change here:

https://review.openstack.org/#/c/325322/
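For anyone reproducing this locally, the switch boils down to flipping the new nova option; a rough sketch (only the option name use_glance_v1 comes from this thread, and the option group shown is an assumption to be checked against nova's config reference):

```
# nova.conf sketch
[glance]
use_glance_v1 = False
```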

Now that the changes are merged up through the base nova image proxy and
the libvirt driver, and we just have hyper-v/xen driver changes for that
series, we should look at gating on this configuration.

I was originally thinking about adding a new job for this, but it's
probably better if we just change one of the existing integrated gate
jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.

Does anyone have an issue with that? Glance v1 is deprecated and the
configuration option added to nova (use_glance_v1) defaults to True for
compat but is deprecated, and the Nova team plans to drop its v1 proxy
code in Ocata. So it seems like changing config to use v2 in the gate
jobs should be a non-issue. We'd want to keep at least one integrated
gate job using glance v1 to make sure we don't regress anything there in
Newton.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]trusts with federated users

2016-06-07 Thread Adam Young

On 06/07/2016 10:28 AM, Gyorgy Szombathelyi wrote:

> Hi!
>
> As an OIDC user, I tried to play with Heat and Murano recently. They usually fail
> with a trust creation error, reporting that keystone cannot find the _member_
> role while creating the trust.
Hmmm...that should not be the case.  The user in question should have a 
role on the project, but getting it via a group is OK.


I suspect the problem is the Ephemeral nature of Federated users. With 
the Shadow user construct (under construction) there would be something 
to use.


Please file a bug on this and assign it to me (or notify me if you can't 
assign).




> Since a federated user does not really have a role in a project, but is a
> member of a group which has the appropriate role(s), I suspect that this will
> never work with federation?
> Or is it a known/general problem with trusts and groups? I cannot really decide
> if it is a problem on the Heat or the Keystone side; can you give me some
> advice?
> If it is not an error in the code, but in my setup, then please forgive me this
> stupid question.
>
> Br,
> György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][stable] Call for stable/mitaka reviews for 13.0.1

2016-06-07 Thread Matt Riedemann
Now that we're past n-1 for Newton it's time to do a 13.0.1 point 
release for stable/mitaka. There are several changes with a +2 that need 
final approval:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/mitaka

So let's push through what's there now and I'll plan to get the mitaka 
release request into the release team tomorrow or early on Thursday.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tacker][nfv][midcycle] Straw poll on F2F vs Virtual midcycle meetup

2016-06-07 Thread Sridhar Ramaswamy
Tackers,

Please respond to this straw poll to gauge general interest for a midcycle
meetup in this dev cycle,

http://doodle.com/poll/2p62zzgevg6h5xkn

Based on this outcome, if there is enough interest, I will send another poll
to zoom in on a date (roughly the 2nd half of July).

thanks,
Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn] Integration with OVN NAT gateway (Proposal)

2016-06-07 Thread Amitabha Biswas
Sorry that was a typo, it should read:

> Note that the MAC addresses of gtrp and dtrp will be the same on each OVN 
> Join Network, but because they are in different branches of the network 
> topology it doesn’t matter.
Amitabha

> On Jun 7, 2016, at 4:39 PM, Bhalachandra Banavalikar 
>  wrote:
> 
> Can you please provide more details on lgrp and lip ports (last bullet in 
> section 1)?
> 
> Thanks,
> Bhal
> 
> Amitabha Biswas ---06/07/2016 01:56:23 PM---This proposal 
> outlines the modifications needed in networking-ovn (addresses 
> https://bugs.launchpad .
> 
> From: Amitabha Biswas 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: Chandra Sekhar Vejendla/San Jose/IBM@IBMUS
> Date: 06/07/2016 01:56 PM
> Subject: [openstack-dev] [neutron][networking-ovn] Integration with OVN NAT 
> gateway (Proposal)
> 
> 
> 
> 
> This proposal outlines the modifications needed in networking-ovn (addresses 
> https://bugs.launchpad.net/networking-ovn/+bug/1551717 
> ) to provide Floating 
> IP (FIP) and SNAT using the L3 gateway router patches.
> 
> http://patchwork.ozlabs.org/patch/624312/
> http://patchwork.ozlabs.org/patch/624313/
> http://patchwork.ozlabs.org/patch/624314/
> http://patchwork.ozlabs.org/patch/624315/
> http://patchwork.ozlabs.org/patch/629607/
> 
> 
> Diagram:
> 
> +---+   +---+
> | NET 1 |   | NET 2 |
> +---+   +---+
>    |   |
>    |   *   |
>    | ** ** |
>    |   ***  * **   |
>    +---RP1 *  DR   * RP2 --+
>    ***  * **
>  ** **
>    *
>   DTRP (169.254.128.2)
>    |
>    |
>    |
>    +--+
>    | Transit Network  |
>    | 169.254.128.0/30 |
>    +--+
>    |
>    |
>    |
>    |
>   GTRP (169.254.128.1)
> ***
>   **   **
> **   *   *   ** +--+
> * GW  *-| Provider Network |
> **   *   *   ** +--+
>   **   **
> ***
> 
> New Entities:
> OVN Join/Transit Networks
> One per Neutron Router - /30 address space with only 2 ports for e.g. 
> 169.254.128.0/30
> Created when an external gateway is added to a router.
> One extra datapath per router with an External Gateway.
> (Alternate option - One Transit Network in a deployment, IPAM becomes a 
> headache - Not discussed here).
> Prevent Neutron from using that /30 address space. Specify in networking-ovn 
> conf file.
> Create 1 new “Join” neutron network (to represent all Join OVN Networks) in 
> the networking-ovn.
> Note that it may be possible to replace the Join/Transit network using Router 
> Peering in later versions (not discussed here).
> Allocate 2 ports in the Join network in the networking-ovn plugin.
> Logical Gateway Transit Router Port (gtrp), 169.254.128.1
> Logical Distributed Transit Router Port (dtrp), 169.254.128.2
> Note that Neutron only sees 1 Join network with 2 ports; OVN sees a replica 
> of this Join network as a new Logical Switch for each Gateway Router. The 
> mapping of OVN Logical Switch(es) Join(s) to Gateway Router is discussed in 
> OVN (Default) Gateway Routers below.
> Note that the MAC addresses of gtrp and dtrp will be the same on each OVN 
> Join Network, but because they are in different branches of the network 
> topology it doesn’t matter.
> OVN (Default) Gateway Routers:
> One per Neutron Router.
> 2 ports
> Logical Gateway Transit Router Port (gtrp), 169.254.128.1 (same for each OVN 
> Join network).
> External/Provider Router Port (legwrp), this is allocated by neutron.
> Scheduling - The current OVN gateway proposal relies on the CMS/nbctl to 
> decide on which hypervisor (HV) to schedule a particular gateway router.
> A setting on the chassis (new external_id key or a new column) that allows 
> the hypervisor admin to specify that a chassis can or cannot be used to host 
> a gateway router (similar to a network node in OpenStack). Default - Allow 
> (for compatibility purposes).
> The networking-ovn plugin picks up the list of “candidate” chassis from the 
> Southbound DB and uses an existing scheduling algorithm
> Use a simple random.choice i.e. ChanceScheduler (Version 1)
> Tap into the neutron’s LeastRouterScheduler - but that requires the 
> networking-ovn (or some a hacked up version of the L3 agent) to imitate the 
> L3 agent running on various network nodes.
> Populate the SNAT and DNAT columns in the logical router table. This is under 
> review in OVS - http://openvswitch.org/pipermail/dev/2016-June/072169.html 
> 
> Create static routing entry in the gateway router to route tenant bound
> traffic to the distributed logical router.
> 
> Existing Entities:
> Distributed Logical 

Re: [openstack-dev] [neutron][networking-ovn] Integration with OVN NAT gateway (Proposal)

2016-06-07 Thread Bhalachandra Banavalikar
Can you please provide more details on lgrp and lip ports (last bullet in
section 1)?

Thanks,
Bhal



From:   Amitabha Biswas 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: Chandra Sekhar Vejendla/San Jose/IBM@IBMUS
Date:   06/07/2016 01:56 PM
Subject:[openstack-dev] [neutron][networking-ovn] Integration with OVN
NAT gateway (Proposal)



This proposal outlines the modifications needed in networking-ovn
(addresses https://bugs.launchpad.net/networking-ovn/+bug/1551717) to
provide Floating IP (FIP) and SNAT using the L3 gateway router patches.

http://patchwork.ozlabs.org/patch/624312/
http://patchwork.ozlabs.org/patch/624313/
http://patchwork.ozlabs.org/patch/624314/
http://patchwork.ozlabs.org/patch/624315/
http://patchwork.ozlabs.org/patch/629607/

Diagram:

+---+   +---+
| NET 1 |   | NET 2 |
+---+   +---+
   |   |
   |   *   |
   | ** ** |
   |   ***  * **   |
   +---RP1 *  DR   * RP2 --+
   ***  * **
 ** **
   *
  DTRP (169.254.128.2)
   |
   |
   |
   +--+
   | Transit Network  |
   | 169.254.128.0/30 |
   +--+
   |
   |
   |
   |
  GTRP (169.254.128.1)
***
  **   **
**   *   *   ** +--+
* GW  *-| Provider Network |
**   *   *   ** +--+
  **   **
***

New Entities:

  OVN Join/Transit Networks
One per Neutron Router - /30 address space with only 2 ports
for e.g. 169.254.128.0/30
Created when an external gateway is added to a router.
One extra datapath per router with an External Gateway.
(Alternate option - One Transit Network in a deployment, IPAM
becomes a headache - Not discussed here).
Prevent Neutron from using that /30 address space. Specify in
networking-ovn conf file.
Create 1 new “Join” neutron network (to represent all Join OVN
Networks) in the networking-ovn.
Note that it may be possible to replace the Join/Transit
network using Router Peering in later versions  (not discussed
here).
Allocate 2 ports in the Join network in the networking-ovn
plugin.
  Logical Gateway Transit Router Port (gtrp), 169.254.128.1
  Logical Distributed Transit Router Port (dtrp),
  169.254.128.2
Note that Neutron only sees 1 Join network with 2 ports; OVN
sees a replica of this Join network as a new Logical Switch for
each Gateway Router. The mapping of OVN Logical Switch(es) Join
(s) to Gateway Router is discussed in OVN (Default) Gateway
Routers below.
Note that the MAC addresses of lgrp and lip will be the same on
each OVN Join Network, but because they are in different
branches of the network topology it doesn’t matter.
  OVN (Default) Gateway Routers:
One per Neutron Router.
2 ports
  Logical Gateway Transit Router Port (gtrp), 169.254.128.1
  (same for each OVN Join network).
  External/Provider Router Port (legwrp), this is allocated
  by neutron.
Scheduling - The current OVN gateway proposal relies on the
CMS/nbctl to decide on which hypervisor (HV) to schedule a
particular gateway router.
  A setting on the chassis (new external_id key or a new
  column) that allows the hypervisor admin to specify that
  a chassis can or cannot be used to host a gateway router
  (similar to a network node in OpenStack). Default - Allow
  (for compatibility purposes).
  The networking-ovn plugin picks up the list of
  “candidate” chassis from the Southbound DB and uses an
  existing scheduling algorithm
Use a simple random.choice i.e. ChanceScheduler
(Version 1)
Tap into the neutron’s LeastRouterScheduler - but
that requires the networking-ovn (or some a hacked
up version of the L3 agent) to imitate the L3 agent
running on various network nodes.
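To make the Version-1 scheduling concrete, a minimal sketch (function name and error handling are illustrative; it just picks randomly among the chassis already filtered by the external_id setting):

```python
import random


def schedule_gateway_chassis(candidate_chassis):
    """ChanceScheduler-style (Version 1) selection of a gateway chassis.

    candidate_chassis: chassis from the OVN Southbound DB whose
    external_ids allow them to host a gateway router.
    """
    if not candidate_chassis:
        raise RuntimeError('No chassis is allowed to host the gateway router')
    return random.choice(candidate_chassis)
```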

Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Jim Baker
On Tue, Jun 7, 2016 at 4:26 PM, Ben Meyer  wrote:

> On 06/07/2016 06:09 PM, Samuel Merritt wrote:
> > On 6/7/16 12:00 PM, Monty Taylor wrote:
> >> [snip]
> > >
> >> I'd rather see us focus energy on Python3, asyncio and its pluggable
> >> event loops. The work in:
> >>
> >> http://magic.io/blog/uvloop-blazing-fast-python-networking/
> >>
> >> is a great indication in an actual apples-to-apples comparison of what
> >> can be accomplished in python doing IO-bound activities by using modern
> >> Python techniques. I think that comparing python2+eventlet to a fresh
> >> rewrite in Go isn't 100% of the story. A TON of work has gone in to
> >> Python that we're not taking advantage of because we're still supporting
> >> Python2. So what I've love to see in the realm of comparative
> >> experimentation is to see if the existing Python we already have can be
> >> leveraged as we adopt newer and more modern things.
> >
> > Asyncio, eventlet, and other similar libraries are all very good for
> > performing asynchronous IO on sockets and pipes. However, none of them
> > help for filesystem IO. That's why Swift needs a golang object server:
> > the go runtime will keep some goroutines running even though some
> > other goroutines are performing filesystem IO, whereas filesystem IO
> > in Python blocks the entire process, asyncio or no asyncio.
>
> That can be modified. gevent has a tool
> (http://www.gevent.org/gevent.fileobject.html) that enables the File IO
> to be async  as well by putting the file into non-blocking mode. I've
> used it, and it works and scales well.
>
> Sadly, Python doesn't offer this by default; perhaps OpenStack can get
> that changed.
>
> $0.02
>
> Ben
>
>
The uvloop library is very new, but libuv itself uses a thread pool, so it
should work just fine with files once this functionality is implemented;
see https://github.com/MagicStack/uvloop/issues/1 and
https://nikhilm.github.io/uvbook/filesystem.html
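For what it's worth, stock asyncio can already get that effect for filesystem IO by handing the blocking call to the loop's default thread pool, much as libuv hands file operations to its worker threads; a minimal sketch:

```python
import asyncio


def read_blob(path):
    # A plain blocking read; called directly it would stall the event loop.
    with open(path, 'rb') as f:
        return f.read()


async def main():
    loop = asyncio.get_event_loop()
    # run_in_executor(None, ...) pushes the blocking read onto the default
    # thread pool so other coroutines keep running meanwhile.
    data = await loop.run_in_executor(None, read_blob, '/etc/hostname')
    print(len(data))


asyncio.get_event_loop().run_until_complete(main())
```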

I don't see any difference here with how Golang would implement such async
support, in terms of mapping goroutines against its own thread pool. With
respect to latency, there could even be better performance for certain
workloads with uvloop than Go. This is because of CPython's refcounting, vs
Go's GC - even with all the recent improvements in Go (
https://blog.golang.org/go15gc), Go still does stop the world.

A key piece here is that uvloop does not hold the GIL in these ops - a big
advantage that Python C extensions enjoy, including libraries they wrap,
and Cython explicitly enables with nogil support. So in particular we have
the following code in the the core uvloop function, which in turn is just
calling into libuv with its own core function, uv_run:

```
# Although every UVHandle holds a reference to the loop,
# we want to do everything to ensure that the loop will
# never deallocate during the run -- so we do some
# manual refs management.
Py_INCREF(self)
with nogil:
err = uv.uv_run(self.uvloop, mode)
Py_DECREF(self)
```

More here on nogil support in Cython:
http://docs.cython.org/src/userguide/external_C_code.html#declaring-a-function-as-callable-without-the-gil

- Jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [HA] RFC: user story including hypervisor reservation / host maintenance / storage AZs / event history

2016-06-07 Thread Adam Spiers
[Cc'ing product-wg@ - when replying, first please consider whether
cross-posting is appropriate]

Hi all,

Currently the OpenStack HA community is putting a lot of effort into
converging on a single upstream solution for high availability of VMs
and hypervisors[0], and we had a lot of very productive discussions in
Austin on this topic[1].

One of the first areas of focus is the high level user story:

   
http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html

In particular, there is an open review on which we could use some
advice from the wider community.  The review proposes adding four
extra usage scenarios to the existing user story.  All of these
scenarios are to some degree related to HA of VMs and hypervisors,
however none of them exclusively - they all have scope extending to
other areas beyond HA.  Here's a very brief summary of all four, as
they relate to HA:

1. "Sticky" shared storage zones

   Scenario: all compute hosts have access to exactly one shared
   storage "availability zone" (potentially independent of the normal
   availability zones).  For example, there could be multiple NFS
   servers, and every compute host has /var/lib/nova/instances mounted
   to one of them.  On first boot, each VM is *implicitly* assigned to
   a zone, depending on which compute host nova-scheduler picks for it
   (so this could be more or less random).  Subsequent operations such
   as "nova evacuate" would need to ensure the VM only ever moves to
   other hosts in the same zone.

2. Hypervisor reservation

   The operator wants a mechanism for reserving some compute hosts
   exclusively for use as failover hosts on which to automatically
   resurrect VMs from other failed compute nodes.

3. Host maintenance

   The operator wants a mechanism for flagging hosts as undergoing
   maintenance, so that the HA mechanisms for automatic recovery are
   temporarily disabled during the maintenance window.

4. Event history

   The operator wants a way to retrieve the history of what, when,
   where and how the HA automatic recovery mechanism is performed.

And here's the review in question:

   https://review.openstack.org/#/c/318431/

My first instinct was that all of these scenarios are sufficiently
independent, complex, and extend far enough outside HA scope, that
they deserve to live in four separate user stories, rather than adding
them to our existing "HA for VMs" user story.  This could also
maximise the chances of converging on a single upstream solution for
each which works both inside and outside HA contexts.  (Please read
the review's comments for much more detail on these arguments.)

However, others made the very valid point that since there are
elements of all these stories which are indisputably related to HA for
VMs, we still need the existing user story for HA VMs to cover them,
so that it can provide "the big picture" which will tie together all
the different strands of work it requires.

So we are currently proposing to take the following steps:

 - Propose four new user stories for each of the above scenarios.

 - Link to the new stories from the "Related User Stories" section of
   the existing HA VMs story.

 - Extend the existing story so that it covers the HA-specific aspects of
   the four cases, leaving any non-HA aspects to be covered by the newly
   linked stories.

Then each story would go through the standard workflow defined by the PWG:

   https://wiki.openstack.org/wiki/ProductTeam/User_Stories

Does this sound reasonable, or is there a better way?

BTW, whilst this email is primarily asking for advice on the process,
feedback on each story is also welcome, whether it's "good idea", "you
can already do that", or "terrible idea!" ;-)  However please first
read the comments on the above review, as the obvious points have
probably already been covered :-)

Thanks a lot!

Adam

[0] A complete description of the problem area and existing solutions
was given in this talk:

  
https://www.openstack.org/videos/video/high-availability-for-pets-and-hypervisors-state-of-the-nation

[1] https://etherpad.openstack.org/p/newton-instance-ha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-07 Thread Hongbin Lu
Hi all,

According to the decision at the last team meeting, we will rename the project 
to “Zun”. Eli Qiao has submitted a rename request: 
https://review.openstack.org/#/c/326306/. The infra team will rename the
project in gerrit and git in the next maintenance window (possibly a couple of
months from now). In the meantime, I propose that we start using the new name
ourselves. That includes:
- Use the new launchpad project: https://launchpad.net/zun (need helps to copy 
your bugs and BPs to the new LP project)
- Send emails with “[Zun]” in the subject
- Use the new IRC channel #openstack-zun
(others if you can think of)

If you have any concern or suggestion, please don’t hesitate to contact us. 
Thanks.

Best regards,
Hongbin

From: Yanyan Hu [mailto:huyanya...@gmail.com]
Sent: June-02-16 3:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

Aha, it's pretty interesting, I vote for Zun as well :)

2016-06-02 12:56 GMT+08:00 Fei Long Wang 
>:
+1 for Zun, I love it and it's definitely a good container :)


On 02/06/16 15:46, Monty Taylor wrote:
> On 06/02/2016 06:29 AM, 秀才 wrote:
>> i suggest a name Zun :)
>> please see the reference: https://en.wikipedia.org/wiki/Zun
> It's available on pypi and launchpad. I especially love that one of the
> important examples is the "Four-goat square Zun"
>
> https://en.wikipedia.org/wiki/Zun#Four-goat_square_Zun
>
> I don't get a vote - but I vote for this one.
>
>> -- Original --
>> *From: * "Rochelle 
>> Grober";>;
>> *Date: * Thu, Jun 2, 2016 09:47 AM
>> *To: * "OpenStack Development Mailing List (not for usage
>> questions)">;
>> *Cc: * "Haruhiko 
>> Katou">;
>> *Subject: * Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Well, you could stick with the wine bottle analogy  and go with a bigger
>> size:
>>
>> Jeroboam
>> Methuselah
>> Salmanazar
>> Balthazar
>> Nabuchadnezzar
>>
>> --Rocky
>>
>> -Original Message-
>> From: Kumari, Madhuri 
>> [mailto:madhuri.kum...@intel.com]
>> Sent: Wednesday, June 01, 2016 3:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Haruhiko Katou
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Thanks Shu for providing suggestions.
>>
>> I wanted the new name to be related to containers as Magnum is also
>> synonym for containers. So I have few options here.
>>
>> 1. Casket
>> 2. Canister
>> 3. Cistern
>> 4. Hutch
>>
>> All above options are free to be taken on pypi and Launchpad.
>> Thoughts?
>>
>> Regards
>> Madhuri
>>
>> -Original Message-
>> From: Shuu Mutou 
>> [mailto:shu-mu...@rf.jp.nec.com]
>> Sent: Wednesday, June 1, 2016 11:11 AM
>> To: 
>> openstack-dev@lists.openstack.org
>> Cc: Haruhiko Katou >
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> I found container related names and checked whether other project uses.
>>
>> https://en.wikipedia.org/wiki/Straddle_carrier
>> https://en.wikipedia.org/wiki/Suezmax
>> https://en.wikipedia.org/wiki/Twistlock
>>
>> These words are not used by other project on PYPI and Launchpad.
>>
>> ex.)
>> https://pypi.python.org/pypi/straddle
>> https://launchpad.net/straddle
>>
>>
>> However the chance of renaming in N cycle will be done by Infra-team on
>> this Friday, we would not meet the deadline. So
>>
>> 1. use 'Higgins' ('python-higgins' for package name) 2. consider other
>> name for next renaming chance (after a half year)
>>
>> Thoughts?
>>
>>
>> Regards,
>> Shu
>>
>>
>>> -Original Message-
>>> From: Hongbin Lu 
>>> [mailto:hongbin...@huawei.com]
>>> Sent: Wednesday, June 01, 2016 11:37 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> >
>>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>>
>>> Shu,
>>>
>>> According to the feedback from the last team meeting, Gatling doesn't
>>> seem to be a suitable name. Are you able to find an alternative name?
>>>
>>> Best regards,
>>> Hongbin
>>>
 -Original Message-
 From: Shuu Mutou 
 [mailto:shu-mu...@rf.jp.nec.com]
 Sent: May-24-16 4:30 AM
 To: 
 openstack-dev@lists.openstack.org
 Cc: Haruhiko Katou
 Subject: [openstack-dev] [higgins] Should we rename "Higgins"?

 Hi all,

 Unfortunately "higgins" is used by media server 

Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-06-07 Thread Hongbin Lu
Hi all,

Thanks for your votes. Eli Qiao has been added to the core team: 
https://review.openstack.org/#/admin/groups/1382,members .

Best regards,
Hongbin

> -Original Message-
> From: Chandan Kumar [mailto:chku...@redhat.com]
> Sent: June-01-16 12:27 AM
> To: Sheel Rana Insaan
> Cc: Hongbin Lu; adit...@nectechnologies.in;
> vivek.jain.openst...@gmail.com; Shuu Mutou; Davanum Srinivas; hai-
> x...@xr.jp.nec.com; Yuanying; Kumari, Madhuri; yanya...@cn.ibm.com;
> flw...@catalyst.net.nz; OpenStack Development Mailing List (not for
> usage questions); Qi Ming Teng; sitlani.namr...@yahoo.in;
> qiaoliy...@gmail.com
> Subject: Re: [Higgins] Proposing Eli Qiao to be a Higgins core
> 
> Hello,
> 
> 
> > On Jun 1, 2016 3:09 AM, "Hongbin Lu"  wrote:
> >>
> >> Hi team,
> >>
> >>
> >>
> >> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins
> core.
> >> Normally, the requirement to join the core team is to consistently
> >> contribute to the project for a certain period of time. However,
> >> given the fact that the project is new and the initial core team was
> >> formed based on a commitment, I am fine to propose a new core based
> >> on a strong commitment to contribute plus a few useful
> >> patches/reviews. In addition, Eli Qiao is currently a Magnum core
> and
> >> I believe his expertise will be an asset of Higgins team.
> >>
> >>
> 
> +1 from my side.
> 
> Thanks,
> 
> Chandan Kumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Kris G. Lindgren
Replying to a digest so sorry for the copy and pastes….


>> There's also been discussion of ways we could do ad-hoc changes in RAID 
>> level,
>> based on flavor metadata, during the provisioning process (rather than ahead 
>> of
>> time) but no code has been done for this yet, AFAIK.
>
> I'm still pretty interested in it, because I agree with anything said
> above about building RAID ahead-of-time not being convenient. I don't
> quite understand how such a feature would look like, we might add it as
> a topic for midcycle.

This sounds like an interesting/acceptable way to handle this problem to me.  
Update the node to set the desired raid state from the flavor.
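
For illustration, a minimal sketch of what "raid state from the flavor" could
look like, assuming a made-up flavor extra spec name and simply building
Ironic's target_raid_config structure (this is not an existing Nova/Ironic
feature, just the shape of the idea):

    def raid_config_from_flavor(extra_specs):
        # 'capabilities:raid_level' is an assumed extra spec name, purely for
        # illustration.
        level = extra_specs.get('capabilities:raid_level')
        if level is None:
            return None  # flavor does not ask for any particular RAID level
        return {
            'logical_disks': [
                {'size_gb': 'MAX',
                 'raid_level': str(level),
                 'is_root_volume': True},
            ]
        }

    # e.g. raid_config_from_flavor({'capabilities:raid_level': '10'}) would be
    # applied to the node as its target_raid_config before deployment.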

>> - Inspection is geared towards using a different network and dnsmasq

>> infrastructure than what is in use for ironic/neutron.  Which also means 
>> that in
>> order to not conflict with dhcp requests for servers in ironic I need to use
>> different networks.  Which also means I now need to handle swinging server 
>> ports
>> between different networks.
>
> Inspector is designed to respond only to requests for nodes in the inspection
> phase, so that it *doesn't* conflict with provisioning of nodes by Ironic. 
> I've
> been using the same network for inspection and provisioning without issue -- 
> so
> I'm not sure what problem you're encountering here.

So I was mainly thinking about the use case of using inspector to onboard 
unknown hosts into ironic (though I see I didn't mention that).  In a 
datacenter we are always onboarding servers.  Right now we boot a linux agent 
that "inventories" the box and adds it to our management system as a node that 
can be consumed by a build request.  My understanding is that inspector 
supports this as of Mitaka.  However, the install guide for inspection states 
that you need to install its own dnsmasq instance for inspection.  To me this 
implies that it is supposed to be a separate network, because if I have 2 DHCP 
servers running on the same L2 network I am going to get races between the 2 
DHCP servers for normal provisioning activities, especially if one DHCP server 
is configured to respond to everything (so that it can onboard unknown 
hardware) and the other only to specific hosts (ironic/neutron).  The only way 
that wouldn't be an issue is if both inspector and ironic/neutron are using the 
same DHCP servers.  Or am I missing something?

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Ben Meyer
On 06/07/2016 06:09 PM, Samuel Merritt wrote:
> On 6/7/16 12:00 PM, Monty Taylor wrote:
>> [snip]
> >
>> I'd rather see us focus energy on Python3, asyncio and its pluggable
>> event loops. The work in:
>>
>> http://magic.io/blog/uvloop-blazing-fast-python-networking/
>>
>> is a great indication in an actual apples-to-apples comparison of what
>> can be accomplished in python doing IO-bound activities by using modern
>> Python techniques. I think that comparing python2+eventlet to a fresh
>> rewrite in Go isn't 100% of the story. A TON of work has gone in to
>> Python that we're not taking advantage of because we're still supporting
>> Python2. So what I've love to see in the realm of comparative
>> experimentation is to see if the existing Python we already have can be
>> leveraged as we adopt newer and more modern things.
>
> Asyncio, eventlet, and other similar libraries are all very good for
> performing asynchronous IO on sockets and pipes. However, none of them
> help for filesystem IO. That's why Swift needs a golang object server:
> the go runtime will keep some goroutines running even though some
> other goroutines are performing filesystem IO, whereas filesystem IO
> in Python blocks the entire process, asyncio or no asyncio.

That can be modified. gevent has a tool
(http://www.gevent.org/gevent.fileobject.html) that enables the File IO
to be async  as well by putting the file into non-blocking mode. I've
used it, and it works and scales well.
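
For reference, a minimal sketch of that pattern using gevent's FileObjectThread
wrapper (one of the fileobject variants on the page linked above); error
handling and tuning omitted:

    from gevent.fileobject import FileObjectThread

    def stream_file(path, chunk_size=65536):
        # Reads are handed to gevent's threadpool, so the hub keeps servicing
        # other greenlets while the disk is busy.
        f = FileObjectThread(open(path, 'rb'))
        try:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        finally:
            f.close()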

Sadly, Python doesn't offer this by default; perhaps OpenStack can get
that changed.

$0.02

Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-07 Thread Nikhil Komawar


On 6/7/16 4:55 PM, Matt Riedemann wrote:
> I tested the glance v2 stack (glance v1 disabled) using a devstack
> change here:
>
> https://review.openstack.org/#/c/325322/
>
> Now that the changes are merged up through the base nova image proxy
> and the libvirt driver, and we just have hyper-v/xen driver changes
> for that series, we should look at gating on this configuration.
>
> I was originally thinking about adding a new job for this, but it's
> probably better if we just change one of the existing integrated gate
> jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.
>
> Does anyone have an issue with that? Glance v1 is deprecated and the
> configuration option added to nova (use_glance_v1) defaults to True for 

Just wanted to clarify that Glance v1 isn't deprecated yet, but the
process will be started as soon as Nova's port from glance v1 to v2 is done.

> compat but is deprecated, and the Nova team plans to drop its v1
> proxy code in Ocata. So it seems like changing config to use v2 in the
> gate jobs should be a non-issue. We'd want to keep at least one
> integrated gate job using glance v1 to make sure we don't regress
> anything there in Newton.
>

Overall, sounds like a good plan (and no objections).

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar][zaqar-ui] Nomating Shu Muto for Zaqar-UI core

2016-06-07 Thread Fei Long Wang

+1 and thanks for all the great work.

On 08/06/16 08:27, Thai Q Tran wrote:


Hello all,

I am pleased to nominate Shu Muto to the Zaqar-UI core team. Shu's 
reviews are extremely thorough and his work exemplary. His expertise 
in angularJS, translation, and project infrastructure proved to be 
invaluable. His support and reviews have helped the project progress. 
Combined with his strong understanding of the project, I believe he 
will help guide us in the right direction and allow us to keep our 
current pace.


Please vote +1 or -1 to the nomination.

Thanks,
Thai (tqtran)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Samuel Merritt

On 6/7/16 12:00 PM, Monty Taylor wrote:

[snip]

>

I'd rather see us focus energy on Python3, asyncio and its pluggable
event loops. The work in:

http://magic.io/blog/uvloop-blazing-fast-python-networking/

is a great indication in an actual apples-to-apples comparison of what
can be accomplished in python doing IO-bound activities by using modern
Python techniques. I think that comparing python2+eventlet to a fresh
rewrite in Go isn't 100% of the story. A TON of work has gone in to
Python that we're not taking advantage of because we're still supporting
Python2. So what I'd love to see in the realm of comparative
experimentation is to see if the existing Python we already have can be
leveraged as we adopt newer and more modern things.


Asyncio, eventlet, and other similar libraries are all very good for 
performing asynchronous IO on sockets and pipes. However, none of them 
help for filesystem IO. That's why Swift needs a golang object server: 
the go runtime will keep some goroutines running even though some other 
goroutines are performing filesystem IO, whereas filesystem IO in Python 
blocks the entire process, asyncio or no asyncio.
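
To illustrate the point: with asyncio the usual workaround is to push the
blocking filesystem call onto a thread pool via run_in_executor; the event
loop itself cannot make the read non-blocking. A minimal sketch (names
assumed):

    import asyncio

    async def read_object(loop, path):
        def _blocking_read():
            with open(path, 'rb') as f:
                return f.read()
        # The read still blocks a worker thread; it is just kept off the
        # event loop thread.
        return await loop.run_in_executor(None, _blocking_read)

    # loop = asyncio.get_event_loop()
    # data = loop.run_until_complete(read_object(loop, '/srv/node/obj'))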


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-06-07 Thread Hongbin Lu
Hi Heat team,

A question inline.

Best regards,
Hongbin

> -Original Message-
> From: Steven Hardy [mailto:sha...@redhat.com]
> Sent: March-03-16 3:57 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
> different availability zones
> 
> On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:
> > On 02/03/16 05:50, Mathieu Velten wrote:
> > >Hi all,
> > >
> > >I am looking at a way to spawn nodes in different specified
> > >availability zones when deploying a cluster with Magnum.
> > >
> > >Currently Magnum directly uses predefined Heat templates with Heat
> > >parameters to handle configuration.
> > >I tried to reach my goal by sticking to this model, however I
> > >couldn't find a suitable Heat construct that would allow that.
> > >
> > >Here are the details of my investigation :
> > >- OS::Heat::ResourceGroup doesn't allow to specify a list as a
> > >variable that would be iterated over, so we would need one
> > >ResourceGroup by AZ
> > >- OS::Nova::ServerGroup only allows restriction at the hypervisor
> > >level
> > >- OS::Heat::InstanceGroup has an AZs parameter but it is marked
> > >unimplemented , and is CFN specific.
> > >- OS::Nova::HostAggregate only seems to allow adding some metadatas
> > >to a group of hosts in a defined availability zone
> > >- repeat function only works inside the properties section of a
> > >resource and can't be used at the resource level itself, hence
> > >something like that is not allowed :
> > >
> > >resources:
> > >   repeat:
> > > for_each:
> > >   <%az%>: { get_param: availability_zones }
> > > template:
> > >   rg-<%az%>:
> > > type: OS::Heat::ResourceGroup
> > > properties:
> > >   count: 2
> > >   resource_def:
> > > type: hot_single_server.yaml
> > > properties:
> > >   availability_zone: <%az%>
> > >
> > >
> > >The only possibility that I see is generating a ResourceGroup by AZ,
> > >but it would induce some big changes in Magnum to handle
> > >modification/generation of templates.
> > >
> > >Any ideas ?
> >
> > This is a long-standing missing feature in Heat. There are two
> > blueprints for this (I'm not sure why):
> >
> > https://blueprints.launchpad.net/heat/+spec/autoscaling-
> availabilityzo
> > nes-impl
> > https://blueprints.launchpad.net/heat/+spec/implement-
> autoscalinggroup
> > -availabilityzones
> >
> > The latter had a spec with quite a lot of discussion:
> >
> > https://review.openstack.org/#/c/105907
> >
> > And even an attempted implementation:
> >
> > https://review.openstack.org/#/c/116139/
> >
> > which was making some progress but is long out of date and would need
> > serious work to rebase. The good news is that some of the changes I
> > made in Liberty like https://review.openstack.org/#/c/213555/ should
> > hopefully make it simpler.
> >
> > All of which is to say, if you want to help then I think it would be
> > totally do-able to land support for this relatively early in Newton :)
> >
> >
> > Failing that, the only think I can think to try is something I am
> > pretty sure won't work: a ResourceGroup with something like:
> >
> >   availability_zone: {get_param: [AZ_map, "%i"]}
> >
> > where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
> > "az-1", ...} and you're using the member index to pick out the AZ to
> > use from the parameter. I don't think that works (if "%i" is resolved
> > after get_param then it won't, and I suspect that's the case) but
> it's
> > worth a try if you need a solution in Mitaka.
> 
> Yeah, this won't work if you attempt to do the map/index lookup in the
> top-level template where the ResourceGroup is defined, but it *does*
> work if you pass both the map and the index into the nested stack, e.g
> something like this (untested):
> 
> $ cat rg_az_map.yaml
> heat_template_version: 2015-04-30
> 
> parameters:
>   az_map:
> type: json
> default:
>   '0': az1
>   '1': az2
> 
> resources:
>  AGroup:
> type: OS::Heat::ResourceGroup
> properties:
>   count: 2
>   resource_def:
> type: server_mapped_az.yaml
> properties:
>   availability_zone_map: {get_param: az_map}
>   index: '%index%'
> 
> $ cat server_mapped_az.yaml
> heat_template_version: 2015-04-30
> 
> parameters:
>   availability_zone_map:
> type: json
>   index:
> type: string
> 
> resources:
>  server:
> type: OS::Nova::Server
> properties:
>   image: the_image
>   flavor: m1.foo
>   availability_zone: {get_param: [availability_zone_map, {get_param:
> index}]}

This is nice. It seems to address our heterogeneity requirement at *deploy* 
time. However, I wonder what is the runtime behavior. For example, I deploy a 
stack by:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2"}'

Then, I want to remove a server by:
$ heat stack-update -f rg_az_map.yaml 

Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread James Penick
>rather than making progress on OpenStack, we'll spend the next 4 years
bikeshedding broadly about which bits, if any, should be rewritten in Go.

100% agreed, and well said.


On Tue, Jun 7, 2016 at 12:00 PM, Monty Taylor  wrote:

> This text is in my vote, but as I'm sure there are people who do not
> read all of the gerrit comments for governance changes, I'm posting it
> here so that my thoughts are clear.
>
> Please know that this has actually kept me up at night. I cast my vote
> on this neither glibly nor superficially. I have talked to everyone I can
> possibly think of on the topic, and at the end, the only thing I can do
> is use my judgment and vote to the best of my ability. I apologize from
> the bottom of my heart to the people I find myself in disagreement with.
> I have nothing but the utmost respect for you all.
>
> I vote against allowing Go as an official language of OpenStack.
>
> "The needs of the many outweigh the needs of the few, or the one"
>
> I'm super unhappy about both possible votes here.
>
> I think go is a wonderful language. I think hummingbird is a well
> considered solution to a particular problem. I think that lack of
> flexibility is broadly speaking not a problem we have in OpenStack
> currently. I'm more worried about community cohesion in a post-Big Tent
> world than I am about specific optimization.
>
> I do not think that adding Go as a language to OpenStack today is enough
> of a win to justify the cost, so I don't like accepting it.
>
> I do not think that this should preclude serious thought about
> OpenStack's technology underpinnings, so I don't like rejecting it.
>
> "only a great fool would reach for what he was given..."
>
> I think that one of OpenStack's biggest and most loudly spoken about
> problems is too many per-project solutions and not enough holistic
> solutions. Because of that, and the six years of experience we have
> seeing where that gets us, I do not think that adding Go into the mix
> and "seeing what happens" is going to cause anything other than chaos.
>
> If we want to add Go, or any other language, into the mix for server
> projects, I think it should be done with the intent that we are going to
> do it because it's a markedly better choice across the board, that we
> are going to rewrite literally everything, and I believe that we should
> consider the cost associated with retraining 2000 developers as part of
> considering that. Before you think that that's me throwing the baby out
> with the bathwater...
>
> In a previous comment, Deklan says:
>
> "If Go was accepted as an officially supported language in the OpenStack
> community, I'd be the first to start to rewrite as much code as possible
> in Go."
>
> That is, in fact, EXACTLY the concern. That rather than making progress
> on OpenStack, we'll spend the next 4 years bikeshedding broadly about
> which bits, if any, should be rewritten in Go. It took Juju YEARS to
> rewrite from Python to Go and to hit feature parity. The size of that
> codebase was much smaller and they even had a BDFL (which people keep
> telling us makes things go quicker)
>
> It could be argued that we could exercise consideration about which
> things get rewritten in Go so as to avoid that, but I'm pretty sure that
> would just mean that the only conversation the TC would have for the
> next two years would be "should X be in Go or Python" - and we'd have
> strong proponents from each project on each side of the argument.
>
> David Goetz says "you aren’t doing the community any favors by deciding
> for them how they do their jobs". I get that, and can respect that point
> of view. However, for the most part, the negative feedback we get as
> members of the TC is actually that we're too lax, not that we're too
> strict.
>
> I know that it's a popular sentiment with some folks to say "let devs
> use whatever tool they want to." However, that has never been our
> approach with OpenStack. It has been suggested multiple times and
> aligning on limited chosen tech has always been the thing we've chosen.
> I tend to align in my personal thinking more with Dan McKinley in:
>
> http://mcfunley.com/choose-boring-technology
>
> I have effectively been arguing his point for as long as I've been
> involved in OpenStack governance - although probably not as well as he
> does. I don't see any reason to reverse myself now.
>
> I'd rather see us focus energy on Python3, asyncio and its pluggable
> event loops. The work in:
>
> http://magic.io/blog/uvloop-blazing-fast-python-networking/
>
> is a great indication in an actual apples-to-apples comparison of what
> can be accomplished in python doing IO-bound activities by using modern
> Python techniques. I think that comparing python2+eventlet to a fresh
> rewrite in Go isn't 100% of the story. A TON of work has gone in to
> Python that we're not taking advantage of because we're still supporting
> Python2. So what I'd love to see in the realm of 

[openstack-dev] [swift] Exploring the feasibility of a dependency approach

2016-06-07 Thread John Dickinson
Below is the entirety of an email thread between myself, Thierry, and Flavio. 
It goes into detail about Swift's design and the feasibility and potential 
impact of a "split repos" scenario.

I'm posting this with permission as an FYI, not to reraise discussion.


--John





Forwarded message:

> From: John Dickinson 
> To: Thierry Carrez 
> Cc: Flavio Percoco 
> Subject: Re: Exploring the feasibility of a dependency approach
> Date: Wed, 01 Jun 2016 13:58:21 -0700
>
>
>
> On 30 May 2016, at 2:48, Thierry Carrez wrote:
>
>> John Dickinson wrote:
>>> Responses inline.
>>
>> Thank you for taking up the time to write this, it's really helpful (to me 
>> at least). I have a few additional comments/questions to make sure I fully 
>> understand.
>>
 [...]
 1. How much sense would a Swift API / Swift engine split make today ?
 [...]
>>>
>>> It doesn't make much sense to try to split Swift into an API part and
>>> an engine part because the things the API handles are inexorably
>>> linked to the storage engine itself. In other words, the API handlers
>>> are the implementation of the engine.
>>>
>>> Since the API is handling the actual resources that are exposed (ie
>>> the data itself), it also has to handle the "engine" pieces like the
>>> consistency model (when is something "durable"), placement (where
>>> should something go), failure handling (what if hardware in the
>>> cluster isn't available), and durability schemes (replicas, erasure
>>> coding, etc).
>>
>> Right, so knowledge of the data placement algorithm (or the durability 
>> constraints) in pervasive across the Swift nodes. The proxy server is, in a 
>> way, as low-level as the storage server.
>>
>>> The "engine" in Swift has two logical parts. One part is responsible
>>> for taking a request, making a canonical persistent "key" for it,
>>> handing the data to the storage media, and ensuring that the media has
>>> durably stored the data. The other part is responsible for handling a
>>> client request, finding data in the cluster, and coordinating all
>>> responses from the stuff in the first part.
>>>
>>> We call the first part "storage servers" and the second part "proxy
>>> servers". There are three different kinds of storage servers in Swift:
>>> account, container, and object, and each also have several background
>>> daemon processes associated with them. For the rest of this email, I'll
>>> refer to a proxy server and storage servers (or specific account,
>>> container, or object servers).
>>>
>>> The proxy server and the storage servers are pluggable. The proxy
>>> server and the storage servers support 3rd party WSGI middleware. The
>>> proxy server has been extended many times in the ecosystem with a lot
>>> of really cool functionality:
>>>
>>>   * Swift as an origin server for CDNs
>>>   * Storlets, which allow executable code stored as objects to
>>> mutate requests and responses
>>>   * Image thumbnails (eg for wikimedia)
>>>   * Genome sequence format conversions, so data coming out of a
>>> gene sequencer can go directly to swift and be usable by other
>>> apps in the workflow
>>>   * Media server timestamp to byte offset translator (eg for CrunchyRoll)
>>>   * Caching systems
>>>   * Metadata indexing
>>>
>>> The object server also supports different implementations for how it
>>> talks to durable media. The in-repo version has a memory-only
>>> implementation and a generic filesystem implementation. Third-party
>>> implementations support different storage media like Kinetic drives.
>>> If there were to be special optimizations for flash media, this is
>>> where it would go. Inside of the object server, this is abstracted as
>>> a "DiskFile", and extending it is a supported use case for Swift.
>>>
>>> The DiskFile is how other full-featured storage systems have plugged
>>> in to Swift. For example, the SwiftOnFile project implements a
>>> DiskFile that handles talking to a distributed filesystem instead of a
>>> local filesystem. This is used for putting Swift on GlusterFS or on
>>> NetApp. It's the same pattern that's used for swift-on-ceph and all of
>>> the other swift-on-* implementations out there. My previous email had
>>> more examples of these.
>>
>> The complaints I heard with DiskFile abstractions is that you pile up two 
>> data distributions algorithms: in the case of Ceph for example you use the 
>> Swift rings only to hand off data distribution to CRUSH at the end, which is 
>> like twice the complexity compared to what you actually need. So it's great 
>> for Kinetic drives, but not so much for alternate data distribution 
>> mechanisms. Is that a fair or a partial complaint ?
>>
>
> If you run Swift on top of a different system that itself provides durable 
> storage, you'll end up with a lot of complexity and cost for little benefit. 
> This isn't a Swift-specific complaint. If you were to run GlusterFS on an 
> Isilon or HDFS on 

Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Amrith Kumar
Well put Monty. It is a tough choice and neither choice was inexpensive.

-amrith 

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Tuesday, June 07, 2016 3:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] Reasoning behind my vote on the Go topic
> 
> This text is in my vote, but as I'm sure there are people who do not
> read all of the gerrit comments for governance changes, I'm posting it
> here so that my thoughts are clear.
> 
> Please know that this has actually kept me up at night. I cast my vote
> on this neither glibly nor superficially. I have talked to everyone I can
> possibly think of on the topic, and at the end, the only thing I can do
> is use my judgment and vote to the best of my ability. I apologize from
> the bottom of my heart to the people I find myself in disagreement with.
> I have nothing but the utmost respect for you all.
> 
> I vote against allowing Go as an official language of OpenStack.
> 
> "The needs of the many outweigh the needs of the few, or the one"
> 
> I'm super unhappy about both possible votes here.
> 
> I think go is a wonderful language. I think hummingbird is a well
> considered solution to a particular problem. I think that lack of
> flexibility is broadly speaking not a problem we have in OpenStack
> currently. I'm more worried about community cohesion in a post-Big Tent
> world than I am about specific optimization.
> 
> I do not think that adding Go as a language to OpenStack today is enough
> of a win to justify the cost, so I don't like accepting it.
> 
> I do not think that this should preclude serious thought about
> OpenStack's technology underpinnings, so I don't like rejecting it.
> 
> "only a great fool would reach for what he was given..."
> 
> I think that one of OpenStack's biggest and most loudly spoken about
> problems is too many per-project solutions and not enough holistic
> solutions. Because of that, and the six years of experience we have
> seeing where that gets us, I do not think that adding Go into the mix
> and "seeing what happens" is going to cause anything other than chaos.
> 
> If we want to add Go, or any other language, into the mix for server
> projects, I think it should be done with the intent that we are going to
> do it because it's a markedly better choice across the board, that we
> are going to rewrite literally everything, and I believe that we should
> consider the cost associated with retraining 2000 developers as part of
> considering that. Before you think that that's me throwing the baby out
> with the bathwater...
> 
> In a previous comment, Deklan says:
> 
> "If Go was accepted as an officially supported language in the OpenStack
> community, I'd be the first to start to rewrite as much code as possible
> in Go."
> 
> That is, in fact, EXACTLY the concern. That rather than making progress
> on OpenStack, we'll spend the next 4 years bikeshedding broadly about
> which bits, if any, should be rewritten in Go. It took Juju YEARS to
> rewrite from Python to Go and to hit feature parity. The size of that
> codebase was much smaller and they even had a BDFL (which people keep
> telling us makes things go quicker)
> 
> It could be argued that we could exercise consideration about which
> things get rewritten in Go so as to avoid that, but I'm pretty sure that
> would just mean that the only conversation the TC would have for the
> next two years would be "should X be in Go or Python" - and we'd have
> strong proponents from each project on each side of the argument.
> 
> David Goetz says "you aren’t doing the community any favors by deciding
> for them how they do their jobs". I get that, and can respect that point
> of view. However, for the most part, the negative feedback we get as
> members of the TC is actually that we're too lax, not that we're too
> strict.
> 
> I know that it's a popular sentiment with some folks to say "let devs
> use whatever tool they want to." However, that has never been our
> approach with OpenStack. It has been suggested multiple times and
> aligning on limited chosen tech has always been the thing we've chosen.
> I tend to align in my personal thinking more with Dan McKinley in:
> 
> http://mcfunley.com/choose-boring-technology
> 
> I have effectively been arguing his point for as long as I've been
> involved in OpenStack governance - although probably not as well as he
> does. I don't see any reason to reverse myself now.
> 
> I'd rather see us focus energy on Python3, asyncio and its pluggable
> event loops. The work in:
> 
> http://magic.io/blog/uvloop-blazing-fast-python-networking/
> 
> is a great indication in an actual apples-to-apples comparison of what
> can be accomplished in python doing IO-bound activities by using modern
> Python techniques. I think that comparing python2+eventlet to a fresh
> rewrite in Go isn't 100% of the story. A TON of work has 

[openstack-dev] [Smaug] Request to become a Big-Tent project.

2016-06-07 Thread Saggi Mizrahi
This is the first version of our request; for the latest version and
comments, go to https://review.openstack.org/#/c/326724/ .


Add project Smaug to OpenStack big-tent

Following the directive
http://governance.openstack.org/reference/new-projects-requirements.html

OpenStack Mission Alignment:
Smaug outlines a framework of APIs and orchestration services that
extends OpenStack to enable its users to deploy Backup and Restore
products from multiple vendors, and use them through a unified,
*coherent* interface.

Smaug exposes project-level backup and restore capabilities, which are
driven by various implementations, based on the selection of the
OpenStack instance administrator.

Smaug is designed with multiple cloud use cases in mind, such as private
enterprise deployment that may require tiering of project resource data
protection based on service level (SLA), or public cloud scenarios where
the *provider* admin may wish to provide data protection "as a service",
while the *project* admins expect to determine data protection policies
by themselves.

Smaug was initially launched as a joint-effort between Huawei and IBM
and announced during the OpenStack Mitaka summit in Tokyo. [1]

Our Mission Statement:

1) *Formalize* a unified interface with a unified schema to describe
all
   resources, dependencies, policies and actions

2) *Any resource* should be protected, therefore Smaug should be
   extendable and open to any vendor

3) *Diversity* of vendors and solutions is of paramount importance, and
   Smaug should expose all features to the users.

Following the 4 opens
Open Source:
Smaug is 100% open source; everything from implementation to design to
future features and roadmap happens and is shared openly and presented
in various meet-ups and documents.
Smaug plans to become a production-grade open source OpenStack
Data Protection API framework, to be used with multiple implementations,
both in OpenStack (like Freezer) and outside of OpenStack.

Open Community:
We are working closely with the community to share use cases and
problems
and decide on future priorities for the project.
It is our mission to collaborate with as many members/companies
in the community as possible and we welcome any feedback and any desire
to help us shape Smaug's future.
This is done using the mailing list, IRC, and weekly meetings [2].

Open Development:
- All Smaug code is being code reviewed in OpenStack Gerrit [3][4][5]
- Smaug has a core team which openly discuss all issues [6]
- Smaug support gate tests which run unit tests and fullstack tests
- Smaug collaborates other projects such as Cinder to try and find
  optimal integration solutions
- Bugs are managed by OpenStack launchpad [7]

Open Design:
- All designs and specs are published for review and managed in
  OpenStack launchpad
- Smaug conducts a weekly IRC meeting to discuss all designs and future
  roadmap
- Smaug repository offers documentation and diagrams to explain the
  current design, code and future roadmap
- Everything is discussed openly either on the review board,
  Smaug IRC channel (which is logged), or the ML.
- Smaug's mission statement is to become an integral part of OpenStack

See Also:
- Smaug Wiki: https://wiki.openstack.org/wiki/smaug
- vBrownBag Presentation: https://youtu.be/_tVYuW_YMB8
- Documentation Root: https://github.com/openstack/smaug/blob/master/doc/source/
- Smaug Source: https://github.com/openstack/smaug
- Smaug Launchpad: https://launchpad.net/smaug
- Smaug Meetings:
  - https://wiki.openstack.org/wiki/Meetings/smaug
  - http://eavesdrop.openstack.org/meetings/smaug/

[1] https://youtu.be/6RU_4vZZiLQ?t=15m23s
[2] http://eavesdrop.openstack.org/meetings/smaug/
[3] https://review.openstack.org/#/q/project:openstack/smaug
[4] https://review.openstack.org/#/q/project:openstack/smaug-dashboard
[5] https://review.openstack.org/#/q/project:openstack/python-smaugclient
[6] https://trello.com/b/Sudr4fKT/smaug
[7] https://bugs.launchpad.net/smaug


-
This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity 
to whom they are addressed.
If you have received this email in error please notify the system manager. This 
message contains confidential
information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
the contents of this information is strictly prohibited.

[openstack-dev] [neutron][networking-ovn] Integration with OVN NAT gateway (Proposal)

2016-06-07 Thread Amitabha Biswas
This proposal outlines the modifications needed in networking-ovn (addresses
https://bugs.launchpad.net/networking-ovn/+bug/1551717) to provide Floating
IP (FIP) and SNAT using the L3 gateway router patches.

http://patchwork.ozlabs.org/patch/624312/
http://patchwork.ozlabs.org/patch/624313/
http://patchwork.ozlabs.org/patch/624314/
http://patchwork.ozlabs.org/patch/624315/
http://patchwork.ozlabs.org/patch/629607/


Diagram:

+---+   +---+
| NET 1 |   | NET 2 |
+---+   +---+
   |   |
   |   *   |
   | ** ** |
   |   ***  * **   |
   +---RP1 *  DR   * RP2 --+
   ***  * **
 ** **  
   *
  DTRP (168.254.128.2)
   |
   |
   |
   +--+
   | Transit Network  |
   | 169.254.128.0/30 |
   +--+
   |
   |
   |
   |
  GTRP (169.254.128.1)
*** 
  **   **   
**   *   *   ** +--+
* GW  *-| Provider Network |
**   *   *   ** +--+
  **   **   
*** 

New Entities:

OVN Join/Transit Networks:
- One per Neutron Router - /30 address space with only 2 ports, for e.g.
  169.254.128.0/30.
- Created when an external gateway is added to a router.
- One extra datapath per router with an External Gateway.
  (Alternate option - One Transit Network in a deployment, IPAM becomes a
  headache - Not discussed here).
- Prevent Neutron from using that /30 address space. Specify in networking-ovn
  conf file.
- Create 1 new “Join” neutron network (to represent all Join OVN Networks) in
  the networking-ovn.
- Note that it may be possible to replace the Join/Transit network using Router
  Peering in later versions (not discussed here).
- Allocate 2 ports in the Join network in the networking-ovn plugin:
  - Logical Gateway Transit Router Port (gtrp), 169.254.128.1
  - Logical Distributed Transit Router Port (dtrp), 169.254.128.2
- Note that Neutron only sees 1 Join network with 2 ports; OVN sees a replica
  of this Join network as a new Logical Switch for each Gateway Router. The
  mapping of OVN Logical Switch(es) Join(s) to Gateway Router is discussed in
  OVN (Default) Gateway Routers below.
- Note that the MAC addresses of lgrp and lip will be the same on each OVN Join
  Network, but because they are in different branches of the network topology
  it doesn’t matter.
OVN (Default) Gateway Routers:
- One per Neutron Router.
- 2 ports:
  - Logical Gateway Transit Router Port (gtrp), 169.254.128.1 (same for each
    OVN Join network).
  - External/Provider Router Port (legwrp), this is allocated by neutron.
- Scheduling - The current OVN gateway proposal relies on the CMS/nbctl to
  decide on which hypervisor (HV) to schedule a particular gateway router.
  - A setting on the chassis (new external_id key or a new column) allows the
    hypervisor admin to specify that a chassis can or cannot be used to host a
    gateway router (similar to a network node in OpenStack). Default - Allow
    (for compatibility purposes).
  - The networking-ovn plugin picks up the list of “candidate” chassis from the
    Southbound DB and uses an existing scheduling algorithm (see the sketch
    after this list):
    - Use a simple random.choice, i.e. ChanceScheduler (Version 1).
    - Tap into neutron’s LeastRouterScheduler - but that requires
      networking-ovn (or some hacked-up version of the L3 agent) to imitate the
      L3 agent running on various network nodes.
- Populate the SNAT and DNAT columns in the logical router table. This is under
  review in OVS - http://openvswitch.org/pipermail/dev/2016-June/072169.html
- Create a static routing entry in the gateway router to route tenant-bound
  traffic to the distributed logical router.
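
For illustration only, here is a minimal sketch of the "Version 1" chance
scheduling step described above, assuming the plugin has already built the
list of candidate chassis names from the Southbound DB (the function name is
made up, not existing networking-ovn code):

    import random

    def schedule_gateway_chassis(candidate_chassis):
        # candidate_chassis: chassis names allowed to host gateway routers,
        # e.g. already filtered on the proposed external_id setting.
        if not candidate_chassis:
            return None  # nothing eligible; the caller decides how to fail
        return random.choice(candidate_chassis)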

Existing Entities:

Distributed Logical Routers:
Set the default gateway of the distributed logical router to the IP Address of 
the corresponding Logical Gateway Transit Router Port (169.254.128.1).

It would be good to get some feedback on this strategy. Guru mentioned that he 
saw a need for ARP response across multiple gateway routers, but we don’t see 
that requirement in this design/use-case.

Thanks
Amitabha (azbiswas) and Chandra (chandrav)

__
OpenStack Development Mailing List (not for usage 

[openstack-dev] [nova][glance][qa] Test plans for glance v2 stack

2016-06-07 Thread Matt Riedemann
I tested the glance v2 stack (glance v1 disabled) using a devstack 
change here:


https://review.openstack.org/#/c/325322/

Now that the changes are merged up through the base nova image proxy and 
the libvirt driver, and we just have hyper-v/xen driver changes for that 
series, we should look at gating on this configuration.


I was originally thinking about adding a new job for this, but it's 
probably better if we just change one of the existing integrated gate 
jobs, like gate-tempest-dsvm-full or gate-tempest-dsvm-neutron-full.


Does anyone have an issue with that? Glance v1 is deprecated and the 
configuration option added to nova (use_glance_v1) defaults to True for 
compat but is deprecated, and the Nova team plans to drop its v1 proxy 
code in Ocata. So it seems like changing config to use v2 in the gate 
jobs should be a non-issue. We'd want to keep at least one integrated 
gate job using glance v1 to make sure we don't regress anything there in 
Newton.
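
For anyone testing this locally, the setting under discussion amounts to a
one-line nova.conf override, something like the following (assuming the
option is registered under the [glance] group, as the devstack change above
configures it):

    [glance]
    use_glance_v1 = False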


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-07 Thread Zane Bitter

On 07/06/16 15:57, Jay Dobies wrote:


1. Now that we support passing un-merged environment files to heat,
it'd be
good to support an optional description key for environments,


I've never understood why the environment file doesn't have a
description field itself. Templates have descriptions, and IMO it makes
sense for an environment to describe what its particular additions to
the parameters/registry do.


Just use a comment?


I'd be happy to write that patch, but I wanted to first double check
that there wasn't a big philosophical reason why it shouldn't have a
description.


There's not much point unless you're also adding an API to retrieve 
environment files like Steve mentioned. Comments get stripped when the 
yaml is parsed, but that's fairly academic if you don't have a way to 
get it out again.
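
To illustrate why: once the YAML is parsed there is nothing left to return.
For example (plain PyYAML here, but any loader behaves the same way):

    >>> import yaml
    >>> yaml.safe_load("# enables TLS everywhere\nparameter_defaults:\n  foo: bar\n")
    {'parameter_defaults': {'foo': 'bar'}}

The comment is gone after parsing, so only a real key such as the proposed
'description' could be stored and surfaced through an API.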


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] Is the vmware driver going to be OK when we drop glance v1 support?

2016-06-07 Thread Matt Riedemann
Most of the glance v2 integration series in Nova is merged [1]. The 
libvirt support is done and tested, the hyper-v driver change is +2'ed 
and the xenplugin code is being worked on.


The question now is does anyone know if the vmware driver is going to be 
OK with just glance v2, i.e. when nova.conf has use_glance_v1=False?


Is anyone from the vmware subteam going to test that out? You need to 
essentially test like this [2].


Be aware that we're looking to drop the glance v1 support from Nova 
early in Ocata, so we need to make sure this is working in the drivers 
before that happens.


[1] https://review.openstack.org/#/q/topic:bp/use-glance-v2-api
[2] https://review.openstack.org/#/c/325322/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] The Magnum Midcycle

2016-06-07 Thread Hongbin Lu
Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let 
us know if you are interested in hosting us.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar][zaqar-ui] Nomating Shu Muto for Zaqar-UI core

2016-06-07 Thread Thai Q Tran


Hello all,

I am pleased to nominate Shu Muto to the Zaqar-UI core team. Shu's reviews
are extremely thorough and his work exemplary. His expertise in angularJS,
translation, and project infrastructure proved to be invaluable. His
support and reviews have helped the project progress. Combined with his
strong understanding of the project, I believe he will help guide us in
the right direction and allow us to keep our current pace.

Please vote +1 or -1 to the nomination.

Thanks,
Thai (tqtran)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Request to create puppet-congress

2016-06-07 Thread Emilien Macchi
On Tue, Jun 7, 2016 at 3:52 PM, Dan Radez  wrote:
> I'd like to get puppet-congress started.
>
> I've written some code based on the cookie cutter structure but I've not
> gone through proper channels yet to get it into openstack-puppet.
>
> I'd like to get the project established so that the code can be run
> through the review process.
>

That's a good news! Thanks for collaborating.
We need to move https://github.com/opnfv/puppet-congress to OpenStack:
https://review.openstack.org/#/c/326720/

And add it to our governance:
https://review.openstack.org/#/c/326721/

One thing about the module, we'll need to make it compliant and tested
like we do for other modules. Please make sure we can at least deploy
congress with Ubuntu or RDO packaging out of the box. I noticed some
workarounds in the code.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [Congress] Request to create puppet-congress

2016-06-07 Thread Iury Gregory
Hi Dan,

We have documentation explaining the process to create a new module [1].
You should talk to Emilien Macchi (IRC: EmilienM), who is the PTL.
If you have any questions, feel free to join us in the IRC #puppet-openstack
channel.

[1]
http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html

2016-06-07 17:01 GMT-03:00 Tim Hinrichs :

> Hi Dan,
>
> As far as the Congress team is concerned, that'd be great!  Let us know
> how we can help.
>
> Tim
>
> On Tue, Jun 7, 2016 at 12:54 PM Dan Radez  wrote:
>
>> I'd like to get puppet-congress started.
>>
>> I've written some code based on the cookie cutter structure but I've not
>> gone through proper channels yet to get it into openstack-puppet.
>>
>> I'd like to get the project established so that the code can be run
>> through the review process.
>>
>> Dan Radez
>> freenode: radez
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

~
Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
E-mail: iurygreg...@gmail.com
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [Congress] Request to create puppet-congress

2016-06-07 Thread Tim Hinrichs
Hi Dan,

As far as the Congress team is concerned, that'd be great!  Let us know how
we can help.

Tim

On Tue, Jun 7, 2016 at 12:54 PM Dan Radez  wrote:

> I'd like to get puppet-congress started.
>
> I've written some code based on the cookie cutter structure but I've not
> gone through proper channels yet to get it into openstack-puppet.
>
> I'd like to get the project established so that the code can be run
> through the review process.
>
> Dan Radez
> freenode: radez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-07 Thread Jay Dobies

All,

We've got some requirements around adding some interfaces to the heat
environment file format, for example:

1. Now that we support passing un-merged environment files to heat, it'd be
good to support an optional description key for environments,


I've never understood why the environment file doesn't have a 
description field itself. Templates have descriptions, and IMO it makes 
sense for an environment to describe what its particular additions to 
the parameters/registry do.


I'd be happy to write that patch, but I wanted to first double check 
that there wasn't a big philosophical reason why it shouldn't have a 
description.



such that we
could add an API (in addition to the one added by jdob to retrieve the
merged environment for a running stack) that can retrieve
all-the-environments and we can easily tell which one does what (e.g to
display in a UI perhaps)


I'm not sure I follow. Are you saying the API would return the list of 
descriptions, or the actual contents of each environment file that was 
passed in?


Currently, the environment is merged before we do anything with it. We'd 
have to change that to store... I'm not entirely sure. Multiple 
environments in the DB per stack? Is there a raw_environment in the DB 
that we would leverage?




2. We've got requirements around merge strategies for multiple environments
with potentially colliding keys.  Similar to the cloud-init merge
strategy[1] works.  Basically it should be possible to include multiple
environments then have heat e.g append to a list parameter_default instead
of just last-one-wins.

Both of these will likely require some optional additions to the
environment file format - can we handle them just like e.g event_sinks and
just add them?

Clearly since the environment format isn't versioned this poses a
compatibility problem if "new" environments are used on an old heat, but to
be fair we have done this before (with both parameter_defaults and
event_sinks)

What do folks think, can we add at least the description, and what
interface makes sense for the merge strategy (annotation in the environment
vs data passed to the API along with the environment files list?)

Any thoughts on the above would be great :)

Thanks,

Steve

[1] http://cloudinit.readthedocs.io/en/latest/topics/merging.html
[2] 
https://github.com/openstack/python-heatclient/blob/master/heatclient/common/environment_format.py#L22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Request to create puppet-congress

2016-06-07 Thread Dan Radez
I'd like to get puppet-congress started.

I've written some code based on the cookie cutter structure but I've not
gone through proper channels yet to get it into openstack-puppet.

I'd like to get the project established so that the code can be run
through the review process.

Dan Radez
freenode: radez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Jay Pipes

Well said and reasoned, Monty.

On 06/07/2016 03:00 PM, Monty Taylor wrote:

This text is in my vote, but as I'm sure there are people who do not
read all of the gerrit comments for governance changes, I'm posting it
here so that my thoughts are clear.

Please know that this has actually kept me up at night. I cast my vote
on this neither glibly nor superficially. I have talked to everyone I can
possibly think of on the topic, and at the end, the only thing I can do
is use my judgment and vote to the best of my ability. I apologize from
the bottom of my heart to the people I find myself in disagreement with.
I have nothing but the utmost respect for you all.

I vote against allowing Go as an official language of OpenStack.

"The needs of the many outweigh the needs of the few, or the one"

I'm super unhappy about both possible votes here.

I think go is a wonderful language. I think hummingbird is a well
considered solution to a particular problem. I think that lack of
flexibility is broadly speaking not a problem we have in OpenStack
currently. I'm more worried about community cohesion in a post-Big Tent
world than I am about specific optimization.

I do not think that adding Go as a language to OpenStack today is enough
of a win to justify the cost, so I don't like accepting it.

I do not think that this should preclude serious thought about
OpenStack's technology underpinnings, so I don't like rejecting it.

"only a great fool would reach for what he was given..."

I think that one of OpenStack's biggest and most loudly spoken about
problems is too many per-project solutions and not enough holistic
solutions. Because of that, and the six years of experience we have
seeing where that gets us, I do not think that adding Go into the mix
and "seeing what happens" is going to cause anything other than chaos.

If we want to add Go, or any other language, into the mix for server
projects, I think it should be done with the intent that we are going to
do it because it's a markedly better choice across the board, that we
are going to rewrite literally everything, and I believe that we should
consider the cost associated with retraining 2000 developers as part of
considering that. Before you think that that's me throwing the baby out
with the bathwater...

In a previous comment, Deklan says:

"If Go was accepted as an officially supported language in the OpenStack
community, I'd be the first to start to rewrite as much code as possible
in Go."

That is, in fact, EXACTLY the concern. That rather than making progress
on OpenStack, we'll spend the next 4 years bikeshedding broadly about
which bits, if any, should be rewritten in Go. It took Juju YEARS to
rewrite from Python to Go and to hit feature parity. The size of that
codebase was much smaller and they even had a BDFL (which people keep
telling us makes things go quicker)

It could be argued that we could exercise consideration about which
things get rewritten in Go so as to avoid that, but I'm pretty sure that
would just mean that the only conversation the TC would have for the
next two years would be "should X be in Go or Python" - and we'd have
strong proponents from each project on each side of the argument.

David Goetz says "you aren’t doing the community any favors by deciding
for them how they do their jobs". I get that, and can respect that point
of view. However, for the most part, the negative feedback we get as
members of the TC is actually that we're too lax, not that we're too strict.

I know that it's a popular sentiment with some folks to say "let devs
use whatever tool they want to." However, that has never been our
approach with OpenStack. It has been suggested multiple times and
aligning on limited chosen tech has always been the thing we've chosen.
I tend to align in my personal thinking more with Dan McKinley in:

http://mcfunley.com/choose-boring-technology

I have effectively been arguing his point for as long as I've been
involved in OpenStack governance - although probably not as well as he
does. I don't see any reason to reverse myself now.

I'd rather see us focus energy on Python3, asyncio and its pluggable
event loops. The work in:

http://magic.io/blog/uvloop-blazing-fast-python-networking/

is a great indication in an actual apples-to-apples comparison of what
can be accomplished in python doing IO-bound activities by using modern
Python techniques. I think that comparing python2+eventlet to a fresh
rewrite in Go isn't 100% of the story. A TON of work has gone in to
Python that we're not taking advantage of because we're still supporting
Python2. So what I'd love to see in the realm of comparative
experimentation is to see if the existing Python we already have can be
leveraged as we adopt newer and more modern things.

In summary, while I think that Go is a lovely language and the people
who work on it are lovely people, while I'm sure that hummingbird is
beneficial to the Cloud Files team in real ways and while I'm sure that

Re: [openstack-dev] [Neutron] Enabling/Disabling specific API extensions

2016-06-07 Thread Brandon Logan
On Tue, 2016-06-07 at 19:17 +, Sean M. Collins wrote:
> The patch that switches DevStack over to using the Neutron API to
> discover what features are available has landed.
> 
> https://review.openstack.org/#/c/318145/7
> 
> The quick summary is that things like Q_L3_ENABLED[1] and if certain
> services are running/enabled has been replaced with checks for if an API
> extension is available. The point being, the Networking API should be
> discoverable and features should be determined based on what extensions
> are available, instead of some DevStack-y bits.
> 
> Neutron controls what API extensions are loaded via the
> `api_extensions_path`[2]
> 
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L46
> 
> So by default Neutron loads up every extension that is included in tree.
> 
> But what if a deployment doesn't want to support an API extension?
> 
> With third party CI, prior to https://review.openstack.org/#/c/318145 -
> systems could get away with it by not enabling services - like q-l3 - and
> that would stop subnets and routers being created. After that patch,
> well that's not the case.
> 
> So is there a way to configure what API extensions are available, so that
> if a CI system doesn't want to provide the ability to create Neutron
> routers, they can disable the router API extension in some manner more
> graceful than rm'ing the extension file?
> 
> I know at least in one deployment I was involved with, we didn't deploy
> the L3 agent, but I don't believe we disabled or deleted the router API
> extension, so users would try and create routers and other resources
> then wonder why nothing would ever work.
> 
> From a discoverability standpoint - do we provide a fine-grained way for 
> deployers to enable/disable specific API extensions?

As far as I know, you can disable an extension by moving it out of that
api_extensions_path, renaming it with an _ in front of it, or by making sure
that neither the core plugin nor any loaded service plugin lists it in its
supported_extension_aliases variable.  I don't know of any easier way to
do that.  Hopefully I'm just not aware of one that exists.
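To illustrate the mechanism with a stand-in (a sketch only, not real Neutron
code; it just mimics the extension manager skipping any alias that no loaded
plugin claims):

# Stand-in sketch: plugins advertise extensions via supported_extension_aliases,
# and the (mimicked) extension manager drops anything no plugin claims, even if
# the extension file is still sitting on api_extensions_path.

class FakeCorePlugin(object):
    # pretend this is what the deployed core plugin advertises today
    supported_extension_aliases = ['router', 'security-group', 'quotas']

class TrimmedCorePlugin(FakeCorePlugin):
    # same plugin, but deliberately not claiming the router extension
    supported_extension_aliases = [
        alias for alias in FakeCorePlugin.supported_extension_aliases
        if alias != 'router'
    ]

def loadable_extensions(requested_aliases, plugins):
    """Mimic the plugin-aware extension manager: keep only claimed aliases."""
    claimed = set()
    for plugin in plugins:
        claimed.update(plugin.supported_extension_aliases)
    return [alias for alias in requested_aliases if alias in claimed]

print(loadable_extensions(['router', 'quotas'], [TrimmedCorePlugin()]))
# -> ['quotas']; 'router' is gone because nothing claims it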

> 
> 
> Further reading:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095323.html
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095349.html
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095361.html
> 
> 
> [1]: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095095.html
> [2]: 
> https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L46
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Andrew Laski


On Tue, Jun 7, 2016, at 02:34 PM, Joshua Harlow wrote:
> Devananda van der Veen wrote:
> > On 06/07/2016 09:55 AM, Joshua Harlow wrote:
> >> Joshua Harlow wrote:
> >>> Clint Byrum wrote:
>  Excerpts from Joshua Harlow's message of 2016-06-07 08:46:28 -0700:
> > Clint Byrum wrote:
> >> Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:
> >>> Hi ironic folks,
> >>> As I'm trying to explore how GoDaddy can use ironic I've created
> >>> the following in an attempt to document some of my concerns, and
> >>> I'm wondering if you folks could help myself identity ongoing work
> >>> to solve these (or alternatives?)
> >> Hi Kris. I've been using Ironic in various forms for a while, and I can
> >> answer a few of these things.
> >>
> >>> List of concerns with ironic:
> >>>
> >>> 1.) Nova <-> ironic interactions generally seem terrible?
> >> I don't know if I'd call it terrible, but there's friction. Things that
> >> are unchangable on hardware are just software configs in vms (like mac
> >> addresses, overlays, etc), and things that make no sense in VMs are
> >> pretty standard on servers (trunked vlans, bonding, etc).
> >>
> >> One way we've gotten around it is by using Ironic standalone via
> >> Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
> >> and includes playbooks to build config drives and deploy images in a
> >> fairly rudimentary way without Nova.
> >>
> >> I call this the "better than Cobbler" way of getting a toe into the
> >> Ironic waters.
> >>
> >> [1] https://github.com/openstack/bifrost
> > Out of curiosity, why ansible vs turning
> > https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
> > (or something like it) into a tiny-wsgi-app (pick useful name here) that
> > has its own REST api (that looks pretty similar to the public functions
> > in that driver file)?
>  That's an interesting idea. I think a reason Bifrost doesn't just import
>  nova virt drivers is that they're likely _not_ a supported public API
>  (despite not having _'s at the front). Also, a lot of the reason Bifrost
>  exists is to enable users to get the benefits of all the baremetal
>  abstraction work done in Ironic without having to fully embrace all of
>  OpenStack's core. So while you could get a little bit of the stuff from
>  nova (like config drive building), you'd still need to handle network
>  address assignment, image management, etc. etc., and pretty soon you
>  start having to run a tiny glance and a tiny neutron. The Bifrost way
>  is the opposite: I just want a tiny Ironic, and _nothing_ else.
> 
> >>> Ya, I'm just thinking that at a certain point
> >
> > You've got two statements in here, which I'm going to reply to separately:
> >
> >> Oops forgot to fill this out, was just thinking that at a certain point it 
> >> might
> >> be easier to figure out how to extract that API (meh, if its public or 
> >> private)
> >
> > The nova-core team has repeatedly stated that they do not have plans to 
> > support
> > the nova virt driver API as a stable or externally-consumable python API.
> > Changing that would affect a lot more than Ironic (eg. defcore). A change 
> > like
> > that is not just about what is easier for developers, but also what is 
> > better
> > for the community.
> >
> 
> Right, I'm starting to come to the belief that what is better for the 
> community is to change this; because from what I can tell (from my view 
> of the world) that tying all the things to nova has really been 
> detrimental (to a degree) to the whole progression of the cloud as a
> whole.

When I have heard the statement made about the virt driver API it has
simply been that it is not stable and there are no plans at this point
to make it stable. My own opinion is that it is sometimes painful to use
from within Nova and I do not wish to expose that pain to others. There
are changes I would like to see made before it could be considered
externally consumable.


> 
> It's an opinionated thing to say, yes I understand that, but there 
> becomes a point where I feel we need to re-evaluate what people really 
> care about from openstack (because I start to believe that treating the 
> whole thing as a single product, well that went out the window a long 
> time ago, with the creation of the big-tent by the TC, with the creation 
> of mesos, k8s and others by other companies not in openstack...); and 
> really what's left after that is a bunch of services that to survive (as 
> a useful set of services) must accept that there is more than just 
> openstack in the wider world (ie, kubernetes, mesos, 
> the-next-best-thing...) and if we don't start embracing those other 
> communities (and no that doesn't mean be an `integration engine` on-top 
> or around them) then we are pretty much obsoleting 

[openstack-dev] [Neutron] Enabling/Disabling specific API extensions

2016-06-07 Thread Sean M. Collins
The patch that switches DevStack over to using the Neutron API to
discover what features are available has landed.

https://review.openstack.org/#/c/318145/7

The quick summary is that things like Q_L3_ENABLED[1] and if certain
services are running/enabled has been replaced with checks for if an API
extension is available. The point being, the Networking API should be
discoverable and features should be determined based on what extensions
are available, instead of some DevStack-y bits.

Neutron controls what API extensions are loaded via the
`api_extensions_path`[2]

https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L46

So by default Neutron loads up every extension that is included in tree.

But what if a deployment doesn't want to support an API extension?

With third party CI, prior to https://review.openstack.org/#/c/318145 -
systems could get away with it by not enabling services - like q-l3 - and
that would stop subnets and routers being created. After that patch,
well that's not the case.

So is there a way to configure what API extensions are available, so that
if a CI system doesn't want to provide the ability to create Neutron
routers, they can disable the router API extension in some manner more
graceful than rm'ing the extension file?

I know at least in one deployment I was involved with, we didn't deploy
the L3 agent, but I don't believe we disabled or deleted the router API
extension, so users would try and create routers and other resources
then wonder why nothing would ever work.

From a discoverability standpoint - do we provide a fine-grained way for 
deployers to enable/disable specific API extensions? 


Further reading:

http://lists.openstack.org/pipermail/openstack-dev/2016-May/095323.html
http://lists.openstack.org/pipermail/openstack-dev/2016-May/095349.html
http://lists.openstack.org/pipermail/openstack-dev/2016-May/095361.html


[1]: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095095.html
[2]: 
https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L46


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift][keystone] Using JSON as future ACL format

2016-06-07 Thread Pete Zaitcev
On Mon, 6 Jun 2016 13:05:46 -0700
"Thai Q Tran"  wrote:

> My intention is to spark discussion around this topic with the goal of
> moving the Swift community toward accepting the JSON format.

It would be productive if you came up with a specific proposal for how to
retrofit JSON for container ACLs. Note that JSON is already used natively
for account ACLs in Swift.
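(For reference, an account-level ACL is already just a JSON blob set on the
account, roughly as in the sketch below. I'm writing the header name and keys
from memory, so treat them as illustrative rather than authoritative.)

import json

# Sketch of setting a JSON account ACL; header name and keys quoted from memory.
acl = {
    "read-only": ["project1:alice"],
    "read-write": ["project1:bob"],
    "admin": ["project1:admin"],
}
headers = {"X-Account-Access-Control": json.dumps(acl)}
# a client would then POST these headers to the account URL
print(headers)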

Personally I don't see an actual need of usernames with colons expressed
by operators. The issue that you have identified was known for a while
and apparently did not cause any difficulties in practice. Just don't
put colons into usernames. And if you switch to IDs, those are just UUIDs.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-07 Thread Ricardo Rocha
+1 on this. Another use case would be 'fast storage' for dbs, 'any
storage' for memcache and web servers. Relying on labels for this
makes it really simple.

The alternative of doing it with multiple clusters adds complexity to
the cluster(s) description by users.

On Fri, Jun 3, 2016 at 1:54 AM, Fox, Kevin M  wrote:
> As an operator that has clouds that are partitioned into different host 
> aggregates with different flavors targeting them, I totally believe we will 
> have users that want to have a single k8s cluster span multiple different 
> flavor types. I'm sure once I deploy magnum, I will want it too. You could 
> have some special hardware on some nodes, not on others. but you can still 
> have cattle, if you have enough of them and the labels are set appropriately. 
> Labels allow you to continue to partition things when you need to, and ignore 
> it when you dont, making administration significantly easier.
>
> Say I have a tenant with 5 gpu nodes, and 10 regular nodes allocated into a 
> k8s cluster. I may want 30 instances of container x that doesn't care where 
> they land, and prefer 5 instances that need cuda. The former can be deployed 
> with a k8s deployment. The latter can be deployed with a daemonset. All 
> should work well and very non pet'ish. The whole tenant could be viewed with 
> a single pane of glass, making it easy to manage.
>
> Thanks,
> Kevin
> 
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Thursday, June 02, 2016 4:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing 
> the bay nodes
>
> I am really struggling to accept the idea of heterogeneous clusters. My 
> experience causes me to question whether a heterogeneous cluster makes sense 
> for Magnum. I will try to explain why I have this hesitation:
>
> 1) If you have a heterogeneous cluster, it suggests that you are using 
> external intelligence to manage the cluster, rather than relying on it to be 
> self-managing. This is an anti-pattern that I refer to as “pets" rather than 
> “cattle”. The anti-pattern results in brittle deployments that rely on 
> external intelligence to manage (upgrade, diagnose, and repair) the cluster. 
> The automation of the management is much harder when a cluster is 
> heterogeneous.
>
> 2) If you have a heterogeneous cluster, it can fall out of balance. This 
> means that if one of your “important” or “large” members fail, there may not 
> be adequate remaining members in the cluster to continue operating properly 
> in the degraded state. The logic of how to track and deal with this needs to 
> be handled. It’s much simpler in the heterogeneous case.
>
> 3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
> are harder to work with, and that usually means that unplanned outages are 
> more frequent, and last longer than they with a homogeneous cluster.
>
> Summary:
>
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
>
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the 
> cluster controller
>   - More reliable
>
> My bias is to assert that applications that want a heterogeneous mix of 
> system capacities at a node level should be deployed on multiple homogeneous 
> bays, not a single heterogeneous one. That way you end up with a composition 
> of simple systems rather than a larger complex one.
>
> Adrian
>
>
>> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
>>
>> Personally, I think this is a good idea, since it can address a set of 
>> similar use cases like below:
>> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
>> regions/clouds).
>> * I want to spin up N nodes in AZ1, M nodes in AZ2.
>> * I want to scale the number of nodes in specific AZ/region/cloud. For 
>> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>>
>> The use case above should be very common and universal everywhere. To 
>> address the use case, Magnum needs to support provisioning heterogeneous set 
>> of nodes at deploy time and managing them at runtime. It looks the proposed 
>> idea (manually managing individual nodes or individual group of nodes) can 
>> address this requirement very well. Besides the proposed idea, I cannot 
>> think of an alternative solution.
>>
>> Therefore, I vote to support the proposed idea.
>>
>> Best regards,
>> Hongbin
>>
>>> -Original Message-
>>> From: Hongbin Lu
>>> Sent: June-01-16 11:44 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>>> managing the bay nodes
>>>
>>> Hi team,
>>>
>>> A blueprint was created for tracking this idea:
>>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>>> nodes . I won't approve the BP until 

[openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread Monty Taylor
This text is in my vote, but as I'm sure there are people who do not
read all of the gerrit comments for governance changes, I'm posting it
here so that my thoughts are clear.

Please know that this has actually kept me up at night. I cast my vote
on this neither glibly or superficially. I have talked to everyone I can
possibly think of on the topic, and at the end, the only thing I can do
is use my judgment and vote to the best of my ability. I apologize from
the bottom of my heart to the people I find myself in disagreement with.
I have nothing but the utmost respect for you all.

I vote against allowing Go as an official language of OpenStack.

"The needs of the many outweigh the needs of the few, or the one"

I'm super unhappy about both possible votes here.

I think go is a wonderful language. I think hummingbird is a well
considered solution to a particular problem. I think that lack of
flexibility is broadly speaking not a problem we have in OpenStack
currently. I'm more worried about community cohesion in a post-Big Tent
world than I am about specific optimization.

I do not think that adding Go as a language to OpenStack today is enough
of a win to justify the cost, so I don't like accepting it.

I do not think that this should preclude serious thought about
OpenStack's technology underpinnings, so I don't like rejecting it.

"only a great fool would reach for what he was given..."

I think that one of OpenStack's biggest and most loudly spoken about
problems is too many per-project solutions and not enough holistic
solutions. Because of that, and the six years of experience we have
seeing where that gets us, I do not think that adding Go into the mix
and "seeing what happens" is going to cause anything other than chaos.

If we want to add Go, or any other language, into the mix for server
projects, I think it should be done with the intent that we are going to
do it because it's a markedly better choice across the board, that we
are going to rewrite literally everything, and I believe that we should
consider the cost associated with retraining 2000 developers as part of
considering that. Before you think that that's me throwing the baby out
with the bathwater...

In a previous comment, Deklan says:

"If Go was accepted as an officially supported language in the OpenStack
community, I'd be the first to start to rewrite as much code as possible
in Go."

That is, in fact, EXACTLY the concern. That rather than making progress
on OpenStack, we'll spend the next 4 years bikeshedding broadly about
which bits, if any, should be rewritten in Go. It took Juju YEARS to
rewrite from Python to Go and to hit feature parity. The size of that
codebase was much smaller and they even had a BDFL (which people keep
telling us makes things go quicker)

It could be argued that we could exercise consideration about which
things get rewritten in Go so as to avoid that, but I'm pretty sure that
would just mean that the only conversation the TC would have for the
next two years would be "should X be in Go or Python" - and we'd have
strong proponents from each project on each side of the argument.

David Goetz says "you aren’t doing the community any favors by deciding
for them how they do their jobs". I get that, and can respect that point
of view. However, for the most part, the negative feedback we get as
members of the TC is actually that we're too lax, not that we're too strict.

I know that it's a popular sentiment with some folks to say "let devs
use whatever tool they want to." However, that has never been our
approach with OpenStack. It has been suggested multiple times and
aligning on limited chosen tech has always been the thing we've chosen.
I tend to align in my personal thinking more with Dan McKinley in:

http://mcfunley.com/choose-boring-technology

I have effectively been arguing his point for as long as I've been
involved in OpenStack governance - although probably not as well as he
does. I don't see any reason to reverse myself now.

I'd rather see us focus energy on Python3, asyncio and its pluggable
event loops. The work in:

http://magic.io/blog/uvloop-blazing-fast-python-networking/

is a great indication in an actual apples-to-apples comparison of what
can be accomplished in python doing IO-bound activities by using modern
Python techniques. I think that comparing python2+eventlet to a fresh
rewrite in Go isn't 100% of the story. A TON of work has gone into
Python that we're not taking advantage of because we're still supporting
Python2. So what I'd love to see in the realm of comparative
experimentation is to see if the existing Python we already have can be
leveraged as we adopt newer and more modern things.
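To make "pluggable event loops" concrete: uvloop slots in underneath otherwise
unmodified asyncio code via an event loop policy. A minimal echo-server sketch
(Python 3.5+, assuming uvloop is installed):

import asyncio
import uvloop

# swap the default event loop implementation for uvloop's; everything below
# is plain asyncio and doesn't know or care which loop is underneath
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

async def handle(reader, writer):
    data = await reader.read(1024)   # trivial echo, just to exercise the IO path
    writer.write(data)
    await writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
server = loop.run_until_complete(asyncio.start_server(handle, '127.0.0.1', 8888))
try:
    loop.run_forever()
finally:
    server.close()
    loop.close()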

In summary, while I think that Go is a lovely language and the people
who work on it are lovely people, while I'm sure that hummingbird is
beneficial to the Cloud Files team in real ways and while I'm sure that
if we were starting OpenStack from scratch today the conversations about
how to 

[openstack-dev] [TripleO] OpenStack Virtual Baremetal on an Unmodified Cloud

2016-06-07 Thread Ben Nemec
Up until recently, one of the issues with OVB was that it required
changes to the host cloud that were not production-ready.  With much
help from Steve Baker, I've recently been able to do OVB deployments in
a completely unmodified cloud.  I had talked to a few people at last
summit who were interested in being able to do this, so here's the demo
that I promised all of you:

https://youtu.be/30tEfP1-aTg

Blog post with a little more information:
http://blog.nemebean.com/content/video-ovb-running-against-stock-openstack-cloud

Next up is finding a public cloud that exposes all the necessary
OpenStack features so that this can be done without any special
infrastructure at all.  Ideally I would love to get support for this
into our infra providers because I believe there are multiple projects
that would benefit from it.  I'm guessing that's still a ways off though.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Joshua Harlow

Devananda van der Veen wrote:

On 06/07/2016 09:55 AM, Joshua Harlow wrote:

Joshua Harlow wrote:

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2016-06-07 08:46:28 -0700:

Clint Byrum wrote:

Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:

Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created
the following in an attempt to document some of my concerns, and
I'm wondering if you folks could help myself identity ongoing work
to solve these (or alternatives?)

Hi Kris. I've been using Ironic in various forms for a while, and I can
answer a few of these things.


List of concerns with ironic:

1.) Nova <-> ironic interactions generally seem terrible?

I don't know if I'd call it terrible, but there's friction. Things that
are unchangable on hardware are just software configs in vms (like mac
addresses, overlays, etc), and things that make no sense in VMs are
pretty standard on servers (trunked vlans, bonding, etc).

One way we've gotten around it is by using Ironic standalone via
Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
and includes playbooks to build config drives and deploy images in a
fairly rudimentary way without Nova.

I call this the "better than Cobbler" way of getting a toe into the
Ironic waters.

[1] https://github.com/openstack/bifrost

Out of curiosity, why ansible vs turning
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
(or something like it) into a tiny-wsgi-app (pick useful name here) that
has its own REST api (that looks pretty similar to the public functions
in that driver file)?

That's an interesting idea. I think a reason Bifrost doesn't just import
nova virt drivers is that they're likely _not_ a supported public API
(despite not having _'s at the front). Also, a lot of the reason Bifrost
exists is to enable users to get the benefits of all the baremetal
abstraction work done in Ironic without having to fully embrace all of
OpenStack's core. So while you could get a little bit of the stuff from
nova (like config drive building), you'd still need to handle network
address assignment, image management, etc. etc., and pretty soon you
start having to run a tiny glance and a tiny neutron. The Bifrost way
is the opposite: I just want a tiny Ironic, and _nothing_ else.


Ya, I'm just thinking that at a certain point


You've got two statements in here, which I'm going to reply to separately:


Oops forgot to fill this out, was just thinking that at a certain point it might
be easier to figure out how to extract that API (meh, if its public or private)


The nova-core team has repeatedly stated that they do not have plans to support
the nova virt driver API as a stable or externally-consumable python API.
Changing that would affect a lot more than Ironic (eg. defcore). A change like
that is not just about what is easier for developers, but also what is better
for the community.



Right, I'm starting to come to the belief that what is better for the 
community is to change this; because from what I can tell (from my view 
of the world) that tying all the things to nova has really been 
detrimental (to a degree) to the whole progression of the cloud as a whole.


It's an opinionated thing to say, yes I understand that, but there 
becomes a point where I feel we need to re-evaluate what people really 
care about from openstack (because I start to believe that treating the 
whole thing as a single product, well that went out the window a long 
time ago, with the creation of the big-tent by the TC, with the creation 
of mesos, k8s and others by other companies not in openstack...); and 
really what's left after that is a bunch of services that to survive (as 
a useful set of services) must accept that there is more than just 
openstack in the wider world (ie, kubernetes, mesos, 
the-next-best-thing...) and if we don't start embracing those other 
communities (and no that doesn't mean be an `integration engine` on-top 
or around them) then we are pretty much obsoleting ourselves.


Ya, I know this is a hot (and touchy) topic, and probably other people 
don't agree, that's ok... I don't mind the flames, ha (spicy).



and just have someone make an executive decision around ironic being a
stand-alone thing or not (and a capable stand-alone thing, not a
sorta-standalone-thing).


We already decided to support Ironic as a stand-alone service. So, could you
clarify what you mean when you call it a "sorta-standalone-thing"? In what ways
do you think it's *not* functional? Do you have specific recommendations on what
we can improve, based on experience using either Ironic or Bifrost?


I'll work on this list, as some folks are starting to try to 
connect ironic (IMHO without nova, because well kubernetes is enough 
like nova that there isn't a need for 2-layers of nova-like-systems at 
that point) into kubernetes as a 'resource provider'. I'm sure 
attempting to do that 

[openstack-dev] [Trove] Stepping down from Trove Core

2016-06-07 Thread Victoria Martínez de la Cruz
After one year and a half contributing to the Trove project,
I have decided to change my focus and start gaining more experience
on other storage and data-management related projects.

Because of this decision, I'd like to ask to be removed from the Trove
core team.

I want to thank Trove community for all the good work and shared experiences.
Working with you all has been a very fulfilling experience.

All the best,

Victoria
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Devananda van der Veen
On 06/07/2016 09:55 AM, Joshua Harlow wrote:
> Joshua Harlow wrote:
>> Clint Byrum wrote:
>>> Excerpts from Joshua Harlow's message of 2016-06-07 08:46:28 -0700:
 Clint Byrum wrote:
> Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:
>> Hi ironic folks,
>> As I'm trying to explore how GoDaddy can use ironic I've created
>> the following in an attempt to document some of my concerns, and
>> I'm wondering if you folks could help myself identity ongoing work
>> to solve these (or alternatives?)
> Hi Kris. I've been using Ironic in various forms for a while, and I can
> answer a few of these things.
>
>> List of concerns with ironic:
>>
>> 1.) Nova <-> ironic interactions generally seem terrible?
> I don't know if I'd call it terrible, but there's friction. Things that
> are unchangable on hardware are just software configs in vms (like mac
> addresses, overlays, etc), and things that make no sense in VMs are
> pretty standard on servers (trunked vlans, bonding, etc).
>
> One way we've gotten around it is by using Ironic standalone via
> Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
> and includes playbooks to build config drives and deploy images in a
> fairly rudimentary way without Nova.
>
> I call this the "better than Cobbler" way of getting a toe into the
> Ironic waters.
>
> [1] https://github.com/openstack/bifrost
 Out of curiosity, why ansible vs turning
 https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
 (or something like it) into a tiny-wsgi-app (pick useful name here) that
 has its own REST api (that looks pretty similar to the public functions
 in that driver file)?
>>>
>>> That's an interesting idea. I think a reason Bifrost doesn't just import
>>> nova virt drivers is that they're likely _not_ a supported public API
>>> (despite not having _'s at the front). Also, a lot of the reason Bifrost
>>> exists is to enable users to get the benefits of all the baremetal
>>> abstraction work done in Ironic without having to fully embrace all of
>>> OpenStack's core. So while you could get a little bit of the stuff from
>>> nova (like config drive building), you'd still need to handle network
>>> address assignment, image management, etc. etc., and pretty soon you
>>> start having to run a tiny glance and a tiny neutron. The Bifrost way
>>> is the opposite: I just want a tiny Ironic, and _nothing_ else.
>>>
>>
>> Ya, I'm just thinking that at a certain point
> 

You've got two statements in here, which I'm going to reply to separately:

> Oops forgot to fill this out, was just thinking that at a certain point it 
> might
> be easier to figure out how to extract that API (meh, if its public or 
> private)

The nova-core team has repeatedly stated that they do not have plans to support
the nova virt driver API as a stable or externally-consumable python API.
Changing that would affect a lot more than Ironic (eg. defcore). A change like
that is not just about what is easier for developers, but also what is better
for the community.

> and just have someone make an executive decision around ironic being a
> stand-alone thing or not (and a capable stand-alone thing, not a
> sorta-standalone-thing).

We already decided to support Ironic as a stand-alone service. So, could you
clarify what you mean when you call it a "sorta-standalone-thing"? In what ways
do you think it's *not* functional? Do you have specific recommendations on what
we can improve, based on experience using either Ironic or Bifrost?


-Devananda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New core reviewers nomination for TOSCA-Parser and or Heat-Translator project [tosca-parser][heat-translator][heat]

2016-06-07 Thread Brad Topol

Bob, Miguel, Bharath and Mathieu, CONGRATULATIONS!!! Very well deserved!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Sahdev P Zala/Durham/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date:   06/06/2016 09:32 PM
Subject:Re: [openstack-dev] New core reviewers nomination for
TOSCA-Parser and or Heat-Translator project
[tosca-parser][heat-translator][heat]



Thanks, core team, for your +1 votes.

Welcome, new cores - Bob, Miguel, Bharath and Mathieu. Thanks again for your
great contributions!!

Regards,
Sahdev Zala




From:Sahdev P Zala/Durham/IBM@IBMUS
To:"OpenStack Development Mailing List \(not for usage questions\)"

Date:05/31/2016 09:30 AM
Subject:[openstack-dev] New core reviewers nomination for
TOSCA-Parser and or Heat-Translator project
[tosca-parser][heat-translator][heat]



Hello TOSCA-Parser and Heat-Translator core team,

I would like to nominate the following current active contributors to the
tosca-parser and/or heat-translator projects as core reviewers to speed up
development. They have been contributing for more than six months and have
remained among the top five contributors for the mentioned project(s).

Please reply to this thread or email me with your vote (+1 or -1) by EOD
June 4th.

[1] Bob Haddleton: Bob is a lead developer for the TOSCA NFV specific
parsing and translation in the tosca-parser and heat-translator projects
respectively. Bob actively participates in IRC meetings and other
discussions via email or IRC. He is also a core reviewer in the OpenStack
Tacker project. I would like to nominate him for a core reviewer position for
both tosca-parser and heat-translator.

[2] Miguel Caballar: Miguel has been familiar with TOSCA for a long time. He is
an asset for the tosca-parser project and has been bringing a lot of new use
cases to the project. He is the second lead developer overall for the project
at present. I would like to nominate him for a core reviewer position in
tosca-parser.

[3] Bharath Thiruveedula: Bharath is actively contributing to the
heat-translator project. He knows the project well and has implemented
important blueprints during the Mitaka cycle, including enhancements to the
OSC plugin, automatic deployment of translated templates, and dynamic
querying of flavors and images. Bharath actively participates in IRC
meetings and other discussions via email or IRC. I would like to nominate
him for a core reviewer position in heat-translator.

[4] Mathieu Velten: Mathieu has been familiar with TOSCA for a long time as
well. He brings new use cases regularly and is actively working on enhancing
the heat-translator project with the needed implementations. He also uses the
translated templates for real-time deployment with Heat in his work on the
Indigo DataCloud project [5]. He knows the project well and was the second
lead developer for the project during the Mitaka cycle. I would like to
nominate him for a core reviewer position in heat-translator.

[1]
http://stackalytics.com/?release=all=tosca-parser=commits_id=bob-haddleton
and
http://stackalytics.com/?release=all=heat-translator=commits_id=bob-haddleton

[2]
http://stackalytics.com/?release=all=tosca-parser=commits_id=micafer1

[3]
http://stackalytics.com/?release=all=heat-translator=commits_id=bharath-ves

[4]
http://stackalytics.com/?release=all=commits=heat-translator_id=matmaul

[5] https://www.indigo-datacloud.eu/

Thanks!

Regards,
Sahdev Zala
RTP, NC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][trove][keystone][glance][nova] on the deprecation of isotime() in oslo.utils

2016-06-07 Thread Amrith Kumar
TL;DR

Oslo deprecated isotime() and some other functions in oslo_utils.timeutils a 
while back. The recommended route was to just use datetime.isoformat() instead. 
In researching this for Trove, I found that several projects instead handled 
the situation by cloning the code into their own projects. This seems like an 
[unintended/unfortunate/avoidable] consequence.

In [1] I propose a change for Trove to address this and believe that this 
solution should meet the needs not only of other projects that were using 
isotime() and is, I believe, something that should really go into 
oslo_utils.timeutils and be shared across projects.

The whole story ...

Three attempts have been made to handle the deprecation of isotime() in Trove. 
The deprecation itself happened in [2].

Some projects that didn't just use datetime.datetime.isoformat() as suggested 
can be seen in [3], [4] and [5]. Those are in keystone, glance and nova which 
is why I've tagged those projects on the subject of this email.

As best as I can tell, the issue(s) with timeutils.isotime() are that it was 
willfully naïve, in that it assumed UTC even when it could determine otherwise.

The issue with datetime.isoformat() that led people to replicate the (soon to 
be deprecated) code in timeutils.isotime() was that they didn't want the 
disruptions that would come from the change. This is because isoformat() 
produces a very specific output that includes subsecond information, and 
identifies UTC Timezone as +00:00 rather than just 'Z'.

It turns out that the output produced by timeutils.isotime() is in fact ISO 
8601 compliant and the issue really was that it was *A* valid format while 
datetime.isoformat() is *A* different but also valid ISO 8601 format.

In Trove, I too don't want to change the format of output (in API for example) 
and chose to split the difference; use the perfectly valid ISO 8601 format with 
Z and no subsecond information for UTC and the +/-NN:NN format for all other 
timezones.
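
To illustrate the intended output format (a sketch only, not the actual code in
[1]; the names and structure here are just for illustration):

import datetime

def isotime(at=None):
    """ISO 8601: 'Z' and no subseconds for UTC, '+HH:MM'/'-HH:MM' otherwise."""
    if at is None:
        at = datetime.datetime.utcnow()
    at = at.replace(microsecond=0)          # drop subsecond information
    if at.utcoffset() in (None, datetime.timedelta(0)):
        # treat naive datetimes as UTC, like the old timeutils.isotime() did
        return at.replace(tzinfo=None).isoformat() + 'Z'
    return at.isoformat()                   # keeps the +HH:MM / -HH:MM offset

print(isotime())                            # e.g. 2016-06-07T18:30:05Z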

I suspect that this should have all the benefits of isoformat() and require no 
changes in existing client code and therefore should reasonably work in 
oslo_utils.timeutils.

Thanks,

-amrith


[1] https://review.openstack.org/#/c/326655/1/trove/common/timeutils.py
[2] https://review.openstack.org/#/c/182602/
[3] https://review.openstack.org/#/c/187751/
[4] https://review.openstack.org/#/c/253517/
[5] https://review.openstack.org/#/c/241179/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-07 Thread Steven Hardy
All,

We've got some requirements around adding some interfaces to the heat
environment file format, for example:

1. Now that we support passing un-merged environment files to heat, it'd be
good to support an optional description key for environments, such that we
could add an API (in addition to the one added by jdob to retrieve the
merged environment for a running stack) that can retrieve
all-the-environments and we can easily tell which one does what (e.g to
display in a UI perhaps)

2. We've got requirements around merge strategies for multiple environments
with potentially colliding keys, similar to how the cloud-init merge
strategy[1] works.  Basically it should be possible to include multiple
environments and then have heat e.g. append to a list-valued parameter_default
instead of just last-one-wins (a rough sketch of the idea follows below).
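
To illustrate the two behaviours (a conceptual sketch only, not how heat would
implement it; the environments and key are made up):

# last-one-wins vs append for a colliding parameter_defaults key
env1 = {'parameter_defaults': {'ExtraThings': ['a']}}
env2 = {'parameter_defaults': {'ExtraThings': ['b']}}

def merge(envs, strategy='overwrite'):
    merged = {}
    for env in envs:
        for key, value in env.get('parameter_defaults', {}).items():
            if strategy == 'append' and isinstance(merged.get(key), list):
                merged[key] = merged[key] + value   # append to what's there
            else:
                merged[key] = value                 # today's last-one-wins
    return merged

print(merge([env1, env2]))                          # {'ExtraThings': ['b']}
print(merge([env1, env2], strategy='append'))       # {'ExtraThings': ['a', 'b']}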

Both of these will likely require some optional additions to the
environment file format - can we handle them just like e.g event_sinks and
just add them?

Clearly since the environment format isn't versioned this poses a
compatibility problem if "new" environments are used on an old heat, but to
be fair we have done this before (with both parameter_defaults and
event_sinks)

What do folks think, can we add at least the description, and what
interface makes sense for the merge strategy (annotation in the environment
vs data passed to the API along with the environment files list?)

Any thoughts on the above would be great :)

Thanks,

Steve

[1] http://cloudinit.readthedocs.io/en/latest/topics/merging.html
[2] 
https://github.com/openstack/python-heatclient/blob/master/heatclient/common/environment_format.py#L22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] bug in handling of ISOLATE thread policy

2016-06-07 Thread Chris Friesen

Hi,

The full details are available at https://bugs.launchpad.net/nova/+bug/1590091 
but the short version is this:


1) I'm running stable/mitaka in devstack.  I've got a small system with 2 pCPUs, 
both marked as available for pinning.  They're two cores of a single processor, 
no threads.


2) I tried to boot an instance with two dedicated CPUs and a thread policy of 
ISOLATE, but the NUMATopology filter fails my host.



In _pack_instance_onto_cores(), in _get_pinning() we have the following line:

if threads_no * len(sibling_set) < len(instance_cores):
    return

Coming into this line of code the variables look like this:

(Pdb) threads_no
1
(Pdb) sibling_set
[CoercedSet([0, 1])]
(Pdb) len(sibling_set)
1
(Pdb) instance_cores
CoercedSet([0, 1])
(Pdb) len(instance_cores)
2

So the test evaluates to True, and we bail out.
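
Plugging those values back in (a standalone sketch of just that check, not the
nova code path):

threads_no = 1
sibling_set = [{0, 1}]        # one sibling group holding both pCPUs
instance_cores = {0, 1}       # the instance wants two dedicated cores

if threads_no * len(sibling_set) < len(instance_cores):
    print("host rejected")    # 1 * 1 < 2 is True, so _get_pinning() bails out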

I don't think this is correct, we should be able to schedule on this host since 
it has two full physical cores available.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Joshua Harlow

Joshua Harlow wrote:

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2016-06-07 08:46:28 -0700:

Clint Byrum wrote:

Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:

Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created
the following in an attempt to document some of my concerns, and
I'm wondering if you folks could help myself identity ongoing work
to solve these (or alternatives?)

Hi Kris. I've been using Ironic in various forms for a while, and I can
answer a few of these things.


List of concerns with ironic:

1.) Nova <-> ironic interactions generally seem terrible?

I don't know if I'd call it terrible, but there's friction. Things that
are unchangable on hardware are just software configs in vms (like mac
addresses, overlays, etc), and things that make no sense in VMs are
pretty standard on servers (trunked vlans, bonding, etc).

One way we've gotten around it is by using Ironic standalone via
Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
and includes playbooks to build config drives and deploy images in a
fairly rudimentary way without Nova.

I call this the "better than Cobbler" way of getting a toe into the
Ironic waters.

[1] https://github.com/openstack/bifrost

Out of curiosity, why ansible vs turning
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
(or something like it) into a tiny-wsgi-app (pick useful name here) that
has its own REST api (that looks pretty similar to the public functions
in that driver file)?


That's an interesting idea. I think a reason Bifrost doesn't just import
nova virt drivers is that they're likely _not_ a supported public API
(despite not having _'s at the front). Also, a lot of the reason Bifrost
exists is to enable users to get the benefits of all the baremetal
abstraction work done in Ironic without having to fully embrace all of
OpenStack's core. So while you could get a little bit of the stuff from
nova (like config drive building), you'd still need to handle network
address assignment, image management, etc. etc., and pretty soon you
start having to run a tiny glance and a tiny neutron. The Bifrost way
is the opposite: I just want a tiny Ironic, and _nothing_ else.



Ya, I'm just thinking that at a certain point


Oops forgot to fill this out, was just thinking that at a certain point 
it might be easier to figure out how to extract that API (meh, if its 
public or private) and just have someone make an executive decision 
around ironic being a stand-alone thing or not (and a capable 
stand-alone thing, not a sorta-standalone-thing).





That seems almost easier than building a bunch of ansible scripts that
appear (at a glance) to do similar things; and you get the benefit of
using an actual programming language vs a
half-programming-ansible-yaml-language...

A realization I'm having is that I'm really not a fan of using ansible
as a half-programming-ansible-yaml-language, which it seems like people
start to try to do after a while (because at some point you need
something like if statements, then things like [1] get created), no
offense to the authors, but I guess this is my personal preference (it's
also one of the reasons taskflow directly is a lib. in python, because
, people don't need to learn a new language).



We use python in Ansible all the time:

http://git.openstack.org/cgit/openstack/bifrost/tree/playbooks/library

The reason to use Ansible is that it has already implemented all of
the idempotency and error handling and UI needs that one might need for
running workflows.

I've tried multiple times to understand taskflow, and to me, Ansible is
the anti-taskflow. It's easy to pick up, easy to read the workflows,
doesn't require deep surgery on your code to use (just execute
ansible-playbook), and is full of modules to support nearly anything
your deployment may need.


Actually they are pretty similar (to a degree), taskflow is pretty much
the same/similar thing ansible is using internally, a graph structure
(last time I checked) that gets ran in parallel or in serial using a
executor concept[1].

Said surgery is only required if you want a deep integration, nothing is
stopping folks from using taskflow in the same manner as running a bunch
of task == similar to ansible style (taskflow also doesn't need to have
its own module concepts as pypi modules primarily just work because it's
python).

But ya, anyway... can't win over everyone ;)

[1] https://github.com/ansible/ansible/tree/devel/lib/ansible/executor



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2016-06-07 08:46:28 -0700:

Clint Byrum wrote:

Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:

Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created the following 
in an attempt to document some of my concerns, and I'm wondering if you folks 
could help myself identity ongoing work to solve these (or alternatives?)

Hi Kris. I've been using Ironic in various forms for a while, and I can
answer a few of these things.


List of concerns with ironic:

1.) Nova <-> ironic interactions generally seem terrible?

I don't know if I'd call it terrible, but there's friction. Things that
are unchangable on hardware are just software configs in vms (like mac
addresses, overlays, etc), and things that make no sense in VMs are
pretty standard on servers (trunked vlans, bonding, etc).

One way we've gotten around it is by using Ironic standalone via
Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
and includes playbooks to build config drives and deploy images in a
fairly rudimentary way without Nova.

I call this the "better than Cobbler" way of getting a toe into the
Ironic waters.

[1] https://github.com/openstack/bifrost

Out of curiosity, why ansible vs turning
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py
(or something like it) into a tiny-wsgi-app (pick useful name here) that
has its own REST api (that looks pretty similar to the public functions
in that driver file)?


That's an interesting idea. I think a reason Bifrost doesn't just import
nova virt drivers is that they're likely _not_ a supported public API
(despite not having _'s at the front). Also, a lot of the reason Bifrost
exists is to enable users to get the benefits of all the baremetal
abstraction work done in Ironic without having to fully embrace all of
OpenStack's core. So while you could get a little bit of the stuff from
nova (like config drive building), you'd still need to handle network
address assignment, image management, etc. etc., and pretty soon you
start having to run a tiny glance and a tiny neutron. The Bifrost way
is the opposite: I just want a tiny Ironic, and _nothing_ else.



Ya, I'm just thinking that at a certain point


That seems almost easier than building a bunch of ansible scripts that
appear (at a glance) to do similar things; and you get the benefit of
using an actual programming language vs a
half-programming-ansible-yaml-language...

A realization I'm having is that I'm really not a fan of using ansible
as a half-programming-ansible-yaml-language, which it seems like people
start to try to do after a while (because at some point you need
something like if statements, then things like [1] get created), no
offense to the authors, but I guess this is my personal preference (it's
also one of the reasons taskflow directly is a lib. in python, because
, people don't need to learn a new language).



We use python in Ansible all the time:

http://git.openstack.org/cgit/openstack/bifrost/tree/playbooks/library

The reason to use Ansible is that it has already implemented all of
the idempotency and error handling and UI needs that one might need for
running workflows.

I've tried multiple times to understand taskflow, and to me, Ansible is
the anti-taskflow. It's easy to pick up, easy to read the workflows,
doesn't require deep surgery on your code to use (just execute
ansible-playbook), and is full of modules to support nearly anything
your deployment may need.


Actually they are pretty similar (to a degree), taskflow is pretty much 
the same/similar thing ansible is using internally, a graph structure 
(last time I checked) that gets ran in parallel or in serial using a 
executor concept[1].


Said surgery is only required if you want a deep integration, nothing is 
stopping folks from using taskflow in the same manner as running a bunch 
of task == similar to ansible style (taskflow also doesn't need to have 
its own module concepts as pypi modules primarily just work because it's 
python).


But ya, anyway... can't win over everyone ;)

[1] https://github.com/ansible/ansible/tree/devel/lib/ansible/executor



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-07 Thread Jim Rollenhagen
On Tue, Jun 07, 2016 at 03:10:24PM +0100, Daniel P. Berrange wrote:
> On Tue, Jun 07, 2016 at 09:37:25AM -0400, Jim Rollenhagen wrote:
> > On Tue, Jun 07, 2016 at 08:31:35AM +1000, Michael Still wrote:
> > > On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:
> > > 
> > > > Hello all,
> > > >
> > > > At Rackspace we're running into an interesting problem: Consider a user
> > > > who boots an instance in Nova with an image which only supports SSH
> > > > public-key authentication, but the user doesn't provide a public key in
> > > > the boot request. As far as I understand it, today Nova will happily
> > > > boot that image and it may take the user some time to realize their
> > > > mistake when they can't login to the instance.
> > > >
> > > 
> > > What about images where the authentication information is inside the 
> > > image?
> > > For example, there's just a standard account baked in that everyone knows
> > > about? In that case Nova doesn't need to inject anything into the 
> > > instance,
> > > and therefore the metadata doesn't need to supply anything.
> > 
> > Right, so that's a third case. How I'd see this working is maybe an
> > image property called "auth_requires" that could be one of ["none",
> > "ssh_key", "x509_cert", "password"]. Or maybe it could be multiple
> > values that are OR'd, so for example an image could require an ssh key
> > or an x509 cert. If the "auth_requires" property isn't found, default to
> > "none" to maintain compatibility, I guess.
> 
> NB, even if you have an image that requires an SSH key to be provided in
> order to enable login, it is sometimes valid to not provide one. Not least
> during development, I'm often testing images which would ordinarily require
> an SSH key, but I don't actually need the ability to login, so I don't bother
> to provide one.
> 
> So if we provided this ability to tag images as needing an ssh key, and then
> enforced that, we would then also need to extend the API to provide a way to
> tell nova to explicitly ignore this and not bother enforcing it, despite what
> the image metadata says.
> 
> I'm not particularly convinced the original problem is serious enough to
> warrant building such a solution. It feels like the kind of mistake that
> people would do once, and then learn their mistake thereafter. IOW the
> consequences of the mistake don't seem particularly severe really.
> 
> > The bigger question here is around hitting the images API syncronously
> > during a boot request, and where/how/if to cache the metadata that's
> > returned so we don't have to do it so often. I don't have a good answer
> > for that, though.
> 
> Nova already uses image metadata for countless things during the VM boot
> request, so there's nothing new in this respect. We only query glance
> once, thereafter the image metadata is cached by Nova in the DB on a per
> instance basis, because we need to be isolated from later changes to the
> metadata in glance after the VM boots.

This is beyond the API though, right? The purpose of the spec here is to
reject the request if there isn't enough information to boot the
machine.

// jim

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-07 Thread Oleg Bondarev
On Mon, Jun 6, 2016 at 7:18 PM, Ihar Hrachyshka  wrote:

>
> > On 06 Jun 2016, at 16:44, Sean M. Collins  wrote:
> >
> > I agree, it would be convenient to have something similar to what Nova
> > has:
> >
> >
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/versions.py#L59-L60
> >
> > We should put some resources behind implementing micro versioning and we
> > could end up with something similar.
> >
> > It would also be nice to have the agents report their version, so it
> > bubbles up into the agent-list REST API calls.
>
> Agents already report a list of object versions known to them:
>
>
> https://github.com/openstack/neutron/blob/master/neutron/db/agents_db.py#L258
>
> In theory, we can deduce the version from there. The versions are reported
> through state reports. Not sure if it’s exposed in API.
>

In my case I mostly need to know the version of the neutron server, in
particular whether it's still Mitaka or already Newton. This is what Dan's
concern is about in https://review.openstack.org/#/c/246910/: if we're
upgrading a cluster from Mitaka to Newton and at some point have nova
upgraded but neutron still at the Mitaka version, then live migration will be
broken (nova will wait for an event which neutron does not send). But if we
can know the neutron version we can solve the issue.


>
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2016-06-07 08:46:28 -0700:
> Clint Byrum wrote:
> > Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:
> >> Hi ironic folks,
> >> As I'm trying to explore how GoDaddy can use ironic I've created the 
> >> following in an attempt to document some of my concerns, and I'm wondering 
> >> if you folks could help myself identity ongoing work to solve these (or 
> >> alternatives?)
> >
> > Hi Kris. I've been using Ironic in various forms for a while, and I can
> > answer a few of these things.
> >
> >> List of concerns with ironic:
> >>
> >> 1.)Nova<->  ironic interactions are generally seem terrible?
> >
> > I don't know if I'd call it terrible, but there's friction. Things that
> > are unchangable on hardware are just software configs in vms (like mac
> > addresses, overlays, etc), and things that make no sense in VMs are
> > pretty standard on servers (trunked vlans, bonding, etc).
> >
> > One way we've gotten around it is by using Ironic standalone via
> > Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
> > and includes playbooks to build config drives and deploy images in a
> > fairly rudimentary way without Nova.
> >
> > I call this the "better than Cobbler" way of getting a toe into the
> > Ironic waters.
> >
> > [1] https://github.com/openstack/bifrost
> 
> Out of curiosity, why ansible vs turning 
> https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py 
> (or something like it) into a tiny-wsgi-app (pick useful name here) that 
> has its own REST api (that looks pretty similar to the public functions 
> in that driver file)?

That's an interesting idea. I think a reason Bifrost doesn't just import
nova virt drivers is that they're likely _not_ a supported public API
(despite not having _'s at the front). Also, a lot of the reason Bifrost
exists is to enable users to get the benefits of all the baremetal
abstraction work done in Ironic without having to fully embrace all of
OpenStack's core. So while you could get a little bit of the stuff from
nova (like config drive building), you'd still need to handle network
address assignment, image management, etc. etc., and pretty soon you
start having to run a tiny glance and a tiny neutron. The Bifrost way
is the opposite: I just want a tiny Ironic, and _nothing_ else.

> 
> That seems almost easier than building a bunch of ansible scripts that 
> appear (at a glance) to do similar things; and u get the benefit of 
> using a actual programming language vs a 
> half-programming-ansible-yaml-language...
> 
> A realization I'm having is that I'm really not a fan of using ansible 
> as a half-programming-ansible-yaml-language, which it seems like people 
> start to try to do after a while (because at some point you need 
> something like if statements, then things like [1] get created), no 
> offense to the authors, but I guess this is my personal preference (it's 
> also one of the reasons taskflow directly is a lib. in python, because 
> , people don't need to learn a new language).
> 

We use python in Ansible all the time:

http://git.openstack.org/cgit/openstack/bifrost/tree/playbooks/library

The reason to use Ansible is that it has already implemented all of
the idempotency and error handling and UI needs that one might need for
running workflows.

I've tried multiple times to understand taskflow, and to me, Ansible is
the anti-taskflow. It's easy to pick up, easy to read the workflows,
doesn't require deep surgery on your code to use (just execute
ansible-playbook), and is full of modules to support nearly anything
your deployment may need.
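
For anyone who hasn't looked at that escape hatch: a custom module is just a small Python file dropped under library/. A minimal sketch (module name and fields are made up for illustration, not an actual Bifrost module):

#!/usr/bin/python
# Minimal sketch of a custom Ansible module written in Python, the kind of
# thing that lives under playbooks/library/. Names are illustrative only.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(required=True, type='str'),
            state=dict(default='present', choices=['present', 'absent']),
        ),
        supports_check_mode=True,
    )

    name = module.params['name']
    state = module.params['state']

    # A real module would talk to an API or inspect the host here and only
    # report changed=True when something was actually modified (idempotency).
    changed = (state == 'present')

    if module.check_mode:
        module.exit_json(changed=changed)

    module.exit_json(changed=changed, name=name, state=state)


if __name__ == '__main__':
    main()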

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-07 Thread John McDowall
Srilatha,

I am trying to get these resolved but my devstack is broken and I am trying to 
resolve it.

Let me try to manually edit this and fix it – it is left over from the old API 
before I moved to the port-chain API.

Regards

John

From: Srilatha Tangirala
Date: Monday, June 6, 2016 at 2:01 PM
To: John McDowall
Cc: "disc...@openvswitch.org", Na Zhu, "OpenStack Development Mailing List (not for usage questions)", Ryan Moats
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN


Hi John,

To get started with adding test scripts, I was trying to work out one end-to-end 
flow with the latest code from your private repos and found the following.

create_port_chain is calling _create_ovn_vnf.
This is calling self._ovn.create_lservice(
lservice_name = 'sfi-%s' % ovn_info['id'],
lswitch_name = lswitch_name,
name = ovn_info['name'],
app_port = ovn_info['app_port_id'],
in_port = ovn_info['in_port_id'],
out_port = ovn_info['out_port_id'] ))

I could not find create_lservice() in the networking-sfc or networking-ovn repos. 
Are you planning to move the OVN-related APIs (e.g. _create_ovn_vnf) from the SFC 
driver to networking-ovn? Please let us know the best way to proceed with writing 
the test scripts.

Thanks,
Srilatha.







From: John McDowall
To: Na Zhu
Cc: "disc...@openvswitch.org", Ryan Moats/Omaha/IBM@IBMUS, Srilatha Tangirala/San Francisco/IBM@IBMUS, "OpenStack Development Mailing List (not for usage questions)"
Date: 06/06/2016 08:36 AM
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN





Juno,

Let me check – my intention was that the networking-sfc OVNB driver would 
configure all aspects of the port-chain and add the parameters to the 
networking-sfc db. Once all the parameters were in the creation of a port-chain 
would call networking-ovn (passing a deep copy of the port-chain dict). Here I 
see networking-ovn acting only as a bridge into ovs/ovn (I did not add anything 
in the ovn plugin – not sure if that is the right approach). Networking-ovn 
calls into ovs/ovn and inserts the entire port-chain.

Thoughts?

j

From: Na Zhu
Date: Monday, June 6, 2016 at 5:49 AM
To: John McDowall
Cc: "disc...@openvswitch.org", Ryan Moats, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

Hi John,

One question I need to confirm with you: I think the OVN flow classifier driver and 
OVN port chain driver should call the APIs which you add to networking-ovn to 
configure the northbound DB SFC tables, right? I see your networking-sfc OVN 
drivers do not call the APIs you add to networking-ovn; did you miss that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: Na Zhu/China/IBM@IBMCN
To: John McDowall
Cc: Srilatha Tangirala, OpenStack Development Mailing List, Ryan Moats, "disc...@openvswitch.org"
Date: 2016/06/06 14:28
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN




John,

Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-07 Thread Oleg Bondarev
On Mon, Jun 6, 2016 at 2:03 PM, Armando M.  wrote:

>
>
> On 6 June 2016 at 10:06, Oleg Bondarev  wrote:
>
>> Hi,
>>
>> There are cases where it would be useful to know the version of Neutron
>> (or any other project) from API, like during upgrades or in cross-project
>> communication cases.
>> For example in https://review.openstack.org/#/c/246910/ Nova needs to
>> know if Neutron sends vif-plugged event during live migration. To ensure
>> this it should be enough to know Neutron is "Newton" or higher.
>>
>> Not sure why it wasn't done before (or was it and I'm just blind?) so the
>> question to the community is what are possible issues/downsides of exposing
>> code version through the API?
>>
>
> If you are not talking about features exposed through the API (for which
> they'd have a new extension being advertised), knowing that you're running
> a specific version of the code might not guarantee that a particular
> feature is available, especially in the case where the capability is an
> implementation detail that is config tunable (evil, evil). This may also
> lead to needless coupling between the two projects, as you'd still want to
> code defensively and assume the specific behavior may or may not be there.
>

Agree, that's why we have extensions-list in the API, right?


>
> I suspect that your case is slightly different in that the lack of a
> received event may be due to an error rather than a missing capability and
> you would not be able to distinguish the difference if not optimistically
> assume lack of capability. Then you need to make a "mental" note and come
> back to the code to assume a failure two cycles down the road from when
> your code merges. Definitely not a pretty workflow without advertising the
> new feature explicitly via the API.
>

I'd not call it a feature, but a tiny behavior change for the neutron
reference implementation: if patch https://review.openstack.org/#/c/246898/
merges in Newton then we can be sure that neutron (newton or higher) with
ml2+ovs should send a particular event to nova during live migration (if it
doesn't then it's a bug, but that's another topic) and nova can be sure it
should wait for this event.


>
>
>>
>> Thanks,
>> Oleg
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core team

2016-06-07 Thread Sridhar Ramaswamy
Closing the vote, as all existing core members voted.

Bharath - Welcome to the Tacker core team!

On Mon, Jun 6, 2016 at 11:39 AM, Stephen Wong 
wrote:

> +1
>
> On Fri, Jun 3, 2016 at 9:23 PM, Haddleton, Bob (Nokia - US) <
> bob.haddle...@nokia.com> wrote:
>
>> +1
>>
>> Bob
>>
>> On Jun 3, 2016, at 8:24 PM, Sridhar Ramaswamy  wrote:
>>
>> Tackers,
>>
>> I'm happy to propose Bharath Thiruveedula (IRC: tbh) to join the tacker
>> core team. Bharath has been contributing to Tacker from the Liberty cycle,
>> and he has grown into a key member of this project. His contribution has
>> steadily increased as he picked up bigger pieces to deliver [1].
>> Specifically, he contributed the automatic resource creation blueprint [2]
>> in the Mitaka release. Plus tons of other RFEs and bug fixes [3]. Bharath
>> is also a key contributor in tosca-parser and heat-translator projects
>> which is an added plus.
>>
>> Please provide your +1/-1 votes.
>>
>> Thanks Bharath for your contributions so far and much more to come !!
>>
>> [1]
>> http://stackalytics.com/?project_type=openstack=all=commits_id=bharath-ves=tacker-group
>> [2]
>> https://blueprints.launchpad.net/tacker/+spec/automatic-resource-creation
>> [3] https://bugs.launchpad.net/bugs/+bugs?field.assignee=bharath-ves
>> 
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:

Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created the following 
in an attempt to document some of my concerns, and I'm wondering if you folks 
could help myself identity ongoing work to solve these (or alternatives?)


Hi Kris. I've been using Ironic in various forms for a while, and I can
answer a few of these things.


List of concerns with ironic:

1.)Nova<->  ironic interactions are generally seem terrible?


I don't know if I'd call it terrible, but there's friction. Things that
are unchangable on hardware are just software configs in vms (like mac
addresses, overlays, etc), and things that make no sense in VMs are
pretty standard on servers (trunked vlans, bonding, etc).

One way we've gotten around it is by using Ironic standalone via
Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
and includes playbooks to build config drives and deploy images in a
fairly rudimentary way without Nova.

I call this the "better than Cobbler" way of getting a toe into the
Ironic waters.

[1] https://github.com/openstack/bifrost


Out of curiosity, why ansible vs turning 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py 
(or something like it) into a tiny-wsgi-app (pick useful name here) that 
has its own REST api (that looks pretty similar to the public functions 
in that driver file)?


That seems almost easier than building a bunch of ansible scripts that 
appear (at a glance) to do similar things; and you get the benefit of 
using an actual programming language vs a 
half-programming-ansible-yaml-language...


A realization I'm having is that I'm really not a fan of using ansible 
as a half-programming-ansible-yaml-language, which it seems like people 
start to try to do after a while (because at some point you need 
something like if statements, then things like [1] get created), no 
offense to the authors, but I guess this is my personal preference (it's 
also one of the reasons taskflow is directly a lib in Python: because 
people don't need to learn a new language).


[1] 
https://github.com/openstack/bifrost/blob/master/playbooks/roles/ironic-enroll-dynamic/tasks/main.yml
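
To make that idea concrete, the "tiny-wsgi-app" would be something in the spirit of the sketch below (Flask plus python-ironicclient; the endpoints and the noauth-style settings are made up for illustration, this is not an existing service):

# Rough sketch of a thin REST wrapper over python-ironicclient. Endpoint
# paths and the standalone/noauth settings below are illustrative
# assumptions, not an existing project.
from flask import Flask, jsonify
from ironicclient import client as ironic_client

app = Flask(__name__)
ironic = ironic_client.get_client(
    '1',
    ironic_url='http://127.0.0.1:6385',
    os_auth_token='fake',  # standalone / noauth style deployment
)


@app.route('/nodes', methods=['GET'])
def list_nodes():
    # Return a trimmed-down view of the nodes ironic knows about.
    return jsonify(nodes=[{'uuid': n.uuid,
                           'provision_state': n.provision_state}
                          for n in ironic.node.list()])


@app.route('/nodes/<uuid>/deploy', methods=['POST'])
def deploy(uuid):
    # Kick off a deploy by moving the node towards the 'active' state.
    ironic.node.set_provision_state(uuid, 'active')
    return jsonify(uuid=uuid, requested_state='active')


if __name__ == '__main__':
    app.run(port=8080)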





   -How to accept raid config and partitioning(?) from end users? Seems to not be 
a yet-agreed-upon method between nova/ironic.


AFAIK accepting it from the users just isn't solved. Administrators
do have custom ramdisks that they boot to pre-configure RAID during
enrollment.


-How to run multiple conductors/nova-computes?   Right now, as far as I can 
tell, all of ironic is fronted by a single nova-compute, which I will have to 
manage via a cluster technology between two or more nodes.  Because of this and 
the way host-aggregates work I am unable to expose fault domains for ironic 
instances (all of ironic can only be under a single AZ (the az that is assigned 
to the nova-compute node)). Unless I create multiple nova-compute servers and 
manage multiple independent ironic setups.  This makes on-boarding/query of 
hardware capacity painful.


The nova-compute does almost nothing. It really just talks to the
scheduler to tell it what's going on in Ironic. If it dies, deploys
won't stop. You can run many many conductors and spread load and fault
tolerance among them easily. I think for multiple AZs though, you're
right, there's no way to expose that. Perhaps it can be done with cells,
which I think Rackspace's OnMetal uses (but I'll let them refute or
confirm that).

Seems like the virt driver could be taught to be AZ-aware and some
metadata in the server record could allow AZs to go through to Ironic.


   - Nova appears to be forcing a we are "compute" as long as "compute" is VMs, 
means that we will have a baremetal flavor explosion (ie the mismatch between baremetal and VM).
   - This is a feeling I got from the ironic-nova cross project meeting in 
Austin.  General example goes back to raid config above. I can configure a 
single piece of hardware many different ways, but to fit into nova's world view 
I need to have many different flavors exposed to end-user.  In this way many 
flavors can map back to a single piece of hardware with just a slightly 
different configuration applied. So how am I supposed to do a single server with 
6 drives as either: Raid 1 + Raid 5, Raid 5, Raid 10, Raid 6, or JBOD.  Seems 
like I would need to pre-mark out servers that were going to be a specific raid 
level.  Which means that I need to start managing additional sub-pools of 
hardware to just deal with how the end users wants the raid configured, this is 
pretty much a non-starter for us.  I have not really heard of whats being done 
on this specific front.


You got that right. Perhaps people are comfortable with this limitation.
It is at least simple.


2.) Inspector:
   - IPA service doesn't gather port/switching information
   - Inspection service doesn't process 

Re: [openstack-dev] [puppet] weekly meeting #84

2016-06-07 Thread Emilien Macchi
We did our meeting, you can read notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-06-07-15.00.html

Thanks,

On Mon, Jun 6, 2016 at 9:12 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting-4.
>
> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160607
>
> Feel free to add more topics, and any outstanding bug and patch.
>
> See you tomorrow!
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Dmitry Tantsur

On 06/07/2016 02:01 AM, Devananda van der Veen wrote:


On 06/06/2016 01:44 PM, Kris G. Lindgren wrote:

Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created the following
in an attempt to document some of my concerns, and I'm wondering if you folks
could help me identify ongoing work to solve these (or alternatives?)
List of concerns with ironic:


Hi Kris,

There is a lot of ongoing work in and around the Ironic project. Thanks for
diving in and for sharing your concerns; you're not alone.

I'll respond to each group of concerns, as some of these appear quite similar to
each other and align with stuff we're already doing. Hopefully I can provide
some helpful background to where the project is at today.



1.) Nova <-> ironic interactions generally seem terrible?


These two projects are coming at the task of managing "compute" with
significantly different situations and we've been working, for the last ~2
years, to build a framework that can provide both virtual and physical resources
through one API. It's not a simple task, and we have a lot more to do.



  -How to accept raid config and partitioning(?) from end users? Seems to not be a
yet-agreed-upon method between nova/ironic.


Nova expresses partitioning in a very limited way on the flavor. You get root,
swap, and ephemeral partitions -- and that's it. Ironic honors those today, but
they're pinned on the flavor definition, eg. by the cloud admin (or whoever can
define the flavor.

If your users need more complex partitioning, they could create additional
partitions after the instance is created. This limitation within Ironic exists,
in part, because the projects' goal is to provide hardware through the OpenStack
Compute API -- which doesn't express arbitrary partitionability. (If you're
interested, there is a lengthier and more political discussion about whether the
cloud should support "pets" and whether arbitrary partitioning is needed for
"cattle".)


RAID configuration isn't something that Nova allows their users to choose today
- it doesn't fit in the Nova model of "compute", and there is, to my knowledge,
nothing in the Nova API to allow its input. We've discussed this a little bit,
but so far settled on leaving it up to the cloud admin to set this in Ironic.

There has been discussion with the Cinder community over ways to express volume
spanning and mirroring, but apply it to a machines' local disks, but these
discussions didn't result in any traction.

There's also been discussion of ways we could do ad-hoc changes in RAID level,
based on flavor metadata, during the provisioning process (rather than ahead of
time) but no code has been done for this yet, AFAIK.


I'm still pretty interested in it, because I agree with what was said 
above about building RAID ahead-of-time not being convenient. I don't 
quite understand what such a feature would look like; we might add it as 
a topic for the midcycle.




So, where does that leave us? With the "explosion of flavors" that you
described. It may not be ideal, but that is the common ground we've reached.


   -How to run multiple conductors/nova-computes?   Right now, as far as I can
tell, all of ironic is fronted by a single nova-compute, which I will have to
manage via a cluster technology between two or more nodes.  Because of this and
the way host-aggregates work I am unable to expose fault domains for ironic
instances (all of ironic can only be under a single AZ (the az that is assigned
to the nova-compute node)). Unless I create multiple nova-compute servers and
manage multiple independent ironic setups.  This makes on-boarding/query of
hardware capacity painful.


Yep. It's not ideal, and the community is very well aware of, and actively
working on, this limitation. It also may not be as bad as you may think. The
nova-compute process doesn't do very much, and tests show it handling some
thousands of ironic nodes fairly well in parallel. Standard active-passive
management of that process should suffice.

A lot of design work has been done to come up with a joint solution by folks on
both the Ironic and Nova teams.
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/ironic-multiple-compute-hosts.html

As a side note, it's possible (though not tested, recommended, or well
documented) to run more than one nova-compute. See
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py


  - Nova appears to be forcing a we are "compute" as long as "compute" is VMs,
means that we will have a baremetal flavor explosion (ie the mismatch between
baremetal and VM).
  - This is a feeling I got from the ironic-nova cross project meeting in
Austin.  General example goes back to raid config above. I can configure a
single piece of hardware many different ways, but to fit into nova's world view
I need to have many different flavors exposed to end-user.  In this way many
flavors can map back to a single piece of hardware with just a 

[openstack-dev] [watcher] Meeting Wednesday June 8th at 9:00 UTC

2016-06-07 Thread Antoine Cabot
Hi everyone,

The Watcher team is having its next weekly meeting on Wednesday
June 8th, at 9:00 UTC in #openstack-meeting-4

Meeting agenda available here:
https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda#06.2F08.2F2016

Anyone is welcome to add agenda items and everyone interested in
Watcher and Infrastructure optimization is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/watcher/2016/watcher.2016-06-01-14.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/watcher/2016/watcher.2016-06-01-14.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/watcher/2016/watcher.2016-06-01-14.00.log.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-07 Thread Paul Michali
Anyone have any thoughts on the two questions below? Namely...

If the huge pages are 2M, we are creating a 2GB VM, have 1945 huge pages,
should the allocation fail (and if so why)?

Why do all the 2GB VMs get created on the same NUMA node, instead of
getting evenly assigned to each of the two NUMA nodes that are available on
the compute node (as a result, allocation fails, when 1/2 the huge pages
are used)? I found that increasing mem_page_size to 2048 resolves the
issue, but don't know why.
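
For reference, the page math I'm working from (the per-NUMA-node split below is only an assumed example, to show how a host-wide count of 1945 free pages could still fail a request if the guest has to fit entirely within one node):

# Back-of-the-envelope huge page accounting for the scenario above.
page_size_kb = 2048                                 # 2M huge pages
vm_ram_mb = 2048                                    # "small" flavor, 2 GB
pages_per_vm = vm_ram_mb * 1024 // page_size_kb     # = 1024 pages per guest

free_pages_total = 1945                             # what the host reports overall
# Assumed (illustrative) split if earlier guests all landed on node 0:
free_pages = {'node0': 950, 'node1': 995}

# A huge-page-backed guest must fit within a single host NUMA node, so the
# claim fails even though the host-wide total (1945) exceeds 1024.
fits = any(free >= pages_per_vm for free in free_pages.values())
print(pages_per_vm, fits)                           # 1024 False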

Another thing I was seeing: when the VM create failed due to not enough
huge pages available and was in the error state, I could delete the VM, but the
Neutron port was still there.  Is that correct?

I didn't see any log messages in neutron, requesting to unbind and delete
the port.

Thanks!

PCM

.

On Fri, Jun 3, 2016 at 2:03 PM Paul Michali  wrote:

> Thanks for the link Tim!
>
> Right now, I have two things I'm unsure about...
>
> One is that I had 1945 huge pages left (of size 2048k) and tried to create
> a VM with a small flavor (2GB), which should need 1024 pages, but Nova
> indicated that it wasn't able to find a host (and QEMU reported an
> allocation issue).
>
> The other is that VMs are not being evenly distributed on my two NUMA
> nodes, and instead, are getting created all on one NUMA node. Not sure if
> that is expected (and setting mem_page_size to 2048 is the proper way).
>
> Regards,
>
> PCM
>
>
> On Fri, Jun 3, 2016 at 1:21 PM Tim Bell  wrote:
>
>> The documentation at
>> http://docs.openstack.org/admin-guide/compute-flavors.html is gradually
>> improving. Are there areas which were not covered in your clarifications ?
>> If so, we should fix the documentation too since this is a complex area to
>> configure and good documentation is a great help.
>>
>>
>>
>> BTW, there is also an issue around how the RAM for the BIOS is shadowed.
>> I can’t find the page from a quick google but we found an imbalance when we
>> used 2GB pages as the RAM for BIOS shadowing was done by default in the
>> memory space for only one of the NUMA spaces.
>>
>>
>>
>> Having a look at the KVM XML can also help a bit if you are debugging.
>>
>>
>>
>> Tim
>>
>>
>>
>> From: Paul Michali 
>> Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)" 
>> Date: Friday 3 June 2016 at 15:18
>> To: "Daniel P. Berrange" , "OpenStack Development
>> Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling
>>
>>
>>
>> See PCM inline...
>>
>> On Fri, Jun 3, 2016 at 8:44 AM Daniel P. Berrange 
>> wrote:
>>
>> On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
>> > Hi!
>> >
>> > I've been playing with Liberty code a bit and had some questions that
>> I'm
>> > hoping Nova folks may be able to provide guidance on...
>> >
>> > If I set up a flavor with hw:mem_page_size=2048, and I'm creating
>> (Cirros)
>> > VMs with size 1024, will the scheduling use the minimum of the number of
>>
>> 1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?
>>
>>
>>
>> PCM: I was using small flavor, which is 2 GB. So that's 2048 MB and the
>> page size is 2048K, so 1024 pages? Hope I have the units right.
>>
>>
>>
>>
>>
>>
>> > huge pages available and the size requested for the VM, or will it base
>> > scheduling only on the number of huge pages?
>> >
>> > It seems to be doing the latter, where I had 1945 huge pages free, and
>> > tried to create another VM (1024) and Nova rejected the request with "no
>> > hosts available".
>>
>> From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.
>>
>> Anyway, when you request huge pages to be used for a flavour, the
>> entire guest RAM must be able to be allocated from huge pages.
>> ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
>> of huge pages available. It is not possible for a VM to use
>> 1.5 GB of huge pages and 500 MB of normal sized pages.
>>
>>
>>
>> PCM: Right, so, with 2GB of RAM, I need 1024 huge pages of size 2048K. In
>> this case, there are 1945 huge pages available, so I was wondering why it
>> failed. Maybe I'm confusing sizes/pages?
>>
>>
>>
>>
>>
>>
>> > Is this still the same for Mitaka?
>>
>> Yep, this use of huge pages has not changed.
>>
>> > Where could I look in the code to see how the scheduling is determined?
>>
>> Most logic related to huge pages is in nova/virt/hardware.py
>>
>> > If I use mem_page_size=large (what I originally had), should it evenly
>> > assign huge pages from the available NUMA nodes (there are two in my
>> case)?
>> >
>> > It looks like it was assigning all VMs to the same NUMA node (0) in this
>> > case. Is the right way to change to 2048, like I did above?
>>
>> Nova will always avoid spreading your VM across 2 host NUMA nodes,
>> since that gives bad performance 

Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-07 Thread Ghe Rivero
I think nova should completely ignore this issue and boot the image no matter 
what. This is an operational 'workflow', and nova doesn't need to know about 
the image internals at all. If it boots, then is not nova problem. 

Ghe Rivero


Quoting Clif Houck (2016-06-06 23:41:12)
> Hello all,
> 
> At Rackspace we're running into an interesting problem: Consider a user
> who boots an instance in Nova with an image which only supports SSH
> public-key authentication, but the user doesn't provide a public key in
> the boot request. As far as I understand it, today Nova will happily
> boot that image and it may take the user some time to realize their
> mistake when they can't login to the instance.
> 
> I've been thinking about a solution to this problem. Ideally, the Nova
> API would quickly return an HTTP code indicating a problem with the
> request and reject the `create` or `recreate` request if the proper
> credentials were not included as part of the request.
> 
> So in the example above, instead of Nova accepting the create request,
> Nova would check the requested image's meta-data and ensure at least
> one form of authentication is supported AND has credentials available
> to place on the image during provisioning. Basically, ensure the
> requester has a way to login remotely.
> 
> I've put up a short specification on this proposed addition here:
> https://review.openstack.org/#/c/326073/
> and the blueprint is here:
> https://blueprints.launchpad.net/nova/+spec/auth-based-on-image-metadat
> a
> 
> I think one of the glaring weaknesses of this proposal is it would
> require a call to the image API to get image meta-data during `create`
> or `recreate`. This could be alleviated by caching image meta-data in
> Nova, since I wouldn't expect image meta-data to change often.
> 
> There's also the question of the image meta-data itself. I don't think
> there is any existing standard to describe, using metadata, what remote
> login/authentication methods a particular image supports. One way could
> be to provide a set of configuration options for the operator to
> define. That way, each operator could define their own metadata
> describing each authentication method supported by their images.
> 
> Hoping this will elicit some opinions on how to go about solving this,
> or if I've missed something that already exists that solves this
> problem.
> 
> Any thoughts welcome.
> 
> Thanks,
> Clif Houck
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone]trusts with federated users

2016-06-07 Thread Gyorgy Szombathelyi
Hi!

As an OIDC user, I tried to play with Heat and Murano recently. They usually fail 
with a trust creation error, saying that keystone cannot find the _member_ 
role while creating the trust.
Since a federated user does not really have a role in a project, but is a 
member of a group which has the appropriate role(s), I suspect that this will 
never work with federation?
Or is it a known/general problem with trusts and groups? I cannot really decide 
whether it is a problem on the Heat or the Keystone side; can you give me some 
advice?
If it is not an error in the code, but in my setup, then please forgive me this 
stupid question.
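
(For context, this is roughly the kind of call Heat makes on the user's behalf when it sets up its deferred-auth trust; a simplified sketch with python-keystoneclient, with placeholder IDs. If the federated user only holds _member_ through a group mapping, the explicit role_names check at trust-creation time is presumably what fails.)

# Simplified sketch of the trust Heat creates for deferred operations.
# IDs and the token are placeholders; the interesting part is role_names.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client

auth = v3.Token(auth_url='http://controller:5000/v3',
                token='FEDERATED_USER_TOKEN',
                project_id='PROJECT_ID')
keystone = client.Client(session=session.Session(auth=auth))

trust = keystone.trusts.create(
    trustor_user='FEDERATED_USER_ID',    # the OIDC user
    trustee_user='HEAT_SERVICE_USER_ID',
    project='PROJECT_ID',
    role_names=['_member_'],             # fails if the role only comes via a group
    impersonation=True,
)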

Br,
György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-07 Thread Daniel P. Berrange
On Tue, Jun 07, 2016 at 09:37:25AM -0400, Jim Rollenhagen wrote:
> On Tue, Jun 07, 2016 at 08:31:35AM +1000, Michael Still wrote:
> > On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:
> > 
> > > Hello all,
> > >
> > > At Rackspace we're running into an interesting problem: Consider a user
> > > who boots an instance in Nova with an image which only supports SSH
> > > public-key authentication, but the user doesn't provide a public key in
> > > the boot request. As far as I understand it, today Nova will happily
> > > boot that image and it may take the user some time to realize their
> > > mistake when they can't login to the instance.
> > >
> > 
> > What about images where the authentication information is inside the image?
> > For example, there's just a standard account baked in that everyone knows
> > about? In that case Nova doesn't need to inject anything into the instance,
> > and therefore the metadata doesn't need to supply anything.
> 
> Right, so that's a third case. How I'd see this working is maybe an
> image property called "auth_requires" that could be one of ["none",
> "ssh_key", "x509_cert", "password"]. Or maybe it could be multiple
> values that are OR'd, so for example an image could require an ssh key
> or an x509 cert. If the "auth_requires" property isn't found, default to
> "none" to maintain compatibility, I guess.

NB, even if you have an image that requires an SSH key to be provided in
order to enable login, it is sometimes valid to not provide one. Not least
during development, I'm often testing images which would ordinarily require
an SSH key, but I don't actually need the ability to login, so I don't bother
to provide one.

So if we provided this ability to tag images as needing an ssh key, and then
enforced that, we would then also need to extend the API to provide a way to
tell nova to explicitly ignore this and not bother enforcing it, despite what
the image metadata says.

I'm not particularly convinced the original problem is serious enough to
warrant building such a solution. It feels like the kind of mistake that
people would do once, and then learn their mistake thereafter. IOW the
consequences of the mistake don't seem particularly severe really.

> The bigger question here is around hitting the images API syncronously
> during a boot request, and where/how/if to cache the metadata that's
> returned so we don't have to do it so often. I don't have a good answer
> for that, though.

Nova already uses image metadata for countless things during the VM boot
request, so there's nothin new in this respect. We only query glance
once, thereafter the image metadata is cached by Nova in the DB on a per
instance basis, because we need to be isolated from later changes to the
metadata in glance after the VM boots.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about service subnets spec

2016-06-07 Thread John Davidge
Resurrecting this thread from last week.

On 5/31/16, 10:11 PM, "Brian Haley"  wrote:

>> At this point the enumeration values map simply to device owners.  For
>>example:
>>
>>router_ports -> "network:router_gateway"
>>dvr_fip_ports -> "network:floatingip_agent_gateway"
>>
>> It was at this point that I questioned the need for the abstraction at
>> all.  Hence the proposal to use the device owners directly.
>
>I would agree, think having another name to refer to a device_owner makes
>it
>more confusing.  Using it directly let's us be flexible for deployers,
>and
>allows for using additional owners values if/when they are added.

I agree that a further abstraction is probably not desirable here. If this
is only going to be exposed to admins then using the existing device_owner
values shouldn't cause confusion for users.

>
>> Armando expressed some concern about using the device owner as a
>> security issue.  We have the following policy on device_owner:
>>
>>"not rule:network_device or rule:context_is_advsvc or
>> rule:admin_or_network_owner"
>>
>> At the moment, I don't see this as much of an issue.  Do you?
>
>I don't, since only admins should be able to set device_owner to these
>values
>(that's the policy we're talking about here, right?).
>
>To be honest, I think Armando's other comment - "Do we want to expose
>device_owner via tha API or leave it an implementation detail?" is
>important as
>well.  Even though I think an admin should know this level of neutron
>detail,
>will they really?  It's hard to answer that question being so close to
>the code

Seeing as device_owner is already exposed by the port API I don't think
this is an issue. And if we agree that a further abstraction isn't a good
idea then I don't see how we would get around exposing it in this context.

https://review.openstack.org/#/c/300207
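
(For concreteness, if the spec lands roughly as written, admin usage could look something like the sketch below; the service_types attribute name and the workflow are assumptions based on the current draft and may still change.)

# Hedged sketch of creating a "service subnet" restricted to the listed
# device_owner values. Attribute name and exact behaviour may still change.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

neutron.create_subnet(body={
    'subnet': {
        'network_id': 'EXTERNAL_NETWORK_UUID',
        'ip_version': 4,
        'cidr': '203.0.113.0/24',
        'service_types': ['network:floatingip_agent_gateway'],
    }
})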

John



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Trello board

2016-06-07 Thread Jim Rollenhagen
On Fri, Jun 03, 2016 at 07:50:42PM +, Tristan Cacqueray wrote:
> On 06/03/2016 07:08 PM, Jim Rollenhagen wrote:
> > On Fri, Jun 03, 2016 at 06:29:13PM +0200, Alexis Monville wrote:
> >> Hi,
> >>
> >> On Fri, Jun 3, 2016 at 5:39 PM, Jim Rollenhagen  
> >> wrote:
> >>> Hey all,
> >>>
> >>> Myself and some other cores have had trouble tracking our priorities
> >>> using Launchpad and friends, so we put together a Trello board to help
> >>> us track it. This should also help us focus on what to review or work
> >>> on.
> >>>
> >>> https://trello.com/b/ROTxmGIc/ironic-newton-priorities
> >>>
> >>> Some notes on this:
> >>>
> >>> * This is not the "official" tracking system for ironic - everything
> >>>   should still be tracked in Launchpad as we've been doing. This just
> >>>   helps us organize that.
> >>>
> >>> * This is not free software, unfortunately. Sorry. If this is a serious
> >>>   problem for you in practice, let's chat on IRC and try to come up with
> >>>   a solution.
> >>>
> >>> * I plan on only giving cores edit access on this board to help keep it
> >>>   non-chaotic.
> >>>
> >>> * I'd like to keep this restricted to the priorities we decided on at
> >>>   the summit (including the small stuff not on our priorities page). I'm
> >>>   okay with adding a small number of things here and there, if something
> >>>   comes up that is super important or we think is a nice feature we
> >>>   definitely want to finish in Newton. I don't want to put everything
> >>>   being worked on in this (at least for now).
> >>>
> >>> If you're a core and want edit access to the board, please PM me on IRC
> >>> with your Trello username and I'll add you.
> >>>
> >>> Feedback welcome. :)
> >>
> >> I would like to know if you are aware of this specs around StoryBoard:
> >> http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html
> >>
> >> Maybe it could be interesting to have a look at it and see if it could
> >> fit your needs?
> > 
> > I'm aware of it, and keeping storyboard on my radar.
> > 
> > I am excited for the time when it's feasible to move the project from
> > Launchpad to storyboard, but I don't think that time has come yet.
> > 
> > I don't want to disrupt all of our tracking right now. We simply need a
> > high-level view of what's currently important to the ironic project,
> > where those important things are in terms of getting done, and
> > aggregating pointers to the resources needed to continue working on
> > those things.
> > 
> > We aren't moving our bug/feature list to Trello, simply using it as a
> > way to stay more organized. :)
> > 
> 
> Without moving your project from launhpad to storyboard, it seems like
> you can already use storyboard to keep things organized with a kanban
> board, e.g.:
>   https://storyboard.openstack.org/#!/board/15
> 
> To create a new board, you need to click "Create new" then "board".
> Cards are in fact normal stories that you can update and reference directly.
> 
> Is there something missing that makes Trello a better solution ?

Oh neat! I totally missed that. I didn't evaluate storyboard super deep,
given some of the things I heard about it not being ready for everyone
to switch to. So I don't have a good list as to what Trello does (or
doesn't) do better.

I'd like to stick with Trello for now (since we're all set up on it) and
look at storyboard harder for the Ocata cycle. I specifically
quarantined trello to only our Newton priorities, partially for this
reason.

// jim

> 
> -Tristan
> 



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-07 Thread Jim Rollenhagen
Thanks for getting to this before me, Deva. Saved me some typing. :)

A little more color inline.

On Mon, Jun 06, 2016 at 05:01:04PM -0700, Devananda van der Veen wrote:
> 
> On 06/06/2016 01:44 PM, Kris G. Lindgren wrote:
> > Hi ironic folks,
> > As I'm trying to explore how GoDaddy can use ironic I've created the 
> > following
> > in an attempt to document some of my concerns, and I'm wondering if you 
> > folks
> > could help myself identity ongoing work to solve these (or alternatives?)
> > List of concerns with ironic:
> 
> Hi Kris,
> 
> There is a lot of ongoing work in and around the Ironic project. Thanks for
> diving in and for sharing your concerns; you're not alone.
> 
> I'll respond to each group of concerns, as some of these appear quite similar 
> to
> each other and align with stuff we're already doing. Hopefully I can provide
> some helpful background to where the project is at today.
> 
> > 
> > 1.)Nova <-> ironic interactions are generally seem terrible?
> 
> These two projects are coming at the task of managing "compute" with
> significantly different situations and we've been working, for the last ~2
> years, to build a framework that can provide both virtual and physical 
> resources
> through one API. It's not a simple task, and we have a lot more to do.
> 
> 
> >   -How to accept raid config and partitioning(?) from end users? Seems to 
> > not a
> > yet agreed upon method between nova/ironic.
> 
> Nova expresses partitioning in a very limited way on the flavor. You get root,
> swap, and ephemeral partitions -- and that's it. Ironic honors those today, 
> but
> they're pinned on the flavor definition, eg. by the cloud admin (or whoever 
> can
> define the flavor.
> 
> If your users need more complex partitioning, they could create additional
> partitions after the instance is created. This limitation within Ironic 
> exists,
> in part, because the projects' goal is to provide hardware through the 
> OpenStack
> Compute API -- which doesn't express arbitrary partitionability. (If you're
> interested, there is a lengthier and more political discussion about whether 
> the
> cloud should support "pets" and whether arbitrary partitioning is needed for
> "cattle".)
> 
> 
> RAID configuration isn't something that Nova allows their users to choose 
> today
> - it doesn't fit in the Nova model of "compute", and there is, to my 
> knowledge,
> nothing in the Nova API to allow its input. We've discussed this a little bit,
> but so far settled on leaving it up to the cloud admin to set this in Ironic.
> 
> There has been discussion with the Cinder community over ways to express 
> volume
> spanning and mirroring, but apply it to a machines' local disks, but these
> discussions didn't result in any traction.
> 
> There's also been discussion of ways we could do ad-hoc changes in RAID level,
> based on flavor metadata, during the provisioning process (rather than ahead 
> of
> time) but no code has been done for this yet, AFAIK.
> 
> So, where does that leave us? With the "explosion of flavors" that you
> described. It may not be ideal, but that is the common ground we've reached.
> 
> >-How to run multiple conductors/nova-computes?   Right now as far as I 
> > can
> > tell all of ironic front-end by a single nova-compute, which I will have to
> > manage via a cluster technology between two or mode nodes.  Because of this 
> > and
> > the way host-agregates work I am unable to expose fault domains for ironic
> > instances (all of ironic can only be under a single AZ (the az that is 
> > assigned
> > to the nova-compute node)). Unless I create multiple nova-compute servers 
> > and
> > manage multiple independent ironic setups.  This makes on-boarding/query of
> > hardware capacity painful.
> 
> Yep. It's not ideal, and the community is very well aware of, and actively
> working on, this limitation. It also may not be as bad as you may think. The
> nova-compute process doesn't do very much, and tests show it handling some
> thousands of ironic nodes fairly well in parallel. Standard active-passive
> management of that process should suffice.
> 
> A lot of design work has been done to come up with a joint solution by folks 
> on
> both the Ironic and Nova teams.
> http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/ironic-multiple-compute-hosts.html

It's important to point out here that we're re-working how this works,
but it's still one of our highest priorities:
https://review.openstack.org/#/c/320016/

> 
> As a side note, it's possible (though not tested, recommended, or well
> documented) to run more than one nova-compute. See
> https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py
> 
> >   - Nova appears to be forcing a we are "compute" as long as "compute" is 
> > VMs,
> > means that we will have a baremetal flavor explosion (ie the mismatch 
> > between
> > baremetal and VM).
> >   - This is a feeling I got from the ironic-nova cross 

Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-07 Thread Jim Rollenhagen
On Tue, Jun 07, 2016 at 08:31:35AM +1000, Michael Still wrote:
> On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:
> 
> > Hello all,
> >
> > At Rackspace we're running into an interesting problem: Consider a user
> > who boots an instance in Nova with an image which only supports SSH
> > public-key authentication, but the user doesn't provide a public key in
> > the boot request. As far as I understand it, today Nova will happily
> > boot that image and it may take the user some time to realize their
> > mistake when they can't login to the instance.
> >
> 
> What about images where the authentication information is inside the image?
> For example, there's just a standard account baked in that everyone knows
> about? In that case Nova doesn't need to inject anything into the instance,
> and therefore the metadata doesn't need to supply anything.

Right, so that's a third case. How I'd see this working is maybe an
image property called "auth_requires" that could be one of ["none",
"ssh_key", "x509_cert", "password"]. Or maybe it could be multiple
values that are OR'd, so for example an image could require an ssh key
or an x509 cert. If the "auth_requires" property isn't found, default to
"none" to maintain compatibility, I guess.

The bigger question here is around hitting the images API synchronously
during a boot request, and where/how/if to cache the metadata that's
returned so we don't have to do it so often. I don't have a good answer
for that, though.
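
A rough sketch of what the API-side check could look like ("auth_requires" is the property proposed above, not an existing Glance convention, and the request fields are simplified placeholders):

# Hedged sketch of the proposed pre-boot validation. "auth_requires" is the
# image property suggested above, not something Glance defines today.
def validate_auth_for_boot(image_properties, boot_request):
    required = image_properties.get('auth_requires', 'none')
    if required == 'none':
        return

    # Multiple OR'd values, e.g. "ssh_key,x509_cert"
    accepted = set(required.split(','))

    supplied = set()
    if boot_request.get('key_name'):
        supplied.add('ssh_key')
    if boot_request.get('admin_password'):
        supplied.add('password')
    if boot_request.get('x509_cert'):      # placeholder field name
        supplied.add('x509_cert')

    if not accepted & supplied:
        # In Nova this would translate to a 400 response at create/rebuild.
        raise ValueError('image requires one of %s for login' %
                         ', '.join(sorted(accepted)))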

// jim

> 
> Cheers,
> Michael
> 
> -- 
> Rackspace Australia

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Ci] New 'fuel-nailgun' gate job

2016-06-07 Thread Dmitry Kaiharodsev
Hi to all,

please be informed that starting from today we're launching the gate job [1]
for the 'fuel-nailgun' package [2] in non-voting mode.

The job will be triggered on each commit and will perform these steps:
- build a package from the commit
- run the system test scenario [3] using the created package
- show the system test result in the current patchset without voting

We're going to enable voting mode once it is approved by the 'fuel-qa'
team.
An additional notification regarding voting mode will be sent in this thread.

For any additional questions please use our #fuel-infra IRC channel

[1] https://bugs.launchpad.net/fuel/+bug/1557524
[2] https://github.com/openstack/fuel-nailgun-agent
[3]
https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_nailgun_agent.py#L38-45

-- 
Kind Regards,
Dmitry Kaigarodtsev
IRC: dkaigarodtsev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Deprecating the live_migration_flag and block_migration_flag config options

2016-06-07 Thread Koniszewski, Pawel
There is another fix proposed by Eli Qiao a long time ago:

https://review.openstack.org/#/c/310707

Basically it blocks block live migration with BDMs and tunneling during 
pre-checks.

From: Timofei Durakov [mailto:tdura...@mirantis.com]
Sent: Tuesday, June 7, 2016 9:04 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova][libvirt] Deprecating the 
live_migration_flag and block_migration_flag config options

I've submitted one more patch with potential fix:

https://review.openstack.org/#/c/326258/

Timofey

On Mon, Jun 6, 2016 at 11:58 PM, Timofei Durakov wrote:
On Mon, Jun 6, 2016 at 11:26 PM, Matt Riedemann wrote:
On 6/6/2016 12:15 PM, Matt Riedemann wrote:
On 1/8/2016 12:28 PM, Mark McLoughlin wrote:
On Fri, 2016-01-08 at 14:11 +, Daniel P. Berrange wrote:
On Thu, Jan 07, 2016 at 09:07:00PM +, Mark McLoughlin wrote:
On Thu, 2016-01-07 at 12:23 +0100, Sahid Orentino Ferdjaoui
wrote:
On Mon, Jan 04, 2016 at 09:12:06PM +, Mark McLoughlin
wrote:
Hi

commit 8ecf93e[1] got me thinking - the live_migration_flag
config option unnecessarily allows operators choose arbitrary
behavior of the migrateToURI() libvirt call, to the extent
that we allow the operator to configure a behavior that can
result in data loss[1].

I see that danpb recently said something similar:

https://review.openstack.org/171098

"Honestly, I wish we'd just kill off  'live_migration_flag'
and 'block_migration_flag' as config options. We really
should not be exposing low level libvirt API flags as admin
tunable settings.

Nova should really be in charge of picking the correct set of
flags for the current libvirt version, and the operation it
needs to perform. We might need to add other more sensible
config options in their place [..]"

Nova should really handle internal flags and this serie is
running in the right way.
...

4) Add a new config option for tunneled versus native:

[libvirt] live_migration_tunneled = true

This enables the use of the VIR_MIGRATE_TUNNELLED flag. We
have historically defaulted to tunneled mode because it
requires the least configuration and is currently the only
way to have a secure migration channel.

danpb's quote above continues with:

"perhaps a "live_migration_secure_channel" to indicate that
migration must use encryption, which would imply use of
TUNNELLED flag"

So we need to discuss whether the config option should
express the choice of tunneled vs native, or whether it
should express another choice which implies tunneled vs
native.

https://review.openstack.org/263434

We probably have to consider that operator does not know much
about internal libvirt flags, so options we are exposing for
him should reflect benefice of using them. I commented on your
review we should at least explain benefice of using this option
whatever the name is.

As predicted, plenty of discussion on this point in the review
:)

You're right that we don't give the operator any guidance in the
help message about how to choose true or false for this:

Whether to use tunneled migration, where migration data is
transported over the libvirtd connection. If True, we use the
VIR_MIGRATE_TUNNELLED migration flag

libvirt's own docs on this are here:

https://libvirt.org/migration.html#transport

which emphasizes:

- the data copies involved in tunneling - the extra configuration
steps required for native - the encryption support you get when
tunneling

The discussions I've seen on this topic wrt Nova have revolved
around:

- that tunneling allows for an encrypted transport[1] - that
qemu's NBD based drive-mirror block migration isn't supported
using tunneled mode, and that danpb is working on fixing this
limitation in libvirt - "selective" block migration[2] won't work
with the fallback qemu block migration support, and so won't
currently work in tunneled mode

I'm not working on fixing it, but IIRC some other dev had proposed
patches.

So, the advise to operators would be:

- You may want to choose tunneled=False for improved block
migration capabilities, but this limitation will go away in
future. - You may want to choose tunneled=False if you wish to
trade and encrypted transport for a (potentially negligible)
performance improvement.

Does that make sense?

As for how to name the option, and as I said in the review, I
think it makes sense to be straightforward here and make it
clearly about choosing to disable libvirt's tunneled transport.

If we name it any other way, I think our explanation for
operators will immediately jump to explaining (a) that it
influences the TUNNELLED flag, and (b) the differences between
the tunneled and native transports. So, if we're going to have to
talk about tunneled versus native, why obscure that detail?

Ultimately we need to recognise that libvirt's tunnelled mode was
added as a 

Re: [openstack-dev] [neutron][SFC]

2016-06-07 Thread Alioune
Hi Mohan/Cathy
 I've now installed OVS 2.4.0 and followed
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining but I
got this error:
Regards,

+ neutron-ovs-cleanup
2016-06-07 11:25:36.465 22147 INFO neutron.common.config [-] Logging
enabled!
2016-06-07 11:25:36.468 22147 INFO neutron.common.config [-]
/usr/local/bin/neutron-ovs-cleanup version 7.1.1.dev4
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl [-]
Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline',
'--format=json', '--', 'list-br'].
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
Traceback (most recent call last):
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl   File
"/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in
run_vsctl
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
log_fail_as_error=False).rstrip()
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl   File
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
raise RuntimeError(m)
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
RuntimeError:
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Command:
['sudo', 'ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--',
'list-br']
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl Exit
code: 1
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
2016-06-07 11:25:36.505 22147 ERROR neutron.agent.ovsdb.impl_vsctl
2016-06-07 11:25:36.512 22147 CRITICAL neutron [-] RuntimeError:
Command: ['sudo', 'ovs-vsctl', '--timeout=10', '--oneline',
'--format=json', '--', 'list-br']
Exit code: 1

2016-06-07 11:25:36.512 22147 ERROR neutron Traceback (most recent call
last):
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/usr/local/bin/neutron-ovs-cleanup", line 10, in 
2016-06-07 11:25:36.512 22147 ERROR neutron sys.exit(main())
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/cmd/ovs_cleanup.py", line 89, in main
2016-06-07 11:25:36.512 22147 ERROR neutron ovs_bridges =
set(ovs.get_bridges())
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/common/ovs_lib.py", line 132, in
get_bridges
2016-06-07 11:25:36.512 22147 ERROR neutron return
self.ovsdb.list_br().execute(check_error=True)
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 83, in execute
2016-06-07 11:25:36.512 22147 ERROR neutron txn.add(self)
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/ovsdb/api.py", line 70, in __exit__
2016-06-07 11:25:36.512 22147 ERROR neutron self.result = self.commit()
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 50, in commit
2016-06-07 11:25:36.512 22147 ERROR neutron res = self.run_vsctl(args)
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 70, in
run_vsctl
2016-06-07 11:25:36.512 22147 ERROR neutron ctxt.reraise = False
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204,
in __exit__
2016-06-07 11:25:36.512 22147 ERROR neutron six.reraise(self.type_,
self.value, self.tb)
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in
run_vsctl
2016-06-07 11:25:36.512 22147 ERROR neutron
log_fail_as_error=False).rstrip()
2016-06-07 11:25:36.512 22147 ERROR neutron   File
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute
2016-06-07 11:25:36.512 22147 ERROR neutron raise RuntimeError(m)
2016-06-07 11:25:36.512 22147 ERROR neutron RuntimeError:
2016-06-07 11:25:36.512 22147 ERROR neutron Command: ['sudo', 'ovs-vsctl',
'--timeout=10', '--oneline', '--format=json', '--', 'list-br']
2016-06-07 11:25:36.512 22147 ERROR neutron Exit code: 1
2016-06-07 11:25:36.512 22147 ERROR neutron
2016-06-07 11:25:36.512 22147 ERROR neutron
+ exit_trap
+ local r=1
++ jobs -p
+ jobs=
+ [[ -n '' ]]
+ kill_spinner
+ '[' '!' -z '' ']'
+ [[ 1 -ne 0 ]]
+ echo 'Error on exit'
Error on exit
+ generate-subunit 1465296797 1939 fail
+ [[ -z /opt/stack/logs ]]
+ /home/alioune/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2016-06-07-112537.txt for
details
+ exit 1


On 7 June 2016 at 12:08, Mohan Kumar  wrote:

> Hi shihanzhang / Alioune ,
>
> Your kernel (check with uname -r) should support the OVS version; the table
> below compares kernel versions and the corresponding Open vSwitch release
> support
>
> | Open vSwitch | Linux kernel
> |::|:-:
> |1.4.x | 2.6.18 to 3.2
> |1.5.x | 2.6.18 to 3.2
> |1.6.x | 2.6.18 to 3.2
> |

[openstack-dev] Template Validate API

2016-06-07 Thread Har-Tal, Liat (Nokia - IL)
Hi, 

I added the Template Validation API and it is now available through both the
API and the CLI ☺

The validation itself consists of content and structure tests.
It is possible to check a single template or several templates by providing a
full path as a parameter in the API request.

• Given a full path to a template file, validate that single template.
• Given a full path to a directory, validate all template files inside it.

Request:
---

Headers:
-  X-Auth-Token (string, required) - Keystone auth token
-  Accept (string) - application/json
-  User-Agent (String)
-  Content-Type (String): application/json

Query Parameters
-  path (string(255), required) - the path to template file or directory

CLI Request Example:
vitrage template validate --path /tmp/broken_templates

URL Request Example:

POST /v1/template/?path=/tmp/broken_templates/basic.yaml
    Host: 135.248.18.122:8999
    User-Agent: keystoneauth1/2.3.0 python-requests/2.9.1 CPython/2.7.6
    Content-Type: application/json
    Accept: application/json
    X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
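
For completeness, a minimal Python sketch of issuing the same request with the
requests library (the endpoint, token and path below are just the placeholder
values from the example above):

    import requests

    VITRAGE_ENDPOINT = 'http://135.248.18.122:8999'
    TOKEN = '2b8882ba2ec44295bf300aecb2caa4f7'

    resp = requests.post(
        VITRAGE_ENDPOINT + '/v1/template/',
        params={'path': '/tmp/broken_templates/basic.yaml'},
        headers={'X-Auth-Token': TOKEN,
                 'Content-Type': 'application/json',
                 'Accept': 'application/json'})

    # Each result describes the full validation of one template file.
    for result in resp.json()['results']:
        print('%s: %s (%s)' % (result['file path'], result['status'],
                               result['message']))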

Response:


Returns a JSON object that is a list of results.
Each result describes the full validation (syntax and content) of one template 
file.

Result’s fields:
1. status - validation succeeded/failed
2.  file path - the full path to the template file
3. Description
4. message - error message
5. status code

Response Example:

{
  "results": [
    {
  "status": "validation failed",
  "file path": "/tmp/templates/basic_no_meta.yaml",
      "description": "Template syntax validation",
  "message": "metadata is a mandatory section.",
  "status code": 62
    },
    {
  "status": "validation OK",
  "file path": "/tmp/templates/basic.yaml",
  "description": "Template validation",
  "message": "Template validation is OK",
  "status code": 4
    }
  ]
}

For more information, see the Vitrage wiki.

Liat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-06-07 Thread Alex Xu
Hi,

We have the weekly Nova API meeting tomorrow. The meeting is held on
Wednesday at 1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Unable to plug VIP

2016-06-07 Thread Babu Shanmugam

Hi,
I am using Octavia deployed with devstack. I am *never* able to 
successfully create a load balancer. The following is my investigation:


1. When a loadbalancer is created, octavia controller sends plug_vip 
request to the amphora VM.

2. It waits for some time till the connection to the amphora is established.
3. After it successfully connects, octavia controller gets response 
{"details": "No suitable network interface found"} to plug_vip request.
4. I tried to get the list of interfaces attached to the amphora VM 
using subprocess.check_cmd("sudo virsh domiflist") before returning from 
plug_vip 
(https://github.com/openstack/octavia/blob/master/octavia/amphorae/drivers/haproxy/rest_api_driver.py#L326) 
and found that there is indeed a veth device attached to the VM with 
that MAC sent in the request.
5. From the amphora server code, I could understand that the possible 
place for this exception is 
https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/plug.py#L53. 
But, when I ssh to the amphora VM, I was able to see the 
'amphora-haproxy' netns created and also the interface configuration 
file for eth1 device, which should have been executed after #L53.
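
For what it's worth, the failing check is presumably "find the interface whose
MAC matches the one sent in the plug_vip request". A small standalone sketch of
that lookup (using the netifaces library; the MAC below is a placeholder) can
be run inside the amphora to see what the agent would find:

    import netifaces

    requested_mac = 'fa:16:3e:00:00:01'  # placeholder: use the MAC from the plug_vip request

    for iface in netifaces.interfaces():
        link_addrs = netifaces.ifaddresses(iface).get(netifaces.AF_LINK, [])
        if any(addr.get('addr') == requested_mac for addr in link_addrs):
            print('found interface %s for MAC %s' % (iface, requested_mac))
            break
    else:
        print('No suitable network interface found')

One thing worth checking is whether the interface has already been moved into
the amphora-haproxy namespace by an earlier (partial) run; an interface that
only exists inside the namespace would not be visible to a lookup in the
default namespace.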


I am not sure why this problem happens. Everything seems to be fine, but 
I am still facing this problem. Have you seen this problem before?


Thank you,
Babu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [odl-networking] Devstack with ODL using Mitaka?

2016-06-07 Thread Wojciech Dec
Hi Openstack dev Folks,

I'd appreciate your help with setting up the necessary local.conf for using
ODL as the controller, along with Mitaka.

Would anyone be able to share a working, Mitaka-updated local.conf?

A more specific problem I've been experiencing, is that the devstack script
insists on pulling and starting ODL no matter what ODL_MODE setting is
used. According to :
https://github.com/openstack/networking-odl/blob/master/devstack/settings
I've been using ODL_MODE=externalodl as well as =manual. In both cases
devstack starts the ODL it pulls.

Thanks,
Wojtek.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] stepping down from core

2016-06-07 Thread Ryan Hallisey
Thanks for all the hard work Jeff!  I'm sure our paths will cross again!

-Ryan

- Original Message -
From: "Michał Jastrzębski" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, June 6, 2016 7:13:00 PM
Subject: Re: [openstack-dev] [kolla] stepping down from core

Damn, bad news:( All the best Jeff!

On 6 June 2016 at 17:57, Vikram Hosakote (vhosakot)  wrote:
> Thanks for all the contributions to kolla and good luck Jeff!
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: "Steven Dake (stdake)" 
> Reply-To: OpenStack Development Mailing List
> 
> Date: Monday, June 6, 2016 at 6:14 PM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [kolla] stepping down from core
>
> Jeff,
>
> Thanks for the notification.  Likewise it has been a pleasure working with
> you over the last 3 years on Kolla.  I've removed you from gerrit.
>
> You have made a big impact on Kolla.  For folks that don't know, at one
> point Kolla was nearly dead, and Jeff was one of our team of 3 that stuck
> to it.  Without Jeff to carry the work forward, OpenStack deployment in
> containers would have been set back years.
>
> Best wishes on what you work on next.
>
> Regards
> -steve
>
> On 6/6/16, 12:36 PM, "Jeff Peeler"  wrote:
>
> Hi all,
>
> This is my official announcement to leave core on Kolla /
> Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
> we'll cross paths again!
>
> Jeff
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Centralize Configuration: ignore service list for newton

2016-06-07 Thread John Garbutt
On 27 May 2016 at 17:15, Markus Zoeller  wrote:
> On 20.05.2016 11:33, John Garbutt wrote:
>> Hi,
>>
>> The current config template includes a list of "Services which consume this":
>> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/centralize-config-options.html#quality-view
>>
>> I propose we drop this list from the template.
>>
>> I am worried this is going to be hard to maintain, and hard to review
>> / check. As such, its of limited use to most deployers in its current
>> form.
>>
>> I have been thinking about a possible future replacement. Two separate
>> sample configuration files, one for the Compute node, and one for
>> non-compute nodes (i.e. "controller" nodes). The reason for this
>> split, is our move towards removing sensitive credentials from compute
>> nodes, etc. Over time, we could prove the split in gate testing, where
>> we look for conf options accessed by computes that shouldn't be, and
>> v.v.
>>
>>
>> Having said that, for newton, I propose we concentrate on:
>> * completing the move of all the conf options (almost there)
>> * (skip tidy up of deprecated options)
>> * tidying up the main description of each conf option
>> * tidy up the Opt group and Opt types, i.e. int min/max, str choices, etc
>> ** move options to use stevedore, where needed
>> * deprecating ones that are dumb / unused
>> * identifying "required" options (those you have to set)
>> * add config group descriptions
>> * note any surprising dependencies or value meanings (-1 vs 0 etc)
>> * ensure the docs and sample files are complete and correct
>>
>> I am thinking we could copy API ref and add a comment at the top of
>> each file (expecting a separate patch for each step):
>> * fix_opt_registration_consistency (see sfinucan's tread)
>> * fix_opt_description_indentation
>> * check_deprecation_status
>> * check_opt_group_and_type
>> * fix_opt_description
>
>
> I pushed [1] which introduced the flags from above. I reordered them
> from most to least important, which is IMO:
>
> # needs:fix_opt_description
> # needs:check_deprecation_status
> # needs:check_opt_group_and_type
> # needs:fix_opt_description_indentation
> # needs:fix_opt_registration_consistency

This looks good to me:
https://review.openstack.org/#/c/322255/1

sneti (Sujitha) has put together a wiki page to help describe what
each step means:
https://wiki.openstack.org/wiki/ConfigOptionsConsistency
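
For anyone picking up one of these flags, here is a hedged sketch of what a
"tidied" option definition could look like once the steps above are applied
(the group, option names, values and help text are invented for illustration;
they are not real Nova options):

    from oslo_config import cfg

    example_group = cfg.OptGroup(
        name='example',
        title='Example options',
        help='Illustration only - not a real Nova option group.')

    example_opts = [
        cfg.IntOpt('workers',
                   min=1, max=64, default=4,
                   help='Number of worker processes (1-64).'),
        cfg.StrOpt('placement_mode',
                   choices=['spread', 'pack'], default='spread',
                   help='How instances are placed across hosts.'),
        cfg.BoolOpt('use_legacy_path',
                    default=False,
                    deprecated_for_removal=True,
                    deprecated_reason='The legacy code path is unused.',
                    help='Deprecated option kept only as an example.'),
    ]

    def register_opts(conf):
        conf.register_group(example_group)
        conf.register_opts(example_opts, group=example_group)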

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][SFC]

2016-06-07 Thread Mohan Kumar
Hi shihanzhang / Alioune ,

Your kernel (check with uname -r) should support the OVS version; the table
below compares kernel versions and the corresponding Open vSwitch release
support

| Open vSwitch | Linux kernel
|::|:-:
|1.4.x | 2.6.18 to 3.2
|1.5.x | 2.6.18 to 3.2
|1.6.x | 2.6.18 to 3.2
|1.7.x | 2.6.18 to 3.3
|1.8.x | 2.6.18 to 3.4
|1.9.x | 2.6.18 to 3.8
|1.10.x| 2.6.18 to 3.8
|1.11.x| 2.6.18 to 3.8
|2.0.x | 2.6.32 to 3.10
|2.1.x | 2.6.32 to 3.11
|2.3.x | 2.6.32 to 3.14
|2.4.x | 2.6.32 to 4.0
|2.5.x | 2.6.32 to 4.3

http://openvswitch.org/support/dist-docs/FAQ.md.txt (
### Q: What Linux kernel versions does each Open vSwitch release work with?)
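
If in doubt, a quick way to check both versions on a node is something like the
following small Python sketch (plain 'uname -r' and 'ovs-vsctl --version' in a
shell give the same information):

    import subprocess

    kernel = subprocess.check_output(['uname', '-r']).decode().strip()
    ovs = subprocess.check_output(['ovs-vsctl', '--version']).decode().splitlines()[0]

    print('kernel       : %s' % kernel)
    print('open vswitch : %s' % ovs)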

I installed SFC with OVS 2.4.0 and 2.5.0 and have not seen any issues

Please check the SFC wiki for installation guidelines:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining


Thanks.,

Mohankumar.N





On Tue, Jun 7, 2016 at 1:46 PM, shihanzhang  wrote:

> Hi Alioune and Cathy,
>  For devstack on ubuntu14.04, the default ovs version is 2.0.2, so
> there was the error as Alioune said.
>  Do we need to install speical ovs version in networking-sfc devstack
> plugin.sh?
>
>
>
>
>
> On 2016-06-07 07:48:26, "Cathy Zhang" wrote:
>
> Hi Alioune,
>
>
>
> Which OVS version are you using?
>
> Try openvswitch version 2.4.0 and restart the openvswitch-server before
> installing the devstack.
>
>
>
> Cathy
>
>
>
> *From:* Alioune [mailto:baliou...@gmail.com]
> *Sent:* Friday, June 03, 2016 9:07 AM
> *To:* openstack-dev@lists.openstack.org
> *Cc:* Cathy Zhang
> *Subject:* [openstack-dev][neutron][SFC]
>
>
>
> Problem with OpenStack SFC
>
> Hi all,
>
> I've installed OpenStack SFC with devstack and all modules are correctly
> running except the neutron L2-agent
>
>
>
> After a "screen -rd", it seems that there is a conflict between l2-agent
> and SFC (see trace below).
>
> I solved the issue with "sudo ovs-vsctl set bridge br
> protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13" on all openvswitch
> bridge (br-int, br-ex, br-tun and br-mgmt0).
>
> I would like to know:
>
>   - If someone knows why this error arises?
>
>  - is there another way to solve it ?
>
>
>
> Regards,
>
>
>
> 2016-06-03 12:51:56.323 WARNING
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
> [req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] OVS is dead.
> OVSNeutronAgent will keep running and checking OVS status periodically.
>
> 2016-06-03 12:51:56.330 DEBUG
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
> [req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop -
> iteration:4722 completed. Processed ports statistics: {'regular':
> {'updated': 0, 'added': 0, 'removed': 0}}. Elapsed:0.086 from (pid=12775)
> loop_count_and_wait
> /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1680
>
> 2016-06-03 12:51:58.256 DEBUG
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
> [req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop -
> iteration:4723 started from (pid=12775) rpc_loop
> /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1732
>
> 2016-06-03 12:51:58.258 DEBUG neutron.agent.linux.utils
> [req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Running command
> (rootwrap daemon): ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int',
> 'table=23'] from (pid=12775) execute_rootwrap_daemon
> /opt/stack/neutron/neutron/agent/linux/utils.py:101
>
> 2016-06-03 12:51:58.311 ERROR neutron.agent.linux.utils
> [req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]
>
> Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
>
> Exit code: 1
>
> Stdin:
>
> Stdout:
>
> Stderr:
> 2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt:
> version negotiation failed (we support version 0x04, peer supports version
> 0x01)
>
> ovs-ofctl: br-int: failed to connect to socket (Broken pipe)
>
>
>
> 2016-06-03 12:51:58.323 ERROR
> networking_sfc.services.sfc.common.ovs_ext_lib
> [req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]
>
> Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
>
> Exit code: 1
>
> Stdin:
>
> Stdout:
>
> Stderr:
> 2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt:
> version negotiation failed (we support version 0x04, peer supports version
> 0x01)
>
> ovs-ofctl: br-int: failed to connect to socket (Broken pipe)
>
>
>
> 2016-06-03 12:51:58.323 TRACE
> networking_sfc.services.sfc.common.ovs_ext_lib Traceback (most recent call
> last):
>
> 2016-06-03 12:51:58.323 TRACE
> networking_sfc.services.sfc.common.ovs_ext_lib   File
> "/opt/stack/networking-sfc/networking_sfc/services/sfc/common/ovs_ext_lib.py",
> line 125, in run_ofctl
>
> 2016-06-03 12:51:58.323 TRACE
> 

Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-06-07 Thread Aleksandr Didenko
Hi,

you don't need to change anything in your plugin, we still have the same
netconfig.pp task on all nodes even after the bugfix.

Regards,
Alex

On Tue, Jun 7, 2016 at 8:21 AM, Igor Zinovik  wrote:

>   Hello,
>
> Aleksandr, one simple question: do I as a plugin developer for upcoming
> Fuel 9.0 have
> to worry about these network-related changes or not? HCF is approaching,
> but patch
> that you mentioned (342307 ) is
> still not merged. Do I need to spend time on understanding
> it and change plugins deployment tasks
> 
> according to the netconfig.pp refactoring?
>
>
> On 6 June 2016 at 11:12, Aleksandr Didenko  wrote:
>
>> Hi,
>>
>> a bit different patch is on review now [0]. Instead of silently replacing
>> default gateway on the fly in netconfig.pp task it's putting new default
>> gateway into Hiera. Thus we'll have idempotency for subsequent netconfig.pp
>> runs even on Mongo roles. Also we'll have consistent network configuration
>> data in Hiera which any plugin can rely on.
>>
>> I've built a custom ISO with this patch and run a set of custom tests on
>> it to cover multi-role and multi-rack cases [1] plus BVT - everything
>> worked fine.
>>
>> Please feel free to review and comment the patch [0].
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/324307
>> [1] http://paste.openstack.org/show/508319/
>>
>> On Wed, Jun 1, 2016 at 4:47 PM, Aleksandr Didenko 
>> wrote:
>>
>>> Hi,
>>>
>>> YAQL expressions support for task dependencies has been added to Nailgun
>>> [0]. So now it's possible to fix network configuration idempotency issue
>>> without introducing new 'netconfig' task [1]. There will be no problems
>>> with loops in task graph in such case (tested on multiroles, worked fine).
>>> When we deprecate role-based deployment (even emulated), then we'll be able
>>> to remove all those additional conditions from manifests and remove
>>> 'configure_default_route' task completely. Please feel free to review and
>>> comment the patch [1].
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://review.openstack.org/#/c/320861/
>>> [1] https://review.openstack.org/#/c/322872/
>>>
>>> On Wed, May 25, 2016 at 10:39 AM, Simon Pasquier >> > wrote:
>>>
 Hi Adam,
 Maybe you want to look into network templates [1]? Although the
 documentation is a bit sparse, it allows you to define flexible network
 mappings.
 BR,
 Simon
 [1]
 https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates

 On Wed, May 25, 2016 at 10:26 AM, Adam Heczko 
 wrote:

> Thanks Alex, will experiment with it once again although AFAIR it
> doesn't solve thing I'd like to do.
> I'll come later to you in case of any questions.
>
>
> On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko <
> adide...@mirantis.com> wrote:
>
>> Hey Adam,
>>
>> in Fuel we have the following option (checkbox) on Network Setting
>> tab:
>>
>> Assign public network to all nodes
>> When disabled, public network will be assigned to controllers only
>>
>> So if you uncheck it (by default it's unchecked) then public network
>> and 'br-ex' will exist on controllers only. Other nodes won't even have
>> "Public" network on node interface configuration UI.
>>
>> Regards,
>> Alex
>>
>> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
>> wrote:
>>
>>> Hello Alex,
>>> I have a question about the proposed changes.
>>> Is it possible to introduce new vlan and associated bridge only for
>>> controllers?
>>> I think about DMZ use case and possibility to expose public IPs/VIP
>>> and API endpoints on controllers on a completely separate L2 network
>>> (segment vlan/bridge) not present on any other nodes than controllers.
>>> Thanks.
>>>
>>> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi folks,

 we had to revert those changes [0] since it's impossible to propery
 handle two different netconfig tasks for multi-role nodes. So 
 everything
 stays as it was before - we have single task 'netconfig' to configure
 network for all roles and you don't need to change anything in your
 plugins. Sorry for inconvenience.

 Our current plan for fixing network idempotency is to keep one task
 but change 'cross-depends' parameter to yaql_exp. This will allow us 
 to use
 single 'netconfig' task for all roles but at the same time we'll be 
 able to
 properly order it: netconfig on non-controllers will be 

Re: [openstack-dev] [kolla] OSIC cluster accepteed

2016-06-07 Thread Paul Bourke

Michal,

I'd be interested in helping with this. Keep us updated!

-Paul

On 03/06/16 17:58, Michał Jastrzębski wrote:

Hello Kollagues,

Some of you might know that I submitted a request for 130 nodes of the
OSIC cluster for testing Kolla. We just got accepted. The time window will
be 3 weeks between 7/22 and 8/14, so we need to make the most of it. I'd
like some volunteers to help me with tests, setup and such. We need to
prepare test scenarios, streamline bare metal deployment and prepare the
architectures we want to run through. I would also make use of our
global distribution to keep the nodes utilized 24h a day.

The nodes we're talking about are pretty powerful: 256 GB of RAM each, 12
SSD disks in each and 10 Gig networking all the way. We will get IPMI
access to them, so bare metal provisioning will have to be there too
(good time to test out bifrost, right? :))

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-07 Thread Henry Nash
OK, so thanks for the feedback - understand the message.

However, in terms of compatibility, the one thing that concerns me about the 
hierarchical naming approach is that even with microversioning, we might still 
surprise a client. An unmodified client (i.e. doesn’t understand 3.7) would 
still see a change in the data being returned (the project names have suddenly 
become full path names). We have to return this even if they don’t ask for 3.7, 
since otherwise there is no difference between this approach and relaxing the 
project naming in terms of trying to prevent auth breakages.

In more detail:

1) Both approaches were planned to return the path name (instead of the node 
name) in GET /auth/projects - i.e. the API you are meant to use to find out 
what you can scope to
2) Both approaches were planned to accept the path name in the auth request 
block
3) The difference in hierarchical naming is that if I do a regular GET 
/project(s) I also see the full path name as the “project name”

if we don’t do 3), then code that somehow authenticates, and then uses the 
regular GET /project(s) calls to find a project name and then re-scopes (or 
re-auths) to that name, will fail if the project they want is not a top level 
project. However, the flip side is that if there is code that uses these same 
calls to, say, display projects to the user (e.g. a home grown UI) - then it 
might get confused until it supports 3.7 (i.e. asking for the old microversion 
won’t help it) since all the names include the hierarchical path.

Just want to make sure we understand the implications….
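
To make point 3 concrete, here is a rough, purely illustrative sketch of what a
client could see for /mydivsion/projectA/dev under the two approaches (these
are simplified payloads, not the exact API schema, and the path delimiter is
part of what the specs would settle):

    # 1) Relax project name constraints: 'name' stays the node name
    project_relaxed = {'id': '1234', 'name': 'dev',
                       'parent_id': 'id-of-projectA'}

    # 2) Hierarchical project naming: 'name' becomes the full path,
    #    returned even to clients that never asked for microversion 3.7
    project_hierarchical = {'id': '1234', 'name': 'mydivsion/projectA/dev',
                            'parent_id': 'id-of-projectA'}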

Henry

> On 4 Jun 2016, at 08:34, Monty Taylor  wrote:
> 
> On 06/04/2016 01:53 AM, Morgan Fainberg wrote:
>> 
>> On Jun 3, 2016 12:42, "Lance Bragstad" > 
>> >> wrote:
>>> 
>>> 
>>> 
>>> On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash >> 
>> >> wrote:
 
 
> On 3 Jun 2016, at 16:38, Lance Bragstad  
>> >> wrote:
> 
> 
> 
> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  
>> >> wrote:
>> 
>> 
>>> On 3 Jun 2016, at 01:22, Adam Young >> 
>> >> wrote:
>>> 
>>> On 06/02/2016 07:22 PM, Henry Nash wrote:
 
 Hi
 
 As you know, I have been working on specs that change the way we
>> handle the uniqueness of project names in Newton. The goal of this is to
>> better support project hierarchies, which as they stand today are
>> restrictive in that all project names within a domain must be unique,
>> irrespective of where in the hierarchy that projects sits (unlike, say,
>> the unix directory structure where a node name only has to be unique
>> within its parent). Such a restriction is particularly problematic when
>> enterprise start modelling things like test, QA and production as
>> branches of a project hierarchy, e.g.:
 
 /mydivsion/projectA/dev
 /mydivsion/projectA/QA
 /mydivsion/projectA/prod
 /mydivsion/projectB/dev
 /mydivsion/projectB/QA
 /mydivsion/projectB/prod
 
 Obviously the idea of a project name (née tenant) being unique
>> has been around since near the beginning of (OpenStack) time, so we must
>> be cautions. There are two alternative specs proposed:
 
 1) Relax project name
>> constraints: https://review.openstack.org/#/c/310048/ 
 2) Hierarchical project
>> naming: https://review.openstack.org/#/c/318605/
 
 First, here’s what they have in common:
 
 a) They both solve the above problem
 b) They both allow an authorization scope to use a path rather
>> than just a simple name, hence allowing you to address a project
>> anywhere in the hierarchy
 c) Neither have any impact if you are NOT using a hierarchy -
>> i.e. if you just have a flat layer of projects in a domain, then they
>> have no API or semantic impact (since both ensure that a project’s name
>> must still be unique within a parent)
 
 Here’s how the differ:
 
 - Relax project name constraints (1), keeps the meaning of the
>> ‘name’ attribute of a project to be its node-name in the hierarchy, but
>> formally relaxes the uniqueness constraint to say that it only has to be
>> unique within its parent. In other words, let’s really model this a bit
>> like a unix directory tree.
> 
> I think I lean towards relaxing the project name constraint. The
>> reason is because we already 

[openstack-dev] [smaug] Smaug video

2016-06-07 Thread xiangxinyong
Hello everyone,


This [1] is a video introducing Smaug and the Smaug dashboard.
Please feel free to give your feedback.


IRC: #openstack-smaug


[1] https://youtu.be/_tVYuW_YMB8


Best Regards,
  xiangxinyong__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][neutron] Fwd: [Openstack-stable-maint] Stable check of openstack/octavia failed

2016-06-07 Thread Ihar Hrachyshka
Right, the fix is four-fold:

1. constraint gate with upper-constraints (done for both stable branches);
2. exclude Flask 0.11 from global-requirements.txt (landed in stable/mitaka, 
landing in stable/liberty);
3. once ^ is merged, sync exclusion into octavia (landed in stable/mitaka; 
waiting for stable/liberty);
4. fix breakage with Flask 0.11 (landed in stable/mitaka, stable/liberty in 
progress).

We are experiencing a problem in stable/liberty landing the fix for Flask 0.11
(point 4 above):

https://review.openstack.org/#/c/324312/1

That seems to be unrelated to the Flask issue, but is a backwards-incompatible 
change in taskflow 1.32.0+ that broke the Octavia unit tests. I reported a bug 
against taskflow:

https://bugs.launchpad.net/taskflow/+bug/1589848

Ihar

> On 06 Jun 2016, at 20:59, Michael Johnson  wrote:
> 
> Hi Matt,
> 
> We are aware of the issue and have cherry picked patches pending
> review by the neutron stable team:
> https://review.openstack.org/#/q/openstack/octavia+status:open+branch:stable/mitaka
> https://review.openstack.org/#/q/openstack/octavia+status:open+branch:stable/liberty
> 
> Michael
> 
> On Mon, Jun 6, 2016 at 11:27 AM, Matt Riedemann
>  wrote:
>> Can someone from the Octavia team check on the stable/liberty failures for
>> the unit test runs?  Those have been failing for several weeks, if not
>> months, now, which makes having a job run Octavia unit tests on the
>> periodic-stable queue pointless since they never pass.
>> 
>> Keep in mind the octavia repo has the stable:follows-policy tag in the
>> governance repo [1] and part of that tag being applied to the project is
>> actually maintaining the stable branches, which includes keeping the CI jobs
>> running.
>> 
>> [1]
>> https://governance.openstack.org/reference/projects/neutron.html#project-neutron
>> 
>> 
>>  Forwarded Message 
>> Subject: [Openstack-stable-maint] Stable check of openstack/octavia failed
>> Date: Mon, 06 Jun 2016 06:23:15 +
>> From: A mailing list for the OpenStack Stable Branch test reports.
>> 
>> Reply-To: openstack-dev@lists.openstack.org
>> To: openstack-stable-ma...@lists.openstack.org
>> 
>> Build failed.
>> 
>> - periodic-octavia-docs-liberty
>> http://logs.openstack.org/periodic-stable/periodic-octavia-docs-liberty/9796536/
>> : SUCCESS in 3m 01s
>> - periodic-octavia-python27-liberty
>> http://logs.openstack.org/periodic-stable/periodic-octavia-python27-liberty/6d96415/
>> : FAILURE in 4m 36s
>> - periodic-octavia-docs-mitaka
>> http://logs.openstack.org/periodic-stable/periodic-octavia-docs-mitaka/b2074b4/
>> : SUCCESS in 3m 36s
>> - periodic-octavia-python27-mitaka
>> http://logs.openstack.org/periodic-stable/periodic-octavia-python27-mitaka/f220954/
>> : SUCCESS in 3m 59s
>> 
>> ___
>> Openstack-stable-maint mailing list
>> openstack-stable-ma...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][SFC]

2016-06-07 Thread shihanzhang
Hi Alioune and Cathy,
 For devstack on ubuntu14.04, the default ovs version is 2.0.2, so there 
was the error as Alioune said.
 Do we need to install a special ovs version in the networking-sfc devstack 
plugin.sh?






On 2016-06-07 07:48:26, "Cathy Zhang" wrote:


Hi Alioune,

 

Which OVS version are you using?

Try openvswitch version 2.4.0 and restart the openvswitch-server before 
installing the devstack.

 

Cathy

 

From: Alioune [mailto:baliou...@gmail.com]
Sent: Friday, June 03, 2016 9:07 AM
To:openstack-dev@lists.openstack.org
Cc: Cathy Zhang
Subject: [openstack-dev][neutron][SFC]

 

Problem with OpenStack SFC

Hi all, 

I've installed OpenStack SFC with devstack and all modules are correctly running 
except the neutron L2-agent

 

After a "screen -rd", it seems that there is a conflict between l2-agent and 
SFC (see trace below).

I solved the issue with "sudo ovs-vsctl set bridge br 
protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13" on all openvswitch 
bridge (br-int, br-ex, br-tun and br-mgmt0).
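
(A small sketch of applying that same workaround to every bridge in one go;
the bridge list is the one mentioned above and may differ per deployment:)

    import subprocess

    bridges = ['br-int', 'br-ex', 'br-tun', 'br-mgmt0']  # adjust to your setup
    protocols = 'protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13'

    for bridge in bridges:
        subprocess.check_call(
            ['sudo', 'ovs-vsctl', 'set', 'bridge', bridge, protocols])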

I would like to know:

  - If someone knows why this error arises?

 - is there another way to solve it ?

 

Regards,

 

2016-06-03 12:51:56.323 WARNING 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] OVS is dead. 
OVSNeutronAgent will keep running and checking OVS status periodically.

2016-06-03 12:51:56.330 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop - 
iteration:4722 completed. Processed ports statistics: {'regular': {'updated': 
0, 'added': 0, 'removed': 0}}. Elapsed:0.086 from (pid=12775) 
loop_count_and_wait 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1680

2016-06-03 12:51:58.256 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop - 
iteration:4723 started from (pid=12775) rpc_loop 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1732

2016-06-03 12:51:58.258 DEBUG neutron.agent.linux.utils 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Running command (rootwrap 
daemon): ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23'] 
from (pid=12775) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:101

2016-06-03 12:51:58.311 ERROR neutron.agent.linux.utils 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]

Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']

Exit code: 1

Stdin:

Stdout:

Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)

ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

 

2016-06-03 12:51:58.323 ERROR networking_sfc.services.sfc.common.ovs_ext_lib 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]

Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']

Exit code: 1

Stdin:

Stdout:

Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)

ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

 

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Traceback (most recent call last):

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib   
File 
"/opt/stack/networking-sfc/networking_sfc/services/sfc/common/ovs_ext_lib.py", 
line 125, in run_ofctl

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
 process_input=process_input)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib   
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
 raise RuntimeError(m)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
RuntimeError:

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Exit code: 1

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stdin:

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stdout:

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

2016-06-03 

Re: [openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-07 Thread Monty Taylor
On 06/07/2016 01:32 AM, Buckley, Tim Jason wrote:
> Hello all,
> 
> I'd like to announce that StackViz will now be running at the end all
> tempest-dsvm jobs and saving visualization output to the log server.
> 
> StackViz is a visualization utility for generating interactive visualizations 
> of
> jobs in the OpenStack QA pipeline and aims to ease debugging and performance
> analysis tasks. Currently it renders an interactive timeline for subunit
> results and dstat data, but we are actively working to visualize more log 
> types
> in the future.
> 
> StackViz instances are saved as a 'stackviz' directory under 'logs' for each 
> job
> run on http://logs.openstack.org/. For an example, see:
> 
> http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/
> 
> For more information on StackViz, see the project page at:
> https://github.com/openstack/stackviz
> 
> Bugs can also be reported at:
> https://bugs.launchpad.net/stackviz
> 
> Feedback is greatly appreciated!

Great work! This is super awesome! Informative, lovely to look at. I'm
thrilled to have this as part of our tools!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Deprecating the live_migration_flag and block_migration_flag config options

2016-06-07 Thread Timofei Durakov
I've submitted one more patch with a potential fix:

https://review.openstack.org/#/c/326258/

Timofey

On Mon, Jun 6, 2016 at 11:58 PM, Timofei Durakov 
wrote:

> On Mon, Jun 6, 2016 at 11:26 PM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>> On 6/6/2016 12:15 PM, Matt Riedemann wrote:
>>
>>> On 1/8/2016 12:28 PM, Mark McLoughlin wrote:
>>>
 On Fri, 2016-01-08 at 14:11 +, Daniel P. Berrange wrote:

> On Thu, Jan 07, 2016 at 09:07:00PM +, Mark McLoughlin wrote:
>
>> On Thu, 2016-01-07 at 12:23 +0100, Sahid Orentino Ferdjaoui
>> wrote:
>>
>>> On Mon, Jan 04, 2016 at 09:12:06PM +, Mark McLoughlin
>>> wrote:
>>>
 Hi

 commit 8ecf93e[1] got me thinking - the live_migration_flag
 config option unnecessarily allows operators choose arbitrary
 behavior of the migrateToURI() libvirt call, to the extent
 that we allow the operator to configure a behavior that can
 result in data loss[1].

 I see that danpb recently said something similar:

 https://review.openstack.org/171098

 "Honestly, I wish we'd just kill off  'live_migration_flag'
 and 'block_migration_flag' as config options. We really
 should not be exposing low level libvirt API flags as admin
 tunable settings.

 Nova should really be in charge of picking the correct set of
 flags for the current libvirt version, and the operation it
 needs to perform. We might need to add other more sensible
 config options in their place [..]"

>>>
>>> Nova should really handle internal flags and this serie is
>>> running in the right way.
>>>
>>> ...

>>>
>>> 4) Add a new config option for tunneled versus native:

 [libvirt] live_migration_tunneled = true

 This enables the use of the VIR_MIGRATE_TUNNELLED flag. We
 have historically defaulted to tunneled mode because it
 requires the least configuration and is currently the only
 way to have a secure migration channel.

 danpb's quote above continues with:

 "perhaps a "live_migration_secure_channel" to indicate that
 migration must use encryption, which would imply use of
 TUNNELLED flag"

 So we need to discuss whether the config option should
 express the choice of tunneled vs native, or whether it
 should express another choice which implies tunneled vs
 native.

 https://review.openstack.org/263434

>>>
>>> We probably have to consider that operator does not know much
>>> about internal libvirt flags, so options we are exposing for
>>> him should reflect benefice of using them. I commented on your
>>> review we should at least explain benefice of using this option
>>> whatever the name is.
>>>
>>
>> As predicted, plenty of discussion on this point in the review
>> :)
>>
>> You're right that we don't give the operator any guidance in the
>> help message about how to choose true or false for this:
>>
>> Whether to use tunneled migration, where migration data is
>> transported over the libvirtd connection. If True, we use the
>> VIR_MIGRATE_TUNNELLED migration flag
>>
>> libvirt's own docs on this are here:
>>
>> https://libvirt.org/migration.html#transport
>>
>> which emphasizes:
>>
>> - the data copies involved in tunneling - the extra configuration
>> steps required for native - the encryption support you get when
>> tunneling
>>
>> The discussions I've seen on this topic wrt Nova have revolved
>> around:
>>
>> - that tunneling allows for an encrypted transport[1] - that
>> qemu's NBD based drive-mirror block migration isn't supported
>> using tunneled mode, and that danpb is working on fixing this
>> limitation in libvirt - "selective" block migration[2] won't work
>> with the fallback qemu block migration support, and so won't
>> currently work in tunneled mode
>>
>
> I'm not working on fixing it, but IIRC some other dev had proposed
> patches.
>
>
>> So, the advise to operators would be:
>>
>> - You may want to choose tunneled=False for improved block
>> migration capabilities, but this limitation will go away in
>> future. - You may want to choose tunneled=False if you wish to
>> trade and encrypted transport for a (potentially negligible)
>> performance improvement.
>>
>> Does that make sense?
>>
>> As for how to name the option, and as I said in the review, I
>> think it makes sense to be straightforward here and make it
>> clearly about choosing to disable libvirt's tunneled transport.
>>
>> If we name it any other way, I 

Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-06-07 Thread Renat Akhmerov

> On 04 Jun 2016, at 04:16, Doug Hellmann  wrote:
> 
> Excerpts from Joshua Harlow's message of 2016-06-03 09:14:05 -0700:
>> Deja, Dawid wrote:
>>> On Thu, 2016-05-05 at 11:08 +0700, Renat Akhmerov wrote:
 
> On 05 May 2016, at 01:49, Mehdi Abaakouk  > wrote:
> 
> 
> Le 2016-05-04 10:04, Renat Akhmerov a écrit :
>> No problem. Let’s not call it RPC (btw, I completely agree with that).
>> But it’s one of the messaging patterns and hence should be under
>> oslo.messaging I guess, no?
> 
> Yes and no, we currently have two APIs (rpc and notification). And
> personally I regret to have the notification part in oslo.messaging.
> 
> RPC and Notification are different beasts, and both are today limited
> in terms of feature because they share the same driver implementation.
> 
> Our RPC errors handling is really poor, for example Nova just put
> instance in ERROR when something bad occurs in oslo.messaging layer.
> This enforces deployer/user to fix the issue manually.
> 
> Our Notification system doesn't allow fine grain routing of message,
> everything goes into one configured topic/queue.
> 
> And now we want to add a new one... I'm not against this idea,
> but I'm not a huge fan.
> 
> Thoughts from folks (mistral and oslo)?
>>> Also, I was not at the Summit, should I conclude the Tooz+taskflow
>>> approach (that ensure the idempotent of the application within the
>>> library API) have not been accepted by mistral folks ?
>> Speaking about idempotency, IMO it’s not a central question that we
>> should be discussing here. Mistral users should have a choice: if they
>> manage to make their actions idempotent it’s excellent, in many cases
>> idempotency is certainly possible, btw. If no, then they know about
>> potential consequences.
> 
> You shouldn't mix the idempotency of the user task and the idempotency
> of a Mistral action (that will at the end run the user task).
> You can have your Mistral task runner implementation idempotent and just
> make the workflow to use configurable in case the user task is
> interrupted or badly finished even if the user task is idempotent or not.
> This makes the thing very predictable. You will know for example:
> * if the user task has started or not,
> * if the error is due to a node power cut when the user task runs,
> * if you can safely retry a not idempotent user task on an other node,
> * you will not be impacted by rabbitmq restart or TCP connection issues,
> * ...
> 
> With the oslo.messaging approach, everything will just end up in a
> generic MessageTimeout error.
> 
> The RPC API already have this kind of issue. Applications have
> unfortunately
> dealt with that (and I think they want something better now).
> I'm just not convinced we should add a new "working queue" API in
> oslo.messaging for tasks scheduling that have the same issue we already
> have with RPC.
> 
> Anyway, that's your choice, if you want rely on this poor structure,
> I will
> not be against, I'm not involved in Mistral. I just want everybody is
> aware
> of this.
> 
>> And even in this case there’s usually a number
>> of measures that can be taken to mitigate those consequences (reruning
>> workflows from certain points after manually fixing problems, rollback
>> scenarios etc.).
> 
> taskflow allows to describe and automate this kind of workflow really
> easily.
> 
>> What I’m saying is: let’s not make that crucial decision now about
>> what a messaging framework should support or not, let’s make it more
>> flexible to account for variety of different usage scenarios.
> 
> I think the confusion is in the "messaging" keyword, currently
> oslo.messaging
> is a "RPC" framework and a "Notification" framework on top of 'messaging'
> frameworks.
> 
> Messaging framework we uses are 'kombu', 'pika', 'zmq' and 'pingus'.
> 
>> It’s normal for frameworks to give more rather than less.
> 
> I disagree, here we mix different concepts into one library, all concepts
> have to be implemented by different 'messaging framework',
> So we fortunately give less to make thing just works in the same way
> with all
> drivers for all APIs.
> 
>> One more thing, at the summit we were discussing the possibility to
>> define at-most-once/at-least-once individually for Mistral tasks. This
>> is demanded because there cases where we need to do it, advanced users
>> may choose one or another depending on a task/action semantics.
>> However, it won’t be possible to implement w/o changes in the
>> underlying messaging framework.
> 
> If we goes that way, 

Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-06-07 Thread Igor Zinovik
  Hello,

Aleksandr, one simple question: do I, as a plugin developer for the upcoming
Fuel 9.0, have to worry about these network-related changes or not? HCF is
approaching, but the patch that you mentioned (342307) is still not merged.
Do I need to spend time understanding it and changing plugin deployment tasks
according to the netconfig.pp refactoring?

On 6 June 2016 at 11:12, Aleksandr Didenko  wrote:

> Hi,
>
> a bit different patch is on review now [0]. Instead of silently replacing
> default gateway on the fly in netconfig.pp task it's putting new default
> gateway into Hiera. Thus we'll have idempotency for subsequent netconfig.pp
> runs even on Mongo roles. Also we'll have consistent network configuration
> data in Hiera which any plugin can rely on.
>
> I've built a custom ISO with this patch and run a set of custom tests on
> it to cover multi-role and multi-rack cases [1] plus BVT - everything
> worked fine.
>
> Please feel free to review and comment the patch [0].
>
> Regards,
> Alex
>
> [0] https://review.openstack.org/324307
> [1] http://paste.openstack.org/show/508319/
>
> On Wed, Jun 1, 2016 at 4:47 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> YAQL expressions support for task dependencies has been added to Nailgun
>> [0]. So now it's possible to fix network configuration idempotency issue
>> without introducing new 'netconfig' task [1]. There will be no problems
>> with loops in task graph in such case (tested on multiroles, worked fine).
>> When we deprecate role-based deployment (even emulated), then we'll be able
>> to remove all those additional conditions from manifests and remove
>> 'configure_default_route' task completely. Please feel free to review and
>> comment the patch [1].
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/#/c/320861/
>> [1] https://review.openstack.org/#/c/322872/
>>
>> On Wed, May 25, 2016 at 10:39 AM, Simon Pasquier 
>> wrote:
>>
>>> Hi Adam,
>>> Maybe you want to look into network templates [1]? Although the
>>> documentation is a bit sparse, it allows you to define flexible network
>>> mappings.
>>> BR,
>>> Simon
>>> [1]
>>> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>>>
>>> On Wed, May 25, 2016 at 10:26 AM, Adam Heczko 
>>> wrote:
>>>
 Thanks Alex, will experiment with it once again although AFAIR it
 doesn't solve thing I'd like to do.
 I'll come later to you in case of any questions.


 On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hey Adam,
>
> in Fuel we have the following option (checkbox) on Network Setting tab:
>
> Assign public network to all nodes
> When disabled, public network will be assigned to controllers only
>
> So if you uncheck it (by default it's unchecked) then public network
> and 'br-ex' will exist on controllers only. Other nodes won't even have
> "Public" network on node interface configuration UI.
>
> Regards,
> Alex
>
> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
> wrote:
>
>> Hello Alex,
>> I have a question about the proposed changes.
>> Is it possible to introduce new vlan and associated bridge only for
>> controllers?
>> I think about DMZ use case and possibility to expose public IPs/VIP
>> and API endpoints on controllers on a completely separate L2 network
>> (segment vlan/bridge) not present on any other nodes than controllers.
>> Thanks.
>>
>> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
>> adide...@mirantis.com> wrote:
>>
>>> Hi folks,
>>>
>>> we had to revert those changes [0] since it's impossible to propery
>>> handle two different netconfig tasks for multi-role nodes. So everything
>>> stays as it was before - we have single task 'netconfig' to configure
>>> network for all roles and you don't need to change anything in your
>>> plugins. Sorry for inconvenience.
>>>
>>> Our current plan for fixing network idempotency is to keep one task
>>> but change 'cross-depends' parameter to yaql_exp. This will allow us to 
>>> use
>>> single 'netconfig' task for all roles but at the same time we'll be 
>>> able to
>>> properly order it: netconfig on non-controllers will be executed only
>>> aftetr 'virtual_ips' task.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://review.openstack.org/#/c/320530/
>>>
>>>
>>> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi all,

 please be aware that now we have two netconfig tasks (in Fuel 9.0+):

Re: [openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-07 Thread Andrea Frittoli
Great job, thanks!

On Tue, 7 Jun 2016, 3:36 a.m. Masayuki Igawa, 
wrote:

> Congrats! I'm looking forward to seeing an integration/collaboration
> with openstack-health :)
>
> On Tue, Jun 7, 2016 at 8:32 AM, Buckley, Tim Jason
>  wrote:
> > Hello all,
> >
> > I'd like to announce that StackViz will now be running at the end all
> > tempest-dsvm jobs and saving visualization output to the log server.
> >
> > StackViz is a visualization utility for generating interactive
> visualizations of
> > jobs in the OpenStack QA pipeline and aims to ease debugging and
> performance
> > analysis tasks. Currently it renders an interactive timeline for subunit
> > results and dstat data, but we are actively working to visualize more
> log types
> > in the future.
> >
> > StackViz instances are saved as a 'stackviz' directory under 'logs' for
> each job
> > run on http://logs.openstack.org/. For an example, see:
> >
> http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/
> >
> > For more information on StackViz, see the project page at:
> > https://github.com/openstack/stackviz
> >
> > Bugs can also be reported at:
> > https://bugs.launchpad.net/stackviz
> >
> > Feedback is greatly appreciated!
> >
> > Thanks,
> > Tim Buckley
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev