Re: [openstack-dev] Running a skipped tempest test: test_connectivity_between_vms_on_different_networks

2016-10-19 Thread Lenny Verkhovsky
Hi Joseph,

You can try deleting line [1] in
tempest/tempest/scenario/test_network_basic_ops.py
and rerunning the test [2].

[1] 
https://github.com/openstack/tempest/blob/b6508962595dc1dcf351ec5a7316401786625fea/tempest/scenario/test_network_basic_ops.py#L416
[2] testr run 
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks
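For context, the skip at [1] is a decorator on the test method; deleting that single line lets the runner execute the test again. A minimal, hypothetical analog of such a skip decorator is sketched below (the real tempest decorator's name and signature may differ):

```python
import functools
import unittest


def skip_because(bug):
    # Minimal analog of a tempest-style skip decorator; the actual
    # decorator in tempest may have a different name and signature.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # The skip fires before the test body ever runs.
            raise unittest.SkipTest('Skipped until Bug: %s is resolved.' % bug)
        return wrapper
    return decorator


class TestNetworkBasicOps(unittest.TestCase):
    @skip_because(bug='1610994')  # deleting this one line un-skips the test
    def test_connectivity_between_vms_on_different_networks(self):
        self.assertTrue(True)
```

Because the decorator raises SkipTest before the test body executes, removing the single decorator line is enough to make the runner execute the test again.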




From: Yossi Tamarov [mailto:yossi.tama...@gmail.com]
Sent: Wednesday, October 19, 2016 10:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Running a skipped tempest test: 
test_connectivity_between_vms_on_different_networks

Hello everyone,
Is there a way to force this tempest test to run?
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks

Currently, I'm running the following command, which produces the output below.
Thanks for any help,
Joseph.
 [root@devel tempest]# ostestr --regex 
'(?!.*\[.*\bslow\b.*\])('tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks')'
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./} 
${OS_TEST_PATH:-./tempest/test_discover} --list
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./} 
${OS_TEST_PATH:-./tempest/test_discover}  --load-list /tmp/tmphXpVFo
{0} 
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks
 ... SKIPPED: Skipped until Bug: 1610994 is resolved.

==
Totals
==
Ran: 1 tests in 6. sec.
 - Passed: 0
 - Skipped: 1
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 0.0006 sec.

==
Worker Balance
==
 - Worker 0 (1 tests) => 0:00:00.000619

No tests were successful during the run

Slowest Tests:

Test id                                                       Runtime (s)
------------------------------------------------------------  -----------
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks[compute,id-1546850e-fbaa-42f5-8b5f-03d8a6a95f15,network]  0.001

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra]all release jobs failed

2016-10-19 Thread joehuang
Hello,

A tag was pushed to Tricircle this morning, and I found that the publish-to-pypi
job failed. It also seems that not only Tricircle but all release jobs have
failed: http://status.openstack.org/zuul/

How can the publish-to-pypi job be re-triggered?

Best Regards
Chaoyi Huang (joehuang)


[openstack-dev] [tc]F2F talk in Barcelona for your concerns on Tricircle big-tent application

2016-10-19 Thread joehuang
Hi TCs,

The Tricircle big-tent application (https://review.openstack.org/#/c/338796/) has
just been resumed, now that the splitting and cleaning work is finished and
Tricircle is dedicated to networking automation across Neutron.

All risks of API inconsistency/API re-implementation were removed by deleting
the Nova API-GW/Cinder API-GW from Tricircle, and plugins under Neutron do not
introduce that risk: there are already many Neutron plugins developed in the
community.

Kindly review the application when you can (I understand that you are quite
busy during the summit period). A face-to-face talk would be appreciated if you
have concerns about the application, or we can discuss your concerns in the
Tricircle design summit sessions:
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Tricircle%3A,
 https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Tricircle

Best Regards
Chaoyi Huang (joehuang)


[openstack-dev] [QA] Meeting Thursday Oct 20th at 9:00 UTC

2016-10-19 Thread Ghanshyam Mann
Hello everyone,



This is a reminder that the weekly OpenStack QA team IRC meeting will be held
Thursday, Oct 20th at 9:00 UTC in the #openstack-meeting channel.



The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_October_20th_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.



To help people figure out what time 9:00 UTC is in other timezones the next 
meeting will be at:



04:00 EST

18:00 JST

18:30 ACST

11:00 CEST

04:00 CDT

02:00 PDT

Thanks
gmann


[openstack-dev] [Horizon] Summit preparation

2016-10-19 Thread Richard Jones
Hi folks,

Summit's around the corner!

Even if you aren't going to be at the summit, we'd appreciate it if you
could go over the etherpads we're going to be using and add your
valuable input. They're all linked from
https://etherpad.openstack.org/p/horizon-ocata-summit. In particular,
if you have any priority work, please add it to the priorities
etherpad.

If you are going, please contact me directly if you'd like to be
included in our "Horizon In BCN" Google Hangout.

If you're a core, please let me know explicitly whether or not you're going,
as I would like to email you directly about a few things if you're
not coming.


Thanks,

 Richard



Re: [openstack-dev] [Zun] Propose a change of Zun core team

2016-10-19 Thread Shuu Mutou
+1 for both.

Regards,
Shu Muto

> -----Original Message-----
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: Thursday, October 20, 2016 12:33 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Zun] Propose a change of Zun core team
> 
> +1 for both. Shubham will be a great addition to team.
> 
> 
> 
> Thanks!
> 
> 
> 
> Madhuri
> 
> 
> 
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Thursday, October 20, 2016 2:49 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [Zun] Propose a change of Zun core team
> 
> 
> 
> Hi team,
> 
> 
> 
> I am going to propose an exchange of the core team membership as below:
> 
> 
> 
> + Shubham Kumar Sharma (shubham)
> 
> - Chandan Kumar (chandankumar)
> 
> 
> 
> Shubham contributed a lot to the container image feature and is active on
> reviews and IRC. I think he is a good addition to the core team. Chandan
> has been inactive for a long period of time, so he no longer meets the
> expectations of a core reviewer. However, thanks for his interest in joining
> the core team when the team was founded. He is welcome to re-join the core
> team if he becomes active in the future.
> 
> 
> 
> According to the OpenStack Governance process [1], we require a minimum
> of 4 +1 votes from Zun core reviewers within a 1 week voting window (consider
> this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot
> get enough votes or there is a veto vote prior to the end of the voting
> window, this proposal is rejected and Shubham is not able to join the core
> team and needs to wait 30 days to reapply.
> 
> 
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
> 
> 
> 
> 
> Best regards,
> 
> Hongbin




Re: [openstack-dev] [Zun] Propose a change of Zun core team

2016-10-19 Thread Kumari, Madhuri
+1 for both. Shubham will be a great addition to team.

Thanks!

Madhuri

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Thursday, October 20, 2016 2:49 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Zun] Propose a change of Zun core team

Hi team,

I am going to propose an exchange of the core team membership as below:

+ Shubham Kumar Sharma (shubham)
- Chandan Kumar (chandankumar)

Shubham contributed a lot to the container image feature and is active on
reviews and IRC. I think he is a good addition to the core team. Chandan has
been inactive for a long period of time, so he no longer meets the expectations
of a core reviewer. However, thanks for his interest in joining the core team
when the team was founded. He is welcome to re-join the core team if he becomes
active in the future.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, this 
proposal is rejected and Shubham is not able to join the core team and needs to 
wait 30 days to reapply.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


[openstack-dev] [Heat] OpenStack Summit Barcelona: Heat Meetup (evening)

2016-10-19 Thread Rico Lin
Hi everyone,

We're planning to have an evening Heat contributors meetup at the Barcelona
Summit. We would like every contributor, operator, and user to join us and
have fun. We need to decide which day of that week suits most of us, so if
you would like to attend, please put your name and possible days at:
http://doodle.com/poll/dyy6tdnawchnddvy

As for location, feel free to suggest any. I would suggest `Bambu Beach Bar`
[1], which serves drinks and tapas near the venue, or `Cervecería Catalana` [2]
and `Tapas 24` [3], which are a little farther away. All are nice, relaxed
places (not like the evening place from the last summit, I promise!). Most
importantly, they all serve beer and drinks (essential if we want to attract
our Steve!).


[1]
https://www.tripadvisor.com.tw/Restaurant_Review-g187497-d4355271-Reviews-Bambu_Beach_Bar-Barcelona_Catalonia.htm
[2]
https://www.tripadvisor.com.tw/Restaurant_Review-g187497-d782944-Reviews-Cerveceria_Catalana-Barcelona_Catalonia.html
[3]
https://www.tripadvisor.com.tw/Restaurant_Review-g187497-d1314895-Reviews-Tapas_24-Barcelona_Catalonia.html

-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin


[openstack-dev] [tricircle]Tricircle splitting and cleaning completed

2016-10-19 Thread joehuang
Hello,

As all patches for the Tricircle cleaning have been merged, the splitting and
cleaning of the Tricircle repository is now complete.

Now the cleaned source code is available: 
https://github.com/openstack/tricircle/

A new tag was also given for this milestone as the cleaned baseline: 
https://github.com/openstack/tricircle/tree/2.1.0

Installation guides here (only devstack installation guide available now): 
https://github.com/openstack/tricircle/tree/master/doc/source

Please report issues through launchpad:  https://bugs.launchpad.net/tricircle

Best Regards
Chaoyi Huang(joehuang)


From: joehuang
Sent: 15 October 2016 10:08
To: openstack-dev
Subject: [openstack-dev][tricircle] MUST-HAVE patches for tricircle cleaning

As suggested, it's better to assign a priority to each patch set, so the
patches are tagged here according to our cleaning and splitting plan:

Urgent patches:
No urgent patch yet.

MUST-HAVE patches to get merged before the end of Oct. 19:
   MUST-HAVE  1. central and local plugin for l3:
https://review.openstack.org/#/c/378476/
   MUST-HAVE  2. remove api gateway code:
https://review.openstack.org/#/c/384182/
   MUST-HAVE  3. security group support:
https://review.openstack.org/#/c/380054/
   MUST-HAVE, merged  4. Single Node installation:
https://review.openstack.org/#/c/384872/
   MUST-HAVE  5. Multi nodes installation:
https://review.openstack.org/#/c/385306/

Good-to-have patches (not required to merge before Oct. 19):
   Implement resource routing features: https://review.openstack.org/#/c/375976/
   Other patches not listed here.

Best Regards
Chaoyi Huang (joehuang)


Re: [openstack-dev] [nova] Performance concerns over all the new notifications

2016-10-19 Thread Joshua Harlow

Matt Riedemann wrote:

There are a lot of specs up for review in ocata related to adding new
versioned notifications for operations that we didn't have notifications
on before, like CRUD operations on resources like flavors and server
groups.

We've got a lot of legacy notifications for server actions, like
server.pause.start and server.pause.end. Those are pretty simple.

The thing that has me concerned about the CRUD operation notifications
on resources is the extra DB query overhead to create the payloads which
might not even get sent out.

For example, I was reviewing this spec about adding notifications for
CRUD ops on server groups:

https://review.openstack.org/#/c/375316/

Looking at the code for InstanceGroup, when a member is added to or
removed from the group, the hosts field implicitly changes, but to
calculate the hosts field we have to get all of the instances (members)
in the group and then build the list of instance.host values.

This is probably less of an issue if the server group object in scope
already has the hosts field set, but if it doesn't and we're
constructing it just for the notification, that's extra DB and RPC
overhead - and notifications might not even be setup.

I was thinking about it like logging details at debug level. If I need
to build some large object or get some data for debugging something
that's not in scope, I'd wrap that in a conditional:

if LOG.isEnabledFor(logging.DEBUG):
LOG.debug('gimme da deets: %s', self.build_da_deets())

But do we have anything like that for notifications? Basically, tell me
if I should even bother building payloads for notifications.



Also, at what point does the notification system effectively become the
database transaction log (just something to ask/think about)? If every CRUD
operation emits notifications, it sort of feels like nearly the same thing...


-Josh



Re: [openstack-dev] [nova] Performance concerns over all the new notifications

2016-10-19 Thread Joshua Harlow

Matt Riedemann wrote:

There are a lot of specs up for review in ocata related to adding new
versioned notifications for operations that we didn't have notifications
on before, like CRUD operations on resources like flavors and server
groups.

We've got a lot of legacy notifications for server actions, like
server.pause.start and server.pause.end. Those are pretty simple.

The thing that has me concerned about the CRUD operation notifications
on resources is the extra DB query overhead to create the payloads which
might not even get sent out.

For example, I was reviewing this spec about adding notifications for
CRUD ops on server groups:

https://review.openstack.org/#/c/375316/

Looking at the code for InstanceGroup, when a member is added to or
removed from the group, the hosts field implicitly changes, but to
calculate the hosts field we have to get all of the instances (members)
in the group and then build the list of instance.host values.

This is probably less of an issue if the server group object in scope
already has the hosts field set, but if it doesn't and we're
constructing it just for the notification, that's extra DB and RPC
overhead - and notifications might not even be setup.

I was thinking about it like logging details at debug level. If I need
to build some large object or get some data for debugging something
that's not in scope, I'd wrap that in a conditional:

if LOG.isEnabledFor(logging.DEBUG):
LOG.debug('gimme da deets: %s', self.build_da_deets())

But do we have anything like that for notifications? Basically, tell me
if I should even bother building payloads for notifications.



A valid concern IMHO; it seems like we might want an isEnabledFor() on the
Notifier class in
https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/notify/notifier.py#L175
(I am assuming here that the underlying drivers can even provide that
knowledge, which may or may not be a big assumption?)


-Josh



[openstack-dev] [nova] Performance concerns over all the new notifications

2016-10-19 Thread Matt Riedemann
There are a lot of specs up for review in ocata related to adding new 
versioned notifications for operations that we didn't have notifications 
on before, like CRUD operations on resources like flavors and server groups.


We've got a lot of legacy notifications for server actions, like 
server.pause.start and server.pause.end. Those are pretty simple.


The thing that has me concerned about the CRUD operation notifications 
on resources is the extra DB query overhead to create the payloads which 
might not even get sent out.


For example, I was reviewing this spec about adding notifications for 
CRUD ops on server groups:


https://review.openstack.org/#/c/375316/

Looking at the code for InstanceGroup, when a member is added to or 
removed from the group, the hosts field implicitly changes, but to 
calculate the hosts field we have to get all of the instances (members) 
in the group and then build the list of instance.host values.


This is probably less of an issue if the server group object in scope 
already has the hosts field set, but if it doesn't and we're 
constructing it just for the notification, that's extra DB and RPC 
overhead - and notifications might not even be setup.


I was thinking about it like logging details at debug level. If I need 
to build some large object or get some data for debugging something 
that's not in scope, I'd wrap that in a conditional:


  if LOG.isEnabledFor(logging.DEBUG):
  LOG.debug('gimme da deets: %s', self.build_da_deets())

But do we have anything like that for notifications? Basically, tell me 
if I should even bother building payloads for notifications.
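To make the question concrete, here is a sketch of the kind of guard being asked for. The `is_enabled()` method is hypothetical — it is not an existing oslo.messaging API — but it shows how payload construction could be skipped entirely when nobody is listening:

```python
import logging

LOG = logging.getLogger(__name__)


class GuardedNotifier(object):
    """Hypothetical notifier with an isEnabledFor()-style check.

    This is NOT an existing oslo.messaging API; it only illustrates
    the guard pattern described above.
    """

    def __init__(self, enabled):
        self._enabled = enabled

    def is_enabled(self):
        # A real implementation would ask the configured drivers
        # whether any transport/consumer is actually set up.
        return self._enabled

    def info(self, event_type, build_payload):
        # Only pay the DB/RPC cost of building the payload when the
        # notification would actually be emitted.
        if self.is_enabled():
            payload = build_payload()
            LOG.info('emitting %s: %s', event_type, payload)
```

With this shape, the expensive InstanceGroup hosts lookup would live inside build_payload and run only when notifications are actually configured.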


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Zun] Propose a change of Zun core team

2016-10-19 Thread Yanyan Hu
+1 for both.

2016-10-20 5:18 GMT+08:00 Hongbin Lu :

> Hi team,
>
>
>
> I am going to propose an exchange of the core team membership as below:
>
>
>
> + Shubham Kumar Sharma (shubham)
>
> - Chandan Kumar (chandankumar)
>
>
>
> Shubham contributed a lot to the container image feature and is active on
> reviews and IRC. I think he is a good addition to the core team. Chandan
> has been inactive for a long period of time, so he no longer meets the
> expectations of a core reviewer. However, thanks for his interest in joining
> the core team when the team was founded. He is welcome to re-join the core
> team if he becomes active in the future.
>
>
>
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes from Zun core reviewers within a 1 week voting window (consider
> this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot
> get enough votes or there is a veto vote prior to the end of the voting
> window, this proposal is rejected and Shubham is not able to join the core
> team and needs to wait 30 days to reapply.
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>
>
> Best regards,
>
> Hongbin
>
>
>


-- 
Best regards,

Yanyan


[openstack-dev] Re: [vitrage][aodh] about aodh notifier to create an event alarm

2016-10-19 Thread dong . wenjuan
Hi Gordon Chung,

Could you please tell me why adding another notification topic is not a 
good choice? Thanks~

BR,
dwj






"Afek, Ifat (Nokia - IL)"
2016-10-19 16:20
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
To: "OpenStack Development Mailing List (not for usage questions)", "g...@live.ca"
Subject: Re: [openstack-dev] [vitrage][aodh] about aodh notifier to create an
event alarm



From: "dong.wenj...@zte.com.cn"
Date: Wednesday, 19 October 2016 at 11:01

The aodh-message-bus-notifications BP [1] was blocked.
As discussed between Vitrage and Aodh in the etherpad [2], only the Aodh
alarm_deletion notification is missing.
I proposed a patch to add the Aodh alarm_deletion notification [3].
Please help me review this patch.
Do alarm.creation, alarm.state_transition and alarm.deletion satisfy the
Vitrage requirement?
I'd like to help implement the aodh-message-bus-notifications BP if nobody
else is interested in it.

This is more complex. Aodh has a mechanism for registering a URL to be 
notified when the state of a specific alarm is changed. 
Vitrage asked for something else - a notification whenever *any* alarm 
state is changed. In Vitrage we don’t want to register to each and every 
Aodh alarm separately, so we prefer to get the notifications for all 
changes on the message bus (as we do with other OpenStack projects). In 
addition, there is currently no notification about a newly created alarm, 
so even if we register a URL on each alarm we will not be able to register 
it on the new alarms. 

[dwj]: If I understand correctly, Aodh already supports a notification
whenever *any* alarm state is changed. See
https://github.com/openstack/aodh/blob/master/aodh/evaluator/__init__.py#L107.
We only need to configure the vitrage_notifications topic in Aodh, and then
Vitrage can get the notifications from Aodh.
Let me know if I missed something.
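For reference, the kind of aodh.conf change implied here would look roughly like the fragment below. The option names are the generic oslo.messaging notification options, so verify them against the Aodh release in use:

```ini
[oslo_messaging_notifications]
# Driver that emits notifications on the message bus
# (name varies by release: messaging, messagingv2, ...).
driver = messagingv2
# Publish to the default topic plus the topic Vitrage listens on.
topics = notifications,vitrage_notifications
```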

A few months ago I discussed it with Gordon Chung, and understood that he 
blocked the option to add another notification topic. 
Gordon?






Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Sean Dague

On 10/19/2016 04:22 PM, Matt Riedemann wrote:

I personally thought long-term we wanted unversioned endpoints in the
service catalog, and if you want to do version discovery, you do a GET
on a particular service's endpoint URL in the service catalog, and that
returns the list of available API versions for that service along with a
status value (CURRENT, SUPPORTED, DEPRECATED) and any microversion
ranges within those.

I know we have 3 versions of keystone in the service catalog, which I
find pretty nasty personally, plus the fact that a lot of the client
code we have has had to burn 'volumev2' in as a default endpoint_type to
use cinder. IMO the service catalog should just have a 'volume' endpoint
and if I want to know the supported versions (v1, v2 and/or v3), I do a
GET on the publicURL for the volume endpoint from the service catalog.


At 5 services, maybe. But at 50+ services (and growing), I think the idea of
"get an endpoint, then maintain custom parsing code for every service because
their version documents all differ" is a really bad UX.


The reason we have volume, volumev2, and volumev3 is that no one 
actually wants the unversioned volume endpoint. You can't do anything 
with it. Everyone wants the actual endpoint that has resources.


We can solve this for all consumers by adding an additional version field to
the catalog. This was the direction we were headed last spring, before the
api-ref work took over.


I think Brian's initial complaints (which are very valid) here really 
point to the fact that punting on that puts SDK and client authors in a 
place where they are going to end up writing a ton of heuristic code to 
guess url structure. Which would be sad for all of us. :(
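As an illustration of the per-service parsing in question, here is a sketch that picks a usable endpoint out of an unversioned root document. The sample payload is modeled on the Keystone-style {"versions": {"values": [...]}} shape; other services return differently shaped documents, which is exactly why this code cannot be written once for all of them:

```python
# Hypothetical unversioned-root document, modeled on the Keystone shape.
SAMPLE_ROOT = {
    "versions": {
        "values": [
            {"id": "v3.7", "status": "stable",
             "links": [{"rel": "self",
                        "href": "http://10.0.0.1:5000/v3/"}]},
            {"id": "v2.0", "status": "deprecated",
             "links": [{"rel": "self",
                        "href": "http://10.0.0.1:5000/v2.0/"}]},
        ]
    }
}


def pick_endpoint(root_doc):
    """Return the self-link of the newest non-deprecated API version."""
    values = root_doc["versions"]["values"]
    candidates = [v for v in values if v["status"] != "deprecated"]

    def version_key(v):
        # Sort by the numeric parts of the id, e.g. "v3.7" -> (3, 7).
        return tuple(int(p) for p in v["id"].lstrip("v").split("."))

    best = max(candidates, key=version_key)
    return next(l["href"] for l in best["links"] if l["rel"] == "self")
```

Even this small sketch hard-codes the "versions"/"values" nesting and the status vocabulary (stable/deprecated vs CURRENT/SUPPORTED), so every differently shaped service would need its own variant — which is the heuristic-code burden being described.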


-Sean

--
Sean Dague
http://dague.net



[openstack-dev] [trove] FW: Your draft logo & a sneak peek

2016-10-19 Thread Amrith Kumar
I know of no other way to share this with the Trove team.

 

Here's our logo (a draft of it at least). The rest of the email (below) from
Heidi has more details about appropriate use at this stage.

 

Thanks,

 

-amrith

 



 

From: Heidi Joy Tretheway [mailto:heidi...@openstack.org] 
Sent: Wednesday, October 19, 2016 3:05 PM
To: Amrith Kumar 
Subject: Your draft logo & a sneak peek 

 

Hi Amrith,

 

We're excited to show you the draft version of your project logo, attached.
We want to give you and your team a chance to see the mascot illustrations
before we make them official, so we decided to make Barcelona the draft
target, with final logos ready by the Project Team Gathering in Atlanta in
February. 

 

Our illustrators worked as fast as possible to draft nearly 60 logos, and
we're thrilled to see how they work as a family. Here's a 50-second "sneak
peek" at how they came together: https://youtu.be/JmMTCWyY8Y4

 

We welcome you to share this logo with your team and discuss it in
Barcelona. We're very happy to take feedback on it if we've missed the mark.
The style of the logos is consistent across projects, and we did our best to
incorporate any special requests, such as an element of an animal that is
especially important, or a reference to an old logo.



We ask that you don't start using this logo now since it's a draft. Here's
what you can expect for the final product:

*   A horizontal version of the logo, including your mascot, project
name and the words "An OpenStack Community project"
*   A square(ish) version of the logo, including all of the above
*   A mascot-only version of the logo
*   Stickers for all project teams distributed at the PTG
*   One piece of swag that incorporates all project mascots, such as a
deck of playing cards, distributed at the PTG
*   All digital files will be available through the website

 

We know this is a busy time for you, so to take some of the burden of
coordinating feedback off you, we made a feedback form:
http://tinyurl.com/OSmascot. You are also welcome to reach out to Heidi Joy
directly with questions or concerns. Please provide feedback by Friday,
Nov. 11, so that we can request revisions from the illustrators if needed.
Or, if this logo looks great, just reply to this email and you don't need to
take any further action.

 

Thank you!

Heidi Joy Tretheway - project lead

Todd Morey - creative lead

 

P.S. Here's an email that you can copy/paste to send to your team (remember
to attach your logo from my email):

 

Hi team, 

I just received a draft version of our project logo, using the mascot we
selected together. A final version (and some cool swag) will be ready for us
before the Project Team Gathering in February. Before they make our logo
final, they want to be sure we're happy with our mascot. 

 

We can discuss any concerns in Barcelona, and you can also provide direct
feedback to the designers: http://tinyurl.com/OSmascot. Logo feedback is due
Friday, Nov. 11. To get a sense of how ours stacks up to others, check out
this sneak preview of several dozen draft logos from our community:
https://youtu.be/JmMTCWyY8Y4

 


 
 

Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway




Re: [openstack-dev] [glance][VMT][Security] Glance coresec reorg

2016-10-19 Thread Jeremy Stanley
On 2016-10-18 22:22:28 + (+), Brian Rosmaita wrote:
> Thus, the main point of this email is to propose Ian Cordasco and Erno
> Kuvaja as new members of the Glance coresec team.  They've both been
> Glance cores for several cycles, have a broad knowledge of the software
> and team, contribute high-quality reviews, and are conversant with good
> security practices.
[...]

Sounds good to me. From a VMT perspective, I'm just happy to see
Glance keeping active participants with available bandwidth looking
at prospective vulnerability reports so we can continue to churn
through them faster and make them public sooner. Thanks for keeping
the wheels turning!
-- 
Jeremy Stanley




[openstack-dev] [Neutron] retiring python-neutron-pd-driver

2016-10-19 Thread Armando M.
To whom it may concern,

I have started the procedure [1] to retire project [2]. If you are affected
by this, this is the last opportunity to provide feedback. That said, users
should be able to use the in tree version of dibbler as documented in [3].

Cheers,
Armando

[1]
https://review.openstack.org/#/q/I77099ba826b8c7d28379a823b4dc74aa65e653d8
[2] http://git.openstack.org/cgit/openstack/python-neutron-pd-driver/
[3]
http://docs.openstack.org/newton/networking-guide/config-ipv6.html#configuring-the-dibbler-server


Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Brant Knudson
On Wed, Oct 19, 2016 at 2:27 PM, Brian Curtin  wrote:
...

>
> This started back in August when it came up that we didn't know where
> that Keystone v3 endpoint was. After talking with a few people, Steve
> Martinelli mentioned that at least as of then, hitting the unversioned
> endpoint was the way to solve that. It being unlisted anywhere was
> something for me to figure out (via that path manipulation from the
> given v2), but it was later mentioned that ideally they would like to
> have unversioned endpoints in the catalog anyway.


devstack now sets up the identity/keystone endpoints as unversioned. So you
get an endpoint with "http://192.168.122.102:5000", for example. So this is
what we're testing with now and you're lucky if a versioned endpoint works
at all ;).
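
For illustration, an unversioned root returns a versions document that clients can parse for the available major versions. A minimal sketch — the response shape is abbreviated from a typical keystone versions document, and `list_major_versions` is a hypothetical helper, not part of any client library:

```python
def list_major_versions(versions_doc):
    """Pull the major API version ids out of an unversioned root response."""
    return [v["id"] for v in versions_doc["versions"]["values"]]

# Abbreviated, keystone-style versions document for illustration.
sample = {
    "versions": {
        "values": [
            {"id": "v3.7", "status": "stable"},
            {"id": "v2.0", "status": "deprecated"},
        ]
    }
}

print(list_major_versions(sample))  # ['v3.7', 'v2.0']
```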


> I'm talking to Steve
> now and perhaps I took that too far in extrapolating which direction
> things were going in reality, but it was a solution that had to be
> undertaken nonetheless and was seen as the best way forward at the
> time. It's also the only one that mostly works at the moment.
>
> In the end, I'll take listing major versions as long as it's accurate
> and complete, but I'll also take listing the service root even if it
> means an extra request for me to determine those versions.
>
>
-- 
- Brant
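
The prefix-walking approach Brian describes — splitting the path and building successively longer candidates with itertools.accumulate — can be sketched like this; `candidate_roots` is a made-up name for illustration, not the openstacksdk implementation:

```python
from itertools import accumulate
from urllib.parse import urlsplit

def candidate_roots(endpoint):
    """Yield successively longer URL prefixes of an endpoint, shortest first."""
    parts = urlsplit(endpoint)
    base = "{0}://{1}".format(parts.scheme, parts.netloc)
    yield base
    segments = [s for s in parts.path.split("/") if s]
    for prefix in accumulate(segments, lambda a, b: a + "/" + b):
        yield base + "/" + prefix

print(list(candidate_roots("https://cloud.com/service/v2")))
# ['https://cloud.com', 'https://cloud.com/service', 'https://cloud.com/service/v2']
```

A discovery pass would GET each candidate in turn and stop at the first one that returns a parseable versions document — which is exactly where case E in the thread breaks down, since the shortest prefix already answers, but with keystone's versions.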


Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Matt Riedemann

On 10/19/2016 3:22 PM, Matt Riedemann wrote:

On 10/19/2016 2:27 PM, Brian Curtin wrote:

On Wed, Oct 19, 2016 at 2:03 PM, Sean Dague  wrote:

On 10/19/2016 01:40 PM, Brian Curtin wrote:

On Wed, Oct 19, 2016 at 12:59 PM, Jay Pipes  wrote:

On 10/19/2016 05:32 PM, Brian Curtin wrote:


I'm currently facing what looks more and more like an impossible
problem in determining the root of each service on a given cloud. It
is apparently a free-for-all in how endpoints can be structured,
and I
think we're out of ways to approach it that catch all of the ways
that
all people can think of.

In openstacksdk, we can no longer use the service catalog for
determining each service's endpoints. Among other things, this is due
to a combination of some versions of some services not actually being
listed, and with things heading the direction of version-less
services
anyway. Recently we changed to using the service catalog as a pointer
to where services live and then try to find the root of that service
by stripping the path down and making some extra requests on startup
to find what's offered. Despite a few initial snags, this now works
reasonably well in a majority of cases.

We have seen endpoints structured in the following ways:
 A. subdomains, e.g., https://service.cloud.com/v2
 B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
more paths in between the root and /service/)
 C. service-specific ports, e.g., https://cloud.com:1234/v2
 D. both A and B plus ports

Within all of these, we can find the root of the given service just
fine. We split the path and build successively longer paths starting
from the root. In the above examples, we need to hit the path just
short of the /v2, so in B it actually takes two requests as we'd make
one to cloud.com which fails, but then a second one to
cloud.com/service gives us what we need.

However, another case came up: the root of all endpoints is itself
another service. That makes it look like this:

 E. https://cloud.com:/service/v2
 F. https://cloud.com:/otherservice

In this case, https://cloud.com: is keystone, so trying to get
E's
base by going from the root and outward will give me a versions
response I can parse properly, but it points to keystone. We then end
up building requests for 'service' that go to keystone endpoints and
end up failing. We're doing this using itertools.accumulate on the
path fragments, so you might think 'just throw it through
`reversed()`' and go the other way. If we do that, we'll also get a
versions response that we can parse, but it's the v2 specific info,
not all available versions.

So now that we can't reliably go from the left, and we definitely
can't go from the right, how about the middle?

This sounds ridiculous, and if it sounds familiar it's because they
devise a "middle out" algorithm on the show Silicon Valley, but in
most cases it'd actually work. In E above, it'd be fine. However,
depending on the number of path fragments and which direction we
chose
to move first, we'd sometimes hit either a version-specific response
or another service's response, so it's not reliable.

Ultimately, I would like to know how something like this can be
solved.

1. Is there any reliable, functional, and accurate programmatic
way to
get the versions and endpoints that all services on a cloud offer?



The Keystone service catalog should be the thing that provides the
endpoints
for all services in the cloud. Within each service, determining the
(micro)version of the API is unfortunately going to be a per-service
endeavour. For some APIs, a microversion header is returned, others
don't
have microversions. The microversion header is unfortunately not
standardized for all APIs that use microversions, though a number
of us
would like to see a single:

OpenStack-API-Version:  , ...

header supported. This is the header supported in the new placement
REST
API, for what it's worth.


I get the microversion part, and we support that (for some degree of
support), but this is about the higher level major versions. The
example that started this was Keystone only listing a v2 endpoint in
the service catalog, at least on devstack. I need to be able to hit v3
APIs when a user wants to do v3 things, regardless of which version
they auth to, so the way to get it was to get the root and go from
there. That both versions weren't listed was initially confusing to
me, but that's where the suggestion of "go to the root and get
everything" started out.

The service catalog providing all of the available endpoints
made sense to me from what I understood in the past, but two things
are for sure about this: it doesn't work that way, and I've been told
several times that it's not going to work that way even in cases where
it is apparently working. I don't have sources to cite, but it's come
up a few times that the goal is one entry per service and you talk to
the service to find out all of its details - major versions, micro
versions, etc.


2. 

[openstack-dev] [nova] Draft mascot/logo

2016-10-19 Thread Matt Riedemann
I just received a draft version of our project logo. A final version 
will be ready for us before the Project Team Gathering in February. 
Before they make our logo final, they want to be sure we're happy with 
our mascot.


We can discuss any concerns in Barcelona and you can also provide direct 
feedback to the designers: http://tinyurl.com/OSmascot


Logo feedback is due Friday, Nov. 11. To get a sense of how ours stacks 
up to others, check out this sneak preview of several dozen draft logos 
from our community: https://youtu.be/JmMTCWyY8Y4


--

Thanks,

Matt Riedemann


[openstack-dev] [Zun] Propose a change of Zun core team

2016-10-19 Thread Hongbin Lu
Hi team,

I am going to propose an exchange of the core team membership as below:

+ Shubham Kumar Sharma (shubham)
- Chandan Kumar (chandankumar)

Shubham contributed a lot to the container image feature and is active on reviews 
and IRC. I think he is a good addition to the core team. Chandan has been 
inactive for a long period of time, so he no longer meets the expectations of a 
core reviewer. However, thanks to him for his interest in joining the core team 
when it was founded. He is welcome to re-join the core team if he becomes active 
in the future.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, this 
proposal is rejected and Shubham is not able to join the core team and needs to 
wait 30 days to reapply.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


[openstack-dev] [lbaas] [octavia] Next LBaaS/Octavia meeting will be November 9th

2016-10-19 Thread Michael Johnson
Hi OpenStack Devs,

Since a large number of team members are attending the OpenStack
Summit, we have decided to cancel the next two LBaaS/Octavia IRC
meetings.

We will resume our regular meetings November 9th.

Michael



Re: [openstack-dev] [nova][oslo][openstack-ansible] DB deadlocks, Mitaka, and You

2016-10-19 Thread Matt Riedemann

On 10/19/2016 9:47 AM, Carter, Kevin wrote:

Hi Matt and thanks for the reply,

We do have that commit as found here: [
https://github.com/openstack/nova/blob/dd30603f91e6fd3d1a4db452f20a51ba8820e1f4/nova/db/sqlalchemy/api.py#L1846
]. If there's anything you'd like to see as we're trying to figure
this out I'd be happy to provide {any,every}thing.




Well I still don't understand how you get a DBDeadlock failure in n-cpu 
while building the VM and that's coming back to the client via n-api - 
by the time you hit a failure in n-cpu building the VM we should have 
cast from the API and responded with a 202 to the client.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Matt Riedemann

On 10/19/2016 2:27 PM, Brian Curtin wrote:

On Wed, Oct 19, 2016 at 2:03 PM, Sean Dague  wrote:

On 10/19/2016 01:40 PM, Brian Curtin wrote:

On Wed, Oct 19, 2016 at 12:59 PM, Jay Pipes  wrote:

On 10/19/2016 05:32 PM, Brian Curtin wrote:


I'm currently facing what looks more and more like an impossible
problem in determining the root of each service on a given cloud. It
is apparently a free-for-all in how endpoints can be structured, and I
think we're out of ways to approach it that catch all of the ways that
all people can think of.

In openstacksdk, we can no longer use the service catalog for
determining each service's endpoints. Among other things, this is due
to a combination of some versions of some services not actually being
listed, and with things heading the direction of version-less services
anyway. Recently we changed to using the service catalog as a pointer
to where services live and then try to find the root of that service
by stripping the path down and making some extra requests on startup
to find what's offered. Despite a few initial snags, this now works
reasonably well in a majority of cases.

We have seen endpoints structured in the following ways:
 A. subdomains, e.g., https://service.cloud.com/v2
 B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
more paths in between the root and /service/)
 C. service-specific ports, e.g., https://cloud.com:1234/v2
 D. both A and B plus ports

Within all of these, we can find the root of the given service just
fine. We split the path and build successively longer paths starting
from the root. In the above examples, we need to hit the path just
short of the /v2, so in B it actually takes two requests as we'd make
one to cloud.com which fails, but then a second one to
cloud.com/service gives us what we need.

However, another case came up: the root of all endpoints is itself
another service. That makes it look like this:

 E. https://cloud.com:/service/v2
 F. https://cloud.com:/otherservice

In this case, https://cloud.com: is keystone, so trying to get E's
base by going from the root and outward will give me a versions
response I can parse properly, but it points to keystone. We then end
up building requests for 'service' that go to keystone endpoints and
end up failing. We're doing this using itertools.accumulate on the
path fragments, so you might think 'just throw it through
`reversed()`' and go the other way. If we do that, we'll also get a
versions response that we can parse, but it's the v2 specific info,
not all available versions.

So now that we can't reliably go from the left, and we definitely
can't go from the right, how about the middle?

This sounds ridiculous, and if it sounds familiar it's because they
devise a "middle out" algorithm on the show Silicon Valley, but in
most cases it'd actually work. In E above, it'd be fine. However,
depending on the number of path fragments and which direction we chose
to move first, we'd sometimes hit either a version-specific response
or another service's response, so it's not reliable.

Ultimately, I would like to know how something like this can be solved.

1. Is there any reliable, functional, and accurate programmatic way to
get the versions and endpoints that all services on a cloud offer?



The Keystone service catalog should be the thing that provides the endpoints
for all services in the cloud. Within each service, determining the
(micro)version of the API is unfortunately going to be a per-service
endeavour. For some APIs, a microversion header is returned, others don't
have microversions. The microversion header is unfortunately not
standardized for all APIs that use microversions, though a number of us
would like to see a single:

OpenStack-API-Version:  , ...

header supported. This is the header supported in the new placement REST
API, for what it's worth.


I get the microversion part, and we support that (for some degree of
support), but this is about the higher level major versions. The
example that started this was Keystone only listing a v2 endpoint in
the service catalog, at least on devstack. I need to be able to hit v3
APIs when a user wants to do v3 things, regardless of which version
they auth to, so the way to get it was to get the root and go from
there. That both versions weren't listed was initially confusing to
me, but that's where the suggestion of "go to the root and get
everything" started out.

The service catalog providing all of the available endpoints
made sense to me from what I understood in the past, but two things
are for sure about this: it doesn't work that way, and I've been told
several times that it's not going to work that way even in cases where
it is apparently working. I don't have sources to cite, but it's come
up a few times that the goal is one entry per service and you talk to
the service to find out all of its details - major versions, micro
versions, etc.


2. Are there any guidelines, rules, expectations,

Re: [openstack-dev] [Neutron] Neutron team social event in Barcelona

2016-10-19 Thread Ian Wells
+1

On 14 October 2016 at 11:30, Miguel Lavalle  wrote:

> Dear Neutrinos,
>
> I am organizing a social event for the team on Thursday 27th at 19:30.
> After doing some Google research, I am proposing Raco de la Vila, which is
> located in Poblenou: http://www.racodelavila.com/en/index.htm. The menu
> is here: http://www.racodelavila.com/en/carta-racodelavila.htm
>
> It is easy to get there by subway from the Summit venue:
> https://goo.gl/maps/HjaTEcBbDUR2. I made a reservation for 25 people
> under 'Neutron' or "Miguel Lavalle". Please confirm your attendance so we
> can get a final count.
>
> Here's some reviews: https://www.tripadvisor.com/
> Restaurant_Review-g187497-d1682057-Reviews-Raco_De_La_
> Vila-Barcelona_Catalonia.html
>
> Cheers
>
> Miguel
>


[openstack-dev] Running a skipped tempest test: test_connectivity_between_vms_on_different_networks

2016-10-19 Thread Yossi Tamarov
Hello everyone,
Is there a way to force this tempest test to run?
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks

Currently, I'm running the following command, which produces the output below.
Thanks for any help,
Joseph.
[root@devel tempest]# ostestr --regex '(?!.*\[.*\bslow\b.*\])('tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks')'
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./} ${OS_TEST_PATH:-./tempest/test_discover} --list
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./} ${OS_TEST_PATH:-./tempest/test_discover}  --load-list /tmp/tmphXpVFo
{0} tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks ... SKIPPED: Skipped until Bug: 1610994 is resolved.

==
Totals
==
Ran: 1 tests in 6. sec.
 - Passed: 0
 - Skipped: 1
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 0.0006 sec.

==
Worker Balance
==
 - Worker 0 (1 tests) => 0:00:00.000619

No tests were successful during the run

Slowest Tests:

Test id                                                                                                                                                                   Runtime (s)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------  -----------
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks[compute,id-1546850e-fbaa-42f5-8b5f-03d8a6a95f15,network]  0.001
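
The SKIPPED result comes from a skip decorator applied to the test, which is why no regex can force it to run; deleting the decorator line in the test source (as suggested in the reply on this thread) is the usual workaround. A simplified stand-in for tempest's skip decorator, showing the mechanics with plain unittest — this is an illustration, not tempest's actual implementation:

```python
import unittest

def skip_because(bug):
    """Simplified stand-in for tempest's skip decorator (illustration only)."""
    def decorator(test_func):
        reason = "Skipped until Bug: %s is resolved." % bug
        return unittest.skip(reason)(test_func)
    return decorator

class TestNetworkBasicOps(unittest.TestCase):
    @skip_because(bug="1610994")  # delete this line to let the test run
    def test_connectivity_between_vms_on_different_networks(self):
        self.assertTrue(True)

# Running the case records a skip with the bug reason instead of executing it.
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestNetworkBasicOps).run(result)
print(result.skipped)
```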


[openstack-dev] FW: Your draft logo & a sneak peek

2016-10-19 Thread Claudiu Belu
Hellou,

I just received a draft version of our project logo, using the mascot we 
selected together. A final version (and some cool swag) will be ready for us 
before the Project Team Gathering in February. Before they make our logo final, 
they want to be sure we're happy with our mascot.

We can discuss any concerns in Barcelona and you can also provide direct 
feedback to the designers: http://tinyurl.com/OSmascot  Logo feedback is due 
Friday, Nov. 11. To get a sense of how ours stacks up to others, check out this 
sneak preview of several dozen draft logos from our community: 
https://youtu.be/JmMTCWyY8Y4




Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Brian Curtin
On Wed, Oct 19, 2016 at 2:03 PM, Sean Dague  wrote:
> On 10/19/2016 01:40 PM, Brian Curtin wrote:
>> On Wed, Oct 19, 2016 at 12:59 PM, Jay Pipes  wrote:
>>> On 10/19/2016 05:32 PM, Brian Curtin wrote:

 I'm currently facing what looks more and more like an impossible
 problem in determining the root of each service on a given cloud. It
 is apparently a free-for-all in how endpoints can be structured, and I
 think we're out of ways to approach it that catch all of the ways that
 all people can think of.

 In openstacksdk, we can no longer use the service catalog for
 determining each service's endpoints. Among other things, this is due
 to a combination of some versions of some services not actually being
 listed, and with things heading the direction of version-less services
 anyway. Recently we changed to using the service catalog as a pointer
 to where services live and then try to find the root of that service
 by stripping the path down and making some extra requests on startup
 to find what's offered. Despite a few initial snags, this now works
 reasonably well in a majority of cases.

 We have seen endpoints structured in the following ways:
  A. subdomains, e.g., https://service.cloud.com/v2
  B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
 more paths in between the root and /service/)
  C. service-specific ports, e.g., https://cloud.com:1234/v2
  D. both A and B plus ports

 Within all of these, we can find the root of the given service just
 fine. We split the path and build successively longer paths starting
 from the root. In the above examples, we need to hit the path just
 short of the /v2, so in B it actually takes two requests as we'd make
 one to cloud.com which fails, but then a second one to
 cloud.com/service gives us what we need.

 However, another case came up: the root of all endpoints is itself
 another service. That makes it look like this:

  E. https://cloud.com:/service/v2
  F. https://cloud.com:/otherservice

 In this case, https://cloud.com: is keystone, so trying to get E's
 base by going from the root and outward will give me a versions
 response I can parse properly, but it points to keystone. We then end
 up building requests for 'service' that go to keystone endpoints and
 end up failing. We're doing this using itertools.accumulate on the
 path fragments, so you might think 'just throw it through
 `reversed()`' and go the other way. If we do that, we'll also get a
 versions response that we can parse, but it's the v2 specific info,
 not all available versions.

 So now that we can't reliably go from the left, and we definitely
 can't go from the right, how about the middle?

 This sounds ridiculous, and if it sounds familiar it's because they
 devise a "middle out" algorithm on the show Silicon Valley, but in
 most cases it'd actually work. In E above, it'd be fine. However,
 depending on the number of path fragments and which direction we chose
 to move first, we'd sometimes hit either a version-specific response
 or another service's response, so it's not reliable.

 Ultimately, I would like to know how something like this can be solved.

 1. Is there any reliable, functional, and accurate programmatic way to
 get the versions and endpoints that all services on a cloud offer?
>>>
>>>
>>> The Keystone service catalog should be the thing that provides the endpoints
>>> for all services in the cloud. Within each service, determining the
>>> (micro)version of the API is unfortunately going to be a per-service
>>> endeavour. For some APIs, a microversion header is returned, others don't
>>> have microversions. The microversion header is unfortunately not
>>> standardized for all APIs that use microversions, though a number of us
>>> would like to see a single:
>>>
>>> OpenStack-API-Version:  , ...
>>>
>>> header supported. This is the header supported in the new placement REST
>>> API, for what it's worth.
>>
>> I get the microversion part, and we support that (for some degree of
>> support), but this is about the higher level major versions. The
>> example that started this was Keystone only listing a v2 endpoint in
>> the service catalog, at least on devstack. I need to be able to hit v3
>> APIs when a user wants to do v3 things, regardless of which version
>> they auth to, so the way to get it was to get the root and go from
>> there. That both versions weren't listed was initially confusing to
>> me, but that's where the suggestion of "go to the root and get
>> everything" started out.
>>
>> The service catalog providing all of the available endpoints
>> made sense to me from what I understood in the past, but two things
>> are for sure about this: it doesn't work that way, and I've been told

[openstack-dev] Project mascot logos - sneak peek

2016-10-19 Thread Heidi Joy Tretheway
Hi Developers, 
Thanks for waiting patiently for news on your project logos! Our team of 
illustrators has been working hard on nearly 60 illustrations, and we have a 
sneak peek for you here: https://youtu.be/JmMTCWyY8Y4 


I'm reaching out to the PTLs individually to share your team's draft logo, so 
you should have it in hand early next week (about half have gone out already). 
Feel free to share the logo within your team (though it's best to wait for the 
final version before making it public). You'll get the final version prior to 
the Project Team Gathering in Atlanta, plus some great swag.

If you want to give direct feedback on your mascot to the designers, go here: 
http://tinyurl.com/OSmascot  - the deadline for 
feedback is Nov. 11. Feel free to hit me up with questions and comments, and 
thanks for your wonderfully vibrant and inventive mascots!



Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769  | Skype: heidi.tretheway 

     






[openstack-dev] [Magnum] Draft logo & a sneak peek

2016-10-19 Thread Hongbin Lu
Hi team,

Please find below for the draft of Magnum mascot.

Best regards,
Hongbin

From: Heidi Joy Tretheway [mailto:heidi...@openstack.org]
Sent: October-19-16 2:54 PM
To: Hongbin Lu
Subject: Your draft logo & a sneak peek

Hi Hongbin,

We're excited to show you the draft version of your project logo, attached. We 
want to give you and your team a chance to see the mascot illustrations before 
we make them official, so we decided to make Barcelona the draft target, with 
final logos ready by the Project Team Gathering in Atlanta in February.

Our illustrators worked as fast as possible to draft nearly 60 logos, and we're 
thrilled to see how they work as a family. Here's a 50-second "sneak peek" at 
how they came together: https://youtu.be/JmMTCWyY8Y4

We welcome you to share this logo with your team and discuss it in Barcelona. 
We're very happy to take feedback on it if we've missed the mark. The style of 
the logos is consistent across projects, and we did our best to incorporate any 
special requests, such as an element of an animal that is especially important, 
or a reference to an old logo.

We ask that you don't start using this logo now since it's a draft. Here's what 
you can expect for the final product:

  *   A horizontal version of the logo, including your mascot, project name and 
the words "An OpenStack Community project"
  *   A square(ish) version of the logo, including all of the above
  *   A mascot-only version of the logo
  *   Stickers for all project teams distributed at the PTG
  *   One piece of swag that incorporates all project mascots, such as a deck 
of playing cards, distributed at the PTG
  *   All digital files will be available through the website

We know this is a busy time for you, so to take some of the burden of 
coordinating feedback off you, we made a feedback form: 
http://tinyurl.com/OSmascot  You are also welcome to reach out to Heidi Joy 
directly with questions or concerns. Please provide feedback by Friday, Nov. 
11, so that we can request revisions from the illustrators if needed. Or, if 
this logo looks great, just reply to this email and you don't need to take any 
further action.

Thank you!
Heidi Joy Tretheway - project lead
Todd Morey - creative lead

P.S. Here's an email that you can copy/paste to send to your team (remember to 
attach your logo from my email):

Hi team,
I just received a draft version of our project logo, using the mascot we 
selected together. A final version (and some cool swag) will be ready for us 
before the Project Team Gathering in February. Before they make our logo final, 
they want to be sure we're happy with our mascot.

We can discuss any concerns in Barcelona and you can also provide direct 
feedback to the designers: http://tinyurl.com/OSmascot  Logo feedback is due 
Friday, Nov. 11. To get a sense of how ours stacks up to others, check out this 
sneak preview of several dozen draft logos from our community: 
https://youtu.be/JmMTCWyY8Y4


Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: 
heidi.tretheway





[openstack-dev] Does anyone use the proxy_token/proxy_tenant_id stuff in the various clients?

2016-10-19 Thread Matt Riedemann
python-novaclient has these proxy_token and proxy_tenant_id kwargs 
available when constructing the client, added way back in 2011 with no 
docs or tests:


https://github.com/openstack/python-novaclient/commit/2c3a865f6b408d85aaeaafafd9ff9cdcee5d8cb4

Those have been copied into the various other clients that were 
originally written based on novaclient:


http://codesearch.openstack.org/?q=proxy_tenant_id&i=nope&files=&repos=

As far as I can tell this is something that predated service users, or 
the services having credentials for other services to act on an 
end-users behalf, for example, for nova to create a floating IP in 
neutron for an end user.


Is anyone else aware of any other usage of these kwargs? If not, we're 
going to deprecate them in python-novaclient and we could probably also 
do that in cinder/manila/trove clients as well.


--

Thanks,

Matt Riedemann
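
If deprecation goes ahead, the conventional pattern is to keep accepting the kwargs for a cycle while emitting a DeprecationWarning. A sketch against a hypothetical client class — not novaclient's actual code:

```python
import warnings

class Client(object):
    """Hypothetical client sketch showing deprecation of the proxy_* kwargs."""

    def __init__(self, username=None, api_key=None,
                 proxy_token=None, proxy_tenant_id=None):
        if proxy_token is not None or proxy_tenant_id is not None:
            # Warn but keep working, so existing callers get a cycle to migrate.
            warnings.warn(
                "proxy_token and proxy_tenant_id are deprecated and will be "
                "removed; use service credentials instead.",
                DeprecationWarning, stacklevel=2)
        self.username = username
        self.api_key = api_key
        self.proxy_token = proxy_token
        self.proxy_tenant_id = proxy_tenant_id
```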




Re: [openstack-dev] [tripleo][ironic][puppet] Spine/Leaf: Adding Multiple Subnets to ironic-inspector-dnsmasq

2016-10-19 Thread Dan Sneddon
On 10/19/2016 10:33 AM, Dan Sneddon wrote:
> I am doing research to support the spec for TripleO deployment on
> routed networks [1]. I would like some input on how to represent
> multiple subnet ranges for the provisioning network in undercloud.conf.
> 
> The Ironic Inspector dnsmasq service is currently configured using the
> puppet-ironic module, and the range of IP addresses is taken directly
> from undercloud.conf. For example, here is the .erb which configures
> /etc/ironic-inspector/dnsmasq.conf if using TFTP [2]:
> 
> ## inspector_dnsmasq_tftp.erb ##
> port=0
> interface=<%= @dnsmasq_interface %>
> bind-interfaces
> dhcp-range=<%= @dnsmasq_ip_range %>,29
> dhcp-boot=pxelinux.0,localhost.localdomain,<%= @dnsmasq_local_ip %>
> dhcp-sequential-ip
> 
> 
> Since there is only one dnsmasq_ip_range, only a single subnet is
> served via DHCP. What I would like to do is extend the undercloud.conf
> to support multiple IP ranges, and I'm looking for input on the best
> way to represent the data.
> 
> I am not sure if we can be fully backwards-compatible here. My gut
> feeling is no, unless we leave the existing parameters as-is and add
> something like an "additional_inspection_ipranges" parameter. The data
> that will need to be represented for each subnet is:
> 
> * Network subnet
> * Start and end of inspection IP range
> * Subnet mask (could be determined by parsing cidr, like 172.20.1.0/24)
> * Gateway router for the subnet
> 
> We could potentially represent this data as a JSON, or as a list of
> strings. Here are some potential examples:
> 
> JSON:
> additional_inspection_ipranges = [
>   {
> "subnet": "172.20.1.0/24",
> "start": "172.20.1.100",
> "end": "172.20.1.120",
> "gateway": "172.20.1.254"
>   },
>   {
> "subnet": "172.20.2.0/24",
> "start": "172.20.2.100",
> "end": "172.20.2.120",
> "gateway": "172.20.2.254"
>   }
> ]
> 
> String:
> additional_inspection_ipranges =
> "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254;172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
> 
> Either of these might get unwieldy depending on the number of networks.
> Perhaps we could have a repeating parameter? Something like this:
> 
> additional_inspection_iprange =
> "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254"
> additional_inspection_iprange =
> "172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
> 
> I would like some feedback about how to represent this data in a way
> that it can be easily parsed by Puppet, while remaining readable. Any
> suggestions would be very much appreciated.
> 
> [1] - https://review.openstack.org/#/c/377088
> [2] -
> https://github.com/openstack/puppet-ironic/blob/master/templates/inspector_dnsmasq_tftp.erb
> 

After writing this, I realized that I neglected to present another data
point. The Neutron DHCP agent handles this situation very well. If
there are multiple subnets that belong to a network, the ranges are all
included, and each range has a tag that matches a default-gateway that
is taken from the subnet object.
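For context, the tagged dnsmasq configuration the Neutron DHCP agent
generates looks roughly like the sketch below (addresses and tag names
are illustrative, not verbatim agent output):

```
# One tagged dhcp-range per subnet on the network...
dhcp-range=set:subnet-1,172.20.1.100,172.20.1.120,255.255.255.0,86400s
dhcp-range=set:subnet-2,172.20.2.100,172.20.2.120,255.255.255.0,86400s
# ...and a per-tag router option carrying each subnet's gateway.
dhcp-option=tag:subnet-1,option:router,172.20.1.254
dhcp-option=tag:subnet-2,option:router,172.20.2.254
```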

Would it be feasible to modify ironic-inspector and
ironic-inspector-dnsmasq to instead get their configuration from a given
network? If the provisioning network is "ctlplane", then the values
would be taken from the "ctlplane" network. This would allow us to
manipulate the values for the ironic-inspector-dnsmasq via Heat
templates or even the Neutron command-line/python client.

The advantage of this approach is that it may have side benefits for
tenant bare metal use cases.

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic][puppet] Spine/Leaf: Adding Multiple Subnets to ironic-inspector-dnsmasq

2016-10-19 Thread Alex Schultz
On Wed, Oct 19, 2016 at 11:33 AM, Dan Sneddon  wrote:
> I am doing research to support the spec for TripleO deployment on
> routed networks [1]. I would like some input on how to represent
> multiple subnet ranges for the provisioning network in undercloud.conf.
>
> The Ironic Inspector dnsmasq service is currently configured using the
> puppet-ironic module, and the range of IP addresses is taken directly
> from undercloud.conf. For example, here is the .erb which configures
> /etc/ironic-inspector/dnsmasq.conf if using TFTP [2]:
>
> ## inspector_dnsmasq_tftp.erb ##
> port=0
> interface=<%= @dnsmasq_interface %>
> bind-interfaces
> dhcp-range=<%= @dnsmasq_ip_range %>,29
> dhcp-boot=pxelinux.0,localhost.localdomain,<%= @dnsmasq_local_ip %>
> dhcp-sequential-ip
> 
>
> Since there is only one dnsmasq_ip_range, only a single subnet is
> served via DHCP. What I would like to do is extend the undercloud.conf
> to support multiple IP ranges, and I'm looking for input on the best
> way to represent the data.
>

So I think this is just an issue with the current implementation.  I
think the awkwardness comes from trying to configure dnsmasq
specifically for the inspector use case and we may have tied them too
closely together in an inflexible fashion.  dnsmasq supports a
configuration folder that we could use instead which would allow us to
create as many of these as we'd like.  For example over in fuel[0],
the fuel master supports something similar and it uses multiple
configuration files to handle the case of additional dhcp ranges.  We
could extend the puppet-ironic to support something similar if we can
configure the underlying dnsmasq to point to a configuration
directory. If we create an ironic::inspector::dhcp_range resource as
the way to configure these extra ranges, then it is just an array of
hashes in hiera and becomes much easier to manage than the proposed
data representation.  Additionally this would support the current
implementation (no change) and multiple ranges (new resource) with
little effort.

Thanks,
-Alex

[0] 
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/fuel/manifests/dnsmasq/dhcp_range.pp
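
A minimal sketch of what such a defined type might look like in
puppet-ironic, loosely modeled on fuel's dhcp_range.pp above (the
resource name, parameters, and paths here are hypothetical):

```puppet
# Hypothetical defined type (sketch): emits one dnsmasq conf.d
# fragment per inspection DHCP range. Parameter names illustrative.
define ironic::inspector::dhcp_range (
  $range_start,
  $range_end,
  $netmask = '255.255.255.0',
  $gateway = undef,
) {
  file { "/etc/dnsmasq.d/inspector-${name}.conf":
    ensure  => file,
    content => "dhcp-range=set:${name},${range_start},${range_end},${netmask}\ndhcp-option=tag:${name},option:router,${gateway}\n",
    notify  => Service['dnsmasq'],
  }
}
```

The extra ranges would then live in hiera as a hash of hashes, fed to
something like create_resources('ironic::inspector::dhcp_range', ...).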

> I am not sure if we can be fully backwards-compatible here. My gut
> feeling is no, unless we leave the existing parameters as-is and add
> something like an "additional_inspection_ipranges" parameter. The data
> that will need to be represented for each subnet is:
>
> * Network subnet
> * Start and end of inspection IP range
> * Subnet mask (could be determined by parsing cidr, like 172.20.1.0/24)
> * Gateway router for the subnet
>
> We could potentially represent this data as a JSON, or as a list of
> strings. Here are some potential examples:
>
> JSON:
> additional_inspection_ipranges = [
>   {
> "subnet": "172.20.1.0/24",
> "start": "172.20.1.100",
> "end": "172.20.1.120",
> "gateway": "172.20.1.254"
>   },
>   {
> "subnet": "172.20.2.0/24",
> "start": "172.20.2.100",
> "end": "172.20.2.120",
> "gateway": "172.20.2.254"
>   }
> ]
>
> String:
> additional_inspection_ipranges =
> "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254;172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
>
> Either of these might get unwieldy depending on the number of networks.
> Perhaps we could have a repeating parameter? Something like this:
>
> additional_inspection_iprange =
> "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254"
> additional_inspection_iprange =
> "172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
>
> I would like some feedback about how to represent this data in a way
> that it can be easily parsed by Puppet, while remaining readable. Any
> suggestions would be very much appreciated.
>
> [1] - https://review.openstack.org/#/c/377088
> [2] -
> https://github.com/openstack/puppet-ironic/blob/master/templates/inspector_dnsmasq_tftp.erb
> --
> Dan Sneddon |  Senior Principal OpenStack Engineer
> dsned...@redhat.com |  redhat.com/openstack
> dsneddon:irc|  @dxs:twitter
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Sean Dague
On 10/19/2016 01:40 PM, Brian Curtin wrote:
> On Wed, Oct 19, 2016 at 12:59 PM, Jay Pipes  wrote:
>> On 10/19/2016 05:32 PM, Brian Curtin wrote:
>>>
>>> I'm currently facing what looks more and more like an impossible
>>> problem in determining the root of each service on a given cloud. It
>>> is apparently a free-for-all in how endpoints can be structured, and I
>>> think we're out of ways to approach it that catch all of the ways that
>>> all people can think of.
>>>
>>> In openstacksdk, we can no longer use the service catalog for
>>> determining each service's endpoints. Among other things, this is due
>>> to a combination of some versions of some services not actually being
>>> listed, and with things heading the direction of version-less services
>>> anyway. Recently we changed to using the service catalog as a pointer
>>> to where services live and then try to find the root of that service
>>> by stripping the path down and making some extra requests on startup
>>> to find what's offered. Despite a few initial snags, this now works
>>> reasonably well in a majority of cases.
>>>
>>> We have seen endpoints structured in the following ways:
>>>  A. subdomains, e.g., https://service.cloud.com/v2
>>>  B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
>>> more paths in between the root and /service/)
>>>  C. service-specific ports, e.g., https://cloud.com:1234/v2
>>>  D. both A and B plus ports
>>>
>>> Within all of these, we can find the root of the given service just
>>> fine. We split the path and build successively longer paths starting
>>> from the root. In the above examples, we need to hit the path just
>>> short of the /v2, so in B it actually takes two requests as we'd make
>>> one to cloud.com which fails, but then a second one to
>>> cloud.com/service gives us what we need.
>>>
>>> However, another case came up: the root of all endpoints is itself
>>> another service. That makes it look like this:
>>>
>>>  E. https://cloud.com:/service/v2
>>>  F. https://cloud.com:/otherservice
>>>
>>> In this case, https://cloud.com: is keystone, so trying to get E's
>>> base by going from the root and outward will give me a versions
>>> response I can parse properly, but it points to keystone. We then end
>>> up building requests for 'service' that go to keystone endpoints and
>>> end up failing. We're doing this using itertools.accumulate on the
>>> path fragments, so you might think 'just throw it through
>>> `reversed()`' and go the other way. If we do that, we'll also get a
>>> versions response that we can parse, but it's the v2 specific info,
>>> not all available versions.
>>>
>>> So now that we can't reliably go from the left, and we definitely
>>> can't go from the right, how about the middle?
>>>
>>> This sounds ridiculous, and if it sounds familiar it's because they
>>> devise a "middle out" algorithm on the show Silicon Valley, but in
>>> most cases it'd actually work. In E above, it'd be fine. However,
>>> depending on the number of path fragments and which direction we chose
>>> to move first, we'd sometimes hit either a version-specific response
>>> or another service's response, so it's not reliable.
>>>
>>> Ultimately, I would like to know how something like this can be solved.
>>>
>>> 1. Is there any reliable, functional, and accurate programmatic way to
>>> get the versions and endpoints that all services on a cloud offer?
>>
>>
>> The Keystone service catalog should be the thing that provides the endpoints
>> for all services in the cloud. Within each service, determining the
>> (micro)version of the API is unfortunately going to be a per-service
>> endeavour. For some APIs, a microversion header is returned, others don't
>> have microversions. The microversion header is unfortunately not
>> standardized for all APIs that use microversions, though a number of us
>> would like to see a single:
>>
>> OpenStack-API-Version:  , ...
>>
>> header supported. This is the header supported in the new placement REST
>> API, for what it's worth.
> 
> I get the microversion part, and we support that (for some degree of
> support), but this is about the higher level major versions. The
> example that started this was Keystone only listing a v2 endpoint in
> the service catalog, at least on devstack. I need to be able to hit v3
> APIs when a user wants to do v3 things, regardless of which version
> they auth to, so the way to get it was to get the root and go from
> there. That both versions weren't listed was initially confusing to
> me, but that's where the suggestion of "go to the root and get
> everything" started out.
> 
> The service catalog providing all of the available endpoints
> made sense to me from what I understood in the past, but two things
> are for sure about this: it doesn't work that way, and I've been told
> several times that it's not going to work that way even in cases where
> it is apparently working. I don't have sources to cite, but it's come
> up a few times that the goal is one entry per service and you talk to
> the service to find out all of its details - major versions, micro
> versions, etc.

Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Brian Curtin
On Wed, Oct 19, 2016 at 12:59 PM, Jay Pipes  wrote:
> On 10/19/2016 05:32 PM, Brian Curtin wrote:
>>
>> I'm currently facing what looks more and more like an impossible
>> problem in determining the root of each service on a given cloud. It
>> is apparently a free-for-all in how endpoints can be structured, and I
>> think we're out of ways to approach it that catch all of the ways that
>> all people can think of.
>>
>> In openstacksdk, we can no longer use the service catalog for
>> determining each service's endpoints. Among other things, this is due
>> to a combination of some versions of some services not actually being
>> listed, and with things heading the direction of version-less services
>> anyway. Recently we changed to using the service catalog as a pointer
>> to where services live and then try to find the root of that service
>> by stripping the path down and making some extra requests on startup
>> to find what's offered. Despite a few initial snags, this now works
>> reasonably well in a majority of cases.
>>
>> We have seen endpoints structured in the following ways:
>>  A. subdomains, e.g., https://service.cloud.com/v2
>>  B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
>> more paths in between the root and /service/)
>>  C. service-specific ports, e.g., https://cloud.com:1234/v2
>>  D. both A and B plus ports
>>
>> Within all of these, we can find the root of the given service just
>> fine. We split the path and build successively longer paths starting
>> from the root. In the above examples, we need to hit the path just
>> short of the /v2, so in B it actually takes two requests as we'd make
>> one to cloud.com which fails, but then a second one to
>> cloud.com/service gives us what we need.
>>
>> However, another case came up: the root of all endpoints is itself
>> another service. That makes it look like this:
>>
>>  E. https://cloud.com:/service/v2
>>  F. https://cloud.com:/otherservice
>>
>> In this case, https://cloud.com: is keystone, so trying to get E's
>> base by going from the root and outward will give me a versions
>> response I can parse properly, but it points to keystone. We then end
>> up building requests for 'service' that go to keystone endpoints and
>> end up failing. We're doing this using itertools.accumulate on the
>> path fragments, so you might think 'just throw it through
>> `reversed()`' and go the other way. If we do that, we'll also get a
>> versions response that we can parse, but it's the v2 specific info,
>> not all available versions.
>>
>> So now that we can't reliably go from the left, and we definitely
>> can't go from the right, how about the middle?
>>
>> This sounds ridiculous, and if it sounds familiar it's because they
>> devise a "middle out" algorithm on the show Silicon Valley, but in
>> most cases it'd actually work. In E above, it'd be fine. However,
>> depending on the number of path fragments and which direction we chose
>> to move first, we'd sometimes hit either a version-specific response
>> or another service's response, so it's not reliable.
>>
>> Ultimately, I would like to know how something like this can be solved.
>>
>> 1. Is there any reliable, functional, and accurate programmatic way to
>> get the versions and endpoints that all services on a cloud offer?
>
>
> The Keystone service catalog should be the thing that provides the endpoints
> for all services in the cloud. Within each service, determining the
> (micro)version of the API is unfortunately going to be a per-service
> endeavour. For some APIs, a microversion header is returned, others don't
> have microversions. The microversion header is unfortunately not
> standardized for all APIs that use microversions, though a number of us
> would like to see a single:
>
> OpenStack-API-Version:  , ...
>
> header supported. This is the header supported in the new placement REST
> API, for what it's worth.

I get the microversion part, and we support that (for some degree of
support), but this is about the higher level major versions. The
example that started this was Keystone only listing a v2 endpoint in
the service catalog, at least on devstack. I need to be able to hit v3
APIs when a user wants to do v3 things, regardless of which version
they auth to, so the way to get it was to get the root and go from
there. That both versions weren't listed was initially confusing to
me, but that's where the suggestion of "go to the root and get
everything" started out.

The service catalog providing all of the available endpoints
made sense to me from what I understood in the past, but two things
are for sure about this: it doesn't work that way, and I've been told
several times that it's not going to work that way even in cases where
it is apparently working. I don't have sources to cite, but it's come
up a few times that the goal is one entry per service and you talk to
the service to find out all of its details - major versions, micro
versions, etc.

[openstack-dev] [tripleo][ironic][puppet] Spine/Leaf: Adding Multiple Subnets to ironic-inspector-dnsmasq

2016-10-19 Thread Dan Sneddon
I am doing research to support the spec for TripleO deployment on
routed networks [1]. I would like some input on how to represent
multiple subnet ranges for the provisioning network in undercloud.conf.

The Ironic Inspector dnsmasq service is currently configured using the
puppet-ironic module, and the range of IP addresses is taken directly
from undercloud.conf. For example, here is the .erb which configures
/etc/ironic-inspector/dnsmasq.conf if using TFTP [2]:

## inspector_dnsmasq_tftp.erb ##
port=0
interface=<%= @dnsmasq_interface %>
bind-interfaces
dhcp-range=<%= @dnsmasq_ip_range %>,29
dhcp-boot=pxelinux.0,localhost.localdomain,<%= @dnsmasq_local_ip %>
dhcp-sequential-ip


Since there is only one dnsmasq_ip_range, only a single subnet is
served via DHCP. What I would like to do is extend the undercloud.conf
to support multiple IP ranges, and I'm looking for input on the best
way to represent the data.

I am not sure if we can be fully backwards-compatible here. My gut
feeling is no, unless we leave the existing parameters as-is and add
something like an "additional_inspection_ipranges" parameter. The data
that will need to be represented for each subnet is:

* Network subnet
* Start and end of inspection IP range
* Subnet mask (could be determined by parsing cidr, like 172.20.1.0/24)
* Gateway router for the subnet

We could potentially represent this data as a JSON, or as a list of
strings. Here are some potential examples:

JSON:
additional_inspection_ipranges = [
  {
"subnet": "172.20.1.0/24",
"start": "172.20.1.100",
"end": "172.20.1.120",
"gateway": "172.20.1.254"
  },
  {
"subnet": "172.20.2.0/24",
"start": "172.20.2.100",
"end": "172.20.2.120",
"gateway": "172.20.2.254"
  }
]

String:
additional_inspection_ipranges =
"172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254;172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"

Either of these might get unwieldy depending on the number of networks.
Perhaps we could have a repeating parameter? Something like this:

additional_inspection_iprange =
"172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254"
additional_inspection_iprange =
"172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"

I would like some feedback about how to represent this data in a way
that it can be easily parsed by Puppet, while remaining readable. Any
suggestions would be very much appreciated.

[1] - https://review.openstack.org/#/c/377088
[2] -
https://github.com/openstack/puppet-ironic/blob/master/templates/inspector_dnsmasq_tftp.erb
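
As an illustration of how the string form above would be consumed, a
small Python sketch (the field order and separators follow the example
above; the function name and dict keys are hypothetical):

```python
def parse_inspection_ipranges(value):
    """Parse 'subnet,start,end,netmask,gateway;...' into dicts."""
    ranges = []
    for chunk in value.split(";"):
        subnet, start, end, netmask, gateway = chunk.split(",")
        ranges.append({
            "subnet": subnet,
            "start": start,
            "end": end,
            "netmask": netmask,
            "gateway": gateway,
        })
    return ranges

ranges = parse_inspection_ipranges(
    "172.20.1.0,172.20.1.100,172.20.1.120,255.255.255.0,172.20.1.254;"
    "172.20.2.0,172.20.2.100,172.20.2.120,255.255.255.0,172.20.2.254"
)
print(ranges[1]["gateway"])  # 172.20.2.254
```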
-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [architecture] Cancelling next two Architecture Working Group IRC meetings

2016-10-19 Thread Clint Byrum
With many preparing to travel to the summit, and our summit fishbowl
session next week [1] we're going to go ahead and cancel this week and
next week's Architecture WG IRC meetings.

For those attending the summit, I hope to see you all there with your
proposals for Architecture WG topics. And for those of you not, please be
sure to watch the linked etherpads, and we'll see you in 3 weeks on IRC!

[1] 
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16922/cross-project-workshops-architecture-working-group-fishbowl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Matt Riedemann

On 10/19/2016 10:32 AM, Brian Curtin wrote:

I'm currently facing what looks more and more like an impossible
problem in determining the root of each service on a given cloud. It
is apparently a free-for-all in how endpoints can be structured, and I
think we're out of ways to approach it that catch all of the ways that
all people can think of.

In openstacksdk, we can no longer use the service catalog for
determining each service's endpoints. Among other things, this is due
to a combination of some versions of some services not actually being
listed, and with things heading the direction of version-less services
anyway. Recently we changed to using the service catalog as a pointer
to where services live and then try to find the root of that service
by stripping the path down and making some extra requests on startup
to find what's offered. Despite a few initial snags, this now works
reasonably well in a majority of cases.

We have seen endpoints structured in the following ways:
 A. subdomains, e.g., https://service.cloud.com/v2
 B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
more paths in between the root and /service/)
 C. service-specific ports, e.g., https://cloud.com:1234/v2
 D. both A and B plus ports

Within all of these, we can find the root of the given service just
fine. We split the path and build successively longer paths starting
from the root. In the above examples, we need to hit the path just
short of the /v2, so in B it actually takes two requests as we'd make
one to cloud.com which fails, but then a second one to
cloud.com/service gives us what we need.
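The left-to-right probing described above can be sketched as follows
(a simplification of the SDK's approach; names are illustrative):

```python
from itertools import accumulate
from urllib.parse import urlsplit

def candidate_bases(endpoint):
    """Yield successively longer URL prefixes, root first."""
    parts = urlsplit(endpoint)
    root = f"{parts.scheme}://{parts.netloc}"
    fragments = [p for p in parts.path.split("/") if p]
    yield root
    for path in accumulate(fragments, lambda a, b: f"{a}/{b}"):
        yield f"{root}/{path}"

# For case B, each candidate is tried in turn until one returns a
# parseable versions document:
print(list(candidate_bases("https://cloud.com/service/v2")))
# ['https://cloud.com', 'https://cloud.com/service',
#  'https://cloud.com/service/v2']
```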

However, another case came up: the root of all endpoints is itself
another service. That makes it look like this:

 E. https://cloud.com:/service/v2
 F. https://cloud.com:/otherservice

In this case, https://cloud.com: is keystone, so trying to get E's
base by going from the root and outward will give me a versions
response I can parse properly, but it points to keystone. We then end
up building requests for 'service' that go to keystone endpoints and
end up failing. We're doing this using itertools.accumulate on the
path fragments, so you might think 'just throw it through
`reversed()`' and go the other way. If we do that, we'll also get a
versions response that we can parse, but it's the v2 specific info,
not all available versions.

So now that we can't reliably go from the left, and we definitely
can't go from the right, how about the middle?

This sounds ridiculous, and if it sounds familiar it's because they
devise a "middle out" algorithm on the show Silicon Valley, but in
most cases it'd actually work. In E above, it'd be fine. However,
depending on the number of path fragments and which direction we chose
to move first, we'd sometimes hit either a version-specific response
or another service's response, so it's not reliable.

Ultimately, I would like to know how something like this can be solved.

1. Is there any reliable, functional, and accurate programmatic way to
get the versions and endpoints that all services on a cloud offer?

2. Are there any guidelines, rules, expectations, or other
documentation on how services can be installed and their endpoints
structured that are helpful to people building apps that use them, not
to those trying to install and operate them? I've looked around a few
times and found nothing useful. A lot of what I've found has
referenced suggestions for operators setting them up behind various
load balancing tools.

3. If 1 and 2 won't actually help me solve this, do you have any other
suggestions that will? We already go left, right, and middle of each
URI, so I'm out of directions to go, and we can't go back to the
service catalog.

Thanks,

Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's a tricky one. Just yesterday I was looking into how Tempest 
creates the service clients it uses and lists versions, e.g. for compute:


https://github.com/openstack/tempest/blob/13.0.0/tempest/lib/services/compute/versions_client.py#L27

I was trying to figure out where that base_url value came from and in 
the case of Tempest it's from an auth provider class, and I think the 
services inside that thing are created from this code at some point:


https://github.com/openstack/tempest/blob/13.0.0/tempest/config.py#L1405

So at a basic level that builds the client with config options for the 
service type (e.g. compute) and endpoint type (e.g. publicURL), so then 
it can lookup the publicURL for the 'compute' service/endpoint in the 
service catalog and then start parsing the endpoint URL to do a GET on 
the root endpoint for compute to get the available API versions.
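
A GET on such a root endpoint typically returns a versions document of
roughly the shape below; here is a sketch of extracting the advertised
major versions from it (the exact document shape varies per service,
so treat this structure as an assumption):

```python
import json

# An abbreviated, keystone-style versions response (illustrative).
doc = json.loads("""
{
  "versions": {
    "values": [
      {"id": "v3.7", "status": "stable",
       "links": [{"rel": "self", "href": "https://cloud.com:5000/v3/"}]},
      {"id": "v2.0", "status": "deprecated",
       "links": [{"rel": "self", "href": "https://cloud.com:5000/v2.0/"}]}
    ]
  }
}
""")

def available_versions(doc):
    """Map each advertised version id to its 'self' link."""
    return {v["id"]: next(l["href"] for l in v["links"]
                          if l["rel"] == "self")
            for v in doc["versions"]["values"]}

print(available_versions(doc))
```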


It doesn't sound like the same case with the SDK though since you don't 
have 

Re: [openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Jay Pipes

On 10/19/2016 05:32 PM, Brian Curtin wrote:

I'm currently facing what looks more and more like an impossible
problem in determining the root of each service on a given cloud. It
is apparently a free-for-all in how endpoints can be structured, and I
think we're out of ways to approach it that catch all of the ways that
all people can think of.

In openstacksdk, we can no longer use the service catalog for
determining each service's endpoints. Among other things, this is due
to a combination of some versions of some services not actually being
listed, and with things heading the direction of version-less services
anyway. Recently we changed to using the service catalog as a pointer
to where services live and then try to find the root of that service
by stripping the path down and making some extra requests on startup
to find what's offered. Despite a few initial snags, this now works
reasonably well in a majority of cases.

We have seen endpoints structured in the following ways:
 A. subdomains, e.g., https://service.cloud.com/v2
 B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
more paths in between the root and /service/)
 C. service-specific ports, e.g., https://cloud.com:1234/v2
 D. both A and B plus ports

Within all of these, we can find the root of the given service just
fine. We split the path and build successively longer paths starting
from the root. In the above examples, we need to hit the path just
short of the /v2, so in B it actually takes two requests as we'd make
one to cloud.com which fails, but then a second one to
cloud.com/service gives us what we need.

However, another case came up: the root of all endpoints is itself
another service. That makes it look like this:

 E. https://cloud.com:/service/v2
 F. https://cloud.com:/otherservice

In this case, https://cloud.com: is keystone, so trying to get E's
base by going from the root and outward will give me a versions
response I can parse properly, but it points to keystone. We then end
up building requests for 'service' that go to keystone endpoints and
end up failing. We're doing this using itertools.accumulate on the
path fragments, so you might think 'just throw it through
`reversed()`' and go the other way. If we do that, we'll also get a
versions response that we can parse, but it's the v2 specific info,
not all available versions.

So now that we can't reliably go from the left, and we definitely
can't go from the right, how about the middle?

This sounds ridiculous, and if it sounds familiar it's because they
devise a "middle out" algorithm on the show Silicon Valley, but in
most cases it'd actually work. In E above, it'd be fine. However,
depending on the number of path fragments and which direction we chose
to move first, we'd sometimes hit either a version-specific response
or another service's response, so it's not reliable.

Ultimately, I would like to know how something like this can be solved.

1. Is there any reliable, functional, and accurate programmatic way to
get the versions and endpoints that all services on a cloud offer?


The Keystone service catalog should be the thing that provides the 
endpoints for all services in the cloud. Within each service, 
determining the (micro)version of the API is unfortunately going to be a 
per-service endeavour. For some APIs, a microversion header is returned, 
others don't have microversions. The microversion header is 
unfortunately not standardized for all APIs that use microversions, 
though a number of us would like to see a single:


OpenStack-API-Version:  , ...

header supported. This is the header supported in the new placement REST 
API, for what it's worth.



2. Are there any guidelines, rules, expectations, or other
documentation on how services can be installed and their endpoints
structured that are helpful to people building apps that use them, not
to those trying to install and operate them? I've looked around a few
times and found nothing useful. A lot of what I've found has
referenced suggestions for operators setting them up behind various
load balancing tools.


I presume you are referring to the "internal" vs "public" endpoint 
stuff? If so, my preference has been that such "internal vs. external" 
routing should be handled via the Keystone service catalog returning a 
set of endpoints depending on the source (or X-forwarded-for) IP. So, 
requests from "internal" networks (for whatever definition of "internal" 
you want) return a set of endpoint URLs reflecting the "internal" endpoints.



3. If 1 and 2 won't actually help me solve this, do you have any other
suggestions that will? We already go left, right, and middle of each
URI, so I'm out of directions to go, and we can't go back to the
service catalog.


I really don't understand why the service catalog should not be the 
thing that we use to catalog the... services. To me it seems obvious the 
Keystone service catalog is the focal point for what's needed here.


Best,
-jay

__

[openstack-dev] [Winstackers][Hyper-V] 26.10.2016 Hyper-V IRC meeting cancelled

2016-10-19 Thread Claudiu Belu
Hello!

Just a heads-up, the 26.10.2016's Hyper-V IRC meeting is cancelled due to the 
OpenStack Summit.

We do, however, have a Winstackers work session on Wednesday, October 26, 
3:05pm-3:45pm at AC Hotel - P3 - Eixample. Feel free to join us then and there!

For more details about the work session, follow the link [1].

[1] 
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17082/winstackers-work-session

Best regards,

Claudiu Belu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][fuel][tripleo][kolla][ansible] newton finals for cycle-trailing projects

2016-10-19 Thread Doug Hellmann
I have prepared a proposed set of tags for the final releases for
cycle-trailing projects that are using pre-release versions [1].
Please review the tags for your deliverables and ensure they are
correct. The release team will approve and tag the releases by the
end of the day tomorrow.

Thanks!
Doug

[1] https://review.openstack.org/#/c/388799

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] next meeting is a spooky one

2016-10-19 Thread milanisko k
lol, I'm definitely carving a special meeting pumpkin :)

Cheers,
milan

On Wed, Oct 19, 2016 at 18:01, Loo, Ruby wrote:

> Hi,
>
>
>
> Since the Barcelona summit is next week, the Monday (Oct 24) ironic
> meeting is cancelled.
>
>
>
> We'll have a spooky meeting the week after (Oct 31). Feel free to come in
> costumes :)
>
>
>
> --ruby
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] IRC Meetings - Canceled for the next 2 weeks

2016-10-19 Thread Hayes, Graham
Hi All,

With the Design Summit a week away, and having no agenda items,
I suggest we skip the meeting this week, and as we will be in Spain
next week, we will also be skipping that meeting.

See you all in Barcelona, or at the meeting on the 2nd of November.

Thanks,

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] next meeting is a spooky one

2016-10-19 Thread Loo, Ruby
Hi,

Since the Barcelona summit is next week, the Monday (Oct 24) ironic meeting is 
cancelled.

We'll have a spooky meeting the week after (Oct 31). Feel free to come in 
costumes :)

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Davanum Srinivas
Adam,

One distinction I heard was that this is just a runtime dependency,
hence my suggestion.

Thanks,
Dims

On Wed, Oct 19, 2016 at 11:00 AM, Adam Harwell  wrote:
> Dims: that wasn't meant as hostile to you, though re-reading it kind of
> sounds that way.
> You were not the first in this thread to suggest bindep, and while your
> links are useful, I don't think it makes a lot of sense for our use case. I
> legitimately can't understand why this *one* dependency (not anything a
> deployer will need to install on their control-plane instances) is suggested
> as a binary dependency, when it is a python module that we include in our
> code just like *everything else* in our requirements file.
>
> On Wed, Oct 19, 2016 at 11:02 PM Adam Harwell  wrote:
>>
>> We literally install every other dependency from pypi with
>> requirements.txt, so I'm struggling understand why all the sudden we need to
>> install this one as a binary, for our devstack specific script, when we are
>> planning a move to a distro that doesn't even support binary packages?
>> Should we switch our entire requirements file to bindep? If not, what makes
>> this different?
>>
>>
>> On Wed, Oct 19, 2016, 22:56 Davanum Srinivas  wrote:
>>>
>>> Adam,
>>>
>>> Have you see this yet?
>>>
>>>
>>> http://docs.openstack.org/infra/bindep/readme.html#writing-requirements-files
>>>
>>> http://codesearch.openstack.org/?q=platform&i=nope&files=bindep.txt&repos=
>>>
>>> Thanks,
>>> Dims
>>>
>>> On Wed, Oct 19, 2016 at 9:40 AM, Adam Harwell 
>>> wrote:
>>> > Yes, but we need to use SOMETHING for our own devstack gate tests --
>>> > maybe
>>> > it is easier to think of our devstack code as a "third party setup",
>>> > and
>>> > that it uses gunicorn for its DIB images (but not every deployer needs
>>> > to).
>>> > In this case, how do we include it? Devstack needs it to run our gate
>>> > jobs,
>>> > which means it has to be in our main codebase, but deployers don't
>>> > necessarily need it for their deployments (though it is the default
>>> > option).
>>> > Do we include it in global-requirements or not? How do we use it in
>>> > devstack
>>> > if it is not in global-requirements? We don't install it as a binary
>>> > because
>>> > the plan is to stay completely distro-independant (or target a distro
>>> > that
>>> > doesn't even HAVE binary packages like cirros). Originally I just put
>>> > the
>>> > line "pip install gunicorn>=19.0" directly in our DIB script, but was
>>> > told
>>> > that was a dirty hack, and that it should be in requirements.txt like
>>> > everything else. I'm not sure I agree, and it seems like maybe others
>>> > are
>>> > suggesting I go back to that method?
>>> >
>>> >  --Adam
>>> >
>>> > On Wed, Oct 19, 2016 at 10:19 PM Hayes, Graham 
>>> > wrote:
>>> >>
>>> >> On 18/10/2016 19:57, Doug Wiegley wrote:
>>> >> >
>>> >> >> On Oct 18, 2016, at 12:42 PM, Doug Hellmann 
>>> >> >> wrote:
>>> >> >>
>>> >> >> Excerpts from Doug Wiegley's message of 2016-10-18 12:21:20 -0600:
>>> >> >>>
>>> >>  On Oct 18, 2016, at 12:10 PM, Doug Hellmann
>>> >>  
>>> >>  wrote:
>>> >> 
>>> >>  Excerpts from Doug Wiegley's message of 2016-10-18 12:00:35
>>> >>  -0600:
>>> >> >
>> >> On Oct 18, 2016, at 11:30 AM, Doug Hellmann wrote:
>>> >> >>
>>> >> >> Excerpts from Doug Wiegley's message of 2016-10-18 09:59:54
>>> >> >> -0600:
>>> >> >>>
>>  On Oct 18, 2016, at 5:14 AM, Ian Cordasco wrote:
>> 
>>  -Original Message-
>>  From: Thierry Carrez
>>  Reply: OpenStack Development Mailing List (not for usage questions)
>>  Date: October 18, 2016 at 03:55:41
>>  To: openstack-dev@lists.openstack.org

[openstack-dev] Endpoint structure: a free-for-all

2016-10-19 Thread Brian Curtin
I'm currently facing what looks more and more like an impossible
problem in determining the root of each service on a given cloud. It
is apparently a free-for-all in how endpoints can be structured, and I
think we're out of ways to approach it that catch all of the ways that
all people can think of.

In openstacksdk, we can no longer use the service catalog for
determining each service's endpoints. Among other things, this is due
to a combination of some versions of some services not actually being
listed, and with things heading the direction of version-less services
anyway. Recently we changed to using the service catalog as a pointer
to where services live and then try to find the root of that service
by stripping the path down and making some extra requests on startup
to find what's offered. Despite a few initial snags, this now works
reasonably well in a majority of cases.

We have seen endpoints structured in the following ways:
 A. subdomains, e.g., https://service.cloud.com/v2
 B. paths, e.g., https://cloud.com/service/v2 (sometimes there are
more paths in between the root and /service/)
 C. service-specific ports, e.g., https://cloud.com:1234/v2
 D. both A and B plus ports

Within all of these, we can find the root of the given service just
fine. We split the path and build successively longer paths starting
from the root. In the above examples, we need to hit the path just
short of the /v2, so in B it actually takes two requests as we'd make
one to cloud.com which fails, but then a second one to
cloud.com/service gives us what we need.

However, another case came up: the root of all endpoints is itself
another service. That makes it look like this:

 E. https://cloud.com:/service/v2
 F. https://cloud.com:/otherservice

In this case, https://cloud.com: is keystone, so trying to get E's
base by going from the root and outward will give me a versions
response I can parse properly, but it points to keystone. We then end
up building requests for 'service' that go to keystone endpoints and
end up failing. We're doing this using itertools.accumulate on the
path fragments, so you might think 'just throw it through
`reversed()`' and go the other way. If we do that, we'll also get a
versions response that we can parse, but it's the v2 specific info,
not all available versions.
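The left-to-right probing described above can be sketched like this (illustrative only — the real SDK code also issues a GET against each candidate and parses the versions document it gets back):

```python
from itertools import accumulate
from urllib.parse import urlsplit

def candidate_roots(url):
    """Yield successively longer URL prefixes, shortest first,
    mirroring the left-to-right probing described above."""
    parts = urlsplit(url)
    base = "%s://%s" % (parts.scheme, parts.netloc)
    fragments = [f for f in parts.path.split("/") if f]
    yield base
    for path in accumulate(fragments, lambda a, b: a + "/" + b):
        yield base + "/" + path

# Case B from the list above: two probes before reaching /service
print(list(candidate_roots("https://cloud.com/service/v2")))
# -> ['https://cloud.com', 'https://cloud.com/service',
#     'https://cloud.com/service/v2']
```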

So now that we can't reliably go from the left, and we definitely
can't go from the right, how about the middle?

This sounds ridiculous, and if it sounds familiar it's because they
devise a "middle out" algorithm on the show Silicon Valley, but in
most cases it'd actually work. In E above, it'd be fine. However,
depending on the number of path fragments and which direction we chose
to move first, we'd sometimes hit either a version-specific response
or another service's response, so it's not reliable.

Ultimately, I would like to know how something like this can be solved.

1. Is there any reliable, functional, and accurate programmatic way to
get the versions and endpoints that all services on a cloud offer?

2. Are there any guidelines, rules, expectations, or other
documentation on how services can be installed and their endpoints
structured that are helpful to people building apps that use them, rather
than to those trying to install and operate them? I've looked around a few
times and found nothing useful. A lot of what I've found has
referenced suggestions for operators setting them up behind various
load balancing tools.

3. If 1 and 2 won't actually help me solve this, do you have any other
suggestions that will? We already go left, right, and middle of each
URI, so I'm out of directions to go, and we can't go back to the
service catalog.

Thanks,

Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Adam Harwell
Dims: that wasn't meant as hostile to you, though re-reading it kind of
sounds that way.
You were not the first in this thread to suggest bindep, and while your
links are useful, I don't think it makes a lot of sense for our use case. I
legitimately can't understand why this *one* dependency (not anything a
deployer will need to install on their control-plane instances) is
suggested as a binary dependency, when it is a python module that we
include in our code just like *everything else* in our requirements file.

On Wed, Oct 19, 2016 at 11:02 PM Adam Harwell  wrote:

> We literally install every other dependency from pypi with
> requirements.txt, so I'm struggling understand why all the sudden we need
> to install this one as a binary, for our devstack specific script, when we
> are planning a move to a distro that doesn't even support binary packages?
> Should we switch our entire requirements file to bindep? If not, what makes
> this different?
>
> On Wed, Oct 19, 2016, 22:56 Davanum Srinivas  wrote:
>
> Adam,
>
> Have you see this yet?
>
>
> http://docs.openstack.org/infra/bindep/readme.html#writing-requirements-files
> http://codesearch.openstack.org/?q=platform&i=nope&files=bindep.txt&repos=
>
> Thanks,
> Dims
>
> On Wed, Oct 19, 2016 at 9:40 AM, Adam Harwell  wrote:
> > Yes, but we need to use SOMETHING for our own devstack gate tests --
> maybe
> > it is easier to think of our devstack code as a "third party setup", and
> > that it uses gunicorn for its DIB images (but not every deployer needs
> to).
> > In this case, how do we include it? Devstack needs it to run our gate
> jobs,
> > which means it has to be in our main codebase, but deployers don't
> > necessarily need it for their deployments (though it is the default
> option).
> > Do we include it in global-requirements or not? How do we use it in
> devstack
> > if it is not in global-requirements? We don't install it as a binary
> because
> > the plan is to stay completely distro-independant (or target a distro
> that
> > doesn't even HAVE binary packages like cirros). Originally I just put the
> > line "pip install gunicorn>=19.0" directly in our DIB script, but was
> told
> > that was a dirty hack, and that it should be in requirements.txt like
> > everything else. I'm not sure I agree, and it seems like maybe others are
> > suggesting I go back to that method?
> >
> >  --Adam
> >
> > On Wed, Oct 19, 2016 at 10:19 PM Hayes, Graham 
> wrote:
> >>
> >> On 18/10/2016 19:57, Doug Wiegley wrote:
> >> >
> >> >> On Oct 18, 2016, at 12:42 PM, Doug Hellmann 
> >> >> wrote:
> >> >>
> >> >> Excerpts from Doug Wiegley's message of 2016-10-18 12:21:20 -0600:
> >> >>>
> >>  On Oct 18, 2016, at 12:10 PM, Doug Hellmann  >
> >>  wrote:
> >> 
> >>  Excerpts from Doug Wiegley's message of 2016-10-18 12:00:35 -0600:
> >> >
> >> >> On Oct 18, 2016, at 11:30 AM, Doug Hellmann <d...@doughellmann.com> wrote:
> >> >>
> >> >> Excerpts from Doug Wiegley's message of 2016-10-18 09:59:54
> -0600:
> >> >>>
> >>  On Oct 18, 2016, at 5:14 AM, Ian Cordasco <sigmaviru...@gmail.com> wrote:
> >> 
> >>  -Original Message-
> >>  From: Thierry Carrez
> >>  Reply: OpenStack Development Mailing List (not for usage questions)
> >>  Date: October 18, 2016 at 03:55:41
> >>  To: openstack-dev@lists.openstack.org

Re: [openstack-dev] [Neutron] Neutron team social event in Barcelona

2016-10-19 Thread Ichihara Hirofumi
+1

2016-10-19 19:55 GMT+09:00 Andreas Scheuring :

> +1
> --
> -
> Andreas
> IRC: andreas_s
>
>
>
> > On Tue, 2016-10-18 at 16:18 -0700, Isaku Yamahata wrote:
> > +1
> > Thanks for organizing this.
> >
> > On Fri, Oct 14, 2016 at 01:30:57PM -0500,
> > Miguel Lavalle  wrote:
> >
> > > Dear Neutrinos,
> > >
> > > I am organizing a social event for the team on Thursday 27th at 19:30.
> > > After doing some Google research, I am proposing Raco de la Vila,
> which is
> > > located in Poblenou: http://www.racodelavila.com/en/index.htm. The
> menu is
> > > here: http://www.racodelavila.com/en/carta-racodelavila.htm
> > >
> > > It is easy to get there by subway from the Summit venue:
> > > https://goo.gl/maps/HjaTEcBbDUR2. I made a reservation for 25 people
> under
> > > 'Neutron' or "Miguel Lavalle". Please confirm your attendance so we
> can get
> > > a final count.
> > >
> > > Here's some reviews:
> > > https://www.tripadvisor.com/Restaurant_Review-g187497-
> d1682057-Reviews-Raco_De_La_Vila-Barcelona_Catalonia.html
> > >
> > > Cheers
> > >
> > > Miguel
> >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo][openstack-ansible] DB deadlocks, Mitaka, and You

2016-10-19 Thread Carter, Kevin
Hi Matt and thanks for the reply,

We do have that commit as found here: [
https://github.com/openstack/nova/blob/dd30603f91e6fd3d1a4db452f20a51ba8820e1f4/nova/db/sqlalchemy/api.py#L1846
]. If there's anything you'd like to see as we're trying to figure
this out I'd be happy to provide {any,every}thing.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] Propose Zhenyu Zheng for Searchlight core

2016-10-19 Thread McLellan, Steven
Hi,

I'd like to propose Zhenyu Zheng (Kevin_Zheng on IRC) for Searchlight core. 
While he's most active on Nova, he's also been very active on Searchlight, both 
in commits and reviews, during the Newton release and into Ocata. 
Kevin's participated during the weekly meetings and during the week, and his 
reviews have been very high quality as well as numerous. This would also help 
move towards having greater cross-project participation, especially with Nova.

If anyone has any objections, let me know, otherwise I will add Kevin to the 
core list at the weekend.

Thanks!

Steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Adam Harwell
We literally install every other dependency from pypi with
requirements.txt, so I'm struggling to understand why all of a sudden we need
to install this one as a binary for our devstack-specific script, when we
are planning a move to a distro that doesn't even support binary packages?
Should we switch our entire requirements file to bindep? If not, what makes
this different?

On Wed, Oct 19, 2016, 22:56 Davanum Srinivas  wrote:

> Adam,
>
> Have you see this yet?
>
>
> http://docs.openstack.org/infra/bindep/readme.html#writing-requirements-files
> http://codesearch.openstack.org/?q=platform&i=nope&files=bindep.txt&repos=
>
> Thanks,
> Dims
>
> On Wed, Oct 19, 2016 at 9:40 AM, Adam Harwell  wrote:
> > Yes, but we need to use SOMETHING for our own devstack gate tests --
> maybe
> > it is easier to think of our devstack code as a "third party setup", and
> > that it uses gunicorn for its DIB images (but not every deployer needs
> to).
> > In this case, how do we include it? Devstack needs it to run our gate
> jobs,
> > which means it has to be in our main codebase, but deployers don't
> > necessarily need it for their deployments (though it is the default
> option).
> > Do we include it in global-requirements or not? How do we use it in
> devstack
> > if it is not in global-requirements? We don't install it as a binary
> because
> > the plan is to stay completely distro-independant (or target a distro
> that
> > doesn't even HAVE binary packages like cirros). Originally I just put the
> > line "pip install gunicorn>=19.0" directly in our DIB script, but was
> told
> > that was a dirty hack, and that it should be in requirements.txt like
> > everything else. I'm not sure I agree, and it seems like maybe others are
> > suggesting I go back to that method?
> >
> >  --Adam
> >
> > On Wed, Oct 19, 2016 at 10:19 PM Hayes, Graham 
> wrote:
> >>
> >> On 18/10/2016 19:57, Doug Wiegley wrote:
> >> >
> >> >> On Oct 18, 2016, at 12:42 PM, Doug Hellmann 
> >> >> wrote:
> >> >>
> >> >> Excerpts from Doug Wiegley's message of 2016-10-18 12:21:20 -0600:
> >> >>>
> >>  On Oct 18, 2016, at 12:10 PM, Doug Hellmann  >
> >>  wrote:
> >> 
> >>  Excerpts from Doug Wiegley's message of 2016-10-18 12:00:35 -0600:
> >> >
> >> >> On Oct 18, 2016, at 11:30 AM, Doug Hellmann <d...@doughellmann.com> wrote:
> >> >>
> >> >> Excerpts from Doug Wiegley's message of 2016-10-18 09:59:54
> -0600:
> >> >>>
> >>  On Oct 18, 2016, at 5:14 AM, Ian Cordasco <sigmaviru...@gmail.com> wrote:
> >> 
> >>  -Original Message-
> >>  From: Thierry Carrez
> >>  Reply: OpenStack Development Mailing List (not for usage questions)
> >>  Date: October 18, 2016 at 03:55:41
> >>  To: openstack-dev@lists.openstack.org
> >>  Subject: Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r
> >> 
> >> > Doug Wiegley wrote:
> >> >> [...] Paths forward:
> >> >>
> >> >> 1. Add gunicorn to global requirements.
> >> >>
> >> >> 2. Create a project specific “amphora-requirements.txt” file
> >> >> for the
> >> >> service VM packages (this is actually my preference.) It has
> >> >> been
> >> >> pointed out that this wouldn’t be 

[openstack-dev] [openstack-ansible][release] OpenStack-Ansible Newton RC4 available (Re: [openstack-ansible][release] OpenStack-Ansible Newton RC3 available)

2016-10-19 Thread Davanum Srinivas
Please test RC4, same details as below :)

Thanks,
Dims

On Thu, Oct 13, 2016 at 9:00 PM, Davanum Srinivas  wrote:
> Hello everyone,
>
> A new release candidate for OpenStack Ansible for the end of the
> Newton cycle is available!
>
> You can find the source code tarballs at:
> https://releases.openstack.org/newton/index.html#newton-openstack-ansible
>
> Alternatively, you can directly test the stable/newton release branch at:
> http://git.openstack.org/cgit/openstack/openstack-ansible/log/?h=stable/newton
>
> (Note: there are many repositories named openstack/openstack-ansible*)
>
> If you find an issue that could be considered release-critical, please
> file it at:
> https://bugs.launchpad.net/openstack-ansible/+filebug
>
> and tag it *newton-rc-potential* to bring it to the OpenStackAnsible
> release crew's attention.
>
> Thanks,
> Dims (On behalf of the Release team)
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Ian Cordasco
-Original Message-
From: Adam Harwell 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: October 19, 2016 at 08:44:31
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

> Yes, but we need to use SOMETHING for our own devstack gate tests -- maybe
> it is easier to think of our devstack code as a "third party setup", and
> that it uses gunicorn for its DIB images (but not every deployer needs to).
> In this case, how do we include it? Devstack needs it to run our gate jobs,
> which means it has to be in our main codebase, but deployers don't
> necessarily need it for their deployments (though it is the default option).
> Do we include it in global-requirements or not? How do we use it in
> devstack if it is not in global-requirements? We don't install it as a
> binary because the plan is to stay completely distro-independant (or target
> a distro that doesn't even HAVE binary packages like cirros). Originally I
> just put the line "pip install gunicorn>=19.0" directly in our DIB script,
> but was told that was a dirty hack, and that it should be in
> requirements.txt like everything else. I'm not sure I agree, and it seems
> like maybe others are suggesting I go back to that method?
>  
> --Adam

I'm still not clear as to why Tony isn't in favor of this. No one really has 
clear technical objections to gunicorn itself. It's maintained, up-to-date, and 
reliable (even if not the de facto OpenStack choice for running an OpenStack 
service ... which you're not doing).

I would agree that having it in g-r and in your requirements.txt makes the most 
sense. Especially since Monty did an excellent job explaining why not everyone 
will want to use pip+Alpine/cirros. In my opinion, we just need to understand 
why people are still opposed to this being in g-r and your requirements.txt so 
that you can move along.
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][release] Kolla Newton RC3 available (Re: [kolla][release] Kolla Newton RC2 available)

2016-10-19 Thread Davanum Srinivas
Please test:
https://tarballs.openstack.org/kolla/kolla-3.0.0.0rc3.tar.gz

Thanks,
Dims

On Thu, Oct 13, 2016 at 9:57 AM, Davanum Srinivas  wrote:
> Hello everyone,
>
> A new release candidate for Kolla for the end of the Newton cycle
> is available!  You can find the source code tarball at:
>
> https://tarballs.openstack.org/kolla/kolla-3.0.0.0rc2.tar.gz
>
> Alternatively, you can directly test the stable/newton release
> branch at:
>
> http://git.openstack.org/cgit/openstack/kolla/log/?h=stable/newton
>
> If you find an issue that could be considered release-critical,
> please file it at:
>
> https://bugs.launchpad.net/kolla/+filebug
>
> and tag it *newton-rc-potential* to bring it to the Kolla release
> crew's attention.
>
> Thanks,
> Dims (On behalf of the Release team)
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Davanum Srinivas
Adam,

Have you seen this yet?

http://docs.openstack.org/infra/bindep/readme.html#writing-requirements-files
http://codesearch.openstack.org/?q=platform&i=nope&files=bindep.txt&repos=
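For anyone unfamiliar, a bindep.txt is just a newline-separated list of distro package names with optional bracketed platform selectors — a minimal, illustrative fragment (the package names here are examples, not Octavia's actual list):

```
# Illustrative bindep.txt fragment: one binary (distro) dependency per
# line, with platform selectors restricting where each entry applies.
libssl-dev [platform:dpkg]
openssl-devel [platform:rpm]
gcc
```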

Thanks,
Dims

On Wed, Oct 19, 2016 at 9:40 AM, Adam Harwell  wrote:
> Yes, but we need to use SOMETHING for our own devstack gate tests -- maybe
> it is easier to think of our devstack code as a "third party setup", and
> that it uses gunicorn for its DIB images (but not every deployer needs to).
> In this case, how do we include it? Devstack needs it to run our gate jobs,
> which means it has to be in our main codebase, but deployers don't
> necessarily need it for their deployments (though it is the default option).
> Do we include it in global-requirements or not? How do we use it in devstack
> if it is not in global-requirements? We don't install it as a binary because
> the plan is to stay completely distro-independant (or target a distro that
> doesn't even HAVE binary packages like cirros). Originally I just put the
> line "pip install gunicorn>=19.0" directly in our DIB script, but was told
> that was a dirty hack, and that it should be in requirements.txt like
> everything else. I'm not sure I agree, and it seems like maybe others are
> suggesting I go back to that method?
>
>  --Adam
>
> On Wed, Oct 19, 2016 at 10:19 PM Hayes, Graham  wrote:
>>
>> On 18/10/2016 19:57, Doug Wiegley wrote:
>> >
>> >> On Oct 18, 2016, at 12:42 PM, Doug Hellmann 
>> >> wrote:
>> >>
>> >> Excerpts from Doug Wiegley's message of 2016-10-18 12:21:20 -0600:
>> >>>
>>  On Oct 18, 2016, at 12:10 PM, Doug Hellmann 
>>  wrote:
>> 
>>  Excerpts from Doug Wiegley's message of 2016-10-18 12:00:35 -0600:
>> >
>> >> On Oct 18, 2016, at 11:30 AM, Doug Hellmann > >> > wrote:
>> >>
>> >> Excerpts from Doug Wiegley's message of 2016-10-18 09:59:54 -0600:
>> >>>
>>  On Oct 18, 2016, at 5:14 AM, Ian Cordasco wrote:
>> 
>>  -Original Message-
>>  From: Thierry Carrez
>>  Reply: OpenStack Development Mailing List (not for usage questions)
>>  Date: October 18, 2016 at 03:55:41
>>  To: openstack-dev@lists.openstack.org
>>  Subject: Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r
>> 
>> > Doug Wiegley wrote:
>> >> [...] Paths forward:
>> >>
>> >> 1. Add gunicorn to global requirements.
>> >>
>> >> 2. Create a project specific “amphora-requirements.txt” file
>> >> for the
>> >> service VM packages (this is actually my preference.) It has
>> >> been
>> >> pointed out that this wouldn’t be kept up-to-date by the bot.
>> >> We could
>> >> modify the bot to include it in some way, or do it manually, or
>> >> with a
>> >> project specific job.
>> >>
>> >> 3. Split our service VM builds into another repo, to keep a
>> >> clean
>> >> separation between API services and the backend. But, even this
>> >> new
>> >> repo’s standlone requirements.txt file will have the g-r issue
>> >> from #1.
>> >>
>> >> 4. Boot the backend out of OpenStack entirely.
>> >
>> > All those options sound valid to me, so the requirements team
>> > should
>> > pick w

Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Adam Harwell
To reply more directly and clearly:

On Wed, Oct 19, 2016 at 9:30 PM Tony Breeds  wrote:

> On Wed, Oct 19, 2016 at 08:41:16AM +, Adam Harwell wrote:
> > I wonder if maybe it is not clear -- for us, gunicorn is a runtime
> > dependency for our gate jobs to work, not a deploy dependency.
>
> Okay then frankly I'm deeply confused.
>
> Can we see the code that uses it? to understand why the deployer can't
> build a
> custom service VM using an alternative to gunicorn?
>
A deployer is perfectly free to use an alternative to gunicorn. Gunicorn is
built into our devstack plugin code, not the main agent application
(agent.py is just the runner we created for devstack -- you could run
octavia.amphorae.backends.agent.api_server with any WSGI runner you want in
a real deployment). We just have to use SOMETHING in devstack for our gate
scenario tests to run...
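As an untested sketch of that point (the `:app` callable name, port, and flags below are assumptions for illustration, not Octavia's actual entry point), the same WSGI application could be served by either runner:

```shell
# devstack default: gunicorn serving the amphora agent app
gunicorn --bind 0.0.0.0:9443 "octavia.amphorae.backends.agent.api_server:app"

# ...or the same application under a different WSGI runner, e.g. uWSGI
uwsgi --http :9443 --module "octavia.amphorae.backends.agent.api_server:app"
```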

>
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Adam Harwell
Yes, but we need to use SOMETHING for our own devstack gate tests -- maybe
it is easier to think of our devstack code as a "third party setup", and
that it uses gunicorn for its DIB images (but not every deployer needs to).
In this case, how do we include it? Devstack needs it to run our gate jobs,
which means it has to be in our main codebase, but deployers don't
necessarily need it for their deployments (though it is the default option).
Do we include it in global-requirements or not? How do we use it in
devstack if it is not in global-requirements? We don't install it as a
binary because the plan is to stay completely distro-independant (or target
a distro that doesn't even HAVE binary packages like cirros). Originally I
just put the line "pip install gunicorn>=19.0" directly in our DIB script,
but was told that was a dirty hack, and that it should be in
requirements.txt like everything else. I'm not sure I agree, and it seems
like maybe others are suggesting I go back to that method?

 --Adam
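For concreteness, option 2 from this thread (a separate requirements file for the service VM image) might look roughly like this — the file name follows the proposal above, and the install line is illustrative, not Octavia's actual DIB element:

```text
# amphora-requirements.txt -- dependencies of the amphora image only,
# tracked separately from the API service's requirements.txt
gunicorn>=19.0

# and in the image-building (DIB) element, instead of a hard-coded
# "pip install gunicorn>=19.0":
#   pip install -r amphora-requirements.txt
```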

On Wed, Oct 19, 2016 at 10:19 PM Hayes, Graham  wrote:

> On 18/10/2016 19:57, Doug Wiegley wrote:
> >
> >> On Oct 18, 2016, at 12:42 PM, Doug Hellmann 
> wrote:
> >>
> >> Excerpts from Doug Wiegley's message of 2016-10-18 12:21:20 -0600:
> >>>
>  On Oct 18, 2016, at 12:10 PM, Doug Hellmann 
> wrote:
> 
>  Excerpts from Doug Wiegley's message of 2016-10-18 12:00:35 -0600:
> >
> >> On Oct 18, 2016, at 11:30 AM, Doug Hellmann wrote:
> >>
> >> Excerpts from Doug Wiegley's message of 2016-10-18 09:59:54 -0600:
> >>>
>  On Oct 18, 2016, at 5:14 AM, Ian Cordasco wrote:
> 
> 
> 
>  -Original Message-
>  From: Thierry Carrez <thie...@openstack.org>
>  Reply: OpenStack Development Mailing List (not for usage questions)
>  <openstack-dev@lists.openstack.org>
>  Date: October 18, 2016 at 03:55:41
>  To: openstack-dev@lists.openstack.org
>  Subject: Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r
> 
> > Doug Wiegley wrote:
> >> [...] Paths forward:
> >>
> >> 1. Add gunicorn to global requirements.
> >>
> >> 2. Create a project specific “amphora-requirements.txt” file
> for the
> >> service VM packages (this is actually my preference.) It has
> been
> >> pointed out that this wouldn’t be kept up-to-date by the bot.
> We could
> >> modify the bot to include it in some way, or do it manually, or
> with a
> >> project specific job.
> >>
> >> 3. Split our service VM builds into another repo, to keep a
> clean
> >> separation between API services and the backend. But, even this
> new
> >> repo’s standalone requirements.txt file will have the g-r issue
> from #1.
> >>
> >> 4. Boot the backend out of OpenStack entirely.
> >
> > All those options sound valid to me, so the requirements team
> should
> > pick what they are the most comfortable with.
> >
> > My 2c: yes g-r is mostly about runtime dependencies and ensuring
> > co-installability. However it also includes test/build-time
> deps, and
> > generally converging dependencies overall sounds like a valid
> goal. Is
> > there any drawback in adding gunicorn to g-r (option 1) ?
> 
>  The drawback (in my mind) is that new projects might start using
> it giving operators yet another thing to learn about when deploying a new
> component (eventlet, gevent, gunicorn, ...).
> 
>  On the flip, what's the benefit of adding it to g-r?
> >>>
> >>> The positive benefit is the same as Octavia’s use case: it
> provides an alternative for any non-frontline-api service to run a
> lightweight http/wsgi service as needed (service VMs, health monitor
> agents, etc). And something better than the built-in debug servers in most
> of the f

Re: [openstack-dev] [nova][oslo][openstack-ansible] DB deadlocks, Mitaka, and You

2016-10-19 Thread Mike Bayer



On 10/19/2016 08:36 AM, Ian Cordasco wrote:

Hey Kevin,

So just looking at the pastes you have here, I'm inclined to believe
this is actually a bug in oslo_db/sqlalchemy. If you follow the trace,
there's a PyMySQL InternalError not being handled inside of
sqlalchemy. I'm not sure if SQLAlchemy considers InternalErrors to be
something it cannot retry, or something that the user should decide
how to handle, but I would start chatting with the folks who work on
oslo_db and SQLAlchemy in the community.


SQLAlchemy itself does not retry transactions.  A retry is typically at 
the method level where the calling application (nova in this case) would 
make use of the oslo retry decorator, seen here: 
https://github.com/openstack/oslo.db/blob/master/oslo_db/api.py#L85 . 
This decorator is configured to retry based on specific oslo-level 
exceptions being intercepted, of which DBDeadlock is the primary 
exception this function was written for.
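The retry behaviour described above can be modelled in a few lines. This is a simplified stand-in for `oslo_db.api.wrap_db_retry`, not its actual implementation — the real decorator also adds jitter to the backoff and handles other retryable exception types:

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""


def wrap_db_retry(max_retries=5, retry_interval=1):
    """Simplified model of oslo_db.api.wrap_db_retry: re-invoke the
    decorated function when it raises DBDeadlock, up to max_retries,
    doubling the interval each time."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            interval = retry_interval
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise
                    time.sleep(0)          # stands in for sleep(interval)
                    interval = min(interval * 2, 10)
        return wrapper
    return decorator


calls = []


@wrap_db_retry(max_retries=3)
def flaky_update():
    """Deadlocks twice, then succeeds, like a contended row update."""
    calls.append(1)
    if len(calls) < 3:
        raise DBDeadlock()
    return "committed"


result = flaky_update()
print(result, len(calls))  # -> committed 3
```

This is why the retry lives at the method level in the calling application: the whole transaction is re-run from the top, rather than SQLAlchemy retrying a single statement.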


In this case, both stack traces illustrate the error being thrown is 
DBDeadlock, which is an oslo-db-specific error that is the result of the 
correct handling of this PyMySQL error code.   The original error object 
is maintained as a data member of DBDeadlock so that the source of the 
DBDeadlock can be seen.  The declaration of this interception is here: 
https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/exc_filters.py#L56 
.   SQLAlchemy re-throws this user-generated exception in the context of 
the original, so in Python 2 where stack traces are still a confusing 
affair, it's hard to see that this interception occurred, but DBDeadlock 
indicates that it has.
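What that interception looks like from the caller's side can be sketched as follows. The class names are modelled on `oslo_db.exception`; the real filters match on the driver's error codes and regexes, not just the message text:

```python
class DBError(Exception):
    """Stand-in for oslo_db.exception.DBError: keeps the driver's
    original exception on .inner_exception."""
    def __init__(self, inner_exception=None):
        self.inner_exception = inner_exception
        super().__init__(str(inner_exception))


class DBDeadlock(DBError):
    pass


class PyMySQLInternalError(Exception):
    """Stand-in for pymysql.err.InternalError."""


def filter_exception(exc):
    """Toy model of oslo_db's exc_filters: translate a driver error that
    looks like a MySQL deadlock (error 1213) into DBDeadlock."""
    if "Deadlock found" in str(exc):
        raise DBDeadlock(inner_exception=exc)
    raise exc


try:
    try:
        # What PyMySQL raises when MySQL reports ER_LOCK_DEADLOCK (1213).
        raise PyMySQLInternalError(1213, "Deadlock found when trying to get lock")
    except PyMySQLInternalError as driver_exc:
        filter_exception(driver_exc)
except DBDeadlock as exc:
    caught = exc

# The oslo-level exception still carries the original driver error:
print(type(caught.inner_exception).__name__)  # -> PyMySQLInternalError
```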

That said, this also looks like something that should be reported to
Nova. Something causing an unhandled exception is definitely bug-worthy
(even if the fix belongs somewhere in one of its dependencies).

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Hayes, Graham
On 18/10/2016 19:57, Doug Wiegley wrote:
>
>> On Oct 18, 2016, at 12:42 PM, Doug Hellmann  wrote:
>>
>> Excerpts from Doug Wiegley's message of 2016-10-18 12:21:20 -0600:
>>>
 On Oct 18, 2016, at 12:10 PM, Doug Hellmann  wrote:

 Excerpts from Doug Wiegley's message of 2016-10-18 12:00:35 -0600:
>
>> On Oct 18, 2016, at 11:30 AM, Doug Hellmann wrote:
>>
>> Excerpts from Doug Wiegley's message of 2016-10-18 09:59:54 -0600:
>>>
 On Oct 18, 2016, at 5:14 AM, Ian Cordasco wrote:



 -Original Message-
 From: Thierry Carrez <thie...@openstack.org>
 Reply: OpenStack Development Mailing List (not for usage questions)
 <openstack-dev@lists.openstack.org>
 Date: October 18, 2016 at 03:55:41
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r
  Doug Wiegley wrote:
>> [...] Paths forward:
>>
>> 1. Add gunicorn to global requirements.
>>
>> 2. Create a project specific “amphora-requirements.txt” file for the
>> service VM packages (this is actually my preference.) It has been
>> pointed out that this wouldn’t be kept up-to-date by the bot. We 
>> could
>> modify the bot to include it in some way, or do it manually, or with 
>> a
>> project specific job.
>>
>> 3. Split our service VM builds into another repo, to keep a clean
>> separation between API services and the backend. But, even this new
>> repo’s standalone requirements.txt file will have the g-r issue from 
>> #1.
>>
>> 4. Boot the backend out of OpenStack entirely.
>
> All those options sound valid to me, so the requirements team should
> pick what they are the most comfortable with.
>
> My 2c: yes g-r is mostly about runtime dependencies and ensuring
> co-installability. However it also includes test/build-time deps, and
> generally converging dependencies overall sounds like a valid goal. Is
> there any drawback in adding gunicorn to g-r (option 1) ?

 The drawback (in my mind) is that new projects might start using it 
 giving operators yet another thing to learn about when deploying a new 
 component (eventlet, gevent, gunicorn, ...).

 On the flip, what's the benefit of adding it to g-r?
>>>
>>> The positive benefit is the same as Octavia’s use case: it provides an 
>>> alternative for any non-frontline-api service to run a lightweight 
>>> http/wsgi service as needed (service VMs, health monitor agents, etc). 
>>> And something better than the built-in debug servers in most of the 
>>> frameworks.
>>>
>>> On the proliferation point, it is certainly a risk, though I’ve 
>>> personally heard pretty strong guidance that all main API services in 
>>> our community should be trending towards pecan.
>>
>> Pecan is a way to build WSGI applications. Gunicorn is a way to deploy
>> them. So they're not mutually exclusive.
>
> Right, agreed.
>
> What we’re trying to convey here is:
>
> - The normal way of making a REST endpoint in OpenStack is to use pecan 
> (or flask or falcon), and let the deployer or packager worry about the 
> runtime wsgi and/or reverse proxy.
>
> - This isn't a “normal” O

[openstack-dev] [Murano] No meetings on Oct 25 and Nov 1 due to summit

2016-10-19 Thread Kirill Zaitsev
Most of the team is going to attend the summit, so there will be no meeting
on Oct 25.
I would also suggest skipping the meeting on Nov 1, since AFAIK a lot of folks
would still be travelling by that time (myself included). Unless someone’s
volunteering to chair and has a specific agenda — let’s skip Nov 1 also.

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Thomas Goirand
On 10/18/2016 08:25 PM, Monty Taylor wrote:
> On 10/18/2016 12:05 PM, Adam Harwell wrote:
>> Inline comments.
>>
>> On Wed, Oct 19, 2016 at 1:38 AM Thomas Goirand wrote:
>>
>> On 10/18/2016 02:37 AM, Ian Cordasco wrote:
>> > On Oct 17, 2016 7:27 PM, "Thomas Goirand" wrote:
>> >>
>> >> On 10/17/2016 08:43 PM, Adam Harwell wrote:
>> >> > Jim, that is exactly my thought -- the main focus of g-r as far
>> as I was
>> >> > aware is to maintain interoperability between project
>> dependencies for
>> >> > openstack deploys, and since our amphora image is totally
>> separate, it
>> >> > should not be restricted to g-r requirements.
>> >>
>> >> The fact that we have a unified version number of a given lib in
>> all of
>> >> OpenStack is also because that's a requirement of downstream distros.
>> >>
>> >> Imagine that someone would like to build the Octavia image using
>> >> exclusively packages from ...
>> >>
>> >> > I brought this up, but
>> >> > others thought it would be prudent to go the g-r route anyway.
>> >>
>> >> It is, and IMO you should go this route.
>> >
>> > I'm not convinced by your arguments here, Thomas. If the distributor
>> > were packaging Octavia for X but the image is using some other
>> operating
>> > system, say Y, why are X's packages relevant?
>>
>> What if operating systems would be the same?
>>
>> We still want to install from pypi, because we still want deployers to
>> build images for their cloud using our DIB elements. There is absolutely
>> no situation in which I can imagine we'd want to install a binary
>> packaged version of this. There's a VERY high chance we will soon be
>> using a distro that isn't even a supported OpenStack deploy target...
>>
>>
>> As a Debian package maintainer, I really prefer if the underlying images
>> can also be Debian (and preferably Debian stable everywhere).
>>
>> Sure, I love Debian too, but we're investigating things like Alpine and
>> Cirros as our base image, and there's pretty much zero chance anyone
>> will package ANY of our deps for those distros. Cirros doesn't even have
>> a package manager AFAIK. 
>>
>>
>> > I would think that if this
>> > is something inside an image going to be launched by Octavia that
>> > co-installability wouldn't really be an issue.
>>
>> The issue isn't co-installability, but the fact that downstream
>> distribution vendors will only package *ONE* version of a given python
>> module. If we have Octavia with version X, and another component of
>> OpenStack with version Y, then we're stuck with Octavia not being
>> packageable in downstream distros.
>>
>> Octavia will not use gunicorn for its main OpenStack API layer. It will
>> continue to be packagable regardless of whether gunicorn is available.
>> Gunicorn is used for our *amphora image*, which is not part of the main
>> deployment layer. It is part of our *dataplane*. It is unrelated to any
>> part of Octavia that is deployed as part of the main service layer of
>> Openstack. In fact, in production, deployers may completely ignore
>> gunicorn altogether and use a different solution, that is up to the way
>> they build their amphora image (which, again, is not part of the main
>> deployment). We just use gunicorn in the image we use for our gate tests.
>>
>>
>> > I don't lean either way right now, so I'd really like to
>> understand your
>> > point of view, especially since right now it isn't making much
>> sense to me.
>>
>> Do you understand now? :)
>>
>> I see what you are saying, but I assert it does not apply to our case at
>> all. Do you see how our case is different? 
> 
> I totally understand, and I can see why it would seem very different.
> Consider a few things though:
> 
> - OpenStack tries its best to not pick favorites for OS, and I think the
> same applies to guest VMs, even if they just seem like appliances. While
> we as upstream may be looking at using something like alpine as the base
> OS for the service VM appliance, that does not necessarily imply that
> all deployers _must_ use Alpine in their service VM, for exactly the
> reason you mention (you intend for them to run diskimage-builder themselves)
> 
> - If a deployer happens to have a strong preference for a given OS (I
> know I've been on customer calls where an OpenStack product having a tie
> to a particular OS that is not the one that is in the vetted choice
> otherwise at that customer was an issue) - then the use of dib by the
> deployer allows them to choose to base their service VM on the OS of
> their choice. That's pretty awesome.
> 
> - If that deployer similarly has an aversion to deploying any software
> that didn't come from distro packages, one could imagine that they would
> want their dis

Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Thomas Goirand
On 10/18/2016 07:05 PM, Adam Harwell wrote:
> What if operating systems would be the same?
> 
> We still want to install from pypi, because we still want deployers to
> build images for their cloud using our DIB elements. There is absolutely
> no situation in which I can imagine we'd want to install a binary
> packaged version of this. There's a VERY high chance we will soon be
> using a distro that isn't even a supported OpenStack deploy target...
> 
> 
> As a Debian package maintainer, I really prefer if the underlying images
> can also be Debian (and preferably Debian stable everywhere).
> 
> Sure, I love Debian too, but we're investigating things like Alpine and
> Cirros as our base image, and there's pretty much zero chance anyone
> will package ANY of our deps for those distros. Cirros doesn't even have
> a package manager AFAIK. 

YOUR preference may not be the same as that of the people deploying. For
example, if I were to deploy, I'd prefer all components to come from a
single distribution vendor, even if the image gets bigger at the end. It is
easier to address security vulnerabilities this way (i.e. you only need
to watch a single vendor's updates).

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon]New feature: enable/disable port security

2016-10-19 Thread Gyorgy Szombathelyi
Hi Rob,


> -Original Message-
> From: Rob Cresswell [mailto:robert.cressw...@outlook.com]
> Sent: 2016 október 19, szerda 13:49
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [horizon]New feature: enable/disable port
> security
> 
> Thanks for the patch! For something of this size, a wishlist bug is usually
> adequate, perhaps with an API reference to help us review faster :)

I've just opened an LP bug:

https://bugs.launchpad.net/horizon/+bug/1634877

And also another bug report which somewhat relates to this wish item:
https://bugs.launchpad.net/horizon/+bug/1634836


> 
> Rob
> 
Cheers,
György


> On 19 October 2016 at 12:19, Gyorgy Szombathelyi
> wrote:
> 
> 
>   Hi!
> 
>   After I saw the allowed-address-pair handling is added to Horizon, I
> felt that completely enabling/disabling anti-spoofing rules would be a good
> addition, too, so I've created a patch:
>   https://review.openstack.org/#/c/388611/
> 
> 
>   Don't know if it needs a blueprint, or other administration stuff, but
> the patch is very light. Please tell me what should be done for it to be accepted.
> 
>   Br,
>   György
> 
>   
> __
>   OpenStack Development Mailing List (not for usage questions)
>   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo][openstack-ansible] DB deadlocks, Mitaka, and You

2016-10-19 Thread Ian Cordasco
-Original Message-
From: Carter, Kevin 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: October 18, 2016 at 21:18:13
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] [nova][oslo][openstack-ansible] DB
deadlocks, Mitaka, and You

> Hello all,
>
> As some folks may know the OSIC cloud1 just upgraded to Mitaka last
> week (I know what's old is new again, sorry). Since the upgrade things
> have been running fairly smoothly however there's been one issue that
> has left me scratching my head. When attempting a scale out test we've
> run into issues where the nova client was returning a message
> indicating it had encountered a DB deadlock [0]. In attempting to gain
> more information we enabled debug logging and collected the following
> [1] (Here is a pretty version for the tracebacks [4]). This is the
> command we're using to build all of the VMs [2] which was being used
> to build 3 vms per compute node for 242 compute nodes. Once the
> instances are all online we grab the nodes IPv6 address and ensure
> we're able to SSH to them. While we're happy to report that almost all
> of the VMs came online in one shot we did run into a few of these DB
> dead lock messages and would like to see if folks out on the interwebs
> have seen or experienced such problems. If so we'd love to know if
> there's some remediation that we can use to make this all even
> happier. It should be noted that this is not a Major issue at this
> point, 3 out of 726 VMs didn't come online due to the problem but it
> would be amazing if there was something we could do to generally
> resolve this.
>
> Other potentially interesting things:
> * DB is MariaDB "mariadb-galera-server-10.0". The replication system
> is using xtrabackup from percona with version "2.3.5-1.trusty".
> * The DB is a 3 node cluster however the connection to the cluster
> is using a VIP on our loadbalancer and the services are only ever
> connected to 1 of the three nodes; this is for both reads and writes.
> Should a node become un-happy the load balancer promotes and demotes
> always ensuring only 1 node is being connected to.
> * While reproducing this issue repeatedly we've watched wsrep to see
> if nodes were dropping or otherwise having a bad day. To our dismay
> there were no un-happy nodes and the wsrep state seemed to remain
> OPERATIONAL with minimal latency (Example [3]).
> * For all of the OpenStack services we use pymysql as the DB driver;
> we were using mysql-python, but I believe OpenStack-Ansible switched in
> kilo due to DB deadlock issues we were experiencing.
> * Cloud1 is running nova at commit
> "dd30603f91e6fd3d1a4db452f20a51ba8820e1f4" which was the HEAD of
> "stable/mitaka" on 29-09-2016. This code is in production today with
> absolutely no modifications.
>
> Any insight would be greatly appreciated. Thanks again everyone!
>
> [0] http://paste.openstack.org/show/586302/
> [1] http://paste.openstack.org/show/586309/
> [2] http://paste.openstack.org/show/586298/
> [3] http://paste.openstack.org/show/586306/
> [4] http://paste.openstack.org/show/586307/
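The "VIP on the loadbalancer, only one node ever active" topology described above is commonly expressed in haproxy with backup servers; a hedged sketch, with all addresses, names, and the clustercheck agent invented for illustration:

```text
# One writable galera node at a time; the backups only take over
# (get promoted) if the active server fails its health check.
listen galera
    bind 10.0.0.10:3306
    option httpchk            # assumes a clustercheck agent on each node
    server galera1 10.0.0.11:3306 check port 9200
    server galera2 10.0.0.12:3306 check port 9200 backup
    server galera3 10.0.0.13:3306 check port 9200 backup
```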

Hey Kevin,

So just looking at the pastes you have here, I'm inclined to believe
this is actually a bug in oslo_db/sqlalchemy. If you follow the trace,
there's a PyMySQL InternalError not being handled inside of
sqlalchemy. I'm not sure if SQLAlchemy considers InternalErrors to be
something it cannot retry, or something that the user should decide
how to handle, but I would start chatting with the folks who work on
oslo_db and SQLAlchemy in the community.

That said, this also looks like something that should be reported to
Nova. Something causing an unhandled exception is definitely bug-worthy
(even if the fix belongs somewhere in one of its dependencies).

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo][openstack-ansible] DB deadlocks, Mitaka, and You

2016-10-19 Thread Matt Riedemann

On 10/18/2016 9:16 PM, Carter, Kevin wrote:

Hello all,

As some folks may know the OSIC cloud1 just upgraded to Mitaka last
week (I know what's old is new again, sorry). Since the upgrade things
have been running fairly smoothly however there's been one issue that
has left me scratching my head. When attempting a scale out test we've
run into issues where the nova client was returning a message
indicating it had encountered a DB deadlock [0]. In attempting to gain
more information we enabled debug logging and collected the following
[1] (Here is a pretty version for the tracebacks [4]). This is the
command we're using to build all of the VMs [2] which was being used
to build 3 vms per compute node for 242 compute nodes. Once the
instances are all online we grab the nodes IPv6 address and ensure
we're able to SSH to them. While we're happy to report that almost all
of the VMs came online in one shot we did run into a few of these DB
dead lock messages and would like to see if folks out on the interwebs
have seen or experienced such problems. If so we'd love to know if
there's some remediation that we can use to make this all even
happier. It should be noted that this is not a Major issue at this
point, 3 out of 726 VMs didn't come online due to the problem but it
would be amazing if there was something we could do to generally
resolve this.

Other potentially interesting things:
  * DB is MariaDB "mariadb-galera-server-10.0". The replication system
is using xtrabackup from percona with version "2.3.5-1.trusty".
  * The DB is a 3 node cluster however the connection to the cluster
is using a VIP on our loadbalancer and the services are only ever
connected to 1 of the three nodes; this is for both reads and writes.
Should a node become un-happy the load balancer promotes and demotes
always ensuring only 1 node is being connected to.
  * While reproducing this issue repeatedly we've watched wsrep to see
if nodes were dropping or otherwise having a bad day. To our dismay
there were no un-happy nodes and the wsrep state seemed to remain
OPERATIONAL with minimal latency (Example [3]).
  * For all of the OpenStack services we use pymysql as the DB driver;
we were using mysql-python, but I believe OpenStack-Ansible switched in
kilo due to DB deadlock issues we were experiencing.
  * Cloud1 is running nova at commit
"dd30603f91e6fd3d1a4db452f20a51ba8820e1f4" which was the HEAD of
"stable/mitaka" on 29-09-2016. This code is in production today with
absolutely no modifications.

Any insight would be greatly appreciated. Thanks again everyone!

[0] http://paste.openstack.org/show/586302/
[1] http://paste.openstack.org/show/586309/
[2] http://paste.openstack.org/show/586298/
[3] http://paste.openstack.org/show/586306/
[4] http://paste.openstack.org/show/586307/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It doesn't look like the same stacktrace but do you have this patch?

https://review.openstack.org/#/c/367508/

That was a known deadlock when creating an instance that we'd see in the 
upstream CI system.


It looks like you're failing in the compute manager, which makes it 
confusing that you'd get a DBDeadlock back in the nova CLI, given we've 
already returned the API response by the time we cast to a compute to 
build the instance.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Tony Breeds
On Wed, Oct 19, 2016 at 11:26:51PM +1100, Tony Breeds wrote:
> On Wed, Oct 19, 2016 at 08:41:16AM +, Adam Harwell wrote:
> > I wonder if maybe it is not clear -- for us, gunicorn is a runtime
> > dependency for our gate jobs to work, not a deploy dependency.
> 
> Okay then frankly I'm deeply confused.
> 
> Can we see the code that uses it, to understand why the deployer can't build a
> custom service VM using an alternative to gunicorn?

And then of course I'm pointed at https://review.openstack.org/#/c/386758

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][ansible][fuel][kolla][puppet][tripleo] proposed deadlines for cycle-trailing projects

2016-10-19 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2016-10-19 02:32:21 +:
> Doug,
> 
> Kolla rc3 is available in the queue by the hard deadline (more or less).  I 
> have a quick Q - would I need to submit another 3.0.0 patch with the same git 
> commit id, or does the release team do that automatically?

I will take care of the final patch. I'll prepare it later today and
then we will approve it tomorrow.

Doug

> 
> Regards
> -steve
> 
> On 10/7/16, 12:16 PM, "Doug Hellmann"  wrote:
> 
> >This week we tagged the final releases for projects using the
> >cycle-with-milestones release model. Projects using the cycle-trailing
> >model have two more weeks before their final release tags are due. In
> >the time between now and then, we expect those projects to be preparing
> >and tagging release candidates.
> >
> >Just as with the milestone-based projects, we want to manage the number,
> >frequency, and timing of release candidates for cycle-trailing projects.
> >With that in mind, I would like to propose the following rough timeline
> >(my apologies for not preparing this sooner):
> >
> >10 Oct -- All cycle-trailing projects tag at least their first RC.
> >13 Oct -- Soft deadline for cycle-trailing projects to tag a final RC.
> >18 Oct -- Hard deadline for cycle-trailing projects to tag a final RC.
> >20 Oct -- Re-tag the final RCs as a final release.
> >
> >Between the first and later release candidates, any translations and
> >bug fixes should be merged.
> >
> >We want to leave a few days between the last release candidate and
> >the final release so that downstream consumers of the projects can
> >report issues against stable artifacts. Given the nature of most
> >of our trailing projects, and the lateness of starting to discuss
> >these deadlines, I don't think we need the same amount of time as
> >we usually set aside for the milestone-based projects. Based on
> >that assumption, I've proposed a 1 week soft goal and a 2 day hard
> >deadline.
> >
> >Let me know what you think,
> >Doug
> >
> >Newton schedule: https://releases.openstack.org/newton/schedule.html
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deprecate VMware CI that uses nova-network

2016-10-19 Thread Matt Riedemann

On 10/18/2016 11:38 PM, Sihan Wang wrote:

Hi,

We have added a Nova upstream CI job that uses NSX; currently it is non-voting.
After we test its stability, we will replace the novanet one.

Thanks

Sihan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Cool, thanks for the heads up. I think we've been wanting to switch that 
over to using NSX(v3) for a couple of releases now.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Tony Breeds
On Wed, Oct 19, 2016 at 08:41:16AM +, Adam Harwell wrote:
> I wonder if maybe it is not clear -- for us, gunicorn is a runtime
> dependency for our gate jobs to work, not a deploy dependency.

Okay then frankly I'm deeply confused.

Can we see the code that uses it, to understand why the deployer can't build a
custom service VM using an alternative to gunicorn?

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] FSM: stable states

2016-10-19 Thread Rafael Xavier


Regards,
Rafael

---
Marcus Rafael Xavier Laurentino
Universidade Federal de Campina Grande
Laboratório de Sistemas Distribuídos

- Original Message -
From: Yuriy Zveryanskyy 
To: openstack-dev@lists.openstack.org
Sent: Wed, 19 Oct 2016 07:02:30 -0300 (BRT)
Subject: [openstack-dev] [ironic] FSM: stable states

Hi,

There is an inconsistency between the stable-state definitions in
the documentation and in the FSM code. The documentation defines
stable states as those "which can be changed by external request"
(and the state diagram looks outdated), while the FSM code defines
them as states which can be set as a "target".

Details: https://review.openstack.org/#/c/385459/

I'm not a terminology expert, any suggestions are welcome.

Yuriy Zveryanskyy





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon]New feature: enable/disable port security

2016-10-19 Thread Rob Cresswell
Thanks for the patch! For something of this size, a wishlist bug is usually 
adequate, perhaps with an API reference to help us review faster :)

Rob

On 19 October 2016 at 12:19, Gyorgy Szombathelyi
<gyorgy.szombathe...@doclerholding.com> wrote:
Hi!

After I saw that allowed-address-pair handling was added to Horizon, I felt that 
completely enabling/disabling the anti-spoofing rules would be a good addition, 
too, so I've created a patch:
https://review.openstack.org/#/c/388611/

Don't know if it needs a blueprint, or other administrative steps, but the 
patch is very light. Please tell me what should be done for it to be accepted.

Br,
György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon]New feature: enable/disable port security

2016-10-19 Thread Gyorgy Szombathelyi
Hi!

After I saw that allowed-address-pair handling was added to Horizon, I felt that 
completely enabling/disabling the anti-spoofing rules would be a good addition, 
too, so I've created a patch:
https://review.openstack.org/#/c/388611/

Don't know if it needs a blueprint, or other administrative steps, but the 
patch is very light. Please tell me what should be done for it to be accepted.

Br,
György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron team social event in Barcelona

2016-10-19 Thread Andreas Scheuring
+1
-- 
-
Andreas 
IRC: andreas_s 



On Di, 2016-10-18 at 16:18 -0700, Isaku Yamahata wrote:
> +1
> Thanks for organizing this.
> 
> On Fri, Oct 14, 2016 at 01:30:57PM -0500,
> Miguel Lavalle  wrote:
> 
> > Dear Neutrinos,
> > 
> > I am organizing a social event for the team on Thursday 27th at 19:30.
> > After doing some Google research, I am proposing Raco de la Vila, which is
> > located in Poblenou: http://www.racodelavila.com/en/index.htm. The menu is
> > here: http://www.racodelavila.com/en/carta-racodelavila.htm
> > 
> > It is easy to get there by subway from the Summit venue:
> > https://goo.gl/maps/HjaTEcBbDUR2. I made a reservation for 25 people under
> > 'Neutron' or "Miguel Lavalle". Please confirm your attendance so we can get
> > a final count.
> > 
> > Here's some reviews:
> > https://www.tripadvisor.com/Restaurant_Review-g187497-d1682057-Reviews-Raco_De_La_Vila-Barcelona_Catalonia.html
> > 
> > Cheers
> > 
> > Miguel
> 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][spec] Feedback wanted on TripleO real-time compute node proposal

2016-10-19 Thread Oliver Walsh
Hi,

I'd like to post a link to a blueprint/spec that I'm hoping to discuss
at the upcoming summit.

https://blueprints.launchpad.net/tripleo/+spec/tripleo-realtime
https://review.openstack.org/388162

I'm new to both TripleO and OpenStack development. Any
advice/hints/criticism is very much appreciated.

Thanks,
Ollie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] FSM: stable states

2016-10-19 Thread Yuriy Zveryanskyy

Hi,

There is an inconsistency between the definitions of stable states in
the documentation and in the FSM code. The documentation defines
stable states as those "which can be changed by external request"
(and the state diagram looks outdated), but the FSM code defines them
as the states which can be set as a "target".

Details: https://review.openstack.org/#/c/385459/

I'm not a terminology expert, any suggestions are welcome.

Yuriy Zveryanskyy
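To illustrate the two readings being debated, here is a minimal, hypothetical
sketch (not ironic's actual FSM code): a state can be marked "stable" (a
resting point, per the docs) and, separately, "target" (settable as the goal
of an external request, per the code). All names here are illustrative.

```python
# Minimal, hypothetical FSM sketch (NOT ironic's actual code) showing the
# two notions being conflated: 'stable' (a resting state, per the docs) and
# 'target' (a state an external request may ask for, per the FSM code).

class MiniFSM:
    def __init__(self, states, transitions, start):
        self.states = states            # name -> {'stable': bool, 'target': bool}
        self.transitions = transitions  # allowed (from_state, to_state) pairs
        self.current = start

    def process_event(self, new_state, external=False):
        if (self.current, new_state) not in self.transitions:
            raise ValueError('invalid transition %s -> %s'
                             % (self.current, new_state))
        # Only states flagged as targets may be requested externally.
        if external and not self.states[new_state]['target']:
            raise ValueError('%s cannot be set as a target' % new_state)
        self.current = new_state

states = {
    'available': {'stable': True, 'target': True},
    'deploying': {'stable': False, 'target': False},  # transient, internal only
    'active':    {'stable': True, 'target': True},
}
transitions = {('available', 'deploying'), ('deploying', 'active')}

fsm = MiniFSM(states, transitions, 'available')
fsm.process_event('deploying')  # internal step: allowed, even though transient
fsm.process_event('active')     # fsm.current is now 'active'
```

In this toy, the documentation's definition corresponds to the 'stable' flag
and the code's behaviour to the 'target' flag; the terminology question is
which of the two deserves the name "stable state".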





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Contributing a new fuel-plugin

2016-10-19 Thread Vladimir Kuklin
Hi Omar

You might want to follow basic OpenStack documentation here:
http://docs.openstack.org/infra/manual/creators.html if you want to create
a new project. For fuel plugins just put it into fuel namespace and call it
something like 'fuel-plugin-'.

If you want to contribute to the existing plugin, please use its respective
launchpad project.

Please let me know if you need any other help.

On Fri, Oct 7, 2016 at 10:03 PM, Omar Rivera  wrote:

> Please, could someone help me find the correct process on how to
> contribute fuel plugins upstream?
>
> I had followed these instructions, but when I opened a bug it was declared
> invalid; has the process changed, perhaps? [1]
>
> I cannot find relevant documentation on how to go about creating the
> repository in Gerrit and creating the Launchpad project correctly. The only
> thing I found is how to add to the DriverLog repository; however, that
> expects a plugin repository to have been created already.
>
> I had hoped for more instruction in this [2] Launchpad project, since it
> seems to manage bugs for many plugins. Or should it be like the contrail
> plugin [3], which has its own page?
>
>
> [1] http://docs.openstack.org/developer/fuel-docs/plugindocs/fuel-plugin-sdk-guide/create-environment/plugin-repo.html
> [2] https://launchpad.net/fuel-plugins
> [3] https://launchpad.net/fuel-plugin-contrail
>
> --
> - Omar Rivera -
> - irc: gomarivera -
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron team social event in Barcelona

2016-10-19 Thread Furukawa, Yushiro
+1  Thank you, Miguel!

Best regards,


  Yushiro Furukawa

From: Miguel Lavalle [mailto:mig...@mlavalle.com]
Sent: Saturday, October 15, 2016 3:31 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] Neutron team social event in Barcelona

Dear Neutrinos,
I am organizing a social event for the team on Thursday 27th at 19:30. After 
doing some Google research, I am proposing Raco de la Vila, which is located in 
Poblenou: http://www.racodelavila.com/en/index.htm. The menu is here: 
http://www.racodelavila.com/en/carta-racodelavila.htm
It is easy to get there by subway from the Summit venue: 
https://goo.gl/maps/HjaTEcBbDUR2. I made a reservation for 25 people under 
'Neutron' or "Miguel Lavalle". Please confirm your attendance so we can get a 
final count.
Here's some reviews: 
https://www.tripadvisor.com/Restaurant_Review-g187497-d1682057-Reviews-Raco_De_La_Vila-Barcelona_Catalonia.html

Cheers

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] No weekly meeting

2016-10-19 Thread TommyLike Hu
Cool, wish I could join the next Summit~

Sean McGinnis wrote on Wednesday, 19 October 2016 at 16:57:

> Hello all,
>
> I know there are a lot of folks travelling for the Summit already. There
> are no agenda items added to the weekly meeting wiki, so I am going to
> cancel this weeks meeting.
>
> If there are any important topics or things that need to be discussed
> prior to the Summit, please bring those up in the #openstack-cinder
> channel. Responses may be delayed, but we should be able to get some
> discussion going if needed.
>
> Thanks, and I hope to see a lot of you in Barcelona!
>
> Sean (smcginnis)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] No weekly meeting

2016-10-19 Thread Sean McGinnis
Hello all,

I know there are a lot of folks travelling for the Summit already. There
are no agenda items added to the weekly meeting wiki, so I am going to
cancel this weeks meeting.

If there are any important topics or things that need to be discussed
prior to the Summit, please bring those up in the #openstack-cinder
channel. Responses may be delayed, but we should be able to get some
discussion going if needed.

Thanks, and I hope to see a lot of you in Barcelona!

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-vpp]Introducing networking-vpp

2016-10-19 Thread Neil Jerram
On Wed, Oct 19, 2016 at 4:30 AM Ian Wells  wrote:

> Sorry to waken an old thread, but I chose a perfect moment to go on
> holiday...
>
> So yes: I don't entirely trust the way we use RabbitMQ, and that's largely
> because what we're doing with it - distributing state, or copies of state,
> or information derived from state - leads to some fragility and odd
> situations when using a tool perhaps better suited to listing off tasks.
> We've tried to find a different model of working that is closer to the
> behaviour we're after.  It is, I believe, similar to the Calico team's
> thinking, but not derived from their code.  I have to admit at this point
> that it's not been tested at scale in our use of it, and that's something
> we will be doing, but I can say that this is working in a way that is in
> line with how etcd is intended to be used, we have tested representative
> etcd performance, and we don't expect problems.
>
> As mentioned before, Neutron's SQL database is the source of truth - you
> need to have one, and that one represents what the client asked for in its
> purest form.  In the nature of keeping two datastores in sync, there is a
> worker thread outside of the REST call to do the synchronisation (because
> we don't want the cloud user to be waiting on our internal workings, and
> because consistently committing to two databases is a recipe for disaster)
> - etcd lags the Neutron DB commits very slightly, and the Neutron DB is
> always right.  This allows the API to be quick while the backend will run
> as efficiently as possible.
>
> It does also mean that failures to communicate in the backend don't result
> in failed API calls - the call succeeds but state updates don't happen.
> This is in line with a 'desired state' model.  A user tells Neutron what
> they want to do and Neutron should generally accept the request if it's
> well formatted and consistent.  Exceptional error codes like 500s are
> annoying to deal with, as you never know if that means 'I failed to save
> that' or 'I failed to implement that' or 'I saved and implemented that, but
> didn't quite get the answer to you' - having simple frontend code ensures
> the answer is highly likely to be 'I will do that it in a moment', in
> keeping with with the eventually consistent model OpenStack has.  The
> driver will then work its magic and update object states when the work is
> finally complete.
>
> Watching changes - and the pub-sub model you end up with - is a means of
> being efficient, but should we miss notifications there's a fallback
> mechanism to get back into state sync with the most recent version of the
> state.  In the worst case, we focus on the currently desired state, and not
> the backlog of recent changes to state.
>
> And Jay, you're right.  What we should be comparing here is how well it
> works.  Is it easy to use, is it easy to maintain, is it annoyingly
> fragile, and does it eat network or CPU?  I believe so (or I wouldn't have
> chosen to do it this way), and I hope we've produced something simple to
> understand while being easier to operate.  However, the proof of the
> pudding is in the eating, so let's see how this works as we continue to
> develop and test it.
>
>
Full ack.  This is indeed "similar to the Calico team's thinking", but
you've done a beautiful job of expressing it.

 Neil
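The watch-plus-resync pattern Ian describes — missed notifications trigger a
full re-read of the currently desired state rather than a replay of the
backlog — can be sketched roughly as follows. This is an illustrative toy
under stated assumptions: FakeStore stands in for etcd and its revision
numbers, and Agent for the driver's worker; networking-vpp's actual code and
a real etcd client (with watch revisions and compaction errors) would differ.

```python
# Toy sketch of "watch for changes, but fall back to a full state resync
# when notifications were missed" (e.g. events compacted away).

class FakeStore:
    """Stand-in for etcd: versioned key/value data plus an event log."""
    def __init__(self):
        self.data = {}
        self.rev = 0
        self.events = []  # (rev, key, value)

    def put(self, key, value):
        self.rev += 1
        self.data[key] = value
        self.events.append((self.rev, key, value))

    def watch_since(self, rev):
        # Return events after `rev`, or None if the watcher fell behind
        # (oldest retained event is not the next one it expects).
        missed = [e for e in self.events if e[0] > rev]
        if missed and missed[0][0] != rev + 1:
            return None
        return missed

class Agent:
    """Consumer that mirrors desired state, resyncing on gaps."""
    def __init__(self, store):
        self.store = store
        self.state = {}
        self.rev = 0

    def resync(self):
        # Fallback: focus on current desired state, not the backlog.
        self.state = dict(self.store.data)
        self.rev = self.store.rev

    def poll(self):
        events = self.store.watch_since(self.rev)
        if events is None:
            self.resync()  # missed notifications: full state sync
            return
        for rev, key, value in events:
            self.state[key] = value
            self.rev = rev

store = FakeStore()
agent = Agent(store)
store.put('port/1', 'up')
agent.poll()                     # incremental update
store.put('port/2', 'up')
store.put('port/3', 'down')
store.events = store.events[2:]  # simulate old events being compacted away
agent.poll()                     # gap detected -> full resync
```

The key property is the last line: after a gap the agent converges on the
latest desired state without needing every intermediate change.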
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron team social event in Barcelona

2016-10-19 Thread Rossella Sblendido
+1 Thanks Miguel!

On 10/19/2016 01:18 AM, Isaku Yamahata wrote:
> +1
> Thanks for organizing this.
> 
> On Fri, Oct 14, 2016 at 01:30:57PM -0500,
> Miguel Lavalle  wrote:
> 
>> Dear Neutrinos,
>>
>> I am organizing a social event for the team on Thursday 27th at 19:30.
>> After doing some Google research, I am proposing Raco de la Vila, which is
>> located in Poblenou: http://www.racodelavila.com/en/index.htm. The menu is
>> here: http://www.racodelavila.com/en/carta-racodelavila.htm
>>
>> It is easy to get there by subway from the Summit venue:
>> https://goo.gl/maps/HjaTEcBbDUR2. I made a reservation for 25 people under
>> 'Neutron' or "Miguel Lavalle". Please confirm your attendance so we can get
>> a final count.
>>
>> Here's some reviews:
>> https://www.tripadvisor.com/Restaurant_Review-g187497-d1682057-Reviews-Raco_De_La_Vila-Barcelona_Catalonia.html
>>
>> Cheers
>>
>> Miguel
> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-19 Thread Adam Harwell
I wonder if maybe it is not clear -- for us, gunicorn is a runtime
dependency for our gate jobs to work, not a deploy dependency.

On Wed, Oct 19, 2016, 11:16 Tony Breeds  wrote:

> On Mon, Oct 17, 2016 at 08:12:45PM -0600, Doug Wiegley wrote:
>
> > Right, so, we’re dancing around the common problem in openstack lately:
> what
> > the heck is openstack?
>
> Sorry to get here so late.
>
> > This came up because service VMs/data plane implementations, which this
> is,
> > have different requirements than API services. Paths forward:
> >
> > 1. Add gunicorn to global requirements.
>
> I'd rather avoid this.  Other have done a great job explaining the runtime
> vs
> deploy dependencies.
>
> > 2. Create a project specific “amphora-requirements.txt” file for the
> service
> > VM packages (this is actually my preference.) It has been pointed out
> that
> > this wouldn’t be kept up-to-date by the bot. We could modify the bot to
> > include it in some way, or do it manually, or with a project specific
> job.
> >
> > 3. Split our service VM builds into another repo, to keep a clean
> separation
> > between API services and the backend.  But, even this new repo’s
> standlone
> > requirements.txt file will have the g-r issue from #1.
>
> Actually Options 2 and 3 are functionally the same (from my POV).  We'd
> need a
> specific job to update your *requirements.txt files.  I feel like a
> separate
> repo is slightly neater but it has the most impact on the Octavia team.
>
> So I'd suggest you go with one of 2 or 3 and we can work together to make
> the
> tools work with you.
>
> > 4. Boot the backend out of OpenStack entirely.
>
> :(  I really hope this was a joke suggestion.  If it isn't then we have
> some
> problems in our community / tools :(
>
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] Next PTL/TC elections timeframes

2016-10-19 Thread Thierry Carrez
Thierry Carrez wrote:
> [...]
> As a result of that discussion a proposal was made, with a focus on
> limiting the impact of the change, avoid the need to modify Foundation
> bylaws, and introduce some flexibility in vote organization.
> 
> See: https://review.openstack.org/#/c/385951/
> 
> The TL;DR: is that PTL elections would continue to be organized around
> development cycle boundaries, while TC elections would continue to be
> organized relative to OpenStack Summit dates. The net effect is that TC
> elections would now be organized separately from PTL elections (rather
> than run just the week after).
> 
> Another consequence is that since we'd continue to elect PTLs around
> development cycles (and Ocata being a short cycle), the Ocata PTLs would
> be renewed early (vote early February).

This proposal was discussed on the review and at the TC meeting
yesterday. While it has so far attracted broad support, we'd like to
give extra time for PTLs to review it.

As mentioned by notmyname on the review, the change is effectively
changing the term for which the current PTLs just got elected. The
charter said "6 months", and with this change in effect the PTLs will
actually be renewed before the start of Pike, so in 4 months.

So we'd like to get extra time for PTLs to chime in on the change and
post their +1 if they are fine with it. We'll wait until the TC meeting
on November 8th to finally approve this.

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage][aodh] about aodh notifier to create an event alarm

2016-10-19 Thread Afek, Ifat (Nokia - IL)


From: "dong.wenj...@zte.com.cn"
Date: Wednesday, 19 October 2016 at 11:01

The aodh-message-bus-notifications blueprint [1] was blocked on Aodh message 
bus notification support.
As discussed between Vitrage and Aodh in the etherpad [2], only the Aodh 
alarm_deletion notification is missing.
I proposed a patch to add the Aodh alarm_deletion notification [3].
Please help me review this patch.
Do alarm.creation, alarm.state_transition and alarm.deletion satisfy the 
Vitrage requirements?
I'd like to help implement the aodh-message-bus-notifications BP if nobody 
else is interested in it.

This is more complex. Aodh has a mechanism for registering a URL to be notified 
when the state of a specific alarm is changed.
Vitrage asked for something else - a notification whenever *any* alarm state is 
changed. In Vitrage we don’t want to register to each and every Aodh alarm 
separately, so we prefer to get the notifications for all changes on the 
message bus (as we do with other OpenStack projects). In addition, there is 
currently no notification about a newly created alarm, so even if we register a 
URL on each alarm we will not be able to register it on the new alarms.

[dwj]:  If I understand correctly, Aodh already supports a notification whenever 
*any* alarm state is changed.
  See 
https://github.com/openstack/aodh/blob/master/aodh/evaluator/__init__.py#L107.
  We only need to configure the vitrage_notifications topic in Aodh and then 
Vitrage can get the notifications from Aodh.
  Let me know if I missed something.

A few months ago I discussed it with Gordon Chung, and understood that he 
blocked the option to add another notification topic.
Gordon?
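For reference, the extra notification topic dwj mentions would use the
standard oslo.messaging mechanism. A sketch of the aodh.conf fragment follows;
the vitrage_notifications topic name is the one assumed in this thread, not an
official Aodh option, and whether Aodh honours it is exactly the open question
above.

```ini
# aodh.conf (sketch, assumption): emit notifications on an additional topic
# that a Vitrage listener can consume, alongside the default one.
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications,vitrage_notifications
```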



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Oct.19

2016-10-19 Thread joehuang
Agenda of Oct.19 weekly meeting:


# Tricircle cleaning

# bugs and blueprint cleaning

# Ocata planning and milestones: OpenStack Ocata schedule 
https://releases.openstack.org/ocata/schedule.html

# Ocata cycle design summit sessions: [1][2]

# open discussion


[1]https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Tricircle%3A
[2]https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Tricircle

How to join:

#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting, every 
Wednesday starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reply: Re: [vitrage][aodh] about aodh notifier to create an event alarm

2016-10-19 Thread dong.wenjuan
From: "Afek, Ifat (Nokia - IL)"
Date: 2016-10-19 15:21
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:
Subject: Re: [openstack-dev] [vitrage][aodh] about aodh notifier to create an 
event alarm






From: "dong.wenj...@zte.com.cn"
Date: Wednesday, 19 October 2016 at 09:21

The aodh-message-bus-notifications blueprint [1] was blocked on Aodh message 
bus notification support.
As discussed between Vitrage and Aodh in the etherpad [2], only the Aodh 
alarm_deletion notification is missing.
I proposed a patch to add the Aodh alarm_deletion notification [3].
Please help me review this patch.
Do alarm.creation, alarm.state_transition and alarm.deletion satisfy the 
Vitrage requirements?
I'd like to help implement the aodh-message-bus-notifications BP if nobody 
else is interested in it.

This is more complex. Aodh has a mechanism for registering a URL to be 
notified when the state of a specific alarm is changed. 
Vitrage asked for something else - a notification whenever *any* alarm 
state is changed. In Vitrage we don’t want to register to each and every 
Aodh alarm separately, so we prefer to get the notifications for all 
changes on the message bus (as we do with other OpenStack projects). In 
addition, there is currently no notification about a newly created alarm, 
so even if we register a URL on each alarm we will not be able to register 
it on the new alarms. 

[dwj]:  If I understand correctly, Aodh already supports a notification 
whenever *any* alarm state is changed.
  See 
https://github.com/openstack/aodh/blob/master/aodh/evaluator/__init__.py#L107.
  We only need to configure the vitrage_notifications topic in Aodh and then 
Vitrage can get the notifications from Aodh.
  Let me know if I missed something.


About the Aodh custom alarm:
What about an alarm type such as `prompt`, or something similar, which means 
the alarm is fired with no evaluation? And the metadata includes a `source_id` 
indicating which source the alarm is on?

This is more or less what we had in mind: being able to control the state 
change externally, plus adding metadata to the alarm (resource_id and 
optionally other information).


[1]
https://blueprints.launchpad.net/vitrage/+spec/aodh-message-bus-notifications

[2]https://etherpad.openstack.org/p/newton-telemetry-vitrage
[3]https://review.openstack.org/#/c/387754/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage][aodh] about aodh notifier to create an event alarm

2016-10-19 Thread Afek, Ifat (Nokia - IL)
From: "dong.wenj...@zte.com.cn"
Date: Wednesday, 19 October 2016 at 09:21

The aodh-message-bus-notifications blueprint [1] was blocked on Aodh message 
bus notification support.
As discussed between Vitrage and Aodh in the etherpad [2], only the Aodh 
alarm_deletion notification is missing.
I proposed a patch to add the Aodh alarm_deletion notification [3].
Please help me review this patch.
Do alarm.creation, alarm.state_transition and alarm.deletion satisfy the 
Vitrage requirements?
I'd like to help implement the aodh-message-bus-notifications BP if nobody 
else is interested in it.

This is more complex. Aodh has a mechanism for registering a URL to be notified 
when the state of a specific alarm is changed.
Vitrage asked for something else - a notification whenever *any* alarm state is 
changed. In Vitrage we don’t want to register to each and every Aodh alarm 
separately, so we prefer to get the notifications for all changes on the 
message bus (as we do with other OpenStack projects). In addition, there is 
currently no notification about a newly created alarm, so even if we register a 
URL on each alarm we will not be able to register it on the new alarms.

About the Aodh custom alarm:
What about an alarm type such as `prompt`, or something similar, which means 
the alarm is fired with no evaluation? And the metadata includes a `source_id` 
indicating which source the alarm is on?

This is more or less what we had in mind: being able to control the state 
change externally, plus adding metadata to the alarm (resource_id and 
optionally other information).


[1]https://blueprints.launchpad.net/vitrage/+spec/aodh-message-bus-notifications
[2]https://etherpad.openstack.org/p/newton-telemetry-vitrage
[3]https://review.openstack.org/#/c/387754/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev