[openstack-dev] [Neutron][Vmware] Sync port groups from ESXi/VCenter into Neutron

2014-03-17 Thread Zhu Zhu
Hi Stackers,

Currently we are working on importing existing ESXi distributed port 
groups into Neutron at the cluster level (the level managed by the VMware 
Nova driver).  This would let Nova deploy VMs to multiple port groups 
without the NSX plugin.  The proposed VMware agent performs the network sync 
from vCenter to the Neutron database, and it will work under the ML2 plugin. 

Does anyone have thoughts on this approach? Your comments are appreciated. 

For details, please refer to
https://blueprints.launchpad.net/neutron/+spec/vcenter-neutron-agent  




Best Regards
Zarric (Zhu Zhu)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] scheduler sub-group meeting 3/18 - Cancel

2014-03-17 Thread Sylvain Bauza
I can chair this one, no worries.

I have the below topics in mind:
- no-db scheduler blueprint
- scheduler forklift efforts
- open discussion

Any other subjects to discuss?

-Sylvain
On 17 March 2014 at 00:55, Dugger, Donald D donald.d.dug...@intel.com
wrote:



 I can't make the meeting this week so, unless someone else wants to
 volunteer to run the meeting, let's cancel this one.



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder + taskflow

2014-03-17 Thread Kekane, Abhishek
Hi John,

With reference to 
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026189.html

Point #3, "unit testing / mock infrastructure is broken", is fixed 
(https://review.openstack.org/#/c/73984).

Currently we are planning to work on persisting the create_volume API using 
taskflow.
I just want to confirm that no other team/developer is working on the same 
task, so that efforts are not duplicated.

Also please let us know if you have any suggestions on the same.

Thanks & Regards,

Abhishek Kekane 

-Original Message-
From: Joshua Harlow [mailto:harlo...@yahoo-inc.com] 
Sent: Tuesday, February 04, 2014 6:11 AM
To: OpenStack Development Mailing List (not for usage questions); John Griffith
Cc: Yassine lamgarchal
Subject: Re: [openstack-dev] Cinder + taskflow

Thanks, John, for the input.

Hopefully we can help focus some of the refactoring on solving the 
state-management problem very soon.

For the mocking case, is there any active work being done here?

As for state management and persistence, I think the goals of both will be 
reached, and it is a good idea to focus on these problems; I am all in on 
figuring out those solutions, although my guess is that both will be 
long-term efforts no matter what. Refactoring cinder from what it is to 
what it could/can be will take time (and should take time, to be careful and 
meticulous), and hopefully we can ensure that focus is retained, since in the 
end it benefits everyone :)

Let's re-form around that state-management issue (which involved a state-machine 
concept?). To me, the current work/refactoring helps establish task objects 
that can be plugged into this machine (which is part of the problem; without 
task objects it's hard to create a state-machine concept around code that is 
dispersed). That's where the current refactoring work helps (in 
identifying those tasks and adjusting code to be closer to smaller units that 
each do a single task); later, when a state-machine concept (or something similar) 
comes along, it will use these tasks (or variations of them) to automate 
transitions based on given events (the flow concept that exists in taskflow is 
similar to this already).

The questions I had (or can currently think of) with the state-machine idea 
(versus just defined flows of tasks) are:

1. What are the events that trigger a state-machine to transition?
  - Typically some type of event causes a machine to transition to a new state 
(after performing some kind of action). Who initiates that transition?
2. What are the events that will cause this triggering? They are likely related 
directly to API requests (but may not be).
3. If a state-machine ends up being created, how does it interact with other 
state-machines that are also running at the same time (does it?)
  - This is a bigger question, and involves how one state-machine could be 
modifying a resource while another one could be too (this is where you want only 
one state-machine modifying a resource at a time). Solving this would address some 
of the races that currently exist (while introducing the complexity of 
distributed locking).
  - It is my opinion that the same problem as in #3 happens when using tasks and 
flows that also affect resources simultaneously, so it's not a unique problem 
directly connected to flows. Part of this I am hoping the tooz project [1] 
can help with, since last time I checked they want to provide a nice API 
around distributed locking backends (among other similar APIs).

[1] https://github.com/stackforge/tooz#tooz
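To make the questions above concrete, here is a minimal, illustrative sketch of an event-driven state machine that holds a per-resource lock while transitioning. This is not taskflow's or cinder's actual API; all names here are invented for illustration, and the in-process `threading.Lock` stands in for the distributed lock a real deployment would need:

```python
import threading

class ResourceStateMachine:
    """Toy state machine: events trigger transitions (question #1),
    and a per-resource lock ensures only one machine modifies a
    resource at a time (question #3)."""

    # (current_state, event) -> new_state
    TRANSITIONS = {
        ("creating", "created"): "available",
        ("available", "attach_requested"): "attaching",
        ("attaching", "attached"): "in-use",
        ("in-use", "detach_requested"): "detaching",
        ("detaching", "detached"): "available",
    }

    def __init__(self, resource_id, lock_registry):
        self.resource_id = resource_id
        self.state = "creating"
        # In a real deployment this would be a distributed lock
        # (e.g. what the tooz project aims to provide), not threading.Lock.
        self._lock = lock_registry.setdefault(resource_id, threading.Lock())

    def on_event(self, event):
        """API requests (or internal actions) are the events that
        initiate transitions."""
        with self._lock:  # serialize changes to this resource
            key = (self.state, event)
            if key not in self.TRANSITIONS:
                raise ValueError("invalid event %r in state %r"
                                 % (event, self.state))
            self.state = self.TRANSITIONS[key]
            return self.state

locks = {}
vol = ResourceStateMachine("vol-1", locks)
print(vol.on_event("created"))           # available
print(vol.on_event("attach_requested"))  # attaching
print(vol.on_event("attached"))          # in-use
```

In this sketch each refactored "task" would run inside `on_event`, which is exactly why having small task objects (rather than dispersed code) makes the state-machine idea feasible.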

-Original Message-
From: John Griffith john.griff...@solidfire.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, February 3, 2014 at 1:16 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Cinder + taskflow

On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow harlo...@yahoo-inc.com
wrote:
 Hi all,

 After talking with john g. about taskflow in cinder and seeing more 
 and more reviews showing up I wanted to start a thread to gather all 
 our lessons learned and how we can improve a little before continuing 
 to add too many more refactorings and reviews (making sure 
 everyone understands the larger goal and larger picture of 
 switching pieces of cinder - piece by piece - to taskflow).

 Just to catch everyone up.

 Taskflow started integrating with cinder in Havana and there has been
 some continued work around these changes:

 - https://review.openstack.org/#/c/58724/
 - https://review.openstack.org/#/c/66283/
 - https://review.openstack.org/#/c/62671/

 There have also been a few other pieces of work going in (forgive me 
 if I missed any...):

 - https://review.openstack.org/#/c/64469/
 - https://review.openstack.org/#/c/69329/
 - https://review.openstack.org/#/c/64026/

 I think now would be a good time (and seems like a 

Re: [openstack-dev] Neutron L3-DVR F2F Discussion - Follow up L3-Agent Design Doc

2014-03-17 Thread Smith, Michael (HPN RD)
All,

As requested at the F2F, we’ve created a design doc to cover changes to the 
L3-Agent.  We have already sent out the L2-Agent doc for review, and now we are 
providing the L3-Agent doc. Please provide your review comments.  See below 
for a link to the Google doc page.

https://docs.google.com/document/d/1jCmraZGirmXq5V1MtRqhjdZCbUfiwBhRkUjDXGt5QUQ/edit

Yours,

Michael Smith
Hewlett-Packard Company
HP Networking R&D
8000 Foothills Blvd. M/S 5557
Roseville, CA 95747
Ph: 916 785-0918
Fax: 916 785-1199


_
From: Vasudevan, Swaminathan (PNB Roseville)
Sent: Monday, February 17, 2014 8:48 AM
To: Baldwin, Carl (HPCS Neutron); sylvain.afch...@enovance.com; James Clark, 
(CES BU) (james.cl...@kt.com); sumit naiksatam (sumitnaiksa...@gmail.com); 
Nachi Ueno (na...@ntti3.com); Kyle Mestery (mest...@siliconloons.com); 
enikanorov (enikano...@mirantis.com); Assaf Muller (amul...@redhat.com); 
cloudbe...@gmail.com; OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org); 'mmccl...@yahoo-inc.com'; Hemanth Ravi 
(hemanth.r...@oneconvergence.com); Grover, Rajeev; Smith, Michael (HPN RD); 
Narasimhan, Vivekanandan; Birru, Murali Yadav
Cc: 'Sahdev P Zala'; 'Michael S Ly'; 'kba...@redhat.com'; 'Donaldson, 
Jonathan'; 'Kiran.Makhijani'; 'Murali Allada'; Rouault, Jason (HP Cloud); 
Atwood, Mark; 'Rajesh Ramchandani'; 'Miguel Angel Ajo Pelayo'; 'CARVER, PAUL'; 
'Geraint North'; 'Kristen Wright (schaffk)'; 'Srinivas R Brahmaroutu'; 'Fei 
Long Wang'; 'Marcio A Silva'; Clark, Robert Graham; 'Dugger, Donald D'; Walls, 
Jeffrey Joel (Cloud OS RD); Kant, Arun; Pratt, Gavin; ravi...@gmail.com; 
Shurtleff, Dave; 'steven.l...@hgst.com'; 'Ryan Hsu'; 'Jesse Noller'; 'David 
Kranz'; 'Shekar Nagesh'; 'Maciocco, Christian'; 'Yanick DELARBRE'; 'Brian 
Emord'; 'Edmund Troche'; 'Gabriel Hurley'; 'James Carey'; Palecanda, Ponnappa; 
'Bill Owen'; Millward, Scott T (HP Networking / CEB- Roseville); 'Michael 
Factor'; 'Mohammad Banikazemi'; 'Octavian Ciuhandu'; 'Dagan Gilat'; 'Kodam, 
Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)'; 'Linda Mellor'; 'LELOUP 
Julien'; 'Jim Fehlig'; 'Stefan Hellkvist'; Carlino, Chuck; 'David Peraza'; 
'Shiv Haris'; 'Lei Lei Zhou'; 'Zuniga, Israel'; 'Ed Hall'; Modi, Prashant; 
'공용준(Next Gen)'; 'David Lai'; 'Murali Allada'; 'Daryl Walleck'; 'Robert Craig'; 
Nguyen, Hoa (Palo Alto); 'Gardiner, Mike'; '안재석(Cloud코어개발2팀)'; Johnson, Anita 
(Exec Asst:SDN-Roseville); Hobbs, Jeannie (HPN Executive Assistant); 'Abby 
Sohal (aksohal)'; 'Tim Serong'; 'greg_jac...@dell.com'; 'Hathaway.Jon'; 'Robbie 
Gill'; Griswold, Joshua; Arunachalam, Yogalakshmi (HPCC Cloud OS); Keith Burns 
(alagalah); Assaf Muller; William Henry; Manish Godara
Subject: RE: Neutron L3-DVR F2F Discussion - Conference Room Updated - 
Directions Attached


Hi Folks,
Thanks for attending the Neutron L3-DVR F2F discussion last week and thanks for 
all your feedback.
Here is the link to the slides that I presented during our meeting.

https://docs.google.com/presentation/d/1XJY30ZM0K3xz1U4CVWaQuaqBotGadAFM1xl1UuLtbrk/edit#slide=id.p

Here are the meeting notes.


1.  The DVR team updated the OpenStack community on what has changed from the 
earlier proposal:
a.  No kernel module
b.  Use existing namespaces
c.  Split L3, floating IP, and default external access
d.  Provide a migration path
e.  Support both East-West and North-South traffic
2.  Got a clear direction from the PTL that we don’t need to address 
distributed SNAT at this point in time and should focus on the centralized 
solution that we proposed.
3.  The DVR agent design (both L2 and L3) should be discussed with the 
respective teams before we proceed, probably in a separate document or blueprint 
that discusses the flows.
4.  No support for dynamic routing protocols.
5.  Discussed both active blueprints.
6.  The community suggested that we consider or check whether the OVS ARP 
responder can be utilized (proposed by Eduard, working on it).
7.  HA for the centralized service node.

Thanks
Swami


-Original Appointment-
From: Vasudevan, Swaminathan (PNB Roseville)
Sent: Wednesday, February 05, 2014 10:02 AM
To: Vasudevan, Swaminathan (PNB Roseville); Baldwin, Carl (HPCS Neutron); 
sylvain.afch...@enovance.commailto:sylvain.afch...@enovance.com; James Clark, 
(CES BU) (james.cl...@kt.commailto:james.cl...@kt.com); sumit naiksatam 
(sumitnaiksa...@gmail.commailto:sumitnaiksa...@gmail.com); Nachi Ueno 
(na...@ntti3.commailto:na...@ntti3.com); Kyle Mestery 
(mest...@siliconloons.commailto:mest...@siliconloons.com); enikanorov 
(enikano...@mirantis.commailto:enikano...@mirantis.com); Assaf Muller 
(amul...@redhat.commailto:amul...@redhat.com); 
cloudbe...@gmail.commailto:cloudbe...@gmail.com; OpenStack Development 
Mailing List 
(openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org); 
'mmccl...@yahoo-inc.com'; Hemanth Ravi 

[openstack-dev] [Neutron] _notify_port_updated in ML2 plugin doesn't take effect under some conditions

2014-03-17 Thread Li Ma
Hi stackers,

I'm trying to extend the capability of ports by propagating
binding:profile from neutron-server to the l2-agents.

When I issue a port-update API call with a new binding:profile, I find that the
action is not notified to any agents. Then I checked the code and found the
following function:

def _notify_port_updated(self, mech_context):
    port = mech_context._port
    segment = mech_context.bound_segment
    if not segment:
        # REVISIT(rkukura): This should notify agent to unplug port
        network = mech_context.network.current
        LOG.warning(_("In _notify_port_updated(), no bound segment for "
                      "port %(port_id)s on network %(network_id)s"),
                    {'port_id': port['id'],
                     'network_id': network['id']})
        return
    self.notifier.port_update(mech_context._plugin_context, port,
                              segment[api.NETWORK_TYPE],
                              segment[api.SEGMENTATION_ID],
                              segment[api.PHYSICAL_NETWORK])

I'm not sure why it checks the bound segment here and prevents sending
port_update out. In my situation, I run a devstack environment and the
bound segment is None by default. Actually, I need this message to be
sent out in all situations.

I'd appreciate any hints.

Thanks a lot,

-- 
---
cheers,
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] devstack: Unable to restart rabbitmq-server

2014-03-17 Thread Deepak C Shetty

Hi List,
It has been a few hours and I have tried everything from ensuring /etc/hosts, 
/etc/hostname, etc. (per Google results), and rabbitmq-server still doesn't 
start. I am using the latest devstack as of today on F20.


Below is the error I see

[stack@devstack-vm ~]$ sudo systemctl status rabbitmq-server.service
rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled)
   Active: failed (Result: timeout) since Mon 2014-03-17 07:20:47 UTC; 14s ago
  Process: 30065 ExecStopPost=/usr/bin/rm /var/run/rabbitmq/pid (code=exited, status=0/SUCCESS)
  Process: 30027 ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl stop (code=exited, status=0/SUCCESS)
  Process: 29879 ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server (code=killed, signal=TERM)
 Main PID: 29879 (code=killed, signal=TERM)
   CGroup: /system.slice/rabbitmq-server.service

Mar 17 07:19:11 devstack-vm.localdomain rabbitmqctl[29880]: pid is 29879 ...
Mar 17 07:19:12 devstack-vm.localdomain rabbitmq-server[29879]: RabbitMQ 
3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc.
Mar 17 07:19:12 devstack-vm.localdomain rabbitmq-server[29879]: ##  
##  Licensed under the MPL.  See http://www.rabbitmq.com/

Mar 17 07:19:12 devstack-vm.localdomain rabbitmq-server[29879]: ##  ##
Mar 17 07:19:12 devstack-vm.localdomain rabbitmq-server[29879]: 
##  Logs: /var/log/rabbitmq/rab...@devstack-vm.log
Mar 17 07:19:12 devstack-vm.localdomain rabbitmq-server[29879]: ##  
##/var/log/rabbitmq/rab...@devstack-vm-sasl.log

Mar 17 07:19:12 devstack-vm.localdomain rabbitmq-server[29879]: ##
Mar 17 07:20:41 devstack-vm.localdomain systemd[1]: 
rabbitmq-server.service operation timed out. Stopping.
Mar 17 07:20:41 devstack-vm.localdomain rabbitmqctl[30027]: Stopping and 
halting node 'rabbit@devstack-vm' ...
Mar 17 07:20:46 devstack-vm.localdomain rabbitmq-server[29879]: Starting 
broker... completed with 0 plugins.

Mar 17 07:20:47 devstack-vm.localdomain rabbitmqctl[29880]: ...done.
Mar 17 07:20:47 devstack-vm.localdomain rabbitmqctl[30027]: ...done.
Mar 17 07:20:47 devstack-vm.localdomain systemd[1]: Failed to start 
RabbitMQ broker.
Mar 17 07:20:47 devstack-vm.localdomain systemd[1]: Unit 
rabbitmq-server.service entered failed state.

[stack@devstack-vm ~]$ sudo systemctl start rabbitmq-server.service


Any additional things I can try to get past the above issue?

===
debug info ...

[stack@devstack-vm ~]$ hostname -s
devstack-vm

[stack@devstack-vm ~]$ sudo cat /etc/hostname
devstack-vm.localdomain

[stack@devstack-vm ~]$  hostname
devstack-vm.localdomain

[stack@devstack-vm ~]$ sudo cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4  devstack-vm
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6



[stack@devstack-vm ~]$ sudo cat /etc/hostname
devstack-vm.localdomain

[stack@devstack-vm ~]$ hostname -s
devstack-vm
[stack@devstack-vm ~]$


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] horizon PyPi distribution missing

2014-03-17 Thread Akihiro Motoki
Does muranodashboard depend only on horizon, or does it require
openstack_dashboard too?

I think we can publish openstack_dashboard (including horizon) even if
horizon and openstack_dashboard are not separated.
Once we separate them successfully, we can publish the horizon module
and add a dependency on horizon to the openstack_dashboard module.
Am I missing something?

Another point we need to consider is that Horizon includes
a lot of JavaScript files, and whether that fits PyPI.

Thanks,
Akihiro

(2014/03/17 15:12), Timur Sufiev wrote:
 Is there any chance of having horizon distribution in its current
 state (i.e., with openstack_dashboard and other 3rd-party stuff) on
 PyPi? Because the 'next' milestone assigned to this blueprint suggests
 (at least to me) that the separation is not going to happen soon :).

 On Thu, Mar 13, 2014 at 1:40 PM, Matthias Runge mru...@redhat.com wrote:
 On Thu, Mar 13, 2014 at 01:10:06PM +0400, Timur Sufiev wrote:
 Recently I've discovered (and it was really surprising for me) that
 horizon package isn't published on PyPi (see
 http://paste.openstack.org/show/73348/). The reason why I needed to
 install horizon this way is that it is desirable for muranodashboard
 unittests to have horizon in the test environment (and it currently
 seems not possible).

 I'd expect this to change, when horizon and OpenStack Dashboard
 are finally separated. I agree, it makes sense to have something
 comparable to the package now called horizon on PyPi.

 https://blueprints.launchpad.net/horizon/+spec/separate-horizon-from-dashboard
 --
 Matthias Runge mru...@redhat.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-17 Thread IWAMOTO Toshihiro
At Thu, 13 Mar 2014 07:48:53 -0700,
Aaron Rosen wrote:
 
 The easiest/quickest thing to do for Icehouse would probably be to run the
 initial sync in parallel like the dhcp-agent does, for this exact reason.
 See: https://review.openstack.org/#/c/28914/ which did this for the
 dhcp-agent.
 
 Best,
 
 Aaron
 On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.comwrote:
 
  Yuri, could you elaborate your idea in detail? I'm lost at some
  points with your unix domain / token authentication.
 
  Where does the token come from?,
 
  Who starts rootwrap the first time?
 
  If you could write a full interaction sequence, on the etherpad, from
  rootwrap daemon start ,to a simple call to system happening, I think that'd
  help my understanding.
 
 
 Here it is: https://etherpad.openstack.org/p/rootwrap-agent
 Please take a look.

I've added a couple of security-related comments (pickle decoding and
token leak) on the etherpad.
Please check.

--
IWAMOTO Toshihiro


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] _notify_port_updated in ML2 plugin doesn't take effect under some conditions

2014-03-17 Thread Li Ma
My misunderstanding. I just found out this message is sent to the
notifications.info topic.

Anyway, is there any solution to get the port_update info on the l2-agents?

Thanks,
Li Ma

On 3/17/2014 4:11 PM, Li Ma wrote:
 Hi stackers,

 I'm trying to extend the capability of ports by propagating
 binding:profile from neutron-server to the l2-agents.

 When I issue a port-update API call with a new binding:profile, I find that the
 action is not notified to any agents. Then I checked the code and found the
 following function:

 def _notify_port_updated(self, mech_context):
     port = mech_context._port
     segment = mech_context.bound_segment
     if not segment:
         # REVISIT(rkukura): This should notify agent to unplug port
         network = mech_context.network.current
         LOG.warning(_("In _notify_port_updated(), no bound segment for "
                       "port %(port_id)s on network %(network_id)s"),
                     {'port_id': port['id'],
                      'network_id': network['id']})
         return
     self.notifier.port_update(mech_context._plugin_context, port,
                               segment[api.NETWORK_TYPE],
                               segment[api.SEGMENTATION_ID],
                               segment[api.PHYSICAL_NETWORK])

 I'm not sure why it checks the bound segment here and prevents sending
 port_update out. In my situation, I run a devstack environment and the
 bound segment is None by default. Actually, I need this message to be
 sent out in all situations.

 I'd appreciate any hints.

 Thanks a lot,


-- 

cheers,
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] _notify_port_updated in ML2 plugin doesn't take effect under some conditions

2014-03-17 Thread Li Ma
Updated: I commented out the segment check in _notify_port_updated of
the ML2 plugin, and finally I can get the port_update message on the l2-agents.

Are there any side effects? It is working for me, but I'm not sure it is
the real solution.
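To illustrate the change described above, here is a stand-alone sketch (not the actual neutron Ml2Plugin code; the `FakeNotifier` and `notify_port_updated` names are invented for this example) of notifying agents even when no segment is bound. It makes the side effect visible: agents then receive updates carrying None for all segment attributes and must tolerate that:

```python
class FakeNotifier:
    """Stand-in for the ML2 agent notifier, recording what was sent."""
    def __init__(self):
        self.sent = []

    def port_update(self, context, port, network_type,
                    segmentation_id, physical_network):
        self.sent.append((port["id"], network_type,
                          segmentation_id, physical_network))


def notify_port_updated(notifier, context, port, segment):
    if segment:
        notifier.port_update(context, port,
                             segment["network_type"],
                             segment["segmentation_id"],
                             segment["physical_network"])
    else:
        # With the bound-segment check removed, the update is still sent,
        # but every segment attribute is None -- the potential side effect.
        notifier.port_update(context, port, None, None, None)


notifier = FakeNotifier()
notify_port_updated(notifier, None, {"id": "port-1"}, None)
print(notifier.sent)  # [('port-1', None, None, None)]
```

So the question of side effects reduces to whether every l2-agent handles a port_update whose network_type, segmentation_id, and physical_network are all None.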

thanks,

-- 

cheers,
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-17 Thread Jiří Stránský

On 16.3.2014 21:20, Steve Baker wrote:

On 15/03/14 02:33, Jiří Stránský wrote:

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1]:
upload a pre-created signing cert, signing key, and CA cert to the controller
nodes using Heat. This seems like a much cleaner approach to
initializing the overcloud than having to SSH into it, and it will solve
both problems I outlined in the initial e-mail.

It creates another problem, though: for simple (think PoC) deployments
without an external CA, we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard, because it's already
implemented in keystone-manage pki_setup, but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar could call pki_setup locally,
passing a parameter to pki_setup to override the default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it
seems that what we need to do with Heat is not totally straightforward.

Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a
Heat template [1]. We want to externally generate and then upload 3
small binary files to the controller nodes (Keystone PKI key and
certificates [2]). We don't want to generate them in place or scp them
into the controller nodes, because that would require having ssh
access to the deployed controller nodes, which comes with drawbacks [3].

It would be good if we could have the 3 binary files put into the
controller nodes as part of the Heat stack creation. Can we include
them in the template somehow? Or is there an alternative feasible
approach?


Thank you

Jirka

[1]
https://github.com/openstack/tripleo-heat-templates/blob/0490dd665899d3265a72965aeaf3a342275f4328/overcloud-source.yaml
[2]
http://docs.openstack.org/developer/keystone/configuration.html#install-external-signing-certificate
[3]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029327.html


It looks like the cert files you want to transfer are all ASCII rather
than binary, which is good, as we have yet to implement a way to attach
binary data to a Heat stack-create call.

One way to write out these files would be using cloud-config. The
disadvantage of this is that it is boot-time config only, so those keys
couldn't be updated with a stack update. You would also be consuming a
decent proportion of your 16k user_data limit.

  keystone_certs_config:
    Type: OS::Heat::CloudConfig
    Properties:
      cloud_config:
        write_files:
        - path: /etc/keystone/ssl/certs/signing_cert.pem
          content: |
            # You have 3 options for how to insert the content here:
            # 1. inline the content
            # 2. Same as 1, but automatically with your own template pre-processing logic
            # 3. call {get_file: path/to/your/signing_cert.pem} but this only works for HOT syntax templates
          permissions: '0600'

  keystone_init:
    Type: OS::Heat::MultipartMime
    Properties:
      parts:
      - subtype: cloud-config
        config:
          get_resource: keystone_certs_config

  notCompute0:
    Type: OS::Nova::Server
    Properties:
      user_data: {Ref: keystone_init}

But it looks like you should just be using os-apply-config templates for
all of the files in /etc/keystone/ssl/certs/

  notCompute0Config:
    Type: AWS::AutoScaling::LaunchConfiguration
    ...
    Metadata:
      ...
      keystone:
        signing_cert: |
          # You have 3 options for how to insert the content here:
          # 1. inline the content
          # 2. Same as 1, but automatically with your own template pre-processing logic
          # 3. call {get_file: path/to/your/signing_cert.pem} but this only works for HOT syntax templates

If the files really are binary then currently you'll have to encode to
base64 before including the content in your templates, then have an
os-refresh-config script to decode and write out the binary files.
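The base64 round trip described above can be sketched in a few lines. This is an illustrative stand-alone example, not TripleO code; the helper names and the temporary file path are invented for the demonstration:

```python
import base64
import os
import tempfile

def encode_for_template(data: bytes) -> str:
    """Encode (possibly binary) file content as an ASCII-safe string
    that can be embedded in a Heat template."""
    return base64.b64encode(data).decode("ascii")

def decode_and_write(encoded: str, path: str) -> None:
    """What a decode-and-write script on the node would do:
    decode the template value and write the original bytes back out."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(encoded))

# Round-trip a binary blob, as a signing cert would be handled.
original = b"\x00binary cert bytes\xff"
blob = encode_for_template(original)

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
decode_and_write(blob, tmp.name)
with open(tmp.name, "rb") as f:
    assert f.read() == original  # bytes survive the round trip
os.unlink(tmp.name)
```

Since base64 inflates content by roughly a third, this approach eats into the 16k user_data limit faster than plain ASCII content would.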


Ah, I don't know why I thought .pem files were binary. Thank you Steve, 
your reply is super helpful :)


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-17 Thread Thierry Carrez
Anne Gentle wrote:
 The reference list lives in the governance git repository:
 
 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
 
 A bit of metadata I'd like added to the programs.yaml file is which
 release each project held which status, integrated or incubated. Shall I
 propose a patch?
 
 Currently you have to look at italicizations
 on https://wiki.openstack.org/wiki/Programs to determine
 integrated/incubated, but you can't really know for which release.

Yes, this has been requested in the past. Feel free to propose a patch.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] horizon PyPi distribution missing

2014-03-17 Thread Timur Sufiev
It depends on openstack_dashboard, namely on
openstack_dashboard.settings. So it is fine from Murano's point of
view to have openstack_dashboard published on PyPi. Many thanks for
considering my request :).

On Mon, Mar 17, 2014 at 12:19 PM, Akihiro Motoki mot...@da.jp.nec.com wrote:
 Does muranodashboard depend only on horizon, or does it require
 openstack_dashboard too?

 I think we can publish openstack_dashboard (including horizon) even if
 horizon and openstack_dashboard are not separated.
 Once we separate them successfully, we can publish the horizon module
 and add a dependency on horizon to the openstack_dashboard module.
 Am I missing something?

 Another point we need to consider is that Horizon includes
 a lot of JavaScript files, and whether that fits PyPI.

 Thanks,
 Akihiro

 (2014/03/17 15:12), Timur Sufiev wrote:
 Is there any chance of having horizon distribution in its current
 state (i.e., with openstack_dashboard and other 3rd-party stuff) on
 PyPi? Because the 'next' milestone assigned to this blueprint suggests
 (at least to me) that the separation is not going to happen soon :).

 On Thu, Mar 13, 2014 at 1:40 PM, Matthias Runge mru...@redhat.com wrote:
 On Thu, Mar 13, 2014 at 01:10:06PM +0400, Timur Sufiev wrote:
 Recently I've discovered (and it was really surprising for me) that
 horizon package isn't published on PyPi (see
 http://paste.openstack.org/show/73348/). The reason why I needed to
 install horizon this way is that it is desirable for muranodashboard
 unittests to have horizon in the test environment (and it currently
 seems not possible).

 I'd expect this to change, when horizon and OpenStack Dashboard
 are finally separated. I agree, it makes sense to have something
 comparable to the package now called horizon on PyPi.

 https://blueprints.launchpad.net/horizon/+spec/separate-horizon-from-dashboard
 --
 Matthias Runge mru...@redhat.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






-- 
Timur Sufiev



Re: [openstack-dev] devstack: Unable to restart rabbitmq-server

2014-03-17 Thread Kashyap Chamarthy
On Mon, Mar 17, 2014 at 02:27:29PM +0530, Deepak C Shetty wrote:
 Hi List,
 It has been a few hours and I have tried everything from ensuring
 /etc/hosts, /etc/hostname, etc. (per Google results), and
 rabbitmq-server still doesn't start. I am using the latest devstack as
 of today on F20
 
 Below is the error I see
 
 [stack@devstack-vm ~]$ sudo systemctl status rabbitmq-server.service
 rabbitmq-server.service - RabbitMQ broker
Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service;
 disabled)
Active: failed (Result: timeout) since Mon 2014-03-17 07:20:47
 UTC; 14s ago
   Process: 30065 ExecStopPost=/usr/bin/rm /var/run/rabbitmq/pid
 (code=exited, status=0/SUCCESS)
   Process: 30027 ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl stop
 (code=exited, status=0/SUCCESS)
   Process: 29879 ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server
 (code=killed, signal=TERM)
  Main PID: 29879 (code=killed, signal=TERM)
CGroup: /system.slice/rabbitmq-server.service

I don't know much about RabbitMQ itself, but a few things that may give
you some debugging clues from systemd journal (look up the man page for
what the switches mean):

Show all logs of priority error:

$ journalctl -p err

Some variations:

$ journalctl /usr/sbin/rabbitmq-server
$ journalctl -u rabbitmq-server -l -p err
$ journalctl -u rabbitmq-server -l --since=yesterday -p err


-- 
/kashyap



Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-17 Thread Radomir Dopieralski
On 16/03/14 06:04, Clint Byrum wrote:

 I think you can achieve this level of protection simply by denying
 interactive users the rights to delete individual things directly, and
 using stop instead of delete. Then have something else (cron?) clean up
 stopped instances after a safety period has been reached.

I would be very interested in the approach to determining the optimal
value for that safety period you are mentioning. Or is this going to be
left as an exercise for the reader? (That is, set in the configuration,
so that the users have to somehow perform this impossible task.)

-- 
Radomir Dopieralski




Re: [openstack-dev] [Openstack] [TROVE] Manual Installation Again

2014-03-17 Thread tattabbum
Hi Mark,
do you have any news about the right steps (Prepare Image) that need to be
performed in order to launch a trove instance with a correctly configured
trove-guest agent?

Like you, I have read all of the trove-integration/script/redstack script to find
the right commands to perform a manual trove installation, and I have
created the gerrit review https://review.openstack.org/#/c/78608/.
I would be happy if you could contribute to the review, particularly for the
Prepare Image steps.

Thank you,
Giuseppe





--
View this message in context: 
http://openstack.10931.n7.nabble.com/Openstack-TROVE-Manual-Installation-Again-tp34470p35366.html
Sent from the Developer mailing list archive at Nabble.com.



Re: [openstack-dev] [horizon] Selenium (which is non-free) is back again in Horizon (Icehouse Beta 3)

2014-03-17 Thread Matthias Runge
On Fri, Mar 14, 2014 at 01:03:26PM +0100, Sascha Peilicke wrote:
 
 
 
 Am 14. März 2014 12:32:41 schrieb Thomas Goirand z...@debian.org:
 
 Hi,
 
 A few months ago, I raised the fact that Selenium *CANNOT* be a hard
 test-requirements.txt build-dependency of Horizon, because it is
 non-free (because of binaries like the browser plug-ins not being
 build-able from source). So it was removed.
 
 Now, on the new Icehouse beta 3, it's back again, and I get some unit
 tests errors (see below).
 
 Guys, could we stop having this kind of regressions, and make Selenium
 tests not mandatory? They aren't runnable in Debian.
 
 Identical situation with openSUSE. And I guess Fedora is no different.

An additional note:

I was very astonished to see that it is now included in Fedora; it looks like
it was sufficient to remove the pre-compiled blobs:

%install
%{__python2} setup.py install --skip-build --root %{buildroot}

rm -f
%{buildroot}%{python2_sitelib}/selenium/webdriver/firefox/amd64/x_ignore_nofocus.so
rm -f
%{buildroot}%{python2_sitelib}/selenium/webdriver/firefox/x86/x_ignore_nofocus.so

%if %{with python3}
pushd %{py3dir}
%{__python3} setup.py install --skip-build --root %{buildroot}
popd
rm -f
%{buildroot}%{python3_sitelib}/selenium/webdriver/firefox/amd64/x_ignore_nofocus.so
rm -f
%{buildroot}%{python3_sitelib}/selenium/webdriver/firefox/x86/x_ignore_nofocus.so
%endif


(the review request is here: [1])

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1070125

Matthias
-- 
Matthias Runge mru...@redhat.com



[openstack-dev] [glance] review for 49316 and corresponding blue print

2014-03-17 Thread Masashi Ozawa
Hi Guys,

Regarding https://review.openstack.org/#/c/49316/, we have been working
on the implementation of this feature for ~5 months and unfortunately
it's not in icehouse yet. We have been testing this feature againt
Amazon S3 and other S3 servers and it's working and it can handle
large image/snapshot objects.

To complete this, the following BP needs to be re-opened as zhiyan
left the message there.

https://blueprints.launchpad.net/glance/+spec/s3-multi-part-upload

We really hope to have this feature in the next release so can someone
please take this one ?

thanks,
- Ozawa





Re: [openstack-dev] [I18n][Horizon] I18n compliance test string freeze exception

2014-03-17 Thread Thierry Carrez
Ying Chun Guo wrote:
 StringFreeze -> Start translation & test -> Report bugs which may cause
 string changes -> Cannot fix these bugs because of StringFreeze.
 So I'd like to bring this question to dev: when shall we fix these
 errors then?
 
 From my point of view, FeatureFreeze means not accept new features,
 and doesn't mean cannot fix bugs in features.
 StringFreeze should mean not to add new strings. But we could be able
 to improve strings and fix bugs.
 I think shipping with incorrect messages is worse than strict string freeze.

First of all, StringFreeze is there to help translators, so we should
definitely evolve it if it misses the target :)

It was never meant to be strict. My idea was that *if* a string gets
changed, we (1) need a good reason for doing so (no cosmetic or
gratuitous change) and (2) need a mechanism in place to warn translators
about that late change (that could just be a ML post).

That way we make sure that if a string gets changed, it's worth the
hassle it creates and the relevant people know about it.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][ML2]

2014-03-17 Thread Mathieu Rohon
Hi

On Fri, Mar 7, 2014 at 7:33 PM, Nader Lahouti nader.laho...@gmail.com wrote:
 1) Does it mean an interim solution is to have our own plugin (and have all
 the changes in it) and declare it as core_plugin instead of Ml2Plugin?

I don't think you should create your own plugin; having an MD is
simpler to develop and to maintain. You should just help us make ML2
evolve along the path that fits your needs. Moreover, enabling an MD to
load extensions is already an identified need, as Akihiro said.
Developing this part would be more useful for you and for all ML2 users.
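
To make the discussion concrete, here is a purely hypothetical sketch of what
an MD advertising its own extension could look like; ML2 exposes no such API
today, and every name below is made up for illustration:

```python
# Hypothetical sketch only: ML2 has no driver-extension API today,
# and all names here are invented for illustration.
class VendorMechanismDriver:
    # extension aliases this driver would implement
    supported_extension_aliases = ['vendor-net-profile']

    def extend_network_dict(self, session, network_db, result):
        # merge driver-specific data into the network dict ML2 returns
        result['vendor-net-profile:profile_id'] = 'default'


net = {}
VendorMechanismDriver().extend_network_dict(None, None, net)
print(net)  # {'vendor-net-profile:profile_id': 'default'}
```

The point is only that each driver would contribute its own attributes to the
result dict, rather than the core plugin hard-coding one extension list.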

 2) The other issue as I mentioned before, is that the extension(s) is not
 showing up in the result, for instance when create_network is called
 [result = super(Ml2Plugin, self).create_network(context, network)], and as a
 result they cannot be used in the mechanism drivers when needed.

 Looks like the process_extensions is disabled when fix for Bug 1201957
 committed and here is the change:
 Any idea why it is disabled?

As you pointed out, it shouldn't be disabled, since an MD might need
the entire network dict, including extension data. You might contact
Salvatore to discuss another workaround for his bug.

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa

 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
 Salvatore Orlando (salv-orlando), authored 8 months ago
 branches: master, 2013.2

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
                  'status': constants.NET_STATUS_ACTIVE}
          network = models_v2.Network(**args)
          context.session.add(network)
 -        return self._make_network_dict(network)
 +        return self._make_network_dict(network,
 +                                       process_extensions=False)

      def update_network(self, context, id, network):
          n = network['network']


 ---


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura kuk...@noironetworks.com
 wrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I think we have with ML2 plugin is the list extensions
 supported by default [1].
 The extensions should only load by MD and the ML2 plugin should only
 implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no single
 MD can really control what set of extensions are active. Drivers need to be
 able to load private extensions that only pertain to that driver, but we
 also need to be able to share common extensions across subsets of drivers.
 Furthermore, the semantics of the extensions need to be correct in the face
 of multiple co-existing drivers, some of which know about the extension, and
 some of which don't. Getting this properly defined and implemented seems
 like a good goal for juno.

 -Bob



 Any though ?
 Édouard.

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good log
 :-)

 Eugine and I talked the related topic to allow drivers to load
 extensions)  in Icehouse Summit
 but I could not have enough time to work on it during Icehouse.
 I am still interested in implementing it and will register a blueprint on
 it.

 etherpad in icehouse summit has baseline thought on how to achieve it.
 https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
 I hope it is a good start point of the discussion.

 Thanks,
 Akihiro

 On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:
  Hi Kyle,
 
  Just wanted to clarify: Should I continue using this mailing list to
  post my
  question/concerns about ML2? Please advise.
 
  Thanks,
  Nader.
 
 
 
  On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery
  mest...@noironetworks.com
  wrote:
 
  Thanks Edgar, I think this is the appropriate place to continue this
  discussion.
 
 
  On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana emag...@plumgrid.com
  wrote:
 
  Nader,
 
  I would encourage you to first discuss the possible extension with
  the
  ML2 team. Rober and Kyle are leading this effort and they have a IRC
  meeting
  every week:
  https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
 
  Bring your concerns on this meeting and get the right feedback.
 
  Thanks,
 
  Edgar
 
  From: Nader Lahouti nader.laho...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Thursday, March 6, 2014 12:14 PM
  To: OpenStack List 

Re: [openstack-dev] icehouse-3 release cross reference is added into www.xrefs.info

2014-03-17 Thread Li Ma
Good job.

On 3/13/2014 1:51 PM, John Smith wrote:
 icehouse-3 release cross reference is added into www.xrefs.info, check
 it out http://www.xrefs.info. Thx. xrefs.info admin


-- 
---
cheers,
Li Ma





Re: [openstack-dev] [Neutron][ML2]

2014-03-17 Thread Mathieu Rohon
Hi racha,

I don't think your topic has anything to do with Nader's topics.
Please create a new thread; it would be easier to follow.
FYI, Robert Kukura is currently refactoring the MD binding; please
have a look here: https://bugs.launchpad.net/neutron/+bug/1276391. As
I understand it, there won't be any priority between MDs that can bind
the same port: the first one to respond to the binding request will
provide its vif_type.

Best,

Mathieu

On Fri, Mar 14, 2014 at 8:14 PM, racha ben...@gmail.com wrote:
 Hi,
   Is it possible (in the latest upstream) to partition the same
 integration bridge br-int into multiple isolated partitions (in terms of
 lvids ranges, patch ports, etc.) between OVS mechanism driver and ODL
 mechanism driver? And then how can we pass some details to Neutron API (as
 in the provider segmentation type/id/etc) so that ML2 assigns a mechanism
 driver to the virtual network? The other alternative I guess is to create
 another integration bridge managed by a different Neutron instance? Probably
 I am missing something.

 Best Regards,
 Racha


 On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti nader.laho...@gmail.com
 wrote:

 1) Does it mean an interim solution is to have our own plugin (and have
 all the changes in it) and declare it as core_plugin instead of Ml2Plugin?

 2) The other issue as I mentioned before, is that the extension(s) is not
 showing up in the result, for instance when create_network is called
 [result = super(Ml2Plugin, self).create_network(context, network)], and as
 a result they cannot be used in the mechanism drivers when needed.

 Looks like the process_extensions is disabled when fix for Bug 1201957
 committed and here is the change:
 Any idea why it is disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa

 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
 Salvatore Orlando (salv-orlando), authored 8 months ago
 branches: master, 2013.2

 neutron/db/db_base_plugin_v2.py:

 @@ -995,7 +995,7 @@ def create_network(self, context, network):
                  'status': constants.NET_STATUS_ACTIVE}
          network = models_v2.Network(**args)
          context.session.add(network)
 -        return self._make_network_dict(network)
 +        return self._make_network_dict(network,
 +                                       process_extensions=False)

      def update_network(self, context, id, network):
          n = network['network']


 ---


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura kuk...@noironetworks.com
 wrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I think we have with ML2 plugin is the list
 extensions supported by default [1].
 The extensions should only load by MD and the ML2 plugin should only
 implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no single
 MD can really control what set of extensions are active. Drivers need to be
 able to load private extensions that only pertain to that driver, but we
 also need to be able to share common extensions across subsets of drivers.
 Furthermore, the semantics of the extensions need to be correct in the face
 of multiple co-existing drivers, some of which know about the extension, and
 some of which don't. Getting this properly defined and implemented seems
 like a good goal for juno.

 -Bob



 Any though ?
 Édouard.

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good log
 :-)

 Eugine and I talked the related topic to allow drivers to load
 extensions)  in Icehouse Summit
 but I could not have enough time to work on it during Icehouse.
 I am still interested in implementing it and will register a blueprint
 on it.

 etherpad in icehouse summit has baseline thought on how to achieve it.
 https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
 I hope it is a good start point of the discussion.

 Thanks,
 Akihiro

 On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:
  Hi Kyle,
 
  Just wanted to clarify: Should I continue using this mailing list to
  post my
  question/concerns about ML2? Please advise.
 
  Thanks,
  Nader.
 
 
 
  On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery
  mest...@noironetworks.com
  wrote:
 
  Thanks Edgar, I think this is the appropriate place to continue this
  discussion.
 
 
  On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana emag...@plumgrid.com
  wrote:
 
  

[openstack-dev] [requirements] python-solumclient

2014-03-17 Thread Noorul Islam Kamal Malmiyoda
Hello all,

In Solum we are using python-solumclient to communicate with our
builder-api [1]. For that we need python-solumclient to be included
as a dependency in requirements.txt, but the requirements check gate is
failing since python-solumclient is not part of global-requirements.
I submitted a patch [2] to add it, but it got a -2, since
Solum is not an OpenStack project. However, I see that both solum and
python-solumclient are in requirements/projects.txt.

All I can think of is to remove requirements gating for Solum. I would
like to know if someone has a better solution to this problem.

Regards,
Noorul

[1] https://review.openstack.org/#/c/80459
[2] https://review.openstack.org/#/c/80756/



Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-03-17 Thread Telles Nobrega
That is good news; I can have both pieces of information sent to nova really
easily. I just need to add a field to the token, or more than one if needed. Right
now I send ids; it could be names just as easily, and we can add a new field so
we can have both pieces of information sent. I'm not sure which is the best option
for us, but I would think that sending both for now would keep backward
compatibility, and we could still use the names for display purposes.
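
A minimal sketch of what carrying both pieces of information might look like
(the field names below are illustrative only, not Keystone's actual token
schema):

```python
# Illustrative sketch only: field names are hypothetical, not
# Keystone's real token format.
def build_token_payload(project_id, project_name_path):
    """Carry both the project id (for compatibility) and the dotted
    name path (for display), as discussed above."""
    return {
        "project_id": project_id,            # what services key on today
        "project_name": project_name_path,   # e.g. "orga.projb", for display
    }


token = build_token_payload("0c4e939acacf4376bdcd1129f1a054ad", "orga.projb")
print(token["project_name"])  # prints: orga.projb
```

Services would keep using project_id as before, while clients such as
novaclient could switch to showing project_name.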


On Sun, Mar 16, 2014 at 9:18 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-03-14 at 13:43 -0700, Vishvananda Ishaya wrote:
  Awesome, this is exactly what I was thinking. I think this is really
  close to being usable on the nova side. First of all the
  dot.separated.form looks better imo, and I think my code should still
  work that way as well. The other piece that is needed is mapping ids
  to names for display purposes. I did something like this for a
  prototype of names in dns caching that should work nicely. The
  question simply becomes how do we expose those names. I'm thinking we
  have to add an output field to the display of objects in the system
  showing the fully qualified name.  We can then switch the display in
  novaclient to show names instead of ids.  That way an admin listing
  all the projects in orga would see the owner as orga.projb instead of
  the id string.
 
  The other option would be to pass names instead of ids from keystone
  and store those instead. That seems simpler at first glance, it is not
  backwards compatible with the current model so it will be painful for
  providers to switch.

 -1 for instead of. in addition to would have been fine, IMO.

 Best,
 -jay







-- 
--
Telles Mota Vidal Nobrega
Bsc in Computer Science at UFCG
Software Engineer at PulsarOpenStack Project - HP/LSD-UFCG


Re: [openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-17 Thread Akihiro Motoki
Hi Abishek, Radomir,

I just noticed this mail.

It seems better that the code discussed here be refactored.
UpdateSubnetInfoAction in projects/networks/subnets/workflows.py
inherits CreateSubnetInfoAction in projects/networks/workflows.py.
IIRC I wanted to share most of the logic between the two, tried to
remove ip_version (defined in the parent class in networks.workflows)
in the child class, and the current implementation just worked.
There is no more to it than that.

Looking at it now, it seems enough just to delete ip_version
from self.fields. The same can be said of the 'with_subnet' attribute
in networks.subnets.workflows.CreateSubnetInfoAction.
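
A framework-free sketch of that idea (Horizon's real classes are Django form
actions; the stand-in classes below only illustrate deleting an inherited
field instead of hiding it):

```python
# Stand-ins for Horizon's Django form actions; illustration only.
class CreateSubnetInfoAction:
    def __init__(self):
        # the parent declares all subnet fields
        self.fields = {'cidr': object(), 'ip_version': object()}


class UpdateSubnetInfoAction(CreateSubnetInfoAction):
    def __init__(self):
        super().__init__()
        # ip_version must not be editable in the update workflow,
        # so drop it rather than render it hidden
        del self.fields['ip_version']


print('ip_version' in UpdateSubnetInfoAction().fields)  # False
```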

When I implemented this, I was not so familir with Django
and perhaps I must have not know deleting a field from self.fields :-(
Thanks for raising this!

Akihiro

(2014/03/12 15:43), Radomir Dopieralski wrote:
 On 11/03/14 16:57, Abishek Subramanian (absubram) wrote:

 Althouh - how up to date is this code?

 This should be easy to check with the git blame command:

 $ git blame
 openstack_dashboard/dashboards/project/networks/subnets/workflows.py

 [...]
 31d55e50 (Akihiro MOTOKI  2013-01-04 18:33:03 +0900  56) class
 CreateSubnet(network_workflows.CreateNetwork):
 [...]
 31d55e50 (Akihiro MOTOKI  2013-01-04 18:33:03 +0900  82) class
 UpdateSubnetInfoAction(CreateSubnetInfoAction):
 [...]
 31d55e50 (Akihiro MOTOKI  2013-01-04 18:33:03 +0900 101)
  #widget=forms.Select(
 [...]

 As you can see, it's all in the same patch, so it's on purpose.

 It seems to me that in the update dialog you are not supposed to change
 the IP Version field. Akihiro Motoki tried to disable it
 first, but then he hit the problem with the browser not submitting
 the field's value and the form displaying the wrong option in there,
 so he decided to hide it instead. But we won't know until the author
 speaks for himself.

 Personally, I would also add a check in the clean() method that the
 IP Version field value indeed didn't change -- to make sure nobody
 edited the form's HTML to get rid of the disabled or readonly attribute.
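
That suggested guard might look roughly like this; in a real Django form the
logic would live in clean(), but the sketch below is framework-free and the
function name is made up:

```python
# Hedged sketch of the clean()-style check suggested above.
def check_ip_version_unchanged(submitted, original):
    """Reject a tampered form that re-enabled the ip_version field."""
    if submitted != original:
        raise ValueError("ip_version may not be changed on update")
    return submitted


check_ip_version_unchanged(4, 4)   # unchanged value passes through
```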



Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-17 Thread Tim Bell

Glance provides a very nice set up for this

- Default is no delayed deletion
- Length of time before scrubbing is configurable
- The clean up process is automated using the glance scrubber which can be run 
as a standalone job or as a daemon
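
The Glance options involved look roughly like this (option names as of the
Havana/Icehouse era; check your release's glance-api.conf reference):

```ini
# glance-api.conf (sketch; names and defaults may differ per release)
delayed_delete = True     # images go to pending_delete instead of being purged
scrub_time = 43200        # seconds an image stays pending_delete before scrubbing
scrubber_datadir = /var/lib/glance/scrubber
```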

Tim

 -Original Message-
 From: Radomir Dopieralski [mailto:openst...@sheep.art.pl]
 Sent: 17 March 2014 10:33
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
 deletion (step by step)
 
 On 16/03/14 06:04, Clint Byrum wrote:
 
  I think you can achieve this level of protection simply by denying
  interactive users the rights to delete individual things directly, and
  using stop instead of delete. Then have something else (cron?) clean
  up stopped instances after a safety period has been reached.
 
 I would be very interested in the approach to determining the optimal value 
 for that safety period you are mentioning. Or is this
 going to be left as an exercise for the reader? (That is, set in the 
 configuration, so that the users have to somehow perform this
 impossible task.)
 
 --
 Radomir Dopieralski
 
 



[openstack-dev] [Mistral] Community meeting reminder - 03/18/2014

2014-03-17 Thread Renat Akhmerov
Hi,

This is a reminder that we’ll have a community meeting today, as usual, at 
16.00 UTC at #openstack-meeting.

Here’s the agenda (also at 
https://wiki.openstack.org/wiki/Meetings/MistralAgenda):
- Review action items
- Current status (quickly by team members)
- Alternatives to std:repeater
- Mistral on top of TaskFlow prototype: goals, approach, approximate schedule
- Open discussion

Looking forward to seeing you there.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-17 Thread Yuzhou (C)
Hi Duncan Thomas,

	Maybe my statement about the approval process was not very precise. In fact,
in my mail I meant:
in an enterprise private cloud, if you are beyond quota and want to create a new
VM, that needs to wait for an approval process.


@stackers,

I think the following two use cases show why non-persistent disk is useful:

1.Non-persistent VDI: 
When users access a non-persistent desktop, none of their settings or 
data is saved once they log out. At the end of a session, 
the desktop reverts back to its original state and the user receives a 
fresh image the next time he logs in.
1). Image manageability, Since non-persistent desktops are built from a 
master image, it's easier for administrators to patch and update the image, 
back it up quickly and deploy company-wide applications to all end users.
2). Greater security, Users can't alter desktop settings or install 
their own applications, making the image more secure.
3). Less storage.

2.As the use case mentioned several days ago by zhangleiqiang:

Let's take a virtual machine which hosts a web service, but it is 
primarily a read-only web site with content that rarely changes. This VM has 
three disks. Disk 1 contains the Guest OS and web application (e.g. 
Apache). Disk 2 contains the web pages for the web site. Disk 3 contains all 
the logging activity.
 In this case, disk 1 (OS & app) uses dependent (default) settings and 
is backed up nightly. Disk 2 is independent non-persistent (not backed up, and 
any changes to these pages will be discarded). Disk 3 is independent 
persistent (not backed up, but any changes are persisted to the disk).
  If updates are needed to the web site's pages, disk 2 must be taken 
out of independent non-persistent mode temporarily to allow the changes to be 
made.
  Now let's say that this site gets hacked, and the pages are doctored 
with something which is not very nice. A simple reboot of this host will 
discard the changes made to the web pages on disk 2, but will persist the 
logs on disk 3 so that a root cause analysis can be carried out.
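
For reference, libvirt's domain XML already has a per-disk transient flag with
similar semantics (guest writes are discarded at power-off); hypervisor
support varies, and the snippet below is only a sketch with illustrative
paths and device names:

```xml
<!-- sketch: disk 2 from the example above as a transient (non-persistent)
     libvirt disk; file path and target device are illustrative -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/web-pages.qcow2'/>
  <target dev='vdb' bus='virtio'/>
  <transient/>
</disk>
```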

Hope to get more suggestions about non-persistent disk!

Thanks.

Zhou Yu




 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Saturday, March 15, 2014 12:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
 stopping VM, data will be rollback automatically), do you think we shoud
 introduce this feature?
 
 On 7 March 2014 08:17, Yuzhou (C) vitas.yuz...@huawei.com wrote:
  First, generally, in public or private cloud, the end users of VMs
 have no right to create new VMs directly.
  If someone want to create new VMs, he or she need to wait for approval
 process.
  Then, the administrator Of cloud create a new VM to applicant. So the
 workflow that you suggested is not convenient.
 
 This approval process & admin action is the exact opposite of what cloud is
 all about. I'd suggest that anybody using such a process has little
 understanding of cloud and should be educated, not weird interfaces added
 to nova to support a broken premise. The cloud /is different/ from
 traditional IT, that is its strength, and we should be wary of undermining 
 that
 to allow old-style thinking to continue.
 



Re: [openstack-dev] [nova] Some thoughts on the nova-specs design process

2014-03-17 Thread Sean Dague
On 03/17/2014 12:34 AM, Michael Still wrote:
 On Mon, Mar 17, 2014 at 3:21 PM, Christopher Yeoh cbky...@gmail.com wrote:
 
 To accommodate those who happen to find the blueprint first, I think we
 need a link from the blueprint to the nova-specs review or when its
 approved into the nova-specs repository. I kind of expected the link
 from the blueprint to review to happen automatically, but it doesn't
 seem to have happened for your example.
 
 I think this is because of the git repo problem (its a proposed commit
 for nova-specs not nova). I'm not sure how to fix that apart from
 expecting the author to create a comment in the launchpad blueprint
 manually, but perhaps that's good enough.
 
 Michael
 

I believe this will give you the behavior you are looking for -
https://review.openstack.org/#/c/80957/

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] MuranoPL questions?

2014-03-17 Thread Stan Lagun
Joshua,

Completely agree with you. We wouldn't be writing another language if we
knew how any existing language could be used for this particular purpose.
If anyone suggests such a language and shows how it can be used to solve the
issues the DSL was designed to solve, we will consider dropping MuranoPL, no
problem.

Surely the DSL hasn't stood the test of time. It just hasn't had a chance yet.
100% of successful programming languages were in that position once.

Anyway, now is the best time to come forward with your suggestions. If you know
exactly how the DSL can be replaced or improved, we would like you to share.


On Wed, Mar 12, 2014 at 2:05 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  I guess I might be a bit biased to programming; so maybe I'm not the
 target audience.

  I'm not exactly against DSL's, I just think that DSL's need to be really
 really proven to become useful (in general this applies to any language
 that 'joe' comp-sci student can create). Its not that hard to just make
 one, but the real hard part is making one that people actually like and use
 and survives the test of time. That's why I think its just nicer to use
 languages that have stood the test of time already (if we can), creating a
 new DSL (muranoPL seems to be slightly more than a DSL imho) means creating
 a new language that has not stood the test of time (in terms of lifetime,
 battle tested, supported over years) so that's just the concern I have.

  Of course we have to accept innovation and I hope that the DSL/s makes
 it easier/simpler, I just tend to be a bit more pragmatic maybe in this
 area.

  Here's hoping for the best! :-)

  -Josh

   From: Renat Akhmerov rakhme...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 at 8:36 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] MuranoPL questions?

   Although being a little bit verbose it makes a lot of sense to me.

  @Joshua,

  Even assuming Python could be sandboxed and whatever else that's needed
 to be able to use it as DSL (for something like Mistral, Murano or Heat) is
 done, why do you think Python would be a better alternative for people who
 know neither these new DSLs nor Python itself? Especially given the
 fact that Python has A LOT of things that they'd never use. I know many
 people who have been programming in Python for a while and they admit they
 don't know all the nuances of Python and actually use 30-40% of all of its
 capabilities. Even not in domain specific development. So narrowing a
 feature set that a language provides and limiting it to a certain domain
 vocabulary is what helps people solve tasks of that specific domain much
 easier and in the most expressive natural way. Without having to learn tons
 and tons of details that a general purpose language (GPL, hah :) ) provides
 (btw, the reason to write thick books).

  I agree with Stan, if you begin to use a technology you'll have to learn
 something anyway, be it TaskFlow API and principles or DSL. Well-designed
 DSL just encapsulates essential principles of a system it is used for. By
 learning the DSL you're learning the system itself, as simple as that.

  Renat Akhmerov
 @ Mirantis Inc.



  On 10 Mar 2014, at 05:35, Stan Lagun sla...@mirantis.com wrote:

I'd be very interested in knowing the resource controls u plan to
 add. Memory, CPU...
  We haven't discussed it yet. Any suggestions are welcomed

  I'm still trying to figure out where something like
  https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
  would be beneficial; why not just spend the effort sandboxing Lua, Python...
 Instead of spending effort on creating a new language and then having to
 sandbox it as well... Especially if u picked languages that are made to be
 sandboxed from the start (not python)...

  1. See my detailed answer in the Mistral thread on why we haven't used any of
 those languages. There are many reasons besides sandboxing.

  2. You don't need to sandbox MuranoPL. Sandboxing is restricting some
 operations. In MuranoPL ALL operations (including operators in expressions,
 functions, methods etc.) are just those that you explicitly provided. So
 there is nothing to restrict. There are no builtins that throw
 AccessViolationError

  3. Most of the value of MuranoPL comes not form the workflow code but
 from class declarations. In all OOP languages classes are just a convenient
 way to organize your code. There are classes that represent real-life objects
 and classes that are nothing more than data structures, DTOs, etc. In Murano,
 classes in MuranoPL are deployable entities, like Heat resources: application
 components, services, etc. In the dashboard UI the user works with those
 entities. He (in the UI!) creates instances of those classes, fills in their
 property values, binds objects together 

Re: [openstack-dev] [savanna] Savanna 2014.1.b3 (Icehouse-3) dev milestone available

2014-03-17 Thread Sergey Lukjanov
Thank you!

Heh, looking forward to sahara packages for rc1 :)

On Sat, Mar 15, 2014 at 12:42 AM, Matthew Farrellee m...@redhat.com wrote:
 On 03/06/2014 04:00 PM, Sergey Lukjanov wrote:

 Hi folks,

 the third development milestone of Icehouse cycle is now available for
 Savanna.

 Here is a list of new features and fixed bug:

 https://launchpad.net/savanna/+milestone/icehouse-3

 and here you can find tarballs to download it:

 http://tarballs.openstack.org/savanna/savanna-2014.1.b3.tar.gz

 http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b3.tar.gz

 http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b3.tar.gz
 http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b3.tar.gz

 There are 20 blueprints implemented and 45 bugs fixed during the
 milestone. It includes the savanna, savanna-dashboard,
 savanna-image-elements and savanna-extra sub-projects. In addition,
 python-savannaclient 0.5.0, released earlier this week, supports
 all new features introduced in this savanna release.

 Thanks.


 rdo packages -

 f21 - savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634141
 el6 - savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634119

 f21 - python-django-savanna -
 http://koji.fedoraproject.org/koji/taskinfo?taskID=6634139
 el6 - python-django-savanna -
 http://koji.fedoraproject.org/koji/taskinfo?taskID=6634116

 best,


 matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.



[openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Eugene Nikanorov
Hi folks,

We've been discussing a patch that fixes
https://bugs.launchpad.net/neutron/+bug/1242351
and came to the conclusion that what we have right now as an operational
status (the 'status' attribute of the resource) may be confusing for a user.

This attribute is used to show deployment status and readiness of the
configuration. For some reason we have an 'ACTIVE' constant in the range of
possible values for the 'status' attribute, and that creates the wrong
expectation for users: they think that if the status is ACTIVE then the
configuration should work, but ACTIVE just means that it has been
successfully deployed.

I've seen bugs/questions for other advanced services that expose the same
user confusion as the bug that I've mentioned. I have also seen similar
patches that try to fix it.

IMO, admin_state_up (a kind of confusing attribute too) and status are two
different, independent attributes that could have any value and in most cases
should not affect each other, for example:

1) Configuration is UP, but not deployed, e.g. state = PENDING_CREATE
2) Configuration is DOWN, but deployed, state = ACTIVE
Case #2 is clearly confusing, but that's just because of the name 'ACTIVE',
which should probably be changed to 'DEPLOYED'.

My proposal is the following:
1) admin_state_up and status are two independent attributes.
admin_state_up turns on/off the configuration, status is for information
only: PENDING_CREATE/DELETE, DEPLOYED, ERROR.
I'm not sure we need INACTIVE here.
2) We document this behavior. We can't just rename ACTIVE to DEPLOYED
because it's a bw-incompatible API change.
3) We deprecate ACTIVE constant in favor of DEPLOYED

There is one serious consequence of the proposal above: real backends
should support turning configurations on and off. Otherwise we could only
implement admin_state_up change with deploy/undeploy (or status attribute
will not make sense for particular driver)
Deploy/undeploy might be simple to implement but is overkill from a
performance standpoint: we would need to redo the wiring, communicate with the
backend to redeploy the whole config, etc.
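To make the proposal concrete, the two attributes can be modeled independently; the sketch below is illustrative only (the class and constant names are made up, not actual Neutron code):

```python
# Illustrative model of the proposal: admin_state_up (user intent) and
# status (deployment progress) never overwrite each other. Names here
# (LBResource, DEPLOYED, ...) are hypothetical, not actual Neutron code.
PENDING_CREATE, DEPLOYED, ERROR = 'PENDING_CREATE', 'DEPLOYED', 'ERROR'


class LBResource:
    def __init__(self):
        self.admin_state_up = True    # desired on/off, set by the user
        self.status = PENDING_CREATE  # deployment progress, set by the driver

    def finish_deploy(self, success):
        # The driver reports the deployment result; admin_state_up is untouched.
        self.status = DEPLOYED if success else ERROR

    def set_admin_state(self, up):
        # Turning the configuration on/off does not change deployment status.
        self.admin_state_up = up

    def is_serving_traffic(self):
        # Only the combination answers "does it work right now?"
        return self.admin_state_up and self.status == DEPLOYED


resource = LBResource()
resource.finish_deploy(success=True)
resource.set_admin_state(False)
# Deployed but administratively down: status stays DEPLOYED, yet the
# resource is not serving traffic -- the two attributes are independent.
```

With ACTIVE renamed to DEPLOYED, the confusing "ACTIVE but admin DOWN" combination from case #2 would simply read as "deployed, administratively down".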

Please share your feedback.

Thanks,
Eugene.


Re: [openstack-dev] devstack: Unable to restart rabbitmq-server

2014-03-17 Thread John Eckersberg
Deepak C Shetty deepa...@redhat.com writes:
 Hi List,
  It been few hours and I tried everything from ensuring /etc/hosts, 
 /etc/hostname etc (per google results) and rabbitmq-server still doesn't 
 start. I am using latest devstack as of today on F20

There are a couple of known bugs that can prevent rabbitmq-server from
starting on F20.

First one (same bug, two BZs) is related to SELinux and port probing:
https://bugzilla.redhat.com/show_bug.cgi?id=1032595#c8
https://bugzilla.redhat.com/show_bug.cgi?id=998682

Second one is a race condition in Erlang.  If you are repeatedly unable
to start rabbitmq-server, it's probably not this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1059913

I have a patched rabbitmq-server package which includes the fixes for
these two issues, if you'd like to try it and see if it helps your
issue.  And if it helps, please comment on the bug(s) to encourage the
maintainer to pull them into the package :)

http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm

Hope that helps,
John



Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Kyle Mestery
On Mon, Mar 17, 2014 at 7:26 AM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 Hi folks,

 We've been discussing a patch that fixes
 https://bugs.launchpad.net/neutron/+bug/1242351
 and came to a conclusion that what we have right now as an operational
 status (status attribute of the resource) may be confusing for a user.

 This attribute is used to show deployment status and readiness of the
 configuration. For some reason we have 'ACTIVE' constant in the range of
 possible constants for 'status' attribute and that creates wrong
 expectation for users. Users think that if status is ACTIVE then
 configuration should work, but ACTIVE just means that it has been
 successfully deployed.

 I've seen bugs/questions for other advanced services that expose the same
 user confusion as the bug that I've mentioned. I also saw same patches that
 try to fix that.

 IMO, admin_state_up (kind of confusing attribute too) and state are two
 different independent
 attributes that could have any value and in most cases should not affect
 each other, for example:

 1) Configuration is UP, but not deployed, e.g. state = PENDING_CREATE
 2) Configuration is DOWN, but deployed, state = ACTIVE
 Case #2 is clearly confusing, but that just because of the name 'ACTIVE',
 which should probably better changed to 'DEPLOYED'

 It's a typical use case for network devices to have both admin and
operational
state. In the case of having admin_state=DOWN and operational_state=ACTIVE,
this just means the port/link is active but has been configured down. Isn't
this
the same for LBaaS here? Even reading the bug, the user has clearly
configured
the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to this
configuration that the pool remains admin_state=DOWN.

Am I missing something here?

Thanks,
Kyle


 My proposal is the following:
 1) admin_state_up and status are two independent attributes.
 admin_state_up turns on/off the configuration, status is for information
 only: PENDING_CREATE/DELETE, DEPLOYED, ERROR.
 I'm not sure we need INACTIVE here.
 2) We document this behavior. We can't just rename ACTIVE to DEPLOYED
 because it's a bw-incompatible API change.
 3) We deprecate ACTIVE constant in favor of DEPLOYED

 There is one serious consequence of the proposal above: real backends
 should support turning configurations on and off. Otherwise we could only
 implement admin_state_up change with deploy/undeploy (or status attribute
 will not make sense for particular driver)
 Deploy/undeploy might be simple to implement is an overkill from
 performance stand point. Need to do wiring, communicate with backend to
 redeploy whole config, etc

 Please share your feedback.

 Thanks,
 Eugene.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-17 Thread Kyle Mestery
Edgar:

I don't see the configuration options for the OpenDaylight ML2
MechanismDriver
added here yet, even though the code was checked in well over a week ago.
How long does it take to autogenerate this page from the code?

Thanks!
Kyle



On Wed, Mar 12, 2014 at 5:10 PM, Edgar Magana emag...@plumgrid.com wrote:

 You should be able to add your plugin here:

 http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html

 Thanks,

 Edgar

 From: Mohammad Banikazemi m...@us.ibm.com
 Date: Monday, March 10, 2014 2:40 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Cc: Edgar Magana emag...@plumgrid.com
 Subject: Re: [openstack-dev] [Neutron] Docs for new plugins

 Would like to know what to do for adding documentation for a new plugin.
 Can someone point me to the right place/process please.

 Thanks,

 Mohammad

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Neutron][ML2] Can I use a new plugin based on Ml2Plugin instead of Ml2Plugin as core_plugin

2014-03-17 Thread Kyle Mestery
On Thu, Mar 13, 2014 at 12:07 PM, Nader Lahouti nader.laho...@gmail.comwrote:

 -- edited the subject

 I'm resending this question.
 The issue is described in the email thread below. In brief, I need to load
 new extensions and it seems the mechanism driver does not support that. In
 order to do that I was thinking of having a new ML2 plugin based on the
 existing Ml2Plugin, adding my changes there, and using it as core_plugin.
 Please read the email thread; I would be glad to have your suggestions.

 Nader, as has been pointed out in the prior thread, it would be best not to
write a new core plugin copied from ML2. A much better approach would be to
work on making extension loading function in the existing ML2 plugin, as this
will benefit all users of ML2.

Thanks,
Kyle



 On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti nader.laho...@gmail.comwrote:

 1) Does it mean an interim solution is to have our own plugin (and have
 all the changes in it) and declare it as core_plugin instead of Ml2Plugin?

 2) The other issue as I mentioned before, is that the extension(s) is not
 showing up in the result, for instance when create_network is called
 [*result = super(Ml2Plugin, self).create_network(context, network)]*,
 and as a result they cannot be used in the mechanism drivers when needed.

  It looks like process_extensions was disabled when the fix for bug 1201957
  was committed, and here is the change:
  Any idea why it was disabled?

 --
 Avoid performing extra query for fetching port security binding

 Bug 1201957


 Add a relationship performing eager load in Port and Network

 models, thus preventing the 'extend' function from performing

 an extra database query.

 Also fixes a comment in securitygroups_db.py


 Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa


 commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf 1 parent ca421e7

 Salvatore Orlando salv-orlando authored 8 months ago


  neutron/db/db_base_plugin_v2.py

  @@ -995,7 +995,7 @@ def create_network(self, context, network):

 995   'status': constants.NET_STATUS_ACTIVE}

 996   network = models_v2.Network(**args)

 997   context.session.add(network)

  998 -    return self._make_network_dict(network)

  998 +    return self._make_network_dict(network,
               process_extensions=False)

 999

 1000  def update_network(self, context, id, network):

 1001

  n = network['network']

 ---
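 (For reference, the eager-load pattern that commit describes can be shown with a
 standalone SQLAlchemy snippet. The models below are simplified stand-ins, not
 Neutron's actual ones; declaring the relationship with lazy='joined' fetches the
 related rows in the same query, so an 'extend' step needs no extra round trip:)

```python
# Standalone sketch of the joined-eager-load pattern: the ports come back
# with the network in one SELECT, so accessing network.ports afterwards
# issues no additional query. Models are illustrative, not Neutron's.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Network(Base):
    __tablename__ = 'networks'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    # Joined eager load: related Port rows load with the Network itself.
    ports = relationship('Port', lazy='joined')


class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    network_id = Column(Integer, ForeignKey('networks.id'))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Network(name='demo', ports=[Port(), Port()]))
    session.commit()
    loaded = session.query(Network).all()[0]
    # No extra SELECT here: the collection was loaded eagerly.
    port_count = len(loaded.ports)  # port_count == 2
```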


 Regards,
 Nader.





 On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura 
 kuk...@noironetworks.comwrote:


 On 3/7/14, 3:53 AM, Édouard Thuleau wrote:

 Yes, that sounds good to be able to load extensions from a mechanism
 driver.

 But another problem I think we have with the ML2 plugin is the list of
 extensions supported by default [1].
 The extensions should only be loaded by MDs, and the ML2 plugin should only
 implement the Neutron core API.


 Keep in mind that ML2 supports multiple MDs simultaneously, so no single
 MD can really control what set of extensions are active. Drivers need to be
 able to load private extensions that only pertain to that driver, but we
 also need to be able to share common extensions across subsets of drivers.
 Furthermore, the semantics of the extensions need to be correct in the face
 of multiple co-existing drivers, some of which know about the extension,
 and some of which don't. Getting this properly defined and implemented
 seems like a good goal for juno.

 -Bob



  Any though ?
 Édouard.

  [1]
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



 On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki amot...@gmail.comwrote:

 Hi,

 I think it is better to continue the discussion here. It is a good log
 :-)

 Eugene and I talked about the related topic (allowing drivers to load
 extensions) at the Icehouse Summit,
 but I could not find enough time to work on it during Icehouse.
 I am still interested in implementing it and will register a blueprint
 for it.

 The etherpad from the Icehouse summit has the baseline thoughts on how to achieve it.
 https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
 I hope it is a good start point of the discussion.

 Thanks,
 Akihiro

 On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:
  Hi Kyle,
 
  Just wanted to clarify: Should I continue using this mailing list to
 post my
  question/concerns about ML2? Please advise.
 
  Thanks,
  Nader.
 
 
 
  On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  Thanks Edgar, I think this is the appropriate place to continue this
  discussion.
 
 
  On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana emag...@plumgrid.com
 wrote:
 
  Nader,
 
  I would encourage you to first discuss the possible extension with
 the
  ML2 team. Rober and Kyle are leading this effort and they have a
 IRC meeting
  every week:
 
 https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
 
  Bring your concerns on this meeting and get the right 

Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Eugene Nikanorov
Hi Kyle,






 It's a typical use case for network devices to have both admin and
 operational
 state. In the case of having admin_state=DOWN and operational_state=ACTIVE,
 this just means the port/link is active but has been configured down.
 Isn't this
 the same for LBaaS here? Even reading the bug, the user has clearly
 configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to this
 configuration that the pool remains admin_state=DOWN.

 Am I missing something here?

No, you're not. The user sees the 'ACTIVE' status and thinks it contradicts
the 'DOWN' admin_state.
It's a naming (UX) problem, in my opinion.

Thanks,
Eugene.


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-17 Thread Renat Akhmerov
Left my comments in https://etherpad.openstack.org/p/taskflow-mistral.

@Changbin, I think the most interesting section for you is “What’s Different”. 
Thanks. Hope this helps. If it doesn’t then let us know your specific questions.

@Joshua, thanks for your input on architecture. At a high-level it makes sense. 
We need to keep discussing it and switch to details. For that reason, like I 
said before, we want to create a very very simple taskflow based prototype (in 
progress). Then we’ll have a chance to think how to evolve TaskFlow properly so 
that it fits Mistral needs.

Renat Akhmerov
@ Mirantis Inc.

On 15 Mar 2014, at 00:31, Joshua Harlow harlo...@yahoo-inc.com wrote:

 Sure, I can try to help,
 
 I started https://etherpad.openstack.org/p/taskflow-mistral so that we can 
 all work on this.
 
 Although I'd rather not make architecture for mistral (cause that doesn't 
 seem like an appropriate thing to do, for me to tell mistral what to do with 
 its architecture), but I'm all for working on it together as a community 
 (instead of me producing something that likely won't have much value).
 
 Let us work on the above etherpad together and hopefully get some good ideas 
 flowing :-)
 
 From: Stan Lagun sla...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, March 14, 2014 at 12:11 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow
 
 Joshua,
 
 why wait? Why not just help Renat with his research on that integration and 
 bring your own vision to the table? Write some 1-page architecture 
 description on how Mistral can be built on top of TaskFlow and we'll discuss 
 pros and cons. It would be much more productive.
 
 
 On Fri, Mar 14, 2014 at 11:35 AM, Joshua Harlow harlo...@yahoo-inc.com 
 wrote:
 Thanks Renat,
 
 I'll keep waiting, and hoping that we can figure this out for everyone's 
 benefit. Because in the end we are all much stronger working together and 
 much weaker when not.
 
 Sent from my really tiny device...
 
 On Mar 13, 2014, at 11:41 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 Folks,
 
 Mistral and TaskFlow are significantly different technologies. With 
 different set of capabilities, with different target audience.
 
 We may not be doing enough to clarify all the differences, I admit that. 
 The challenge here is that people tend to judge while having a minimal amount 
 of information about both things. As always, the devil is in the details. Stan is 
 100% right, “seems” is not an appropriate word here. Java seems to be 
 similar to C++ at the first glance for those who have little or no 
 knowledge about them.
 
 To be more consistent I won’t be providing all the general considerations 
 that I’ve been using so far (in etherpads, MLs, in personal discussions), 
 it doesn’t seem to be working well, at least not with everyone. So to make 
 it better, like I said in that different thread: we’re evaluating TaskFlow 
 now and will share the results. Basically, it’s what Boris said about what 
 could and could not be implemented in TaskFlow. But since the very 
 beginning of the project I never abandoned the idea of using TaskFlow some 
 day when it’s possible. 
 
 So, again: Joshua, we hear you, we’re working in that direction.
 
 
 I'm reminded of
 http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
 where it seemed like we were doing much better collaboration; what has
 happened to break this continuity?
 
 Not sure why you think something is broken. We just want to finish the 
 pilot with all the ‘must’ things working in it. This is a plan. Then we 
 can revisit and change absolutely everything. Remember, to the great 
 extent this is research. Joshua, this is what we talked about and agreed 
 on many times. I know you might be anxious about that given the fact it’s 
 taking more time than planned but our vision of the project has 
 drastically evolved and gone far far beyond the initial Convection 
 proposal. So the initial idea of POC is no longer relevant. Even though we 
 finished the first version in December, we realized it wasn’t something 
 that should have been shared with the community since it lacked some 
 essential things.
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Sincerely yours
 Stanislav (Stan) Lagun
 Senior Developer
 Mirantis
 35b/3, Vorontsovskaya St.
 Moscow, Russia
 Skype: stanlagun
 www.mirantis.com
 sla...@mirantis.com
 

Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review approval

2014-03-17 Thread Thierry Carrez
Sean Dague wrote:
 I want StoryBoard more than anyone else. However future Puppies and
 Unicorns don't fix real problems right now. With the tools already at
 our disposal, just using them a different way, I think we can fix some
 real problems. I think, more importantly, we're going to discover a
 whole new class of problems because we're not blocked on launchpad.

FWIW this model is not incompatible with StoryBoard at all. A feature
story in StoryBoard can definitely have, as its first task, a spec
task that points to a review in the nova-specs repository. When that
task is completed (i.e. the change to nova-specs is merged), you should
add additional tasks to that feature story, corresponding to implementation.

That keeps the approval workflows in Gerrit (be it at design or
implementation level) and uses StoryBoard to link all the things
together (which is its main feature).

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Paul Michali


On Mar 17, 2014, at 8:26 AM, Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi folks,
 
 We've been discussing a patch that fixes 
 https://bugs.launchpad.net/neutron/+bug/1242351 
 and came to a conclusion that what we have right now as an operational status 
 (status attribute of the resource) may be confusing for a user.

PCM: I'm currently working similar issues on VPN…

https://bugs.launchpad.net/neutron/+bug/1291619
https://bugs.launchpad.net/neutron/+bug/1291609

And there is an existing bug that is a subset of the second one I created:

https://bugs.launchpad.net/neutron/+bug/1228005


 
 This attribute is used to show deployment status and readiness of the 
 configuration. For some reason we have 'ACTIVE' constant in the range of 
 possible constants for 'status' attribute and that creates wrong expectation 
 for users. Users think that if status is ACTIVE then configuration should 
 work, but ACTIVE just means that it has been successfully deployed.
 

PCM: For the Cisco plugin, I was working on the following (to stay within the 
confines of existing definitions)…

- If service ADMIN DOWN - service and all connections are moved to DOWN state.
- If service ADMIN UP - if one connection, then service state = connection 
state. If > 1 connection, service ACTIVE (could later check all conns and set 
service ACTIVE if at least one is ACTIVE).
- If connection fails to create - connection status = ERROR, and use DOWN for 
service, if only one connection.
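Written out as code, the rules above amount to a small derivation function (a sketch only; the constant and function names are mine, not the plugin's):

```python
# Sketch of the status rules listed above: derive the VPN service status
# from its admin state and its connections' statuses. Constants and the
# function name are illustrative, not actual Cisco plugin code.
DOWN, ACTIVE, ERROR = 'DOWN', 'ACTIVE', 'ERROR'


def service_status(admin_up, connection_statuses):
    if not admin_up:
        return DOWN  # ADMIN DOWN: service and all connections go DOWN
    if len(connection_statuses) == 1:
        only = connection_statuses[0]
        # A single failed (ERROR) connection maps the service to DOWN.
        return DOWN if only == ERROR else only
    # More than one connection: ACTIVE if at least one connection is ACTIVE.
    return ACTIVE if ACTIVE in connection_statuses else DOWN
```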


 I've seen bugs/questions for other advanced services that expose the same 
 user confusion as the bug that I've mentioned. I also saw same patches that 
 try to fix that.
 
 IMO, admin_state_up (kind of confusing attribute too) and state are two 
 different independent 
 attributes that could have any value and in most cases should not affect each 
 other, for example:
 
 1) Configuration is UP, but not deployed, e.g. state = PENDING_CREATE
 2) Configuration is DOWN, but deployed, state = ACTIVE
 Case #2 is clearly confusing, but that just because of the name 'ACTIVE', 
 which should probably better changed to 'DEPLOYED'

PCM: I agree that ACTIVE is misleading. I'm not sure DEPLOYED is much clearer 
for VPNaaS, but not sure of a better alternative. Having created a service is 
only part of the VPN deployment, one needs a connection created as well. The 
service just binds VPN to a router.

I do think that a new status of ADMIN DOWN is a good definition of a service or 
connection that has admin_state_up=False. It indicates that the user does not 
want the connections to be on-line at this time.


 
 My proposal is the following:
 1) admin_state_up and status are two independent attributes.
 admin_state_up turns on/off the configuration, status is for information 
 only: PENDING_CREATE/DELETE, DEPLOYED, ERROR.
 I'm not sure we need INACTIVE here.

PCM: I guess I'd like to see one status for VPN service with the values: 
PENDING CREATE/DELETE, UP, ERROR, ADMIN DOWN, DOWN. I could see the same thing 
for IPSec connections for the service.

The ADMIN DOWN indicates that there is not an operational issue, but an 
administrative action holding the service down. Not sure how this maps to other 
services.


 2) We document this behavior. We can't just rename ACTIVE to DEPLOYED because 
 it's a bw-incompatible API change.

 3) We deprecate ACTIVE constant in favor of DEPLOYED

PCM: I like UP better than DEPLOYED, only because a created VPN service is not 
fully deployed.


 
 There is one serious consequence of the proposal above: real backends should 
 support turning configurations on and off.

PCM: Yeah, I've put a request in for the Cisco VPN device driver to support 
admin up/down from the REST API (device has the ability already, but not in the 
REST API). I'm currently maintaining some state in the driver as a temporary 
work-around to track when the connection is admin down - as it is deleted on 
the device.


 Otherwise we could only implement admin_state_up change with deploy/undeploy 
 (or status attribute will not make sense for particular driver) 
 Deploy/undeploy might be simple to implement is an overkill from performance 
 stand point. Need to do wiring, communicate with backend to redeploy whole 
 config, etc

PCM: I currently have the device driver deleting the IPSec connection, when 
ADMIN DOWN, but once REST API is in place, the device will just set the state 
to down and it can easily be set ADMIN UP.

This is a timely subject (thanks for bringing it up), as I'm trying to figure 
out how to deal with admin up/down with reference VPN implementation and need 
to quickly figure that out.

Regards,

PCM (Paul Michali)

MAIL  p...@cisco.com
IRC    pcm_  (irc.freenode.net)
TW     @pmichali
GPG key    4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

 
 Please share your feedback. 
 
 Thanks,
 Eugene.
 
 

Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review approval

2014-03-17 Thread Thierry Carrez
Doug Hellmann wrote:
 Other projects up to try it? The only possible addition is that we might
 need to work out cross-project blueprints and which repo those should
 live in? We're doing well on integration; be careful about siloing.
 
 TBH tracking cross-project blueprint impact is a problem *today*,
 typically we end up with either only one of the involved projects
 having a blueprint for the feature or all of them having one (if you
 are lucky they might at least link to the same design on the wiki
 but often not ;)). I am hoping that is something that can ultimately
 be addressed in storyboard but am unsure of how we would resolve
 that as part of this proposal, unless instead you had a central
 blueprint repository and used a tag in the review to indicate which
 projects are impacted/involved?
 
 A central repository does have a certain appeal, especially from my
 perspective in Oslo where the work that we do will have an increasing
 impact on the projects that consume the libraries. It makes review
 permissions on the designs a little tricky, but I think we can work that
 out with agreements rather than having to enforce it in gerrit.

Yes, the main drawback of the nova-specs repository is that it
perpetuates a project-centric view of features. StoryBoard will enable
tracking of cross-project features (the same way Launchpad bugs can have
tasks affecting multiple projects). So if this idea sticks, it would be
nice to have a solution for approval of cross-project specs in the future.

In the mean time this is not a blocker to experimentation.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-17 Thread Thierry Carrez
Yuriy Taraday wrote:
 Another option would be to allow rootwrap to run in daemon mode and
 provide an RPC interface. This way Neutron can spawn rootwrap (with its
 CPython startup overhead) once and send new commands to be run later
 over a UNIX socket.
 This way we won't need to learn a new language (C/C++) or adopt a new
 toolchain (RPython, Cython, whatever else), and we still get a secure way to run
 commands with root privileges.

Note that the whole concept behind rootwrap is to limit the amount of
code that runs with elevated privileges. If you end up running a full
service as root which imports as many libraries as the rest of OpenStack
services, then you should seriously consider switching to running your
root-heavy service as root directly, because it won't make that much of
a difference.

I'm not closing the door to a persistent implementation... Just saying
that in order to be useful, it needs to be as minimal as possible (both
in amount of code written and code imported) and as simple as possible
(so that its security model can be easily proven safe).
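To make the trade-off concrete, here is a minimal sketch of the kind of small UNIX-socket daemon being discussed. This is not rootwrap's actual code; the whitelist, request format, and function names are my own assumptions. The point is that the privileged part stays tiny: it only validates a request against a short whitelist and never goes through a shell.

```python
import json
import subprocess

# Hypothetical whitelist: command name -> allowed executable path.
# Real rootwrap uses filter files; this is only an illustration.
ALLOWED = {
    "ip": "/sbin/ip",
    "ovs-vsctl": "/usr/bin/ovs-vsctl",
}

def handle_request(payload):
    """Validate one JSON request {"cmd": ..., "args": [...]} and
    return the argv to execute, or None if the request is rejected."""
    req = json.loads(payload)
    exe = ALLOWED.get(req.get("cmd"))
    if exe is None:
        return None
    args = [str(a) for a in req.get("args", [])]
    # Reject shell metacharacters outright; argv is passed as a list
    # to execve(), never through a shell.
    if any(c in a for a in args for c in ";|&$`"):
        return None
    return [exe] + args

def serve(sock_path):
    """Daemon loop (not run here): accept connections on a UNIX
    socket and execute validated requests with elevated privileges."""
    import os
    import socket
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    os.chmod(sock_path, 0o600)  # only the owning service may connect
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        argv = handle_request(conn.recv(65536).decode())
        if argv:
            conn.sendall(subprocess.check_output(argv))
        conn.close()
```

Even this toy version shows where the security argument lives: everything outside `handle_request` is plumbing, so the review effort concentrates on a few lines.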

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-17 Thread Sheng Bo Hou
Hi Jay and Zhao Qin,

Thank you for your reply. I have recapped my recent ideas about the 
blueprints and put them at this link: 
https://etherpad.openstack.org/p/live-snapshot.
I am waiting for your comments. 
Thank you folks again.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193



Jay Pipes jaypi...@gmail.com 
2014/03/14 11:40
To: Sheng Bo Hou/China/IBM@IBMCN
Cc: chaoc...@gmail.com
Subject: Re: Fwd: Re: [openstack-dev] [nova] a question about instance snapshot

Hi again, Vincent! I'm including Qin Zhao (cc'd) in our conversation,
since we were chatting about this on IRC :)

Qin helpfully created an Etherpad where we are beginning to discuss this
blueprint (and the related half-completed one).

https://etherpad.openstack.org/p/live-snapshot

See you on the etherpad! :)

Best,
-jay

On Fri, 2014-03-14 at 09:47 +0800, Sheng Bo Hou wrote:
 Hi Jay, 
 
 I saw you are in the discussion about live snapshot. I came up with
 a relatively generic solution for Nova in the following mail. Hope you
 can take a look, review it, and give me your feedback. 
 
 Thank you so much. 
 
 Best wishes,
 Vincent Hou (侯胜博)

 Hi everyone,
 
 I was excited to hear that live snapshot has been brought up for
 discussion in our community. Recently my clients in China came up with
 this live snapshot requirement as well, because they already have
 their legacy environment and expect the original functions to keep working
 when they move to OpenStack. In my opinion, we need to think a
 little bit about these clients' needs, because they are also a potential
 market for OpenStack.
 
 I registered a new blueprint for Nova
 https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot.
 It is named driver-specific for now, but the name can be changed later.
 
 The Nova API could be implemented via the extension, the following API
 may be added:
 • CreateSnapshot: create a snapshot from the VM. The snapshot can be
 live snapshot or other hypervisor native way to create a snapshot.
 • RestoreFromSnapshot: restore/revert the VM from a snapshot.
 • DeleteSnapshot: delete a snapshot.
 • ListSnapshot: list all the snapshots or list all the snapshots if a
 VM id is given.
 • SpawnFromSnapshot: spawn a new VM from an existing snapshot, which
 may be a live snapshot or a snapshot created in some other
 hypervisor-native way. 
 The features in this blueprint can be optional for any driver. If a
 driver does not have a native way to do live snapshots or other kinds
 of snapshots, it is fine to leave the API unimplemented; if a driver
 can provide the native feature, it is an opportunity
 to reinforce Nova with this snapshot support. 
 
 I sincerely need your comments and hope we can figure it out in a most
 favorable way. 
 Thank you so much. 
 
 Best wishes,
 Vincent Hou (侯胜博)
 
 Staff Software Engineer, Open Standards and Open Source Team, Emerging
 Technology Institute, IBM China Software Development Lab
 
 Tel: 86-10-82450778 Fax: 86-10-82453660
 Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
 Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
 West Road, Haidian District, Beijing, P.R.C.100193
 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:
 100193 
 
 Jay Pipes jaypi...@gmail.com 
 2014/03/12 03:15
 Please respond to: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] a question about instance snapshot
 
 On Tue, 2014-03-11 at 06:35 +, Bohai (ricky) wrote:
   -Original Message-
   From: Jay Pipes [mailto:jaypi...@gmail.com]
   Sent: Tuesday, March 11, 2014 3:20 AM
   To: openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [nova] a question about instance
 snapshot
  
   On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
We have very strong interest in pursing this feature in the
 VMware
driver as well. I would like to see the revert instance feature
implemented at least.
   
When I used to work in multi-discipline roles involving
 operations it
would be common for us to snapshot a vm, run through an upgrade
process, then revert if something did not upgrade smoothly. This
ability alone can be exceedingly valuable in long-lived virtual
machines.
   
I also have some comments from parties interested in refactoring
 how
the VMware drivers handle snapshots but I'm not certain how much
 that
plays into this live snapshot discussion.
  
   I think the reason that there isn't much interest 

Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Eugene Nikanorov
Hi Paul,

Thanks for the reply.

IMO, a disadvantage of having UP/DOWN/ADMIN DOWN values for status is that
we mix admin_state and status.
That will require us to implement non-trivial logic of state-status
transitions.
It also has to be carefully documented to avoid user confusion like in all
those bugs.
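To make the independence concrete, here is a toy sketch of the two-attributes idea. The class, constants, and property names are illustrative only, not actual Neutron code: the point is that neither attribute ever overwrites the other, and "is it working?" is answered only by their combination.

```python
# Deployment status values reported by the backend.
PENDING_CREATE, DEPLOYED, ERROR = "PENDING_CREATE", "DEPLOYED", "ERROR"

class VipState:
    """admin_state_up and status kept strictly independent."""

    def __init__(self):
        self.admin_state_up = True    # what the tenant asked for
        self.status = PENDING_CREATE  # what the backend reports

    def on_deployed(self):
        # Backend finished wiring the config; admin_state_up untouched.
        self.status = DEPLOYED

    @property
    def is_serving_traffic(self):
        # Only the combination answers "is it working right now?"
        return self.admin_state_up and self.status == DEPLOYED

vip = VipState()
vip.admin_state_up = False  # tenant turns it off before deploy finishes
vip.on_deployed()
assert vip.status == DEPLOYED      # deployed...
assert not vip.is_serving_traffic  # ...but administratively down
```

With this shape there is no state-status transition table to maintain: the confusing "ACTIVE but admin DOWN" combination simply becomes two honest facts.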

Thanks,
Eugene.


On Mon, Mar 17, 2014 at 5:38 PM, Paul Michali p...@cisco.com wrote:



 On Mar 17, 2014, at 8:26 AM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Hi folks,

 We've been discussing a patch that fixes
 https://bugs.launchpad.net/neutron/+bug/1242351
 and came to a conclusion that what we have right now as an operational
 status (status attribute of the resource) may be confusing for a user.


 PCM: I'm currently working similar issues on VPN...

 https://bugs.launchpad.net/neutron/+bug/1291619
 https://bugs.launchpad.net/neutron/+bug/1291609

 And there is an existing bug that is a subset of the second one I created:

 https://bugs.launchpad.net/neutron/+bug/1228005



 This attribute is used to show deployment status and readiness of the
 configuration. For some reason we have 'ACTIVE' constant in the range of
 possible constants for 'status' attribute and that creates wrong
 expectation for users. Users think that if status is ACTIVE then
 configuration should work, but ACTIVE just means that it has been
 successfully deployed.


 PCM: For the Cisco plugin, I was working on the following (to stay within
 the confines of existing definitions)...

 - If service ADMIN DOWN - service and all connections are moved to DOWN
 state.
 - If service ADMIN UP - If one connection, then service state =
 connection state. If more than one connection, service ACTIVE (could later check all
 conns and set service ACTIVE if at least one is ACTIVE).
 - If connection fails to create - connection status = ERROR, and use DOWN
 for service, if only one connection.


 I've seen bugs/questions for other advanced services that expose the same
 user confusion as the bug that I've mentioned. I also saw same patches that
 try to fix that.

 IMO, admin_state_up (kind of confusing attribute too) and state are two
 different independent
 attributes that could have any value and in most cases should not affect
 each other, for example:

 1) Configuration is UP, but not deployed, e.g. state = PENDING_CREATE
 2) Configuration is DOWN, but deployed, state = ACTIVE
 Case #2 is clearly confusing, but that is just because of the name 'ACTIVE',
 which should probably be changed to 'DEPLOYED'.


 PCM: I agree that ACTIVE is misleading. I'm not sure DEPLOYED is much
 clearer for VPNaaS, but not sure of a better alternative. Having created a
 service is only part of the VPN deployment, one needs a connection created
 as well. The service just binds VPN to a router.

 I do think that a new status of ADMIN DOWN is a good definition of a
 service or connection that has admin_state_up=False. It indicates that the
 user does not want the connections to be on-line at this time.



 My proposal is the following:
 1) admin_state_up and status are two independent attributes.
 admin_state_up turns on/off the configuration, status is for information
 only: PENDING_CREATE/DELETE, DEPLOYED, ERROR.
 I'm not sure we need INACTIVE here.


 PCM: I guess I'd like to see one status for VPN service with the
 values: PENDING CREATE/DELETE, UP, ERROR, ADMIN DOWN, DOWN. I could see the
 same thing for IPSec connections for the service.

 The ADMIN DOWN indicates that there is not an operational issue, but an
 administrative action holding the service down. Not sure how this maps to
 other services.


 2) We document this behavior. We can't just rename ACTIVE to DEPLOYED
 because it's a bw-incompatible API change.


 3) We deprecate ACTIVE constant in favor of DEPLOYED


 PCM: I like UP better than DEPLOYED, only because a created VPN service is
 not fully deployed.



 There is one serious consequence of the proposal above: real backends
 should support turning configurations on and off.


 PCM: Yeah, I've put a request in for the Cisco VPN device driver to
 support admin up/down from the REST API (device has the ability already,
 but not in the REST API). I'm currently maintaining some state in the
 driver as a temporary work-around to track when the connection is admin
 down - as it is deleted on the device.


 Otherwise we could only implement admin_state_up change with
 deploy/undeploy (or the status attribute will not make sense for a particular
 driver).
 Deploy/undeploy might be simple to implement but is overkill from a
 performance standpoint: we need to do wiring, communicate with the backend to
 redeploy the whole config, etc.


 PCM: I currently have the device driver deleting the IPSec connection
 when ADMIN DOWN, but once the REST API is in place, the device will just set
 the state to down and it can easily be set ADMIN UP.

 This is a timely subject (thanks for bringing it up), as I'm trying to
 figure out how to deal with admin up/down with reference VPN 

Re: [openstack-dev] [Mistral] Actions design BP

2014-03-17 Thread Renat Akhmerov

On 16 Mar 2014, at 19:05, Clint Byrum cl...@fewbar.com wrote:

 From my perspective, as somebody who is considering Mistral for some
 important things, the fact that the taskflow people are not aligned with
 the Mistral people tells me that Mistral is not what I thought it was:
 taskflow for direct user consumption.

Yes, it was just an initial idea we were trying to pursue. As we moved forward 
we understood it was too limited and had a lot of pitfalls. The reason is the 
key difference between library and service. Making something a service flips 
everything upside down in terms of requirements to this new system. The logical 
questions/implications are:
- If that's a service, why should its REST API be Python-oriented? Bindings, yes 
  (they give a certain convenience), but that's a different story.
- A whole set of questions related to how we distribute Python-written tasks 
  (especially in a multi-tenant environment, e.g. as a cloud-hosted service): 
  dependencies, sandboxing, result serialisation, etc.
- It becomes logical to be able to use it with non-Python clients and external 
  systems.
- Orientation to long-lived processes (workflows): the synchronous execution 
  model no longer works well; instead a de facto asynchronous event-based 
  architecture fits better. I know it may sound too generic; it's a whole 
  different topic.
- It becomes logical to provide more high-level capabilities than a library 
  does (e.g. the Horizon Dashboard would not be of much use if the service 
  were just a Python library).

I believe it’s not the full list.

Despite all this, I'm not against using TaskFlow in the implementation, at 
least some pieces of it. My only ask is that it should be genuinely beneficial 
rather than bring pain. So I'm OK if we keep aligning our capabilities, roadmaps, 
terminology and whatever else is needed. 

Btw, it would be really helpful if you could clarify what you meant by “some 
important things”. I'm asking because I still feel we could have much more 
input from potential users. Any information is appreciated.

 So, while it may be annoying to have your day to day project
 activities questioned, it is pretty core to the discussion considering
 that this suggests several things that diverge from taskflow's core
 model.

Np. I still keep finding things/rules in OpenStack that I’m not really aware of 
or didn’t get used to yet. If that’s not how it should be done in OS then it’s 
fine.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] scheduler sub-group meeting 3/18 - agenda

2014-03-17 Thread Dugger, Donald D
All-

Just to be clear, Sylvain has agreed to host the meeting this week so it will 
proceed as scheduled.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Monday, March 17, 2014 12:11 AM
To: OpenStack Development Mailing List, (not for usage questions)
Subject: Re: [openstack-dev] [gantt] scheduler sub-group meeting 3/18 - Cancel


I can chair this one, no worries.

I have the below topics in mind :
- no-db scheduler blueprint
- scheduler forklift efforts
- open discussion

Any other subjects to discuss ?

-Sylvain
On 17 Mar 2014 00:55, Dugger, Donald D donald.d.dug...@intel.com wrote:

I can't make the meeting this week so, unless someone else wants to volunteer 
to run the meeting, let's cancel this one.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-17 Thread Mohammad Banikazemi

I think the docs get updated for each release, so probably the newly added
stuff (after I3) will be picked up by the RC1 release date. (cc'ing Tom
Fifield for a definitive answer.)

By the way I do see the odl config table in the openstack-manuals source
tree:
https://github.com/openstack/openstack-manuals/blob/master/doc/common/tables/neutron-ml2_odl.xml
and that is being referenced here:
https://github.com/openstack/openstack-manuals/blob/master/doc/config-reference/networking/section_networking-plugins-ml2.xml

Best,

Mohammad




From:   Kyle Mestery mest...@noironetworks.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   03/17/2014 09:40 AM
Subject: Re: [openstack-dev] [Neutron] Docs for new plugins



Edgar:

I don't see the configuration options for the OpenDaylight ML2
MechanismDriver
added here yet, even though the code was checked in well over a week ago.
How long does it take to autogenerate this page from the code?

Thanks!
Kyle



On Wed, Mar 12, 2014 at 5:10 PM, Edgar Magana emag...@plumgrid.com wrote:
  You should be able to add your plugin here:
  
http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html

  Thanks,

  Edgar

  From: Mohammad Banikazemi m...@us.ibm.com
  Date: Monday, March 10, 2014 2:40 PM
  To: OpenStack List openstack-dev@lists.openstack.org
  Cc: Edgar Magana emag...@plumgrid.com
  Subject: Re: [openstack-dev] [Neutron] Docs for new plugins



  Would like to know what to do for adding documentation for a new plugin.
  Can someone point me to the right place/process please.

  Thanks,

  Mohammad

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Paul Michali
Eugene,

Won't we need that mixing anyway?  If there is an admin down state, shouldn't 
that drive the status to down?
Seems awkward to me, if an IPSec connection has a status of ACTIVE, but an 
admin state of ADMIN DOWN. Or were you thinking of something different?

Regards,


PCM (Paul Michali)

MAIL        p...@cisco.com
IRC         pcm_  (irc.freenode.net)
TW          @pmichali
GPG key     4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Mar 17, 2014, at 10:32 AM, Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi Paul,
 
 Thanks for the reply.
 
 IMO, a disadvantage of having UP/DOWN/ADMIN DOWN values for status is that we 
 mix admin_state and status.
 That will require us to implement non-trivial logic of state-status 
 transitions.
 It also has to be carefully documented to avoid user confusion like in all 
 those bugs.
 
 Thanks,
 Eugene.
 
 
 On Mon, Mar 17, 2014 at 5:38 PM, Paul Michali p...@cisco.com wrote:
 
 
 On Mar 17, 2014, at 8:26 AM, Eugene Nikanorov enikano...@mirantis.com wrote:
 
 Hi folks,
 
 We've been discussing a patch that fixes 
 https://bugs.launchpad.net/neutron/+bug/1242351 
 and came to a conclusion that what we have right now as an operational 
 status (status attribute of the resource) may be confusing for a user.
 
 PCM: I'm currently working similar issues on VPN…
 
 https://bugs.launchpad.net/neutron/+bug/1291619
 https://bugs.launchpad.net/neutron/+bug/1291609
 
 And there is an existing bug that is a subset of the second one I created:
 
 https://bugs.launchpad.net/neutron/+bug/1228005
 
 
 
 This attribute is used to show deployment status and readiness of the 
 configuration. For some reason we have 'ACTIVE' constant in the range of 
 possible constants for 'status' attribute and that creates wrong expectation 
 for users. Users think that if status is ACTIVE then configuration should 
 work, but ACTIVE just means that it has been successfully deployed.
 
 
 PCM: For the Cisco plugin, I was working on the following (to stay within the 
 confines of existing definitions)…
 
 - If service ADMIN DOWN - service and all connections are moved to DOWN 
 state.
 - If service ADMIN UP - If one connection, then service state = connection 
 state. If more than one connection, service ACTIVE (could later check all conns and set 
 service ACTIVE if at least one is ACTIVE).
 - If connection fails to create - connection status = ERROR, and use DOWN 
 for service, if only one connection.
 
 
 I've seen bugs/questions for other advanced services that expose the same 
 user confusion as the bug that I've mentioned. I also saw same patches that 
 try to fix that.
 
 IMO, admin_state_up (kind of confusing attribute too) and state are two 
 different independent 
 attributes that could have any value and in most cases should not affect 
 each other, for example:
 
 1) Configuration is UP, but not deployed, e.g. state = PENDING_CREATE
 2) Configuration is DOWN, but deployed, state = ACTIVE
 Case #2 is clearly confusing, but that is just because of the name 'ACTIVE', 
 which should probably be changed to 'DEPLOYED'.
 
 PCM: I agree that ACTIVE is misleading. I'm not sure DEPLOYED is much clearer 
 for VPNaaS, but not sure of a better alternative. Having created a service is 
 only part of the VPN deployment, one needs a connection created as well. The 
 service just binds VPN to a router.
 
 I do think that a new status of ADMIN DOWN is a good definition of a service 
 or connection that has admin_state_up=False. It indicates that the user does 
 not want the connections to be on-line at this time.
 
 
 
 My proposal is the following:
 1) admin_state_up and status are two independent attributes.
 admin_state_up turns on/off the configuration, status is for information 
 only: PENDING_CREATE/DELETE, DEPLOYED, ERROR.
 I'm not sure we need INACTIVE here.
 
 PCM: I guess I'd like to see one status for VPN service with the values: 
 PENDING CREATE/DELETE, UP, ERROR, ADMIN DOWN, DOWN. I could see the same 
 thing for IPSec connections for the service.
 
 The ADMIN DOWN indicates that there is not an operational issue, but an 
 administrative action holding the service down. Not sure how this maps to 
 other services.
 
 
 2) We document this behavior. We can't just rename ACTIVE to DEPLOYED 
 because it's a bw-incompatible API change.
 
 3) We deprecate ACTIVE constant in favor of DEPLOYED
 
 PCM: I like UP better than DEPLOYED, only because a created VPN service is 
 not fully deployed.
 
 
 
 There is one serious consequence of the proposal above: real backends should 
 support turning configurations on and off.
 
 PCM: Yeah, I've put a request in for the Cisco VPN device driver to support 
 admin up/down from the REST API (device has the ability already, but not in 
 the REST API). I'm currently maintaining some state in the driver as a 
 temporary work-around to track when the connection is admin down - as it is 
 deleted on the device.
 
 
 Otherwise we 

Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Kyle Mestery
On Mon, Mar 17, 2014 at 8:36 AM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 Hi Kyle,






 It's a typical use case for network devices to have both admin and
 operational
 state. In the case of having admin_state=DOWN and
 operational_state=ACTIVE,
 this just means the port/link is active but has been configured down.
 Isn't this
 the same for LBaaS here? Even reading the bug, the user has clearly
 configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to this
 configuration that the pool remains admin_state=DOWN.

 Am I missing something here?

 No, you're not. The user sees 'ACTIVE' status and think it contradicts
 'DOWN' admin_state.
 It's naming (UX problem), in my opinion.

 OK, so the change is merely to change ACTIVE into DEPLOYED instead?


 Thanks,
 Eugene.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-17 Thread Carlos Gonçalves
Inline comments.

On 15 Mar 2014, at 15:06, Kanzhe Jiang kan...@gmail.com wrote:

 On Fri, Mar 14, 2014 at 3:18 PM, Mohammad Banikazemi m...@us.ibm.com wrote:
 1- This fits ok with the policy model we had developed earlier where the 
 policy would get defined between a source and a destination policy endpoint 
 group. The chain could be instantiated at the time the policy gets defined. 
 (More questions on the instantiation below marked as 1.a and 1.b.) How would 
 that work in a contract based model for policy? At the time a contract is 
 defined, its producers and consumers are not defined yet. Would we postpone 
 the instantiation of the service chain to the time a contract gets a producer 
 and at least a consumer?
 
 In a contract based model, we can add a state attribute to the service chain. 
 Once a contract is defined, a corresponding chain could be defined without 
 insertion contexts. The chain state is pending. I assume the producer and 
 consumers can be used to derive the source and destination insertion contexts 
 for the chain. Once a contract gets producer and a consumer, the chain can 
 then be instantiated. When new consumers are added, the chain would verify if 
 the new context can be supported before updating the existing contexts. If 
 all producer and consumers are removed from a contract, the chain provider 
 deletes all service instances in the chain.

Exactly. I was about to suggest the same.
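As a toy illustration of that lifecycle (class, method, and state names are my own, not part of any proposed Neutron API): the chain sits in a pending state until the contract has both a producer and a consumer, and drops back to pending when they go away.

```python
class ServiceChain:
    """Sketch of the contract-driven chain lifecycle discussed above."""

    def __init__(self, services):
        self.services = services  # e.g. ["firewall", "vpn"]
        self.producers = set()
        self.consumers = set()
        self.state = "PENDING"    # defined, but no insertion contexts yet

    def _update(self):
        # Insertion contexts are derivable only once traffic endpoints exist.
        if self.producers and self.consumers:
            self.state = "ACTIVE"
        else:
            self.state = "PENDING"

    def add_producer(self, p):
        self.producers.add(p)
        self._update()

    def add_consumer(self, c):
        self.consumers.add(c)
        self._update()

    def remove_consumer(self, c):
        self.consumers.discard(c)
        self._update()

chain = ServiceChain(["firewall", "vpn"])
assert chain.state == "PENDING"
chain.add_producer("web-tier")
chain.add_consumer("public-net")
assert chain.state == "ACTIVE"
chain.remove_consumer("public-net")
assert chain.state == "PENDING"  # all consumers gone, instances torn down
```

A real provider would also have to verify, in `_update`, that each new context can actually be supported before touching the existing ones, as Kanzhe describes.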
 3- For the service chain creation, I am sure there are good reasons for 
 requiring a specific provider for a given chain of services but wouldn't it 
 be possible to have a generic chain provider which would instantiate each 
 service in the chain using the required provider for each service (e.g., 
 firewall or loadbalancer service) and with setting the insertion contexts for 
 each service such that the chain gets constructed as well? I am sure I am 
 ignoring some practical requirements but is it worth rethinking the current 
 approach? 
 
 
 Service Chaining often means a form of traffic steering. Depending on how the 
 steering is done, the capabilities of different providers differ. Different 
 provider may define different context of individual service in the chain. For 
 example, a bump-in-the-wire service can be inserted as a virtual wire or L3 
 next hop. So it will be hard to define a generic chain provider.  

I’m partially with Mohammad on this.

From what I've understood of the service chaining proposal, there would be 
different service chain provider implementations, each one restricted to a 
statically defined and limited number of services for chaining (please correct 
me if I'm mistaken). That is, taking the “Firewall-VPN-ref-Chain” service 
chain provider from the document as an example, users would be limited to creating 
chains “firewall -> VPN” (and I'm not even considering the restrictiveness of 
service providers) but not “VPN -> firewall”, or introducing a LB in the middle.

My rough understanding of chaining, in broad terms, would be to first support 
generic L2/L3 chaining, not restricted to Neutron services (FWaaS, 
LBaaS, VPNaaS), though it should also be valid for them, as they can be 
reached via network ports as well.

Last week during the advanced services meeting I presented the following use 
case. DPI (Deep Packet Inspection) is an example of a Neutron service that is 
absent as of now. Users wanting to run a DPI instance in OpenStack would have to 
create a virtual machine and run it there, which is fine. Now, in case they want 
to filter inbound traffic from a (public) network, traffic should be steered first 
to the VM running the DPI and then to the final destination. Currently in 
OpenStack it is not possible to configure this, and I don't see how in the 
proposed BP it would be. The example given was a DPI, but it can be 
virtually any service type and service implementation. Sure, users wouldn't get 
all the fancy APIs OpenStack provides to instantiate and configure services.

Does any of this even make any sense? :-)

On a side note, it may be worth watching one of the two OpenContrail’s videos 
demonstrating their approach on service chain:
- 
http://opencontrail.org/videogallery/elastic-ssl-vpn-service-over-contrail-for-secure-mobile-connectivity/
- 
http://opencontrail.org/videogallery/dynamic-and-elastic-anti-ddos-service-through-contrail-and-ddos-secure/

Thanks,
Carlos Goncalves

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy types

2014-03-17 Thread Tim Hinrichs
Hi Prabhakar,

One big piece we're missing in terms of code right now is the Data Integration 
component.  The job of this component is to integrate data sources available in 
the cloud so that tables like nova:virtual_machine, neutron:owner, etc. reflect 
the information stored in Nova, Neutron, etc.  Rajdeep is making progress on 
that (he's got some code up on review that we're iterating on), and Peter had 
worked on integrating AD a while back.

Typically I've seen the Python functions (which I usually call 'builtins') that are 
integrated into a Datalog system have explicit declarations (e.g. inputs, 
outputs, etc.). This is good if we need to do deeper analysis of the policy 
(which is one of the reasons to use a policy language) and typically requires 
information about how each builtin works.  

I dug through some old (Lisp) code to see how I've done this in the past.

;; (defbuiltin <datalog-name> <lisp-function> <list of types of args> <list of types of returns> [internal])
(defbuiltin plus + (integer integer) integer)
(defbuiltin minus - (integer integer) integer)
(defbuiltin times * (integer integer) integer)
(defbuiltin div (lambda (x y) (floor (/ x y))) (integer integer) integer)
(defbuiltin lt numlessp (integer integer) nil)
(defbuiltin lte numleqp (integer integer) nil)
(defbuiltin gte numgeqp (integer integer) nil)
(defbuiltin gt numgreaterp (integer integer) nil)

But maybe you're right in that we could do away with these explicit 
declarations and just assume that everything that (i) is callable, (ii) is not 
managed by the Data Integration component, and (iii) does not appear in the 
head of a rule is a builtin.  My only worry is that I could imagine silent and 
weird problems showing up, e.g. someone forgot to define a table with rules and 
there happens to be a function in Python by that name.  Or someone supplies the 
wrong number of arguments, and we just get an error from Python, which we'd 
have no direct way to communicate to the policy-writer, i.e. there's no 
compile-time argument-length checking.

The other thing I've seen done is to have a single builtin 'evaluate' that lets 
us call an arbitrary Python function, e.g.

p(x, y) :- q(x), evaluate(mypyfunc(x), y)

Then we wouldn't need to declare the functions.  Errors would still be silent.  
But it would be clear whether we were using a Python function or not.
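For comparison, here is a rough Python analogue of the `defbuiltin` table above. This is purely illustrative (not Congress code, and the names are invented); the point is that a declared arity lets us reject a wrong argument count loudly at the Datalog level, instead of surfacing it as a confusing Python `TypeError` deep inside evaluation.

```python
# Registry mapping a Datalog builtin name to (python_function, arity).
BUILTINS = {}

def defbuiltin(name, func, nargs):
    """Declare a builtin with an explicit argument count."""
    BUILTINS[name] = (func, nargs)

def call_builtin(name, args):
    """Invoke a declared builtin, checking existence and arity first."""
    if name not in BUILTINS:
        # Catches "forgot to define a table and a Python name collided".
        raise KeyError("unknown builtin %r (undefined table?)" % name)
    func, nargs = BUILTINS[name]
    if len(args) != nargs:
        # Explicit arity check stands in for compile-time checking.
        raise TypeError("builtin %r expects %d args, got %d"
                        % (name, nargs, len(args)))
    return func(*args)

# Declarations mirroring the Lisp table above.
defbuiltin("plus", lambda x, y: x + y, 2)
defbuiltin("lt", lambda x, y: x < y, 2)

assert call_builtin("plus", [1, 2]) == 3
assert call_builtin("lt", [1, 2]) is True
```

The same registry could back an `evaluate`-style builtin: `evaluate` would look up the function by name here rather than in the global Python namespace, which keeps the two error cases from being silent.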

Thoughts?
Tim




- Original Message -
| From: prabhakar Kudva nandava...@hotmail.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Saturday, March 15, 2014 8:28:27 PM
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| 
| 
| Hi Tim,
| 
| Here is a small change I wanted to try in runtime.py
| It may already exist in MaterializedViewTheory, but wasn't clear to
| me.
| Checking to see if this is something that:1. makes sense 2. already
| exists
| 3. worth implementing? in that order.
| 
| Let's take the example from private_public_network.classify
| 
| error(vm) :- nova:virtual_machine(vm), nova:network(vm, network),
| not neutron:public_network(network),
| neutron:owner(network, netowner), nova:owner(vm, vmowner), not
| same_group(netowner, vmowner)
| 
| same_group(user1, user2) :- cms:group(user1, group), cms:group(user2,
| group)
| 
| nova:virtual_machine(vm1)
| nova:virtual_machine(vm2)
| nova:virtual_machine(vm3)
| nova:network(vm1, net_private)
| nova:network(vm2, net_public)
| neutron:public_network(net_public)
| nova:owner(vm1, tim)
| nova:owner(vm2, pete)
| nova:owner(vm3, pierre)
| neutron:owner(net_private, martin)
| 
| 
| In this example, if as in Scenario 1:
| 
| Cloud services at our disposal:
| nova:virtual_machine(vm)
| nova:network(vm, network)
| nova:owner(vm, owner)
| neutron:public_network(network)
| neutron:owner(network, owner)
| cms:group(user, group)
| 
| are all python functions called through some nova/neutron api,
| then, we just execute them to get a true/false value in runtime.py
| They should be first checked to make sure they are python functions
| and
| not condition primitives using 'callable' and os.dir or some such
| combination.
| 
| If not, and they are assertions made in the file, not directly
| related to OS
| state, then in Scenario 2
| 
| nova:owner(vm1, tim)
| nova:owner(vm2, pete)
| nova:owner(vm3, pierre)
| 
| Are assertions made in the file. In a dynamic environment,
| a python function could query an OS client to actually find the
| current owner,
| since some other OS command could have been used to change the owner
| without an entry being made in this file, i.e., without explicitly
| informing Congress.
| This may not occur currently with vms, but may be implemented
| in a future release. Similar other examples are possible
| 
https://ask.openstack.org/en/question/5582/how-to-change-ownership-between-tenants-of-volume/
| https://blueprints.launchpad.net/cinder/+spec/volume-transfer
| 
| 
| So, I was thinking that python_nova_owner(vm1), is first checked as
| a 

Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-17 Thread Ben Nemec

On 2014-03-15 12:23, Solly Ross wrote:

--nodeps will only sync the modules specified on the command line:
https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator


Heh, whoops.  Must have missed that.  Is it in the README/info at the
top of the update.py script?


It wasn't, but I pushed a change to add it: 
https://review.openstack.org/#/c/81004/




That said, it's not always safe to do that.  You might sync a change 
in

one module that depends on a change in another module and end up
breaking something.  It might not be caught in the sync either because
the Oslo unit tests don't get synced across.


Hmm... I suppose this is why we have libraries with dependencies (not
meant to sound snarky).
Although in the case of updating a library that you wrote, it's less
likely to break things.


Yeah, ideally you would never need nodeps because the deps would already 
be up to date when you do your sync, but unfortunately we aren't that 
good about keeping Oslo syncs current right now.  Hopefully that will 
get better in Juno, but we'll see.




Best Regards,
Solly Ross

- Original Message -
From: Ben Nemec openst...@nemebean.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Solly Ross sr...@redhat.com
Sent: Friday, March 14, 2014 4:36:03 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

On 2014-03-14 14:49, Solly Ross wrote:

It would also be great if there was a way to only sync one package.


There is. :-)

--nodeps will only sync the modules specified on the command line:
https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator

That said, it's not always safe to do that.  You might sync a change in
one module that depends on a change in another module and end up
breaking something.  It might not be caught in the sync either because
the Oslo unit tests don't get synced across.


When adding a new library
to a project (e.g. openstack.common.report to Nova), one would want to
only sync the openstack.common.report
parts, and not the any changes from the rest of openstack.common.  My
process has been

1. Edit openstack-common.conf to only contain the packages I want
2. Run the update
3. Make sure there wasn't code that didn't get changed from
'openstack.common.xyz' to 'nova.openstack.common.xyz' (hint: this
happens sometimes)
4. git checkout openstack-common.conf to revert the changes to
openstack-common.conf

IMHO, update.py needs a bit of work (well, I think the whole code
copying thing needs a bit of work, but that's a different story).

Best Regards,
Solly Ross

- Original Message -
From: Jay S Bryant jsbry...@us.ibm.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Friday, March 14, 2014 3:36:49 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py




From: Brant Knudson b...@acm.org
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date: 03/14/2014 02:21 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py







On Fri, Mar 14, 2014 at 2:05 PM, Jay S Bryant  jsbry...@us.ibm.com 
wrote:

It would be great if we could get the process for this automated. In the
meantime, those of us doing the syncs will just have to slog through the
process.

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


What's the process? How do I generate the list of changes?

Brant


Brant,

My process thus far has been the following:


1. Do the sync to see what files are changed.
2. Take a look at the last commit sync'd versus what is currently in
   master for a file.
3. Document all the commits that have come in on that file since.
4. Repeat the process for all the relevant files if there is more than
   one.
5. If there are multiple files, I organize the commits with a list of
   the files touched by that commit.
6. Document the master level of Oslo when the sync was done for
   reference.

Process may not be perfect, but it gets the job done. Here is an
example of the format I use: https://review.openstack.org/#/c/75740/

Jay
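
Steps 2 and 3 above can be scripted with git's path-limited log; a rough self-contained sketch (module path, commit messages, and repo layout are invented for the demo):

```python
# Build a throwaway repo, then ask git for every commit that touched a
# module since the last synced commit -- the core of steps 2 and 3.
import os
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command in `repo` and return its stdout as text."""
    return subprocess.check_output(("git", "-C", repo) + args).decode()

repo = tempfile.mkdtemp()
git(repo, "init")
git(repo, "-c", "user.email=demo@example.com", "-c", "user.name=demo",
    "commit", "--allow-empty", "-m", "last synced commit")
last_sync = git(repo, "rev-parse", "HEAD").strip()

# Simulate an incubator change landing after the last sync.
os.makedirs(os.path.join(repo, "openstack", "common"))
with open(os.path.join(repo, "openstack", "common", "log.py"), "w") as f:
    f.write("# v2\n")
git(repo, "add", "-A")
git(repo, "-c", "user.email=demo@example.com", "-c", "user.name=demo",
    "commit", "-m", "Fix log module")

# One line per commit on the file since the last sync.
lines = git(repo, "log", "--oneline",
            "%s..HEAD" % last_sync, "--",
            "openstack/common/log.py").splitlines()
print(lines)
```

In a real sync you would run the equivalent `git log --oneline <last-sync>..HEAD -- <file>` inside your oslo-incubator checkout for each changed file.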



Re: [openstack-dev] Development Issue: Cloud System Security Auditing

2014-03-17 Thread Caleb Groom
Hello Ahsan,

I’d encourage you to check out https://wiki.openstack.org/wiki/Satori. We’re 
actively working on a configuration discovery engine that is capable of 
connecting to remote machines and retrieving information about them. The 
security audit use case is very interesting to us.

Caleb

On March 16, 2014 at 1:56:23 AM, Ahsan Habib 
(ahabi...@gmail.com(mailto:ahabi...@gmail.com)) wrote:

 To whom it may concern,
  
 We are planning to work on the Cloud System for our undergraduate thesis. Our 
 main concern is Cloud System Security, where we want to produce an auditing 
 tool that can scan a cloud system and test its vulnerabilities. In this 
 regard we need some feedback about the progress of work in this field. 
 We look forward to any help or suggestions from your side, as our ultimate 
 goal is to contribute to the community.  
  
 Thank You,  
 Ahsan Habib
 Zunayeed Bin Zahir
 North South University
 Dhaka
 Bangladesh
  


Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Eugene Nikanorov
 Seems awkward to me, if an IPSec connection has a status of ACTIVE, but
an admin state of ADMIN DOWN.
Right, you see, that's the problem. The constant name 'ACTIVE' makes you expect
that the IPSec connection should work, while it is really a deployment status.

 OK, so the change is merely change ACTIVE into DEPLOYED instead?
We can't just rename the ACTIVE to DEPLOYED, and may be the latter is not
the best name, but yes, that's the intent.

Thanks,
Eugene.
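
To make the distinction being discussed concrete, here is a small self-contained sketch (all attribute and status names are invented, not Neutron API) separating what the admin configured, how far deployment got, and whether the resource is actually up:

```python
from dataclasses import dataclass

@dataclass
class Vip:
    # Three separate notions that the single ACTIVE status conflates:
    admin_state_up: bool = True                   # what the admin asked for
    provisioning_status: str = "PENDING_CREATE"   # DEPLOYED once plugged in
    operating_status: str = "OFFLINE"             # ONLINE only if deployed
                                                  # AND administratively up

    def deploy(self):
        # Deployment succeeding says nothing about admin intent.
        self.provisioning_status = "DEPLOYED"
        self.refresh()

    def refresh(self):
        up = self.provisioning_status == "DEPLOYED" and self.admin_state_up
        self.operating_status = "ONLINE" if up else "OFFLINE"

# A VIP configured down still deploys, but never reports itself as up --
# the case the bug report found confusing when both were called ACTIVE.
vip = Vip(admin_state_up=False)
vip.deploy()
print(vip.provisioning_status, vip.operating_status)  # prints: DEPLOYED OFFLINE
```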



On Mon, Mar 17, 2014 at 7:31 PM, Kyle Mestery mest...@noironetworks.comwrote:

 On Mon, Mar 17, 2014 at 8:36 AM, Eugene Nikanorov enikano...@mirantis.com
  wrote:

 Hi Kyle,






 It's a typical use case for network devices to have both admin and
 operational
 state. In the case of having admin_state=DOWN and
 operational_state=ACTIVE,
 this just means the port/link is active but has been configured down.
 Isn't this
 the same for LBaaS here? Even reading the bug, the user has clearly
 configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to
 this
 configuration that the pool remains admin_state=DOWN.

 Am I missing something here?

  No, you're not. The user sees the 'ACTIVE' status and thinks it contradicts
  the 'DOWN' admin_state.
 It's naming (UX problem), in my opinion.

 OK, so the change is merely change ACTIVE into DEPLOYED instead?


 Thanks,
 Eugene.




[openstack-dev] [infra] Meeting Tuesday March 18th at 19:00 UTC

2014-03-17 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday March 18th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] Travel Support Program validation date vs. early-bird deadline

2014-03-17 Thread Stefano Maffulli
Hi Sylvain,

On 03/08/2014 09:34 AM, Sylvain Bauza wrote:
 As I only requested for hotel+travel support, I was supposed to use my
 ATC badge for the registration, *but* it is clearly explained that the
 coupon code is only valid until March 21...

Thanks for bringing it up, indeed I think we can optimize the deadlines
next time. In any case, at the foundation we can 'fix' this sort of
stuff very easily.

Feel free to register using your ATC code if you want, or we'll issue
you a new one in case you get accepted for the Travel Program.

Cheers,
Stef


-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [nova] OPENSTACK SERVICE ERROR

2014-03-17 Thread Ben Nemec
 

First, please don't use all caps in your subject. Second, please do use
tags to indicate which projects your message relates to. In this case,
that appears to be nova. 

As far as the error, it looks like you may have some out of date code.
The line referenced in the backtrace is now line 1049 in api.py, not
953. That suggests to me that there have been some pretty significant
changes since whatever git revision you're currently using. 

-Ben 
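
For context, this class of UnboundLocalError usually reduces to the following shape (illustrative only, not the actual nova code):

```python
# A variable assigned only inside a loop/branch is read unconditionally
# afterwards; when the loop body never runs, reading it raises
# UnboundLocalError -- the same error class as in the pasted traceback.
def build_network_info(networks):
    for net in networks:
        network_name = net["name"]   # never executes for an empty list
    return {"label": network_name}   # UnboundLocalError when it was skipped

try:
    build_network_info([])
except UnboundLocalError as exc:
    print("caught:", exc)
```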

On 2014-03-15 08:15, abhishek jain wrote: 

 Hi all
 
 I have installed openstack using devstack and nearly all the functionality is 
 working fine.
 However I'm getting an issue during live migration.
 I'm creating a stack of one controller node and two compute nodes, i.e. compute 
 node 1 and compute node 2. I'm booting one VM on compute node 1 and need to 
 migrate it to compute node 2.
 For this I'm using NFS which is working fine.
 
 Also I have enabled 
 live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
  in /etc/nova.conf over both the compute nodes and over controller node.
 
 However when I run the nova live-migration command after restarting the 
 nova-compute service using a screen session, the VM is not able to migrate.
 
 Below are the logs after restarting nova-compute service ..
 
 16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
 2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp File 
 /opt/stack/nova/nova/network/neutronv2/api.py, line 953, in 
 _nw_info_build_network
 2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
 label=network_name,
 2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
 2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
 UnboundLocalError: local variable 'network_name' referenced before assignment
 2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
 2014-03-15 10:16:33.500 2599 TRACE nova.openstack.common.rpc.amqp 
 
 
 Also find the attached screenshot describing the complete error.
 
 Please help regarding this.
 
 Thanks
 
 Abhishek Jain
 


[openstack-dev] openvswitch mirror for tcpdump

2014-03-17 Thread sowmini . varadhan
I'm following the instructions at
http://docs.openstack.org/trunk/openstack-ops/content/network_troubleshooting.html

to set up patch-tun mirrors, but running tcpdump on the snooper0 interface
produces inconsistent results: sometimes I'm able to get a copy
of the packet (e.g., the SYN packet for ssh below) but not at other times
(the tcpdump session did not show any more packets, for example).

Why is this? 

--Sowmini
tcpdump on snooper0-

root@sowmini-virtual-machine:~/devstack# tcpdump -i snooper0 -xenvv
tcpdump: WARNING: snooper0: no IPv4 address assigned
tcpdump: listening on snooper0, link-type EN10MB (Ethernet), capture size 65535 
bytes
12:15:04.106523 fa:16:3e:1d:eb:b8  fa:16:3e:e2:be:81, ethertype 802.1Q 
(0x8100), length 78: vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 63, id 51325, 
offset 0, flags [DF], proto TCP (6), length 60)
192.168.12.41.48975  10.0.0.4.22: Flags [S], cksum 0xabdb (correct), seq 
3145500194, win 29200, options [mss 1460,sackOK,TS val 64704095 ecr 
0,nop,wscale 7], length 0
0x:  0001 0800 4500 003c c87d 4000 3f06 9c69
0x0010:  c0a8 0c29 0a00 0004 bf4f 0016 bb7c 8622
0x0020:    a002 7210 abdb  0204 05b4
0x0030:  0402 080a 03db 4e5f   0103 0307
12:15:04.109993 fa:16:3e:e2:be:81  ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 
4), Request who-has 10.0.0.1 tell 10.0.0.4, length 28
0x:  0001 0806 0001 0800 0604 0001 fa16 3ee2
0x0010:  be81 0a00 0004    0a00 0001




Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-17 Thread Ryan Petrello
Changing the configuration object at runtime is not thread-safe.  If you want 
to share objects with controllers, I’d suggest checking out Pecan’s hook 
functionality.

http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook

e.g.,

class SpecialContextHook(object):

    def __init__(self, some_obj):
        self.some_obj = some_obj

    def before(self, state):
        # In any pecan controller, `pecan.request` is a thread-local
        # webob.Request instance, allowing you to access
        # `pecan.request.context['foo']` in your controllers.  In this
        # example, self.some_obj could be just about anything - a Python
        # primitive, or an instance of some class.
        state.request.context = {
            'foo': self.some_obj
        }

...

wsgi_app = pecan.Pecan(
    my_package.controllers.root.RootController(),
    hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
)

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Mar 14, 2014, at 8:53 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Take a look at method get_pecan_config() in mistral/api/app.py. It’s where 
 you can pass any parameters into pecan app (see a dictionary ‘cfg_dict’ 
 initialization). They can be then accessed via pecan.conf as described here: 
 http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
  If I understood the problem correctly this should be helpful.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 14 Mar 2014, at 05:14, Dmitri Zimine d...@stackstorm.com wrote:
 
 We have access to all configuration parameters in the context of api.py.
 Maybe you don't pass it but just instantiate it where you need it? Or I may 
 misunderstand what you're trying to do...
 
 DZ 
 
 PS: can you generate and update mistral.config.example to include the new oslo 
 messaging options? I forgot to mention it in the review in time. 
 
 
 On Mar 13, 2014, at 11:15 AM, W Chan m4d.co...@gmail.com wrote:
 
 On the transport variable, the problem I see isn't with passing the 
 variable to the engine and executor.  It's passing the transport into the 
 API layer.  The API layer is a pecan app and I currently don't see a way 
 where the transport variable can be passed to it directly.  I'm looking at 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 
 and 
 https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.  
 Do you have any suggestion?  Thanks. 
 
 
 On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
 
• I can write a method in base test to start local executor.  I will do 
 that as a separate bp.  
 Ok.
 
• After the engine is made standalone, the API will communicate to the 
 engine and the engine to the executor via the oslo.messaging transport.  
 This means that for the local option, we need to start all three 
 components (API, engine, and executor) on the same process.  If the long 
 term goal as you stated above is to use separate launchers for these 
 components, this means that the API launcher needs to duplicate all the 
 logic to launch the engine and the executor. Hence, my proposal here is to 
 move the logic to launch the components into a common module and either 
 have a single generic launch script that launch specific components based 
 on the CLI options or have separate launch scripts that reference the 
 appropriate launch function from the common module.
 Ok, I see your point. Then I would suggest we have one script which we 
 could use to run all the components (any subset of them). So for those 
 components we specified when launching the script we use this local 
 transport. Btw, scheduler eventually should become a standalone component 
 too, so we have 4 components.
 
• The RPC client/server in oslo.messaging do not determine the 
 transport.  The transport is determine via oslo.config and then given 
 explicitly to the RPC client/server.  
 https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
  and 
 https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
  are examples for the client and server respectively.  The in process 
 Queue is instantiated within this transport object from the fake driver.  
 For the local option, all three components need to share the same 
 transport in order to have the Queue in scope. Thus, we will need some 
 method to have this transport object visible to all three components and 
 hence my proposal to use a global variable and a factory method. 
 I’m still not sure I follow your point here.. Looking at the links you 
 provided I see this:
 
 transport = messaging.get_transport(cfg.CONF)
 
 So my point here is we can make this call once in the launching script and 
 pass it to engine/executor (and now API too if we want it to be launched by 
 the same script). Of course, we’ll have to change the way how we initialize 
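
A minimal sketch of the shared-transport idea being discussed (names here are invented; real code would obtain the transport via oslo.messaging's get_transport(cfg.CONF) and hand it to each component):

```python
# Process-wide transport factory: when the API, engine and executor are
# launched inside one process, they all receive the same transport object,
# so the fake driver's in-process queue is visible to every component.
# object() stands in for the real transport construction.
_TRANSPORT = None

def get_transport(create=object):
    """Create the transport once; every later caller gets the same one."""
    global _TRANSPORT
    if _TRANSPORT is None:
        _TRANSPORT = create()
    return _TRANSPORT

# The launch script calls this once per process; components reuse it
# instead of each building their own transport from oslo.config.
api_transport = get_transport()
engine_transport = get_transport()
assert api_transport is engine_transport
```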

Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-17 Thread Kanzhe Jiang
Hi Carlos,

The provider mechanism is currently under discussion in the advanced
services group. However, your use case of chaining non-Neutron services has
not been considered in the proposal. If you believe it is an important
feature, please definitely be vocal, or even better, write a proposal. :-)


3- For the service chain creation, I am sure there are good reasons for
 requiring a specific provider for a given chain of services but wouldn't it
 be possible to have a generic chain provider which would instantiate each
 service in the chain using the required provider for each service (e.g.,
 firewall or loadbalancer service) and with setting the insertion contexts
 for each service such that the chain gets constructed as well? I am sure I
 am ignoring some practical requirements but is it worth rethinking the
 current approach?

 Service Chaining often means a form of traffic steering. Depending on how
 the steering is done, the capabilities of different providers differ.
 Different provider may define different context of individual service in
 the chain. For example, a bump-in-the-wire service can be inserted as a
 virtual wire or L3 next hop. So it will be hard to define a generic chain
 provider.


 I'm partially with Mohammad on this.

  For what I've understood from the service chaining proposal, there would
  be different service chain provider implementations, each one restricted
  to a statically defined and limited set of services for chaining (please
  correct me if I'm mistaken). That is, taking the Firewall-VPN-ref-Chain
  service chain provider from the document as an example, users would be
  limited to creating chains firewall - VPN (and I'm not even considering
  the restrictiveness of service providers) but not VPN - firewall, or
  introducing a LB in the middle.




  My rough understanding of chaining, in a broad sense, would be to first
  support generic L2/L3 chaining, not restricted to Neutron services
  (FWaaS, LBaaS, VPNaaS) if that is the case, but it should also be valid
  for them as they can be reached via network ports as well.

  Last week during the advanced services meeting I presented the following
  use case. DPI (Deep Packet Inspection) is an example of a Neutron service
  that is absent as of now. Users wanting to run a DPI instance in OpenStack
  would have to create a virtual machine and run it there, which is fine.
  Now, in case they want to filter inbound traffic from a (public) network,
  traffic should be steered first to the VM running the DPI and then to the
  final destination. Currently in OpenStack it is not possible to configure
  this, and I don't see how it would be in the proposed BP. The example
  given was a DPI, but it can be virtually any service type and service
  implementation. Sure, users wouldn't get all the fancy APIs OpenStack
  provides to instantiate and configure services.






-- 
Kanzhe Jiang
MTS at BigSwitch


Re: [openstack-dev] [Openstack-docs] [Neutron] Docs for new plugins

2014-03-17 Thread Gauvain Pocentek

Hi,

Le 2014-03-17 16:01, Steve Gordon a écrit :

- Original Message -

Edgar:

I don't see the configuration options for the OpenDaylight ML2
MechanismDriver
added here yet, even though the code was checked in well over a week 
ago.

How long does it take to autogenerate this page from the code?

Thanks!
Kyle


Adding the docs list, I suspect there are two issues here. The first
is that the generation of the config-reference content is initiated
manually and potentially hasn't been done since this was added. The
second is that prior to running it the flag mappings file needs to be
updated [1]; this maps a given configuration option to a group, and is
the main hole in our current automation process for this guide.
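
For reference, each line in that flagmappings file pairs a configuration option with the docs group whose generated table it should land in, roughly like this (entries here are illustrative, not taken from the real file, and the exact syntax may differ):

```
# <option name>           <group for the generated table>
ml2_odl/url               ml2_odl
ml2_odl/username          ml2_odl
ml2_odl/password          ml2_odl
```

An option missing from this file is a common reason a new driver's table never shows up in the config reference.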

Edgar/Kyle can you check whether the new options are listed in there
and we'll arrange to get this re-generated for rc1?


FYI I've just submitted an updated flagmappings (and generated tables) 
for review: https://review.openstack.org/#/c/81013/


Let me know if the options are there and correctly categorized.

Thanks,

Gauvain



[1]
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/tools/autogenerate-config-flagmappings/neutron.flagmappings

On Wed, Mar 12, 2014 at 5:10 PM, Edgar Magana emag...@plumgrid.com 
wrote:



You should be able to add your plugin here:

http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html

Thanks,

Edgar

From: Mohammad Banikazemi m...@us.ibm.com
Date: Monday, March 10, 2014 2:40 PM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Edgar Magana emag...@plumgrid.com
Subject: Re: [openstack-dev] [Neutron] Docs for new plugins

Would like to know what to do for adding documentation for a new 
plugin.

Can someone point me to the right place/process please.

Thanks,

Mohammad



[openstack-dev] [mistral] Community meeting minutes - 03/17/2014

2014-03-17 Thread Renat Akhmerov
Hi,

Thanks for joining our meeting today at #openstack-meeting.

As usually,

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-17-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-17-16.00.log.html

Join us next time on March 24 at the same time.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] does exception need localize or not?

2014-03-17 Thread Jay S Bryant
Doug,

I am glad that this has come up as Mike Perez and I were talking about 
this on Friday and even the documentation that you point to here:  
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation is not 
clear.  At least, not to me.

Should I interpret that to mean that debug messages should be created 
using LOG.debug and not use _() on the message being sent?  If that is the 
case, there are many places where LOG.debug is using _() on the message 
being passed.  Should we be planning to remove those?  I just want to 
understand what the plan going forward is there, especially given that the 
documentation currently says:  Debug messages are not translated, for 
now. 

With regards to _LI(), etc.  It appears that this should be used in place 
of LOG.info(_("Message")).  Should we be enforcing a move from LOG.* with 
_() to the _L* markers in new code that is coming in?

Appreciate your thoughts on the subject!


Jay S. Bryant
   IBM Cinder Subject Matter ExpertCinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Doug Hellmann doug.hellm...@dreamhost.com
To: Joshua Harlow harlo...@yahoo-inc.com, 
Cc: OpenStack Development Mailing List \(not for usage questions\) 
openstack-dev@lists.openstack.org
Date:   03/14/2014 03:54 PM
Subject:Re: [openstack-dev] does exception need localize or not?






On Thu, Mar 13, 2014 at 6:44 PM, Joshua Harlow harlo...@yahoo-inc.com 
wrote:
From: Doug Hellmann doug.hellm...@dreamhost.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, March 13, 2014 at 12:44 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] does exception need localize or not?




On Thu, Feb 27, 2014 at 3:45 AM, yongli he yongli...@intel.com wrote:
refer to :
https://wiki.openstack.org/wiki/Translations

Now some exceptions use _() and some do not.  The wiki suggests not doing 
that, but I'm not sure.

what's the correct way?


F.Y.I 

What To Translate
At present the convention is to translate all user-facing strings. This 
means API messages, CLI responses, documentation, help text, etc.
There has been a lack of consensus about the translation of log messages; 
the current ruling is that while it is not against policy to mark log 
messages for translation if your project feels strongly about it, 
translating log messages is not actively encouraged.

I've updated the wiki to replace that paragraph with a pointer to 
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which 
explains the log translation rules. We will be adding the job needed to 
have different log translations during Juno.

 
Exception text should not be marked for translation, because if an 
exception occurs there is no guarantee that the translation machinery will 
be functional.

This makes no sense to me. Exceptions should be translated. By far the 
largest number of errors will be presented to users through the API or 
through Horizon (which gets them from the API). We will ensure that the 
translation code does its best to fall back to the original string if the 
translation fails.

Doug

 


Regards
Yongli He





I think this question comes up every 3 months, haha ;)

As we continue to expand all the libraries in 
https://github.com/openstack/requirements/blob/master/global-requirements.txt
 and knowing that those libraries likely don't translate their exceptions 
(probably in the majority of cases, especially in non-openstack/oslo 3rd 
party libraries), are we chasing an ideal that cannot be caught?

Food for thought,

We can't control what the other projects do, but that doesn't prevent us 
from doing more. 

Doug

 

-Josh


Re: [openstack-dev] devstack: Unable to restart rabbitmq-server

2014-03-17 Thread Solly Ross
I've also had devstack somehow end up starting qpid, then try to start rabbit 
on F20.  In this case
it seems sufficient to stop qpid then re-run devstack.  I haven't had time to 
track down the issue yet.

Best Regards,
Solly Ross

- Original Message -
From: John Eckersberg jecke...@redhat.com
To: Deepak C Shetty deepa...@redhat.com, openstack-dev@lists.openstack.org
Sent: Monday, March 17, 2014 9:19:03 AM
Subject: Re: [openstack-dev] devstack: Unable to restart rabbitmq-server

Deepak C Shetty deepa...@redhat.com writes:
 Hi List,
  It's been a few hours and I tried everything from ensuring /etc/hosts, 
 /etc/hostname etc (per google results) and rabbitmq-server still doesn't 
 start. I am using latest devstack as of today on F20

There are a couple of known bugs that can prevent rabbitmq-server from
starting on F20.

First one (same bug, two BZs) is related to SELinux and port probing:
https://bugzilla.redhat.com/show_bug.cgi?id=1032595#c8
https://bugzilla.redhat.com/show_bug.cgi?id=998682

Second one is a race condition in Erlang.  If you are repeatedly unable
to start rabbitmq-server, it's probably not this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1059913

I have a patched rabbitmq-server package which includes the fixes for
these two issues, if you'd like to try it and see if it helps your
issue.  And if it helps, please comment on the bug(s) to encourage the
maintainer to pull them into the package :)

http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm

Hope that helps,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Mistral] Actions design BP

2014-03-17 Thread Clint Byrum
Excerpts from Renat Akhmerov's message of 2014-03-17 07:35:02 -0700:
 
 On 16 Mar 2014, at 19:05, Clint Byrum cl...@fewbar.com wrote:
 
  From my perspective, as somebody who is considering Mistral for some
  important things, the fact that the taskflow people are not aligned with
  the Mistral people tells me that Mistral is not what I thought it was:
  taskflow for direct user consumption.
 
 Yes, it was just an initial idea we were trying to pursue. As we moved 
 forward we understood it was too limited and had a lot of pitfalls. The 
 reason is the key difference between library and service. Making something a 
 service flips everything upside down in terms of requirements to this new 
 system. The logical questions/implications are:
 - If that’s a service why should its REST API be Python oriented? Bindings - 
 yes (it gives a certain convenience) but that’s a different story..
 - A whole set of questions related to how we distribute Python-written tasks 
 (especially in multi tenant environment, e.g. as a cloud hosted service):
   - Dependencies
   - Sandboxing
   - Result serialisation
   - etc.
 - It becomes logical to be able to use it with non-python clients and external 
 systems.
 - Orientation to long-lived processes (workflows). Synchronous execution model 
 no longer works well, instead de facto asynchronous event-based architecture 
 fits better. I know it may sound too generic, it’s a whole different topic..
 - It becomes logical to provide more high-level capabilities rather than a 
 library does (e.g. Horizon Dashboard. It would not be much of use if the 
 service was based on Python).
 

I assume Mistral is written in Python though, and so it should be using
Taskflow for its own workflow. I understand though, that to run _user_
workflows you can't just expect them to upload python or always run
python as Mistral wouldn't do much for them at that point.

However, for the User-Taskflow integration, I think Josh Harlow offered
a really reasonable suggestion to that, which is instead of inventing
a DSL (frankly, I'm pretty frustrated with the Heat DSL's we have, so I
may be biased here), just embrace javascript or lua, which are designed
to be embedded and executed in a less-than-trusted context.

I think it would be a great story for users if Mistral worked like this:

- Upload javascript expression of workflow, with external callouts for
  complicated things.
- Run code that uses Mistral API to poll-for or subscribe-to
  notifications waiting for instructions from Mistral when it is
  supposed to run said external callouts, and feeds back data.

 I believe it’s not the full list.
 
 Despite all this I’m not against using TaskFlow in the implementation, at 
 least some pieces of it. My only call is: it should be really beneficial 
 rather than bringing pain. So I’m ok if we keep aligning our capabilities, 
 roadmaps, terminology and whatever else is needed. 
 
 Btw, it would be really helpful if you could clarify what you meant by “some 
 important things”. Asking because I still feel like we could have much more 
 input from potential users. Any information is appreciated.
 

- Heat needs to move from being a single state machine (heat-engine owns
  all of the running tasks for a stack in one engine at a time) to a
  distributed state machine. Before we do that, we need to consider how
  Heat expresses workflow. If Mistral were a distributed workflow
  engine, it would make a lot of sense for Heat to make use of it for
  this purpose.

- TripleO deploys a number of things that need distributed workflow.
  I think at this point the people involved with that are looking more
  at lower level tools like RAFT and Concoord. But once the distributed
  state machine is settled, there will be a need to express the
  distributed workflow. I'm disinclined to diverge from Taskflow, even
  though I am quite inclined to embrace API's.

  So, while it may be annoying to have your day to day project
  activities questioned, it is pretty core to the discussion considering
  that this suggests several things that diverge from taskflow's core
  model.
 
 Np. I still keep finding things/rules in OpenStack that I’m not really aware 
 of or didn’t get used to yet. If that’s not how it should be done in OS then 
 it’s fine.

I'm sorry if the message was harsh, but I see this happening a lot.

I don't think it is a rule. I think the principle here is to collaborate
on things that should be aligned. If Taskflow doesn't do what you need
it to do, then I suggest _fixing that_ rather than writing a private
implementation. It will make life better for all users of taskflow and
it will keep Mistral extremely simple, which will help with adoption by
operators _and_ users.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread John Garbutt
On 15 March 2014 18:39, Chris Friesen chris.frie...@windriver.com wrote:
 Hi,

 I'm curious why the specified git commit chose to fix the anti-affinity race
 condition by aborting the boot and triggering a reschedule.

 It seems to me that it would have been more elegant for the scheduler to do
 a database transaction that would atomically check that the chosen host was
 not already part of the group, and then add the instance (with the chosen
 host) to the group.  If the check fails then the scheduler could update the
 group_hosts list and reschedule.  This would prevent the race condition in
 the first place rather than detecting it later and trying to work around it.

 This would require setting the host field in the instance at the time of
 scheduling rather than the time of instance creation, but that seems like it
 should work okay.  Maybe I'm missing something though...

We deal with memory races in the same way as this today, when they
race against the scheduler.

Given the scheduler split, writing that value into the nova db from
the scheduler would be a step backwards, and it probably breaks lots
of code that assumes the host is not set until much later.
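
The atomic check-and-insert Chris describes can be sketched with a conditional INSERT inside a single transaction. This is a toy in-memory schema, not Nova's actual tables, and a real MySQL deployment would additionally need appropriate locking or isolation to make the pattern race-free:

```python
import sqlite3

# Toy schema standing in for a server-group membership table; Nova's real
# schema and code paths differ, this only illustrates the atomic pattern.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE group_members (group_id TEXT, host TEXT, instance TEXT)")

def claim_host(group_id, host, instance):
    """Atomically add instance on host to group unless the host is taken.

    Returns True if the claim succeeded, False if another instance in the
    same anti-affinity group already landed on that host.
    """
    with db:  # one transaction: the check and insert cannot be interleaved
        cur = db.execute(
            "INSERT INTO group_members (group_id, host, instance) "
            "SELECT ?, ?, ? WHERE NOT EXISTS ("
            "  SELECT 1 FROM group_members WHERE group_id = ? AND host = ?)",
            (group_id, host, instance, group_id, host))
        return cur.rowcount == 1

print(claim_host("g1", "compute1", "vm-a"))  # True: first claim wins
print(claim_host("g1", "compute1", "vm-b"))  # False: host already used
print(claim_host("g1", "compute2", "vm-b"))  # True: different host is fine
```

The loser of the race finds out immediately and can reschedule, instead of booting and aborting later.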

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [Infra] pep8 issues in tempest gate / testscenarios lib

2014-03-17 Thread Joe Gordon
On Thu, Mar 13, 2014 at 8:14 AM, Koderer, Marc m.kode...@telekom.de wrote:

 Hi folks,

 I can't make it to the QA meeting for today so I wanted to summarize the
 issue
 that we have with the pep8 and tempest gate. An example for the issue you
 can
 find here:
   https://review.openstack.org/#/c/79256/

 http://logs.openstack.org/56/79256/1/gate/gate-tempest-pep8/088cc12/console.html

 pep8 check shows an error but the check itself is marked as success.

 For me this show two issues. First flake8 should return with an exit code
 !=0.
 I will have a closer look into hacking and what went wrong here.


This was an intentional compromise.  At the time of doing that we didn't
have consensus that any module should be importable without a stacktrace.

If we want to make that a rule, something which I am favor of, we can just
add a new hacking rule.
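
A rule of that kind ("every module must be importable without a stacktrace") is straightforward to mechanize; here is a stdlib-only sketch, not the actual hacking implementation:

```python
import importlib
import pkgutil

def find_unimportable(package_name):
    """Try to import every submodule of a package; return the failures.

    This is the kind of check a hacking rule could automate: any module
    that raises at import time (e.g. config parsed at import) gets flagged.
    """
    pkg = importlib.import_module(package_name)
    failures = []
    for info in pkgutil.walk_packages(pkg.__path__, prefix=package_name + "."):
        try:
            importlib.import_module(info.name)
        except Exception as exc:  # import-time side effects blow up here
            failures.append((info.name, exc))
    return failures

# A stdlib package imports cleanly, so no failures are reported.
print(find_unimportable("email"))  # []
```

Run against tempest, any module whose import depends on a populated config would show up in the failure list.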




 Second issue is the current implementation of the negative testing
 framework:
 we are using the testscenarios lib with the load_tests variable
 interpreted
 by the test runner. This forces us to build the scenario at import time
 and if
 we want to have tempest configurations for this (like introduced in
 https://review.openstack.org/#/c/73982/) the laziness for the config
 doesn't
 work.

 Although it seems like if I remove the inheritance of the xml class to the
 json class (
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_flavors_negative_xml.py#L24
 )
 that error doesn't appear any longer, I see a general problem with
 the usage of import-time code and we may think about a better solution
 in general.

 I'll try to address the missing pieces tomorrow.
 Bug: https://bugs.launchpad.net/tempest/+bug/1291826

 Regards,
 Marc

 DEUTSCHE TELEKOM AG
 Digital Business Unit, Cloud Services (PI)
 Marc Koderer
 Cloud Technology Software Developer
 T-Online-Allee 1, 64211 Darmstadt
 E-Mail: m.kode...@telekom.de
 www.telekom.com

 LIFE IS FOR SHARING.

 DEUTSCHE TELEKOM AG
 Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
 Board of Management: René Obermann (Chairman),
 Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges,
 Dr. Thomas Kremer, Claudia Nemat, Prof. Dr. Marion Schick
 Commercial register: Amtsgericht Bonn HRB 6794
 Registered office: Bonn

 BIG CHANGES START SMALL - CONSERVE RESOURCES BY NOT PRINTING EVERY E-MAIL.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-17 Thread Vishvananda Ishaya

On Mar 17, 2014, at 4:34 AM, Yuzhou (C) vitas.yuz...@huawei.com wrote:

 Hi Duncan Thomas,
 
   Maybe the statement about approval process is not very exact. In fact 
 in my mail, I mean:
  In an enterprise private cloud, if you want to create a new VM beyond the 
 quota, that needs to wait for an approval process.
 
 
 @stackers,
 
 I think the following two use cases show why non-persistent disk is useful:
 
 1.Non-persistent VDI: 
   When users access a non-persistent desktop, none of their settings or 
 data is saved once they log out. At the end of a session, 
   the desktop reverts back to its original state and the user receives a 
 fresh image the next time he logs in.
   1). Image manageability, Since non-persistent desktops are built from a 
 master image, it's easier for administrators to patch and update the image, 
 back it up quickly and deploy company-wide applications to all end users.
   2). Greater security, Users can't alter desktop settings or install 
 their own applications, making the image more secure.
   3). Less storage.
 
 2.As the use case mentioned several days ago by zhangleiqiang:
 
   Let's take a virtual machine which hosts a web service, but it is 
 primarily a read-only web site with content that rarely changes. This VM has 
 three disks. Disk 1 contains the Guest OS and web application (e.g. 
 Apache). Disk 2 contains the web pages for the web site. Disk 3 contains all 
 the logging activity.
 In this case, disk 1 (OS & app) has dependent (default) settings and 
 is backed up nightly. Disk 2 is independent non-persistent (not backed up, 
 and any changes to these pages will be discarded). Disk 3 is  independent 
 persistent (not backed up, but any changes are persisted to the disk).
 If updates are needed to the web site's pages, disk 2 must be taken 
 out of independent non-persistent mode temporarily to allow the changes to be 
 made.
 Now let's say that this site gets hacked, and the pages are doctored 
 with something which is not very nice. A simple reboot of this host will 
 discard the changes made to the web pages on disk 2, but will persist the 
 logs on disk 3 so that a root cause analysis can be carried out.
 
 Hope to get more suggestions about non-persistent disk!


Making the disk rollback on reboot seems like an unexpected side-effect we 
should avoid. Rolling back the system to a known state is a useful feature, but 
this should be an explicit api command, not a side-effect of rebooting the 
machine, IMHO.

Vish

 
 Thanks.
 
 Zhou Yu
 
 
 
 
 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Saturday, March 15, 2014 12:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
 stopping VM, data will be rollback automatically), do you think we shoud
 introduce this feature?
 
 On 7 March 2014 08:17, Yuzhou (C) vitas.yuz...@huawei.com wrote:
First, generally, in public or private cloud, the end users of VMs
 have no right to create new VMs directly.
 If someone wants to create new VMs, he or she needs to wait for an approval
 process.
 Then, the administrator of the cloud creates a new VM for the applicant. So the
 workflow that you suggested is not convenient.
 
 This approval process & admin action is the exact opposite of what cloud is
 all about. I'd suggest that anybody using such a process has little
 understanding of cloud and should be educated, not weird interfaces added
 to nova to support a broken premise. The cloud /is different/ from
 traditional IT, that is its strength, and we should be wary of undermining 
 that
 to allow old-style thinking to continue.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Automatic version creation in PBR

2014-03-17 Thread Jay Pipes
On Mon, 2014-03-17 at 16:10 +1300, Robert Collins wrote:
 Right now PBR's creation of versions for postversioning is problematic
 - it generates versions that (if recognized by pip) would be treated
 as releases, even when it's a non-tagged commit.
 
 https://etherpad.openstack.org/p/pbr-postversion-semver
 
 The tl;dr is a proposal to generate dev marked versions of the lowest
 possible higher tag that we would accept - which would be any of full
 release or alpha/beta/rc
 
 A related but can be done separately change is to pick version strings
 for alpha releases that are compatible with both PEP 440 and semver.
 
 Feedback solicited - if this is something contentious, we can make it
 an opt-in feature, but it seems unambiguously better to the folk that
 chatted through it on #openstack-infra, so ideally I'd like to
 transition any existing incompatible tags we have, and then land code
 to make this the behaviour for post-versioned (the default - no
 'version' key in setup.cfg) untagged commits.

Hi Rob, thanks for the heads up.

A number of us use pbr for outside-of-OpenStack projects, and have
relied on it for things like proper package versioning using git tags.

I'm a little unclear what, if any, changes to my standard python sdist
and upload actions I will need to take to publish releases of my
projects that use pbr.

Would you mind easing my mind and letting me know if this is something
that is going to break things for me? I'm no packaging expert, and rely
on things like pbr to do a lot of this magic for me :)
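
As a toy illustration of the proposal (my reading of the etherpad; the actual pbr behaviour may differ): an untagged commit after tag X.Y.Z would get a dev-marked version of the lowest release that could legitimately follow the last tag:

```python
# Toy illustration of the proposed scheme: an untagged commit after
# release X.Y.Z gets a version like "X.Y.(Z+1).devN" -- a dev marker on
# the lowest release that could legitimately follow the last tag.  The
# real pbr logic is more involved (alpha/beta/rc tags, PEP 440 details);
# this only sketches the idea.

def dev_version(last_tag, commits_since_tag):
    major, minor, patch = (int(p) for p in last_tag.split("."))
    if commits_since_tag == 0:
        return last_tag  # exactly on a tag: this *is* the release
    return "%d.%d.%d.dev%d" % (major, minor, patch + 1, commits_since_tag)

print(dev_version("1.2.3", 0))  # 1.2.3
print(dev_version("1.2.3", 4))  # 1.2.4.dev4
```

The point is that pip would never mistake `1.2.4.dev4` for a release, which is the problem with the current post-versioned strings.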

All the best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-17 Thread Joshua Harlow
Since the uploading of JS is an interesting concept, let me just share a little 
about another service at Y! that has done this.

http://developer.yahoo.com/yql/guide/yql-execute-chapter.html

YQL (although closed-source) has exposed this ability, so it's entirely possible 
to do (although it likely won't be running in python). YQL itself is slightly 
similar to a DSL also (its DSL is similar to SQL). The nice thing about using 
something like JS (to continue along this idea, even if nobody wants to 
actually implement this) is that things like 
rhino (http://en.wikipedia.org/wiki/Rhino_%28JavaScript_engine%29) do provide 
execution limits (at the instruction level). Of course this would potentially 
bring in java (I am guessing node.js can do something similar as rhino). 
Anyways…

To further this lets continue working on 
https://etherpad.openstack.org/p/taskflow-mistral and see if we can align 
somehow (I hope it's not to late to do this, seeing that there appears to be a 
lot of resistance from the mistral community to change). But I agree with 
clint, and hope that we can have a healthy collaboration as a community instead 
of being in competing silos (which is not healthy).

-Josh

From: Clint Byrum cl...@fewbar.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 17, 2014 at 9:41 AM
To: openstack-dev 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Actions design BP

Excerpts from Renat Akhmerov's message of 2014-03-17 07:35:02 -0700:
On 16 Mar 2014, at 19:05, Clint Byrum 
cl...@fewbar.com wrote:
 From my perspective, as somebody who is considering Mistral for some
 important things, the fact that the taskflow people are not aligned with
 the Mistral people tells me that Mistral is not what I thought it was:
 taskflow for direct user consumption.
Yes, it was just an initial idea we were trying to pursue. As we moved forward 
we understood it was too limited and had a lot of pitfalls. The reason is the 
key difference between library and service. Making something a service flips 
everything upside down in terms of requirements to this new system. The logical 
questions/implications are:
- If that’s a service why should its REST API be Python oriented? Bindings - yes 
(it gives a certain convenience) but that’s a different story..
- A whole set of questions related to how we distribute Python-written tasks 
(especially in multi tenant environment, e.g. as a cloud hosted service):
  - Dependencies
  - Sandboxing
  - Result serialisation
  - etc.
- It becomes logical to be able to use it with non-python clients and external 
systems.
- Orientation to long-lived processes (workflows). Synchronous execution model no 
longer works well, instead de facto asynchronous event-based architecture fits 
better. I know it may sound too generic, it’s a whole different topic..
- It becomes logical to provide more high-level capabilities rather than a 
library does (e.g. Horizon Dashboard. It would not be much of use if the 
service was based on Python).

I assume Mistral is written in Python though, and so it should be using
Taskflow for its own workflow. I understand though, that to run _user_
workflows you can't just expect them to upload python or always run
python as Mistral wouldn't do much for them at that point.

However, for the User-Taskflow integration, I think Josh Harlow offered
a really reasonable suggestion to that, which is instead of inventing
a DSL (frankly, I'm pretty frustrated with the Heat DSL's we have, so I
may be biased here), just embrace javascript or lua, which are designed
to be embedded and executed in a less-than-trusted context.

I think it would be a great story for users if Mistral worked like this:

- Upload javascript expression of workflow, with external callouts for
  complicated things.
- Run code that uses Mistral API to poll-for or subscribe-to
  notifications waiting for instructions from Mistral when it is
  supposed to run said external callouts, and feeds back data.

I believe it’s not the full list.
Despite all this I’m not against using TaskFlow in the implementation, at 
least some pieces of it. My only call is: it should be really beneficial rather 
than bringing pain. So I’m ok if we keep aligning our capabilities, roadmaps, 
terminology and whatever else is needed.
Btw, it would be really helpful if you could clarify what you meant by “some 
important things”. Asking because I still feel like we could have much more 
input from potential users. Any information is appreciated.

- Heat needs to move from being a single state machine (heat-engine owns
  all of the running tasks for a stack in one engine at a time) to a
  distributed state machine. Before we do that, we need to consider how
  Heat expresses workflow. If Mistral were a distributed workflow
  engine, it would 

Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-17 Thread Joshua Harlow
Thanks, lets keep the collaboration going, updated it with some more 
questions/ideas, thoughts :)

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 17, 2014 at 5:37 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

Left my comments in https://etherpad.openstack.org/p/taskflow-mistral.

@Changbin, I think the most interesting section for you is “What’s Different”. 
Thanks. Hope this helps. If it doesn’t then let us know your specific questions.

@Joshua, thanks for your input on architecture. At a high-level it makes sense. 
We need to keep discussing it and switch to details. For that reason, like I 
said before, we want to create a very very simple taskflow based prototype (in 
progress). Then we’ll have a chance to think how to evolve TaskFlow properly so 
that it fits Mistral needs.

Renat Akhmerov
@ Mirantis Inc.

On 15 Mar 2014, at 00:31, Joshua Harlow 
harlo...@yahoo-inc.com wrote:

Sure, I can try to help,

I started https://etherpad.openstack.org/p/taskflow-mistral so that we can all 
work on this.

Although I'd rather not make architecture for mistral (cause that doesn't seem 
like an appropriate thing to do, for me to tell mistral what to do with its 
architecture), but I'm all for working on it together as a community (instead 
of me producing something that likely won't have much value).

Let us work on the above etherpad together and hopefully get some good ideas 
flowing :-)

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, March 14, 2014 at 12:11 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

Joshua,

why wait? Why not just help Renat with his research on that integration and 
bring your own vision to the table? Write some 1-page architecture description 
on how Mistral can be built on top of TaskFlow and we discuss pros and cons. It 
would be much more productive.


On Fri, Mar 14, 2014 at 11:35 AM, Joshua Harlow 
harlo...@yahoo-inc.com wrote:
Thanks Renat,

I'll keep waiting, and hoping that we can figure this out for everyone's 
benefit. Because in the end we are all much stronger working together and much 
weaker when not.

Sent from my really tiny device...

On Mar 13, 2014, at 11:41 PM, Renat Akhmerov 
rakhme...@mirantis.com wrote:

Folks,

Mistral and TaskFlow are significantly different technologies, with different 
sets of capabilities and different target audiences.

We may not be doing enough to clarify all the differences, I admit that. The 
challenge here is that people tend to judge having minimal amount of 
information about both things. As always, devil in the details. Stan is 100% 
right, “seems” is not an appropriate word here. Java seems to be similar to C++ 
at the first glance for those who have little or no knowledge about them.

To be more consistent I won’t be providing all the general considerations that 
I’ve been using so far (in etherpads, MLs, in personal discussions), it doesn’t 
seem to be working well, at least not with everyone. So to make it better, like 
I said in that different thread: we’re evaluating TaskFlow now and will share 
the results. Basically, it’s what Boris said about what could and could not be 
implemented in TaskFlow. But since the very beginning of the project I never 
abandoned the idea of using TaskFlow some day when it’s possible.

So, again: Joshua, we hear you, we’re working in that direction.


I'm reminded of
http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
 where it seemed like we were doing much better collaboration, what has
happened to break this continuity?

Not sure why you think something is broken. We just want to finish the pilot 
with all the ‘must’ things working in it. This is a plan. Then we can revisit 
and change absolutely everything. Remember, to the great extent this is 
research. Joshua, this is what we talked about and agreed on many times. I know 
you might be anxious about that given the fact it’s taking more time than 
planned but our vision of the project has drastically evolved and gone far far 
beyond the initial Convection proposal. So the initial idea of POC is no longer 
relevant. Even though we finished the first version 

Re: [openstack-dev] MuranoPL questions?

2014-03-17 Thread Joshua Harlow
So I guess this is similar to the other thread.

http://lists.openstack.org/pipermail/openstack-dev/2014-March/030185.html

I know that the way YQL has provided it could be a good example; where the core 
DSL (the select queries and such) are augmented by the addition and usage of 
JS, for example 
http://developer.yahoo.com/yql/guide/yql-execute-examples.html#yql-execute-example-helloworld
 (ignore that it's XML, haha). Such usage already provides rate-limits and 
execution-limits 
(http://developer.yahoo.com/yql/guide/yql-execute-intro-ratelimits.html) and 
afaik if something like what YQL is doing is adopted then u don't need to recreate similar 
features in your DSL (and then u also don't need to teach people about a new 
language and syntax and …)

Just an idea (I believe lua offers similar controls/limits.., although its not 
as popular of course as JS).

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 17, 2014 at 3:59 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

Joshua,

Completely agree with you. We wouldn't be writing another language if we knew 
how any of existing languages can be used for this particular purpose. If 
anyone suggest such language and show how it can be used to solve those issues 
DSL was designed to solve we will consider dropping MuranoPL. np

Surely DSL hasn't stood the test of time. It just hasn't had a chance yet. 100% 
of successful programming languages were in such a position once.

Anyway it is the best time to come forward with your suggestions. If you know how 
exactly DSL can be replaced or improved we would like you to share


On Wed, Mar 12, 2014 at 2:05 AM, Joshua Harlow 
harlo...@yahoo-inc.com wrote:
I guess I might be a bit biased to programming; so maybe I'm not the target 
audience.

I'm not exactly against DSL's, I just think that DSL's need to be really really 
proven to become useful (in general this applies to any language that 'joe' 
comp-sci student can create). Its not that hard to just make one, but the real 
hard part is making one that people actually like and use and survives the test 
of time. That’s why I think its just nicer to use languages that have stood the 
test of time already (if we can), creating a new DSL (muranoPL seems to be 
slightly more than a DSL imho) means creating a new language that has not stood 
the test of time (in terms of lifetime, battle tested, supported over years) so 
that’s just the concern I have.

Of course we have to accept innovation and I hope that the DSL/s makes it 
easier/simpler, I just tend to be a bit more pragmatic maybe in this area.

Here's hoping for the best! :-)

-Josh

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, March 10, 2014 at 8:36 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] MuranoPL questions?

Although being a little bit verbose it makes a lot of sense to me.

@Joshua,

Even assuming Python could be sandboxed and whatever else that’s needed to be 
able to use it as a DSL (for something like Mistral, Murano or Heat) is done, why 
do you think Python would be a better alternative for people who know 
neither these new DSLs nor Python itself? Especially, given the fact that 
Python has A LOT of things that they’d never use. I know many people who have 
been programming in Python for a while and they admit they don’t know all the 
nuances of Python and actually use 30-40% of all of its capabilities. Even not 
in domain specific development. So narrowing a feature set that a language 
provides and limiting it to a certain domain vocabulary is what helps people 
solve tasks of that specific domain much easier and in the most expressive 
natural way. Without having to learn tons and tons of details that a general 
purpose language (GPL, hah :) ) provides (btw, the reason to write thick books).

I agree with Stan, if you begin to use a technology you’ll have to learn 
something anyway, be it TaskFlow API and principles or DSL. Well-designed DSL 
just encapsulates essential principles of a system it is used for. By learning 
DSL you’re learning the system itself, as simple as that.

Renat Akhmerov
@ Mirantis Inc.



On 10 Mar 2014, at 05:35, Stan Lagun 
sla...@mirantis.com wrote:

 I'd be very interested in knowing the resource controls u plan to add. 
 Memory, CPU...
We haven't discussed it yet. Any suggestions are welcomed

 I'm 

Re: [openstack-dev] [db][all] (Proposal) Restorable Delayed deletion of OS Resources

2014-03-17 Thread Jay Pipes
On Sun, 2014-03-16 at 23:02 -0700, Allamaraju, Subbu wrote:
 Hi Boris,
 
 I just read the other thread. As Jay asked in [1], have you considered 
 precautions in the UI instead? That should take care of mistakes with manual 
 deletes.
 
 Thx
 Subbu
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029784.html

After hearing from Tim Bell and others, I think that a two-pronged
approach is most useful. First, the above-mentioned UI changes to
prevent common mistakes, and second, using a consistent, standardized
way of undoing certain operations (Boris' proposal).

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Constructive Conversations

2014-03-17 Thread Adrian Otto
Kurt,

I think that a set of community values for OpenStack would be a terrific asset. 
I refer to values constantly as a way to align my efforts with the needs of my 
company. I'd love to have the same tools for my contributions to community 
efforts as well.

Adrian

On Mar 7, 2014, at 11:56 AM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

Folks,

I’m sure that I’m not the first person to bring this up, but I’d like to get 
everyone’s thoughts on what concrete actions we, as a community, can take to 
improve the status quo.

There have been a variety of instances where community members have expressed 
their ideas and concerns via email or at a summit, or simply submitted a patch 
that perhaps challenges someone’s opinion of The Right Way to Do It, and 
responses to that person have been far less constructive than they could have 
been[1]. In an open community, I don’t expect every person who comments on a ML 
post or a patch to be congenial, but I do expect community leaders to lead by 
example when it comes to creating an environment where every person’s voice is 
valued and respected.

What if every time someone shared an idea, they could do so without fear of 
backlash and bullying? What if people could raise their concerns without being 
summarily dismissed? What if “seeking first to understand”[2] were a core value 
in our culture? It would not only accelerate our pace of innovation, but also 
help us better understand the needs of our cloud users, helping ensure we 
aren’t just building OpenStack in the right way, but also building the right 
OpenStack.

We need open minds to build an open cloud.

Many times, we do have wonderful, constructive discussions, but the times we 
don’t cause wounds in the community that take a long time to heal. 
Psychologists tell us that it takes a lot of good experiences to make up for 
one bad one. I will be the first to admit I’m not perfect. Communication is 
hard. But I’m convinced we can do better. We must do better.

How can we build on what is already working, and make the bad experiences as 
rare as possible?

A few ideas to seed the discussion:

  *   Identify a set of core values that the community already embraces for the 
most part, and put them down “on paper.”[3] Leaders can keep these values fresh 
in everyone’s minds by (1) leading by example, and (2) referring to them 
regularly in conversations and talks.
  *   PTLs can add mentoring skills and a mindset of “seeking first to 
understand” to their list of criteria for evaluating proposals to add a 
community member to a core team.
  *   Get people together in person, early and often. Mid-cycle meetups and 
mini-summits provide much higher-resolution communication channels than email 
and IRC, and are great ways to clear up misunderstandings, build relationships 
of trust, and generally get everyone pulling in the same direction.

What else can we do?

Kurt

[1] There are plenty of examples, going back years. Anyone who has been in the 
community very long will be able to recall some to mind. Recent ones I thought 
of include Barbican’s initial request for incubation on the ML, dismissive and 
disrespectful exchanges in some of the design sessions in Hong Kong (bordering 
on personal attacks), and the occasional “WTF?! This is the dumbest idea ever!” 
patch comment.
[2] https://www.stephencovey.com/7habits/7habits-habit5.php
[3] We already have a code of 
conduct (https://www.openstack.org/legal/community-code-of-conduct/) but I think 
a list of core values would be easier to remember and allude to in day-to-day 
discussions. I’m trying to think of ways to make this idea practical. We need 
to stand up for our values, not just say we have them.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-17 Thread Mohammad Banikazemi

Kanzhe, thanks for your response to my comments and questions. Please see
below.

 From: Kanzhe Jiang kan...@gmail.com

[...]
 On Fri, Mar 14, 2014 at 3:18 PM, Mohammad Banikazemi m...@us.ibm.com
wrote:

[...]
 3- For the service chain creation, I am sure there are good reasons
 for requiring a specific provider for a given chain of services but
 wouldn't it be possible to have a generic chain provider which
 would instantiate each service in the chain using the required
 provider for each service (e.g., firewall or loadbalancer service)
 and with setting the insertion contexts for each service such that
 the chain gets constructed as well? I am sure I am ignoring some
 practical requirements but is it worth rethinking the current approach?

 Service Chaining often means a form of traffic steering. Depending
 on how the steering is done, the capabilities of different providers
 differ. Different providers may define different contexts for
 individual services in the chain. For example, a bump-in-the-wire
 service can be inserted as a virtual wire or L3 next hop. So it will
 be hard to define a generic chain provider.

With respect to Question 3 above, yes you are right we need possibly
different providers for this generic chain service type. The solution
could be having the chain as a service type itself which can be provided
by different providers. Different providers can implement the instantiation
of a chain differently. This is similar to the current service-chain model
in that there may be different providers for a service-chain with the
difference being that the service type itself is generic and not specific
to a particular chain such as firewall-vpn.

Best,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-17 Thread Mohammad Banikazemi
I think there are a couple of issues here:

1- Having network services in VMs: There was a design summit session in
this regard in Hong Kong: Framework for Advanced Services in VMs [1]. There
is a corresponding blueprint [2] and some code submitted for early review
marked as work in progress [3].  We should follow up on this work and see
its status and what the plans for near future are. This seems to be
increasingly more relevant and more important.

2- Is it worth revisiting the requirement for having service chain types
specific to particular chains of services? The argument I have heard for
the current design is that the set of chains that are practically used is
very limited. Furthermore, having generic service type chain drivers may be
difficult to develop. With respect to limited use cases, I think even if
that is the case right now, we may be pushing ourselves into a corner as
more diverse set of network services and functions become available (as
suggested by Carlos). So I think the real question is are there practical
barriers in developing a more generic service type for service chains.

Best,

-Mohammad


[1]
http://icehousedesignsummit.sched.org/event/1deb4de716730ca7cecf0c3b968bc592
[2] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
[3] https://review.openstack.org/#/c/72068/



From:   Kanzhe Jiang kanzhe.ji...@bigswitch.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   03/17/2014 12:54 PM
Subject:Re: [openstack-dev] [neutron][policy] Integrating network
policies and network services




Hi Carlos,

The provider mechanism is currently under discussion in advanced service
group. However, your use-case of chaining non-neutron service has not been
considered in the proposal. If you believe it is an important feature,
please definitely be vocal, even better to have a proposal. :-)


 3- For the service chain creation, I am sure there are good
 reasons for requiring a specific provider for a given chain of
 services but wouldn't it be possible to have a generic chain
 provider which would instantiate each service in the chain using
 the required provider for each service (e.g., firewall or
 loadbalancer service) and with setting the insertion contexts for
 each service such that the chain gets constructed as well? I am
 sure I am ignoring some practical requirements but is it worth
 rethinking the current approach?



Service Chaining often means a form of traffic steering. Depending
on how the steering is done, the capabilities of different
providers differ. Different providers may define different contexts
for individual services in the chain. For example, a bump-in-the-wire
service can be inserted as a virtual wire or L3 next hop. So it
will be hard to define a generic chain provider.

  I’m partially with Mohammad on this.

  For what I’ve understood from the service chaining proposal, there would
  be different service chain provider implementations, with each one
  restricted to a statically defined and limited number of services for
  chaining (please correct me if I’m mistaken). That is, taking the
  “Firewall-VPN-ref-Chain” service chain provider from the document as an
  example, users would be limited to creating “firewall - VPN” chains (and
  I’m not even considering the restrictiveness of service providers) but
  not “VPN - firewall”, nor introducing a LB in the middle.


  My rough understanding on chaining, in a broad term, would be to firstly
  support generic L2/L3 chaining, and not restricting to Neutron services
  (FWaaS, LBaaS, VPNaaS) if that is the case, but should also be valid for
  them as they can be reached via network ports as well.

  Last week during the advanced services meeting I presented the following
  use case. DPI (Deep Packet Inspection) is an example of an absent Neutron
  service as of now. Users wanting to run a DPI instance in OpenStack would
  have to create a virtual machine and run it there which is fine. Now, in
  case they want to filter inbound traffic from a (public) network, traffic
  should be steered first to the VM running the DPI and then to the final
  destination. Currently in OpenStack it is not possible to configure this
  and I don’t see how in the proposed BP it would be. It was given the
  example of a DPI, but it can be virtually any service type and service
  implementation. Sure, users wouldn’t get all the fancy APIs OpenStack
  provides to instantiate and configure services.






--
Kanzhe Jiang
MTS at BigSwitch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list

Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-17 Thread Gordon Chung
hi Matt,

 test_ceilometer_resource_list which just calls ceilometer 
 resource_list from the
 CLI once is taking >=2 min to respond. For example:
 http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-
 postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
 (where it takes > 3min)

thanks for bringing this up... we're tracking this here: 
https://bugs.launchpad.net/ceilometer/+bug/1264434

i've put a patch out that partially fixes the issue. from bad to 
average... but i guess i should make the fix a bit more aggressive to 
bring the performance in line with the 'seconds' expectation.

cheers,
gordon chung
openstack, ibm software standards

Matthew Treinish mtrein...@kortar.org wrote on 17/03/2014 02:55:40 PM:

 From: Matthew Treinish mtrein...@kortar.org
 To: openstack-dev@lists.openstack.org
 Date: 17/03/2014 02:57 PM
 Subject: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer 
 resource_list CLI command
 
 Hi everyone,
 
 So a little while ago we noticed that in all the gate runs one of 
 the ceilometer
 cli tests is consistently in the list of slowest tests. (and often 
 the slowest)
 This was a bit surprising given the nature of the cli tests we expect 
them to
 execute very quickly.
 
 test_ceilometer_resource_list which just calls ceilometer 
 resource_list from the
 CLI once is taking >=2 min to respond. For example:
 http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-
 postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
 (where it takes > 3min)
 
 The cli tests are supposed to be quick read-only sanity checks of the 
cli
 functionality and really shouldn't ever be on the list of slowest tests 
for a
 gate run. I think there was possibly a performance regression recently 
in
 ceilometer because from I can tell this test used to normally take ~60 
sec.
 (which honestly is probably too slow for a cli test too) but it is 
currently
 much slower than that.
 
 From logstash it seems there are still some cases when the resource list 
takes
 as long to execute as it used to, but the majority of runs take a long 
time:
 http://goo.gl/smJPB9
 
 In the short term I've pushed out a patch that will remove this test from 
gate
 runs: https://review.openstack.org/#/c/81036 But, I thought it would be 
good to
 bring this up on the ML to try and figure out what changed or why this 
is so
 slow.
 
 Thanks,
 
 -Matt Treinish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Stack breakpoint

2014-03-17 Thread Ton Ngo
I would like to revisit with more details an idea that was mentioned in the
last design summit and hopefully get some feedback.

The scenario is troubleshooting a failed template.
Currently we can stop on the point of failure by disabling rollback:  this
works well for stack-create; stack-update requires some more work but
that's different thread.  In many cases however, the point of failure may
be too late or too hard to debug because the condition causing the failure
may not be obvious or may have been changed.  If we can pause the stack at
a point before the failure, then we can check whether the state of the
environment and the stack is what we expect.
The analogy with program debugging is breakpoint/step, so it may be useful
to introduce this same concept in a stack.

The usage would be something like:
-Run stack-create (or stack-update once it can handle failure) with one or
more resource name specified as breakpoint
-As the engine traverses down the dependency graph, it would stop at the
breakpoint resource and all dependent resources.  Other resources with no
dependency will proceed to completion.
-After debugging, continue the stack by:
-Stepping: remove current breakpoint, set breakpoint for next resource
(s) in dependency graph, resume stack-create (or stack-update)
-Running to completion: remove current breakpoint, resume stack-create
(or stack-update)
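
The traversal behavior described above could be sketched roughly like this
(purely illustrative Python, not Heat's engine code; the graph structure and
function name are invented for the example):

```python
# Purely illustrative sketch of breakpoint-aware traversal -- not Heat code.
# A resource is paused if it is named as a breakpoint, or if anything it
# depends on is paused; independent resources run to completion.

def create_stack(graph, breakpoints):
    """graph maps resource name -> list of resource names it depends on."""
    states = {}

    def visit(name):
        if name not in states:
            dep_states = [visit(dep) for dep in graph[name]]
            if name in breakpoints or 'PAUSED' in dep_states:
                states[name] = 'PAUSED'
            else:
                # A real engine would actually create the resource here.
                states[name] = 'CREATE_COMPLETE'
        return states[name]

    for name in graph:
        visit(name)
    return states
```

With a graph such as {'net': [], 'server': ['net'], 'ip': ['server'],
'volume': []} and a breakpoint on 'server', the independent resources 'net'
and 'volume' complete while 'server' and 'ip' end up PAUSED; resuming amounts
to clearing the breakpoint and re-running the traversal.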

Some other possible uses for this breakpoint:
- While developing new template or resource type, bring up a stack to a
point before the new code is to be executed
- Introduce human process: pause the partial stack so the user can get the
stack info and perform some tasks before continuing

Some issues to consider (with initial feedback from shardy):
- Granularity of stepping:  resource level or internal steps within a
resource
- How to specify breakpoints:  CLI argument or coded in template or both
- How to handle resources with timer, e.g. wait condition:  pause/resume
timer value
- New state for a resource:  PAUSED

Thanks.

Ton Ngo,


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread Chris Friesen

On 03/17/2014 11:59 AM, John Garbutt wrote:

On 17 March 2014 17:54, John Garbutt j...@johngarbutt.com wrote:



Given the scheduler split, writing that value into the nova db from
the scheduler would be a step backwards, and it probably breaks lots
of code that assumes the host is not set until much later.


Why would that be a step backwards?  The scheduler has picked a host for 
the instance, so it seems reasonable to record that information in the 
instance itself as early as possible (to be incorporated into other 
decision-making) rather than have it be implicit in the destination of 
the next RPC message.


Now I could believe that we have code that assumes that having 
instance.host set implies that it's already running on that host, but 
that's a different issue.



I forgot to mention, I am starting to be a fan of a two-phase commit
approach, which could deal with these kinds of things in a more
explicit way, before starting the main boot process.

It's not as elegant as a database transaction, but that doesn't seem
possible in the long run, though there could well be something I am
missing here too.


I'm not an expert in this area, so I'm curious why you think that 
database transactions wouldn't be possible in the long run.


Given that the database is one of the few services that isn't prone to 
races, it seems reasonable to me to implement decision-making as 
transactions within the database.


Where possible it seems to make a lot more sense to have the database do 
an atomic transaction than to scan the database, extract a bunch of 
(potentially unnecessary) data and transfer it over the network, do 
logic in python, send the result back over the network and update the 
database with the result.
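
As a toy illustration of that point (sqlite3 standing in for the real
database; the table and column names here are invented for the example, not
Nova's schema), a conditional UPDATE lets the database arbitrate the race
atomically rather than round-tripping the decision through Python:

```python
# Illustrative only: the WHERE clause turns the update into an atomic
# test-and-set, so only one concurrent caller can claim the instance.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT)')
conn.execute('INSERT INTO instances (id, host) VALUES (1, NULL)')

def claim_instance(conn, instance_id, host):
    cur = conn.execute(
        'UPDATE instances SET host = ? WHERE id = ? AND host IS NULL',
        (host, instance_id))
    conn.commit()
    return cur.rowcount == 1  # True only for the caller that won the claim

print(claim_instance(conn, 1, 'compute-a'))  # True
print(claim_instance(conn, 1, 'compute-b'))  # False: already claimed
```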


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-17 Thread Joe Gordon
On Mon, Mar 17, 2014 at 11:55 AM, Matthew Treinish mtrein...@kortar.orgwrote:

 Hi everyone,

 So a little while ago we noticed that in all the gate runs one of the
 ceilometer
 cli tests is consistently in the list of slowest tests. (and often the
 slowest)
 This was a bit surprising given the nature of the cli tests we expect them
 to
 execute very quickly.

 test_ceilometer_resource_list which just calls ceilometer resource_list
 from the
 CLI once is taking >=2 min to respond. For example:

 http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
 (where it takes > 3min)

 The cli tests are supposed to be quick read-only sanity checks of the cli
 functionality and really shouldn't ever be on the list of slowest tests
 for a
 gate run. I think there was possibly a performance regression recently in
 ceilometer because from I can tell this test used to normally take ~60 sec.
 (which honestly is probably too slow for a cli test too) but it is
 currently
 much slower than that.


Sounds like we should add another round of sanity checking to the CLI
tests: make sure all commands return within x seconds.   As a first pass we
can say x=60 and then crank it down in the future.



 From logstash it seems there are still some cases when the resource list
 takes
 as long to execute as it used to, but the majority of runs take a long
 time:
 http://goo.gl/smJPB9

 In the short term I've pushed out a patch that will remove this test from
 gate
 runs: https://review.openstack.org/#/c/81036 But, I thought it would be
 good to
 bring this up on the ML to try and figure out what changed or why this is
 so
 slow.

 Thanks,

 -Matt Treinish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-17 Thread Jay Pipes
On Mon, 2014-03-17 at 14:55 -0400, Matthew Treinish wrote:
 Hi everyone,
 
 So a little while ago we noticed that in all the gate runs one of the 
 ceilometer
 cli tests is consistently in the list of slowest tests. (and often the 
 slowest)
 This was a bit surprising given the nature of the cli tests we expect them to
 execute very quickly.
 
 test_ceilometer_resource_list which just calls ceilometer resource_list from 
 the
 CLI once is taking >=2 min to respond. For example:
 http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
 (where it takes > 3min)

Yep. At AT&T, we had to disable calls to GET /resources without any
filters on it. The call would return hundreds of thousands of records,
all being JSON-ified at the Ceilometer API endpoint, and the result
would take minutes to return. There was no default limit on the query,
which meant every single record in the database was returned, and on
even a semi-busy system, that meant horrendous performance.

Besides the problem that the SQLAlchemy driver doesn't yet support
pagination [1], the main problem with the get_resources() call is the
underlying database schema for the Sample model is wacky, and forces
the use of a dependent subquery in the WHERE clause [2] which completely
kills performance of the query to get resources.

[1]
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L436
[2]
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L503
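
For what it's worth, the missing limit/marker behavior amounts to something
like the following keyset-pagination sketch (sqlite3 as a stand-in; the table
and parameter names are invented for illustration and are not Ceilometer's
actual schema or API):

```python
# Illustrative keyset pagination: seek past the marker and cap the page
# size, instead of materializing every row in one response.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE resource (id INTEGER PRIMARY KEY)')
conn.executemany('INSERT INTO resource (id) VALUES (?)',
                 [(i,) for i in range(1, 26)])

def list_resources(conn, marker=0, limit=10):
    rows = conn.execute(
        'SELECT id FROM resource WHERE id > ? ORDER BY id LIMIT ?',
        (marker, limit))
    return [r[0] for r in rows]

first = list_resources(conn)                     # ids 1..10
second = list_resources(conn, marker=first[-1])  # ids 11..20
```

The seek-on-indexed-column form also avoids the OFFSET full-scan cost as the
marker moves deeper into the table.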

 The cli tests are supposed to be quick read-only sanity checks of the cli
 functionality and really shouldn't ever be on the list of slowest tests for a
 gate run.

Oh, the test is readonly all-right. ;) It's just that it's reading
hundreds of thousands of records.

  I think there was possibly a performance regression recently in
 ceilometer because from I can tell this test used to normally take ~60 sec.
 (which honestly is probably too slow for a cli test too) but it is currently
 much slower than that.
 
 From logstash it seems there are still some cases when the resource list takes
 as long to execute as it used to, but the majority of runs take a long time:
 http://goo.gl/smJPB9
 
 In the short term I've pushed out a patch that will remove this test from gate
 runs: https://review.openstack.org/#/c/81036 But, I thought it would be good 
 to
 bring this up on the ML to try and figure out what changed or why this is so
 slow.

I agree with removing the test from the gate in the short term. Medium
to long term, the root causes of the problem (that GET /resources has no
support for pagination on the query, there is no default for limiting
results based on a since timestamp, and that the underlying database
schema is non-optimal) should be addressed.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread Andrew Laski

On 03/17/14 at 01:11pm, Chris Friesen wrote:

On 03/17/2014 11:59 AM, John Garbutt wrote:

On 17 March 2014 17:54, John Garbutt j...@johngarbutt.com wrote:



Given the scheduler split, writing that value into the nova db from
the scheduler would be a step backwards, and it probably breaks lots
of code that assumes the host is not set until much later.


Why would that be a step backwards?  The scheduler has picked a host 
for the instance, so it seems reasonable to record that information 
in the instance itself as early as possible (to be incorporated into 
other decision-making) rather than have it be implicit in the 
destination of the next RPC message.


Now I could believe that we have code that assumes that having 
instance.host set implies that it's already running on that host, 
but that's a different issue.



I forgot to mention, I am starting to be a fan of a two-phase commit
approach, which could deal with these kinds of things in a more
explicit way, before starting the main boot process.

It's not as elegant as a database transaction, but that doesn't seem
possible in the long run, though there could well be something I am
missing here too.


I'm not an expert in this area, so I'm curious why you think that 
database transactions wouldn't be possible in the long run.


There has been some effort around splitting the scheduler out of Nova 
and into its own project.  So down the road the scheduler may not have 
direct access to the Nova db.




Given that the database is one of the few services that isn't prone 
to races, it seems reasonable to me to implement decision-making as 
transactions within the database.


Where possible it seems to make a lot more sense to have the database 
do an atomic transaction than to scan the database, extract a bunch 
of (potentially unnecessary) data and transfer it over the network, 
do logic in python, send the result back over the network and update 
the database with the result.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Stack breakpoint

2014-03-17 Thread Clint Byrum
Ton, could you repost this as a new thread? It has very little to do
with the referenced thread.

Excerpts from Ton Ngo's message of 2014-03-17 12:10:33 -0700:
 I would like to revisit with more details an idea that was mentioned in the
 last design summit and hopefully get some feedback.
 
 The scenario is troubleshooting a failed template.
 Currently we can stop on the point of failure by disabling rollback:  this
 works well for stack-create; stack-update requires some more work but
  that's a different thread.  In many cases however, the point of failure may
 be too late or too hard to debug because the condition causing the failure
 may not be obvious or may have been changed.  If we can pause the stack at
 a point before the failure, then we can check whether the state of the
 environment and the stack is what we expect.
 The analogy with program debugging is breakpoint/step, so it may be useful
 to introduce this same concept in a stack.
 
 The usage would be something like:
 -Run stack-create (or stack-update once it can handle failure) with one or
 more resource name specified as breakpoint
 -As the engine traverses down the dependency graph, it would stop at the
 breakpoint resource and all dependent resources.  Other resources with no
 dependency will proceed to completion.
 -After debugging, continue the stack by:
 -Stepping: remove current breakpoint, set breakpoint for next resource
 (s) in dependency graph, resume stack-create (or stack-update)
 -Running to completion: remove current breakpoint, resume stack-create
 (or stack-update)
 
 Some other possible uses for this breakpoint:
 - While developing new template or resource type, bring up a stack to a
 point before the new code is to be executed
 - Introduce human process: pause the partial stack so the user can get the
 stack info and perform some tasks before continuing
 
 Some issues to consider (with initial feedback from shardy):
 - Granularity of stepping:  resource level or internal steps within a
 resource
 - How to specify breakpoints:  CLI argument or coded in template or both
 - How to handle resources with timer, e.g. wait condition:  pause/resume
 timer value
 - New state for a resource:  PAUSED
 
 Thanks.
 
 Ton Ngo,
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread Joe Gordon
On Mon, Mar 17, 2014 at 12:29 PM, Andrew Laski
andrew.la...@rackspace.comwrote:

 On 03/17/14 at 01:11pm, Chris Friesen wrote:

 On 03/17/2014 11:59 AM, John Garbutt wrote:

 On 17 March 2014 17:54, John Garbutt j...@johngarbutt.com wrote:


  Given the scheduler split, writing that value into the nova db from
 the scheduler would be a step backwards, and it probably breaks lots
 of code that assumes the host is not set until much later.


 Why would that be a step backwards?  The scheduler has picked a host for
 the instance, so it seems reasonable to record that information in the
 instance itself as early as possible (to be incorporated into other
 decision-making) rather than have it be implicit in the destination of the
 next RPC message.

 Now I could believe that we have code that assumes that having
 instance.host set implies that it's already running on that host, but
 that's a different issue.

  I forgot to mention, I am starting to be a fan of a two-phase commit
 approach, which could deal with these kinds of things in a more
 explicit way, before starting the main boot process.

 It's not as elegant as a database transaction, but that doesn't seem
 possible in the long run, though there could well be something I am
 missing here too.


 I'm not an expert in this area, so I'm curious why you think that
 database transactions wouldn't be possible in the long run.


 There has been some effort around splitting the scheduler out of Nova and
 into its own project.  So down the road the scheduler may not have direct
 access to the Nova db.


If we do pull out the nova scheduler it can have its own DB, so I don't
think this should be an issue.





 Given that the database is one of the few services that isn't prone to
 races, it seems reasonable to me to implement decision-making as
 transactions within the database.

 Where possible it seems to make a lot more sense to have the database do
 an atomic transaction than to scan the database, extract a bunch of
 (potentially unnecessary) data and transfer it over the network, do logic
 in python, send the result back over the network and update the database
 with the result.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-17 Thread Joe Gordon
On Mon, Mar 17, 2014 at 12:25 PM, Sean Dague s...@dague.net wrote:

 On 03/17/2014 03:22 PM, Joe Gordon wrote:
 
 
 
  On Mon, Mar 17, 2014 at 11:55 AM, Matthew Treinish mtrein...@kortar.org
  mailto:mtrein...@kortar.org wrote:
 
  Hi everyone,
 
  So a little while ago we noticed that in all the gate runs one of
  the ceilometer
  cli tests is consistently in the list of slowest tests. (and often
  the slowest)
  This was a bit surprising given the nature of the cli tests we
  expect them to
  execute very quickly.
 
  test_ceilometer_resource_list which just calls ceilometer
  resource_list from the
   CLI once is taking >= 2 min to respond. For example:
 
 http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
   (where it takes > 3 min)
 
  The cli tests are supposed to be quick read-only sanity checks of
  the cli
  functionality and really shouldn't ever be on the list of slowest
  tests for a
  gate run. I think there was possibly a performance regression
  recently in
  ceilometer because from I can tell this test used to normally take
  ~60 sec.
  (which honestly is probably too slow for a cli test too) but it is
  currently
  much slower than that.
 
 
  Sounds like we should add another round of sanity checking to the CLI
  tests: make sure all commands return within x seconds.   As a first pass
   we can say x=60 and then crank it down in the future.

 So, the last thing I want to do is trigger a race here by us
 artificially timing out on tests. However I do think cli tests should be
 returning in < 2s otherwise they are not simple readonly tests.


Agreed, I said 60 just as a starting point.
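A guard like that could be a thin wrapper around each CLI invocation. This is only a sketch using a plain subprocess call and an illustrative threshold, not tempest's actual CLI test machinery:

```python
import subprocess
import time

def timed_cli(cmd, max_seconds=60):
    # Run the command and fail loudly if it exceeds the wall-clock budget.
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    assert elapsed < max_seconds, (
        "%r took %.1fs, limit is %ss" % (cmd, elapsed, max_seconds))
    return result

# e.g. timed_cli(["ceilometer", "resource-list"], max_seconds=60)
out = timed_cli(["echo", "ok"], max_seconds=5)
print(out.stdout.strip())  # ok
```

Cranking the threshold down over time, as suggested above, then only requires changing the default in one place.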



 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net






Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Salvatore Orlando
It is a common practice to have both an operational and an administrative
status.
I agree ACTIVE as a term might be confusing. Even in the case of a
port, it is not really clear whether it means READY or LINK UP.
Terminology-wise I would suggest READY rather than DEPLOYED, as it is a
term which makes sense for all resources, whereas the latter is probably a
bit more suitable for high layer services.

In my opinion [2] putting a resource administratively down means the user is
deliberately deciding to disable that resource, and this goes beyond simply
disabling its configuration, as mentioned in an earlier post. For
instance, when a port is put administratively down, I'd expect it to not
forward traffic anymore; similarly for a VIP.
Hence, the reaction to putting a resource administratively down should be
that its operational status goes down as well, and therefore there is no
need for an explicit operational status ADMIN DOWN.
This is, from what I can gather, what already happens with ports.
The bug [1] is, in a way, an example of the above situation, since no
action is taken upon an object, in this case a network, being put
administratively down.

However, since this is that time of the release cycle when we can use the
mailing list to throw random ideas... what about doing an API change where
we decide to put the administrative status on its way to deprecation? While
it's a common practice in network engineering to have an admin status, do
we have a compelling use case for Neutron?
I'm asking because 'admin_state_up' is probably the only attribute I've
never updated on any resource since when I started using Quantum!
Also, other IaaS network APIs that I am aware of ([3],[4],[5]) do not have
such concept; with the exception of [3] for the virtual router, if I'm not
wrong.

Thanks in advance for reading through my ramblings!
Salvatore

[1] https://bugs.launchpad.net/neutron/+bug/1237807
[2] Please bear in mind that my opinion is wrong in most cases, or at least
is different from that of the majority!
[3] https://cloudstack.apache.org/docs/api/apidocs-4.2/TOC_Root_Admin.html
[4] http://archives.opennebula.org/documentation:archives:rel2.0:api
[5] http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API-ItemTypes.html
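The semantics argued for here -- admin state as operator intent that overrides whatever the backend reports -- can be sketched in a few lines. The class and enum names below are illustrative, not Neutron code:

```python
from enum import Enum

class OpStatus(Enum):
    READY = "READY"   # the proposed rename of ACTIVE
    DOWN = "DOWN"
    ERROR = "ERROR"

class Resource:
    def __init__(self):
        self.admin_state_up = True
        self._backend_status = OpStatus.READY  # what the backend reports

    @property
    def status(self):
        # Admin-down forces the operational status down as well, so no
        # separate ADMIN DOWN operational value is needed.
        if not self.admin_state_up:
            return OpStatus.DOWN
        return self._backend_status

vip = Resource()
print(vip.status.value)    # READY
vip.admin_state_up = False
print(vip.status.value)    # DOWN
```

The derived property is the whole point: the operational status stays a single field, and admin intent simply masks it.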



On 17 March 2014 17:16, Eugene Nikanorov enikano...@mirantis.com wrote:

  Seems awkward to me, if an IPSec connection has a status of ACTIVE, but
 an admin state of ADMIN DOWN.
 Right, you see, that's the problem. Constant name 'ACTIVE' makes you
 expect that IPSec connection should work, while it is a deployment status.

  OK, so the change is merely change ACTIVE into DEPLOYED instead?
 We can't just rename the ACTIVE to DEPLOYED, and maybe the latter is not
 the best name, but yes, that's the intent.

 Thanks,
 Eugene.



 On Mon, Mar 17, 2014 at 7:31 PM, Kyle Mestery 
  mest...@noironetworks.com wrote:

 On Mon, Mar 17, 2014 at 8:36 AM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

 Hi Kyle,






 It's a typical use case for network devices to have both admin and
 operational
 state. In the case of having admin_state=DOWN and
 operational_state=ACTIVE,
 this just means the port/link is active but has been configured down.
 Isn't this
 the same for LBaaS here? Even reading the bug, the user has clearly
 configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to
 this
 configuration that the pool remains admin_state=DOWN.

 Am I missing something here?

  No, you're not. The user sees 'ACTIVE' status and thinks it contradicts
 'DOWN' admin_state.
 It's naming (UX problem), in my opinion.

 OK, so the change is merely change ACTIVE into DEPLOYED instead?


 Thanks,
 Eugene.












Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread Jay Pipes
On Mon, 2014-03-17 at 12:39 -0700, Joe Gordon wrote:
 On Mon, Mar 17, 2014 at 12:29 PM, Andrew Laski
 andrew.la...@rackspace.com wrote:
 On 03/17/14 at 01:11pm, Chris Friesen wrote:
 On 03/17/2014 11:59 AM, John Garbutt wrote:
 On 17 March 2014 17:54, John Garbutt
 j...@johngarbutt.com wrote:
 
 Given the scheduler split, writing
 that value into the nova db from
 the scheduler would be a step
 backwards, and it probably breaks lots
 of code that assumes the host is not
 set until much later.
 
 Why would that be a step backwards?  The scheduler has
 picked a host for the instance, so it seems reasonable
 to record that information in the instance itself as
 early as possible (to be incorporated into other
 decision-making) rather than have it be implicit in
 the destination of the next RPC message.
 
 Now I could believe that we have code that assumes
 that having instance.host set implies that it's
 already running on that host, but that's a different
 issue.
 
 I forgot to mention, I am starting to be a fan
 of a two-phase commit
 approach, which could deal with these kinds of
 things in a more
 explicit way, before starting the main boot
 process.
 
  It's not as elegant as a database transaction,
  but that doesn't seem
  possible in the long run, but there could well
 be something I am
 missing here too.
 
 I'm not an expert in this area, so I'm curious why you
 think that database transactions wouldn't be possible
 in the long run.
 
 
 There has been some effort around splitting the scheduler out
 of Nova and into its own project.  So down the road the
 scheduler may not have direct access to the Nova db.
 
 
 If we do pull out the nova scheduler it can have its own DB, so I
 don't think this should be an issue.

Just playing devil's advocate here, but even if Gantt had its own
database, would that necessarily mean that there would be only a single
database across the entire deployment? I'm thinking specifically in the
case of cells, where presumably, scheduling requests would jump through
multiple layers of Gantt services, would a single database transaction
really be possible to effectively fence the entire scheduling request?

Best,
-jay





Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-17 Thread Mark Washenberger
On Thu, Mar 13, 2014 at 12:42 PM, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi stackers,

 As a result of discussion:
 [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
 (step by step)
 http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

 I understood that there should be another proposal about how we should
 implement Restorable & Delayed Deletion of OpenStack Resources in a common
 way, without these hacks with soft deletion in the DB.  It is actually very
 simple, take a look at this document:


 https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


 Best regards,
 Boris Pavlovic




Hi Boris,

Before I voice a little disagreement, I'd like to thank you for kicking off
this discussion and stress that I strongly agree with your view (pulled
from the other thread)

 To put in a nutshell: Restoring Deleted resources / Delayed Deletion !=
Soft deletion.

This is absolutely correct and the key to unlocking the problem we have.

However, because of migrations and because being explicit is better than
being implicit, I disagree about the idea of lumping deleted resources all
into the same table. For glance, I'd much rather have a table
deleted_images than a table deleted_resources that has some image
entries. There are a number of reasons, I'll try to give a quick high-level
view of them.

1) Migrations for deleted data are more straightforward and more obviously
necessary.
2) It is possible to make specific modifications to the deleted_X schema.
3) It is possible to take many tables that are used to represent a single
active resource (images, image_locations, image_tags, image_properties) and
combine them into a single table for a deleted resource. This is actually
super important as today we have the problem of not always knowing what
image_properties were actually deleted prior to the image deletion vs the
ones that were deleted as a part of the image deletion.
4) It makes it a conscious choice to decide to have certain types of
resources restorable or have delayed deletes. As you said before, many
types of resources just don't need this functionality, so let's not make it
a feature of the common base class.

(I am assuming for #2 and #3 that this common approach would be implemented
something like deleted_resource['data'] =
json.dumps(dict(active_resource)), sorry if that is seriously incorrect.)

Thanks for your consideration,
markwash
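Points #2 and #3 could look roughly like the following, under the same assumption stated above that the deleted record is a JSON dump of the active resource. The schema is hypothetical:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# A per-resource table (point #2): its schema can evolve independently
# of the active images tables.
conn.execute("""
    CREATE TABLE deleted_images (
        id TEXT PRIMARY KEY,
        deleted_at TEXT NOT NULL,
        data TEXT NOT NULL  -- JSON dump of the image and its child rows
    )""")

# Point #3: the child tables (properties, tags, ...) are folded into one
# document, preserving which properties were already deleted beforehand.
image = {
    "id": "img-1",
    "name": "cirros",
    "properties": {"arch": "x86_64"},
    "tags": ["test"],
}
conn.execute(
    "INSERT INTO deleted_images VALUES (?, ?, ?)",
    (image["id"], "2014-03-17T21:08:00Z", json.dumps(image)))

row = conn.execute(
    "SELECT data FROM deleted_images WHERE id = 'img-1'").fetchone()
print(json.loads(row[0])["name"])  # cirros
```

Restoring is then a single read of one row, rather than un-soft-deleting rows scattered across four tables.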


Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread Chris Friesen

On 03/17/2014 01:29 PM, Andrew Laski wrote:

On 03/17/14 at 01:11pm, Chris Friesen wrote:

On 03/17/2014 11:59 AM, John Garbutt wrote:

On 17 March 2014 17:54, John Garbutt j...@johngarbutt.com wrote:



Given the scheduler split, writing that value into the nova db from
the scheduler would be a step backwards, and it probably breaks lots
of code that assumes the host is not set until much later.


Why would that be a step backwards?  The scheduler has picked a host
for the instance, so it seems reasonable to record that information in
the instance itself as early as possible (to be incorporated into
other decision-making) rather than have it be implicit in the
destination of the next RPC message.

Now I could believe that we have code that assumes that having
instance.host set implies that it's already running on that host,
but that's a different issue.


I forgot to mention, I am starting to be a fan of a two-phase commit
approach, which could deal with these kinds of things in a more
explicit way, before starting the main boot process.

It's not as elegant as a database transaction, but that doesn't seem
possible in the long run, but there could well be something I am
missing here too.


I'm not an expert in this area, so I'm curious why you think that
database transactions wouldn't be possible in the long run.


There has been some effort around splitting the scheduler out of Nova
and into its own project.  So down the road the scheduler may not have
direct access to the Nova db.



Even if the scheduler itself doesn't have access to the nova DB, at some 
point we need to return back from the scheduler into a nova service 
(presumably nova-conductor) at which point we could update the nova db 
with the scheduler's decision and at that point we could check for 
conflicts and reschedule if necessary.


Chris



Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Paul Michali
On Mar 17, 2014, at 3:46 PM, Salvatore Orlando sorla...@nicira.com wrote:

 It is a common practice to have both an operational and an administrative 
 status.
 I agree ACTIVE as a term might be confusing. Even in the case of a port, 
 it is not really clear whether it means READY or LINK UP.
 Terminology-wise I would suggest READY rather than DEPLOYED, as it is a 
 term which makes sense for all resources, whereas the latter is probably a 
 bit more suitable for high layer services.

PCM: I like READY!


 
 In my opinion [2] putting a resource administratively down means the user is 
 deliberately deciding to disable that resource, and this goes beyond simply 
 disabling its configuration, as mentioned in an earlier post. For instance, 
 when a port is put administratively down, I'd expect it to not forward 
 traffic anymore; similarly for a VIP.
 Hence, the reaction to putting a resource administratively down should be 
 that its operational status goes down as well, and therefore there is no 
 need for an explicit operational status ADMIN DOWN.
 This is, from what I can gather, what already happens with ports.
 The bug [1] is, in a way, an example of the above situation, since no action 
 is taken upon an object, in this case a network, being put administratively 
 down.
 
 However, since this is that time of the release cycle when we can use the 
 mailing list to throw random ideas... what about doing an API change where we 
 decide to put the administrative status on its way to deprecation? While it's 
 a common practice in network engineering to have an admin status, do we have 
 a compelling use case for Neutron?

PCM: The only thing I could think of, with VPN, is maybe an operator wanting to 
bring down all IPSec connections maybe for some maintenance action. It would be 
much easier to do an admin down on the service, do whatever is needed, and then 
do an admin up, rather than deleting all the IPSec connections and then 
recreating them.

I wouldn't have heartburn in removing admin control, but then again, I'm not 
familiar with how network operations would use this stuff.


 I'm asking because 'admin_state_up' is probably the only attribute I've never 
 updated on any resource since when I started using Quantum!
 Also, other IaaS network APIs that I am aware of ([3],[4],[5]) do not have 
 such concept; with the exception of [3] for the virtual router, if I'm not 
 wrong.

PCM: It clearly is something commonly seen in the hardware router/switch world 
(at least at Cisco :).


PCM (Paul Michali)

MAIL        p...@cisco.com
IRC         pcm_ (irc.freenode.net)
TW          @pmichali
GPG key     4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83


 
 Thanks in advance for reading through my ramblings!
 Salvatore
 
 [1] https://bugs.launchpad.net/neutron/+bug/1237807
 [2] Please bear in mind that my opinion is wrong in most cases, or at least 
 is different from that of the majority!
 [3] https://cloudstack.apache.org/docs/api/apidocs-4.2/TOC_Root_Admin.html
 [4] http://archives.opennebula.org/documentation:archives:rel2.0:api
 [5] http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API-ItemTypes.html
 
 
 
 On 17 March 2014 17:16, Eugene Nikanorov enikano...@mirantis.com wrote:
  Seems awkward to me, if an IPSec connection has a status of ACTIVE, but an 
  admin state of ADMIN DOWN.
 Right, you see, that's the problem. Constant name 'ACTIVE' makes you expect 
 that IPSec connection should work, while it is a deployment status.
 
  OK, so the change is merely change ACTIVE into DEPLOYED instead?
 We can't just rename the ACTIVE to DEPLOYED, and maybe the latter is not the 
 best name, but yes, that's the intent.
 
 Thanks,
 Eugene.
  
 
 
 On Mon, Mar 17, 2014 at 7:31 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 On Mon, Mar 17, 2014 at 8:36 AM, Eugene Nikanorov enikano...@mirantis.com 
 wrote:
 Hi Kyle,
 
 
 
 
 
 
 It's a typical use case for network devices to have both admin and operational
 state. In the case of having admin_state=DOWN and operational_state=ACTIVE,
 this just means the port/link is active but has been configured down. Isn't 
 this
 the same for LBaaS here? Even reading the bug, the user has clearly configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to this
 configuration that the pool remains admin_state=DOWN.
 
 Am I missing something here?
 No, you're not. The user sees 'ACTIVE' status and thinks it contradicts 'DOWN' 
 admin_state. 
 It's naming (UX problem), in my opinion.
 
 OK, so the change is merely change ACTIVE into DEPLOYED instead?
  
 Thanks,
 Eugene.
 
 
 
 
 

Re: [openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-17 Thread Sylvain Bauza
There is a global concern here about how a holistic scheduler can perform
decisions, and from which key metrics.
The current effort is leading to having the Gantt DB updated thanks to
resource tracker for scheduling appropriately the hosts.

If we consider these metrics as not enough, i.e. that Gantt should perform
an active check to another project, that's something which needs to be
considered carefully. IMHO, on that case, Gantt should only access metrics
thanks to the project REST API (and python client) in order to make sure
that rolling upgrades could happen.
tl;dr: If Gantt requires accessing Nova data, it should request Nova REST
API, and not perform database access directly (even thru the conductor)

-Sylvain
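The pattern Sylvain argues for -- the split-out scheduler consuming Nova's versioned REST API instead of its database -- could be sketched as below. The payload shape follows the os-hypervisors API, but treat the exact endpoint and fields as assumptions rather than a fixed contract:

```python
import json
import urllib.request

def fetch_hypervisors(nova_endpoint, token):
    # Hypothetical direct call; a real consumer would go through
    # python-novaclient rather than raw HTTP.
    req = urllib.request.Request(
        nova_endpoint + "/os-hypervisors/detail",
        headers={"X-Auth-Token": token, "Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def vcpus_used_by_host(payload):
    # Only the versioned API payload is consumed -- never raw DB rows --
    # so Nova's schema can change without breaking the scheduler.
    return {h["hypervisor_hostname"]: h.get("vcpus_used", 0)
            for h in payload.get("hypervisors", [])}

# Canned payload standing in for a live API response:
sample = {"hypervisors": [
    {"hypervisor_hostname": "compute1", "vcpus_used": 3},
    {"hypervisor_hostname": "compute2", "vcpus_used": 0},
]}
print(vcpus_used_by_host(sample))
```

Keeping the parsing separate from the transport also makes the rolling-upgrade story testable without a live Nova.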


2014-03-17 21:10 GMT+01:00 Chris Friesen chris.frie...@windriver.com:

 On 03/17/2014 01:29 PM, Andrew Laski wrote:

 On 03/17/14 at 01:11pm, Chris Friesen wrote:

 On 03/17/2014 11:59 AM, John Garbutt wrote:

 On 17 March 2014 17:54, John Garbutt j...@johngarbutt.com wrote:


  Given the scheduler split, writing that value into the nova db from
 the scheduler would be a step backwards, and it probably breaks lots
 of code that assumes the host is not set until much later.


 Why would that be a step backwards?  The scheduler has picked a host
 for the instance, so it seems reasonable to record that information in
 the instance itself as early as possible (to be incorporated into
 other decision-making) rather than have it be implicit in the
 destination of the next RPC message.

 Now I could believe that we have code that assumes that having
 instance.host set implies that it's already running on that host,
 but that's a different issue.

  I forgot to mention, I am starting to be a fan of a two-phase commit
 approach, which could deal with these kinds of things in a more
 explicit way, before starting the main boot process.

 It's not as elegant as a database transaction, but that doesn't seem
 possible in the long run, but there could well be something I am
 missing here too.


 I'm not an expert in this area, so I'm curious why you think that
 database transactions wouldn't be possible in the long run.


 There has been some effort around splitting the scheduler out of Nova
 and into its own project.  So down the road the scheduler may not have
 direct access to the Nova db.



 Even if the scheduler itself doesn't have access to the nova DB, at some
 point we need to return back from the scheduler into a nova service
 (presumably nova-conductor) at which point we could update the nova db with
 the scheduler's decision and at that point we could check for conflicts and
 reschedule if necessary.


 Chris




Re: [openstack-dev] [db][all] (Proposal) Restorable Delayed deletion of OS Resources

2014-03-17 Thread Tim Bell

Interesting proposal... there would also be a benefit of different tables per 
program from an operational perspective. If I need to recover a database for 
any reason, having different tables would ensure that I could restore glance to 
a point in time without having to lose the nova deleted data.

Tim

From: Mark Washenberger [mailto:mark.washenber...@markwash.net]
Sent: 17 March 2014 21:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion 
of OS Resources



On Thu, Mar 13, 2014 at 12:42 PM, Boris Pavlovic 
bpavlo...@mirantis.com wrote:
Hi stackers,

As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step 
by step)
http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

I understood that there should be another proposal about how we should 
implement Restorable & Delayed Deletion of OpenStack Resources in a common way, 
without these hacks with soft deletion in the DB.  It is actually very simple, take 
a look at this document:

https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


Best regards,
Boris Pavlovic



Hi Boris,

Before I voice a little disagreement, I'd like to thank you for kicking off 
this discussion and stress that I strongly agree with your view (pulled from 
the other thread)

 To put in a nutshell: Restoring Deleted resources / Delayed Deletion != Soft 
 deletion.

This is absolutely correct and the key to unlocking the problem we have.

However, because of migrations and because being explicit is better than being 
implicit, I disagree about the idea of lumping deleted resources all into the 
same table. For glance, I'd much rather have a table deleted_images than a 
table deleted_resources that has some image entries. There are a number of 
reasons, I'll try to give a quick high-level view of them.

1) Migrations for deleted data are more straightforward and more obviously 
necessary.
2) It is possible to make specific modifications to the deleted_X schema.
3) It is possible to take many tables that are used to represent a single 
active resource (images, image_locations, image_tags, image_properties) and 
combine them into a single table for a deleted resource. This is actually super 
important as today we have the problem of not always knowing what 
image_properties were actually deleted prior to the image deletion vs the ones 
that were deleted as a part of the image deletion.
4) It makes it a conscious choice to decide to have certain types of resources 
restorable or have delayed deletes. As you said before, many types of resources 
just don't need this functionality, so let's not make it a feature of the 
common base class.

(I am assuming for #2 and #3 that this common approach would be implemented 
something like deleted_resource['data'] = json.dumps(dict(active_resource)), 
sorry if that is seriously incorrect.)

Thanks for your consideration,
markwash





Re: [openstack-dev] Automatic version creation in PBR

2014-03-17 Thread Doug Hellmann
On Sun, Mar 16, 2014 at 11:10 PM, Robert Collins
robe...@robertcollins.net wrote:

 Right now PBR's creation of versions for postversioning is problematic
 - it generates versions that (if recognized by pip) would be treated
 as releases, even when it's a non-tagged commit.

 https://etherpad.openstack.org/p/pbr-postversion-semver

 The tl;dr is a proposal to generate dev-marked versions of the lowest
 possible higher tag that we would accept - which would be any of full
 release or alpha/beta/rc

 A related but can be done separately change is to pick version strings
 for alpha releases that are compatible with both PEP 440 and semver.

 Feedback solicited - if this is something contentious, we can make it
 an opt-in feature, but it seems unambiguously better to the folk that
 chatted through it on #openstack-infra, so ideally I'd like to
 transition any existing incompatible tags we have, and then land code
 to make this the behaviour for post-versioned (the default - no
 'version' key in setup.cfg) untagged commits.

 -Rob



As mordred, lifeless, and I discussed in #openstack-infra today, this
represents a backwards-incompatible change to the version format strings,
which we believe are being consumed by packagers. We should wait until
after the feature freeze, make sure we have pbr pinned in the requirements
for stable/icehouse, and then we can make this change and update the major
version number of pbr.

Rob, is there a library for python to compute semver numbers? If not,
should that be stand-alone or part of pbr?

Doug


[openstack-dev] sphinxcontrib-pecanwsme 0.7.1 released

2014-03-17 Thread Doug Hellmann
sphinxcontrib-pecanwsme is used for documenting APIs built with the Pecan
web framework and WSME.

This bug fix release includes one change:

* Fix formatting issue for docstrings without param list


Re: [openstack-dev] [Neutron][LBaaS][FWaaS][VPN] Admin status vs operational status

2014-03-17 Thread Eugene Nikanorov
On Mon, Mar 17, 2014 at 11:46 PM, Salvatore Orlando sorla...@nicira.com wrote:

 It is a common practice to have both an operational and an administrative
 status.
 I agree ACTIVE as a term might be confusing. Even in the case of a
 port, it is not really clear whether it means READY or LINK UP.
 Terminology-wise I would suggest READY rather than DEPLOYED, as it is
 a term which makes sense for all resources, whereas the latter is probably
 a bit more suitable for high layer services.

Yep, READY seems fine to me as well.



 In my opinion [2] putting a resource administratively down means the user
 is deliberately deciding to disable that resource, and this goes beyond
 simply disabling its configuration, as mentioned in an earlier post. For
 instance, when a port is put administratively down, I'd expect it to not
 forward traffic anymore; similarly for a VIP.

Agree. But it is worth mentioning that disabling a resource doesn't mean
removing it from the backend, which, in turn, requires the backends and
their drivers to support switching configuration off (or otherwise that
kind of behavior becomes backend-dependent, and that creates what is called
an 'uneven API experience')



 However, since this is that time of the release cycle when we can use the
 mailing list to throw random ideas... what about doing an API change where
 we decide to put the administrative status on its way to deprecation?

Quite radical solution, what would be the alternative?
I'd be glad just to improve the names and set of operational status
constants.

While it's a common practice in network engineering to have an admin
 status, do we have a compelling use case for Neutron?

I'm asking because 'admin_state_up' is probably the only attribute I've
 never updated on any resource since when I started using Quantum!

I think it will be used often at least in lbaas world.

Thanks,
Eugene.

Also, other IaaS network APIs that I am aware of ([3],[4],[5]) do not have
 such concept; with the exception of [3] for the virtual router, if I'm not
 wrong.

 Thanks in advance for reading through my ramblings!
 Salvatore

 [1] https://bugs.launchpad.net/neutron/+bug/1237807
 [2] Please bear in mind that my opinion is wrong in most cases, or at
 least is different from that of the majority!
 [3] https://cloudstack.apache.org/docs/api/apidocs-4.2/TOC_Root_Admin.html
 [4] http://archives.opennebula.org/documentation:archives:rel2.0:api
 [5]
 http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API-ItemTypes.html



 On 17 March 2014 17:16, Eugene Nikanorov enikano...@mirantis.com wrote:

  Seems awkward to me, if an IPSec connection has a status of ACTIVE,
 but an admin state of ADMIN DOWN.
 Right, you see, that's the problem. Constant name 'ACTIVE' makes you
 expect that IPSec connection should work, while it is a deployment status.

  OK, so the change is merely change ACTIVE into DEPLOYED instead?
 We can't just rename the ACTIVE to DEPLOYED, and maybe the latter is not
 the best name, but yes, that's the intent.

 Thanks,
 Eugene.



 On Mon, Mar 17, 2014 at 7:31 PM, Kyle Mestery 
  mest...@noironetworks.com wrote:

 On Mon, Mar 17, 2014 at 8:36 AM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

 Hi Kyle,






 It's a typical use case for network devices to have both admin and
 operational
 state. In the case of having admin_state=DOWN and
 operational_state=ACTIVE,
 this just means the port/link is active but has been configured down.
 Isn't this
 the same for LBaaS here? Even reading the bug, the user has clearly
 configured
 the VIP pool as admin_state=DOWN. When it becomes ACTIVE, it's due to
 this
 configuration that the pool remains admin_state=DOWN.

 Am I missing something here?

  No, you're not. The user sees 'ACTIVE' status and thinks it contradicts
 'DOWN' admin_state.
 It's naming (UX problem), in my opinion.

 OK, so the change is merely change ACTIVE into DEPLOYED instead?


 Thanks,
 Eugene.















Re: [openstack-dev] Automatic version creation in PBR

2014-03-17 Thread Robert Collins
On 18 March 2014 07:28, Jay Pipes jaypi...@gmail.com wrote:
 On Mon, 2014-03-17 at 16:10 +1300, Robert Collins wrote:

 Hi Rob, thanks for the heads up.

 A number of us use pbr for outside-of-OpenStack projects, and have
 relied on it for things like proper package versioning using git tags.

Yup!

 I'm a little unclear what, if any, changes to my standard python sdist
 and upload actions I will need to take to publish releases of my
 projects that use pbr.

If you set 'version' in setup.cfg, pbr's behaviour will not change at all.

If you do not set 'version' in setup.cfg then:
 - for tagged commits, pbr's behaviour will not change at all.
 - for untagged commits, pbr will change from
'$last_tag_version.$commit_count.g$sha' to
'$next_highest_pre_release.dev$commit_count.g$sha'

The last point is incompatible if you were uploading untagged commits
to pypi. Of course, you shouldn't be doing that because they are
pre-release versions but not marked as such for pypi!

 Would you mind easing my mind and letting me know if this is something
 that is going to break things for me? I'm no packaging expert, and rely
 on things like pbr to do a lot of this magic for me :)

It should make it better in all regards.

However, as Doug mentions, its not backwards compatible (consider
1.0.0 + 5 commits):

Old:
1.0.0.5.g$sha

New:
1.0.1.0a0.dev5.g$sha

*if* you were post-processing 1.0.0.5.g$sha into some other version
schema, the change in version string may break your tooling.

I expect that to happen to folk making temporary debs and things like
that - but the new versions are strictly better for that, once folk
migrate.
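
For illustration, the new intermediate version derivation can be sketched
like this (a simplification, not pbr's actual code; next_dev_version() is a
hypothetical helper):

```python
# Sketch of the new pre-release scheme: bump to the next patch version
# and mark the result as an alpha dev build, so it sorts *before* the
# eventual release instead of after the last tag.
def next_dev_version(last_tag, commit_count, sha):
    major, minor, patch = (int(p) for p in last_tag.split("."))
    return "%d.%d.%d.0a0.dev%d.g%s" % (
        major, minor, patch + 1, commit_count, sha)

print(next_dev_version("1.0.0", 5, "abc1234"))
# 1.0.1.0a0.dev5.gabc1234 -- the example above
```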

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Automatic version creation in PBR

2014-03-17 Thread Robert Collins
On 18 March 2014 09:32, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 As mordred, lifeless, and I discussed in #openstack-infra today, this
 represents a backwards-incompatible change to the version format strings,
 which we believe are being consumed by packagers. We should wait until after
 the feature freeze, make sure we have pbr pinned in the requirements for
 stable/icehouse, and then we can make this change and update the major
 version number of pbr.

Note that there is *no* change to *release* format strings, *only* to
intermediate format strings.

So - we should:
 - pin existing stable branches (so their intermediate version
behaviour stays the same)
 - get this improvement into a pbr release now, so that I-series
intermediate version numbers will be better

IMO anyhow.

 Rob, is there a library for python to compute semver numbers? If not, should
 that be stand-alone or part of pbr?

pbr has significant issues with dependencies due to its installation
in the early bootstrap stage of setup.py. So, it will be in pbr.
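
For the curious, a minimal semver helper might look something like the
sketch below; pbr's eventual API may well differ:

```python
class SemVer(object):
    """Minimal semantic-version sketch: parse, bump, render."""
    def __init__(self, major, minor, patch):
        self.major, self.minor, self.patch = major, minor, patch

    @classmethod
    def parse(cls, version):
        return cls(*(int(p) for p in version.split(".")))

    def increment(self, minor=False, major=False):
        # Bumping a component resets everything to its right.
        if major:
            return SemVer(self.major + 1, 0, 0)
        if minor:
            return SemVer(self.major, self.minor + 1, 0)
        return SemVer(self.major, self.minor, self.patch + 1)

    def __str__(self):
        return "%d.%d.%d" % (self.major, self.minor, self.patch)

print(SemVer.parse("1.0.0").increment())            # 1.0.1
print(SemVer.parse("1.0.0").increment(minor=True))  # 1.1.0
```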

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Heat] Stack breakpoint

2014-03-17 Thread Ton Ngo

(reposting as new thread)

I would like to revisit with more details an idea that was mentioned in the
last design summit and hopefully get some feedback.

The scenario is troubleshooting a failed template.
Currently we can stop at the point of failure by disabling rollback:  this
works well for stack-create; stack-update requires some more work, but
that's a different thread.  In many cases, however, the point of failure may
be too late or too hard to debug because the condition causing the failure
may not be obvious or may have changed.  If we can pause the stack at
a point before the failure, then we can check whether the state of the
environment and the stack is what we expect.
The analogy with program debugging is breakpoint/step, so it may be useful
to introduce this same concept in a stack.

The usage would be something like:
- Run stack-create (or stack-update, once it can handle failure) with one or
  more resource names specified as breakpoints
- As the engine traverses down the dependency graph, it would stop at the
  breakpoint resource and all dependent resources.  Other resources with no
  dependency will proceed to completion.
- After debugging, continue the stack by:
  - Stepping: remove the current breakpoint, set a breakpoint for the next
    resource(s) in the dependency graph, resume stack-create (or
    stack-update)
  - Running to completion: remove the current breakpoint, resume
    stack-create (or stack-update)
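
A toy sketch of the traversal described above (the resource names and the
rule for propagating the paused state are illustrative only; the real Heat
engine is far more involved):

```python
# Each resource maps to the resources it depends on.
deps = {
    "server": ["port", "volume"],
    "port": ["network"],
    "volume": [],
    "network": [],
}

def create_stack(deps, breakpoints):
    status = {}

    def create(res):
        if res in status:
            return status[res]
        child_states = [create(d) for d in deps[res]]
        # A resource pauses if it is a breakpoint, or if anything it
        # depends on is paused; everything else runs to completion.
        if res in breakpoints or "PAUSED" in child_states:
            status[res] = "PAUSED"
        else:
            status[res] = "CREATE_COMPLETE"
        return status[res]

    for res in deps:
        create(res)
    return status

print(create_stack(deps, breakpoints={"port"}))
# network and volume complete; port (the breakpoint) and server
# (which depends on it) pause
```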

Some other possible uses for this breakpoint:
- While developing a new template or resource type, bring up a stack to a
  point before the new code is to be executed
- Introduce a human process: pause the partial stack so the user can get the
  stack info and perform some tasks before continuing

Some issues to consider (with some initial feedback from shardy):
- Granularity of stepping: resource level or internal steps within a
  resource
- How to specify breakpoints: CLI argument or coded in template or both
- How to handle resources with timer, e.g. wait condition: pause/resume
  timer value
- New state for a resource: PAUSED

Thanks.

Ton Ngo,




[openstack-dev] [nova] need help with unit test framework, trying to fix bug 1292963

2014-03-17 Thread Chris Friesen


I've submitted code for review at https://review.openstack.org/80808, but
it seems to break the unit tests.


Where do the 'deleted' and 'deleted_at' fields for the instance get
created for unit tests?  Where is the database stored for unit tests,
and is there a way to look at it directly?


Here is what's confusing me.  I added a breakpoint in the testcase at 
the point where it's trying to retrieve the instances.



The original code looks like this:

filters = {'uuid': filter_uuids, 'deleted_at': None}
instances = instance_obj.InstanceList.get_by_filters(context, 
filters=filters)


If I run that code, I get three instances, as expected.


If I change it to filters = {'uuid': filter_uuids, 'deleted': 0} and 
rerun get_by_filters() then I get no instances in the result.



However, if I run db.instance_get_all() and look at the result, there 
are three instances and the deleted field is zero in each case:


(Pdb) db.instance_get_all(context)[0]['deleted']
0
(Pdb) db.instance_get_all(context)[1]['deleted']
0
(Pdb) db.instance_get_all(context)[2]['deleted']
0


So why does it fail if I try to filter by the 'deleted' field?


Thanks,
Chris


