[openstack-dev] Fwd: [OpenStack:Networking] Router Internal interface goes down sometimes

2013-10-25 Thread Balamurugan V G
Hi,

It would be very helpful if any developers involved in Neutron could shed
some light on the cases in which a router's internal interface can be deleted
after being added. Please refer to the issue below.

Thanks,
Balu


-- Forwarded message --
From: Balamurugan V G balamuruga...@gmail.com
Date: Tue, Oct 22, 2013 at 7:51 AM
Subject: [OpenStack:Networking] Router Internal interface goes down
sometimes
To: openst...@lists.openstack.org


   Hi,


 I have a Grizzly 2013.1.3 four-node setup (Controller, Network, Compute1,
 Compute2) on Ubuntu 12.04 LTS.

 I launch an instance which is attached to two networks (subnets) as shown
 in the diagram linked below. The subnets are in turn attached to two
 separate routers. Sometimes I see that the internal interface on one of
 the routers remains DOWN (shown as a red dot in the diagram below). Can
 someone help me understand why this is happening? As noted, this only
 happens once in a while, which makes the deployment unreliable.



 https://dl.dropboxusercontent.com/u/5145369/OpenStack/NetworkTopology_RouterInternalPortIssue.png
 (Topology Diagram)


Please note that the provisioning of the networks, routers, and instances
is automated and happens in quick succession, with no delay between operations.
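
Since everything is scripted, one thing we could add to the automation as a
diagnostic aid is a check that the router's internal ports actually reach
ACTIVE before moving on. A rough sketch using python-neutronclient (the
credentials, URL and router ID below are placeholders, and this is only a
way to narrow down the race, not a fix):

    import time
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    def wait_for_router_ports(router_id, timeout=60):
        # Poll until every internal (qr-) port of the router is ACTIVE.
        deadline = time.time() + timeout
        while time.time() < deadline:
            ports = neutron.list_ports(
                device_id=router_id,
                device_owner='network:router_interface')['ports']
            if ports and all(p['status'] == 'ACTIVE' for p in ports):
                return True
            time.sleep(2)
        return False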


 When I look at the logs, the tap interface corresponding to the port
 which is DOWN is removed soon after it is added. The entries for 8bc4ac6b
 below correspond to the internal interface which is fine, and the entries
 for 09d96215 correspond to the problem port.

 /var/log/syslog


8bc4ac6b - Interface which is fine
09d96215  - Interface which is DOWN



 Oct 21 10:07:13 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl -- --may-exist add-port br-int qr-8bc4ac6b-90 --
 set Interface qr-8bc4ac6b-90 type=internal -- set Interface qr-8bc4ac6b-90
 external-ids:iface-id=8bc4ac6b-9027-4eee-9e1e-7af7b9a9c0df -- set Interface
 qr-8bc4ac6b-90 external-ids:iface-status=active -- set Interface
 qr-8bc4ac6b-90 external-ids:attached-mac=fa:16:3e:02:3c:1e

 Oct 21 10:07:13 openstack-blr-network kernel: [  401.933281] device
 qr-8bc4ac6b-90 entered promiscuous mode

 Oct 21 10:07:15 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl --timeout=2 set Port qr-8bc4ac6b-90 tag=5

 Oct 21 10:07:20 openstack-blr-network kernel: [  408.144038]
 qg-f1b0e315-df: no IPv6 routers present

 Oct 21 10:07:20 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl -- --may-exist add-port br-int qr-09d96215-c4 --
 set Interface qr-09d96215-c4 type=internal -- set Interface qr-09d96215-c4
 external-ids:iface-id=09d96215-c4e8-40a1-afc0-9c3904474b73 -- set Interface
 qr-09d96215-c4 external-ids:iface-status=active -- set Interface
 qr-09d96215-c4 external-ids:attached-mac=fa:16:3e:56:9d:29

 Oct 21 10:07:20 openstack-blr-network kernel: [  408.329781] device qr-
 09d96215-c4 entered promiscuous mode

 Oct 21 10:07:20 openstack-blr-network kernel: [  408.640042]
 tap5507f5e7-ea: no IPv6 routers present

 Oct 21 10:07:21 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl --timeout=2 set Port qr-09d96215-c4 tag=4

 Oct 21 10:07:22 openstack-blr-network kernel: [  410.280038]
 tap44bede85-d8: no IPv6 routers present

 Oct 21 10:07:24 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl -- --may-exist add-port br-ex qg-4414a87a-f1 -- set
 Interface qg-4414a87a-f1 type=internal -- set Interface qg-4414a87a-f1
 external-ids:iface-id=4414a87a-f166-426c-93ab-b51d9e8253c1 -- set Interface
 qg-4414a87a-f1 external-ids:iface-status=active -- set Interface
 qg-4414a87a-f1 external-ids:attached-mac=fa:16:3e:94:c0:dd

 Oct 21 10:07:24 openstack-blr-network kernel: [  412.883270] device
 qg-4414a87a-f1 entered promiscuous mode

 Oct 21 10:07:25 openstack-blr-network kernel: [  413.296045] qr-8bc4ac6b-90:
 no IPv6 routers present

 Oct 21 10:07:29 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl --timeout=2 -- --if-exists del-port br-int qr-
 09d96215-c4

 Oct 21 10:07:29 openstack-blr-network kernel: [  417.399603] device qr-
 09d96215-c4 left promiscuous mode

 Oct 21 10:07:35 openstack-blr-network kernel: [  423.800021]
 qg-4414a87a-f1: no IPv6 routers present

 Oct 21 10:07:45 openstack-blr-network dnsmasq-dhcp[3861]:
 DHCPREQUEST(tap3faa7b42-37) 192.168.2.2 fa:16:3e:81:7a:58

 Oct 21 10:07:45 openstack-blr-network dnsmasq-dhcp[3861]:
 DHCPACK(tap3faa7b42-37) 192.168.2.2 fa:16:3e:81:7a:58 host-192-168-2-2

 Oct 21 10:08:16 openstack-blr-network ovs-vsctl: 1|vsctl|INFO|Called
 as /usr/bin/ovs-vsctl -- --may-exist add-port br-ex qg-4f64cb40-a6 -- set
 Interface qg-4f64cb40-a6 type=internal -- set Interface qg-4f64cb40-a6
 external-ids:iface-id=4f64cb40-a63a-47cc-935d-950e99212304 -- set Interface
 qg-4f64cb40-a6 external-ids:iface-status=active -- set Interface
 qg-4f64cb40-a6 external-ids:attached-mac=fa:16:3e:4d:44:75

 Oct 21 10:08:16 openstack-blr-network 

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-25 Thread Thomas Spatzier
Hi Keith,

thanks for sharing your opinion. That seems to make sense, and I know
Adrian was heavily involved in the discussions at the Portland summit, so it
seems like the right contacts are hooked up.
Looking forward to the discussions at the summit.

Regards,
Thomas

Keith Bray keith.b...@rackspace.com wrote on 25.10.2013 02:23:55:

 From: Keith Bray keith.b...@rackspace.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 25.10.2013 02:31
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 Hi Thomas, here's my opinion:  Heat and Solum contributors will work
 closely together to figure out where specific feature implementations
 belong... But, in general, Solum is working at a level above Heat.  To
 write a Heat template, you have to know about infrastructure setup and
 configuration settings of infrastructure and API services.  I believe
 Solum intends to provide the ability to tweak and configure the amount of
 complexity that gets exposed or hidden so that it becomes easier for
cloud
 consumers to just deal with their application and not have to necessarily
 know or care about the underlying infrastructure and API services, but
 that level of detail can be exposed to them if necessary. Solum will know
 what infrastructure and services to set up to run applications, and it
 will leverage Heat and Heat templates for this.

 The Solum project has been very vocal about leveraging Heat under the
hood
 for the functionality and vision of orchestration that it intends to
 provide.  It seems, based on this thread (and +1 from me), enough people
 are interested in having Heat provide some level of software
 orchestration, even if it's just bootstrapping other CM tools and
 coordinating the "when are you done", and I haven't heard any Solum folks
 object to Heat implementing software orchestration capabilities... So,
I'm
 looking forward to great discussions on this topic for Heat at the
summit.
  If you recall, Adrian Otto (who announced project Solum) was also the
one
 who was vocal at the Portland summit about the need for HOT syntax.  I
 think both projects are on a good path with a lot of fun collaboration
 time ahead.

 Kind regards,
 -Keith

 On 10/24/13 7:56 AM, Thomas Spatzier thomas.spatz...@de.ibm.com
wrote:

 Hi all,
 
 maybe a bit off track with respect to latest concrete discussions, but I
 noticed the announcement of project Solum on openstack-dev.
 Maybe this is playing on a different level, but I still see some
relation
 to all the software orchestration we are having. What are your opinions
on
 this?
 
 BTW, I just posted a similar short question in reply to the Solum
 announcement mail, but some of us have mail filters and might read [Heat]
 mail with higher prio, and I was interested in the Heat view.
 
 Cheers,
 Thomas
 
 Patrick Petit patrick.pe...@bull.net wrote on 24.10.2013 12:15:13:
  From: Patrick Petit patrick.pe...@bull.net
  To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org,
  Date: 24.10.2013 12:18
  Subject: Re: [openstack-dev] [Heat] HOT Software configuration
proposal
 
  Sorry, I clicked the 'send' button too quickly.
 
  On 10/24/13 11:54 AM, Patrick Petit wrote:
   Hi Clint,
   Thank you! I have few replies/questions in-line.
   Cheers,
   Patrick
   On 10/23/13 8:36 PM, Clint Byrum wrote:
   Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
   Dear Steve and All,
  
   If I may add up on this already busy thread to share our
experience
   with
   using Heat in large and complex software deployments.
  
   Thanks for sharing Patrick, I have a few replies in-line.
  
   I work on a project which precisely provides additional value at the
   articulation point between resource orchestration automation and
   configuration management. We rely on Heat and chef-solo respectively
   for these base management functions. On top of this, we have developed
   an event-driven workflow to manage the life-cycles of complex software
   stacks whose primary purpose is to support middleware components as
   opposed to end-user apps. Our use cases are peculiar in the sense that
   software setup (install, config, contextualization) is not a one-time
   operation but a continuous thing that can happen at any time in the
   life-span of a stack. Users can deploy (and undeploy) apps long after
   the stack is created. Auto-scaling may also result in asynchronous app
   deployment. More about this later. The framework we have designed works
   well for us. It clearly refers to a PaaS-like environment which I
   understand is not the topic of the HOT software configuration
   proposal(s), and that's absolutely fine with us. However, the question
   for us is whether the separation of software config from resources
   would make our life easier or not. I think the answer is definitely
   yes, but on the condition that the DSL extension preserves
   almost everything from the expressiveness of 

Re: [openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core

2013-10-25 Thread Ghe Rivero
+1


On Fri, Oct 25, 2013 at 2:18 AM, Devananda van der Veen 
devananda@gmail.com wrote:

 Hi all,

 I'd like to nominate Lucas Gomes for ironic-core. He's been consistently
 doing reviews for several months and has led a lot of the effort on the API
 and client libraries.

 Thanks for the great work!
 -Deva

 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Pinky: Gee, Brain, what do you want to do tonight?
The Brain: The same thing we do every night, Pinky—try to take over the
world!

 .''`.  Pienso, Luego Incordio
: :' :
`. `'
  `-   www.debian.org   www.openstack.com

GPG Key: 26F020F7
GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core

2013-10-25 Thread Robert Collins
On 25 October 2013 13:18, Devananda van der Veen
devananda@gmail.com wrote:
 Hi all,

 I'd like to nominate Lucas Gomes for ironic-core. He's been consistently
 doing reviews for several months and has led a lot of the effort on the API
 and client libraries.

+1. Oh, and man I need to do more Ironic reviews.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core

2013-10-25 Thread Roman Prykhodchenko
Totally agree.

On Oct 25, 2013, at 03:18 , Devananda van der Veen devananda@gmail.com 
wrote:

 Hi all,
 
 I'd like to nominate Lucas Gomes for ironic-core. He's been consistently 
 doing reviews for several months and has led a lot of the effort on the API 
 and client libraries.
 
 Thanks for the great work!
 -Deva
 
 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] About single entry point in trove-guestagent

2013-10-25 Thread Illia Khudoshyn
I'll try to code it this weekend. Hopefully I will be able to show it by Monday.


On Fri, Oct 25, 2013 at 7:46 AM, Michael Basnight mbasni...@gmail.com wrote:


 On Oct 23, 2013, at 7:03 AM, Illia Khudoshyn wrote:

  Hi Denis, Michael, Vipul and all,
 
  I noticed a discussion on IRC about adding a single entry point (a sort of
  'SuperManager') to the guestagent. Let me add my 5 cents.

  I agree that we should ultimately avoid code duplication. But from my
  experience, only a very small part of the GA Manager can be considered
  really duplicated code, namely Manager#prepare(). The 'backup' part may be
  another candidate, but I'm not sure yet. It may still be rather service
  type specific. All the rest of the code was just delegating.

 Yes, currently that is the case :)

 
  If we add a 'SuperManager', all we'll have is just more delegation:

  1. There is no use for dynamic loading of the corresponding Manager
  implementation, because there will never be more than one service type
  supported on a concrete guest. So the current implementation with a
  configurable dictionary service_type -> ManagerImpl looks good to me.

  2. Nor does the 'SuperManager' provide a common interface for Manager,
  due to the dynamic nature of Python. As has been said,
  trove.guestagent.api.API provides the list of methods with parameters we
  need to implement. What I'd like to have is a description of the types of
  those params as well as the return types. (Man, I miss static typing.) All
  we can do for that is make sure we have proper unit tests with REAL values
  for params and returns.
 
  As for the common part of the Manager's code, I'd go for extracting that
 into a mixin.

 When we started talking about it, I mentioned privately to one of the
 Rackspace Trove developers that we might be able to solve this effectively
 with a mixin instead of more parent classes :) I would like to see an
 example of both of them. At the end of the day all I care about is not
 having more copy pasta between manager impls as we grow the common stuff,
 even if that is just a method call in each guest to invoke each bit of
 common code.
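
 To make the comparison concrete, here is a minimal sketch of the mixin
 shape I have in mind (the class and method names below are made up for
 illustration, not actual guestagent code):

     class CommonGuestMixin(object):
         # Holds the bits of Manager code that are the same for every
         # service type, e.g. the generic parts of prepare().

         def prepare(self, context, device_path=None, mount_point=None):
             # Generic steps shared by all datastores...
             self._setup_storage(device_path, mount_point)
             # ...then delegate to the service-specific implementation.
             self.start_service(context)

         def _setup_storage(self, device_path, mount_point):
             pass  # mount/format logic would live here

     class MySqlManager(CommonGuestMixin):
         # A concrete manager only fills in the service-specific hooks.

         def start_service(self, context):
             pass  # start mysqld, apply config changes, etc.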

 
  Thanks for your attention.
 
  --
  Best regards,
  Illia Khudoshyn,
  Software Engineer, Mirantis, Inc.
 
  38, Lenina ave. Kharkov, Ukraine
  www.mirantis.com
  www.mirantis.ru
 
  Skype: gluke_work
  ikhudos...@mirantis.com
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework

2013-10-25 Thread IWAMOTO Toshihiro
Let me try to clarify things a bit.

At Fri, 25 Oct 2013 08:44:05 +0900,
Itsuro ODA wrote:
 
 Hi Gary,
 
 Thanks for your response.
 
 Our plan is as follows:
 * The LVS driver is one of the lbaas provider drivers.
   It communicates with the l3_agent instead of the lbaas_agent.

The LVS agent part will reside in the L3 agent, just like FWaaS.
At this point, I plan to use the same LBaaS RPC topic, so the server
(Neutron) side of the LVS implementation isn't much different from
the current HAProxy one.

 * On the l3_agent side, I think the implementation is the same as for
   fwaas: an LVSL3AgentRpcCallback class is added and inherited by
   L3NATAgent, and it communicates with the LVS provider driver.
   So the existing l3_agent functions are not changed; only the LB
   function is added.

L3NATAgent inherits FWaaSL3AgentRpcCallback.
My plan is to create LVSL3AgentRpcCallback and make L3NATAgent inherit
this. LVSL3AgentRpcCallback will make use of the L3 router ports for
LVS operations.
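
A structural sketch of that shape (the class bodies and the stand-in base
class below are illustrative only, not Neutron's actual code):

    class FWaaSL3AgentRpcCallback(object):
        # Stand-in for the existing FWaaS callback mixed into the L3 agent.
        pass

    class LVSL3AgentRpcCallback(object):
        # Proposed mixin: handles the LBaaS RPC and drives LVS using the
        # router's qr- ports.
        def create_vip(self, context, vip):
            raise NotImplementedError

    class L3NATAgent(FWaaSL3AgentRpcCallback, LVSL3AgentRpcCallback):
        # The L3 agent would gain LVS handling the same way it gained FWaaS.
        pass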

   I think the implementation would change depending on
   the service chaining discussion.

Exactly.
There is a private LVS LBaaS implementation.
I think the above plan is a good way to provide a working example
based on the code.
Implementations can change as the service chaining discussion
develops.


 Thanks,
 Itsuro Oda
 
 # note that Toshihiro Iwamoto, a main developer of our BPs,
 # may reply instead of me. He will attend the HK summit.
 
 On Thu, 24 Oct 2013 16:03:25 -0700
 Gary Duan gd...@varmour.com wrote:
 
  Hi, Oda-san,
  
  Thanks for your response.
  
  L3 agent function should remain the same, as one driver implementation of
  L3 router plugin.
  
  My understanding is your lbaas driver can be running on top of L3 agent and
  LVS' own routing services. Is my understanding correct?
  
  Thanks,
  Gary
  
  
  On Thu, Oct 24, 2013 at 3:16 PM, Itsuro ODA o...@valinux.co.jp wrote:
  
   Hi,
  
   We are going to implement 2-arm type lbaas using LVS, and have submitted
   the following BPs.
  
  
   https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion
   https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver
   https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features
  
   Maybe the first one is same as yours.
   We are happy if we just concentrate making a provider driver.
  
   Thanks.
   Itsuro Oda
  
   On Thu, 24 Oct 2013 11:56:53 -0700
   Gary Duan gd...@varmour.com wrote:
  
Hi,
   
I've registered a BP for L3 router service integration with service
framework.
   
https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type
   
In general, the implementation will align with how LBaaS is integrated
   with
the framework. One consideration we heard from several team members is 
to
be able to support vendor specific features and extensions in the 
service
plugin.
   
Any comment is welcome.
   
Thanks,
Gary
  
   --
   Itsuro ODA o...@valinux.co.jp
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 -- 
 Itsuro ODA o...@valinux.co.jp
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [OpenStackfr] Organisation, mailing-list...

2013-10-25 Thread Patrick Petit


FYI,
Patrick

 Original Message 
Subject:	[OpenStackfr] Organisation, mailing-list...
Date:   Fri, 25 Oct 2013 10:20:26 +0200
From:   Jonathan Le Lous jonathan.lel...@gmail.com
To: organisat...@listes.openstack.fr
CC: 	Nicolas Thomas nicolas.tho...@canonical.com, Dave Neary 
dne...@redhat.com, Patrick Petit patrick.pe...@bull.net, Sylvain 
Bauza sylv...@bauza.org, Christophe Sauthier 
christophe.sauth...@objectif-libre.com, Thierry Carrez 
thie...@openstack.org, Dave Neary dne...@gnome.org, Raphael Ferreira 
r.ferre...@enovance.com, Yannick Foeillet 
yannick.foeil...@alterway.fr, Loic Dachary l...@dachary.org, Julien 
Danjou jul...@danjou.info, Michael Bright mjbrigh...@gmail.com, 
Bruno Seznec brunocons...@gmail.com, Adrien Cunin 
adr...@adriencunin.fr, Antoine Castaing 
antoine.casta...@hedera-technology.com, Bernard Paques 
bernard.paq...@gmail.com, Jean-Pierre Dion 
jean-pierre.d...@bull.net, Nicolas Barcet n...@enovance.com, 
Philippe Desmaison philippe.desmai...@suse.com, Thierry Lefort 
thierry.lef...@gmail.com, eric-olivier.lamey-...@cloudwatt.com, 
tho...@goirand.fr, vfre...@redhat.com, yac...@alyseo.com, 
vincent.u...@suse.com, Jérémie Bourdoncle 
jeremie.bourdon...@hederatech.com, Chmouel Boudjnah 
chmo...@chmouel.com, Stephane EVEILLARD 
stephane.eveill...@gmail.com, emil...@macchi.pro, 
francois.bur...@cloudwatt.com, lebru...@googlemail.com, 
bouachria.ma...@orange.fr, pomm...@wanadoo.fr




Hello everyone,

The French OpenStack community is moving forward with new, more technical 
meetups, informal get-togethers, articles... All of it is organized in a 
community-driven way :-)


To take part, just a quick reminder: we now have several mailing lists for 
discussing OpenStack in France and/or helping to organize the community:


- France mailing list, to stay informed: 
https://wiki.openstack.org/wiki/OpenStackUsersGroup#France


- Organisation mailing list, for those who want to get concretely involved 
in running the community: 
http://listes.openstack.fr/listinfo/organisation


Spread the word!

See you soon!
Freely,
Jonathan

Jonathan Le Lous

Board member of April http://www.april.org (English: http://www.april.org/en/)

http://fr.linkedin.com/in/jonathanlelous/

Blog: http://blog.itnservice.net/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-25 Thread Joe Gordon
On Oct 24, 2013 9:14 PM, Robert Collins robe...@robertcollins.net wrote:

 On 24 October 2013 04:33, Russell Bryant rbry...@redhat.com wrote:
  Greetings,
 
  At the last Nova meeting we started talking about some updates to the
  Nova blueprint process for the Icehouse cycle.  I had hoped we could
  talk about and finalize this in a Nova design summit session on Nova
  Project Structure and Process [1], but I think we need to push forward
  on finalizing this as soon as possible so that it doesn't block current
  work being done.

 Cool

  Here is a first cut at the process.  Let me know what you think is
  missing or should change.  I'll get the result of this thread posted on
  the wiki.
 
  1) Proposing a Blueprint
 
  Proposing a blueprint for Nova is not much different than other
  projects.  You should follow the instructions here:
 
  https://wiki.openstack.org/wiki/Blueprints
 
  The particular important step that seems to be missed by most is:
 
  Once it is ready for PTL review, you should set:
 
  Milestone: Which part of the release cycle you think your work will be
  proposed for merging.
 
  That is really important.  Due to the volume of Nova blueprints, it
  probably will not be seen until you do this.

 The other thing I'm seeing some friction on is 'significant features'
 : it sometimes feels like folk are filing blueprints for everything
 that isn't 'the code crashed' style problems, and while I appreciate
 folk wanting to work within the system, blueprints are a heavyweight
 tool, primarily suited for things that require significant
 coordination.

  2) Blueprint Review Team
 
  Ensuring blueprints get reviewed is one of the responsibilities of the
  PTL.  However, due to the volume of Nova blueprints, it's not practical
  for me to do it alone.  A team of people (nova-drivers) [2], a subset of
  nova-core, will be doing blueprint reviews.

 Why a subset of nova-core? With nova-core defined as 'knows the code
 well *AND* reviews a lot', I can see that those folk are in a position
 to spot a large class of design defects. However, there are plenty of
 folk with expertise in e.g. SOA, operations, deployment @ scale, who
 are not nova-core but who will spot plenty of issues. Is there some
 way they can help out?

  By having more people reviewing blueprints, we can do a more thorough
  job and have a higher quality result.
 
  Note that even though there is a nova-drivers team, *everyone* is
  encouraged to participate in the review process by providing feedback on
  the mailing list.

 I'm not sure about this bit here: blueprints don't have the spec
 content, usually that's in an etherpad; etherpads are editable by
 everyone - wouldn't it be better to keep the conversation together? I
 guess part of my concern here comes back to the (ab)use of blueprints
 for shallow features.

  3) Blueprint Review Criteria
 
  Here are some things that the team reviewing blueprints should look for:
 
  The blueprint ...
 
   - is assigned to the person signing up to do the work
 
   - has been targeted to the milestone when the code is
 planned to be completed
 
   - is an appropriate feature for Nova.  This means it fits with the
 vision for Nova and OpenStack overall.  This is obviously very
 subjective, but the result should represent consensus.
 
   - includes enough detail to be able to complete an initial design
 review before approving the blueprint. In many cases, the design
 review may result in a discussion on the mailing list to work
 through details. A link to this discussion should be left in the
 whiteboard of the blueprint for reference.  This initial design
 review should be completed before the blueprint is approved.
 
   - includes information that describes the user impact (or lack of).
 Between the blueprint and text that comes with the DocImpact flag [3]
 in commits, the docs team should have *everything* they need to
 thoroughly document the feature.

 I'd like to add:
  - has an etherpad with the design (the blueprint summary has no
 markup and is a poor place for capturing the design).

  Once the review has been complete, the blueprint should be marked as
  approved and the priority should be set.  A set priority is how we know
  from the blueprint list which ones have already been reviewed.


  4) Blueprint Prioritization
 
  I would like to do a better job of using priorities in Icehouse.  The
  priority field serves a couple of purposes:
 
- helps reviewers prioritize their time
 
- helps set expectations for the submitter for how reviewing this
  work stacks up against other things
 
  In the last meeting we discussed an idea that I think is worth trying at
  least for icehouse-1 to see if we like it or not.  The idea is that
  *every* blueprint starts out at a Low priority, which means best
  effort, but no promises.  For a blueprint to get prioritized higher, it
  should have 2 nova-core members signed up to review the resulting code.

Re: [openstack-dev] Does DB schema hygiene warrant long migrations?

2013-10-25 Thread Joe Gordon
On Oct 24, 2013 11:38 PM, Michael Still mi...@stillhq.com wrote:

 On Fri, Oct 25, 2013 at 9:07 AM, Boris Pavlovic bo...@pavlovic.me wrote:
  Johannes,
 
  +1, purging should help here a lot.

 Sure, but my point is more:

  - pruning isn't done by the system automatically, so we have to
 assume it never happens

I thought this was done to preserve records, as at the time ceilometer
didn't exist and we wanted to make sure nova kept records by default. And
we couldn't assume deployers were using DB snapshots either. But once we
can say "don't use the nova DB for record keeping", we can start
automatically pruning, or optionally move over to hard delete.
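
As a rough illustration of what automatic pruning could look like, here is a
sketch against a Nova-style soft-delete schema (a table with deleted and
deleted_at columns); the table name and retention period are placeholders,
not a proposal for the actual implementation:

    from datetime import datetime, timedelta

    import sqlalchemy as sa

    def prune_soft_deleted(engine, table_name, retention_days=90):
        # Hard-delete rows that were soft-deleted more than N days ago.
        meta = sa.MetaData()
        table = sa.Table(table_name, meta, autoload_with=engine)
        cutoff = datetime.utcnow() - timedelta(days=retention_days)
        with engine.begin() as conn:
            result = conn.execute(
                table.delete().where(
                    sa.and_(table.c.deleted != 0,
                            table.c.deleted_at < cutoff)))
            return result.rowcount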

I think there have been some implicit assumptions that all large
deployments know about this and manually prune or do something similar.

IMHO OpenStack in general needs a stronger record keeping story. As these
issues will (or do?) affect other services besides nova.


  - we need to have a clearer consensus about what we think the maximum
 size of a nova deployment is. Are we really saying we don't support
 nova installs with a million instances? If so what is the maximum
 number of instances we're targeting? Having a top level size in mind
 isn't a bad thing, but I don't think we have one at the moment that we
 all agree on. Until that happens I'm going to continue targeting the
 largest databases people have told me about (plus a fudge factor).


Agreed, I am surprised that the 30 million entry DB people aren't pruning
somehow (so I assume). I wonder what that DB looks like with pruning.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-25 Thread Nikola Đipanov
On 23/10/13 17:33, Russell Bryant wrote:
 
 4) Blueprint Prioritization
 
 I would like to do a better job of using priorities in Icehouse.  The
  priority field serves a couple of purposes:
 
   - helps reviewers prioritize their time
 
   - helps set expectations for the submitter for how reviewing this
 work stacks up against other things
 
 In the last meeting we discussed an idea that I think is worth trying at
 least for icehouse-1 to see if we like it or not.  The idea is that
 *every* blueprint starts out at a Low priority, which means best
 effort, but no promises.  For a blueprint to get prioritized higher, it
 should have 2 nova-core members signed up to review the resulting code.
 

All of the mentioned seem like awesome ideas that I +1 wholeheartedly. A
comment on this point though.

I don't have the numbers but I have a feeling that what happened in
Havana was that a lot of blueprints slipped until the time for feature
freeze. Reviewers thought it was a worthwhile feature at that point (this
was, I feel, when *actual* blueprint reviews are done - whatever the
process says. It's natural too - once the code is there so much more is
clear) and wanted to get it in - but it was late in the cycle so we
ended up accepting things that could have been done better.

It would be good to leverage the blueprint process to make people post
code as soon as possible IMHO. How about making posted code a pre-req
for core reviewers to sign up for them? Thoughts?

Thanks,

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Thoughts please on how to address a problem with multiple deletes leading to a nova-compute thread pool problem

2013-10-25 Thread Day, Phil
Hi Folks,

We're very occasionally seeing problems where a thread processing a create 
hangs (and we've seen this when talking to Cinder and Glance).  Whilst those 
issues need to be hunted down in their own right, they do highlight what seems 
to me to be a weakness in the processing of delete requests that I'd like to 
get some feedback on.

Delete is the one operation that is allowed regardless of the instance state 
(since it's a one-way operation, and users should always be able to free up 
their quota).   However when we get a create thread hung in one of these 
states, the delete requests will also block when they hit the manager, as they 
are synchronized on the uuid.   Because the user making the delete request 
doesn't see anything happen they tend to submit more delete requests.   The 
service is still up, so these go to the compute manager as well, and 
eventually all of the threads will be waiting for the lock, and the compute 
manager will stop consuming new messages.

The problem isn't limited to deletes - although in most cases the change of 
state in the API means that you have to keep making different calls to get past 
the state checker logic to do it with an instance stuck in another state.   
Users also seem to be more impatient with deletes, as they are trying to free 
up quota for other things. 

So while I know that we should never get a thread into a hung state in the 
first place, I was wondering about one of the following approaches to address 
just the delete case:

i) Change the delete call on the manager so it doesn't wait for the uuid lock.  
Deletes should be coded so that they work regardless of the state of the VM, 
and other actions should be able to cope with a delete being performed from 
under them.  There is of course no guarantee that the delete itself won't block 
as well. 

ii) Record in the API server that a delete has been started (maybe enough to 
use the task state being set to DELETING in the API if we're sure this doesn't 
get cleared), and add a periodic task in the compute manager to check for and 
delete instances that have been in a DELETING state for more than some timeout. 
Then the API, knowing that the delete will be processed eventually, can just 
no-op any further delete requests.

iii) Add some hook into the ServiceGroup API so that the timer could depend on 
getting a free thread from the compute manager pool (i.e. run some no-op task) - 
so that if there are no free threads then the service becomes down. That would 
(eventually) stop the scheduler from sending new requests to it, and make 
deletes be processed in the API server, but won't of course help with commands 
for other instances on the same host.

iv) Move away from having a general topic and thread pool for all requests, and 
start a listener on an instance-specific topic for each running instance on a 
host (leaving the general topic and pool just for creates and other 
non-instance calls like the hypervisor API).   Then a blocked task would only 
affect requests for a specific instance.

I'm tending towards ii) as a simple and pragmatic solution in the near term, 
although I like both iii) and iv) as being generally good enhancements - 
but iv) in particular feels like a pretty seismic change.
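
To make ii) a bit more concrete, the periodic task could look roughly like 
this (the DB call and force_delete_instance below are illustrative 
placeholders rather than the real compute manager API):

    from datetime import datetime, timedelta

    DELETE_TIMEOUT = timedelta(minutes=10)

    def reap_stuck_deletes(context, db, compute_manager):
        # Periodic task: finish deletes that have been pending too long.
        cutoff = datetime.utcnow() - DELETE_TIMEOUT
        instances = db.instance_get_all_by_filters(
            context, {'task_state': 'deleting'})
        for instance in instances:
            if instance['updated_at'] and instance['updated_at'] < cutoff:
                # Bypass the per-instance lock: delete has to work whatever
                # state the instance is stuck in.
                compute_manager.force_delete_instance(context, instance)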

Thoughts please,

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core

2013-10-25 Thread Martyn Taylor

+1.  Great work Lucas!

On 25/10/13 09:16, Yuriy Zveryanskyy wrote:

+1 for Lucas.

On 10/25/2013 03:18 AM, Devananda van der Veen wrote:

Hi all,

I'd like to nominate Lucas Gomes for ironic-core. He's been 
consistently doing reviews for several months and has led a lot of 
the effort on the API and client libraries.


Thanks for the great work!
-Deva

http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughts please on how to address a problem with multiple deletes leading to a nova-compute thread pool problem

2013-10-25 Thread Robert Collins
On 25 October 2013 23:46, Day, Phil philip@hp.com wrote:
 Hi Folks,

 We're very occasionally seeing problems where a thread processing a create 
 hangs (and we've seen when taking to Cinder and Glance).  Whilst those issues 
 need to be hunted down in their own rights, they do show up what seems to me 
 to be a weakness in the processing of delete requests that I'd like to get 
 some feedback on.

 Delete is the one operation that is allowed regardless of the Instance state 
 (since it's a one-way operation, and users should always be able to free up 
 their quota).   However when we get a create thread hung in one of these 
 states, the delete requests when they hit the manager will also block as they 
 are synchronized on the uuid.   Because the user making the delete request 
 doesn't see anything happen they tend to submit more delete requests.   The 
 Service is still up, so these go to the computer manager as well, and 
 eventually all of the threads will be waiting for the lock, and the compute 
 manager will stop consuming new messages.

 The problem isn't limited to deletes - although in most cases the change of 
 state in the API means that you have to keep making different calls to get 
 past the state checker logic to do it with an instance stuck in another 
 state.   Users also seem to be more impatient with deletes, as they are 
 trying to free up quota for other things.

 So while I know that we should never get a thread into a hung state into the 
 first place, I was wondering about one of the following approaches to address 
 just the delete case:

 i) Change the delete call on the manager so it doesn't wait for the uuid 
 lock.  Deletes should be coded so that they work regardless of the state of 
 the VM, and other actions should be able to cope with a delete being 
 performed from under them.  There is of course no guarantee that the delete 
 itself won't block as well.

I like this.

 ii) Record in the API server that a delete has been started (maybe enough to 
 use the task state being set to DELETEING in the API if we're sure this 
 doesn't get cleared), and add a periodic task in the compute manager to check 
 for and delete instances that are in a DELETING state for more than some 
 timeout. Then the API, knowing that the delete will be processes eventually 
 can just no-op any further delete requests.

There may be multiple API servers; global state in an API server seems
fraught with issues.

 iii) Add some hook into the ServiceGroup API so that the timer could depend 
 on getting a free thread from the compute manager pool (ie run some no-op 
 task) - so that of there are no free threads then the service becomes down. 
 That would (eventually) stop the scheduler from sending new requests to it, 
 and make deleted be processed in the API server but won't of course help with 
 commands for other instances on the same host.

This seems a little kludgy to me.

 iv) Move away from having a general topic and thread pool for all requests, 
 and start a listener on an instance specific topic for each running instance 
 on a host (leaving the general topic and pool just for creates and other 
 non-instance calls like the hypervisor API).   Then a blocked task would only 
 affect request for a specific instance.

That seems to suggest instance  # topics? Aieee. I don't think that
solves the problem anyway, because either a) you end up with a tonne
of threads, or b) you have a multiplexing thread with the same
potential issue.

You could more simply just have a dedicated thread pool for deletes,
and have no thread limit on the pool. Of course, this will fail when
you OOM :). You could do a dict with instance -> thread for deletes
instead, without creating lots of queues.

 I'm tending towards ii) as a simple and pragmatic solution in the near term, 
 although I like both iii) and iv) as being both generally good enhancments - 
 but iv) in particular feels like a pretty seismic change.


My inclination would be (i) - make deletes nonblocking idempotent with
lazy cleanup if resources take a while to tear down.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Dan Prince
-1

Slight preference for keeping them. I personally would go the other way and 
just add them everywhere.
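
For context, the modeline in question is typically a single comment near the 
top of each file, something like:

    # vim: tabstop=4 shiftwidth=4 softtabstop=4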

- Original Message -
 From: Joe Gordon joe.gord...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Thursday, October 24, 2013 8:38:57 AM
 Subject: [openstack-dev]  Remove vim modelines?
 
  Since the beginning of OpenStack we have had vim modelines all over the
  codebase, but after seeing this patch
  https://review.openstack.org/#/c/50891/
  I took a further look into vim modelines and think we should remove
  them.
 Before going any further, I should point out these lines don't bother me
 too much but I figured if we could get consensus, then we could shrink our
 codebase by a little bit.

I'm not sure removing these counts as a meaningful codebase reduction. These 
lines can mostly be ignored. Likewise, if the foundation required us to double 
or triple our Apache license headers I would count that as a codebase increase.

 
 Sidenote: This discussion is being moved to the mailing list because it
 'would
 be better to have a mailing list thread about this rather than bits and
 pieces of discussion in gerrit' as this change requires multiple patches.
 https://review.openstack.org/#/c/51295/.
 
 
 Why remove them?
 
 * Modelines aren't supported by default in debian or ubuntu due to security
 reasons: https://wiki.python.org/moin/Vim
 * Having modelines for vim means if someone wants we should support
 modelines for emacs (
 http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables)
 etc. as well.  And having a bunch of headers for different editors in each
 file seems like extra overhead.
 * There are other ways of making sure tabstop is set correctly for python
 files, see https://wiki.python.org/moin/Vim.  I am a vim user myself and
 have never used modelines.
 * We have vim modelines in only 828 out of 1213 python files in nova (68%),
 so if anyone is using modelines today, then it only works 68% of the time
 in nova
 * Why have the same config 828 times for one repo alone?  This violates the
 DRY principle (Don't Repeat Yourself).
 
 
 Related Patches:
 https://review.openstack.org/#/c/51295/
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z
 
 best,
 Joe
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughts please on how to address a problem with multiple deletes leading to a nova-compute thread pool problem

2013-10-25 Thread Day, Phil
There may be multiple API servers; global state in an API server seems fraught 
with issues.

No, the state would be in the DB (it would either be a task_state of Deleting 
or some new delete_started_at timestamp).

I agree that i) is nice and simple - it just has the minor risks that the 
delete itself could hang, and/or that we might find some other issues with bits 
of the code that can't cope at the moment with the instance being deleted from 
underneath them.

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: 25 October 2013 12:21
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Thoughts please on how to address a problem 
with multiple deletes leading to a nova-compute thread pool problem

On 25 October 2013 23:46, Day, Phil philip@hp.com wrote:
 Hi Folks,

 We're very occasionally seeing problems where a thread processing a create 
 hangs (and we've seen when taking to Cinder and Glance).  Whilst those issues 
 need to be hunted down in their own rights, they do show up what seems to me 
 to be a weakness in the processing of delete requests that I'd like to get 
 some feedback on.

 Delete is the one operation that is allowed regardless of the Instance state 
 (since it's a one-way operation, and users should always be able to free up 
 their quota).   However when we get a create thread hung in one of these 
 states, the delete requests when they hit the manager will also block as they 
 are synchronized on the uuid.   Because the user making the delete request 
 doesn't see anything happen they tend to submit more delete requests.   The 
 Service is still up, so these go to the computer manager as well, and 
 eventually all of the threads will be waiting for the lock, and the compute 
 manager will stop consuming new messages.

 The problem isn't limited to deletes - although in most cases the change of 
 state in the API means that you have to keep making different calls to get 
 past the state checker logic to do it with an instance stuck in another 
 state.   Users also seem to be more impatient with deletes, as they are 
 trying to free up quota for other things.

 So while I know that we should never get a thread into a hung state into the 
 first place, I was wondering about one of the following approaches to address 
 just the delete case:

 i) Change the delete call on the manager so it doesn't wait for the uuid 
 lock.  Deletes should be coded so that they work regardless of the state of 
 the VM, and other actions should be able to cope with a delete being 
 performed from under them.  There is of course no guarantee that the delete 
 itself won't block as well.

I like this.

 ii) Record in the API server that a delete has been started (maybe enough to 
 use the task state being set to DELETEING in the API if we're sure this 
 doesn't get cleared), and add a periodic task in the compute manager to check 
 for and delete instances that are in a DELETING state for more than some 
 timeout. Then the API, knowing that the delete will be processes eventually 
 can just no-op any further delete requests.

There may be multiple API servers; global state in an API server seems fraught 
with issues.

 iii) Add some hook into the ServiceGroup API so that the timer could depend 
 on getting a free thread from the compute manager pool (ie run some no-op 
 task) - so that of there are no free threads then the service becomes down. 
 That would (eventually) stop the scheduler from sending new requests to it, 
 and make deleted be processed in the API server but won't of course help with 
 commands for other instances on the same host.

This seems a little kludgy to me.

 iv) Move away from having a general topic and thread pool for all requests, 
 and start a listener on an instance specific topic for each running instance 
 on a host (leaving the general topic and pool just for creates and other 
 non-instance calls like the hypervisor API).   Then a blocked task would only 
 affect request for a specific instance.

That seems to suggest instance  # topics? Aieee. I don't think that solves the 
problem anyway, because either a) you end up with a tonne of threads, or b) you 
have a multiplexing thread with the same potential issue.

You could more simply just have a dedicated thread pool for deletes, and have 
no thread limit on the pool. Of course, this will fail when you OOM :). You 
could do a dict with instance - thread for deletes instead, without creating 
lots of queues.

 I'm tending towards ii) as a simple and pragmatic solution in the near term, 
 although I like both iii) and iv) as being both generally good enhancments - 
 but iv) in particular feels like a pretty seismic change.


My inclination would be (i) - make deletes nonblocking idempotent with lazy 
cleanup if resources take a while to tear down.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud


[openstack-dev] [qa] Issue with tests of host admin api

2013-10-25 Thread David Kranz
A patch was submitted with some new tests of this API, 
https://review.openstack.org/#/c/49778/. I gave a -1 because if a 
negative test to shut down a host fails, a compute node will be shut down. 
The author thinks this test should be part of tempest. My issue was that 
we should not have tempest tests for APIs that:


1. May corrupt the underlying system (that is part of the reason we 
moved whitebox out)
2. Can have only negative tests because positive ones could prevent 
other tests from executing


Thoughts?

 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Savanna on Bare Metal and Base Requirements

2013-10-25 Thread Erik Bergenholtz
Travis,

Sounds like your environment should work fine. There are no special 
requirements beyond Grizzly/Havana. Following the installation guides at 
https://savanna.readthedocs.org/en/latest/ is relatively straightforward. If 
you are using neutron networking, you will need to rely on public IPs until 
we complete an enhancement for working around this limitation.

Erik


On Oct 25, 2013, at 12:46 AM, Tripp, Travis S travis.tr...@hp.com wrote:

 Hello Savanna team,
  
 I’ve just skimmed through the online documentation and I’m very interested in 
 this project. We have a grizzly environment with all the latest patches as 
 well as several Havana backports applied. We are doing bare metal 
 provisioning through Nova.  It is limited to flat networking.
  
 Would Savanna work in this environment?  What are the requirements?  What are 
 the minimum set of API calls that need to supported (for example, we can’t 
 support snapshots)?
  
 Thank you,
 Travis
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Issue with tests of host admin api

2013-10-25 Thread Sean Dague

On 10/25/2013 08:39 AM, David Kranz wrote:

A patch was submitted with some new tests of this api
https://review.openstack.org/#/c/49778/. I gave a -1 because if a
negative test to shutdown a host fails, a compute node will be shutdown.
The author thinks this test should be part of tempest. My issue was that
we should not have tempest tests for apis that:

1. May corrupt the underlying system (that is part of the reason we
moved whitebox out)


I really felt the reason we moved whitebox out is that OpenStack 
internals move way too fast to have them validated by an external 
system. We have defined surfaces (i.e. the API) and that should be the focus.



2. Can have only negative tests because positive ones could prevent
other tests from executing


Honestly, trying to shut down the host with invalid credentials seems 
like a fair test. Because if we fail, we're going to know really quick 
when everything blackholes.


In the gate this is slightly funnier because tempest is running on the 
same machine as the host; however, it seems like a sane check to have in there.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Savanna on Bare Metal and Base Requirements

2013-10-25 Thread Dmitry Mescheryakov
Hello Travis,

We haven't researched Savanna on bare metal, though we considered it some
time ago. I know little about bare metal provisioning, so I am rather unsure
what problems you might experience.

My main concern is images: does bare metal provisioning work with qcow2
images? The Vanilla plugin (which installs vanilla Apache Hadoop) requires a
pre-built Linux image with Hadoop, so if qcow2 does not work for bare
metal, you will need to somehow build images in the required format. On the
other hand the HDP plugin (which installs Hortonworks Data Platform) does not
require pre-built images, but works only on Red Hat OSes, as far as I know.

Another concern: does bare metal support cloud-init? Savanna relies on it
and reimplementing that functionality some other way might take some time.

As for your concern on which API calls Savanna makes: it is a pretty small
list of requests. Mainly authentication with keystone, basic operations
with VMs via nova (create, list, terminate), basic operations with images
(list, set/get attributes). Snapshots are not used. That is for basic
functionality. Other than that, some features might require additional API
calls. For instance Cinder support naturally requires calls for volume
create/list/delete.

Thanks,

Dmitry



2013/10/25 Tripp, Travis S travis.tr...@hp.com

  Hello Savanna team,

 ** **

 I’ve just skimmed through the online documentation and I’m very interested
 in this project. We have a grizzly environment with all the latest patches
 as well as several Havana backports applied. We are are doing bare metal
 provisioning through Nova.  It is limited to flat networking.

 ** **

 Would Savanna work in this environment?  What are the requirements?  What
 are the minimum set of API calls that need to supported (for example, we
 can’t support snapshots)?

 ** **

 Thank you,

 Travis

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-25 Thread Russell Bryant
It would be helpful if you could follow the reply style being used.  :-)

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com] 
 Sent: October-24-13 5:08 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Blueprint review process
 
 On 10/24/2013 10:52 AM, Gary Kotton wrote:


 On 10/24/13 4:46 PM, Dan Smith d...@danplanet.com wrote:

 In the last meeting we discussed an idea that I think is worth 
 trying at least for icehouse-1 to see if we like it or not.  The 
 idea is that
 *every* blueprint starts out at a Low priority, which means best 
 effort, but no promises.  For a blueprint to get prioritized 
 higher, it should have 2 nova-core members signed up to review the 
 resulting code.

 Huge +1 to this. I'm in favor of the whole plan, but specifically the 
 prioritization piece is very important, IMHO.

 I too am in favor of the idea. It is just not clear how 2 Nova cores 
 will be signed up.
 
 Good point, there was no detail on that.  I propose just comments on the 
 blueprint whiteboard.  It can be something simple like this to indicate that 
 Dan and I have agreed to review the code for something:
 
 nova-core reviewers: russellb, dansmith

On 10/24/2013 06:17 PM, Alan Kavanagh wrote:
 Is this really a viable solution?
 I believe it's more democratic to ensure everyone gets a chance to
 present the blueprint someone has spent time to write. This way no
 favoritism or biased view will ever take place and we let the
 community gauge the interest.

I don't really understand.  The key to this is that it really doesn't
change anything for the end result.  It just makes the blueprint list a
better reflection of what was already happening.

Note that the prioritization changes have nothing to do with whether
blueprints are accepted or not.  That's a separate issue.  That part
isn't changing much beyond having more people helping review blueprints
and having more explicit blueprint review criteria.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Savanna on Bare Metal and Base Requirements

2013-10-25 Thread Monty Taylor


On 10/25/2013 09:22 AM, Dmitry Mescheryakov wrote:
 Hello Travis,
 
 We didn't researched Savanna on bare metal, though we considered it some
 time ago. I know little of bare metal provisioning, so I am rather
 unsure what problems you might experience.
 
 My main concern are images: does bare metal provisioning work with qcow2
 images? Vanilla plugin (which installs Vanilla Apache Hadoop) requires a
 pre-built Linux images with Hadoop, so if qcow2 does not work for bare
 metal, you will need to somehow build images in required format.. On the
 other hand HDP plugin (which installs Hortonworks Data Platform), does
 not require pre-built images, but works only on Red Hat OSes, as far as
 I know.

Absolutely works with qcow2 images. We should start a conversation about
savanna and diskimage-builder too, btw.

 Another concern: does bare metal support cloud-init? Savanna relies on
 it and reimplementing that functionality some other way might take some
 time.

Absolutely. (as long as the image has cloud-init in it of course)

 As for your concern on which API calls Savanna makes: it is a pretty
 small list of requests. Mainly authentication with keystone, basic
 operations with VMs via nova (create, list, terminate), basic operations
 with images (list, set/get attributes). Snapshots are not used. That is
 for basic functionality. Other than that, some features might require
 additional API calls. For instance Cinder support naturally requires
 calls for volume create/list/delete.

I'm excited to see an exploration of savana on nova-baremetal.
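For reference, the small set of calls Dmitry describes maps roughly onto
python-novaclient like this (just a sketch with placeholder values, not what
savanna actually does internally):

    from novaclient.v1_1 import client

    # USERNAME etc. are placeholders, as in the novaclient docs
    nova = client.Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)

    # basic VM operations: create, list, terminate
    server = nova.servers.create(name='hadoop-worker-1',
                                 image=IMAGE_ID, flavor=FLAVOR_ID)
    nova.servers.list()
    nova.servers.delete(server)

    # basic image operations: list images and read their attributes
    images = nova.images.list()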

 
 2013/10/25 Tripp, Travis S travis.tr...@hp.com
 
 Hello Savanna team,
 
 
 I’ve just skimmed through the online documentation and I’m very
 interested in this project. We have a grizzly environment with all
 the latest patches as well as several Havana backports applied. We
 are doing bare metal provisioning through Nova.  It is limited
 to flat networking.
 
 
 Would Savanna work in this environment?  What are the requirements? 
 What is the minimum set of API calls that needs to be supported (for
 example, we can’t support snapshots)?
 
 
 Thank you,
 
 Travis
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Issue with tests of host admin api

2013-10-25 Thread David Kranz

On 10/25/2013 09:10 AM, Sean Dague wrote:

On 10/25/2013 08:39 AM, David Kranz wrote:

A patch was submitted with some new tests of this API
https://review.openstack.org/#/c/49778/. I gave a -1 because if a
negative test to shut down a host fails, a compute node will be shut down.
The author thinks this test should be part of tempest. My issue was that
we should not have tempest tests for APIs that:

1. May corrupt the underlying system (that is part of the reason we
moved whitebox out)


I really felt the reason we moved out whitebox is that OpenStack 
internals move way too fast to have them validated by an external 
system. We have defined surfaces (i.e. API) and that should be the focus.

It was also because we were side-effecting the database out-of-band.



2. Can have only negative tests because positive ones could prevent
other tests from executing


Honestly, trying to shut down the host with invalid credentials seems 
like a fair test. Because if we fail, we're going to know really quick 
when everything blackholes.


In the gate this is slightly funnier because tempest is running on the 
same place as the host, however it seems like a sane check to have in 
there.


-Sean


OK, I don't feel strongly about it. Just seemed like a potential landmine.

 -David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Savanna on Bare Metal and Base Requirements

2013-10-25 Thread Erik Bergenholtz

On Oct 25, 2013, at 9:22 AM, Dmitry Mescheryakov dmescherya...@mirantis.com 
wrote:

 Hello Travis,
 
 We haven't researched Savanna on bare metal, though we considered it some time 
 ago. I know little of bare metal provisioning, so I am rather unsure what 
 problems you might experience.
 
 My main concern is images: does bare metal provisioning work with qcow2 
 images? The Vanilla plugin (which installs vanilla Apache Hadoop) requires 
 pre-built Linux images with Hadoop, so if qcow2 does not work for bare metal, 
 you will need to somehow build images in the required format. On the other hand, 
 the HDP plugin (which installs Hortonworks Data Platform) does not require 
 pre-built images, but works only on Red Hat OSes, as far as I know.
The HDP plugin will support SUSE and Debian in the future, but for now HDP only 
provides pre-built CentOS images.
 
 Another concern: does bare metal support cloud-init? Savanna relies on it and 
 reimplementing that functionality some other way might take some time.
 
 As for your concern on which API calls Savanna makes: it is a pretty small 
 list of requests. Mainly authentication with keystone, basic operations with 
 VMs via nova (create, list, terminate), basic operations with images (list, 
 set/get attributes). Snapshots are not used. That is for basic functionality. 
 Other than that, some features might require additional API calls. For 
 instance Cinder support naturally requires calls for volume 
 create/list/delete.
 
 Thanks,
 
 Dmitry  
 
 
 
 2013/10/25 Tripp, Travis S travis.tr...@hp.com
 Hello Savanna team,
 
  
 
 I’ve just skimmed through the online documentation and I’m very interested in 
 this project. We have a grizzly environment with all the latest patches as 
 well as several Havana backports applied. We are doing bare metal 
 provisioning through Nova.  It is limited to flat networking.
 
  
 
 Would Savanna work in this environment?  What are the requirements?  What is 
 the minimum set of API calls that needs to be supported (for example, we can’t 
 support snapshots)?
 
  
 
 Thank you,
 
 Travis
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Savanna on Bare Metal and Base Requirements

2013-10-25 Thread Robert Collins
On 26 October 2013 02:30, Monty Taylor mord...@inaugust.com wrote:

 Absolutely works with qcow2 images. We should start a conversation about
 savana and diskimage-builder too, btw.

Savanna. Also it already uses diskimage-builder :)

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Issue with tests of host admin api

2013-10-25 Thread Christopher Yeoh


On 26/10/2013, at 12:01 AM, David Kranz dkr...@redhat.com wrote:

 On 10/25/2013 09:10 AM, Sean Dague wrote:
 On 10/25/2013 08:39 AM, David Kranz wrote:
 A patch was submitted with some new tests of this api
 https://review.openstack.org/#/c/49778/. I gave a -1 because if a
 negative test to shutdown a host fails, a compute node will be shutdown.
 The author thinks this test should be part of tempest. My issue was that
 we should not have tempest tests for apis that:
 
 1. May corrupt the underlying system (that is part of the reason we
 moved whitebox out)
 
 I really felt the reason we moved out whitebox is that OpenStack internals 
 move way too fast to have them validated by an external system. We have 
 defined surfaces (i.e. API) and that should be the focus.
 It was also because we were side-effecting the database out-of-band.
 
 2. Can have only negative tests because positive ones could prevent
 other tests from executing
 
 Honestly, trying to shut down the host with invalid credentials seems like a 
 fair test. Because if we fail, we're going to know really quick when 
 everything blackholes.
 
 In the gate this is slightly funnier because tempest is running on the same 
 place as the host, however it seems like a sane check to have in there.
 
-Sean
 OK, I don't feel strongly about it. Just seemed like a potential landmine.
 

I think this is something we want to test in the gate. But perhaps there could 
be a tag for these sorts of test cases that some people may not want to risk 
running on their system so they can exclude them easily?
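Something as small as an opt-in guard would do it. A minimal sketch of the idea 
(the environment variable and test names here are made up for illustration, not 
tempest's actual configuration):

    import os
    import unittest

    # Deployers opt in explicitly; the default keeps the risky tests skipped.
    RUN_DISRUPTIVE = os.environ.get('RUN_DISRUPTIVE_TESTS', 'false') == 'true'

    class HostsAdminNegativeTest(unittest.TestCase):

        @unittest.skipUnless(RUN_DISRUPTIVE,
                             'disruptive host tests are disabled')
        def test_shutdown_host_with_invalid_creds(self):
            # Expect the hosts API to reject the request; if it misbehaves,
            # this could actually power off a compute node, hence the guard.
            pass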

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Thursday meeting follow-up

2013-10-25 Thread Eugene Nikanorov
Hi folks,

Thanks to everyone who joined the meeting on Thursday.
We've discussed desired features and changes in LBaaS service and
identified dependencies between them.

You can find them on etherpad:
https://etherpad.openstack.org/p/neutron-icehouse-lbaas
Most of the features are also captured in the document shared by Sam:
https://docs.google.com/document/d/1Vjm57lh7PnXDelOy-VxsJkzc8QRiNN368sS11ePs_vA/edit?pli=1#
IRC meeting logs: http://paste.openstack.org/show/49561/

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Issue with tests of host admin api

2013-10-25 Thread Lingxian Kong
Nice idea, Chris! +1 from me. And thanks to dkranz for bringing this to the
mailing list.


2013/10/25 Christopher Yeoh cbky...@gmail.com



 On 26/10/2013, at 12:01 AM, David Kranz dkr...@redhat.com wrote:

  On 10/25/2013 09:10 AM, Sean Dague wrote:
  On 10/25/2013 08:39 AM, David Kranz wrote:
  A patch was submitted with some new tests of this api
  https://review.openstack.org/#/c/49778/. I gave a -1 because if a
  negative test to shutdown a host fails, a compute node will be
 shutdown.
  The author thinks this test should be part of tempest. My issue was
 that
  we should not have tempest tests for apis that:
 
  1. May corrupt the underlying system (that is part of the reason we
  moved whitebox out)
 
  I really felt the reason we moved out whitebox is that OpenStack
 internals move way too fast to have them validated by an external
 system. We have defined surfaces (i.e. API) and that should be the focus.
  It was also because we were side-effecting the database out-of-band.
 
  2. Can have only negative tests because positive ones could prevent
  other tests from executing
 
  Honestly, trying to shut down the host with invalid credentials seems
 like a fair test. Because if we fail, we're going to know really quick when
 everything blackholes.
 
  In the gate this is slightly funnier because tempest is running on the
 same place as the host, however it seems like a sane check to have in there.
 
 -Sean
  OK, I don't feel strongly about it. Just seemed like a potential
 landmine.
 

 I think this is something we want to test in the gate. But perhaps there
 could be a tag for these sorts of test cases that some people may not want
 to risk running on their system so they can exclude them easily?

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core

2013-10-25 Thread Chris K
+1 Lucas has been a great asset to the ironic team.


On Thu, Oct 24, 2013 at 5:18 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 Hi all,

 I'd like to nominate Lucas Gomes for ironic-core. He's been consistently
 doing reviews for several months and has led a lot of the effort on the API
 and client libraries.

 Thanks for the great work!
 -Deva

 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core

2013-10-25 Thread Devananda van der Veen
All the current core folks have weighed in, so I'll go ahead and approve it.

Cheers!




On Fri, Oct 25, 2013 at 8:28 AM, Chris K nobody...@gmail.com wrote:

 +1 Lucas has been a great asset to the ironic team.


 On Thu, Oct 24, 2013 at 5:18 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 Hi all,

 I'd like to nominate Lucas Gomes for ironic-core. He's been consistently
 doing reviews for several months and has led a lot of the effort on the API
 and client libraries.

 Thanks for the great work!
 -Deva

 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] About single entry point in trove-guestagent

2013-10-25 Thread Illia Khudoshyn
Here is my mixin approach, as a draft: https://review.openstack.org/#/c/53826/
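Roughly, the shape I have in mind is something like this (a simplified sketch
with illustrative names, not the actual guestagent classes -- see the review
above for the real draft):

    class CommonGuestMixin(object):
        """Shared guestagent behaviour; datastore managers override the hooks."""

        def prepare(self, context, packages, databases, users, **kwargs):
            # the common skeleton that today gets copy/pasted per manager
            self._install_packages(packages)
            self._create_databases_and_users(databases, users)

        # hooks with harmless defaults; each datastore overrides what it needs
        def _install_packages(self, packages):
            pass

        def _create_databases_and_users(self, databases, users):
            pass


    class MySqlManager(CommonGuestMixin):
        def _install_packages(self, packages):
            pass  # mysql-specific install/configure steps would go here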


On Fri, Oct 25, 2013 at 11:36 AM, Illia Khudoshyn
ikhudos...@mirantis.comwrote:

 I'll try to code it this weekend. Hopefully I'll be able to show it by Monday.


 On Fri, Oct 25, 2013 at 7:46 AM, Michael Basnight mbasni...@gmail.comwrote:


 On Oct 23, 2013, at 7:03 AM, Illia Khudoshyn wrote:

  Hi Denis, Michael, Vipul and all,
 
  I noticed a discussion in irc about adding a single entry point (sort
 of 'SuperManager') to the guestagent. Let me add my 5 cents.
 
  I agree that we should ultimately avoid code duplication. But from
 my experience, only a very small part of the GA Manager can be considered
 really duplicated code, namely Manager#prepare(). The 'backup' part may be
 another candidate, but I'm not sure yet. It may still be rather service-type
 specific. All the rest of the code was just delegating.

 Yes, currently that is the case :)

 
  If we add a 'SuperManager', all we'll get is just more delegation:
 
  1. There is no use for dynamic loading of the corresponding Manager
 implementation, because there will never be more than one service type
 supported on a concrete guest. So the current implementation with a
 configurable service_type -> ManagerImpl dictionary looks good to me.
 
  2. Nor would the 'SuperManager' provide a common interface for Manager,
 due to the dynamic nature of Python. As has been said,
 trove.guestagent.api.API provides the list of methods with parameters we need
 to implement. What I'd like to have is a description of the types for those
 params as well as the return types. (Man, I miss static typing.) All we can do
 for that is make sure we have proper unit tests with REAL values for params
 and returns.
 
  As for the common part of the Manager's code, I'd go for extracting
 that into a mixin.

 When we started talking about it, I mentioned privately to one of the Rackspace
 Trove developers that we might be able to solve this effectively with a mixin
 instead of more parent classes :) I would like to see an example of both
 of them. At the end of the day, all I care about is not having more copy
 pasta between manager impls as we grow the common stuff, even if that is
 just a method call in each guest to call each bit of common code.

 
  Thanks for your attention.
 
  --
  Best regards,
  Illia Khudoshyn,
  Software Engineer, Mirantis, Inc.
 
  38, Lenina ave. Kharkov, Ukraine
  www.mirantis.com
  www.mirantis.ru
 
  Skype: gluke_work
  ikhudos...@mirantis.com
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com http://www.mirantis.ru/

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com




-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com http://www.mirantis.ru/

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Keystone TLS Question

2013-10-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

Is there any direct TLS support by Keystone other than using the Apache2 front 
end?

Mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughs please on how to address a problem with mutliple deletes leading to a nova-compute thread pool problem

2013-10-25 Thread Clint Byrum
Excerpts from Day, Phil's message of 2013-10-25 03:46:01 -0700:
 Hi Folks,
 
 We're very occasionally seeing problems where a thread processing a create 
 hangs (and we've seen it when talking to Cinder and Glance).  Whilst those issues 
 need to be hunted down in their own rights, they do show up what seems to me 
 to be a weakness in the processing of delete requests that I'd like to get 
 some feedback on.
 
 Delete is the one operation that is allowed regardless of the Instance state 
 (since it's a one-way operation, and users should always be able to free up 
 their quota).   However when we get a create thread hung in one of these 
 states, the delete requests when they hit the manager will also block as they 
 are synchronized on the uuid.   Because the user making the delete request 
 doesn't see anything happen they tend to submit more delete requests.   The 
 Service is still up, so these go to the computer manager as well, and 
 eventually all of the threads will be waiting for the lock, and the compute 
 manager will stop consuming new messages.
 
 The problem isn't limited to deletes - although in most cases the change of 
 state in the API means that you have to keep making different calls to get 
 past the state checker logic to do it with an instance stuck in another 
 state.   Users also seem to be more impatient with deletes, as they are 
 trying to free up quota for other things. 
 
 So while I know that we should never get a thread into a hung state in the 
 first place, I was wondering about one of the following approaches to address 
 just the delete case:
 
 i) Change the delete call on the manager so it doesn't wait for the uuid 
 lock.  Deletes should be coded so that they work regardless of the state of 
 the VM, and other actions should be able to cope with a delete being 
 performed from under them.  There is of course no guarantee that the delete 
 itself won't block as well. 
 

Almost anything unexpected that isn't 'start the creation' results in
just marking an instance as ERROR, right? So this approach is actually
pretty straight forward to implement. You don't really have to make
other operations any more intelligent than they already should be in
cleaning up half-done operations when they encounter an error. It might
be helpful to suppress or de-prioritize logging of these errors when it
is obvious that this result was intended.

 ii) Record in the API server that a delete has been started (maybe enough to 
 use the task state being set to DELETING in the API if we're sure this 
 doesn't get cleared), and add a periodic task in the compute manager to check 
 for and delete instances that are in a DELETING state for more than some 
 timeout. Then the API, knowing that the delete will be processed eventually, 
 can just no-op any further delete requests.
 

s/API server/database/ right? I like the coalescing approach where you
no longer take up more resources for repeated requests.

I don't like the garbage collection aspect of this plan though. Garbage
collection is a trade off of user experience for resources. If your GC
thread gets too far behind your resources will be exhausted. If you make
it too active, it wastes resources doing the actual GC. Add in that you
have a timeout before things can be garbage collected and I think this
becomes a very tricky thing to tune, and it may not be obvious it needs
to be tuned until you have a user who does a lot of rapid create/delete
cycles.
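For what it's worth, the reaper side of (ii) could stay very small; a rough
sketch, where every name is illustrative rather than actual nova code and the
timeout is exactly the knob that is hard to tune:

    import time

    DELETE_TIMEOUT = 600  # seconds -- the hard-to-tune part

    def reap_stuck_deletes(db, compute):
        """Periodic task: re-drive deletes that never completed."""
        for instance in db.instances_by_task_state('deleting'):
            if time.time() - instance.updated_at > DELETE_TIMEOUT:
                # deletes already have to cope with any instance state,
                # so simply re-issuing the delete is enough here
                compute.delete_instance(instance)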

 iii) Add some hook into the ServiceGroup API so that the timer could depend 
 on getting a free thread from the compute manager pool (i.e. run some no-op 
 task) - so that if there are no free threads then the service becomes down. 
 That would (eventually) stop the scheduler from sending new requests to it, 
 and make deletes be processed in the API server, but won't of course help with 
 commands for other instances on the same host.
 

I'm not sure I understand this one.

 iv) Move away from having a general topic and thread pool for all requests, 
 and start a listener on an instance specific topic for each running instance 
 on a host (leaving the general topic and pool just for creates and other 
 non-instance calls like the hypervisor API).   Then a blocked task would only 
 affect requests for a specific instance.
 

A topic per record will get out of hand rapidly. If you think of the
instance record in the DB as the topic though, then (i) and (iv) are
actually quite similar.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Joe Gordon
On Oct 25, 2013 12:24 PM, Dan Prince dpri...@redhat.com wrote:

 -1

 Slight preference for keeping them. I personally would go the other way
and just add them everywhere.

May I ask why? Do you use the modeline?


 - Original Message -
  From: Joe Gordon joe.gord...@gmail.com
  To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
  Sent: Thursday, October 24, 2013 8:38:57 AM
  Subject: [openstack-dev]  Remove vim modelines?
 
  Since the beginning of OpenStack we have had vim modelines all over the
  codebase, but after seeing this patch
  https://review.openstack.org/#/c/50891/ I
  took a further look into vim modelines and think we should remove
  them.
  Before going any further, I should point out these lines don't bother me
  too much but I figured if we could get consensus, then we could shrink
our
  codebase by a little bit.

 I'm not sure removing these counts as a meaningful codebase reduction.
These lines can mostly be ignored. Likewise, if the foundation required us
to double or triple our Apache license headers I would count that as a
codebase increase.

 
  Sidenote: This discussion is being moved to the mailing list because it
  'would
  be better to have a mailing list thread about this rather than bits and
  pieces of discussion in gerrit' as this change requires multiple
patches.
  https://review.openstack.org/#/c/51295/.
 
 
  Why remove them?
 
  * Modelines aren't supported by default in debian or ubuntu due to
security
  reasons: https://wiki.python.org/moin/Vim
  * Having modelines for vim means if someone wants we should support
  modelines for emacs (
 
http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables
)
  etc. as well.  And having a bunch of headers for different editors in
each
  file seems like extra overhead.
  * There are other ways of making sure tabstop is set correctly for
python
  files, see  https://wiki.python.org/moin/Vim.  I am a vIm user myself
and
  have never used modelines.
  * We have vim modelines in only 828 out of 1213 python files in nova
(68%),
  so if anyone is using modelines today, then it only works 68% of the
time
  in nova
  * Why have the same config 828 times for one repo alone?  This violates
the
  DRY principle (Don't Repeat Yourself).
 
 
  Related Patches:
  https://review.openstack.org/#/c/51295/
 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z
 
  best,
  Joe
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-25 Thread Clint Byrum
Excerpts from Angus Salkeld's message of 2013-10-24 18:48:16 -0700:
 On 24/10/13 11:54 +0200, Patrick Petit wrote:
 Hi Clint,
 Thank you! I have few replies/questions in-line.
 Cheers,
 Patrick
 On 10/23/13 8:36 PM, Clint Byrum wrote:
 I think this fits into something that I want for optimizing
 os-collect-config as well (our in-instance Heat-aware agent). That is
 a way for us to wait for notification of changes to Metadata without
 polling.
 Interesting... If I understand correctly that's kind of a replacement for 
 cfn-hup... Do you have a blueprint pointer or something more 
 specific? While I see the benefits of it, in-instance notifications 
 is not really what we are looking for. We are looking for a 
 notification service that exposes an API whereby listeners can 
 register for Heat notifications. AWS Alarming / CloudFormation has 
 that. Why not Ceilometer / Heat? That would be extremely valuable for 
 those who build PaaS-like solutions above Heat. To say it bluntly, 
 I'd like to suggest we explore ways to integrate Heat with Marconi.
 
 Yeah, I am trying to do a PoC of this now. I'll let you know how
 it goes.
 
 I am trying to implement the following:
 
 heat_template_version: 2013-05-23
 parameters:
   key_name:
     type: String
   flavor:
     type: String
     default: m1.small
   image:
     type: String
     default: fedora-19-i386-heat-cfntools
 resources:
   config_server:
     type: OS::Marconi::QueueServer
     properties:
       image: {get_param: image}
       flavor: {get_param: flavor}
       key_name: {get_param: key_name}

   configA:
     type: OS::Heat::OrderedConfig
     properties:
       marconi_server: {get_attr: [config_server, url]}
       hosted_on: {get_resource: serv1}
       script: |
         #!/bin/bash
         logger 1. hello from marconi

   configB:
     type: OS::Heat::OrderedConfig
     properties:
       marconi_server: {get_attr: [config_server, url]}
       hosted_on: {get_resource: serv1}
       depends_on: {get_resource: configA}
       script: |
         #!/bin/bash
         logger 2. hello from marconi

   serv1:
     type: OS::Nova::Server
     properties:
       image: {get_param: image}
       flavor: {get_param: flavor}
       key_name: {get_param: key_name}
       user_data: |
         #!/bin/sh
         # poll marconi url/v1/queues/{hostname}/messages
         # apply config
         # post a response message with any outputs
         # delete request message
 

If I may diverge this a bit, I'd like to consider the impact of
hosted_on on reusability in templates. hosted_on feels like an
anti-pattern, and I've never seen anything quite like it. It feels wrong
for a well contained component to then reach out and push itself onto
something else which has no mention of it.

I'll rewrite your template as I envision it working:

resources:
  config_server:
    type: OS::Marconi::QueueServer
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}

  configA:
    type: OS::Heat::OrderedConfig
    properties:
      marconi_server: {get_attr: [config_server, url]}
      script: |
        #!/bin/bash
        logger 1. hello from marconi

  configB:
    type: OS::Heat::OrderedConfig
    properties:
      marconi_server: {get_attr: [config_server, url]}
      depends_on: {get_resource: configA}
      script: |
        #!/bin/bash
        logger 2. hello from marconi

  serv1:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}
      components:
        - configA
        - configB
      user_data: |
        #!/bin/sh
        # poll marconi url/v1/queues/{hostname}/messages
        # apply config
        # post a response message with any outputs
        # delete request message

This only becomes obvious why it is important when you want to do this:

configC:
  type: OS::Heat::OrderedConfig
  properties:
    script: |
      #!/bin/bash
      logger ?. I can race with A, no dependency needed

serv2:
  type: OS::Nova::Server
  properties:
    ...
    components:
      - configA
      - configC

This is proper composition, where the caller defines the components, not
the callee. Now you can re-use configA with a different component in the
same template. As we get smarter we can have these configs separate from
the template and reusable across templates.

Anyway, I'd like to see us stop talking about hosted_on, and if it has
been implemented, that it be deprecated and eventually removed, as it is
just plain confusing.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] One question to AggregateRamFilter

2013-10-25 Thread Jiang, Yunhong
Hi, stackers,

When reading code related to the resource tracker, I noticed AggregateRamFilter 
as in https://review.openstack.org/#/c/33828/. 

I'm not sure if it's better to use per-node configuration of the RAM ratio, 
instead of depending on the host aggregate. Currently we have to make a DB call for 
each scheduler call, which is really a performance issue. Also, if any instance is 
scheduled to a host before the host aggregate is created/set up, a wrong RAM 
ratio can cause trouble on the host, like OOM.

With per-node configuration, I'd add a column in the DB to indicate 
memory_mb_limit, and this information would be provided by the resource tracker. The 
benefits of the change are:
a) The host has a better idea of the usable memory limit. And we can even 
provide other calculation methods in the resource tracker besides a ratio.
b) It makes the flow cleaner. Currently the resource tracker makes the claims 
decision with 'limits' passed from the scheduler, which is a bit strange IMHO. I'd 
think the scheduler should make the scheduling decision, instead of the resource 
calculation, while the resource tracker provides resource information.

I think the shortcoming of the per-node configuration is that it is not so easy to 
change. But such information should mostly relate to host configuration like 
swap size etc., so it should be fairly static, and setting it at deployment time should be OK.
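As a rough illustration of what I mean (just a sketch -- the option name follows
the existing ram_allocation_ratio style, but the helper itself is made up):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.FloatOpt('ram_allocation_ratio', default=1.5,
                     help='Per-node RAM overcommit ratio'),
    ])

    def memory_mb_limit(total_memory_mb):
        # The compute node reports its own limit, instead of the scheduler
        # deriving one from host aggregate metadata on every call.
        return int(total_memory_mb * CONF.ram_allocation_ratio)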

Any idea?

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework

2013-10-25 Thread Gary Duan
I just wrote a short spec on the wiki page and linked it to the blueprint. I
should have done this when we registered the BP.

Please let me know if you have any question.

Thanks,
Gary


On Thu, Oct 24, 2013 at 5:35 PM, Gary Duan gd...@varmour.com wrote:

 Hi, Geoff,

 This is because I haven't added spec to the BP yet.

 Gary


 On Thu, Oct 24, 2013 at 4:51 PM, Geoff Arnold ge...@geoffarnold.comwrote:

 I’m getting a “Not allowed here” error when I click through to the
 BP. (Yes, I’m subscribed.)

 On Oct 24, 2013, at 11:56 AM, Gary Duan gd...@varmour.com wrote:

 Hi,

 I've registered a BP for L3 router service integration with service
 framework.

 https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type

 In general, the implementation will align with how LBaaS is integrated
 with the framework. One consideration we heard from several team members is
 to be able to support vendor specific features and extensions in the
 service plugin.

 Any comment is welcome.

 Thanks,
 Gary


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Use cases: Tasks Scheduling - Cloud Cron

2013-10-25 Thread Renat Akhmerov
Hi OpenStackers,

There’s been recently a lot of questions in OpenStack community about the 
particular real life use cases that the new Mistral project addresses.

So I’d like to present the description of one of the interesting Mistral use 
cases so that we could discuss it together. You can also find a wiki version 
with pictures at 
https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron as well 
as other useful information about the project.

More use cases are on their way.

—

Mistral Use Cases: Tasks Scheduling - Cloud Cron
Problem Statement
Pretty often while administering a network of computers there’s a need to 
establish periodic on-schedule execution of maintenance jobs for doing various 
kinds of work that otherwise would have to be started manually by a system 
administrator. The set of such jobs ranges widely from cleaning up needless log 
files to health monitoring and reporting. One of the most commonly known tools 
in Unix world to set up and manage those periodic jobs is Cron. It perfectly 
fits the uses cases mentioned above. For example, using Cron we can easily 
schedule any system process for running every even day of week at 2.00 am. For 
a single machine it’s fairly straightforward how to administer jobs using Cron 
and the approach itself has been adopted by millions of IT folks all over the 
world.
Now what if we want to be able to set up and manage on-schedule jobs for 
multiple machines? It would be very convenient to have a single point of 
control over their schedule and themselves (i.e. “when” and “what”). 
Furthermore, when it comes to a cloud environment the cloud provides additional 
RESTful services (and not only RESTful) that we may also want to call in an 
on-schedule manner along with operating system local processes.
Solution
The Mistral service for the OpenStack cloud addresses this demand naturally. Its 
capabilities allow configuring any number of tasks to be run according to a 
specified schedule at the scale of a cloud. Here's a list of some typical jobs 
we can choose from:
- Run a shell script on specified virtual instances (e.g. VM1, VM3 and VM27).
- Run an arbitrary system process on specified instances.
- Start/Reboot/Shutdown instances.
- Call accessible cloud services (e.g. Trove).
- Add instances to a load balancer.
- Deploy an application on specified instances.
This list is not exhaustive, and other meaningful user jobs can be added. To make 
that possible, Mistral provides a plugin mechanism, so it's pretty easy to add 
new functionality by supplying new Mistral plugins.
Basically, Mistral acts as a mediator between a user, virtual instances and 
cloud services in a sense that it brings capabilities over them like task 
management (start, stop etc.), task state and execution monitoring (success, 
failure, in progress etc.) and task scheduling.
Since Mistral is a distributed workflow engine, the types of jobs listed above 
can be combined into a single logical unit, a workflow. For example, we can tell 
Mistral to take care of the following workflow for us:
On every Monday at 1.00 am, start grepping the phrase “Hello, Mistral!” from log 
files located at /var/log/myapp.log on instances VM1, VM30, VM54 and put the 
results in Swift.
On success: Generate the report based on the data in Swift.
On success: Send the generated report to an email address.
On failure: Send an SMS with error details to a system administrator.
On failure: Send an SMS with error details to a system administrator.
A workflow similar to the one described above may be of any complexity but 
still considered a single task from a user perspective. However, Mistral is 
smart enough to analyze the workflow and identify individual sequences that can 
be run in parallel thereby taking advantage of distribution and load balancing 
under the hood.
It is worth noting that Mistral is nearly linearly scalable and hence is 
capable of scheduling and processing virtually any number of tasks simultaneously.
Notes
So in this use case description we tried to show how Mistral's capabilities can 
be used for scheduling different user tasks at cloud scale. Semantically it 
would be correct to call this use case Distributed Cron or Cloud Cron. One of 
the advantages of using a service like Mistral in a case like this is that, along 
with the base functionality to schedule and execute tasks, it provides additional 
capabilities like navigating over task execution status and history (using web 
UI or REST API), replaying already finished tasks, on-demand task suspension 
and resumption and many other things that are useful for both system 
administrators and application developers.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone TLS Question

2013-10-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello again,

It looks to me that TLS is automatically supported by Keystone Havana. I 
performed the following curl call and it seems to indicate that Keystone is 
using TLS. Can anyone validate that Keystone Havana does or does not support 
TLS?

Thanks,

Mark

root@build-HP-Compaq-6005-Pro-SFF-PC:/etc/keystone# curl -v --insecure 
https://15.253.58.165:35357/v2.0/certificates/signing

* About to connect() to 15.253.58.165 port 35357 (#0)
*   Trying 15.253.58.165... connected
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
*subject: C=US; ST=CA; L=Sunnyvale; O=OpenStack; OU=Keystone; 
emailAddress=keyst...@openstack.org; CN=Keystone
*start date: 2013-03-15 01:44:55 GMT
*expire date: 2013-03-15 01:44:55 GMT
*common name: Keystone (does not match '15.253.58.165')
*issuer: serialNumber=5; C=US; ST=CA; L=Sunnyvale; O=OpenStack; 
OU=Keystone; emailAddress=keyst...@openstack.org; CN=Self Signed
*SSL certificate verify result: unable to get local issuer certificate 
(20), continuing anyway.
 GET /v2.0/certificates/signing HTTP/1.1
 User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 
 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
 Host: 15.253.58.165:35357
 Accept: */*

 HTTP/1.1 200 OK
 Content-Type: text/html; charset=UTF-8
 Content-Length: 973
 Date: Fri, 25 Oct 2013 18:27:52 GMT

-BEGIN CERTIFICATE-
MIICoDCCAgkCAREwDQYJKoZIhvcNAQEFBQAwgZ4xCjAIBgNVBAUTATUxCzAJBgNV
BAYTAlVTMQswCQYDVQQIEwJDQTESMBAGA1UEBxMJU3Vubnl2YWxlMRIwEAYDVQQK
EwlPcGVuU3RhY2sxETAPBgNVBAsTCEtleXN0b25lMSUwIwYJKoZIhvcNAQkBFhZr
ZXlzdG9uZUBvcGVuc3RhY2sub3JnMRQwEgYDVQQDEwtTZWxmIFNpZ25lZDAgFw0x
…
3S9E696tVhWqc+HAW91KgZcIwAgQrxWeC0x5O76Q3MGrxvWwyMHPlsxyL4H67AnI
wq8zJxOFtzvP8rVWrQ3PnzBozXKuU3VLPqAsDI4nDxjqFpVf3LYCFDRueS2EI5xc
5/rt9g==
-END CERTIFICATE-
* Connection #0 to host 15.253.58.165 left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
root@build-HP-Compaq-6005-Pro-SFF-PC:/etc/keystone#




From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
Sent: Friday, October 25, 2013 8:58 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Keystone TLS Question

Hello,

Is there any direct TLS support by Keystone other than using the Apache2 front 
end?

Mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Proposal for new heat-core member

2013-10-25 Thread Steven Dake

Hi,

I would like to propose Randall Burt for Heat Core.  He has shown 
interest in Heat by participating in IRC and providing high quality 
reviews.  The most important aspect in my mind of joining Heat Core is 
output and quality of reviews.  Randall has been involved in Heat 
reviews for at least 6 months.  He has had 172 reviews over the last 6 
months staying in the pack [1] of core heat reviewers.  His 90 day 
stats are also encouraging, with 97 reviews (compared to the top 
reviewer Steve Hardy with 444 reviews).  Finally his 30 day stats also 
look good, beating out 3 core reviewers [2] on output with good quality 
reviews.


Please have a vote +1/-1 and take into consideration: 
https://wiki.openstack.org/wiki/Heat/CoreTeam


Regards,
-steve

[1] http://russellbryant.net/openstack-stats/heat-reviewers-180.txt
[2] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Paul Nelson
After reading the article Joe linked https://wiki.python.org/moin/Vim I
created a python.vim in my ~/.vim/ftplugin directory. I also have tabstop
defaults set in my .vimrc for global defaults that are different from
python preferences. So I get to keep my preferences for other stuff while
making python indentation correct.

+1 to removing the lines entirely.


On Fri, Oct 25, 2013 at 11:20 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On Oct 25, 2013 12:24 PM, Dan Prince dpri...@redhat.com wrote:
 
  -1
 
  Slight preference for keeping them. I personally would go the other way
 and just add them everywhere.

 May I ask why? Do you use the modeline?

 
  - Original Message -
   From: Joe Gordon joe.gord...@gmail.com
   To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
   Sent: Thursday, October 24, 2013 8:38:57 AM
   Subject: [openstack-dev]  Remove vim modelines?
  
   Since the beginning of OpenStack we have had vim modelines all over the
   codebase, but after seeing this patch
   https://review.openstack.org/#/c/50891/ I
   took a further look into vim modelines and think we should remove
   them.
   Before going any further, I should point out these lines don't bother
 me
   too much but I figured if we could get consensus, then we could shrink
 our
   codebase by a little bit.
 
  I'm not sure removing these counts as a meaningful codebase reduction.
 These lines can mostly be ignored. Likewise, If the foundation required us
 to double or triple our Apache license headers I would count that as a
 codebase increase.
 
  
   Sidenote: This discussion is being moved to the mailing list because it
   'would
   be better to have a mailing list thread about this rather than bits and
   pieces of discussion in gerrit' as this change requires multiple
 patches.
   https://review.openstack.org/#/c/51295/.
  
  
   Why remove them?
  
   * Modelines aren't supported by default in debian or ubuntu due to
 security
   reasons: https://wiki.python.org/moin/Vim
   * Having modelines for vim means if someone wants we should support
   modelines for emacs (
  
 http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables
 )
   etc. as well.  And having a bunch of headers for different editors in
 each
   file seems like extra overhead.
   * There are other ways of making sure tabstop is set correctly for
 python
   files, see  https://wiki.python.org/moin/Vim.  I am a vIm user myself
 and
   have never used modelines.
   * We have vim modelines in only 828 out of 1213 python files in nova
 (68%),
   so if anyone is using modelines today, then it only works 68% of the
 time
   in nova
   * Why have the same config 828 times for one repo alone?  This
 violates the
   DRY principle (Don't Repeat Yourself).
  
  
   Related Patches:
   https://review.openstack.org/#/c/51295/
  
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z
  
   best,
   Joe
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-25 Thread Nikhil Manchanda

It seems strange to me to treat both the datastore_type and version as
two separate entities, when they aren't really independent of each
other. (You can't really deploy a mysql type with a cassandra version,
and vice-versa, so why have separate datastore-list and version-list
calls?)

I think it's a better idea to store in the db (and list) actual
representations of the datastore type/versions that an image we can
deploy supports. Any disambiguation could then happen based on what
entries actually exist here.

Let me illustrate what I'm trying to get at with a few examples:

Database has:
id | type      | version | active
---+-----------+---------+-------
a  | mysql     | 5.6.14  |   1
b  | mysql     | 5.1.0   |   0
c  | postgres  | 9.3.1   |   1
d  | redis     | 2.6.16  |   1
e  | redis     | 2.6.15  |   1
f  | cassandra | 2.0.1   |   1
g  | cassandra | 2.0.0   |   0

Config specifies:
default_datastore_id = a

1. trove-cli instance create ...
Just works - Since nothing is specified, this uses the
default_datastore_id from the config (mysql 5.6.14 a) . No need for
disambiguation.

2. trove-cli instance create --datastore_id e
The datastore_id specified always identifies a unique datastore type /
version so no other information is needed for disambiguation. (In this
case redis 2.6.15, identified by e)

3. trove-cli instance create --datastore_type postgres
The datastore_type in this case uniquely identifies postgres 9.3.1 c,
so no disambiguation is necessary.

4. trove-cli instance create --datastore_type cassandra
In this case, there is only one _active_ datastore with the given
datastore_type, so no further disambiguation is needed and cassandra
2.0.1 f is uniquely identified.

5. trove-cli instance create --datastore_type redis
In this case, there are _TWO_ active versions of the specified
datastore_type (2.6.16, and 2.6.17) so the call should return that
further disambiguation _is_ needed.

6. trove-cli instance create --datastore_type redis --datastore_version 2.6.16
We have both datastore_type and datastore_version, and that uniquely
identifies redis 2.6.16 e. No further disambiguation is needed.

7. trove-cli instance create --datastore_type cassandra --datastore_version 2.0.0,
or trove-cli instance create --datastore_id g
Here, we are attempting to deploy a datastore which is _NOT_ active and
this call should fail with an appropriate error message.
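Put together, the disambiguation rules above boil down to something like this
(an illustrative sketch only, not actual Trove code):

    def resolve_datastore(datastores, default_id, datastore_id=None,
                          datastore_type=None, datastore_version=None):
        """Return the single active row matching the request, per cases 1-7."""
        if datastore_id:
            matches = [d for d in datastores if d['id'] == datastore_id]
        elif datastore_type:
            matches = [d for d in datastores if d['type'] == datastore_type]
            if datastore_version:
                matches = [d for d in matches
                           if d['version'] == datastore_version]
        else:
            matches = [d for d in datastores if d['id'] == default_id]

        active = [d for d in matches if d['active']]
        if len(active) != 1:
            # covers case 5 (more than one active version for the type)
            # and case 7 (the requested datastore exists but is not active)
            raise ValueError('need a datastore_version, or the requested '
                             'datastore is not active')
        return active[0]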

Cheers,
-Nikhil


Andrey Shestakov writes:

 2. It can be confusing because it's not clear to what type a version belongs
 (possibly add a type field to version).
 Also, if you have a default type, then a specified version is recognized as
 a version of the default type (no lookup of version.datastore_type_id),
 but I think we can do a lookup of version.datastore_type_id before picking
 the default.

 4. If a default version is needed, then it should be specified in the DB, because
 switching between versions can be frequent, and restarting the service to reload
 the config every time is not good.

 On 10/21/2013 05:12 PM, Tim Simpson wrote:
 Thanks for the feedback Andrey.

  2. Got this case in IRC, and decided to pass type and version
 together to avoid confusion.
 I don't understand how allowing the user to only pass the version
 would confuse anyone. Could you elaborate?

  3. Names of types and maybe versions could be good, but in an IRC conversation 
  this case was rejected; I can't
 remember exactly why.
 Hmm. Does anyone remember the reason for this?

  4. Actually, the active field in a version marks it as the default in its type.
 Specifying a default version in the config can be useful if you have more than
 one active version in the default type.
 If 'active' is allowed to be set for multiple rows of the
 'datastore_versions' table then it isn't a good substitute for the
 functionality I'm seeking, which is to allow operators to specify a
 *single* default version for each datastore_type in the database. I
 still think we should add a 'default_version_id' field to the
 'datastore_types' table.

 Thanks,

 Tim

 
 *From:* Andrey Shestakov [ashesta...@mirantis.com]
 *Sent:* Monday, October 21, 2013 7:15 AM
 *To:* OpenStack Development Mailing List
 *Subject:* Re: [openstack-dev] [Trove] How users should specify a
 datastore type when creating an instance

 1. Good point
 2. Got this case in IRC, and decided to pass type and version together
 to avoid confusion.
 3. Names of types and maybe versions could be good, but in an IRC
 conversation this case was rejected; I can't remember exactly why.
 4. Actually, the active field in a version marks it as the default in its type.
 Specifying a default version in the config can be useful if you have more than
 one active version in the default type.
 But how many active versions a type has depends on the operator's
 configuration. And what if the default version in the config is marked as
 inactive?

 On 10/18/2013 10:30 PM, Tim Simpson wrote:
 Hello fellow Trovians,

 There has been some good work recently to figure out a way to specify
 a specific datastore  when 

Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Dolph Mathews
On Thu, Oct 24, 2013 at 1:48 PM, Robert Collins
robe...@robertcollins.netwrote:


 *) They help casual contributors *more* than long time core
 contributors : and those are the folk that are most likely to give up
 and walk away. Keeping barriers to entry low is an important part of
 making OpenStack development accessible to new participants.


This is an interesting point. My reasoning for removing them was that I've
never seen *anyone* working to maintain them, or to add them to files where
they're missing. However, I suspect that the users benefiting from them
simply aren't deeply enough involved with the project to notice or care
about the inconsistency? I'm all for low barriers of entry, so if there's
any evidence that this is true, I'd want to make them more prolific.

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Agenda for Monday's meeting

2013-10-25 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-
alt on Mondays, 1600 UTC.

  http://goo.gl/Li9V4o

The next meeting is Monday, Oct 28. Everyone is welcome, but please
take a minute to review the wiki before attending for the first time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Updates on sharding
  * Updates on bugs
  * Triage, update blueprints
  * Review API v1 feedback
  * Suggestions for things to talk about in HGK
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and
note your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Dolph Mathews
On Fri, Oct 25, 2013 at 2:43 PM, Robert Collins
robe...@robertcollins.netwrote:

 On 26 October 2013 08:40, Dolph Mathews dolph.math...@gmail.com wrote:
 
  On Thu, Oct 24, 2013 at 1:48 PM, Robert Collins 
 robe...@robertcollins.net
  wrote:
 
 
  *) They help casual contributors *more* than long time core
  contributors : and those are the folk that are most likely to give up
  and walk away. Keeping barriers to entry low is an important part of
  making OpenStack development accessible to new participants.
 
 
  This is an interesting point. My reasoning for removing them was that
 I've
  never seen *anyone* working to maintain them, or to add them to files
 where
  they're missing. However, I suspect that the users benefiting from them
  simply aren't deeply enough involved with the project to notice or care
  about the inconsistency?

 Thats my hypothesis too.

  I'm all for low barriers of entry, so if there's
  any evidence that this is true, I'd want to make them more prolific.

 I'm not sure how to gather evidence for this, either for or against ;(.


If we removed them, we might see an uptick in whitespace PEP8 violations.
Alternatively, compare the number of historical whitespace violations in
files with modelines vs those without.

Collecting the data for either of those sounds like more work than just
adding the modelines to files where they are missing.


 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Dean Troyer
On Fri, Oct 25, 2013 at 2:43 PM, Robert Collins
robe...@robertcollins.netwrote:

 On 26 October 2013 08:40, Dolph Mathews dolph.math...@gmail.com wrote:
 I'm all for low barriers of entry, so if there's
  any evidence that this is true, I'd want to make them more prolific.

 I'm not sure how to gather evidence for this, either for or against ;(.


We do have a captive audience in a week or so to do unscientific room-temp
surveys. Asking a question or two at the end of some sessions would at
least give us a data point broader than ML participants.

Who would be resistant to enforcing removal modelines?
Who would be resistant to enforcing addition of modelines?
Who doesn't know or care what a modeline is?

For the record, I don't mind these things as long as they are at the end of
the file.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] what is the default timeout of the session the python nova client instance set up with the compute endpoint?

2013-10-25 Thread openstack learner
Hi guys,

I am using the python-novaclient API and creating a nova client using
client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL).   My question
is: what is the default timeout of the session the nova client instance sets
up with the compute endpoint?


From the man page of novaclient.v1_1.client, I can see there is a
timeout=None parameter in the __init__ method of the Client object.
Does anyone know what this timeout setting is for?


 help (novaclient.v1_1.client)


class Client(__builtin__.object)
 |  Top-level object to access the OpenStack Compute API.
 |
 |  Create an instance with your creds::
 |
 |   client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)
 |
 |  Then call methods on its managers::
 |
 |   client.servers.list()
 |  ...
 |   client.flavors.list()
 |  ...
 |
 |  Methods defined here:
 |
 |  __init__(self, username, api_key, project_id, auth_url=None,
insecure=False, timeout=None, proxy_tenant_id=None, proxy_token=None,
region_name=None, endpoint_type='publicURL', extensions=None,
service_type='compute', service_name=None, volume_service_name=None,
timings=False, bypass_url=None, os_cache=False, no_cache=True,
http_log_debug=False, auth_system='keystone', auth_plugin=None,
cacert=None, tenant_id=None)
 |  # FIXME(jesse): project_id isn't required to authenticate
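
For example, I can construct the client with an explicit value, but I am not
sure how that differs from leaving it as None:

    from novaclient.v1_1 import client

    # same placeholders as above, just with timeout passed explicitly
    nova = client.Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL, timeout=30)
    nova.servers.list()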


thanks
xin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] does the nova client python API support Keystone's token-based authentication?

2013-10-25 Thread openstack learner
hi guys,


Instead of username/password, does the nova client python API support
Keystone's token-based authentication?

thanks

xin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-25 Thread Robert Li (baoli)
Hi Irena,

This is Robert Li from Cisco Systems. Recently, I was tasked to investigate 
such support for Cisco's systems that support VM-FEX, which is an SRIOV 
technology supporting 802.1Qbh. I was able to bring up nova instances with 
SRIOV interfaces, and establish networking between the instances that 
employ the SRIOV interfaces. Certainly, this was accomplished with hacking 
and some manual intervention. Based on this experience and my study with the 
two existing nova pci-passthrough blueprints that have been implemented and 
committed into Havana 
(https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base and
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-libvirt),  I 
registered a couple of blueprints (one on Nova side, the other on the Neutron 
side):

https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov
https://blueprints.launchpad.net/neutron/+spec/pci-passthrough-sriov

in order to address SRIOV support in openstack.

Please take a look at them and see if they make sense, and let me know if you 
have any comments or questions. We can also discuss this at the summit, I suppose.

I noticed that there is another thread on this topic, so I am copying those 
folks from that thread as well.

thanks,
Robert

On 10/16/13 4:32 PM, Irena Berezovsky 
ire...@mellanox.commailto:ire...@mellanox.com wrote:

Hi,
As one of the next steps for PCI pass-through, I would like to discuss the 
support for PCI pass-through vNIC.
While nova takes care of PCI pass-through device resource management and VIF 
settings, neutron should manage their networking configuration.
I would like to register a summit proposal to discuss the support for PCI 
pass-through networking.
I am not sure what would be the right topic to discuss PCI pass-through 
networking, since it involves both nova and neutron.
There is already a session registered by Yongli on the nova topic to discuss the 
PCI pass-through next steps.
I think PCI pass-through networking is quite a big topic and it is worth having a 
separate discussion.
Are there any other people who are interested in discussing it and sharing their 
thoughts and experience?

Regards,
Irena

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread John Dennis
On 10/25/2013 03:43 PM, Robert Collins wrote:
 On 26 October 2013 08:40, Dolph Mathews dolph.math...@gmail.com wrote:

 On Thu, Oct 24, 2013 at 1:48 PM, Robert Collins robe...@robertcollins.net
 wrote:


 *) They help casual contributors *more* than long time core
 contributors : and those are the folk that are most likely to give up
 and walk away. Keeping barriers to entry low is an important part of
 making OpenStack development accessible to new participants.


 This is an interesting point. My reasoning for removing them was that I've
 never seen *anyone* working to maintain them, or to add them to files where
 they're missing. However, I suspect that the users benefiting from them
 simply aren't deeply enough involved with the project to notice or care
 about the inconsistency?
 
  That's my hypothesis too.
 
 I'm all for low barriers of entry, so if there's
 any evidence that this is true, I'd want to make them more prolific.
 
 I'm not sure how to gather evidence for this, either for or against ;(.

vim and its cousins constitute only a subset of popular editors. Emacs
is quite popular, uses a different syntax, and requires the per-file
variables to be on the 1st line (or the 2nd line if there is a shell
interpreter line on the 1st line). In Emacs you can also use Local
Variables comments at the end of the file (a location many will not see,
or may cause to move during editing). So I don't see how vim and emacs
specifications will coexist nicely and stay that way consistently.
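
For illustration, the two forms being compared look roughly like this in a
Python file (the Emacs variables shown are just an example, not a proposal):

# vim modeline, as seen in many OpenStack files today:
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Emacs file-variables equivalent, which must sit on the first line
# (or the second, after a #! line):
# -*- mode: python; tab-width: 4; indent-tabs-mode: nil -*-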

And what about other editors? Where do you stop?

My personal feeling is you need to have enough awareness to configure
your editor correctly to contribute to a project. It's your
responsibility and our gate tools will hold you to that promise.

Let's just remove the mode lines, they really don't belong in every file.


-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughts please on how to address a problem with multiple deletes leading to a nova-compute thread pool problem

2013-10-25 Thread Day, Phil


 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: 25 October 2013 17:05
 To: openstack-dev
 Subject: Re: [openstack-dev] [nova] Thoughts please on how to address a
 problem with multiple deletes leading to a nova-compute thread pool
 problem
 
 Excerpts from Day, Phil's message of 2013-10-25 03:46:01 -0700:
  Hi Folks,
 
  We're very occasionally seeing problems where a thread processing a
 create hangs (and we've seen this when talking to Cinder and Glance).  Whilst
 those issues need to be hunted down in their own rights, they do show up
 what seems to me to be a weakness in the processing of delete requests
 that I'd like to get some feedback on.
 
  Delete is the one operation that is allowed regardless of the Instance state
 (since it's a one-way operation, and users should always be able to free up
 their quota).   However when we get a create thread hung in one of these
 states, the delete requests when they hit the manager will also block as they
 are synchronized on the uuid.   Because the user making the delete request
 doesn't see anything happen they tend to submit more delete requests.
 The Service is still up, so these go to the compute manager as well, and
 eventually all of the threads will be waiting for the lock, and the compute
 manager will stop consuming new messages.
 
  The problem isn't limited to deletes - although in most cases the change of
 state in the API means that you have to keep making different calls to get
 past the state checker logic to do it with an instance stuck in another state.
 Users also seem to be more impatient with deletes, as they are trying to free
 up quota for other things.
 
  So while I know that we should never get a thread into a hung state in
 the first place, I was wondering about one of the following approaches to
 address just the delete case:
 
  i) Change the delete call on the manager so it doesn't wait for the uuid 
  lock.
 Deletes should be coded so that they work regardless of the state of the VM,
 and other actions should be able to cope with a delete being performed from
 under them.  There is of course no guarantee that the delete itself won't
 block as well.
 
 
 Almost anything unexpected that isn't start the creation results in just
 marking an instance as an ERROR right? So this approach is actually pretty
 straight forward to implement. You don't really have to make other
 operations any more intelligent than they already should be in cleaning up
 half-done operations when they encounter an error. It might be helpful to
 suppress or de-prioritize logging of these errors when it is obvious that this
 result was intended.
 
  ii) Record in the API server that a delete has been started (maybe enough
 to use the task state being set to DELETING in the API if we're sure this
 doesn't get cleared), and add a periodic task in the compute manager to
 check for and delete instances that are in a DELETING state for more than
 some timeout. Then the API, knowing that the delete will be processed
 eventually, can just no-op any further delete requests.
 
 
 s/API server/database/ right? I like the coalescing approach where you no
 longer take up more resources for repeated requests.

Yep, the state is saved in the DB, but its set by the API server  - that's what 
I meant.
So it's not dependent on the manager getting the delete.

 
  I don't like the garbage collection aspect of this plan though. Garbage
 collection is a trade off of user experience for resources. If your GC thread
 gets too far behind your resources will be exhausted. If you make it too
 active, it wastes resources doing the actual GC. Add in that you have a
 timeout before things can be garbage collected and I think this becomes a
 very tricky thing to tune, and it may not be obvious it needs to be tuned 
 until
 you have a user who does a lot of rapid create/delete cycles.
 

The GC is just a backstop here - you always let the first delete message 
through 
so normally things work as they do now.   It's only if the delete message 
doesn't get
processed for some reason that the GC would kick in.   There are already
examples of this kind of clean-up in other periodic tasks.
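
To make the idea concrete, a rough sketch of such a backstop (illustrative only:
the function, the injected db_api/delete_instance callables and the 10-minute
threshold are assumptions, not existing nova code):

import datetime

DELETE_STUCK_AFTER = datetime.timedelta(minutes=10)  # assumed threshold

def reap_stuck_deletes(context, db_api, delete_instance):
    # Find instances that have sat in the DELETING task state too long and
    # re-drive the normal delete path for them. That path has to tolerate a
    # delete already being in progress or already finished.
    now = datetime.datetime.utcnow()
    stuck = db_api.instance_get_all_by_filters(
        context, {'task_state': 'deleting'})
    for instance in stuck:
        if now - instance['updated_at'] > DELETE_STUCK_AFTER:
            delete_instance(context, instance)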


  iii) Add some hook into the ServiceGroup API so that the timer could
 depend on getting a free thread from the compute manager pool (ie run
 some no-op task) - so that if there are no free threads then the service
 becomes down. That would (eventually) stop the scheduler from sending
 new requests to it, and make deletes be processed in the API server but
 won't of course help with commands for other instances on the same host.
 
 
 I'm not sure I understand this one.
 

At the moment the liveness of a service is determined by a separate thread
in the  ServiceGroup class - all it really shows is that something in the 
manager
is still running.   What I was thinking of is extending that so that it shows 
that 
the manager is still capable of doing something useful.   Doing some 

Re: [openstack-dev] extending nova boot

2013-10-25 Thread Day, Phil
Hi Drew,

Generally you need to create a new API extension and make some changes in the 
main servers.py.

The scheduler-hints API extension does this kind of thing, so if you look at 
api/openstack/compute/contrib/scheduler_hints.py for how the extension is 
defined, and look in api/openstack/compute/servers.py for how the 
scheduler_hints data is handled (e.g. _extract_scheduler_hints()), then that 
should point you in the right direction.
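
As a very rough sketch of that pattern (the key name 'my_flags' and the helper 
are hypothetical, just mirroring the _extract_scheduler_hints() approach):

def _extract_my_flags(server_dict):
    # Pull the extension's data out of the server dict sent by novaclient;
    # 'my_flags' is a made-up key for illustration.
    return server_dict.get('my_flags', {})

# Inside Controller.create() you would then call this alongside the existing
# _extract_scheduler_hints() and pass the result down through
# compute_api.create() so it eventually reaches your compute driver.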

Hope that helps,
Phil

 -Original Message-
 From: Drew Fisher [mailto:drew.fis...@oracle.com]
 Sent: 25 October 2013 16:34
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] extending nova boot
 
 Good morning!
 
 I am looking at extending nova boot with a few new flags.  I've found enough
 examples online that I have a working extension to novaclient (I can see the
 new flags in `nova help boot` and if I run with the --debug flag I can see the
 curl requests to the API have the data).
 
 What I can't seem to figure out is how nova-api processes these extra
 arguments.  With stable/grizzly bits, in
 nova/api/openstack/compute/servers.py, I can see where that data is
 processed (in Controller.create()) but it doesn't appear to me that any
 leftover flags are handled.
 
 What do I need to do to get these new flags to nova boot from novaclient
 into nova-api and ultimately my compute driver?
 
 Thanks for any help!
 
 -Drew Fisher
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does the nova client python API support Keystone's token-based authentication?

2013-10-25 Thread Chris Friesen

On 10/25/2013 02:08 PM, openstack learner wrote:

hi guys,


Instead of username/password, does the nova client python API support 
Keystone's token-based authentication?


Yes, but normal tokens expire, so the idea is that you authenticate with 
username/password, then get back a token that you use for the rest of 
your session.


See here for samples.

http://www.ibm.com/developerworks/cloud/library/cl-openstack-pythonapis/
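
As a rough sketch of that flow, assuming python-keystoneclient's v2.0 client 
(the endpoint and credentials below are placeholders):

from keystoneclient.v2_0 import client as ksclient

# Authenticate once with username/password...
keystone = ksclient.Client(username='demo', password='secret',
                           tenant_name='demo',
                           auth_url='http://keystone.example.com:5000/v2.0')

# ...then reuse the issued token until it expires.
token = keystone.auth_token
print(token)

How you then hand that token (plus the compute endpoint) to novaclient depends 
on the client version, so check its docs for the exact mechanism.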

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughts please on how to address a problem with multiple deletes leading to a nova-compute thread pool problem

2013-10-25 Thread Chris Behrens

On Oct 25, 2013, at 3:46 AM, Day, Phil philip@hp.com wrote:

 Hi Folks,
 
 We're very occasionally seeing problems where a thread processing a create 
 hangs (and we've seen this when talking to Cinder and Glance).  Whilst those issues 
 need to be hunted down in their own rights, they do show up what seems to me 
 to be a weakness in the processing of delete requests that I'd like to get 
 some feedback on.
 
 Delete is the one operation that is allowed regardless of the Instance state 
 (since it's a one-way operation, and users should always be able to free up 
 their quota).   However when we get a create thread hung in one of these 
 states, the delete requests when they hit the manager will also block as they 
 are synchronized on the uuid.   Because the user making the delete request 
 doesn't see anything happen they tend to submit more delete requests.   The 
 Service is still up, so these go to the compute manager as well, and 
 eventually all of the threads will be waiting for the lock, and the compute 
 manager will stop consuming new messages.
 
 The problem isn't limited to deletes - although in most cases the change of 
 state in the API means that you have to keep making different calls to get 
 past the state checker logic to do it with an instance stuck in another 
 state.   Users also seem to be more impatient with deletes, as they are 
 trying to free up quota for other things. 
 
 So while I know that we should never get a thread into a hung state in the 
 first place, I was wondering about one of the following approaches to address 
 just the delete case:
 
 i) Change the delete call on the manager so it doesn't wait for the uuid 
 lock.  Deletes should be coded so that they work regardless of the state of 
 the VM, and other actions should be able to cope with a delete being 
 performed from under them.  There is of course no guarantee that the delete 
 itself won't block as well. 
 

Agree.  I've argued for a long time that our code should be able to handle the 
instance disappearing.  We do have a number of places where we catch 
InstanceNotFound to handle this already.
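
(Roughly this pattern -- just a sketch, with do_something_with standing in for 
whatever operation happens to touch the instance:)

from nova import exception

try:
    do_something_with(instance)  # placeholder for the real operation
except exception.InstanceNotFound:
    # The instance was deleted underneath us; nothing left to do.
    pass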


 ii) Record in the API server that a delete has been started (maybe enough to 
 use the task state being set to DELETING in the API if we're sure this 
 doesn't get cleared), and add a periodic task in the compute manager to check 
 for and delete instances that are in a DELETING state for more than some 
 timeout. Then the API, knowing that the delete will be processed eventually, 
 can just no-op any further delete requests.

We already set to DELETING in the API (unless I'm mistaken -- but I looked at 
this recently).  However, instead of dropping duplicate deletes, I say they 
should still be sent/handled.  Any delete code should be able to handle it if 
another delete is occurring at the same time, IMO…  much like how you say other 
methods should be able to handle an instance disappearing from underneath.  If 
a compute goes down while 'deleting', a 2nd delete later should still be able 
to function locally.  Same thing if the message to compute happens to be lost.

 
 iii) Add some hook into the ServiceGroup API so that the timer could depend 
 on getting a free thread from the compute manager pool (ie run some no-op 
 task) - so that if there are no free threads then the service becomes down. 
 That would (eventually) stop the scheduler from sending new requests to it, 
 and make deletes be processed in the API server but won't of course help with 
 commands for other instances on the same host.

This seems kinda hacky to me.

 
 iv) Move away from having a general topic and thread pool for all requests, 
 and start a listener on an instance specific topic for each running instance 
 on a host (leaving the general topic and pool just for creates and other 
 non-instance calls like the hypervisor API).   Then a blocked task would only 
 affect request for a specific instance.
 

I don't like this one when thinking about scale.  1 million instances == 1 
million more queues.

 I'm tending towards ii) as a simple and pragmatic solution in the near term, 
 although I like both iii) and iv) as being both generally good enhancements - 
 but iv) in particular feels like a pretty seismic change.

I vote for both i) and ii) at minimum.

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-25 Thread Angus Salkeld

On 25/10/13 09:25 -0700, Clint Byrum wrote:

Excerpts from Angus Salkeld's message of 2013-10-24 18:48:16 -0700:

On 24/10/13 11:54 +0200, Patrick Petit wrote:
Hi Clint,
Thank you! I have few replies/questions in-line.
Cheers
Patrick
On 10/23/13 8:36 PM, Clint Byrum wrote:
I think this fits into something that I want for optimizing
os-collect-config as well (our in-instance Heat-aware agent). That is
a way for us to wait for notification of changes to Metadata without
polling.
Interesting... If I understand correctly, that's kind of a replacement for
cfn-hup... Do you have a blueprint pointer or something more
specific? While I see the benefits of it, in-instance notifications
are not really what we are looking for. We are looking for a
notification service that exposes an API whereby listeners can
register for Heat notifications. AWS Alarming / CloudFormation has
that. Why not Ceilometer / Heat? That would be extremely valuable for
those who build PaaS-like solutions above Heat. To say it bluntly,
I'd like to suggest we explore ways to integrate Heat with Marconi.

Yeah, I am trying to do a PoC of this now. I'll let you know how
it goes.

I am trying to implement the following:

heat_template_version: 2013-05-23
parameters:
  key_name:
    type: String
  flavor:
    type: String
    default: m1.small
  image:
    type: String
    default: fedora-19-i386-heat-cfntools
resources:
  config_server:
    type: OS::Marconi::QueueServer
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}

  configA:
    type: OS::Heat::OrderedConfig
    properties:
      marconi_server: {get_attr: [config_server, url]}
      hosted_on: {get_resource: serv1}
      script: |
        #!/bin/bash
        logger 1. hello from marconi

  configB:
    type: OS::Heat::OrderedConfig
    properties:
      marconi_server: {get_attr: [config_server, url]}
      hosted_on: {get_resource: serv1}
      depends_on: {get_resource: configA}
      script: |
        #!/bin/bash
        logger 2. hello from marconi

  serv1:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}
      user_data: |
        #!/bin/sh
        # poll marconi url/v1/queues/{hostname}/messages
        # apply config
        # post a response message with any outputs
        # delete request message



If I may diverge this a bit, I'd like to consider the impact of
hosted_on on reusability in templates. hosted_on feels like an
anti-pattern, and I've never seen anything quite like it. It feels wrong
for a well contained component to then reach out and push itself onto
something else which has no mention of it.


Maybe I shouldn't have used hosted_on, it could be role_name/config_queue.



I'll rewrite your template as I envision it working:

resources:
  config_server:
    type: OS::Marconi::QueueServer
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}

  configA:
    type: OS::Heat::OrderedConfig
    properties:
      marconi_server: {get_attr: [config_server, url]}
      script: |
        #!/bin/bash
        logger 1. hello from marconi

  configB:
    type: OS::Heat::OrderedConfig
    properties:
      marconi_server: {get_attr: [config_server, url]}
      depends_on: {get_resource: configA}
      script: |
        #!/bin/bash
        logger 2. hello from marconi

  serv1:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}
      components:
        - configA
        - configB
      user_data: |
        #!/bin/sh
        # poll marconi url/v1/queues/{hostname}/messages
        # apply config
        # post a response message with any outputs
        # delete request message

Why this is important only becomes obvious when you want to do this:

  configC:
    type: OS::Heat::OrderedConfig
    properties:
      script: |
        #!/bin/bash
        logger ?. I can race with A, no dependency needed

Well, if you put no dependency, it's like any other heat resource:
they will be run in parallel (or at least either may go first).
There are lots of configs where this may not be important.


  serv2:
    type: OS::Nova::Server
    properties:
      ...
      components:
        - configA
        - configC

This is proper composition, where the caller defines the components, not
the callee. Now you can re-use configA with a different component in the
same template. As we get smarter we can have these configs separate from
the template and reusable across templates.

Anyway, I'd like to see us stop talking about hosted_on, and if it has
been implemented, that it be deprecated and eventually removed, as it is
just plain confusing.


There are pros and cons to both, I don't however think it is helpful
to shutdown 

Re: [openstack-dev] [heat] Proposal for new heat-core member

2013-10-25 Thread Angus Salkeld

On 25/10/13 12:12 -0700, Steven Dake wrote:

Hi,

I would like to propose Randall Burt for Heat Core.  He has shown 
interest in Heat by participating in IRC and providing high quality 
reviews.  The most important aspect in my mind of joining Heat Core 
is output and quality of reviews.  Randall has been involved in Heat 
reviews for at least 6 months.  He has had 172 reviews over the last 6 
months staying in the pack [1] of core heat reviewers.  His 90 day 
stats are also encouraging, with 97 reviews (compared to the top 
reviewer Steve Hardy with 444 reviews).  Finally his 30 day stats 
also look good, beating out 3 core reviewers [2] on output with good 
quality reviews.


Please have a vote +1/-1 and take into consideration: 
https://wiki.openstack.org/wiki/Heat/CoreTeam


+1



Regards,
-steve

[1]http://russellbryant.net/openstack-stats/heat-reviewers-180.txt 
http://russellbryant.net/openstack-stats/heat-reviewers-180.txt
[2]http://russellbryant.net/openstack-stats/heat-reviewers-30.txt 
http://russellbryant.net/openstack-stats/heat-reviewers-30.txt



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Possible Keystone OS-TRUST bug

2013-10-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

We are getting an HTTP 500 error when we try to list all trusts. We can list 
individual trusts, but not the generic list.



GET REST Request:

curl -v -X GET http://10.1.8.20:35357/v3/OS-TRUST/trusts -H X-Auth-Token: 
ed241ae1e986319086f3



REST Response:

{
    "error": {
        "message": "An unexpected error prevented the server from fulfilling your request. 'id'",
        "code": 500,
        "title": "Internal Server Error"
    }
}



/var/log/keystone/keystone.log file entry:

2013-10-25 22:39:25 DEBUG [keystone.common.wsgi] arg_dict: {'trust_id': 
u'f30840dc20b3417bbc187bc15e1b72dd'}
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] SELECT token.id AS 
token_id, token.expires AS token_expires, token.extra AS token_extra, 
token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS 
token_trust_id
FROM token
WHERE token.id = %(param_1)s
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] {'param_1': 
'a8b2004cd0ea47be9a350890b0463fc2'}
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] SELECT trust.id AS 
trust_id, trust.trustor_user_id AS trust_trustor_user_id, trust.trustee_user_id 
AS trust_trustee_user_id, trust.project_id AS trust_project_id, 
trust.impersonation AS trust_impersonation, trust.deleted_at AS 
trust_deleted_at, trust.expires_at AS trust_expires_at, trust.extra AS 
trust_extra
FROM trust
WHERE trust.deleted_at IS NULL AND trust.id = %(id_1)s
LIMIT %(param_1)s
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] {'id_1': 
u'f30840dc20b3417bbc187bc15e1b72dd', 'param_1': 1}
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] SELECT 
trust_role.trust_id AS trust_role_trust_id, trust_role.role_id AS 
trust_role_role_id
FROM trust_role
WHERE trust_role.trust_id = %(trust_id_1)s
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] {'trust_id_1': 
u'f30840dc20b3417bbc187bc15e1b72dd'}
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] SELECT role.id AS 
role_id, role.name AS role_name, role.extra AS role_extra
FROM role
2013-10-25 22:39:26 INFO [sqlalchemy.engine.base.Engine] {}
2013-10-25 22:39:26 INFO [access] 127.0.0.1 - - [25/Oct/2013:22:39:26 
+] GET 
http://localhost:35357/v3/OS-TRUST/trusts/f30840dc20b3417bbc187bc15e1b72dd 
HTTP/1.0 200 626
2013-10-25 22:39:26 DEBUG [eventlet.wsgi.server] 127.0.0.1 - - [25/Oct/2013 
22:39:26] GET /v3/OS-TRUST/trusts/f30840dc20b3417bbc187bc15e1b72dd HTTP/1.1 
200 755 0.241976

2013-10-25 22:41:52 DEBUG [keystone.common.wsgi] arg_dict: {}
2013-10-25 22:41:52  WARNING [keystone.common.controller] RBAC: Bypassing 
authorization
2013-10-25 22:41:52 INFO [sqlalchemy.engine.base.Engine] SELECT token.id AS 
token_id, token.expires AS token_expires, token.extra AS token_extra, 
token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS 
token_trust_id
FROM token
WHERE token.id = %(param_1)s
2013-10-25 22:41:52 INFO [sqlalchemy.engine.base.Engine] {'param_1': 
'a03530ce84ab4384aca15d2ef8e5fa9d'}
2013-10-25 22:41:52 ERROR [keystone.token.providers.uuid] Failed to verify 
token
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/keystone/token/providers/uuid.py, 
line 560, in validate_token
return self._validate_v3_token(token_id)
  File /usr/lib/python2.6/site-packages/keystone/token/providers/uuid.py, 
line 532, in _validate_v3_token
token_ref = self._verify_token(token_id)
  File /usr/lib/python2.6/site-packages/keystone/token/providers/uuid.py, 
line 441, in _verify_token
token_ref = self.token_api.get_token(token_id=token_id)
  File /usr/lib/python2.6/site-packages/keystone/token/core.py, line 118, in 
get_token
return self.driver.get_token(self._unique_id(token_id))
  File /usr/lib/python2.6/site-packages/keystone/token/backends/sql.py, line 
47, in get_token
raise exception.TokenNotFound(token_id=token_id)
TokenNotFound: Could not find token, a03530ce84ab4384aca15d2ef8e5fa9d.
2013-10-25 22:41:52  WARNING [keystone.common.wsgi] Authorization failed. Could 
not find token, a03530ce84ab4384aca15d2ef8e5fa9d. from 127.0.0.1
2013-10-25 22:41:52 INFO [access] 127.0.0.1 - - [25/Oct/2013:22:41:52 
+] GET http://localhost:5000/v3/auth/tokens HTTP/1.0 401 119
2013-10-25 22:41:52 DEBUG [eventlet.wsgi.server] 10.1.5.157,127.0.0.1 - - 
[25/Oct/2013 22:41:52] GET //v3/auth/tokens HTTP/1.1 401 282 0.017426
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-25 Thread Vishvananda Ishaya
Interesting Background Information:

Why do we have modelines?

Termie put them in all the files of the first version of nova

Why did he put in modelines instead of configuring his editor?

Termie does a lot of python coding and he prefers a tabstop of 2 on all his 
personal projects[1]

I really don't see much value outside of people who prefer other tabstops

+1 to remove them

Vish

[1] https://github.com/termie/git-bzr-ng/blob/master/git-bzr
On Oct 24, 2013, at 5:38 AM, Joe Gordon joe.gord...@gmail.com wrote:

 Since the beginning of OpenStack we have had vim modelines all over the 
 codebase, but after seeing this patch 
 https://review.openstack.org/#/c/50891/ I took a further look into vim 
 modelines and think we should remove them. Before going any further, I should 
 point out these lines don't bother me too much but I figured if we could get 
 consensus, then we could shrink our codebase by a little bit.
 
 Sidenote: This discussion is being moved to the mailing list because it 
 'would be better to have a mailing list thread about this rather than bits 
 and pieces of discussion in gerrit' as this change requires multiple patches. 
  https://review.openstack.org/#/c/51295/.
 
 
 Why remove them?
 
 * Modelines aren't supported by default in debian or ubuntu due to security 
 reasons: https://wiki.python.org/moin/Vim
 * Having modelines for vim means that, if someone wants, we should support modelines 
 for emacs 
 (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables)
  etc. as well.  And having a bunch of headers for different editors in each 
 file seems like extra overhead.
 * There are other ways of making sure tabstop is set correctly for python 
 files, see https://wiki.python.org/moin/Vim.  I am a vim user myself and 
 have never used modelines.
 * We have vim modelines in only 828 out of 1213 python files in nova (68%), 
 so if anyone is using modelines today, then it only works 68% of the time in 
 nova
 * Why have the same config 828 times for one repo alone?  This violates the 
 DRY principle (Don't Repeat Yourself).
 
 
 Related Patches:
 https://review.openstack.org/#/c/51295/
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z
 
 best,
 Joe
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] distributed caching system in front of mysql server for openstack transactions

2013-10-25 Thread Qing He
All,
Has anyone looked at the options of putting a distributed caching system in 
front of a MySQL server to improve performance? This should be similar to Oracle 
Coherence, or VMware VFabric SQLFire.

Thanks,

Qing
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Where is neutron.conf and plugin's conf file parsed

2013-10-25 Thread Xu Zhongxing
Hi,


Could someone give me a pointer to the code that parses the conf file of neutron 
and the plugin (and populates the CONF object)?
I am new to the code and cannot find it.
Thank you.


- Xu Zhongxing___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Where is neutron.conf and plugin's conf file parsed

2013-10-25 Thread Xu Zhongxing
I get it. It's parsed in ConfigOpts.__call__().
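
For anyone else looking, a minimal sketch of that mechanism with oslo.config 
(the core_plugin registration here is just for illustration; neutron registers 
its own options):

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('core_plugin')])

# ConfigOpts.__call__() is where the CLI arguments and --config-file(s)
# are actually parsed and CONF gets populated.
CONF(['--config-file', '/etc/neutron/neutron.conf'], project='neutron')
print(CONF.core_plugin)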


At 2013-10-26 11:05:16,Xu Zhongxing xu_zhong_x...@163.com wrote:

Hi,


Could someone give me a pointer to the code that parses the conf file of neutron 
and the plugin (and populates the CONF object)?
I am new to the code and cannot find it.
Thank you.


- Xu Zhongxing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone TLS Question

2013-10-25 Thread Jamie Lennox
Yes, keystone can run under SSL using the eventlet server. Look for the [ssl] 
section in keystone.conf: 
https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L296

You'll want to set enabled, certfile and keyfile; from memory, ca_certs is to do 
with client-side certs.

Jamie



- Original Message -
 From: Mark M Miller (EB SW Cloud - RD - Corvallis) mark.m.mil...@hp.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Saturday, 26 October, 2013 4:31:09 AM
 Subject: Re: [openstack-dev] Keystone TLS Question
 
 
 
 Hello again,
 
 
 
  It looks to me that TLS is automatically supported by Keystone Havana. I
 performed the following curl call and it seems to indicate that Keystone is
 using TLS. Can anyone validate that Keystone Havana does or does not support
 TLS?
 
 
 
 Thanks,
 
 
 
 Mark
 
 
 
 root@build-HP-Compaq-6005-Pro-SFF-PC:/etc/keystone# curl -v --insecure
 https://15.253.58.165:35357/v2.0/certificates/signing
 
 
 
 * About to connect() to 15.253.58.165 port 35357 (#0)
 
 * Trying 15.253.58.165... connected
 
 * successfully set certificate verify locations:
 
 * CAfile: none
 
 CApath: /etc/ssl/certs
 
 * SSLv3, TLS handshake, Client hello (1):
 
 * SSLv3, TLS handshake, Server hello (2):
 
 * SSLv3, TLS handshake, CERT (11):
 
 * SSLv3, TLS handshake, Server finished (14):
 
 * SSLv3, TLS handshake, Client key exchange (16):
 
 * SSLv3, TLS change cipher, Client hello (1):
 
 * SSLv3, TLS handshake, Finished (20):
 
 * SSLv3, TLS change cipher, Client hello (1):
 
 * SSLv3, TLS handshake, Finished (20):
 
 * SSL connection using AES256-SHA
 
 * Server certificate:
 
 * subject: C=US; ST=CA; L=Sunnyvale; O=OpenStack; OU=Keystone;
 emailAddress=keyst...@openstack.org; CN=Keystone
 
 * start date: 2013-03-15 01:44:55 GMT
 
 * expire date: 2013-03-15 01:44:55 GMT
 
 * common name: Keystone (does not match '15.253.58.165')
 
 * issuer: serialNumber=5; C=US; ST=CA; L=Sunnyvale; O=OpenStack; OU=Keystone;
 emailAddress=keyst...@openstack.org; CN=Self Signed
 
 * SSL certificate verify result: unable to get local issuer certificate (20),
 continuing anyway.
 
  GET /v2.0/certificates/signing HTTP/1.1
 
  User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1
  zlib/1.2.3.4 libidn/1.23 librtmp/2.3
 
  Host: 15.253.58.165:35357
 
  Accept: */*
 
  
 
  HTTP/1.1 200 OK
 
  Content-Type: text/html; charset=UTF-8
 
  Content-Length: 973
 
  Date: Fri, 25 Oct 2013 18:27:52 GMT
 
 
 
 -BEGIN CERTIFICATE-
 
 MIICoDCCAgkCAREwDQYJKoZIhvcNAQEFBQAwgZ4xCjAIBgNVBAUTATUxCzAJBgNV
 
 BAYTAlVTMQswCQYDVQQIEwJDQTESMBAGA1UEBxMJU3Vubnl2YWxlMRIwEAYDVQQK
 
 EwlPcGVuU3RhY2sxETAPBgNVBAsTCEtleXN0b25lMSUwIwYJKoZIhvcNAQkBFhZr
 
 ZXlzdG9uZUBvcGVuc3RhY2sub3JnMRQwEgYDVQQDEwtTZWxmIFNpZ25lZDAgFw0x
 
 …
 
 3S9E696tVhWqc+HAW91KgZcIwAgQrxWeC0x5O76Q3MGrxvWwyMHPlsxyL4H67AnI
 
 wq8zJxOFtzvP8rVWrQ3PnzBozXKuU3VLPqAsDI4nDxjqFpVf3LYCFDRueS2EI5xc
 
 5/rt9g==
 
 -END CERTIFICATE-
 
 * Connection #0 to host 15.253.58.165 left intact
 
 * Closing connection #0
 
 * SSLv3, TLS alert, Client hello (1):
 
 root@build-HP-Compaq-6005-Pro-SFF-PC:/etc/keystone#
 
 
 
 
 
 
 
 
 
 
 From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
 Sent: Friday, October 25, 2013 8:58 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] Keystone TLS Question
 
 
 
 
 
 Hello,
 
 
 
 Is there any direct TLS support by Keystone other than using the Apache2
 front end?
 
 
 
 Mark
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev