Re: [openstack-dev] extend Network topology view in horizon

2013-10-23 Thread Akihiro Motoki
Hi,

In the Havana release, FWaaS and VPNaaS are inserted onto routers.
We don't need new blocks, but it would be better to have some information on
the firewall or vpnservice shown on routers.
In the reference implementation, LBaaS is implemented as a one-arm load balancer,
so we need a new block in the network topology view.

As you may know, the service insertion model is planned to be enhanced.
It would be great if that is taken into account.

Thanks,
Akihiro

On Wed, Oct 23, 2013 at 2:47 PM, Toshiyuki Hayashi haya...@ntti3.com wrote:
 Hi,

 Regarding No. 2, I'm going to support FWaaS/LBaaS/VPNaaS, and I've just
 started creating a demo for that, so I'll add the blueprint soon.

 Thanks,
 Toshi



 On Tue, Oct 22, 2013 at 2:02 AM, Ofer Blaut obl...@redhat.com wrote:
 Hi

 It would be helpful to extend the network topology view in Horizon:

 1. Admins should be able to see the entire/per-tenant network topology (we
 might need a flag to enable/disable it).

 2. Supporting icons for FWaaS/LBaaS/VPNaaS at both the admin & tenant
 levels, so it will be easy to see the deployments.

 Are there any blueprints to support this?

 Thanks

 Ofer


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Toshiyuki Hayashi
 NTT Innovation Institute Inc.
 Tel:650-579-0800 ex4292
 mail:haya...@ntti3.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-23 Thread Thomas Goirand
On 10/23/2013 06:32 AM, Michael Basnight wrote:
 
 On Oct 22, 2013, at 10:35 AM, Michael Basnight wrote:
 
 Top posting cuz im a baller. We will get this fixed today. PS clint i like 
 the way you think ;)

 https://review.openstack.org/#/c/53176/

 
 Now that this is merged, and there is no stable/havana for clients, I've got a
 question: what do the package maintainers use for clients? The largest
 versioned tag? If so, I can push a new version of the client for packaging.

Thanks for doing this.

Replied privately about updating the troveclient in Debian.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] extend Network topology view in horizon

2013-10-23 Thread Toshiyuki Hayashi
Hi Akihiro,

Thank you for the information. I'll definitely take care of it.

Thanks,
Toshi

On Tue, Oct 22, 2013 at 11:02 PM, Akihiro Motoki amot...@gmail.com wrote:
 Hi,

 In the Havana release, FWaaS and VPNaaS are inserted onto routers.
 We don't need new blocks, but it would be better to have some information on
 the firewall or vpnservice shown on routers.
 In the reference implementation, LBaaS is implemented as a one-arm load balancer,
 so we need a new block in the network topology view.

 As you may know, the service insertion model is planned to be enhanced.
 It would be great if that is taken into account.

 Thanks,
 Akihiro

 On Wed, Oct 23, 2013 at 2:47 PM, Toshiyuki Hayashi haya...@ntti3.com wrote:
 Hi,

 Regarding No. 2, I'm going to support FWaaS/LBaaS/VPNaaS, and I've just
 started creating a demo for that, so I'll add the blueprint soon.

 Thanks,
 Toshi



 On Tue, Oct 22, 2013 at 2:02 AM, Ofer Blaut obl...@redhat.com wrote:
 Hi

 It would be helpful to extend the network topology view in Horizon:

 1. Admins should be able to see the entire/per-tenant network topology (we
 might need a flag to enable/disable it).

 2. Supporting icons for FWaaS/LBaaS/VPNaaS at both the admin & tenant
 levels, so it will be easy to see the deployments.

 Are there any blueprints to support this?

 Thanks

 Ofer


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Toshiyuki Hayashi
 NTT Innovation Institute Inc.
 Tel:650-579-0800 ex4292
 mail:haya...@ntti3.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Toshiyuki Hayashi
NTT Innovation Institute Inc.
Tel:650-579-0800 ex4292
mail:haya...@ntti3.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Thomas Spatzier
Clint Byrum cl...@fewbar.com wrote on 23.10.2013 00:28:17:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 23.10.2013 00:30
 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal

 Excerpts from Georgy Okrokvertskhov's message of 2013-10-22 13:32:40 -0700:
  Hi Thomas,

  I agree with you on the semantics part. At the same time I see a potential
  question which might appear: if the semantics is limited to a few states
  visible to the Heat engine, then who actually does software orchestration?
  Would it be reasonable then to have software orchestration as a separate
  subproject of Heat, as part of the Orchestration OpenStack program? The Heat
  engine would then do dependency tracking and use components as a
  reference for the software orchestration engine, which would perform the
  actual deployment and high-level coordination of software components.

  This separate software orchestration engine could address all the specific
  requirements proposed by different teams in this thread without affecting
  the existing Heat engine.
 

 I'm not sure I know what software orchestration is, but I will take a
 stab at a succinct definition:

 Coordination of software configuration across multiple hosts.

 If that is what you mean, then I believe what you actually want is
 workflow. And for that, we have the Mistral project which was recently
 announced [1].

My view of software orchestration, in the sense of what Heat should be able
to do, is bringing up software installations (e.g. a web server, a
DBMS, a custom application) on top of a bare compute resource by invoking a
software config tool (e.g. Chef, Puppet ...) at the right point in time and
letting that tool do the actual work.
Invoke does not necessarily mean calling an API of such a tool, but rather
making sure it is bootstrapped and maybe gets a go signal to start.
Software orchestration could then further mean giving CM tools across
hosts the go signal when the config on one host has completed. This would be
enabled by the signaling enhancements Steve Baker mentioned in one of his
recent mails.

For this kind of thing, I think we could live without workflows and do it
purely declaratively. Of course, a workflow could be the underlying
mechanism, but I would not want to express this in a template. If users
have very complex problems to solve and cannot live with the simple
software orchestration I outlined, then a workflow could still be used for
everything on top of the OS.

Anyway, that was just my view, and others most probably have different views
again, so it seems like we really have to sort out terminology :-)


 Use that and you will simply need to define your desired workflow and
 feed it into Mistral using a Mistral Heat resource. We can create a
 nice bootstrapping resource for Heat instances that shims the mistral
 workflow execution agent into machines (or lets us use one already there
 via custom images).

 I can imagine it working something like this:

 resources:
   mistral_workflow_handle:
     type: OS::Mistral::WorkflowHandle
   web_server:
     type: OS::Nova::Server
     components:
       mistral_agent:
         component_type: mistral
         params:
           workflow_handle: {ref: mistral_workflow_handle}
   mysql_server:
     type: OS::Nova::Server
     components:
       mistral_agent:
         component_type: mistral
         params:
           workflow_handle: {ref: mistral_workflow_handle}
   mistral_workflow:
     type: OS::Mistral::Workflow
     properties:
       handle: {ref: mistral_workflow_handle}
       workflow_reference: mysql_webapp_workflow
       params:
         mysql_server: {ref: mysql_server}
         webserver: {ref: web_server}


While I can imagine that this works, I think for a big percentage of use
cases it would be nice to avoid this inter-weaving of workflow constructs
with a HOT template. I think we could take a purely declarative approach (if
we scope software orchestration in the context of Heat right) and not define
such handles and references.
We are trying to shield this from users in other cases in HOT
(WaitConditionHandle and references), so why introduce it here ...
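
To make the contrast concrete, a purely declarative variant of the same
example might look roughly like this (a hypothetical sketch only: the
components section, depends_on, and get_attr wiring below are invented for
illustration, not an agreed-upon HOT syntax):

  resources:
    mysql_server:
      type: OS::Nova::Server
      components:
        mysql_config:
          component_type: chef        # CM tool invocation, no workflow handle
    web_server:
      type: OS::Nova::Server
      components:
        webapp_config:
          component_type: chef
          depends_on: mysql_config    # cross-host ordering, stated declaratively
          params:
            db_credentials: {get_attr: [mysql_config, credentials]}

Heat would derive the go signals from the dependency graph itself, with no
handles or workflow references appearing in the template.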


 And then the workflow is just defined outside of the Heat template (ok
 I'm sure somebody will want to embed it, but I prefer stronger
 separation). Something like this gets uploaded as
 mysql_webapp_workflow:

 [ 'step1': 'install_stuff',
   'step2': 'wait(step1)',
   'step3': 'allocate_sql_user(server=%mysql_server%)',
   'step4': 'credentials=wait_and_read(step3)',
   'step5': 'write_config_file(server=%webserver%)' ]

 Or maybe it is declared as a graph, or whatever, but it is not Heat's
 problem how to do workflows, it just feeds the necessary data from
 orchestration into the workflow engine. This also means you can use a
 non OpenStack workflow engine without any problems.

 I think after having talked about this, we should have workflow live in
 its own program.. we can always combine them if we want to, 

Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-23 Thread Khanh-Toan Tran
I didn't see any command referring to InstanceGroupMemberConnection. What is it
exactly? Could you give an example?
And how can we create an InstanceGroup?
1) Create an empty group
2) Add policy, metadata
3) Add group instances
... ?
Or is there already a description of all the InstanceGroupMembers,
Connections, etc. in the InstanceGroup POST message?
A (raw) example would be really helpful for understanding the proposal.

Best regards, 
Toan 


- Original Message -

From: Mike Spreitzer mspre...@us.ibm.com 
To: Yathiraj Udupi (yudupi) yud...@cisco.com 
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
Sent: Wednesday, October 23, 2013 5:36:25 AM 
Subject: Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - 
Updated Instance Group Model and API extension model - WIP Draft 

Yathiraj Udupi (yudupi) yud...@cisco.com wrote on 10/15/2013 03:08:32 AM: 

 I have made some edits to the document:
 https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?pli=1#
 ... 

One other minor thing to discuss in the modeling is metadata. I am not eager to 
totally gorp up the model, but shouldn't all sorts of things allow metadata? 

Thanks, 
Mike 
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

2013-10-23 Thread Eugene Nikanorov
Hi Neutron folks!

We're going to have an IRC meeting where we will discuss development plans
for LBaaS in Icehouse.

Currently I'm proposing to meet on Thursday the 24th at 8:00 PDT in the freenode
#neutron-lbaas channel.

Agenda for the meeting:
1. New features for LBaaS in Icehouse.
Pretty much everything vendors expect to be implemented in Icehouse should be
briefly covered.
2. Feature ordering/dependencies
3. Dev resources evaluation

If the time is not convenient for you, please suggest another time. (It's
better to have it this week)

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing before sending for review

2013-10-23 Thread Eugene Nikanorov
Hi,

1. It's not necessary to abandon your patch if it has failed the Jenkins tests.
2. Before submitting a new patch set for review, it's better to run the unit
tests (tox -e py27) and the pep8 check (tox -e pep8).
Integration testing is done by the check-tempest-devstack-vm-neutron*
suites, and some of them fail from time to time due to other known bugs.
In case of such a failure, just put 'recheck no bug' in a general review
comment (or 'recheck bug xx' if you know which bug you are hitting).
Sometimes integration test failures are caused by the patch itself; in this
case you need to analyze the logs and fix the code.
But I believe that's not your case.
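
In other words, a typical pre-review sequence looks like this (a sketch;
'git review' assumes you have the git-review tool from the GerritWorkflow
wiki installed):

  $ tox -e pep8    # style checks
  $ tox -e py27    # Python 2.7 unit tests
  $ git review     # submit the patch set to Gerrit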

Thanks,
Eugene.


On Wed, Oct 23, 2013 at 9:38 AM, S Sridhar sridha...@outlook.com wrote:

 Hi All,

 I posted a review earlier - https://review.openstack.org/#/c/53160/ -
 which failed the Jenkins test. I realized that changes were required in other
 files too, so I 'Abandoned Changes' so that I can post the review set again. I
 have made the changes now, but want to test them before sending them for review.

 It is suggested in https://wiki.openstack.org/wiki/GerritWorkflow to run
 'tox' before checking in. Is this enough, or are there any other steps I
 need to follow for unit testing?

 Please suggest.

 Regards
 Sridhar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

2013-10-23 Thread Yongsheng Gong
Hi,
The following times are OK for me:
UTC+8: 6:00-23:00
UTC: 22:00-15:00

If the meeting is scheduled during these time slots, I will join.

Thanks
Yong Sheng Gong


On Wed, Oct 23, 2013 at 4:50 PM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi Neutron folks!

 We're going to have an IRC meeting where we will discuss development plans
 for LBaaS in Icehouse.

 Currently I'm proposing to meet on Thursday the 24th at 8:00 PDT in the freenode
 #neutron-lbaas channel.

 Agenda for the meeting:
 1. New features for LBaaS in Icehouse.
 Pretty much everything vendors expect to be implemented in Icehouse should be
 briefly covered.
 2. Feature ordering/dependencies
 3. Dev resources evaluation

 If the time is not convenient for you, please suggest another time. (It's
 better to have it this week)

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing before sending for review

2013-10-23 Thread Rosa, Andrea (HP Cloud Services)
Hi

 2. Before submitting the new patch for review it's better to run unit tests 
 (tox -epy27) and pep8 check (tox -epep8)

Instead of pep8, I think you should run flake8; we moved to that some months
ago [1].
I usually find it useful to test my changes in devstack as well.
Regards
--
Andrea Rosa

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-May/009178.html 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing before sending for review

2013-10-23 Thread Marek Denis

Hey,

On 23.10.2013 11:29, Rosa, Andrea (HP Cloud Services) wrote:

Usually I find always useful to test my changes in devstack.


How do you do that? I think devstack does not always contain an up-to-date
codebase, does it? So what would be the point in testing changes on
old code?

Thanks for the reply.

--
Marek Denis
[marek.de...@cern.ch]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing before sending for review

2013-10-23 Thread Rosa, Andrea (HP Cloud Services)
Hi
On 23.10.2013 11:29, Rosa, Andrea (HP Cloud Services) wrote:
 Usually I find always useful to test my changes in devstack.

How do you do that? I think devstack does not always contain an up-to-date
codebase, does it? So what would be the point in testing changes on the old
code?

With devstack you can decide which code you want to install and run by playing
with the configuration files:
1. You can have devstack re-clone the latest trunk code every time you run
stack.sh; to do that, add the RECLONE=yes option [1].
2. You can specify which branch you want to use [2].
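
For example, a minimal localrc along these lines (a sketch only; the exact
branch variable names depend on which services you run, and NEUTRON_BRANCH
here is an assumption based on stackrc):

  RECLONE=yes
  # pin a service to a specific branch instead of trunk, e.g.:
  NEUTRON_BRANCH=stable/havana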

Hope this helps
--
Andrea Rosa
[1]  http://devstack.org/localrc.html
[2] http://devstack.org/stackrc.html




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Distributed Virtual Router Discussion

2013-10-23 Thread Maciocco, Christian
Hi,
I'm interested as well, please include me in the discussion.
Thanks
Christian

-Original Message-
From: Artem Dmytrenko [mailto:nexton...@yahoo.com] 
Sent: Monday, October 21, 2013 11:51 AM
To: yong sheng gong (gong...@unitedstack.com); cloudbe...@gmail.com; OpenStack 
Development Mailing List
Subject: Re: [openstack-dev] Distributed Virtual Router Discussion

Hi Swaminathan.

I work for a virtual networking startup called Midokura, and I'm very interested
in joining the discussion. We currently have a distributed router implementation
using the existing Neutron API. Could you clarify why distributed vs. centrally
located routing implementations need to be distinguished? Another question:
are you proposing a distributed routing implementation for tenant routers, or
for the router connecting the virtual cloud to the external network? The reason
I'm asking is that our company would also like to propose a router
implementation that would eliminate single-point uplink failures. We have
submitted a couple of blueprints on that topic
(https://blueprints.launchpad.net/neutron/+spec/provider-router-support,
https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing) and would
appreciate an opportunity to collaborate on making them a reality.

Note that the images in your document are badly corrupted; maybe my questions
are already answered by your diagrams. Could you update your document with
legible diagrams?

Looking forward to further discussing this topic with you!

Sincerely,
Artem Dmytrenko


On Mon, 10/21/13, Vasudevan, Swaminathan (PNB Roseville)
swaminathan.vasude...@hp.com wrote:

 Subject: [openstack-dev] Distributed Virtual Router Discussion
 To: yong sheng gong (gong...@unitedstack.com), cloudbe...@gmail.com,
 OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
 Date: Monday, October 21, 2013, 12:18 PM

 Hi Folks,
 I am currently working on a blueprint for Distributed Virtual Router.
 If anyone is interested in being part of the discussion, please let me know.
 I have put together a first draft of my blueprint and have posted it on
 Launchpad for review.
 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr

 Thanks.

 Swaminathan Vasudevan
 Systems Software Engineer (TC)

 HP Networking
 Hewlett-Packard
 8000 Foothills Blvd, M/S 5541
 Roseville, CA - 95747
 tel: 916.785.0937
 fax: 916.785.1815
 email: swaminathan.vasude...@hp.com

 -Inline Attachment Follows-

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

2013-10-23 Thread Balaji Patnala
Hi Qing,

Freescale SoCs like the P4080 and T4240 are supported for OpenStack as well.

We have been using them from the OpenStack Diablo release onwards.

We demonstrated at ONS 2013, Interop 2013 and the China Road Show.

Regards,
Balaji.P


On 23 October 2013 08:57, Qing He qing...@radisys.com wrote:

  Matt,

 Great.

 Yes, what processor and Freescale version are you running on? Do you have
 something for tryout?

 Thanks,
 Qing

 From: Matt Riedemann [mailto:mrie...@us.ibm.com]
 Sent: Tuesday, October 22, 2013 8:11 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

 Yeah, my team does.  We're using openvswitch 1.10, qpid 0.22, DB2 10.5
 (but MySQL also works).  Do you have specific issues/questions?

 We're working on getting continuous integration testing working for the
 nova powervm driver in the icehouse release, so you can see some more
 details about what we're doing with openstack on power in this thread:

 http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html



 Thanks,

 MATT RIEDEMANN
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development

 Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
 E-mail: mrie...@us.ibm.com

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States






 From: Qing He qing...@radisys.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Date: 10/22/2013 07:43 PM
 Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux




 Thanks Matt.
 I'd like to know if anyone has tried to run the controller, API server,
 MySQL database, message queue, etc. (the brain of OpenStack) on ppc.
 Qing

 From: Matt Riedemann [mailto:mrie...@us.ibm.com]
 Sent: Tuesday, October 22, 2013 4:17 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

 We run openstack on ppc64 with RHEL 6.4 using the powervm nova virt
 driver.  What do you want to know?

 Thanks,

 MATT RIEDEMANN
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development

 Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
 E-mail: mrie...@us.ibm.com

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States







 From: Qing He qing...@radisys.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Date: 10/22/2013 05:49 PM
 Subject: [openstack-dev] [nova] Openstack on power pc/Freescale linux





 All,
 I'm wondering if anyone has tried OpenStack on PowerPC / Freescale Linux?

 Thanks,
 Qing

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] What's the recipe to build Oozie-4.0.0.tar.gz?

2013-10-23 Thread Nikolay Makhotkin
Hi, Matthew!

Note, apache does not make oozie builds, and we have to build it manually.

To build oozie.tar.gz you should follow the steps below:

1. Download the Oozie source distribution from an Apache mirror (e.g.
http://apache-mirror.rbc.ru/pub/apache/oozie/4.0.0).
2. The build is a Maven project, so you need Maven installed.
3. Download the ExtJS library (extJS-2.2) (http://extjs.com/deploy/ext-2.2.zip)
to enable the Oozie web console in the build.
4. Run mkdistro.sh -DskipTests in the Oozie distribution directory (some tests
fail, so we don't need them to pass).
5. Copy the resulting build file (in distro/target/) to a newly created hadoop
cluster and unpack it.
6. Copy the hadoop jars (including hadoop-core, hadoop-client, hadoop-auth) to
the oozie-dir/libext/ directory (you should create it).
7. Copy ext-2.2.zip to the libext/ directory too.
8. Run:   $ bin/oozie-setup.sh prepare-war -d libext

Then your Oozie package is ready; pack it into a tar.gz and deploy it on
clusters. (A condensed shell sketch of these steps follows below.)
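
The steps above, condensed into a shell sketch (the mirror URL, tarball
names, and paths are illustrative assumptions; adjust versions as needed):

  # assumes maven is installed and the hadoop jars are in $HADOOP_JARS
  wget http://apache-mirror.rbc.ru/pub/apache/oozie/4.0.0/oozie-4.0.0.tar.gz
  tar xzf oozie-4.0.0.tar.gz && cd oozie-4.0.0
  bin/mkdistro.sh -DskipTests                    # build, skipping failing tests
  cd distro/target && tar xzf oozie-4.0.0-distro.tar.gz && cd oozie-4.0.0
  mkdir libext
  cp $HADOOP_JARS/*.jar libext/                  # hadoop-core, -client, -auth
  cp /path/to/ext-2.2.zip libext/                # ExtJS for the web console
  bin/oozie-setup.sh prepare-war -d libext
  cd .. && tar czf oozie-4.0.0-ready.tar.gz oozie-4.0.0   # pack for deployment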

You can find similar instructions for building oozie.tar.gz here:
http://oozie.apache.org/docs/4.0.0/DG_QuickStart.html#Building_Oozie


On Wed, Oct 23, 2013 at 2:01 PM, Alexander Ignatov aigna...@mirantis.com wrote:




  Original Message 
 Subject: [openstack-dev] [savanna] What's the recipe to build Oozie-4.0.0.tar.gz?
 Date: Tue, 22 Oct 2013 15:42:49 -0400
 From: Matthew Farrellee m...@redhat.com
 Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org

 Having diskimage-create.sh is a great addition for the Savanna user
 community. It greatly simplifies the image-building process (using DIB,
 for those of you not familiar with it), making it repeatable and giving
 everyone a hope of debugging issues.

 One thing it does is install oozie. It pulls oozie from 
 http://savanna-files.mirantis.com/oozie-4.0.0.tar.gz

 What's the recipe to create oozie-4.0.0.tar.gz?

 Best,


 matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






-- 
Best Regards,
Nikolay Makhotkin,
Intern Software Engineer,
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

2013-10-23 Thread Samuel Bercovici
Hi,

I assume you are proposing 8:00AM and not 8:00PM PDT.
I will not be able to attend at that time.

A better time for me is between 10:00AM PDT and 12:00PM PDT.

Thanks,
-Sam.






From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, October 23, 2013 11:51 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

Hi Neutron folks!

We're going to have an IRC meeting where we will discuss development plans for 
LBaaS in Icehouse.

Currently I'm proposing to meet on Thursday the 24th at 8:00 PDT in the freenode
#neutron-lbaas channel.

Agenda for the meeting:
1. New features for LBaaS in Icehouse.
Pretty much everything vendors expect to be implemented in Icehouse should be
briefly covered.
2. Feature ordering/dependencies
3. Dev resources evaluation

If the time is not convenient for you, please suggest another time. (It's 
better to have it this week)

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

2013-10-23 Thread Eugene Nikanorov
Hi Sam,

Yes, I meant 8:00AM PDT; 10:00AM-12:00PM PDT works for me as well.
This time doesn't seem to be convenient for Yongsheng, unfortunately, but I
think we should stick to the time that is convenient for the majority of
interested folks.

Thanks,
Eugene.



On Wed, Oct 23, 2013 at 3:01 PM, Samuel Bercovici samu...@radware.com wrote:

  Hi,

  I assume you are proposing 8:00AM and not 8:00PM PDT.
  I will not be able to attend at that time.

  A better time for me is between 10:00AM PDT and 12:00PM PDT.

  Thanks,
  -Sam.

  From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
  Sent: Wednesday, October 23, 2013 11:51 AM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

  Hi Neutron folks!

  We're going to have an IRC meeting where we will discuss development plans
  for LBaaS in Icehouse.

  Currently I'm proposing to meet on Thursday the 24th at 8:00 PDT in the
  freenode #neutron-lbaas channel.

  Agenda for the meeting:
  1. New features for LBaaS in Icehouse.
  Pretty much everything vendors expect to be implemented in Icehouse should
  be briefly covered.
  2. Feature ordering/dependencies
  3. Dev resources evaluation

  If the time is not convenient for you, please suggest another time. (It's
  better to have it this week.)

  Thanks,

  Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Distributed Virtual Router Discussion

2013-10-23 Thread Sylvain Afchain
Hi Swaminathan,

I'm interested as well. On our side we are working on this BP
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

Suggestions are welcome.

Regards,
Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FOSDEM 2014 devroom CFP

2013-10-23 Thread Thierry Carrez
There will be a two-day Virtualisation and IaaS devroom at FOSDEM 2014
(Brussels, February 1-2). See below for the CFP.

Note: For this edition we'll avoid high-level, generic project
presentations and give priority to deep dives and developer-oriented
content, so please take that into account before submitting anything.

--
Call for Participation
--

The scope for this devroom is open source, openly-developed projects in
the areas of virtualisation and IaaS type clouds, ranging from low level
to data center, up to cloud management platforms and cloud resource
orchestration.

Sessions should always target a developer audience. Bonus points for
collaborative sessions that would be appealing to developers from
multiple projects.

We are particularly interested in the following themes:
* low level virtualisation aspects
* new features in classic and container-based virtualisation technologies
* new use cases for virtualisation, such as virtualisation in mobile,
automotive and embedded in general
* other resource virtualisation technologies: networking, storage, …
* deep technical dives into specific IaaS or virtualisation management
projects features
* relationship between IaaS projects and specific dependencies (not just
virtualisation)
* integration and development leveraging solutions from multiple projects


Important dates
---

Submission deadline: Sunday, December 1st, 2013
Acceptance notifications: Sunday, December 15th, 2013
Final schedule announcement: Friday January 10th, 2014
Devroom @ FOSDEM'14: February 1st & 2nd, 2014


Practical
-

Submissions should be 40 minutes, consisting of a 30-minute
presentation with 10 minutes of Q&A, or 40 minutes of discussion (e.g.,
requests for feedback, open discussions, etc.). Interactivity is
encouraged, but optional. Talks are in English only.

We do not provide travel assistance or reimbursement of travel expenses
for accepted speakers.

Submissions should be made via the FOSDEM submission page at
https://penta.fosdem.org/submission/FOSDEM14 :

* If necessary, create a Pentabarf account and activate it
* In the “Person” section, provide First name, Last name (in the
“General” tab), Email (in the “Contact” tab) and Bio (“Abstract” field
in the “Description” tab)
* Submit a proposal by clicking on “Create event”
* Important! Select the Virtualisation and IaaS track (on the
“General” tab)
* Provide the title of your talk (“Event title” in the “General” tab)
* Provide a 250-word description of the subject of the talk and the
intended audience (in the “Abstract” field of the “Description” tab)
* Provide a rough outline of the talk or goals of the session (a short
list of bullet points covering topics that will be discussed) in the
“Full description” field in the “Description” tab


Contact
---

For questions w.r.t. the Virtualisation and IaaS DevRoom at FOSDEM'14,
please contact the organizers via
fosdem14-virt-and-iaas-devr...@googlegroups.com (or via
https://groups.google.com/forum/#!forum/fosdem14-virt-and-iaas-devroom).


This CFP is also visible at:
https://groups.google.com/forum/#!topic/fosdem14-virt-and-iaas-devroom/04y5YkyqzIo

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ml2] Canceling today's ML2 meeting

2013-10-23 Thread Kyle Mestery (kmestery)
Hi folks:

We don't really have an agenda today, so we're going to cancel the
ML2 meeting today. Depending on what ML2 items get selected for
the Summit, we'll likely meet next week to plan the discussions around
those.

One other note: Please keep an eye open for any bug fixes we should
backport to the first stable release of Havana which fix issues in ML2.

Thanks!
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] VPNaaS questions...

2013-10-23 Thread Paul Michali
Hi guys,

Some questions on VPNaaS…

1. Can we get the review of the service type framework changes for VPN on the
server side reopened?
2. I was thinking of trying to rebase that patch, based on the latest from
master, but before doing so, I ran tox on the latest master commit. tox fails
with a bunch of errors, some reporting that the system is out of memory. I have
a 4GB Ubuntu 12.04 VM for this, and I see it max out on memory when tox is run
on the whole Neutron code for py27. Anyone seen this?
3. I have tried the current patch of the service type framework, and found that
client changes are needed too. I have changes ready for review; should I post
them, or do we need to wait (or indicate some dependency on the server-side
changes)?
4. I see that there is a VPN connection status and a VPN service status. What is
the purpose of the latter? What is the status if the service has multiple
connections in different states?
5. Have you guys tried VPNaaS with Havana and the now-default ML2 plugin? I got
a failure on connection create, saying that it could not find the
get_l3_agents_hosting_routers() attribute. I haven't looked into this yet, but
will try as soon as I can.
Thanks!

PCM (Paul Michali)

Contact info for Cisco users http://twiki.cisco.com/Main/pcm




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NOVA][NEUTRON] Whats the correct firewall driver and interface driver to use neutron sec groups in havana

2013-10-23 Thread Leandro Reox
Hi guys,

It seems that I can't find the right combination to get neutron security groups
working with nova and OVS.

- I see the logs on the OVS agent, like sec group updated or rule updated.
- I can configure the rules in neutron without an issue.

BUT

It seems like nova is not doing anything with the rules itself; I don't see
any rootwrap event trying to apply an iptables chain. It's like the
agent is not passing the order to apply the rules to nova.

Here is all the nova.conf stuff, agent logs, and iptables chains:
http://pastebin.com/RMgQxFyN


I don't know what to try to get this working. Maybe I'm using the wrong
firewall driver or something? Or do I need, for example, neutron and
nova to connect to the same queue?

Best
Lean
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] updating password user_crud vs credentials

2013-10-23 Thread Chmouel Boudjnah
Hello,

If I understand correctly (and I may be wrong), we are moving away from
user_crud to using /credentials for updating passwords, including ec2. The
credentials facility was implemented in this blueprint:

https://blueprints.launchpad.net/keystone/+spec/extract-credentials-id

and documented here:

http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_updateUserCredential_v2.0_users__userId__OS-KSADM_credentials__credential-type__.html

I may be low on my grep-fu today, but I can't seem to find anything
implementing something like:

POST /v2.0/users/{userId}/OS-KSADM/credentials/password

only an implementation for OS-EC2.

So my question is: user_crud seems to be the way to update passwords currently
(via the /OS-KSADM/password path); is this something that would need to be
added to /credentials/password in the future?

Cheers,

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS questions...

2013-10-23 Thread Akihiro Motoki
Hi Paul,


On Wed, Oct 23, 2013 at 9:56 PM, Paul Michali p...@cisco.com wrote:

 Hi guys,

 Some questions on VPNaaS…

 Can we get the review reopened of the service type framework changes for VPN 
 on the server side?
 I was thinking of trying to rebase that patch, based on the latest from 
 master, but before doing so, I ran TOX on the latest master commit. TOX fails 
 with a bunch of errors, some reporting that the system is out of memory. I 
 have a 4GB Ubuntu 12.04 VM for this and I see it max out on memory, when TOX 
 is run on the whole Neutron code for py27. Anyone seen this?

I see this too. On a 4GB Ubuntu 13.04 VM, I have over 1GB of swap used while
running the whole test suite, and the tests slow down after swapping begins.

 I have tried the current patch of service type framework, and found that 
 client changes are needed too. I have changes ready for review, should I post 
 them, or do we need to wait (or indicate some dependency on the server side 
 changes)?

My suggestion is to post a patch with WIP status.
We can then test the server-side patch with the CLI. It really helps us all.

 I see that there is VPN connection status and VPN service status. What is the 
 purpose of the latter? What is the status, if the service has multiple 
 connections in different states?

I see the same.

 Have you guys tried VPNaaS with Havana and the now default ML2 plugin? I got 
 a failure on connection create, saying that it could not find 
 get_l3_agents_hosting_routers() attribute. I haven't looked into this yet, 
 but will try as soon as I can.

I think https://bugs.launchpad.net/neutron/+bug/1238846 is the same as
what you encountered.
I believe this bug was fixed in the final RC. Doesn't it work?

Thanks,
Akihiro


 Thanks!

 PCM (Paul Michali)

 Contact info for Cisco users http://twiki.cisco.com/Main/pcm



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] About single entry point in trove-guestagent

2013-10-23 Thread Illia Khudoshyn
Hi Denis, Michael, Vipul and all,

I noticed a discussion on IRC about adding a single entry point (a sort of
'SuperManager') to the guest agent. Let me add my 5 cents.

I agree that we should ultimately avoid code duplication. But from my
experience, only a very small part of the GA Manager can be considered really
duplicated code, namely Manager#prepare(). A 'backup' part may be another
candidate, but I'm not sure yet; it may still be rather service-type specific.
All the rest of the code was just delegating.

If we add a 'SuperManager', all we'll have is just more delegation:

1. There is no use for dynamic loading of the corresponding Manager
implementation, because there will never be more than one service type
supported on a concrete guest. So the current implementation with a
configurable service_type -> ManagerImpl dictionary looks good to me.

2. Neither does the 'SuperManager' provide a common interface for Manager, due
to the dynamic nature of Python. As has been said, trove.guestagent.api.API
provides the list of methods with parameters we need to implement. What I'd
like to have is a description of the types of those params as well as the
return types. (Man, I miss static typing.) All we can do about that is make
sure we have proper unit tests with REAL values for params and returns.

As for the common part of the Managers' code, I'd go for extracting it
into a mixin; a sketch of what I mean follows below.
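
A minimal sketch of that idea (the class and method names below are
hypothetical, and only prepare() is assumed to be truly common):

class CommonManagerMixin(object):
    """Hypothetical mixin: the small, truly shared part of a GA Manager."""

    def prepare(self, context, packages):
        # common pre-setup shared by all service types
        self._install_packages(packages)
        self._do_prepare(context)  # service-type-specific hook

    def _do_prepare(self, context):
        raise NotImplementedError  # each service-type Manager overrides this

    def _install_packages(self, packages):
        pass  # placeholder; a real Manager would call the guestagent pkg module


class MySqlManager(CommonManagerMixin):
    def _do_prepare(self, context):
        pass  # MySQL-specific setup (my.cnf, users, ...) would go here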

Thanks for your attention.

-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com http://www.mirantis.ru/

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-23 Thread Daniel P. Berrange
On Sun, Oct 20, 2013 at 05:01:23AM +, Joshua Harlow wrote:
 I created some gerrit tools that I think others might find useful.
 
 https://github.com/harlowja/gerrit_view
 
 The neat one there is a curses based real time gerrit review receiver
 that uses a similar mechanism as the gerrit irc bot to sit on the
 gerrit event queue and receive events.

Actually, from my POV, the neat one there is the qgerrit script - I had
no idea you could query this info so easily. I've done some work on it
to allow filtering based on project name, commit message string,
approval flags, and, best of all, file path changed. I also improved the
date display to make it clearer how old patches are, which may help
people prioritize reviews of the oldest stuff.

With this, I can now finally keep an eye on any change which impacts the
libvirt driver code:

eg to see all code touching 'nova/virt/libvirt', which has not been
-1'd by jenkins

$ qgerrit -f url -f subject:100 -f approvals -f lastUpdated -f createdOn -p openstack/nova -a v1 nova/virt/libvirt
URL | Subject | Created | Updated | Approvals
----+---------+---------+---------+----------
https://review.openstack.org/33409 | Adding image multiple location support | 127 days | 17 hours | v=1 c=-1,1
https://review.openstack.org/35303 | Stop, Rescue, and Delete should give guest a chance to shutdown | 112 days | 2 hours | v=1,1 c=-1
https://review.openstack.org/35760 | Added monitor (e.g. CPU) to monitor and collect data | 110 days | 18 hours | v=1,1 c=-1,-1
https://review.openstack.org/39929 | Port to oslo.messaging | 82 days | 7 hours | v=1,1
https://review.openstack.org/43984 | Call baselineCPU for full feature list | 56 days | 1 day | v=1,1 c=-1,1,1,1
https://review.openstack.org/44359 | Wait for files to be accessible when migrating | 54 days | 2 days | v=1 c=1,1,1
https://review.openstack.org/45993 | Remove multipath mapping device descriptor | 42 days | 4 hours | v=1,1 c=-1
https://review.openstack.org/46055 | Remove dup of LibvirtISCSIVolumeDriver in LibvirtISERVolumeDriver | 42 days | 18 hours | v=1,1 c=2
https://review.openstack.org/48246 | Disconnect from iSCSI volume sessions after live migration | 28 days | 5 days | v=1
https://review.openstack.org/48362 | Fixing ephemeral disk creation. | 27 days | 16 hours | v=1,1 c=2
https://review.openstack.org/49329 | Add unsafe flag to libvirt live migration call. | 21 days | 6 days | v=1,1 c=-1,-1,1,1,1
https://review.openstack.org/50857 | Apply six for metaclass | 13 days | 6 hours | v=1,1
https://review.openstack.org/51193 | clean up numeric expressions with byte constants | 12 days | 9 hours | v=1
https://review.openstack.org/51282 | nova.exception does not have a ProcessExecutionError | 11 days | 21 hours | v=1,1
https://review.openstack.org/51287 | Remove vim header from nova/virt | 11 days | 2 days | v=1,1 c=-1,-1
https://review.openstack.org/51718 | libvirt: Fix spurious backing file existence check. | 8 days | 5 days | v=1 c=1
https://review.openstack.org/52184 | Reply with a meaningful exception, when libvirt connection is broken. | 6 days | 16 hours | v=1,1 c=2
https://review.openstack.org/52363 | Remove unnecessary steps for cold snapshots | 6 days | 45 mins | v=1,1 c=-1
https://review.openstack.org/52401 | make libvirt driver get_connection thread-safe | 5 days | 3 hours | v=1,1
https://review.openstack.org/52581 | Add context as parameter for resume | 5 days | 2 days | v=1,1,1 c=1,1
https://review.openstack.org/52777 | Optimize libvirt live migration workflow at source | 3 days | 1 day | v=1,1
https://review.openstack.org/52807 | Create image again when resize revert a VM with image type as LVM | 3 days | 3 days | v=1,1
https://review.openstack.org/53069 | Fix lxc 

Re: [openstack-dev] extend Network topology view in horizon

2013-10-23 Thread Raja Srinivasan
Hi Toshi
If you have some documentation on the demo, please share it.


Thanks & Regards
Raja Srinivasan
E:  raja.sriniva...@riverbed.com
P: +1(408) 598-1175

-Original Message-
From: Toshiyuki Hayashi [mailto:haya...@ntti3.com] 
Sent: Tuesday, October 22, 2013 10:47 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] extend Network topology view in horizon

Hi,

Regarding No. 2, I'm going to support FWaaS/LBaaS/VPNaaS, and I've just started
creating a demo for that, so I'll add the blueprint soon.

Thanks,
Toshi



On Tue, Oct 22, 2013 at 2:02 AM, Ofer Blaut obl...@redhat.com wrote:
 Hi

 It would be helpful to extend the network topology view in Horizon:

 1. Admins should be able to see the entire/per-tenant network topology (we
 might need a flag to enable/disable it).

 2. Supporting icons for FWaaS/LBaaS/VPNaaS at both the admin & tenant
 levels, so it will be easy to see the deployments.

 Are there any blueprints to support this?

 Thanks

 Ofer


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Toshiyuki Hayashi
NTT Innovation Institute Inc.
Tel:650-579-0800 ex4292
mail:haya...@ntti3.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

2013-10-23 Thread Qing He
Thanks, Balaji.
Did you keep it up to date with OpenStack releases, or do you still stay with
Diablo?

Qing

From: Balaji Patnala [mailto:patnala...@gmail.com]
Sent: Wednesday, October 23, 2013 3:17 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

Hi Qing,

Freescale SoCs like P4080 and T4240 etc are supported for OpenStack as well.

We have been using them from OpenStack Diablo release onwards.

We demonstrated at ONS 2013, Interop 2013 and China Road Show.

Regards,
Balaji.P

On 23 October 2013 08:57, Qing He qing...@radisys.com wrote:
Matt,

Great.
Yes, what processor and free scale version you are running on? Do you have 
something for tryout?

Thanks,
Qing

From: Matt Riedemann [mailto:mrie...@us.ibm.com]
Sent: Tuesday, October 22, 2013 8:11 PM

To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

Yeah, my team does.  We're using openvswitch 1.10, qpid 0.22, DB2 10.5 (but 
MySQL also works).  Do you have specific issues/questions?

We're working on getting continuous integration testing working for the nova 
powervm driver in the icehouse release, so you can see some more details about 
what we're doing with openstack on power in this thread:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Qing He qing...@radisys.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 10/22/2013 07:43 PM
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux




Thanks Matt.
I'd like to know if anyone has tried to run the controller, API server, MySQL
database, message queue, etc. (the brain of OpenStack) on ppc.
Qing

From: Matt Riedemann [mailto:mrie...@us.ibm.com]
Sent: Tuesday, October 22, 2013 4:17 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

We run openstack on ppc64 with RHEL 6.4 using the powervm nova virt driver.  
What do you want to know?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States







From: Qing He qing...@radisys.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 10/22/2013 05:49 PM
Subject: [openstack-dev] [nova] Openstack on power pc/Freescale linux






All,
I'm wondering if anyone has tried OpenStack on PowerPC / Freescale Linux?

Thanks,
Qing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread John Griffith
On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

 Dave Kranz has been building a system so that we can ensure that during a
 Tempest run services don't spew ERRORs in the logs. Eventually, we're going
 to gate on this, because there is nothing that Tempest does to the system
 that should cause any OpenStack service to ERROR or stack trace (Errors
 should actually be exceptional events indicating that something is wrong
 with the system, not regular events).


So I have to disagree with the approach being taken here, particularly in
the case of Cinder and the negative tests that are in place. When I read
this last week, I assumed you actually meant that Exceptions were
exceptional and nothing in Tempest should cause Exceptions. It turns out
you apparently did mean Errors. I completely disagree here: Errors happen;
some are recovered, some are expected by the tests, etc. Having a policy,
and especially a gate, that says NO ERROR MESSAGES in logs makes absolutely
no sense to me.

Something like NO TRACE/EXCEPTION MESSAGES in logs I can agree with, but
this makes no sense to me. By the way, here's a perfect example:
https://bugs.launchpad.net/cinder/+bug/1243485

As long as we have Tempest tests that do things like show a non-existent
volume, you're going to get an Error message, and I think that you should,
quite frankly.



 Ceilometer is currently one of the largest offenders in dumping ERRORs in
 the gate -
 http://logs.openstack.org/68/52768/1/check/check-tempest-devstack-vm-full/76f83a4/console.html#_2013-10-19_14_51_51_271
 (that item isn't in our whitelist yet, so you'll see a lot of it at the end
 of every run)

 and
 http://logs.openstack.org/68/52768/1/check/check-tempest-devstack-vm-full/76f83a4/logs/screen-ceilometer-collector.txt.gz?level=TRACE
 for full details

 This seems like something is wrong in the integration, and it would be
 really helpful if we could get ceilometer eyes on this one to put ceilo
 into a non-erroring state.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread Sean Dague

On 10/23/2013 10:40 AM, John Griffith wrote:




On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

Dave Kranz has been building a system so that we can ensure that
during a Tempest run services don't spew ERRORs in the logs.
Eventually, we're going to gate on this, because there is nothing
that Tempest does to the system that should cause any OpenStack
service to ERROR or stack trace (Errors should actually be
exceptional events indicating that something is wrong with the system,
not regular events).


So I have to disagree with the approach being taken here, particularly
in the case of Cinder and the negative tests that are in place. When I
read this last week, I assumed you actually meant that Exceptions were
exceptional and nothing in Tempest should cause Exceptions. It turns
out you apparently did mean Errors. I completely disagree here: Errors
happen; some are recovered, some are expected by the tests, etc. Having
a policy, and especially a gate, that says NO ERROR MESSAGES in logs
makes absolutely no sense to me.

Something like NO TRACE/EXCEPTION MESSAGES in logs I can agree with, but
this makes no sense to me. By the way, here's a perfect example:
https://bugs.launchpad.net/cinder/+bug/1243485

As long as we have Tempest tests that do things like show a non-existent
volume, you're going to get an Error message, and I think that you should,
quite frankly.


Ok, I guess that's where we probably need to clarify what Not Found
is, because Not Found to me seems like it should be a request logged at
INFO level, not ERROR.

ERROR from an admin perspective should really be something that would be
suitable for sending an alert to an administrator for them to come and
fix the cloud.

TRACE is actually a lower level of severity in our log systems than
ERROR is.
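
To illustrate the distinction being argued here (a sketch only; the function,
messages, and logger setup are invented, not taken from Cinder):

import logging

LOG = logging.getLogger(__name__)

def show_volume(volume_id, volumes):
    vol = volumes.get(volume_id)
    if vol is None:
        # expected negative path: the client asked for something that
        # doesn't exist, so log at INFO and let the API return a 404
        LOG.info("volume %s not found", volume_id)
        return None
    return vol

def attach_volume(volume_id):
    try:
        pass  # real attach logic would go here
    except OSError:
        # operator-actionable failure: something is wrong with the cloud,
        # so ERROR (with the traceback, via exception()) is appropriate
        LOG.exception("attach failed for volume %s", volume_id)
        raise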


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS questions...

2013-10-23 Thread Paul Michali
See PCM: in-line.


PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali

On Oct 23, 2013, at 9:41 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Paul,
 
 
 On Wed, Oct 23, 2013 at 9:56 PM, Paul Michali p...@cisco.com wrote:
 
 Hi guys,
 
 Some questions on VPNaaS…
 
 Can we get the review reopened of the service type framework changes for VPN 
 on the server side?
 I was thinking of trying to rebase that patch, based on the latest from 
 master, but before doing so, I ran TOX on the latest master commit. TOX 
 fails with a bunch of errors, some reporting that the system is out of 
 memory. I have a 4GB Ubuntu 12.04 VM for this and I see it max out on 
 memory, when TOX is run on the whole Neutron code for py27. Anyone seen this?
 
  I see this too. On a 4GB Ubuntu 13.04 VM, I have over 1GB of swap used while
  running the whole test suite, and the tests slow down after swapping begins.

PCM: Whew! I was worried that it was something in my setup.  Any idea on a root 
cause/workaround? Is this happening when Jenkins runs?




 
 I have tried the current patch of service type framework, and found that 
 client changes are needed too. I have changes ready for review, should I 
 post them, or do we need to wait (or indicate some dependency on the server 
 side changes)?
 
 My suggestion is to post a patch with WIP status.
 We can test the server side patch with CLI. It really helps us all.

PCM: Thanks! I wasn't sure how to proceed as the client change is useless w/o 
the server change.


 
 I see that there is VPN connection status and VPN service status. What is 
 the purpose of the latter? What is the status, if the service has multiple 
 connections in different states?
 
 I see the same.

PCM: Yeah, need to understand what the desired meaning is for the service 
status in this context.



 
 Have you guys tried VPNaaS with Havana and the now default ML2 plugin? I got 
 a failure on connection create, saying that it could not find 
 get_l3_agents_hosting_routers() attribute. I haven't looked into this yet, 
 but will try as soon as I can.
 
 I think https://bugs.launchpad.net/neutron/+bug/1238846 is same as
 what you encountered.
 I believe this bug was fixed in the final RC. Doesn't it work?

PCM: Ah, I missed that bug review. I probably need to update my repo with the 
latest to pick this up.  Thanks!

Regards,

PCM


 
 Thanks,
 Akihiro
 
 
 Thanks!
 
 PCM (Paul Michali)
 
 Contact info for Cisco users http://twiki.cisco.com/Main/pcm
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread John Griffith
On Wed, Oct 23, 2013 at 8:47 AM, Sean Dague s...@dague.net wrote:

 On 10/23/2013 10:40 AM, John Griffith wrote:




  On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

 Dave Kranz has been building a system so that we can ensure that
 during a Tempest run services don't spew ERRORs in the logs.
 Eventually, we're going to gate on this, because there is nothing
 that Tempest does to the system that should cause any OpenStack
  service to ERROR or stack trace (Errors should actually be
  exceptional events indicating that something is wrong with the system,
  not regular events).


 So I have to disagree with the approach being taken here.  Particularly
 in the case of Cinder and the negative tests that are in place.  When I
 read this last week I assumed you actually meant that Exceptions were
 exceptional and nothing in Tempest should cause Exceptions.  It turns
 out you apparently did mean Errors.  I completely disagree here, Errors
 happen, some are recovered, some are expected by the tests etc.  Having
 a policy and especially a gate that says NO ERROR MESSAGE in logs makes
 absolutely no sense to me.

 Something like NO TRACE/EXCEPTION MESSAGE in logs I can agree with, but
 this makes no sense to me.  By the way, here's a perfect example:
  https://bugs.launchpad.net/cinder/+bug/1243485

 As long as we have Tempest tests that do things like show non-existent
 volume you're going to get an Error message and I think that you should
 quite frankly.


 Ok, I guess that's where we probably need to clarify what Not Found is,
  because Not Found seems to me like something that should be logged at
  INFO level, not ERROR.



  ERROR from an admin perspective should really be something that would be
  suitable for sending an alert to an administrator for them to come and fix
  the cloud.

 TRACE is actually a lower level of severity in our log systems than ERROR
 is.


Sorry, by Trace I was referring to unhandled stack/exception trace messages
in the logs.



 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-23 Thread James E. Blair
Daniel P. Berrange berra...@redhat.com writes:

 Actually, from my POV, the neat one there is the qgerrit script - I had
 no idea you could query this info so easily.

FYI the query syntax for SSH and the web is the same, so you can also
make a bookmark for a query like that.  The search syntax is here:

  https://review.openstack.org/Documentation/user-search.html
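
For example, the same query works over the SSH API; a small sketch in
Python (assumes you have a Gerrit account and SSH access on port 29418):

    import json
    import subprocess

    # Each output line is a JSON record for one change; the last line is
    # a query-stats record, so it is skipped below.
    out = subprocess.check_output(
        ['ssh', '-p', '29418', 'review.openstack.org',
         'gerrit', 'query', '--format=JSON',
         'project:openstack/nova status:open'])
    for line in out.splitlines()[:-1]:
        change = json.loads(line)
        print(change.get('url'), change.get('subject'))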

In the next version of Gerrit, you can actually make a dashboard based
on such queries.

However, note the following in the docs about the file operator:

  Currently this operator is only available on a watched project and may
  not be used in the search bar.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing before sending for review

2013-10-23 Thread Ben Nemec

On 2013-10-23 04:29, Rosa, Andrea (HP Cloud Services) wrote:

Hi

2. Before submitting the new patch for review it's better to run unit 
tests (tox -epy27) and pep8 check (tox -epep8)


Instead of pep8 I think you should run flake8; we moved to that some
months ago [1].
I also always find it useful to test my changes in devstack.
Regards
--
Andrea Rosa

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-May/009178.html


tox -epep8 actually runs flake8.  It just wasn't renamed to avoid 
breaking people's existing workflow.


Also, just running tox will run all of the appropriate tests for your 
environment, which is probably what you want before pushing changes.  
Specifying -e is useful while you're working on something though so you 
don't have to run flake8 every time.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-23 Thread Joshua Harlow
Wow, awesomeness!

I'll put out a 0.2 on pypi when u are ready with that, very cool.

Sent from my really tiny device...

 On Oct 23, 2013, at 7:12 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Sun, Oct 20, 2013 at 05:01:23AM +0000, Joshua Harlow wrote:
 I created some gerrit tools that I think others might find useful.
 
 https://github.com/harlowja/gerrit_view
 
 The neat one there is a curses based real time gerrit review receiver
 that uses a similar mechanism as the gerrit irc bot to sit on the
 gerrit event queue and receive events.
 
 Actually, from my POV, the neat one there is the qgerrit script - I had
 no idea you could query this info so easily. I've done some work on it
 to allow you to filter based on project name, commit message string,
 approval flags, and best of all, file path changed. I also improved the
 date display to make it clearer how old patches are, which may help
 people prioritize reviews for oldest stuff.
 
 With this, I can now finally keep an eye on any change which impacts the
 libvirt driver code:
 
 eg to see all code touching 'nova/virt/libvirt', which has not been
 -1'd by jenkins
 
 $ qgerrit -f url -f subject:100 -f approvals -f lastUpdated -f createdOn -p openstack/nova -a v1 nova/virt/libvirt
 +------------------------------------+------------------------------------------------------------------------+----------+----------+---------------------+
 | URL                                | Subject                                                                | Created  | Updated  | Approvals           |
 +------------------------------------+------------------------------------------------------------------------+----------+----------+---------------------+
 | https://review.openstack.org/33409 | Adding image multiple location support | 127 days | 17 hours | v=1 c=-1,1 |
 | https://review.openstack.org/35303 | Stop, Rescue, and Delete should give guest a chance to shutdown | 112 days | 2 hours | v=1,1 c=-1 |
 | https://review.openstack.org/35760 | Added monitor (e.g. CPU) to monitor and collect data | 110 days | 18 hours | v=1,1 c=-1,-1 |
 | https://review.openstack.org/39929 | Port to oslo.messaging | 82 days | 7 hours | v=1,1 |
 | https://review.openstack.org/43984 | Call baselineCPU for full feature list | 56 days | 1 day | v=1,1 c=-1,1,1,1 |
 | https://review.openstack.org/44359 | Wait for files to be accessible when migrating | 54 days | 2 days | v=1 c=1,1,1 |
 | https://review.openstack.org/45993 | Remove multipath mapping device descriptor | 42 days | 4 hours | v=1,1 c=-1 |
 | https://review.openstack.org/46055 | Remove dup of LibvirtISCSIVolumeDriver in LibvirtISERVolumeDriver | 42 days | 18 hours | v=1,1 c=2 |
 | https://review.openstack.org/48246 | Disconnect from iSCSI volume sessions after live migration | 28 days | 5 days | v=1 |
 | https://review.openstack.org/48362 | Fixing ephemeral disk creation. | 27 days | 16 hours | v=1,1 c=2 |
 | https://review.openstack.org/49329 | Add unsafe flag to libvirt live migration call. | 21 days | 6 days | v=1,1 c=-1,-1,1,1,1 |
 | https://review.openstack.org/50857 | Apply six for metaclass | 13 days | 6 hours | v=1,1 |
 | https://review.openstack.org/51193 | clean up numeric expressions with byte constants | 12 days | 9 hours | v=1 |
 | https://review.openstack.org/51282 | nova.exception does not have a ProcessExecutionError | 11 days | 21 hours | v=1,1 |
 | https://review.openstack.org/51287 | Remove vim header from from nova/virt | 11 days | 2 days | v=1,1 c=-1,-1 |
 | https://review.openstack.org/51718 | libvirt: Fix spurious backing file existence check. | 8 days | 5 days | v=1 c=1 |
 | https://review.openstack.org/52184 | Reply with a meaningful exception, when libvirt connection is broken. | 6 days | 16 hours | v=1,1 c=2 |
 | https://review.openstack.org/52363 | Remove unnecessary steps for cold snapshots | 6 days | 45 mins | v=1,1 c=-1 |
 | https://review.openstack.org/52401 | make libvirt driver get_connection thread-safe | 5 days | 3 hours | v=1,1 |
 | https://review.openstack.org/52581 | Add context as parameter for resume | 5 days | 2 days | v=1,1,1 c=1,1 |
 | https://review.openstack.org/52777 | Optimize libvirt live migration workflow at 

Re: [openstack-dev] [keystone] updating password user_crud vs credentials

2013-10-23 Thread Adam Young

On 10/23/2013 09:14 AM, Chmouel Boudjnah wrote:

Hello,

If i understand correctly (and I may be wrong) we are moving away from 
user_crud to use /credentials for updating password including ec2. The 
credentials facility was implemented in this blueprint :


https://blueprints.launchpad.net/keystone/+spec/extract-credentials-id

and documented here :

http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_updateUserCredential_v2.0_users__userId__OS-KSADM_credentials__credential-type__.html

I may be low on my grep-fu today but I can't seem to find anything 
implementing something like :


POST /v2.0/users/{userId}/OS-KSADM/credentials/password

but only implemented for OS-EC2

So my question is: user_crud seems to be the way to update a password 
currently (via the /OS-KSADM/password path); is this something that would 
need to be added in the future to /credentials/password?




There seem to be multiple conflicting views on how to push ahead. One 
is this API approach:


https://review.openstack.org/#/c/46771/

But I think we need to discuss it further.


Cheers,

Chmouel.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-23 Thread Daniel P. Berrange
On Wed, Oct 23, 2013 at 03:05:32PM +, James E. Blair wrote:
 Daniel P. Berrange berra...@redhat.com writes:
 
  Actually, from my POV, the neat one there is the qgerrit script - I had
  no idea you could query this info so easily.
 
 FYI the query syntax for SSH and the web is the same, so you can also
 make a bookmark for a query like that.  The search syntax is here:
 
   https://review.openstack.org/Documentation/user-search.html
 
 In the next version of Gerrit, you can actually make a dashboard based
 on such queries.
 
 However, note the following in the docs about the file operator:
 
   Currently this operator is only available on a watched project and may
   not be used in the search bar.

Yeah, I'm only too well aware of that limitation  - makes it
basically useless to me :-(  Also I can't see a way to filter
based on the jenkins  code review +1/+2/-1/-2 flags

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Blueprint review process

2013-10-23 Thread Russell Bryant
Greetings,

At the last Nova meeting we started talking about some updates to the
Nova blueprint process for the Icehouse cycle.  I had hoped we could
talk about and finalize this in a Nova design summit session on Nova
Project Structure and Process [1], but I think we need to push forward
on finalizing this as soon as possible so that it doesn't block current
work being done.

Here is a first cut at the process.  Let me know what you think is
missing or should change.  I'll get the result of this thread posted on
the wiki.

1) Proposing a Blueprint

Proposing a blueprint for Nova is not much different than other
projects.  You should follow the instructions here:

https://wiki.openstack.org/wiki/Blueprints

The particularly important step that seems to be missed by most is:

Once it is ready for PTL review, you should set:

Milestone: Which part of the release cycle you think your work will be
proposed for merging.

That is really important.  Due to the volume of Nova blueprints, it
probably will not be seen until you do this.

2) Blueprint Review Team

Ensuring blueprints get reviewed is one of the responsibilities of the
PTL.  However, due to the volume of Nova blueprints, it's not practical
for me to do it alone.  A team of people (nova-drivers) [2], a subset of
nova-core, will be doing blueprint reviews.

By having more people reviewing blueprints, we can do a more thorough
job and have a higher quality result.

Note that even though there is a nova-drivers team, *everyone* is
encouraged to participate in the review process by providing feedback on
the mailing list.

3) Blueprint Review Criteria

Here are some things that the team reviewing blueprints should look for:

The blueprint ...

 - is assigned to the person signing up to do the work

 - has been targeted to the milestone when the code is
   planned to be completed

 - is an appropriate feature for Nova.  This means it fits with the
   vision for Nova and OpenStack overall.  This is obviously very
   subjective, but the result should represent consensus.

 - includes enough detail to be able to complete an initial design
   review before approving the blueprint. In many cases, the design
   review may result in a discussion on the mailing list to work
   through details. A link to this discussion should be left in the
   whiteboard of the blueprint for reference.  This initial design
   review should be completed before the blueprint is approved.

 - includes information that describes the user impact (or lack of).
   Between the blueprint and text that comes with the DocImpact flag [3]
   in commits, the docs team should have *everything* they need to
   thoroughly document the feature.

Once the review has been completed, the blueprint should be marked as
approved and the priority should be set.  A set priority is how we know
from the blueprint list which ones have already been reviewed.

4) Blueprint Prioritization

I would like to do a better job of using priorities in Icehouse.  The
priority field serves a couple of purposes:

  - helps reviewers prioritize their time

  - helps set expectations for the submitter for how reviewing this
work stacks up against other things

In the last meeting we discussed an idea that I think is worth trying at
least for icehouse-1 to see if we like it or not.  The idea is that
*every* blueprint starts out at a Low priority, which means best
effort, but no promises.  For a blueprint to get prioritized higher, it
should have 2 nova-core members signed up to review the resulting code.

If we do this, I suspect we may end up with more blueprints at Low, but
I also think we'll end up with a more realistic list of blueprints.  The
reality is if a feature doesn't have reviewers agreeing to do the
review, it really is in a best effort, but no promises situation.

5) Blueprint Fall Cleaning

Finally, it's about time we do some cleaning of the blueprint backlog.
There are a bunch not currently being worked on.  I propose that we
close out all blueprints not targeted at a release milestone by November
22 (2 weeks after the end of the design summit), with the exception of
anything just recently filed and still being drafted.


[1] http://summit.openstack.org/cfp/details/341
[2] https://launchpad.net/~nova-drivers/+members#active
[3]
http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] updating password user_crud vs credentials

2013-10-23 Thread Dolph Mathews
On Wed, Oct 23, 2013 at 8:14 AM, Chmouel Boudjnah chmo...@enovance.com wrote:

 Hello,

 If i understand correctly (and I may be wrong) we are moving away from
 user_crud to use /credentials for updating password including ec2. The
 credentials facility was implemented in this blueprint :

 https://blueprints.launchpad.net/keystone/+spec/extract-credentials-id

 and documented here :


 http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_updateUserCredential_v2.0_users__userId__OS-KSADM_credentials__credential-type__.html

 I may be low on my grep-fu today but I can't seem to find anything
 implementing something like :

  POST /v2.0/users/{userId}/OS-KSADM/credentials/password


The v3 version of this call is in progress:
https://blueprints.launchpad.net/keystone/+spec/v3-user-update-own-password


 but only implemented for OS-EC2

  So my question is: user_crud seems to be the way to update a password
  currently (via the /OS-KSADM/password path); is this something that would
  need to be added in the future to /credentials/password?

That's sort of being tackled here, with slightly different terminology:

https://blueprints.launchpad.net/keystone/+spec/access-key-authentication

Regular passwords are currently backed to the identity driver, but
there's no reason why they couldn't be managed via /v3/credentials.


 Cheers,

 Chmouel.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Replication and Clustering API

2013-10-23 Thread Daniel Salinas
Do you have any specific examples of that?  I'm not opposed to adding more
to the replication contract, but I want to be sure that it is important to
the end user of the API.  I can see some edge cases for saying
replication_type: async-master-slave or something like that.  This
becomes more important for datastore technologies that support multiple
replication types/methodologies.  Imagine a world where Trove supports
using MySQL async replication as well as something like Tungsten Replicator.
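
For example, an extended contract might look roughly like this (purely
illustrative; the keys below are hypothetical, not a settled schema):

    # Hypothetical instance-create payload fragment, as a Python dict:
    instance = {
        'datastore_type': 'mysql',
        'datastore_version': '5.5',
        'replication': {
            'replication_type': 'async-master-slave',   # illustrative value
            'replicates_from': ['some-instance-uuid'],  # hypothetical key
        },
    }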


On Wed, Oct 23, 2013 at 10:06 AM, Daniel Morris daniel.mor...@rackspace.com wrote:

  Would it be beneficial in this case to extend the meta-data model of the
  replication contract to allow for additional key/value pairs in the
  meta-data, to account for DB-specific and/or replication- and
  clustering-specific meta-data?

 -Daniel

 From:  Daniel Salinas imsplit...@gmail.com
 Reply-To:  OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date:  Wednesday, October 23, 2013 9:42 AM
 To:  OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 Subject:  Re: [openstack-dev] [Trove] Replication and Clustering API


  Galera cluster, in this model, would be considered a service type or
  datastore type, not a replication type.  All clusters would be treated
  this way.  The method of replication is really not important to the API,
  IMO; rather, the contract should reflect which host has copies of data
  (in whole or in part) on other hosts.  How the data gets to each host is
  a function of the underlying technology.  That is not to say that we
  couldn't add more verbose information to the replication contract, but I
  haven't yet seen where or how that's important to the end user.
 
 
 
 On Tue, Oct 22, 2013 at 5:32 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:
 
 Hi,
 
 I don't see the replication type in the metadata replication contract.
 For example someone can use MySQL Galera cluster with synchronous
 replication + asynchronous replication master-slave for backup to remote
 site.
 
  MS SQL offers AlwaysOn Availability Groups clustering with a pair of
  synchronous replicas plus up to 3 nodes with asynchronous replication.
  There are also other existing mechanisms like database mirroring
  (synchronous or asynchronous) or log shipping.
 
 So my point is that when you say replication, it is not obvious which
 type of replication is used.
 
 Thanks
 Georgy
 
 
 
 
 
 On Tue, Oct 22, 2013 at 12:37 PM, Daniel Salinas
 imsplit...@gmail.com wrote:
 
 
 
 We have drawn up a new spec for the clustering api which removes the
 concept of a /clusters path as well as the need for the /clustertypes
 path.  The spec lives here now:
 
 https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
 
 
 Initially I'd like to get eyes on this and see if we can't generate some
 discussion.  This proposal is far reaching and will ultimately require a
 major versioning of the trove API to support.  It is an amalgam of ideas
 from Vipul, hub_cap and a few others but
  we feel like this gets us much closer to having a more intuitive
 interface for users.  Please peruse the document and lets start working
 through any issues.
 
 I would like to discuss the API proposal tomorrow during our weekly
 meeting but I would welcome comments/concerns on the mailing list as well.
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 
 
 --
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
  http://www.mirantis.com
  Tel. +1 650 963 9828
  Mob. +1 650 996 3284
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] updating password user_crud vs credentials

2013-10-23 Thread Adam Young

On 10/23/2013 11:35 AM, Dolph Mathews wrote:


On Wed, Oct 23, 2013 at 8:14 AM, Chmouel Boudjnah chmo...@enovance.com wrote:


Hello,

If i understand correctly (and I may be wrong) we are moving away
from user_crud to use /credentials for updating password including
ec2. The credentials facility was implemented in this blueprint :

https://blueprints.launchpad.net/keystone/+spec/extract-credentials-id

and documented here :


http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_updateUserCredential_v2.0_users__userId__OS-KSADM_credentials__credential-type__.html

I may be low on my grep-fu today but I can't seem to find anything
implementing something like :

POST /v2.0/users/{userId}/OS-KSADM/credentials/password


The v3 version of this call is in progress: 
https://blueprints.launchpad.net/keystone/+spec/v3-user-update-own-password


but only implemented for OS-EC2

So my question is: user_crud seems to be the way to update a password
currently (via the /OS-KSADM/password path); is this something that
would need to be added in the future to /credentials/password?

That's sort of being tackled here, with slightly different terminology:

https://blueprints.launchpad.net/keystone/+spec/access-key-authentication

Regular passwords are currently backed to the identity driver, but 
there's no reason why they couldn't be managed via /v3/credentials.

+1  :  I think this is the right approach.


Cheers,

Chmouel.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

2013-10-23 Thread Sumit Naiksatam
So is it at 8 AM PDT, or 10 AM PDT?

Thanks,
~Sumit.


On Wed, Oct 23, 2013 at 4:18 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi Sam,

  Yes, I meant 8:00AM PDT; 10:00AM–12:00PM PDT works for me as well.
 Looks like this time is not convenient for Yongsheng, unfortunately, but I
 think we should stick to the time that is convenient for the majority of
 interested folks.

 Thanks,
 Eugene.



  On Wed, Oct 23, 2013 at 3:01 PM, Samuel Bercovici samu...@radware.com wrote:

   Hi,

  I assume you are proposing 8:00AM and not 8:00PM PDT.

  I will not be able to attend at this time.

  A better time for me is between 10:00AM PDT – 12:00PM PDT.

  Thanks,

  -Sam.

  From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
  Sent: Wednesday, October 23, 2013 11:51 AM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

  Hi Neutron folks!

  We're going to have an IRC meeting where we will discuss development
  plans for LBaaS in Icehouse.

  Currently I'm proposing to meet on Thursday the 24th, at 8:00 PDT, on the
  freenode #neutron-lbaas channel.

  Agenda for the meeting:

  1. New features for LBaaS in Icehouse. Pretty much everything vendors
  expect to be implemented in Icehouse should be briefly covered.

  2. Feature ordering/dependencies

  3. Dev resources evaluation

  If the time is not convenient for you, please suggest another time. (It's
  better to have it this week.)

  Thanks,

  Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-23 Thread Ilya Sviridov
Setting aside the strategy for selecting the default behavior:

Let me share with you my ideas about configuration management in Trove and
how the datastore concept can help with that.

Initially there was only one database and all configuration was in one
config file.
With the addition of new databases and the heat provisioning mechanism, we
are introducing more options.

Not only assigning a specific image_id, but custom packages, heat templates,
and probably specific strategies for working with security groups.
Such needs already exist because we have a lot of optional things in the
config, and any new feature is implemented with an eye to already
existing legacy installations of Trove.

What actually is datastore_type + datastore_version?

It is the model which glues all the bricks together, so let us use it for
all the variable parts of the *service type* configuration.

From the current config file:

# Trove DNS
trove_dns_support = False

# Trove Security Groups for Instances
trove_security_groups_support = True
trove_security_groups_rules_support = False
trove_security_group_rule_protocol = tcp
trove_security_group_rule_port = 3306
trove_security_group_rule_cidr = 0.0.0.0/0

#guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
#cloudinit_location = /etc/trove/cloudinit

block_device_mapping = vdb
device_path = /dev/vdb
mount_point = /var/lib/mysql

All of those configurations can be moved to the data_store (some defined in
heat templates) and be manageable by the operator in case any default
behavior needs to be changed.

The Trove config then becomes specific to core functionality only.

What do you think about it?


With best regards,
Ilya Sviridov

http://www.mirantis.ru/


On Tue, Oct 22, 2013 at 8:21 PM, Michael Basnight mbasni...@gmail.com wrote:


 On Oct 22, 2013, at 9:34 AM, Tim Simpson wrote:

   It's not intuitive to the user if they are specifying a version
 alone.  You don't boot a 'version' of something without specifying what
 that something is.  I would rather they only specified the datastore_type
 alone, and not have them specify a version at all.
 
  I agree for most users just selecting the datastore_type would be most
 intuitive.
 
  However, when they specify a version it's going to be a GUID, which they
 could only possibly know if they have recently enumerated all versions and
 thus *know* the version is for the given type they want. In that case I
 don't think most users would appreciate having to also pass the type; it
 would just be redundant. So in that case why not make it optional?

 I'm ok w/ making either optional if the criteria for selecting the
 _other_ is not ambiguous.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

2013-10-23 Thread Eugene Nikanorov
So currently it moves to 10AM PDT.

Thanks,
Eugene.


On Wed, Oct 23, 2013 at 9:07 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

 So is it at 8 AM PDT, or 10 AM PDT?

 Thanks,
 ~Sumit.


 On Wed, Oct 23, 2013 at 4:18 AM, Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi Sam,

  Yes, I meant 8:00AM PDT; 10:00AM–12:00PM PDT works for me as well.
 Looks like this time is not convenient for Yongsheng, unfortunately, but
 I think we should stick to the time that is convenient for the majority of
 interested folks.

 Thanks,
 Eugene.



  On Wed, Oct 23, 2013 at 3:01 PM, Samuel Bercovici samu...@radware.com wrote:

   Hi,

  I assume you are proposing 8:00AM and not 8:00PM PDT.

  I will not be able to attend at this time.

  A better time for me is between 10:00AM PDT – 12:00PM PDT.

  Thanks,

  -Sam.

  From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
  Sent: Wednesday, October 23, 2013 11:51 AM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

  Hi Neutron folks!

  We're going to have an IRC meeting where we will discuss development
  plans for LBaaS in Icehouse.

  Currently I'm proposing to meet on Thursday the 24th, at 8:00 PDT, on the
  freenode #neutron-lbaas channel.

  Agenda for the meeting:

  1. New features for LBaaS in Icehouse. Pretty much everything vendors
  expect to be implemented in Icehouse should be briefly covered.

  2. Feature ordering/dependencies

  3. Dev resources evaluation

  If the time is not convenient for you, please suggest another time.
  (It's better to have it this week.)

  Thanks,

  Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Patrick Petit

Dear Steve and All,

If I may add to this already busy thread to share our experience with 
using Heat in large and complex software deployments.


I work on a project which precisely provides additional value at the 
articulation point between resource orchestration automation and 
configuration management. We rely on Heat and chef-solo respectively for 
these base management functions. On top of this, we have developed an 
event-driven workflow to manage the life-cycles of complex software 
stacks whose primary purpose is to support middleware components as 
opposed to end-user apps. Our use cases are peculiar in the sense that 
software setup (install, config, contextualization) is not a one-time 
operation issue but a continuous thing that can happen any time in 
life-span of a stack. Users can deploy (and undeploy) apps long time 
after the stack is created. Auto-scaling may also result in an 
asynchronous apps deployment. More about this latter. The framework we 
have designed works well for us. It clearly refers to a PaaS-like 
environment which I understand is not the topic of the HOT software 
configuration proposal(s) and that's absolutely fine with us. However, 
the question for us is whether the separation of software config from 
resources would make our life easier or not. I think the answer is 
definitely yes but at the condition that the DSL extension preserves 
almost everything from the expressiveness of the resource element. In 
practice, I think that a strict separation between resource and 
component will be hard to achieve because we'll always need a little bit 
of application specifics in the resources. Take for example the case of 
the SecurityGroups. The ports open in a SecurityGroup are application 
specific.


Then, designing a Chef or Puppet component type may be harder than it 
looks at first glance. Speaking of our use cases we still need a little 
bit of scripting in the instance's user-data block to setup a working 
chef-solo environment. For example, we run librarian-chef prior to 
starting chef-solo to resolve the cookbook dependencies. A cookbook can 
present itself as a downloadable tarball but it's not always the case. A 
chef component type would have to support getting a cookbook from a 
public or private git repo (maybe subversion), handle situations where 
there is one cookbook per repo or multiple cookbooks per repo, let the 
user choose a particular branch or label, provide ssh keys if it's a 
private repo, and so forth. We support all of this scenarios and so we 
can provide more detailed requirements if needed.


I am not sure adding component relations like the 'depends-on' would 
really help us since it is the job of config management to handle 
software dependencies. Also, it doesn't address the issue of circular 
dependencies. Circular dependencies occur in complex software stack 
deployments. Example: when we set up a Slurm virtual cluster, both the 
head node and compute nodes depend on one another to complete their 
configuration and so they would wait for each other indefinitely if we 
were to rely on the 'depends-on'. In addition, I think it's critical to 
distinguish between configuration parameters which are known ahead of 
time, like a db name or user name and password, versus contextualization 
parameters which are known after the fact generally when the instance is 
created. Typically those contextualization parameters are IP addresses 
but not only. The fact packages x,y,z have been properly installed and 
services a,b,c successfully started is contextualization information 
(a.k.a facts) which may be indicative that other components can move on 
to the next setup stage.


The case of complex deployments with or without circular dependencies is 
typically resolved by making the system converge toward the desirable 
end-state through running idempotent recipes. This is our approach. The 
first configuration phase handles parametrization which in general 
brings an instance to CREATE_COMPLETE state. A second phase follows to 
handle contextualization at the stack level. As a matter of fact, a new 
contextualization should be triggered every time an instance enters or 
leave the CREATE_COMPLETE state which may happen any time with 
auto-scaling. In that phase, circular dependencies can be resolved 
because all contextualization data can be compiled globally. Notice that 
Heat doesn't provide a purpose built resource or service like Chef's 
data-bag for the storage and retrieval of metadata. This a gap which IMO 
should be addressed in the proposal. Currently, we use a kludge that is 
to create a fake AWS::AutoScaling::LaunchConfiguration resource to store 
contextualization data in the metadata section of that resource.


Aside from the HOT software configuration proposal(s). There are two 
critical enhancements in Heat that would make software life-cycles 
management much easier. In fact, they are actual blockers for us.


The first one would be 

[openstack-dev] Glance Client - new blue print

2013-10-23 Thread GROSZ, Maty (Maty)
Hi *,

I have uploaded a new blueprint regarding the Glance client.
The basic idea is to make the schema validation check configurable 
(validate the schemas or not), or let the user decide whether to skip the 
validation.
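
For example, something along these lines (purely illustrative; the keyword
name is not decided):

    from glanceclient import Client

    # Hypothetical keyword argument sketching the proposed behavior:
    glance = Client('2', endpoint='http://glance:9292', token=token,
                    schema_validation=False)  # skip client-side schema checks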

The blueprint can be viewed here:

https://blueprints.launchpad.net/python-glanceclient/+spec/make-schema-api-calls-configurable

I would appreciate it if you could review it and progress it to the next step.

Thanks,

Maty

Maty Grosz
Alcatel-Lucent
APIs Functional Owner, R&D
CLOUDBAND BUSINESS UNIT
16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
T: +972 (0) 9 7933078
F: +972 (0) 9 7933700
maty.gr...@alcatel-lucent.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] EC2 Compatibility metadata on config drive, fixed IP info

2013-10-23 Thread Lorin Hochstein
On Tue, Oct 22, 2013 at 7:35 PM,
openstack-dev-requ...@lists.openstack.org wrote:

 Date: Wed, 23 Oct 2013 10:21:00 +1100
 From: Michael Still mi...@stillhq.com
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] EC2 Compatibility metadata on
 config drive, fixed IP info
 Message-ID:
  CAEd1pt5SME3-X18czdRD-W_N-=no0zzljhagphktq3t1rz2...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1

 On Wed, Oct 23, 2013 at 6:48 AM, Mate Lakat mate.la...@citrix.com wrote:
  Hi,
 
  We are looking at config drive use cases, and saw this in the official
  docs:
 
Do not rely on the presence of the EC2 metadata present in the config
drive (i.e., files under the ec2 directory), as this content may be
removed in a future release.

 Huh. That's news to me. I wonder why we'd bother implementing it if we
 don't want people to use it?



I was the one who made that doc edit, but I added that particular text at
the suggestion of Scott Moser. The review in question was:
https://review.openstack.org/#/c/16504/


Lorin

-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread Sean Dague
One of the efforts that we're working on from the QA team is tooling 
that ensures we aren't stack tracing into our test logs during normal 
tempest runs. Random stack traces are scary to cloud admins consuming 
OpenStack logs, and exceptions in the logs should really be exceptional 
events (and indicative of a failing system), not something that we do by 
default. Our intent is to gate code on clean logs (no stacktraces) 
eventually (i.e. if you try to land a patch that causes stack traces in 
OpenStack, that becomes a failing condition), and we've got an 
incremental white list based approach that should let us make forward 
progress on that. But on that thread - 
http://lists.openstack.org/pipermail/openstack-dev/2013-October/017012.html 
we exposed another issue... across projects, OpenStack is very 
inconsistent with logging.


First... baseline, these are the logging levels that we have in 
OpenStack today (with numeric values, higher = worse):


CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
AUDIT = 21  # invented for oslo-logging
INFO = 20
DEBUG = 10
NOTSET = 0

We also have TRACE, which isn't a level per se; it happens at another 
level. However, TRACE is typically an ERROR in the way we use it.
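
For reference, a minimal sketch of how a level like AUDIT slots into 
Python's stdlib logging machinery (illustrative only, not the actual 
oslo-logging code):

    import logging

    AUDIT = 21  # between INFO (20) and WARNING (30)
    logging.addLevelName(AUDIT, 'AUDIT')

    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger('demo')

    LOG.debug('filtered out at an INFO threshold')  # 10 < 20
    LOG.info('emitted')                             # 20 >= 20
    LOG.log(AUDIT, 'emitted')                       # 21 >= 20
    LOG.error('emitted, and should mean trouble')   # 40 >= 20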



Some examples of oddities in the current system (all from a single 
devstack/tempest run):


Example 1:
==

n-conductor log in tempest/devstack - 
http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz


Total log lines: 84076
Total non DEBUG lines: 61

Question: do we need more than 1 level of DEBUG? A three-orders-of-magnitude 
jump in information between INFO -> DEBUG seems too steep a cliff.


Example 2:
==

ceilometer-collector - 
http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-ceilometer-collector.txt.gz


AUDIT log level being used as DEBUG level (even though it's higher 
than INFO).


2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush 
pipeline meter_pipeline
2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush 
pipeline cpu_pipeline
2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush 
pipeline meter_pipeline
2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush 
pipeline cpu_pipeline


(this is every second, for most seconds, for the entire run)

Example 3:
===

cinder-api - 
http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-c-api.txt.gz?level=ERROR

ERROR level being used for 404s of volumes

Example 4:
===
glance-api - 
http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-g-api.txt.gz


2013-10-23 12:23:27.436 23731 ERROR glance.store.sheepdog [-] Error in 
store configuration: Unexpected error while running command. Command: 
collie Exit code: 127 Stdout: '' Stderr: '/bin/sh: 1: collie: not found\n'
2013-10-23 12:23:27.436 23731 WARNING glance.store.base [-] Failed to 
configure store correctly: Store sheepdog could not be configured 
correctly. Reason: Error in store configuration: Unexpected error while 
running command. Command: collie Exit code: 127 Stdout: '' Stderr: 
'/bin/sh: 1: collie: not found\n' Disabling add method.


part of every single Tempest / Devstack run, even though we aren't 
trying to configure sheepdog in the gate



I think we can, and should do better, and started trying to brain dump 
into this etherpad - 
https://etherpad.openstack.org/p/icehouse-logging-harmonization 
(examples included).


This is one of those topics that I think our current 6 track summit 
model doesn't make it easy to address, as we really need general consensus 
across every project that's using oslo-logging, so I believe the mailing 
list is the better option, at least for now.



Goals - Short Term
===
As much feedback as possible from both core projects and openstack 
deployers about the kinds of things that they believe we should be 
logging, and the kinds of levels they think those things should land at.


Determining how crazy it is to try to harmonize this across services.

Figure out who else wants to help. Where help means:
 * helping figure out what's already consensus in services
 * helping figure out things that are really aberrant from that consensus
 * helping build consensus with various core teams on a common
 * helping with contributions to projects that are interested in 
contributions to move them closer to the consensus


Determining if everyone just hates the idea, and I should give up now. 
:) (That is a valid response to this RFC, feel free to put that out there).



Goals - Long Term
===
A set of guidelines on logging standards so that OpenStack as a whole 
feels more unified when it comes to dealing with the log data.


These are going to be guidelines, not rules. Some projects are always 
going to have unique needs. But I suspect a lot of 

[openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-23 Thread Mark Washenberger
Hi folks!

In the images api, we depend on iso8601 to parse some dates and times.
Recently, since version 0.1.4, python-iso8601 added support for a few more
formats, and we finally got some other issues nailed down by 0.1.8. Maybe
the fact that these formats weren't supported before was a bug. I don't
really know.

This puts us in an awkward place, however. With the help of our unit tests,
we noticed that, if you switch from 0.1.8 back to 0.1.4 in your deployment,
your image api will lose support for certain datetime formats like
YYYY-MM-DD (where the time part is assumed to be all zeros). This obviously
creates a (perhaps small) compatibility concern.

Here are our alternatives:

1) Adopt 0.1.8 as the minimum version in openstack-requirements.
2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way, and
just fix the tests so they don't care about these extra formats)
3) Make Glance work with the added formats even if 0.1.4 is installed.

As of yesterday we were resolved to do #3, trying to be good citizens. But
it appears that to do so requires essentially reimplementing a large swath
of iso8601 0.1.8 in glance itself. Gross!

So, I'd like to suggest that we instead adopt option #1, or alternatively
agree that option #2 is no big deal, we can all go back to sleep. Thoughts?
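
For anyone who wants to check their own deployment, a minimal sketch of
the behavioral difference (assumes only that some version of iso8601 is
installed):

    import iso8601

    # On 0.1.8 this returns a datetime for midnight UTC on that date;
    # on 0.1.4 it raises iso8601.ParseError, since that version only
    # accepts full datetime strings.
    try:
        print(iso8601.parse_date('2013-10-23'))
    except iso8601.ParseError:
        print('date-only strings not supported by this iso8601 version')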
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Clint Byrum
Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
 Dear Steve and All,
 
 If I may add to this already busy thread to share our experience with 
 using Heat in large and complex software deployments.
 

Thanks for sharing Patrick, I have a few replies in-line.

 I work on a project which precisely provides additional value at the 
 articulation point between resource orchestration automation and 
 configuration management. We rely on Heat and chef-solo respectively for 
 these base management functions. On top of this, we have developed an 
 event-driven workflow to manage the life-cycles of complex software 
 stacks whose primary purpose is to support middleware components as 
 opposed to end-user apps. Our use cases are peculiar in the sense that 
 software setup (install, config, contextualization) is not a one-time 
 operation issue but a continuous thing that can happen any time in the 
 life-span of a stack. Users can deploy (and undeploy) apps long after 
 the stack is created. Auto-scaling may also result in an 
 asynchronous apps deployment. More about this later. The framework we 
 have designed works well for us. It clearly refers to a PaaS-like 
 environment which I understand is not the topic of the HOT software 
 configuration proposal(s) and that's absolutely fine with us. However, 
 the question for us is whether the separation of software config from 
 resources would make our life easier or not. I think the answer is 
 definitely yes but at the condition that the DSL extension preserves 
 almost everything from the expressiveness of the resource element. In 
 practice, I think that a strict separation between resource and 
 component will be hard to achieve because we'll always need a little bit 
 of application specifics in the resources. Take for example the case of 
 the SecurityGroups. The ports open in a SecurityGroup are application 
 specific.


Components can only be made up of the things that are common to all users
of said component. Also components would, if I understand the concept
correctly, just be for things that are at the sub-resource level.
Security groups and open ports would be across multiple resources, and
thus would be separately specified from your app's component (though it
might be useful to allow components to export static values so that the
port list can be referred to along with the app component).

 Then, designing a Chef or Puppet component type may be harder than it 
 looks at first glance. Speaking of our use cases we still need a little 
 bit of scripting in the instance's user-data block to setup a working 
 chef-solo environment. For example, we run librarian-chef prior to 
 starting chef-solo to resolve the cookbook dependencies. A cookbook can 
 present itself as a downloadable tarball but it's not always the case. A 
 chef component type would have to support getting a cookbook from a 
 public or private git repo (maybe subversion), handle situations where 
 there is one cookbook per repo or multiple cookbooks per repo, let the 
 user choose a particular branch or label, provide ssh keys if it's a 
 private repo, and so forth. We support all of this scenarios and so we 
 can provide more detailed requirements if needed.


Correct me if I'm wrong though, all of those scenarios are just variations
on standard inputs into chef. So the chef component really just has to
allow a way to feed data to chef.

 I am not sure adding component relations like the 'depends-on' would 
 really help us since it is the job of config management to handle 
 software dependencies. Also, it doesn't address the issue of circular 
 dependencies. Circular dependencies occur in complex software stack 
 deployments. Example. When we setup a Slum virtual cluster, both the 
 head node and compute nodes depend on one another to complete their 
 configuration and so they would wait for each other indefinitely if we 
 were to rely on the 'depends-on'. In addition, I think it's critical to 
 distinguish between configuration parameters which are known ahead of 
 time, like a db name or user name and password, versus contextualization 
 parameters which are known after the fact generally when the instance is 
 created. Typically those contextualization parameters are IP addresses 
 but not only. The fact packages x,y,z have been properly installed and 
 services a,b,c successfully started is contextualization information 
 (a.k.a facts) which may be indicative that other components can move on 
 to the next setup stage.
 

The form of contextualization you mention above can be handled by a
slightly more capable wait condition mechanism than we have now. I've
been suggesting that this is the interface that workflow systems should
use.

 The case of complex deployments with or without circular dependencies is 
 typically resolved by making the system converge toward the desirable 
 end-state through running idempotent recipes. This is our approach. The 
 first 

Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-23 Thread Michael Basnight

On Oct 23, 2013, at 10:54 AM, Ilya Sviridov wrote:

 Besides the strategy of selecting the default behavior.
 
 Let me share with you my ideas of configuration management in Trove and how 
 the datastore concept can help with that.
 
 Initially there was only one database and all configuration was in one config 
 file. 
 With adding of new databases, heat provisioning mechanism, we are introducing 
 more options. 
 
 Not only assigning specific image_id, but custom packages, heat templates, 
 probably specific strategies of working with security groups.
 Such needs already exist because we have a lot of optional things in config, 
 and any new feature is implemented with back sight to already existing legacy 
 installations of Trove.
 
 What is  actually datastore_type + datastore_version?
 
 The model which glues all the bricks together, so let us use it for all 
 variable part of *service type* configuration.
 
 from current config file
 
 # Trove DNS
 trove_dns_support = False
 
 # Trove Security Groups for Instances
 trove_security_groups_support = True
 trove_security_groups_rules_support = False
 trove_security_group_rule_protocol = tcp
 trove_security_group_rule_port = 3306
 trove_security_group_rule_cidr = 0.0.0.0/0
 
 #guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
 #cloudinit_location = /etc/trove/cloudinit
 
 block_device_mapping = vdb
 device_path = /dev/vdb
 mount_point = /var/lib/mysql
 
 All that configurations can be moved to data_strore (some defined in heat 
 templates) and be manageable by operator in case if any default behavior 
 should be changed.
 
 The trove-config becomes core functionality specific only.

Its fine for it to be in the config or the heat templates… im not sure it 
matters. what i would like to see is that specific thing to each service be in 
their own config group in the configuration.

[mysql]
mount_point=/var/lib/mysql
…
[redis]
volume_support=False
…..

and so on.
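
A sketch of how those per-datastore groups could be registered with
oslo.config (illustrative, not existing Trove code):

    from oslo.config import cfg

    CONF = cfg.CONF

    mysql_opts = [cfg.StrOpt('mount_point', default='/var/lib/mysql')]
    redis_opts = [cfg.BoolOpt('volume_support', default=False)]

    # Passing a group name creates the [mysql] / [redis] sections.
    CONF.register_opts(mysql_opts, group='mysql')
    CONF.register_opts(redis_opts, group='redis')

    # Read back as CONF.mysql.mount_point and CONF.redis.volume_support.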


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Stan Lagun
Hi Patrick,

Thank you for such a great post! This is very close to the vision I tried
to propose earlier on the software orchestration thread, and I'm glad other
people are concerned about the same issues. However, the problem with
PaaS-like approaches is that they currently sit at a somewhat higher
abstraction layer than Heat is intended to be. Typical Heat users are more
DevOps people rather than those who enjoy PaaS-related solutions. Going in
that direction would require a major paradigm shift for Heat which I
think is unnecessary.

I believe there is a place in the OpenStack software-orchestration ecosystem
for layers that would sit on top of Heat and provide more high-level
services for software composition and dependency management. Heat is not
aimed to be software-everything. I would suggest you take a look at the
Murano project, as it is very close to what you want to achieve and, like
every open-source project, it needs community contributions. I believe that
is the place in the OpenStack ecosystem where your experience would be most
valuable and appreciated, as well as your contributions.


On Wed, Oct 23, 2013 at 9:58 PM, Patrick Petit patrick.pe...@bull.net wrote:

  Dear Steve and All,

  If I may add to this already busy thread to share our experience with
 using Heat in large and complex software deployments.

 I work on a project which precisely provides additional value at the
 articulation point between resource orchestration automation and
 configuration management. We rely on Heat and chef-solo respectively for
 these base management functions. On top of this, we have developed an
 event-driven workflow to manage the life-cycles of complex software stacks
  whose primary purpose is to support middleware components as opposed to
 end-user apps. Our use cases are peculiar in the sense that software setup
 (install, config, contextualization) is not a one-time operation issue but
  a continuous thing that can happen any time in the life-span of a stack.
  Users can deploy (and undeploy) apps long after the stack is created.
  Auto-scaling may also result in an asynchronous apps deployment. More about
  this later. The framework we have designed works well for us. It clearly
 refers to a PaaS-like environment which I understand is not the topic of
 the HOT software configuration proposal(s) and that's absolutely fine with
 us. However, the question for us is whether the separation of software
 config from resources would make our life easier or not. I think the answer
 is definitely yes but at the condition that the DSL extension preserves
 almost everything from the expressiveness of the resource element. In
 practice, I think that a strict separation between resource and component
 will be hard to achieve because we'll always need a little bit of
 application-specific detail in the resources. Take for example the case of the
 SecurityGroups. The ports open in a SecurityGroup are application specific.

 Then, designing a Chef or Puppet component type may be harder than it
 looks at first glance. Speaking of our use cases we still need a little bit
 of scripting in the instance's user-data block to setup a working chef-solo
 environment. For example, we run librarian-chef prior to starting chef-solo
 to resolve the cookbook dependencies. A cookbook can present itself as a
 downloadable tarball but it's not always the case. A chef component type
 would have to support getting a cookbook from a public or private git repo
 (maybe subversion), handle situations where there is one cookbook per repo
 or multiple cookbooks per repo, let the user choose a particular branch or
 label, provide ssh keys if it's a private repo, and so forth. We support
 all of these scenarios and so we can provide more detailed requirements if
 needed.

 I am not sure adding component relations like the 'depends-on' would
 really help us since it is the job of config management to handle software
 dependencies. Also, it doesn't address the issue of circular dependencies.
 Circular dependencies occur in complex software stack deployments. Example.
 When we set up a Slurm virtual cluster, both the head node and compute nodes
 depend on one another to complete their configuration and so they would
 wait for each other indefinitely if we were to rely on the 'depends-on'. In
 addition, I think it's critical to distinguish between configuration
 parameters which are known ahead of time, like a db name or user name and
 password, versus contextualization parameters which are known after the
 fact generally when the instance is created. Typically those
 contextualization parameters are IP addresses but not only. The fact
 packages x,y,z have been properly installed and services a,b,c successfully
 started is contextualization information (a.k.a facts) which may be
 indicative that other components can move on to the next setup stage.

 The case of complex deployments with or without circular dependencies is
 typically resolved by 

Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-23 Thread Chuck Short
Hi,

Why not use python-dateutil?

Regards
chuck


On Wed, Oct 23, 2013 at 11:34 AM, Mark Washenberger 
mark.washenber...@markwash.net wrote:

 Hi folks!

 In the images api, we depend on iso8601 to parse some dates and times.
 Recently, since version 0.1.4, python-iso8601 added support for a few more
 formats, and we finally got some other issues nailed down by 0.1.8. Maybe
 the fact that these formats weren't supported before was a bug. I don't
 really know.

 This puts us in an awkward place, however. With the help of our unit
 tests, we noticed that, if you switch from 0.1.8 back to 0.1.4 in your
 deployment, your image api will lose support for certain datetime formats
 like -MM-DD (where the time part is assumed to be all zeros). This
 obviously creates a (perhaps small) compatibility concern.

 Here are our alternatives:

 1) Adopt 0.1.8 as the minimum version in openstack-requirements.
 2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way, and
 just fix the tests so they don't care about these extra formats)
 3) Make Glance work with the added formats even if 0.1.4 is installed.

 As of yesterday we were resolved to do #3, trying to be good citizens. But
 it appears that to do so requires essentially reimplementing a large swath
 of iso8601 0.1.8 in glance itself. Gross!

 So, I'd like to suggest that we instead adopt option #1, or alternatively
 agree that option #2 is no big deal, we can all go back to sleep. Thoughts?
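
 For reference, a minimal sketch of the behavior difference in question (the
 version numbers are those from this thread; the exact exception handling is
 an assumption on my part):

 import iso8601

 try:
     # Accepted by 0.1.8; the missing time part is assumed to be all zeros.
     print(iso8601.parse_date('2013-10-23'))
 except iso8601.ParseError:
     # 0.1.4 rejects date-only strings, which is how a deployment that
     # downgrades silently loses support for this format.
     print('date-only input not supported by this iso8601 version')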

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-23 Thread Davanum Srinivas
+1 to option #1

-- dims


On Wed, Oct 23, 2013 at 2:34 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:

 Hi folks!

 In the images api, we depend on iso8601 to parse some dates and times.
 Recently, since version 0.1.4, python-iso8601 added support for a few more
 formats, and we finally got some other issues nailed down by 0.1.8. Maybe
 the fact that these formats weren't supported before was a bug. I don't
 really know.

 This puts us in an awkward place, however. With the help of our unit
 tests, we noticed that, if you switch from 0.1.8 back to 0.1.4 in your
 deployment, your image api will lose support for certain datetime formats
 like YYYY-MM-DD (where the time part is assumed to be all zeros). This
 obviously creates a (perhaps small) compatibility concern.

 Here are our alternatives:

 1) Adopt 0.1.8 as the minimum version in openstack-requirements.
 2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way, and
 just fix the tests so they don't care about these extra formats)
 3) Make Glance work with the added formats even if 0.1.4 is installed.

 As of yesterday we were resolved to do #3, trying to be good citizens. But
 it appears that to do so requires essentially reimplementing a large swath
 of iso8601 0.1.8 in glance itself. Gross!

 So, I'd like to suggest that we instead adopt option #1, or alternatively
 agree that option #2 is no big deal, we can all go back to sleep. Thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Project Solum

2013-10-23 Thread Adrian Otto
OpenStack,

OpenStack has emerged as the preferred choice for open cloud software 
worldwide. We use it to power our cloud, and we love it. We’re proud to be a 
part of growing its capabilities to address more needs every day. When we ask 
customers, partners, and community members about what problems they want to 
solve next, we have consistently found a few areas where OpenStack has room to 
grow in addressing the needs of software developers:

1)   Ease of application development and deployment via integrated support for 
Git, CI/CD, and IDEs

2)   Ease of application lifecycle management across dev, test, and production 
types of environments -- supported by the Heat project’s automated 
orchestration (resource deployment, monitoring-based self-healing, 
auto-scaling, etc.)

3)   Ease of application portability between public and private clouds -- with 
no vendor-driven requirements within the application stack or control system

Along with eBay, RedHat, Ubuntu/Canonical, dotCloud/Docker, Cloudsoft, and 
Cumulogic, we at Rackspace are happy to announce we have started project Solum 
as an OpenStack Related open source project. Solum is a community-driven 
initiative currently in its open design phase amongst the seven contributing 
companies with more to come.

We plan to leverage the capabilities already offered in OpenStack in addressing 
these needs so anyone running an OpenStack cloud can make it easier to use for 
developers. By leveraging your existing OpenStack cloud, the aim of Project 
Solum is to reduce the number of services you need to manage in tackling these 
developer needs. You can use all the OpenStack services you already run instead 
of standing up overlapping, vendor-specific capabilities to accomplish this.

We welcome you to join us to build this exciting new addition to the OpenStack 
ecosystem.

Project Wiki
https://wiki.openstack.org/wiki/Solum

Launchpad Project
https://launchpad.net/solum

IRC
Public IRC meetings are held on Tuesdays 1600 UTC
irc://irc.freenode.net:6667/solum

Thanks,

Adrian Otto
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread Clark Boylan
On Wed, Oct 23, 2013 at 11:20 AM, Sean Dague s...@dague.net wrote:
 One of the efforts that we're working on from the QA team is tooling that
 ensures we aren't stack tracing into our test logs during normal tempest
 runs. Random stack traces are scary to cloud admins consuming OpenStack
 logs, and exceptions in the logs should really be exceptional events (and
 indicative of a failing system), not something that we do by default. Our
 intent is to gate code on clean logs (no stacktraces) eventually (i.e. if
 you try to land a patch that causes stack traces in OpenStack, that becomes
 a failing condition), and we've got an incremental white list based approach
 that should let us make forward progress on that. But on that thread -
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/017012.html
 we exposed another issue... across projects, OpenStack is very inconsistent
 with logging.

 First... baseline, these are the logging levels that we have in OpenStack
 today (with numeric values, higher = worse):

 CRITICAL = 50
 FATAL = CRITICAL
 ERROR = 40
 WARNING = 30
 WARN = WARNING
 AUDIT = 21  # invented for oslo-logging
 INFO = 20
 DEBUG = 10
 NOTSET = 0

 We also have TRACE, which isn't a level per se; it happens at another
 level. However, TRACE is typically an ERROR in the way we use it.
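
 (Side note: a level like AUDIT is just a number registered with the stdlib
 logging module. A minimal sketch, with the helper name made up purely for
 illustration:)

 import logging

 AUDIT = 21  # sits between INFO (20) and WARNING (30)
 logging.addLevelName(AUDIT, 'AUDIT')
 logging.basicConfig(level=AUDIT)

 def audit(logger, msg, *args):
     # Hypothetical helper; oslo-logging ships its own wrapper.
     if logger.isEnabledFor(AUDIT):
         logger.log(AUDIT, msg, *args)

 audit(logging.getLogger('ceilometer.pipeline'),
       'Flush pipeline %s', 'meter_pipeline')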


 Some examples of oddities in the current system (all from a single
 devstack/tempest run):

 Example 1:
 ==

 n-conductor log in tempest/devstack -
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz

 Total log lines: 84076
 Total non DEBUG lines: 61

 Question: do we need more than 1 level of DEBUG? 3 orders of magnitude of
 information change between INFO and DEBUG seems too steep a cliff.

 Example 2:
 ==

 ceilometer-collector -
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-ceilometer-collector.txt.gz

 AUDIT log level being used as DEBUG level (even though it's higher than
 INFO).

 2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 meter_pipeline
 2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 cpu_pipeline
 2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 meter_pipeline
 2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 cpu_pipeline

 (this is every second, for most seconds, for the entire run)

 Example 3:
 ===

 cinder-api -
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-c-api.txt.gz?level=ERROR
 ERROR level being used for 404s of volumes

 Example 4:
 ===
 glance-api -
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-g-api.txt.gz

 2013-10-23 12:23:27.436 23731 ERROR glance.store.sheepdog [-] Error in store
 configuration: Unexpected error while running command.Command: collieExit
 code: 127Stdout: ''Stderr: '/bin/sh: 1: collie: not found\n'
 2013-10-23 12:23:27.436 23731 WARNING glance.store.base [-] Failed to
 configure store correctly: Store sheepdog could not be configured correctly.
 Reason: Error in store configuration: Unexpected error while running
 command.Command: collieExit code: 127Stdout: ''Stderr: '/bin/sh: 1: collie:
 not found\n' Disabling add method.

 part of every single Tempest / Devstack run, even though we aren't trying to
 configure sheepdog in the gate


 I think we can, and should do better, and started trying to brain dump into
 this etherpad -
 https://etherpad.openstack.org/p/icehouse-logging-harmonization (examples
 included).

 This is one of those topics that I think our current 6 track summit model
 doesn't make it easy to address, as we really need general consensus across any
 project that's using oslo-logging, so I believe the mailing list is the better
 option, at least for now.


 Goals - Short Term
 ===
 As much feedback as possible from both core projects and openstack deployers
 about the kinds of things that they believe we should be logging, and the
 kinds of levels they think those things should land at.

 Determining how crazy it is to try to harmonize this across services.

 Figure out who else wants to help. Where help means:
  * helping figure out what's already consensus in services
  * helping figure out things that are really aberrant from that consensus
  * helping build consensus with various core teams on a common
  * helping with contributions to projects that are interested in
 contributions to move them closer to the consensus

 Determining if everyone just hates the idea, and I should give up now. :)
 (That is a valid response to this RFC, feel free to put that out there).


 Goals - Long Term
 ===
 A set of guidelines on logging standards so that OpenStack as a whole feels
 more whole when it comes to dealing with the log data.

 These are going to 

Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting

2013-10-23 Thread Sumit Naiksatam
Log from today's meeting:

http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-23-18.02.log.html

Action items for some of the folks included.

Please join us for the meeting next week.

Thanks,
~Sumit.

On Tue, Oct 22, 2013 at 2:00 PM, Sumit Naiksatam
 sumitnaiksa...@gmail.com wrote:

 Reminder - we will have the Neutron FWaaS IRC meeting tomorrow Wednesday
 18:00 UTC (11 AM PDT).

 Agenda:
 * Tempest tests
 * Definition and use of zones
 * Address Objects
 * Counts API
 * Service Objects
 * Integration with service type framework
 * Open discussion - any other topics you would like to bring up for
 discussion during the summit.

 https://wiki.openstack.org/wiki/Meetings/FWaaS

 Thanks,
 ~Sumit.


 On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
  wrote:

 Hi All,

 For the next of phase of FWaaS development we will be considering a
 number of features. I am proposing an IRC meeting on Oct 16th Wednesday
 18:00 UTC (11 AM PDT) to discuss this.

 The etherpad for the summit session proposal is here:
 https://etherpad.openstack.org/p/icehouse-neutron-fwaas

 and has a high level list of features under consideration.

 Thanks,
 ~Sumit.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread Dolph Mathews
On Wed, Oct 23, 2013 at 1:20 PM, Sean Dague s...@dague.net wrote:

 One of the efforts that we're working on from the QA team is tooling that
 ensures we aren't stack tracing into our test logs during normal tempest
 runs. Random stack traces are scary to cloud admins consuming OpenStack
 logs, and exceptions in the logs should really be exceptional events (and
 indicative of a failing system), not something that we do by default. Our
 intent is to gate code on clean logs (no stacktraces) eventually (i.e. if
 you try to land a patch that causes stack traces in OpenStack, that becomes
 a failing condition), and we've got an incremental white list based
 approach that should let us make forward progress on that. But on that
  thread - http://lists.openstack.org/pipermail/openstack-dev/2013-October/017012.html
  we exposed another issue... across projects, OpenStack is very inconsistent
 with logging.

 First... baseline, these are the logging levels that we have in OpenStack
 today (with numeric values, higher = worse):

 CRITICAL = 50
 FATAL = CRITICAL
 ERROR = 40
 WARNING = 30
 WARN = WARNING
 AUDIT = 21  # invented for oslo-logging
 INFO = 20
 DEBUG = 10
 NOTSET = 0

  We also have TRACE, which isn't a level per se; it happens at another
  level. However, TRACE is typically an ERROR in the way we use it.


 Some examples of oddities in the current system (all from a single
 devstack/tempest run):

 Example 1:
 ==

  n-conductor log in tempest/devstack -
  http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz

 Total log lines: 84076
 Total non DEBUG lines: 61

  Question: do we need more than 1 level of DEBUG? 3 orders of magnitude of
  information change between INFO and DEBUG seems too steep a cliff.

 Example 2:
 ==

  ceilometer-collector -
  http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-ceilometer-collector.txt.gz

 AUDIT log level being used as DEBUG level (even though it's higher
 than INFO).

 2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 meter_pipeline
 2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 cpu_pipeline
 2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 meter_pipeline
 2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush pipeline
 cpu_pipeline

 (this is every second, for most seconds, for the entire run)

 Example 3:
 ===

  cinder-api -
  http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-c-api.txt.gz?level=ERROR
 ERROR level being used for 404s of volumes

 Example 4:
 ===
  glance-api -
  http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-g-api.txt.gz

 2013-10-23 12:23:27.436 23731 ERROR glance.store.sheepdog [-] Error in
 store configuration: Unexpected error while running command.Command:
 collieExit code: 127Stdout: ''Stderr: '/bin/sh: 1: collie: not found\n'
 2013-10-23 12:23:27.436 23731 WARNING glance.store.base [-] Failed to
 configure store correctly: Store sheepdog could not be configured
 correctly. Reason: Error in store configuration: Unexpected error while
 running command.Command: collieExit code: 127Stdout: ''Stderr: '/bin/sh: 1:
 collie: not found\n' Disabling add method.

 part of every single Tempest / Devstack run, even though we aren't trying
 to configure sheepdog in the gate


 I think we can, and should do better, and started trying to brain dump
  into this etherpad -
  https://etherpad.openstack.org/p/icehouse-logging-harmonization (examples
  included).

 This is one of those topics that I think our current 6 track summit model
  doesn't make it easy to address, as we really need general consensus across any
  project that's using oslo-logging, so I believe the mailing list is the better
 option, at least for now.


 Goals - Short Term
 ===
 As much feedback as possible from both core projects and openstack
 deployers about the kinds of things that they believe we should be logging,
 and the kinds of levels they think those things should land at.


Deprecation warnings!

Based on the approach we're taking in the patch below, we'll be able to
notate how imminently a feature is facing 

Re: [openstack-dev] Announcing Project Solum

2013-10-23 Thread Russell Bryant
On 10/23/2013 03:03 PM, Adrian Otto wrote:
 OpenStack,
 
 OpenStack has emerged as the preferred choice for open cloud software
 worldwide. We use it to power our cloud, and we love it. We’re proud to
 be a part of growing its capabilities to address more needs every
 day. When we ask customers, partners, and community members about what
 problems they want to solve next, we have consistently found a few areas
 where OpenStack has room to grow in addressing the needs of software
 developers:
 
 1)   Ease of application development and deployment via integrated
 support for Git, CI/CD, and IDEs
 
 2)   Ease of application lifecycle management across dev, test, and
 production types of environments -- supported by the Heat
 project’s automated orchestration (resource deployment,
 monitoring-based self-healing, auto-scaling, etc.)
 
 3)   Ease of application portability between public and private
 clouds -- with no vendor-driven requirements within the application
 stack or control system
 
 
 Along with eBay, RedHat, Ubuntu/Canonical, dotCloud/Docker, Cloudsoft,
 and Cumulogic, we at Rackspace are happy to announce we have started
 project Solum as an OpenStack Related open source project. Solum is a
 community-driven initiative currently in its open design phase amongst
 the seven contributing companies with more to come.
 
 We plan to leverage the capabilities already offered in OpenStack in
 addressing these needs so anyone running an OpenStack cloud can make it
 easier to use for developers. By leveraging your existing
 OpenStack cloud, the aim of Project Solum is to reduce the number of
 services you need to manage in tackling these developer needs. You can
 use all the OpenStack services you already run instead of standing up
 overlapping, vendor-specific capabilities to accomplish this.
 
 We welcome you to join us to build this exciting new addition to the
 OpenStack ecosystem.
 
 *Project Wiki*
 https://wiki.openstack.org/wiki/Solum
 
 *Lauchpad Project*
 _https://launchpad.net/solum_
 
 *IRC*
 Public IRC meetings are held on Tuesdays 1600 UTC
 _irc://irc.freenode.net:6667/solum_

Cool stuff!  I'm very happy to see a group of people looking to attack
this problem space with a solution that takes advantage of OpenStack
services as much as possible.  I look forward to seeing how this comes
together.

Thanks!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread Robert Collins
On 24 October 2013 08:14, Dolph Mathews dolph.math...@gmail.com wrote:

 On Wed, Oct 23, 2013 at 1:20 PM, Sean Dague s...@dague.net wrote:


 Deprecation warnings!

 Based on the approach we're taking in the patch below, we'll be able to
 notate how imminently a feature is facing deprecation. Right now, they're
  just landing in WARNING, but I think we'll surely see a desire to silence,
 prioritize or redirect those messages using different log levels (for
 example, based on how imminently a feature is facing deprecation).

 https://review.openstack.org/#/c/50486/

Huh, I did not see that go by. Python already has built in signalling
for deprecated features; I think we should be using that. We can of
course wrap it with a little sugar to make it easy to encode future
deprecations.
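
For reference, the built-in signalling I mean is the stdlib warnings module;
a minimal sketch of the kind of sugar that could wrap it (the decorator below
is illustrative, not an existing oslo API):

import functools
import warnings

def deprecated(reason):
    # Illustrative decorator: flags a callable as facing deprecation.
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            warnings.warn('%s is deprecated: %s' % (func.__name__, reason),
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return inner
    return wrap

# Deployers can escalate these to hard failures if they want:
# warnings.simplefilter('error', DeprecationWarning)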

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread John Griffith
On Wed, Oct 23, 2013 at 1:03 PM, Clark Boylan clark.boy...@gmail.comwrote:

 On Wed, Oct 23, 2013 at 11:20 AM, Sean Dague s...@dague.net wrote:
  One of the efforts that we're working on from the QA team is tooling that
  ensures we aren't stack tracing into our test logs during normal tempest
  runs. Random stack traces are scary to cloud admins consuming OpenStack
  logs, and exceptions in the logs should really be exceptional events (and
  indicative of a failing system), not something that we do by default. Our
  intent is to gate code on clean logs (no stacktraces) eventually (i.e. if
  you try to land a patch that causes stack traces in OpenStack, that
 becomes
  a failing condition), and we've got an incremental white list based
 approach
  that should let us make forward progress on that. But on that thread -
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/017012.html
  we exposed another issue... across projects, OpenStack is very
 inconsistent
  with logging.
 
  First... baseline, these are the logging levels that we have in OpenStack
  today (with numeric values, higher = worse):
 
  CRITICAL = 50
  FATAL = CRITICAL
  ERROR = 40
  WARNING = 30
  WARN = WARNING
  AUDIT = 21  # invented for oslo-logging
  INFO = 20
  DEBUG = 10
  NOTSET = 0
 
   We also have TRACE, which isn't a level per se; it happens at another
   level. However, TRACE is typically an ERROR in the way we use it.
 
 
  Some examples of oddities in the current system (all from a single
  devstack/tempest run):
 
  Example 1:
  ==
 
  n-conductor log in tempest/devstack -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz
 
  Total log lines: 84076
  Total non DEBUG lines: 61
 
   Question: do we need more than 1 level of DEBUG? 3 orders of magnitude of
   information change between INFO and DEBUG seems too steep a cliff.
 
  Example 2:
  ==
 
  ceilometer-collector -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-ceilometer-collector.txt.gz
 
  AUDIT log level being used as DEBUG level (even though it's higher
 than
  INFO).
 
  2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  meter_pipeline
  2013-10-23 12:24:20.093 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  cpu_pipeline
  2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  meter_pipeline
  2013-10-23 12:24:20.094 26234 AUDIT ceilometer.pipeline [-] Flush
 pipeline
  cpu_pipeline
 
  (this is every second, for most seconds, for the entire run)
 
  Example 3:
  ===
 
  cinder-api -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-c-api.txt.gz?level=ERROR
  ERROR level being used for 404s of volumes
 
  Example 4:
  ===
  glance-api -
 
 http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-g-api.txt.gz
 
  2013-10-23 12:23:27.436 23731 ERROR glance.store.sheepdog [-] Error in
 store
  configuration: Unexpected error while running command.Command: collieExit
  code: 127Stdout: ''Stderr: '/bin/sh: 1: collie: not found\n'
  2013-10-23 12:23:27.436 23731 WARNING glance.store.base [-] Failed to
  configure store correctly: Store sheepdog could not be configured
 correctly.
  Reason: Error in store configuration: Unexpected error while running
  command.Command: collieExit code: 127Stdout: ''Stderr: '/bin/sh: 1:
 collie:
  not found\n' Disabling add method.
 
  part of every single Tempest / Devstack run, even though we aren't
 trying to
  configure sheepdog in the gate
 
 
  I think we can, and should do better, and started trying to brain dump
 into
  this etherpad -
  https://etherpad.openstack.org/p/icehouse-logging-harmonization(examples
  included).
 
  This is one of those topics that I think our current 6 track summit model
   doesn't make it easy to address, as we really need general consensus across any
   project that's using oslo-logging, so I believe the mailing list is the better
  option, at least for now.
 
 
  Goals - Short Term
  ===
  As much feedback as possible from both core projects and openstack
 deployers
  about the kinds of things that they believe we should be logging, and the
  kinds of levels they think those things should land at.
 
  Determining how crazy it is to try to harmonize this across services.
 
  Figure out who else wants to help. Where help means:
    * helping figure out what's already consensus in services
    * helping figure out things that are really aberrant from that consensus
    * helping build consensus with various core teams on a common
    * helping with contributions to projects that are interested in
   contributions to move them closer to the consensus
 
  Determining if everyone just hates the idea, and I should give up now. :)
  (That is a valid response to this RFC, feel free to 

Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-23 Thread Robert Collins
On 24 October 2013 07:34, Mark Washenberger
mark.washenber...@markwash.net wrote:
 Hi folks!

 1) Adopt 0.1.8 as the minimum version in openstack-requirements.
 2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way, and
 just fix the tests so they don't care about these extra formats)
 3) Make Glance work with the added formats even if 0.1.4 is installed.

I think we should do (1), because (2) will permit surprising,
nonobvious changes in behaviour and (3) is just nasty engineering.
Alternatively, add a (4) which is (2) with whinge on startup if 0.1.4
is installed to make identifying this situation easy.

The last thing a new / upgraded deployment wants is something like
nova, or a third party API script failing in nonobvious ways with no
breadcrumbs to lead them to 'upgrade iso8601' as an answer.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread Robert Collins
On 24 October 2013 08:28, John Griffith john.griff...@solidfire.com wrote:
 So I touched on this a bit in my earlier post but want to reiterate here and
 maybe clarify a bit.  I agree that cleaning up and standardizing the logs is
 a good thing, and particularly removing unhandled exception messages would
 be good.  What concerns me however is the approach being taken here of
 saying things like Error level messages are banned from Tempest runs.

 The case I mentioned earlier of the negative test is a perfect example.
 There's no way for Cinder (or any other service) to know the difference
 between the end user specifying/requesting a non-existent volume and a valid
 volume being requested that for some reason can't be found.  I'm not quite
 sure how you place a definitive rule like no error messages in logs unless
 you make your tests such that you never run negative tests?

Let me check that I understand: you want to check that when a user
asks for a volume that doesn't exist, they don't get it, *and* that
the reason they didn't get it was due to Cinder detecting it's
missing, not due to e.g. cinder throwing an error and returning 500 ?

If so, that seems pretty straightforward; a) check the error that is
reported (it should be a 404 and contain an explanation which we can
check) and b) check the logs to see that nothing was logged (because a
server fault would be logged).
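
(A minimal, self-contained sketch of that two-part check; the client and
exception classes are stand-ins, not the real Tempest/Cinder ones:)

import unittest

class NotFound(Exception):
    """Stand-in for the client's 404 exception type."""

class FakeVolumesClient(object):
    # Stand-in client; a real test would use the project's own client.
    def get_volume(self, volume_id):
        raise NotFound('Volume %s could not be found' % volume_id)

class NegativeVolumeTest(unittest.TestCase):
    def test_get_nonexistent_volume(self):
        client = FakeVolumesClient()
        # (a) the failure is a deliberate 404, not a masked 500
        self.assertRaises(NotFound, client.get_volume, 'does-not-exist')
        # (b) a real run would also assert that the service log stayed
        # free of ERROR/TRACE lines while handling the request.

if __name__ == '__main__':
    unittest.main()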

 There are other cases in cinder as well that I'm concerned about.  One
 example is iscsi target creation, there are a number of scenarios where this
 can fail under certain conditions.  In most of these cases we now have retry
 mechanisms or alternate implementations to complete the task.  The fact is
 however that a call somewhere in the system failed, this should be something
 in my opinion that stands out in the logs.  Maybe this particular case would
 be well suited to being a warning other than an error, and that's fine.  My
 point however though is that I think some thought needs to go into this
 before making blanketing rules and especially gating criteria that says no
 error messages in logs.

I agree thought and care is needed. As a deployer my concern is that
the only time ERROR is logged in the logs is when something is wrong
with the infrastructure (rather than a user asking for something
stupid). I think my concern and yours can both be handled at the same
time.


-Rob


---
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tools

2013-10-23 Thread Joshua Harlow
Thanks, did some cleanup and pushed a newer version.

https://pypi.python.org/pypi/gerrit-view/0.1.1

-Josh

On 10/23/13 8:16 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

Wow, awesomeness!

I'll put out a 0.2 on pypi when u are ready with that, very cool.

Sent from my really tiny device...

 On Oct 23, 2013, at 7:12 AM, Daniel P. Berrange berra...@redhat.com
wrote:
 
 On Sun, Oct 20, 2013 at 05:01:23AM +0000, Joshua Harlow wrote:
 I created some gerrit tools that I think others might find useful.
 
 https://github.com/harlowja/gerrit_view
 
 The neat one there is a curses based real time gerrit review receiver
 that uses a similar mechanism as the gerrit irc bot to sit on the
 gerrit event queue and receive events.
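
 (For anyone curious, the event queue referred to here is Gerrit's
 stream-events SSH command; a rough sketch of consuming it, assuming an
 SSH account on the review server:)

 import json
 import subprocess

 # Attach to the Gerrit event stream, the same way the IRC bot does.
 proc = subprocess.Popen(
     ['ssh', '-p', '29418', 'review.openstack.org',
      'gerrit', 'stream-events'],
     stdout=subprocess.PIPE)
 for line in iter(proc.stdout.readline, ''):
     event = json.loads(line)
     if event.get('type') == 'comment-added':
         print(event['change']['url'])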
 
 Actually, from my POV, the neat one there is the qgerrit script - I had
 no idea you could query this info so easily. I've done some work on it
 to allow you to filter based on project name, commit message string,
 approval flags, and best of all, file path changed. I also improved the
 date display to make it clearer how old patches are, which may help
 people prioritize reviews for oldest stuff.
 
 With this, I can now finally keep an eye on any change which impacts the
 libvirt driver code:
 
 eg to see all code touching 'nova/virt/libvirt', which has not been
 -1'd by jenkins
 
  $ qgerrit -f url -f subject:100 -f approvals -f lastUpdated -f createdOn -p openstack/nova -a v1 nova/virt/libvirt
 
 | URL                                | Subject                                                                | Created  | Updated  | Approvals           |
 | https://review.openstack.org/33409 | Adding image multiple location support                                | 127 days | 17 hours | v=1 c=-1,1          |
 | https://review.openstack.org/35303 | Stop, Rescue, and Delete should give guest a chance to shutdown       | 112 days | 2 hours  | v=1,1 c=-1          |
 | https://review.openstack.org/35760 | Added monitor (e.g. CPU) to monitor and collect data                  | 110 days | 18 hours | v=1,1 c=-1,-1       |
 | https://review.openstack.org/39929 | Port to oslo.messaging                                                | 82 days  | 7 hours  | v=1,1               |
 | https://review.openstack.org/43984 | Call baselineCPU for full feature list                                | 56 days  | 1 day    | v=1,1 c=-1,1,1,1    |
 | https://review.openstack.org/44359 | Wait for files to be accessible when migrating                        | 54 days  | 2 days   | v=1 c=1,1,1         |
 | https://review.openstack.org/45993 | Remove multipath mapping device descriptor                            | 42 days  | 4 hours  | v=1,1 c=-1          |
 | https://review.openstack.org/46055 | Remove dup of LibvirtISCSIVolumeDriver in LibvirtISERVolumeDriver     | 42 days  | 18 hours | v=1,1 c=2           |
 | https://review.openstack.org/48246 | Disconnect from iSCSI volume sessions after live migration            | 28 days  | 5 days   | v=1                 |
 | https://review.openstack.org/48362 | Fixing ephemeral disk creation.                                       | 27 days  | 16 hours | v=1,1 c=2           |
 | https://review.openstack.org/49329 | Add unsafe flag to libvirt live migration call.                       | 21 days  | 6 days   | v=1,1 c=-1,-1,1,1,1 |
 | https://review.openstack.org/50857 | Apply six for metaclass                                               | 13 days  | 6 hours  | v=1,1               |
 | https://review.openstack.org/51193 | clean up numeric expressions with byte constants                      | 12 days  | 9 hours  | v=1                 |
 | https://review.openstack.org/51282 | nova.exception does not have a ProcessExecutionError                  | 11 days  | 21 hours | v=1,1               |
 | https://review.openstack.org/51287 | Remove vim header from from nova/virt                                 | 11 days  | 2 days   | v=1,1 c=-1,-1       |
 | https://review.openstack.org/51718 | libvirt: Fix spurious backing file existence check.                   | 8 days   | 5 days   | v=1 c=1             |
 | https://review.openstack.org/52184 | Reply with a meaningful exception, when libvirt connection is broken. | 6 days   | 16 hours | v=1,1 c=2           |
 | https://review.openstack.org/52363 | Remove unnecessary steps for cold snapshots                           | 6 days   | 45 mins  | v=1,1 c=-1          |
 | https://review.openstack.org/52401 | make libvirt driver get_connection thread-safe                        | 5 days   | 3 hours  | v=1,1               |
 | https://review.openstack.org/52581 | Add context as parameter for resume                                   | 5 days   | 2 days   | v=1,1,1 c=1,1       |
 | https://review.openstack.org/52777 | 

[openstack-dev] [oslo] team meeting this friday

2013-10-23 Thread Doug Hellmann
The Oslo team will be meeting Friday at 1400 UTC in the #openstack-meeting
channel on IRC.

The main item on the agenda is finishing the plans for fixing the delayed
translation feature of gettextutils. See
https://wiki.openstack.org/wiki/Meetings/Oslo for more details.

Thanks,
Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread David Stanek
On Wed, Oct 23, 2013 at 3:26 PM, Robert Collins
robe...@robertcollins.netwrote:

 On 24 October 2013 08:14, Dolph Mathews dolph.math...@gmail.com wrote:
 
  On Wed, Oct 23, 2013 at 1:20 PM, Sean Dague s...@dague.net wrote:

 
  Deprecation warnings!
 
  Based on the approach we're taking in the patch below, we'll be able to
  notate how imminently a feature is facing deprecation. Right now, they're
  just landing in WARNING, but I think we'll surely a desire to silence,
  prioritize or redirect those messages using different log levels (for
  example, based on how imminently a feature is facing deprecation).
 
  https://review.openstack.org/#/c/50486/

 Huh, I did not see that go by. Python already has built in signalling
 for deprecated features; I think we should be using that. We can of
 course wrap it with a little sugar to make it easy to encode future
 deprecations.


The initial patch used the warnings module instead of using the
deprecation logging provided by oslo.  We talked about it in IRC and I switched
because there already existed a way to do deprecation messaging in oslo and it had
a configurable way to turn warnings into exceptions.

I intend to submit a patch to oslo-incubator based off of the above patch.
 I'd love to get some feedback about the needs of the other projects.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC - Icehouse logging harmonization

2013-10-23 Thread Sean Dague

On 10/23/2013 03:35 PM, Robert Collins wrote:

On 24 October 2013 08:28, John Griffith john.griff...@solidfire.com wrote:

So I touched on this a bit in my earlier post but want to reiterate here and
maybe clarify a bit.  I agree that cleaning up and standardizing the logs is
a good thing, and particularly removing unhandled exception messages would
be good.  What concerns me however is the approach being taken here of
saying things like Error level messages are banned from Tempest runs.

The case I mentioned earlier of the negative test is a perfect example.
There's no way for Cinder (or any other service) to know the difference
between the end user specifying/requesting a non-existent volume and a valid
volume being requested that for some reason can't be found.  I'm not quite
sure how you place a definitive rule like no error messages in logs unless
you make your tests such that you never run negative tests?


Let me check that I understand: you want to check that when a user
asks for a volume that doesn't exist, they don't get it, *and* that
the reason they didn't get it was due to Cinder detecting it's
missing, not due to e.g. cinder throwing an error and returning 500 ?

If so, that seems pretty straight forward; a) check the error that is
reported (it should be a 404 and contain an explanation which we can
check) and b) check the logs to see that nothing was logged (because a
server fault would be logged).


There are other cases in cinder as well that I'm concerned about.  One
example is iscsi target creation, there are a number of scenarios where this
can fail under certain conditions.  In most of these cases we now have retry
mechanisms or alternate implementations to complete the task.  The fact is
however that a call somewhere in the system failed, this should be something
in my opinion that stands out in the logs.  Maybe this particular case would
be well suited to being a warning other than an error, and that's fine.  My
point however though is that I think some thought needs to go into this
before making blanketing rules and especially gating criteria that says no
error messages in logs.


Absolutely agreed. That's why I wanted to kick off this discussion and 
am thinking about how we get to agreement by Icehouse (giving this lots 
of time to bake and getting different perspectives in here).


On the short term of failing jobs in tempest because they've got errors 
in the logs, we've got a whole white list mechanism right now for 
acceptable errors. Over time I'd love to shrink that to 0. But that's 
going to be a collaboration between the QA team and the specific core 
projects to make sure that's the right call in each case. Who knows, 
maybe there are generally agreed-to ERROR conditions that we trigger, 
but we'll figure that out over time.


I think the iscsi example is a good case for WARNING, which is the same 
level we use when we fail to schedule a resource (compute / volume). 
Especially because we try to recover now. If we fail to recover, ERROR 
is probably called for. But if we actually failed to allocate a volume, 
we'd end up failing the tests anyways, which means the ERROR in the log 
wouldn't be a problem in and of itself.



I agree thought and care is needed. As a deployer my concern is that
the only time ERROR is logged in the logs is when something is wrong
with the infrastructure (rather than a user asking for something
stupid). I think my concern and yours can both be handled at the same
time.


Right, and I think this is the perspective that I'm coming from. Our 
logs (at INFO and up) are UX to our cloud admins.


We should be pretty sure that we know something is a problem if we tag 
it as an ERROR, or CRITICAL. Because that's likely to be something that 
negatively impacts someones day.


If we aren't completely sure your cloud is on fire, but we're pretty 
sure something is odd, WARNING is appropriate.


If it's no good, but we have no way to test if it's a problem, it's just 
INFO. I really think the not found case falls more into standard INFO.


Again, more concrete instances, like the iscsi case, are probably the 
most helpful. I think in the abstract this problem is too hard to solve, 
but with examples, we can probably come to some consensus.
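
To put that guidance in code form (purely illustrative, not an actual 
Cinder patch):

import logging

LOG = logging.getLogger(__name__)

def lookup_volume(volumes, volume_id):
    # User asked for something that doesn't exist: an expected event,
    # so INFO per the guidance above.
    if volume_id not in volumes:
        LOG.info('Volume %s could not be found', volume_id)
        return None
    return volumes[volume_id]

def create_target(try_once, max_retries=3):
    # Infrastructure hiccup with a retry path: WARNING while we are still
    # recovering, ERROR only once recovery has failed and an operator
    # has to act.
    for attempt in range(1, max_retries + 1):
        if try_once():
            return True
        LOG.warning('iSCSI target creation failed, retry %d/%d',
                    attempt, max_retries)
    LOG.error('iSCSI target creation failed after %d attempts', max_retries)
    return False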


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion

2013-10-23 Thread Yapeng Wu
Hello, Swami,
I am interested in this topic. Please include me in the discussion.
Thanks,
Yapeng

From: Vasudevan, Swaminathan (PNB Roseville) 
[mailto:swaminathan.vasude...@hp.com]
Sent: Tuesday, October 22, 2013 2:50 PM
To: cloudbengo; Artem Dmytrenko; yong sheng gong (gong...@unitedstack.com); 
OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion

Hi Folks,
Thanks for your interests in the DVR feature.
We should get together to start discussing the details in the DVR.
Please let me know who else is interested, probably the time slot and we can 
start nailing down the details.
 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
https://wiki.openstack.org/wiki/Distributed_Router_for_OVS
Thanks
Swami

From: Robin Wang [mailto:cloudbe...@gmail.com]
Sent: Tuesday, October 22, 2013 11:45 AM
To: Artem Dmytrenko; yong sheng gong (gong...@unitedstack.com); OpenStack 
Development Mailing List; Vasudevan, Swaminathan (PNB Roseville)
Subject: Re: Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion

Hi Artem,

Very happy to see more stackers working on this feature. : )

Note that the images in your document are badly corrupted - maybe my questions 
could already be answered by your diagrams. 
I met the same issue at first. Downloading the doc and opening it locally may 
help. It works for me.

Also, a wiki page for DVR/VDR feature is created, including some interesting 
performance test output. Thanks.
https://wiki.openstack.org/wiki/Distributed_Router_for_OVS

Best,
Robin Wang

From: Artem Dmytrenko nexton...@yahoo.com
Date: 2013-10-22 02:51
To: yong sheng gong (gong...@unitedstack.com); cloudbe...@gmail.com; OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Distributed Virtual Router Discussion
Hi Swaminathan.

I work for a virtual networking startup called Midokura and I'm very interested 
in joining the discussion. We currently have distributed router implementation 
using the existing Neutron API. Could you clarify why distributed vs. centrally 
located routing implementations need to be distinguished? Another question: 
are you proposing a distributed routing implementation for tenant routers or 
for the router connecting the virtual cloud to the external network? The reason 
that I'm asking this question is because our company would also like to propose 
a router implementation that would eliminate a single point uplink failures. We 
have submitted a couple blueprints on that topic 
(https://blueprints.launchpad.net/neutron/+spec/provider-router-support, 
https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing) and would 
appreciate an opportunity to collaborate on making it a reality.

Note that the images in your document are badly corrupted - maybe my questions 
could already be answered by your diagrams. Could you update your document with 
legible diagrams?

Looking forward to further discussing this topic with you!

Sincerely,
Artem Dmytrenko


On Mon, 10/21/13, Vasudevan, Swaminathan (PNB Roseville) 
swaminathan.vasude...@hp.com wrote:

 Subject: [openstack-dev] Distributed Virtual Router Discussion
 To: yong sheng gong (gong...@unitedstack.com), cloudbe...@gmail.com, OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
 Date: Monday, October 21, 2013, 12:18 PM









 Hi Folks,
 I am currently working on a
 blueprint for Distributed Virtual Router.
 If anyone interested in
 being part of the discussion please let me know.
 I have put together a first
 draft of my blueprint and have posted it on Launchpad for
 review.
 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr



 Thanks.

 Swaminathan Vasudevan
 Systems Software Engineer
 (TC)


 HP Networking
 Hewlett-Packard
 8000 Foothills Blvd
 M/S 5541
 Roseville, CA - 95747
 tel: 916.785.0937
 fax: 916.785.1815
 email: swaminathan.vasude...@hp.com







 -Inline Attachment Follows-

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-23 Thread Dolph Mathews
On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins
robe...@robertcollins.netwrote:

 On 24 October 2013 07:34, Mark Washenberger
 mark.washenber...@markwash.net wrote:
  Hi folks!
 
  1) Adopt 0.1.8 as the minimum version in openstack-requirements.
  2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way,
 and
  just fix the tests so they don't care about these extra formats)
  3) Make Glance work with the added formats even if 0.1.4 is installed.

 I think we should do (1) because both (2) will permit surprising,
 nonobvious changes in behaviour and (3) is just nasty engineering.
 Alternatively, add a (4) which is (2) with whinge on startup if 0.1.4
 is installed to make identifying this situation easy.


I'm in favor of (1), unless there's a reason why 0.1.8 is not viable for
another project or packager, in which case, I've never heard the term
whinge before so there should definitely be some of that.



 The last thing a new / upgraded deployment wants is something like
 nova, or a third party API script failing in nonobvious ways with no
 breadcrumbs to lead them to 'upgrade iso8601' as an answer.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Georgy Okrokvertskhov
Hi,

Looking through the thread, I noticed a couple of definitions of software
orchestration. I would like to summarize the definitions before we go too deep
into technical discussion about the actual implementation.


There are two major areas and approaches covered by software orchestration:


*Software component installation* - aimed at installing a specific software
component on a VM and configuring it. This is a typical task for Heat. A HOT
component is the best way to easily describe what component should be
installed and what the configuration parameters are. The Heat engine will figure out
by itself how to do this, probably with some hints from a user in terms of
dependencies and placement rules.


*Software service installation and life-cycle management* - aimed at
provisioning a complex multi-component software service over multiple VMs.
It also defines actions on specific events and manages software over the
entire environment life. This approach is closer to a PaaS-like solution and
relies on specific workflow sequences defined for different events and
situations. Instead of defining what should be installed, this approach
defines how to react to a specific situation and what to do if some event has
been triggered. This workflow approach is what is covered by the Mistral
project from the engine viewpoint. Mistral is going to orchestrate task
execution in a distributed fashion on both the VM and OpenStack levels, and it has
event and schedule semantics. An actual workflow implementation for OpenStack
may be found in the Murano project, which already defines different workflows
for software installation and configuration depending on the situation.


As I see it, both approaches have their own users, and both can coexist in
the OpenStack ecosystem, being complementary to each other. For example, a
workflow can generate a HOT template to do some task which fits the Heat
engine best, and at the same time a HOT template can reference an external
workflow to do a task which fits the workflow approach best.


Thanks
Georgy


On Wed, Oct 23, 2013 at 11:36 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
  Dear Steve and All,
 
  If I may add up on this already busy thread to share our experience with
  using Heat in large and complex software deployments.
 

 Thanks for sharing Patrick, I have a few replies in-line.

  I work on a project which precisely provides additional value at the
  articulation point between resource orchestration automation and
  configuration management. We rely on Heat and chef-solo respectively for
  these base management functions. On top of this, we have developed an
  event-driven workflow to manage the life-cycles of complex software
   stacks whose primary purpose is to support middleware components as
  opposed to end-user apps. Our use cases are peculiar in the sense that
   software setup (install, config, contextualization) is not a one-time
   operation but a continuous thing that can happen at any time in the
   life-span of a stack. Users can deploy (and undeploy) apps a long time
   after the stack is created. Auto-scaling may also result in
   asynchronous app deployment. More about this later. The framework we
  have designed works well for us. It clearly refers to a PaaS-like
  environment which I understand is not the topic of the HOT software
  configuration proposal(s) and that's absolutely fine with us. However,
  the question for us is whether the separation of software config from
  resources would make our life easier or not. I think the answer is
   definitely yes, but on the condition that the DSL extension preserves
  almost everything from the expressiveness of the resource element. In
  practice, I think that a strict separation between resource and
  component will be hard to achieve because we'll always need a little bit
   of application-specific detail in the resources. Take for example the case of
  the SecurityGroups. The ports open in a SecurityGroup are application
  specific.
 

 Components can only be made up of the things that are common to all users
 of said component. Also components would, if I understand the concept
 correctly, just be for things that are at the sub-resource level.
 Security groups and open ports would be across multiple resources, and
 thus would be separately specified from your app's component (though it
 might be useful to allow components to export static values so that the
 port list can be referred to along with the app component).

  Then, designing a Chef or Puppet component type may be harder than it
  looks at first glance. Speaking of our use cases we still need a little
  bit of scripting in the instance's user-data block to setup a working
  chef-solo environment. For example, we run librarian-chef prior to
  starting chef-solo to resolve the cookbook dependencies. A cookbook can
  present itself as a downloadable tarball but it's not always the case. A
  chef component type would have to support getting a cookbook from a
  public 

Re: [openstack-dev] VPNaaS questions...

2013-10-23 Thread Nachi Ueno
Hi Paul

I rebased the patch, and working on unit testing too
https://review.openstack.org/#/c/41827/


2013/10/23 Paul Michali p...@cisco.com:
 See PCM: in-line.


 PCM (Paul Michali)

 MAIL p...@cisco.com
 IRC   pcm_  (irc.freenode.net)
 TW   @pmichali

 On Oct 23, 2013, at 9:41 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Paul,


 On Wed, Oct 23, 2013 at 9:56 PM, Paul Michali p...@cisco.com wrote:


 Hi guys,

 Some questions on VPNaaS…

  Can we get the review of the service type framework changes for VPN on the
  server side reopened?
 I was thinking of trying to rebase that patch, based on the latest from
 master, but before doing so, I ran TOX on the latest master commit. TOX
 fails with a bunch of errors, some reporting that the system is out of
 memory. I have a 4GB Ubuntu 12.04 VM for this and I see it max out on
 memory, when TOX is run on the whole Neutron code for py27. Anyone seen
 this?


  I see this too. On a 4GB Ubuntu 13.04 VM, I have over 1GB of swap in use
  while running the whole test suite,
  and the tests slow down after swapping begins….


 PCM: Whew! I was worried that it was something in my setup.  Any idea on a
 root cause/workaround? Is this happening when Jenkins runs?





 I have tried the current patch of service type framework, and found that
 client changes are needed too. I have changes ready for review, should I
 post them, or do we need to wait (or indicate some dependency on the server
 side changes)?


 My suggestion is to post a patch with WIP status.
 We can test the server side patch with CLI. It really helps us all.


 PCM: Thanks! I wasn't sure how to proceed as the client change is useless
 w/o the server change.

Yeah, please push wip :)


 I see that there is VPN connection status and VPN service status. What is
 the purpose of the latter? What is the status, if the service has multiple
 connections in different states?


 I see the same.


 PCM: Yeah, need to understand what the desired meaning is for the service
 status in this context.


In the openswan implementation,
the vpnservice state is the state of the openswan process, and
the ipsec-site-connection state is the actual connection state.

So let's say we have two sites:
the vpnservice will be ACTIVE and the ipsec-site-connection's state will be
DOWN after we set up only one site.
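
A minimal sketch of that status split, with hypothetical names (not the
actual driver code):

    # Illustrative only: the service status tracks the openswan process,
    # while each connection status tracks its own tunnel.
    ACTIVE, DOWN = 'ACTIVE', 'DOWN'

    def vpnservice_status(process_running):
        # ACTIVE as long as the process is up, whatever the tunnels do.
        return ACTIVE if process_running else DOWN

    def ipsec_site_connection_status(tunnel_established):
        return ACTIVE if tunnel_established else DOWN

    # Two connections configured, only one peer actually set up:
    print(vpnservice_status(True))                 # ACTIVE
    print(ipsec_site_connection_status(False))     # DOWN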




 Have you guys tried VPNaaS with Havana and the now-default ML2 plugin? I got
 a failure on connection create, saying that it could not find the
 get_l3_agents_hosting_routers() attribute. I haven't looked into this yet,
 but will try as soon as I can.


 I think https://bugs.launchpad.net/neutron/+bug/1238846 is the same as
 what you encountered.
 I believe this bug was fixed in the final RC. Does it not work for you?


 PCM: Ah, I missed that bug review. I probably need to update my repo with
 the latest to pick this up.  Thanks!

 Regards,

 PCM



 Thanks,
 Akihiro


 Thanks!

 PCM (Paul Michali)

 Contact info for Cisco users http://twiki.cisco.com/Main/pcm



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Project Solum

2013-10-23 Thread Stefano Maffulli
On 10/23/2013 12:03 PM, Adrian Otto wrote:
 Along with eBay, RedHat, Ubuntu/Canonical, dotCloud/Docker, Cloudsoft,
 and Cumulogic, we at Rackspace are happy to announce we have started
 project Solum as an OpenStack Related open source project. Solum is a
 community-driven initiative currently in its open design phase amongst
 the seven contributing companies with more to come.

Wonderful news! Like Russell said, it's great to see this effort start
from the ground up as a collaboration among different vendors. Way to go
guys, I'm already a Solum fan!

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread Rochelle.Grober


John Griffith wrote:
On Wed, Oct 23, 2013 at 8:47 AM, Sean Dague s...@dague.net wrote:
On 10/23/2013 10:40 AM, John Griffith wrote:



On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

Dave Kranz has been building a system so that we can ensure that
during a Tempest run services don't spew ERRORs in the logs.
Eventually, we're going to gate on this, because there is nothing
that Tempest does to the system that should cause any OpenStack
service to ERROR or stack trace (Errors should actually be
exceptional events that something is wrong with the system, not
regular events).


So I have to disagree with the approach being taken here.  Particularly
in the case of Cinder and the negative tests that are in place.  When I
read this last week I assumed you actually meant that Exceptions were
exceptional and nothing in Tempest should cause Exceptions.  It turns
out you apparently did mean Errors.  I completely disagree here, Errors
happen, some are recovered, some are expected by the tests etc.  Having
a policy and especially a gate that says NO ERROR MESSAGE in logs makes
absolutely no sense to me.

Something like NO TRACE/EXCEPTION MESSAGE in logs I can agree with, but
this makes no sense to me.  By the way, here's a perfect example:
https://bugs.launchpad.net/cinder/+bug/1243485

As long as we have Tempest tests that do things like show non-existent
volume you're going to get an Error message and I think that you should
quite frankly.

Ok, I guess that's where we probably need to clarify what Not Found is, 
because Not Found to me seems like it should be logged at INFO level, not 
ERROR.


ERROR from an admin perspective should really be something that would be 
suitable for sending an alert to an administrator for them to come and fix the 
cloud.

From my perspective as someone who has done Ops in the past, a Volume Not 
Found can be either info or an error.  It all depends on the context.  That 
said, we need to be able to test ERROR conditions and ensure that they report 
properly as ERROR, or else the poor Ops folks will always be on the spot for 
not knowing that there is a problem.  A volume that has gone missing is a 
problem.  Ops would like an immediate report; they would trigger on the ERROR 
statement in the log.  On the other hand, if someone or something fat-fingers 
an input and requests something that has never existed, then that's just info.
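
A minimal sketch of that INFO-vs-ERROR split, with hypothetical names (not 
actual Cinder code):

    import logging

    LOG = logging.getLogger(__name__)

    def report_volume_lookup_failure(volume_id, known_to_exist):
        # Context decides the level: a known volume gone missing is an
        # operator-facing ERROR; a request for something that never
        # existed is routine INFO.
        if known_to_exist:
            LOG.error('Volume %s is in the DB but was not found on the '
                      'backend; operator attention needed', volume_id)
        else:
            LOG.info('Request for nonexistent volume %s returned 404',
                     volume_id)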

We need to be able to test for the correctness of errors and process logs with 
errors in them as part of the test verification.  Perhaps a switch in the test 
that indicates the log needs post-processing, or a way to redirect the log 
during a specific error test, or some such?  The question is, how do we keep 
test system logs clean of ERRORs and still test system logs for intentionally 
triggered ERRORs?

--Rocky


TRACE is actually a lower level of severity in our log systems than ERROR is.

Sorry, by Trace I was referring to unhandled stack/exception trace messages in 
the logs.


-Sean

--
Sean Dague
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Project Solum

2013-10-23 Thread Monty Taylor


On 10/23/2013 03:03 PM, Adrian Otto wrote:
 OpenStack,
 
 OpenStack has emerged as the preferred choice for open cloud software
 worldwide. We use it to power our cloud, and we love it. We’re proud to
 be a part of growing its capabilities to address more needs every
 day. When we ask customers, partners, and community members about what
 problems they want to solve next, we have consistently found a few areas
 where OpenStack has room to grow in addressing the needs of software
 developers:
 
 1)   Ease of application development and deployment via integrated
 support for Git, CI/CD, and IDEs

Lucky for you - we have some great projects for you here!

 2)   Ease of application lifecycle management across dev, test, and
 production types of environments -- supported by the Heat
 project’s automated orchestration (resource deployment,
 monitoring-based self-healing, auto-scaling, etc.)
 
 3)   Ease of application portability between public and private
 clouds -- with no vendor-driven requirements within the application
 stack or control system
 
 
 Along with eBay, RedHat, Ubuntu/Canonical, dotCloud/Docker, Cloudsoft,
 and Cumulogic, we at Rackspace are happy to announce we have started
 project Solum as an OpenStack Related open source project. Solum is a
 community-driven initiative currently in its open design phase amongst
 the seven contributing companies with more to come.
 
 We plan to leverage the capabilities already offered in OpenStack in
 addressing these needs so anyone running an OpenStack cloud can make it
 easier to use for developers. By leveraging your existing
 OpenStack cloud, the aim of Project Solum is to reduce the number of
 services you need to manage in tackling these developer needs. You can
 use all the OpenStack services you already run instead of standing up
 overlapping, vendor-specific capabilities to accomplish this.
 
 We welcome you to join us to build this exciting new addition to the
 OpenStack ecosystem.
 
 *Project Wiki*
 https://wiki.openstack.org/wiki/Solum
 
 *Launchpad Project*
 _https://launchpad.net/solum_
 
 *IRC*
 Public IRC meetings are held on Tuesdays 1600 UTC
 _irc://irc.freenode.net:6667/solum_

Happy to see you guys get started with this. As I mentioned in IRC, I think
there is some good overlap with some of the infra tooling that can be
leveraged, and I'll be excited to work with you guys on that!

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Integration with Murano proposal

2013-10-23 Thread Georgy Okrokvertskhov
Hi,


I am really excited to see the Solum announcement. This is a fantastic idea:
creating a developer-friendly environment for application creation on
OpenStack. I believe that this developer-friendly environment will attract
a lot of developers who want to write software for the OpenStack platform. I
think Solum will bring PaaS features to OpenStack and convert it
from a pure IaaS into a more complete platform.


I represent the Murano team, who work on mid-level orchestration for
application installation. Murano initially started as Windows service
automation, but we recently defined a broader roadmap for the Murano service.
Here is our view of the Murano roadmap:
https://wiki.openstack.org/wiki/Murano/ApplicationServiceCatalog.

Our idea is to bring existing 3rd-party applications and services, like
Microsoft AD and MS SharePoint, to the OpenStack platform by providing an
integration layer for both software creators and software users. We want to
provide a publishing mechanism for service creators and a self-service
catalog for end users.


I see the Murano service as complementary to Solum. While you are
providing a framework to create a new application, this application may
still have dependencies on existing software components which may be
provisioned by Murano. It would be beneficial for Solum to be able to
request any 3rd-party software listed in the Murano catalog and provision it
in the application environment. Application developers can then focus on
actual development without spending time solving the problems of 3rd-party
component deployment.


We already have a working service for OpenStack which allows deploying
complex applications across multiple VMs in order to prepare an environment
for a specific application. We provide a simple UI which lets you
configure application environments easily by adding software services
like Active Directory, an MS SQL cluster, or an IIS server farm.


I and potentially some other Murano team members would like to participate
in Solum design and development. Do you have design sessions scheduled?
What would be the best way to discuss integration between these services?

--
Georgy Okrokvertskhov

Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disable async network allocation

2013-10-23 Thread Nachi Ueno
Hi Phil

2013/10/21 Day, Phil philip@hp.com:
 Hi Folks,



 I’m trying to track down a couple of obscure issues in network port
 creation where it would be really useful if I could disable the async
 network allocation so that everything happens in the context of a single
 eventlet rather than two (and also to rule out some obscure
 eventlet threading issue in here).   I thought it was configurable – but I
 don’t see anything obvious in the code to go back to the old (slower)
 approach of doing network allocation in-line in the main create thread?


May I ask what "async network allocation" means here?


 One of the issues I’m trying to track is Neutron occasionally creating more
 than one port – I suspect a retry mechanism in httplib2 is sending the
 port create request multiple times if Neutron is slow to reply, resulting
 in Neutron processing it multiple times.  It looks like only the Neutron
 client has chosen to use httplib2 rather than httplib – has anyone got any
 insight here?

This is quite an interesting finding. So if we use httplib, this won't happen?
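
If it helps while reproducing, one quick probe (a debugging sketch, and it
assumes the installed httplib2 exposes its module-level RETRIES constant)
would be to disable the client-side socket-level retry and see whether the
duplicate creates disappear:

    import httplib2

    # httplib2 retries a request after certain socket errors (e.g. a
    # timeout while Neutron is slow to reply). Dropping the module-level
    # retry count to a single attempt shows whether the duplicated port
    # creates originate on the client side.
    httplib2.RETRIES = 1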



 Sometimes, of course, the Neutron timeout results in the create request being
 re-scheduled onto another node (which can in turn generate its own set of
 port create requests).    It's the thread behavior around how the timeout
 exception is handled that I’m slightly nervous about (some of the retries seem
 to occur after the original network thread should have terminated).


I agree. This kind of unintentional retry causes issues.


 Thanks

 Phil


Best
Nachi


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron - an issue regarding what API to follow

2013-10-23 Thread Nachi Ueno
Hi GROSZ, Akihiro

Yes, the wiki is only for discussion.
Please see the official API docs.

I marked the wiki page as OUTDATED:
https://wiki.openstack.org/wiki/Neutron/VPNaaS#OUTDATED

Thank you for pointing this out.

Best
Nachi

2013/10/21 Akihiro Motoki amot...@gmail.com:
 Hi,

 The API document is the official one; the wiki was used during
 development.
 It might be better to add a note to the wiki page to avoid such confusion.

 I am not sure what confused you. Could you give me an example?

 Thanks,
 Akihiro

 On Monday, October 21, 2013, GROSZ, Maty (Maty) maty.gr...@alcatel-lucent.com wrote:

 Hey *,



 I got a little confused about which API we should follow regarding the
 Neutron VPN service…

 There is this wiki page, https://wiki.openstack.org/wiki/Neutron/VPNaaS,
 that covers the VPN APIs, whereas the formal Neutron API documentation,
 http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext_ops_service.html,
 describes a different API version and URL structure.



 Generally, my decision is always to follow the formal API documentation.
 But in this case I am a little confused…



 Can anyone help? What are the actual APIs?



 Thanks,



 Maty.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disable async network allocation

2013-10-23 Thread Melanie Witt
On Oct 23, 2013, at 5:56 PM, Aaron Rosen aro...@nicira.com wrote:

 I believe he's referring to:
  https://github.com/openstack/nova/blob/master/nova/network/model.py#L335
 https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1211

I found some more background on the feature (it is not configurable), which 
might help in trying to revert it for testing.

https://blueprints.launchpad.net/nova/+spec/async-network-alloc

There was also the addition of a config option, 'network_allocate_retries', 
which defaults to 0:

https://review.openstack.org/#/c/34473/
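
For reference, the retry pattern that option drives looks roughly like this
(a sketch with hypothetical helper names; only the option name comes from
the review above):

    from oslo.config import cfg

    opts = [cfg.IntOpt('network_allocate_retries', default=0,
                       help='Number of times to retry network allocation')]
    CONF = cfg.CONF
    CONF.register_opts(opts)

    def allocate_with_retries(allocate):
        # One initial attempt plus CONF.network_allocate_retries retries;
        # with the default of 0 there is a single attempt and no retry.
        attempts = CONF.network_allocate_retries + 1
        for attempt in range(1, attempts + 1):
            try:
                return allocate()
            except Exception:
                if attempt == attempts:
                    raise
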
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs

2013-10-23 Thread David Kranz

On 10/23/2013 05:08 PM, Rochelle.Grober wrote:


John Griffith wrote:

On Wed, Oct 23, 2013 at 8:47 AM, Sean Dague s...@dague.net wrote:


On 10/23/2013 10:40 AM, John Griffith wrote:




On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

Dave Kranz has been building a system so that we can ensure that
during a Tempest run services don't spew ERRORs in the logs.
Eventually, we're going to gate on this, because there is nothing
that Tempest does to the system that should cause any OpenStack
service to ERROR or stack trace (Errors should actually be
exceptional events that something is wrong with the system, not
regular events).


So I have to disagree with the approach being taken here.  Particularly
in the case of Cinder and the negative tests that are in place.  When I
read this last week I assumed you actually meant that Exceptions were
exceptional and nothing in Tempest should cause Exceptions.  It turns
out you apparently did mean Errors.  I completely disagree here, Errors
happen, some are recovered, some are expected by the tests etc.  Having
a policy and especially a gate that says NO ERROR MESSAGE in logs makes
absolutely no sense to me.

Something like NO TRACE/EXCEPTION MESSAGE in logs I can agree with, but
this makes no sense to me.  By the way, here's a perfect example:
https://bugs.launchpad.net/cinder/+bug/1243485

As long as we have Tempest tests that do things like show non-existent
volume you're going to get an Error message and I think that you should
quite frankly.


Ok, I guess that's where we probably need to clarify what Not Found 
is, because Not Found to me seems like it should be logged at 
INFO level, not ERROR.



ERROR from an admin perspective should really be something that
would be suitable for sending an alert to an administrator for them
to come and fix the cloud.

From my perspective as someone who has done Ops in the past, a
Volume Not Found can be either info or an error.  It all depends
on the context.  That said, we need to be able to test ERROR
conditions and ensure that they report properly as ERROR, or else the
poor Ops folks will always be on the spot for not knowing that
there is a problem.  A volume that has gone missing is a problem.
Ops would like an immediate report; they would trigger on the
ERROR statement in the log.  On the other hand, if someone or something
fat-fingers an input and requests something that has never
existed, then that's just info.

It is not just a case of fat-fingers. Some of the delete APIs are 
asynchronous, and the only way to know that a delete finished is to check 
if the object still exists. Tempest does such checks to manage resource 
usage, even if there were no negative tests. The logs are not full of 
ERRORs because almost all of our APIs, including nova's, do not log an 
ERROR when returning 404.
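
For illustration, the completion check looks roughly like this (a sketch 
with a hypothetical helper, not the actual Tempest code):

    import time

    def wait_for_deletion(get_object, object_id, timeout=60, interval=2):
        # The delete API returns immediately, so completion is detected
        # by polling until the lookup stops finding the object (e.g. a
        # 404 mapped to None by get_object).
        deadline = time.time() + timeout
        while time.time() < deadline:
            if get_object(object_id) is None:
                return
            time.sleep(interval)
        raise RuntimeError('object %s was not deleted in time' % object_id)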


I think John's point is that it can be hard or impossible to tell if an 
object is not found because it truly no longer exists (or never 
existed), or if there is something wrong with the system and the object 
really exists but is not being found. But I would argue that even if 
this is true, we cannot alert the operator every time some user checks to 
see if an object is still there. So there has to be some thing that 
gets put in the log which says there is a problem with the system, 
either a bug, or we ran out of disk, or something. The appearance of that 
thing in the log is what an alert should be triggered on, and what 
should fail a gate job. That is pretty close to what ERROR is being used 
for now.


We need to be able to test for the correctness of errors and process
logs with errors in them as part of the test verification.
Perhaps a switch in the test that indicates the log needs
post-processing, or a way to redirect the log during a specific error
test, or some such?  The question is, how do we keep test system
logs clean of ERRORs and still test system logs for intentionally
triggered ERRORs?




--Rocky

We might be able to do that in our test framework, but it would not help 
operators. IMO the least of evils here by far is to log events 
associated with an API call that returns 4xx in a way that is 
distinguishable from how we log when we detect a system failure of some 
sort.


 -David





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] tempest failure: No more IP addresses available on network

2013-10-23 Thread Robert Kukura
On 10/23/2013 07:22 PM, Nachi Ueno wrote:
 Hi folks
 
 This patch was the culprit, so we have reverted it.
 https://review.openstack.org/#/c/53459/1 (Note: this is a revert of a revert.)
 
 However even if it is reverted, the error happens
 http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJcIklwQWRkcmVzc0dlbmVyYXRpb25GYWlsdXJlQ2xpZW50OiBObyBtb3JlIElQIGFkZHJlc3NlcyBhdmFpbGFibGUgb24gbmV0d29yayBcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzgyNTcwMTcxMjQ3fQ==
 
 I tested tempest locally; it works both with and without the patch.
 # As you know, this is a timing issue, so this doesn't mean there is no
 issue.
 
 I'm reading the logs, and I found some exceptions:
 https://etherpad.openstack.org/p/debug1243726
 
 It looks like an issue in IPAllocationRange.
 
 My next suspect is delete_subnet in ML2:
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L371
 
 Kyle, Bob
 Do you have any thoughts on this?

Hi Nachi,

Are you suggesting with_lockmode('update') is needed on the queries used
to find the ports and subnets to auto-delete in ML2's delete_network()
and delete_subnet()? I had tried that, but backed it out due to the
postgres issue with joins. I could be wrong, but I also came to the
conclusion that locking for update locks the rows returned from the
select, not the whole table, and thus would not prevent new rows from
being added concurrently with the transaction.
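
For reference, a minimal sketch of the SELECT ... FOR UPDATE pattern in
question (illustrative model and names, not the actual ML2 code; this uses
the 0.8-era with_lockmode() query API):

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Port(Base):  # hypothetical stand-in for the real model
        __tablename__ = 'ports'
        id = Column(String(36), primary_key=True)
        network_id = Column(String(36))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # FOR UPDATE locks only the rows this SELECT returns; it does not
    # stop a concurrent transaction from inserting new matching rows.
    # (SQLite ignores FOR UPDATE; against MySQL/PostgreSQL the query
    # form is the same.)
    ports = (session.query(Port)
             .filter_by(network_id='some-net')
             .with_lockmode('update')
             .all())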

Or do you have another theory on what is going wrong?

Also, is this the same issue we'd been hitting for a while with the
openvswitch plugin, or is it only since we switched devstack to default
to ml2?

-Bob

 
 Best
 Nachi
 
 
 2013/10/23 Terry Wilson twil...@redhat.com:
 Hi,

 I just noticed that a number of neutron check
 (tempest-devstack-vm-neutron-isolated) runs fail with the same error:
 No more IP addresses available on network.
 This error suddenly started to occur yesterday.

 I checked whether any commit merged around the time when this failure
 started,
 but there is no commit around that time.
 I am not sure whether it is a temporary issue.

 I filed a bug in neutron: https://bugs.launchpad.net/neutron/+bug/1243726

 It is late in my timezone. I hope someone jumps onto this issue.


 logstash stats is here:
 http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJcIklwQWRkcmVzc0dlbmVyYXRpb25GYWlsdXJlQ2xpZW50OiBObyBtb3JlIElQIGFkZHJlc3NlcyBhdmFpbGFibGUgb24gbmV0d29yayBcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MjUzNzk0OTgxMH0=

 Thanks,
 Akihiro

 Yes. This is still happening on my patch (and many others) as well.

 Terry

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev